I was reading about the Value Object (a.k.a. Data Transfer Object) pattern. From what I read, it is very convenient for moving data between layers. I also read that the moment we retrieve data from the database and create value objects from it, we end up with the problem of stale data that is no longer in sync with the underlying datastore. One scenario cited is when the underlying data in the database changes after the value objects have been created.
I would like to know how this stale data problem is resolved in a real-world scenario. One solution (I don't know if it is the right way to avoid stale data) is to wrap even the read/getData operations in transactions and lock the data to prevent it from changing in the underlying datastore.
Say we have a scenario with Struts as the presentation layer, EJB as the business layer, and Hibernate as the persistence layer. What we can do here is create a transaction at the EJB layer while fetching data. (I don't know whether it is a good idea to use transactions even for fetching data; coming from a database background, we are not used to wrapping SELECT calls in a transaction.)
But here again the transaction starts and ends at the EJB layer. So by the time the value objects reach the Struts layer, there is every chance that the underlying data they represent has changed, and again we encounter the stale data problem.
Can someone elaborate on value objects and how to avoid the stale data problem?
Originally posted by manish ahuja: I was reading about the value objects (a.k.a. Data Transfer Objects) pattern.
Please note that the term Value Object typically refers to something that is not a DTO - the authors of the J2EE patterns produced a quite unfortunate name conflict here. (As a consequence, I totally misunderstood the subject line of this thread.) See http://faq.javaranch.com/view?ValueObject
I would want to know how this problem of stale data is resolved in a real world scenario.
As far as I can tell, most applications resolve it by simply acknowledging it.
Do you have a more concrete example of the problem you need to solve?
As Ilja stated, you can't "avoid it"; you have to explicitly deal with it.

If performance is the primary concern, optimistic concurrency control is usually implemented. That is, you associate each record of data with a version number that is incremented each time the record is updated. The update also always checks that the "old" version number is still in place before it performs the update (usually by putting the old version number in the UPDATE's WHERE clause). If the update detects that the version number has changed, the entire transaction is rolled back.

In systems where performance is much less of a concern and it is more important that no concurrency collisions occur, pessimistic locks are sometimes implemented. In such a case, data is "tagged" (locked) with a user or transaction id, which denies everyone else access to the data (even read-only access - you don't want anyone or anything making decisions based on data that you intend to change). The data is then "untagged" (unlocked) only when the transaction is completed. In such a system you even "tag" (i.e. lock) data that you aren't going to change, but which may impact your transaction, because you don't want anyone else invalidating any preconditions that your transaction processing is based on.
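To make the optimistic scheme concrete, here is a minimal sketch in plain Java. It is an assumption-laden illustration, not production code: a `VersionedRecord` object stands in for the database row, and its `update` method mimics what the version check in the UPDATE's WHERE clause would do (returning false plays the role of "0 rows updated"). In a real SQL-backed system the check would look something like `UPDATE account SET data = ?, version = version + 1 WHERE id = ? AND version = ?`, and a pessimistic variant would instead use something like `SELECT ... FOR UPDATE`.

```java
// A minimal in-memory sketch of optimistic concurrency control.
// VersionedRecord is an illustrative stand-in for a database row.
class VersionedRecord {
    private String data;
    private int version = 0;

    VersionedRecord(String data) { this.data = data; }

    String getData() { return data; }
    int getVersion() { return version; }

    // Succeeds only if the caller's version still matches the
    // current one, mimicking a WHERE clause that matches 0 rows
    // when another transaction got there first.
    synchronized boolean update(String newData, int expectedVersion) {
        if (version != expectedVersion) {
            return false; // stale data: someone else updated first
        }
        data = newData;
        version++;
        return true;
    }
}

public class OptimisticLockDemo {
    public static void main(String[] args) {
        VersionedRecord record = new VersionedRecord("initial");

        // Two "users" read the record (e.g. into their DTOs)
        // and both remember version 0.
        int userAVersion = record.getVersion();
        int userBVersion = record.getVersion();

        // User A commits first; the version is bumped to 1.
        boolean aOk = record.update("A's change", userAVersion);

        // User B's DTO is now stale, so the update is rejected
        // and B's transaction would be rolled back and retried.
        boolean bOk = record.update("B's change", userBVersion);

        System.out.println(aOk);             // true
        System.out.println(bOk);             // false
        System.out.println(record.getData()); // A's change
    }
}
```

Note that the stale DTO in the Struts layer is still stale; optimistic locking just guarantees that a write based on stale data is detected and rejected at commit time instead of silently overwriting someone else's change.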