Hi,

This concerns a topic I've been researching for a project I'm working on. Optimistic locking is the process of ensuring that you don't 'clobber' data in an application when two clients access the same information for read / modification. Some of the solutions are very ingenious, but I have a few concerns about those that use time stamping and versioning. They (the solutions) seem to only solve the problem where your objects are mapped directly to one table / one row in a database (Client object = 1 row in the ClientTable).

For example, they say if you have a Client's information, add a field to the class that holds the timestamp taken from the corresponding row in the database. Upon update, use the timestamp in the WHERE clause of the UPDATE query; if no rows are affected, then the information was changed by someone else, otherwise your update happened safely.

How do you use these methods of data checking when you have a heavyweight object comprised of relations and/or information from multiple related tables? I.e. I have a "Client" that 'has-an' "Address" and an "InvoiceList". The "Address" in the database could be a combination of some text fields and even foreign keys to a Country / State / Municipality table. The InvoiceList could be many, many rows from the InvoiceTable. Even using lazy instantiation with my objects to avoid instantiation unless needed and to prevent redundant locking, I still can have a whole heck of a lot of logic required to check all these timestamps upon update.

Any pointers / suggestions?

Resources:
Javaworld Article - Optimistic Locking Pattern
TheServerSide.com - Time Stamps for Transactions

[This message has been edited by John Bateman (edited July 23, 2001).]
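[Editor's note] The timestamp/version check described above can be sketched as follows. This is a minimal in-memory simulation, not real persistence code: `ClientStore` and `VersionedRow` are made-up names, and the `update` method plays the role that a JDBC `UPDATE ClientTable SET ... WHERE id = ? AND version = ?` statement (returning 0 affected rows on conflict) would play against a real database.

```java
import java.util.HashMap;
import java.util.Map;

// A detached copy of a row, like the client-side object holding the timestamp.
class VersionedRow {
    String name;
    long version;
    VersionedRow(String name, long version) { this.name = name; this.version = version; }
}

// In-memory stand-in for a database table with a version column.
class ClientStore {
    private final Map<Integer, VersionedRow> rows = new HashMap<>();

    void insert(int id, String name) { rows.put(id, new VersionedRow(name, 0)); }

    VersionedRow read(int id) {
        VersionedRow r = rows.get(id);
        return new VersionedRow(r.name, r.version); // detached copy for the client
    }

    // Returns true iff the row still carries the version the caller read.
    // This mirrors "UPDATE ... WHERE version = ?" reporting 0 rows on a stale copy.
    boolean update(int id, String newName, long expectedVersion) {
        VersionedRow r = rows.get(id);
        if (r == null || r.version != expectedVersion) return false;
        r.name = newName;
        r.version++;
        return true;
    }
}

public class OptimisticDemo {
    public static void main(String[] args) {
        ClientStore store = new ClientStore();
        store.insert(1, "Acme Ltd.");

        VersionedRow a = store.read(1);   // client A reads
        VersionedRow b = store.read(1);   // client B reads the same row

        System.out.println(store.update(1, "Acme Inc.", a.version));  // true: A wins
        System.out.println(store.update(1, "Acme Corp.", b.version)); // false: B's copy is stale
    }
}
```

The key point is that the stale writer is detected without ever holding a database lock between the read and the write.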
Joined: Apr 02, 2001
John,

I partly went through the article at JavaWorld! Must say it is an interesting topic, considering the fact that most EJB developers don't think to delve into it! I believe that although the EJB specification doesn't mention providing support for isolation levels, the DB usually takes care of it. Also, application servers like BEA Application Server provide tools to prevent dirty reads, unrepeatable reads and phantom reads. It is best to leave these low-level services to the application server and concentrate on the business logic.

I will post my thoughts on the authors' discussions in this article by tomorrow.

Thanks,
Sandeep

PS:
Originally posted by John Bateman: How do you use these methods of data checking when you have a heavy weight object comprised of relations and/or information from multiple related tables?
Just a thought! You would be updating only one table at a time, although you may require data from multiple tables to decide whether to make an update. In case you are updating one or more tables, you may need to keep a timestamp field in all the tables you believe you would be updating. Then probably either of the authors' solutions fits the bill. As the author suggests, the timestamp solution will work if and only if every object which writes the record in the DB also modifies the timestamp!

I will come up with more views on this tomorrow.

Hope this makes some sense -- Sandeep
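[Editor's note] The "timestamp field in every table you touch" suggestion can be sketched like this. It is a single-threaded, in-memory illustration of the logic only: `DB`, `Write`, and `updateAll` are invented names. In real JDBC code each entry would be one versioned UPDATE statement inside a transaction (`setAutoCommit(false)`), with a `rollback()` whenever any statement reports 0 affected rows.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiTableDemo {

    // table name -> (row id -> version), standing in for version columns.
    static final Map<String, Map<Integer, Long>> DB = new HashMap<>();

    // One intended write: which table/row, and the version the caller read.
    record Write(String table, int id, long expectedVersion) {}

    // Apply all writes "atomically": either every expected version matches and
    // every row's version is bumped, or nothing changes and false is returned.
    // (A real DB transaction gives this atomicity under concurrency; this
    // two-phase loop only illustrates the check itself.)
    static boolean updateAll(List<Write> writes) {
        for (Write w : writes) {                       // phase 1: verify all
            Long v = DB.getOrDefault(w.table(), Map.of()).get(w.id());
            if (v == null || v != w.expectedVersion()) return false;
        }
        for (Write w : writes) {                       // phase 2: commit all
            DB.get(w.table()).merge(w.id(), 1L, Long::sum);
        }
        return true;
    }

    public static void main(String[] args) {
        DB.put("Client", new HashMap<>(Map.of(7, 0L)));
        DB.put("Address", new HashMap<>(Map.of(42, 0L)));

        // First writer saw Client#7 and Address#42 both at version 0.
        System.out.println(updateAll(List.of(
            new Write("Client", 7, 0), new Write("Address", 42, 0))));  // true

        // A second writer still holding the old versions fails on every table.
        System.out.println(updateAll(List.of(
            new Write("Client", 7, 0), new Write("Address", 42, 0))));  // false
    }
}
```

Note that a conflict on any one of the related tables rejects the whole update, which is exactly the behavior John's heavyweight Client/Address/InvoiceList object needs.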
<b>Sandeep</b> <br /> <br /><b>Sun Certified Programmer for Java 2 Platform</b><br /> <br /><b>Oracle Certified Solution Developer - JDeveloper</b><br /><b>-- Oracle JDeveloper Rel. 3.0 - Develop Database Applications with Java </b><br /><b>-- Object-Oriented Analysis and Design with UML</b><br /> <br /><b>Oracle Certified Enterprise Developer - Oracle Internet Platform</b><br /><b>-- Enterprise Connectivity with J2EE </b><br /><b>-- Enterprise Development on the Oracle Internet Platform </b>
Joined: Mar 09, 2000
Hi,

Thanks for the reply. I really wasn't thinking straight when I asked about updating heavyweight objects. You are totally correct; since I can't do relational updates, I would just have to find a way to use introspection to determine if my objects have been populated, and then issue an update command for each. Now I have to determine how many levels deep I need to go. As for all writes updating the timestamp: I believe this is automatic in most DB servers.

The other point I forgot to mention is that the company I am working for has already taken a BMP (bean-managed persistence) direction for its architecture, so I can't leave the dirty read checks / locking to the container (at least not yet). Although with the research I'm doing I may have to talk to the architect when he gets back from vacation and see if we can either change the direction of the persistence management or find a good solution for our locking mechanism.

I'm happy that the project isn't really in full swing yet. We've still got a year and a bit until our first deliverable. Plus, we're waiting for WebSphere 4 before we actually commit to anything container-based, so it's definitely not too late (IMHO) to make architectural changes.

I await your comments on the article.
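[Editor's note] The introspection idea mentioned above can be sketched with plain reflection: walk an object's fields and collect only the lazily-loaded children that were actually populated, so the persistence logic issues a versioned update for those and skips the rest. The `Client`/`Address`/`InvoiceList` classes and the `populatedChildren` helper are illustrative only, not part of any real framework.

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class DirtyWalkDemo {

    static class Address     { long version = 3; }
    static class InvoiceList { long version = 9; }

    static class Client {
        long version = 1;
        Address address = new Address();   // populated -> needs its own version check
        InvoiceList invoices = null;       // never lazily loaded -> safe to skip
    }

    // Returns the names of the non-null object-valued fields, i.e. the parts
    // of the object graph that were instantiated and therefore need updating.
    // Primitive fields (like the version counter itself) are ignored.
    static List<String> populatedChildren(Object o) {
        List<String> out = new ArrayList<>();
        try {
            for (Field f : o.getClass().getDeclaredFields()) {
                if (f.getType().isPrimitive()) continue;
                f.setAccessible(true);
                if (f.get(o) != null) out.add(f.getName());
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(populatedChildren(new Client())); // [address]
    }
}
```

To go "levels deep", the same walk would recurse into each populated child; in practice most O/R layers track a dirty flag per object instead, which avoids the reflection cost entirely.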