Consider a scenario where two users are trying to update the same data in the database. They first select the data from a table and then, after checking the values, update some of the records in that table. Suppose user A selects records 1, 2 and 3, intending to update records 2 and 3. Before user A saves his changes, user B selects records 1, 2 and 3 and, after some criteria are met, updates records 1 and 2. So when user A saves, user B's update to record 2 is silently overwritten, a classic lost update. How do I avoid this situation? Is there any way to control simultaneous updates without degrading performance?
There are various ways to handle it; the ones I like most are:
Add a column to the table, say last_updated (datatype TIMESTAMP), and always set it to the current timestamp on every insert/update. Whenever you update a row, include the timestamp you originally read in your WHERE clause. Once one user has updated the row, the other user's UPDATE no longer matches the WHERE clause and has no effect on the database, so the conflict shows up as an affected-row count of zero.
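A minimal JDBC sketch of this optimistic-locking pattern, assuming a hypothetical accounts table with columns id, balance and last_updated (the table and column names are illustrative, not from the original post):

```java
import java.sql.*;

public class OptimisticTimestampUpdate {

    // Attempts an optimistic update; returns true only if the row is
    // unchanged since we read it (its last_updated still matches).
    static boolean updateBalance(Connection con, long id,
                                 Timestamp readTs, double newBalance)
            throws SQLException {
        String sql = "UPDATE accounts SET balance = ?, last_updated = CURRENT_TIMESTAMP "
                   + "WHERE id = ? AND last_updated = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setDouble(1, newBalance);
            ps.setLong(2, id);
            ps.setTimestamp(3, readTs);
            // executeUpdate() returns the number of rows affected:
            // 0 means another user updated the row first.
            return ps.executeUpdate() == 1;
        }
    }
}
```

On a false return the caller would re-read the row and either retry or report the conflict to the user.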
A similar approach is to use a running counter (a version number) instead of a timestamp.
I am not sure how a counter will help in this scenario. The best approach is to define row-level locks on the table, if your database supports it, or to set the isolation level of the transaction.
Any lock on the database may have consequences, especially in a big application. I would still go with the last-updated column.
An update counter works in the same way as the timestamp; the only difference is that it holds a count of update operations rather than a time. It also sidesteps the case where two updates land within the timestamp's resolution.
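A sketch of the counter variant, again using the hypothetical accounts table, this time carrying an integer version column:

```java
import java.sql.*;

public class OptimisticVersionUpdate {

    // Same optimistic pattern, but with an integer version column:
    // the UPDATE succeeds only if the version we read is still current,
    // and it bumps the version so the next reader sees the change.
    static boolean updateBalance(Connection con, long id,
                                 int readVersion, double newBalance)
            throws SQLException {
        String sql = "UPDATE accounts SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setDouble(1, newBalance);
            ps.setLong(2, id);
            ps.setInt(3, readVersion);
            return ps.executeUpdate() == 1; // 0 rows means a concurrent update won
        }
    }
}
```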
Both approaches have their own pros and cons.
"Any lock on the database may have consequences, especially in a big application."
Still, defining isolation levels on transactions, or locks on rows, is better than doing two hits on the database to maintain data consistency. If we have 1000 rows, we will be hitting the table 2000 times (one SELECT plus one UPDATE per row) to maintain data integrity, and I don't believe we want our application to be slow.
Besides this, row-level locks apply only to the rows involved, so they are not escalated to table-level locks that would hurt performance.
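A minimal sketch of this pessimistic approach using SELECT ... FOR UPDATE (supported by Oracle, MySQL, PostgreSQL and others), with the same illustrative accounts table as the earlier sketches. The isolation level could also be raised via Connection.setTransactionIsolation, but the explicit row lock is usually enough here:

```java
import java.sql.*;

public class PessimisticRowLock {

    static void updateBalance(Connection con, long id, double delta)
            throws SQLException {
        con.setAutoCommit(false); // run SELECT + UPDATE in one transaction
        try (PreparedStatement select = con.prepareStatement(
                 "SELECT balance FROM accounts WHERE id = ? FOR UPDATE");
             PreparedStatement update = con.prepareStatement(
                 "UPDATE accounts SET balance = ? WHERE id = ?")) {
            select.setLong(1, id);
            try (ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    con.rollback(); // nothing to update; release the transaction
                    return;
                }
                // The row is now locked: another transaction's
                // SELECT ... FOR UPDATE on it blocks until we commit.
                double balance = rs.getDouble(1);
                update.setDouble(1, balance + delta);
                update.setLong(2, id);
                update.executeUpdate();
            }
            con.commit(); // releases the row lock
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}
```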