Hi guys, my LockManager handles the database/record lock scenario in the following way:
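(The original snippet did not survive in this post, so here is a minimal sketch of the first approach as described below. `recordLockMap` comes from the discussion; every other name and detail is an assumption.)

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the wait-based approach: a database lock waits for
// all record locks to drain; a record lock waits while the record is
// held or a database lock is in force. Not the poster's actual code.
public class LockManager {
    // Maps a locked record number to the client that owns the lock.
    private final Map<Integer, Object> recordLockMap = new HashMap<>();
    private boolean databaseLocked = false;

    // Database lock: wait until every record lock has been released.
    public synchronized void lockDatabase(Object client) throws InterruptedException {
        while (databaseLocked || !recordLockMap.isEmpty()) {
            wait();
        }
        databaseLocked = true;
    }

    public synchronized void unlockDatabase(Object client) {
        databaseLocked = false;
        notifyAll();
    }

    // Record lock: wait while this record is locked or a database lock is held.
    public synchronized void lockRecord(int recNo, Object client) throws InterruptedException {
        while (databaseLocked || recordLockMap.containsKey(recNo)) {
            wait();
        }
        recordLockMap.put(recNo, client);
    }

    public synchronized void unlockRecord(int recNo, Object client) {
        if (client.equals(recordLockMap.get(recNo))) {
            recordLockMap.remove(recNo);
            notifyAll();
        }
    }

    public synchronized boolean isRecordLocked(int recNo) {
        return recordLockMap.containsKey(recNo);
    }
}
```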
As you can see, to grant a database lock I wait for all record locks to be released first. To grant a record lock I ensure that the specified record is not locked and that no database lock is held. This implementation works perfectly, but I perceive a problem with it. Suppose there are thousands of clients querying, editing, locking and unlocking the data instance (e.g. web clients). In such a scenario, obtaining a database lock becomes very difficult and a matter of chance, because recordLockMap may never become empty due to repeated record-lock requests arriving from different clients. So in my view I should implement the above solution in the following way:
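(Again, the original snippet is not shown here; the following is a hedged sketch of the second approach, where a database lock is obtained by locking every record in turn. `recordCount` and the class name are assumptions.)

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the loop-locking variant: a database lock is simply
// ownership of every record lock, acquired in ascending record order.
// Each acquired record is progress, so the request cannot starve.
public class LoopLockManager {
    private final Map<Integer, Object> recordLockMap = new HashMap<>();
    private final int recordCount;   // assumed: records numbered 1..recordCount

    public LoopLockManager(int recordCount) {
        this.recordCount = recordCount;
    }

    public synchronized void lockRecord(int recNo, Object client) throws InterruptedException {
        while (recordLockMap.containsKey(recNo)) {
            wait();   // releases the monitor so other clients can unlock
        }
        recordLockMap.put(recNo, client);
    }

    public synchronized void unlockRecord(int recNo, Object client) {
        if (client.equals(recordLockMap.get(recNo))) {
            recordLockMap.remove(recNo);
            notifyAll();
        }
    }

    // Database lock: grab every record lock one by one. A large database
    // pays for recordCount lock operations per database-lock request.
    public void lockDatabase(Object client) throws InterruptedException {
        for (int recNo = 1; recNo <= recordCount; recNo++) {
            lockRecord(recNo, client);
        }
    }

    public void unlockDatabase(Object client) {
        for (int recNo = 1; recNo <= recordCount; recNo++) {
            unlockRecord(recNo, client);
        }
    }

    public synchronized boolean isRecordLocked(int recNo) {
        return recordLockMap.containsKey(recNo);
    }
}
```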
This code will also do the job. Since it locks each and every record in turn, every record lock it acquires is progress, so it no longer depends on recordLockMap ever becoming empty. However, I perceive a problem with this implementation as well. Suppose only 5-7 clients are querying the data instance and requesting locks and unlocks; then each time a client asks for a database lock, the LockManager goes through the wasteful procedure of locking all records one by one. This can be a significant drawback for a large database. Is this justified?

The second implementation also increases the chance of deadlock. Consider a scenario different from FBN, where a particular implementation requires two records in the same database to be locked to perform a single task. Client A locks record 20 and then requests a lock on record 5. Just then client B requests a database lock and locks records 1 through 19, but its request for record 20 blocks because client A holds it. Meanwhile client A is blocked waiting for record 5, which client B holds. So a deadlock occurs. Since the second implementation locks every record from 1 up to the one it blocks on, the risk of this deadlock is significantly higher than with the first implementation. I agree that the scenarios I presented are contrived and not normal situations, but I want to know whether there is a better way out. What are your views in this regard? Any suggestions are welcome.
Hi Vishal, Looping through and grabbing all the locks is the way Mark did it. I must admit I considered that, but decided on setting a flag and waiting for all the locked records to be released. One possibility that would guarantee the success of method one is to set a flag called dblockPending when a database-lock request is initiated. After the dblockPending flag is set, any client requesting a record lock would get an exception. Anyway, you will not be penalized for using either method 1 or method 2, so I wonder if it's worth worrying too much about? Michael Morris [ September 20, 2002: Message edited by: Michael Morris ]
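(A sketch of what the dblockPending idea might look like; Michael's actual code is not shown, so the class name, the exception type, and all details beyond the flag itself are assumptions.)

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: once a database lock is requested, new record-lock requests
// fail fast with an exception instead of queueing, so recordLockMap is
// guaranteed to drain and the database lock cannot starve.
public class PendingLockManager {
    private final Map<Integer, Object> recordLockMap = new HashMap<>();
    private boolean dblockPending = false;

    public synchronized void lockRecord(int recNo, Object client) throws InterruptedException {
        if (dblockPending) {
            throw new IllegalStateException("database lock pending; record lock refused");
        }
        while (recordLockMap.containsKey(recNo)) {
            wait();
            if (dblockPending) {   // flag may have been raised while we waited
                throw new IllegalStateException("database lock pending; record lock refused");
            }
        }
        recordLockMap.put(recNo, client);
    }

    public synchronized void unlockRecord(int recNo, Object client) {
        if (client.equals(recordLockMap.get(recNo))) {
            recordLockMap.remove(recNo);
            notifyAll();
        }
    }

    public synchronized void lockDatabase(Object client) throws InterruptedException {
        dblockPending = true;                // shut out new record locks
        while (!recordLockMap.isEmpty()) {   // wait for existing ones to drain
            wait();
        }
    }

    public synchronized void unlockDatabase(Object client) {
        dblockPending = false;
        notifyAll();
    }
}
```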
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius - and a lot of courage - to move in the opposite direction. - Ernst F. Schumacher
After the dblockPending flag is set, any client requesting to lock a record would cause an exception to be thrown.
I did the same thing, except that instead of throwing an exception, the thread wait()s until the flag is unset. The entire lock manager (including record lock/unlock and the database lock) is just 26 lines of code in my implementation, and since the subject line is about elegance, I think that the code
is more elegant than
However, these two code segments implement different design decisions: the first one passively waits until all records have been unlocked (while not allowing new records to be locked), and the second acts just like another client, locking one record at a time. I would say both decisions are perfectly acceptable. Eugene.
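(For reference, a wait-based variant along the lines Eugene describes might look roughly like the sketch below. His actual 26-line implementation is not shown in the thread; everything here beyond recordLockMap and the dblockPending flag is an assumption.)

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: same dblockPending flag as above, but a record-lock request
// wait()s until the flag is cleared rather than throwing an exception.
public class WaitingLockManager {
    private final Map<Integer, Object> recordLockMap = new HashMap<>();
    private boolean dblockPending = false;

    public synchronized void lockRecord(int recNo, Object client) throws InterruptedException {
        // Wait out both a pending/held database lock and any holder of this record.
        while (dblockPending || recordLockMap.containsKey(recNo)) {
            wait();
        }
        recordLockMap.put(recNo, client);
    }

    public synchronized void unlockRecord(int recNo, Object client) {
        if (client.equals(recordLockMap.get(recNo))) {
            recordLockMap.remove(recNo);
            notifyAll();
        }
    }

    public synchronized void lockDatabase(Object client) throws InterruptedException {
        while (dblockPending) {   // one database lock at a time
            wait();
        }
        dblockPending = true;
        while (!recordLockMap.isEmpty()) {
            wait();               // existing record locks drain out
        }
    }

    public synchronized void unlockDatabase(Object client) {
        dblockPending = false;
        notifyAll();
    }
}
```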