Is this switching stuff really necessary? I fail to see the requirements leading to this decision from all I have read so far.
But is this not effectively the same as locking the DB?
I mean, when one client is creating in your app, can another client be reading?
It's just a matter of interpretation of course, but I think if you document your decision with reasonable supporting arguments, they can't punish you with a deduction as large as the one David got.
As for locking the DB: my create() method is already fully synchronized, and I have only one Data instance.
But does this stop the thread from clientA being switched out halfway through create() while clientB runs delete(), effectively changing the ground under clientA's feet, as it were?
OK, but how does clientB get lockCookie1?
It does not matter how clientB gets lockCookie1. The fact that you are implementing a solution which would prevent this from happening adds another layer of security. Realistically, clientB should never get lockCookie1. But from a maintainability argument, you could say that by implementing this additional level of security (via a second Hashtable), clientB is prevented from executing a locked-record operation with the wrong lockCookie.
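A minimal sketch of that extra layer, assuming two Hashtables (one mapping recNo to lockCookie, one mapping a client ID to the lockCookie that client was handed). The names LockManager, clientCookies and checkCookie are my own illustration, not taken from anyone's actual submission:

import java.util.Hashtable;

public class LockManager {

    // recNo -> cookie handed out by lockRecord()
    private final Hashtable<Long, Long> cookies = new Hashtable<Long, Long>();

    // client ID -> cookie that this particular client was given
    private final Hashtable<Long, Long> clientCookies = new Hashtable<Long, Long>();

    // Called at the start of every locked-record operation (update, delete, ...).
    void checkCookie(long clientId, long recNo, long lockCookie) {
        Long recordCookie = cookies.get(Long.valueOf(recNo));
        Long clientCookie = clientCookies.get(Long.valueOf(clientId));
        if (recordCookie == null || recordCookie.longValue() != lockCookie
                || clientCookie == null || clientCookie.longValue() != lockCookie) {
            throw new SecurityException("Record " + recNo
                    + " is not locked with the supplied cookie");
        }
    }
}

With this in place, even if clientB somehow obtained lockCookie1, the second lookup against its own client ID would still reject the call.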
What about the case Zee mentioned, where I have to reuse deleted records when creating new ones if possible?
clientA does a read() and figures out that record 2 is deleted.
Meanwhile, clientB goes and deletes record 1.
clientA creates a new record over record 2.
This is not a big deal really, but to be correct it should have reused record 1.
By locking the whole DB I can avoid this.
What am I missing here?
I think you are missing the KISS principle! (Keep It Simple, Stupid) Try to keep it as simple as possible. As long as you check to see whether a record is deleted (so you can reuse it) when doing a create-new-record operation, you should be fine.
One suggestion would be to synchronize the entire create-new-record operation and call a searchForDeletedRecord() method within that operation. If the create-new-record method is synchronized, then even if the executing thread is switched out mid-operation, no other thread can enter that operation on the same object; only one thread at a time can execute it. Therefore, the scenario you described should not occur. This is how I designed my solution.
However, this will not work if you are using multiple Data class instances, each with its own RandomAccessFile reading from and writing to the database file. I had only one Data instance and one RAF in my solution (which resided in my RMI remote implementation object), because of the requirement to use lockCookies.
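A minimal sketch of that design, under assumptions of my own (a fixed record length, a one-byte deleted flag at the start of each record, and a single RandomAccessFile). None of this is the real assignment format; it only illustrates the synchronized create-plus-reuse idea:

import java.io.IOException;
import java.io.RandomAccessFile;

public class Data {

    private static final int RECORD_LENGTH = 160;   // placeholder size
    private static final byte DELETED = 1;

    private final RandomAccessFile raf;

    public Data(String dbFileName) throws IOException {
        this.raf = new RandomAccessFile(dbFileName, "rw");
    }

    // The whole operation is synchronized, so no other thread can delete or
    // create a record between the search for a deleted slot and the write.
    public synchronized long createRecord(byte[] record) throws IOException {
        long recNo = searchForDeletedRecord();
        if (recNo < 0) {
            recNo = raf.length() / RECORD_LENGTH;   // no deleted slot, append
        }
        raf.seek(recNo * RECORD_LENGTH);
        raf.writeByte(0);                           // mark the slot as valid
        raf.write(record, 0, Math.min(record.length, RECORD_LENGTH - 1));
        return recNo;
    }

    private long searchForDeletedRecord() throws IOException {
        long count = raf.length() / RECORD_LENGTH;
        for (long recNo = 0; recNo < count; recNo++) {
            raf.seek(recNo * RECORD_LENGTH);
            if (raf.readByte() == DELETED) {
                return recNo;
            }
        }
        return -1;
    }
}

Because the search and the write happen inside one synchronized method on the single Data instance, the "clientB deletes record 1 in between" scenario cannot interleave with it.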
The way I understand your server design is that the database file name must be specified at the time when the server is started. In other words, if the client wanted to connect to a database with a different name, the server would need to be restarted. If that's correct, I am speculating that it was the reason for a big deduction on the server design.
2. You say you keep a hashtable of Client ID and lockCookie to ensure that the holding client accesses the correct record when performing an operation that requires locking.
I'm a bit confused.
When I successfully lock a record I get back a lockCookie.
Now any method that changes the data must be passed a recordNum and the lockCookie; if they don't match, the operation isn't allowed.
I can't see why you need the second hashtable?
public long lockRecord(final long recNo) {
    // Placeholder: generate a unique value to hand back as the lock cookie.
    long cookie = System.nanoTime();
    try {
        // cookies is the map that tracks locked records (recNo -> lockCookie).
        synchronized (cookies) {
            // Wait while this record, or the whole database, is already locked.
            while (cookies.containsKey(Long.valueOf(recNo))
                    || cookies.containsKey(Long.valueOf(LOCK_DB))) {
                cookies.wait();
            }
            cookies.put(Long.valueOf(recNo), Long.valueOf(cookie));
        }
    } catch (InterruptedException e) {
        // Restore the interrupt status rather than swallowing the exception.
        Thread.currentThread().interrupt();
    }
    return cookie;
}
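For completeness, here is a minimal sketch of the matching unlock side, assuming the same cookies map (declared as a Map<Long, Long>) and the same LOCK_DB scheme. The cookie check and the SecurityException are my own illustration of the recordNum/lockCookie match described above, not necessarily how the original poster coded it:

public void unlockRecord(final long recNo, final long lockCookie) {
    synchronized (cookies) {
        Long current = cookies.get(Long.valueOf(recNo));
        if (current == null || current.longValue() != lockCookie) {
            // The caller does not hold the lock on this record.
            throw new SecurityException("Record " + recNo
                    + " is not locked with cookie " + lockCookie);
        }
        cookies.remove(Long.valueOf(recNo));
        // Wake any threads blocked in lockRecord() waiting for this record.
        cookies.notifyAll();
    }
}

The same cookie comparison would sit at the top of update() and delete(), which is why a caller with the wrong cookie can never change a record it did not lock.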