Howdy Ranchers!
Yesterday I sat down seriously with my
SCJD assignment (URLyBird 1.1.1), and I've got some questions about the locking mechanism.
1. I want to use a Singleton
pattern for the Data class,
2. I will create a recordCache (TreeMap) [ recNo =>
String[] ] which will be populated from the database flat file during server startup; all CRUD operations will run against it. The structure will be written back to the flat file when the application exits. Perhaps it can be extended to do some kind of background checkpointing, to prevent lost edits if the application crashes. The TreeMap will also let me select the first never-used record number as an insertion point, if no deleted record is available for reuse.
The Data class will use some kind of DataAccessManager to load records from, and commit changes to, the flat file.
3. I will use a HashMap as a lockMap, mapping recNo => lockCookie [ Integer => Long ].
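To make points 1-3 concrete, here is a minimal sketch of what I have in mind - the class and method names (firstFreeRecNo, createRecord, ...) are my own illustrative guesses, not from the assignment:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class Data {
    // 1. Singleton: one shared instance for the whole server
    private static final Data INSTANCE = new Data();

    // 2. recNo => record fields, sorted so we can scan for number gaps
    private final TreeMap<Integer, String[]> recordCache = new TreeMap<>();

    // 3. recNo => lock cookie
    private final Map<Integer, Long> lockMap = new HashMap<>();

    private Data() { /* load records from the flat file here */ }

    static Data getInstance() { return INSTANCE; }

    // First never-used (or freed) record number: walk the sorted keys
    // and return the first gap in the sequence 0, 1, 2, ...
    int firstFreeRecNo() {
        int expected = 0;
        for (int recNo : recordCache.keySet()) {
            if (recNo != expected) break;   // found a gap - reuse it
            expected++;
        }
        return expected;
    }

    int createRecord(String[] fields) {
        int recNo = firstFreeRecNo();
        recordCache.put(recNo, fields);
        return recNo;
    }

    void deleteRecord(int recNo) {
        recordCache.remove(recNo);
    }
}
```

(Synchronization is left out here on purpose - that's what the rest of the post is about.)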
4. When I thought about the locking mechanism, I drew up some pseudocode, which raises a few other questions:
4.a In my opinion there is no need to synchronize the read method (neither on the recordCache nor on the lockMap) [see 5.b for more details],
4.b Create, delete and update need to be synchronized on both the recordCache and the lockMap,
4.c The lock and unlock methods are synchronized not only on the lockMap but also on the recordCache (to be sure that the record exists when we create a lock on it; if it doesn't, a RecordNotFoundException is thrown),
4.d When I say that the synchronization is on both maps, I mean a synchronized block inside a synchronized block. I'm curious whether this is a possible flaw in the design, as a simple change in the order of the synchronized blocks can lead to deadlock. OK - I always remember to synchronize first on the lockMap and then on the recordCache, but isn't that too dangerous for my successor, the "Junior
Java Developer"?
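Here is a sketch of what I mean by 4.b-4.d: every method that needs both monitors acquires them in the same global order (lockMap first, recordCache second), so no cycle - and therefore no deadlock - is possible. The cookie generation and names are simplified guesses (this version also doesn't wait for an already-held lock; that comes up in point 7):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

class LockOrderingSketch {
    private final Map<Integer, Long> lockMap = new HashMap<>();
    private final TreeMap<Integer, String[]> recordCache = new TreeMap<>();

    long lock(int recNo) throws RecordNotFoundException {
        synchronized (lockMap) {              // 1st monitor, always
            synchronized (recordCache) {      // 2nd monitor, always
                if (!recordCache.containsKey(recNo)) {
                    throw new RecordNotFoundException();
                }
            }
            long cookie = System.nanoTime();  // naive cookie generation
            lockMap.put(recNo, cookie);
            return cookie;
        }
    }

    void update(int recNo, String[] fields) {
        synchronized (lockMap) {              // same order here too...
            synchronized (recordCache) {      // ...so no deadlock cycle
                recordCache.put(recNo, fields);
            }
        }
    }
}

class RecordNotFoundException extends Exception { }
```

The danger I'm worried about is exactly that nothing in the language enforces this ordering - a future maintainer can silently flip the two blocks in one method.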
5. I've got a problem when thinking about the find method... Basically, my business layer consists of two methods - search(String[] criteria) and book(Integer recNo). The problem I'm addressing is a possible data change between getting the IDs (from the find method in the Data class) and executing read for each ID.
Between these steps, some thread can change the data, so while fetching each record's details we can get some that don't match our criteria:
we end up with a result set in which one of the hotel names (say, record no 5) is no longer the 'MyHotel' we searched for.
5.a Solution - lock the recordCache while executing the Data.find(-) method? A big hit to parallel execution.
5.b Solution - do a second pass of record filtering in the business layer? OK - it might be necessary anyway, because Data.find(-) returns "criterion*" (prefix) matches, while the business layer should return only exact matches ("criterion").
Then again, we face a bigger problem here - some records could be changed during our find operation, so by the time we filter the records in the business layer, the database may contain records that match the criteria but weren't found.
I guess this non-repeatable-read issue should not be taken into consideration (or this assumption should at least be noted in the project documentation), as solution 5.a is the only one I can think of that would solve it.
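For 5.b, the second pass could look something like the sketch below: re-read each candidate record and keep it only if every non-null criterion still matches exactly (which simultaneously drops prefix-only matches and records changed or deleted between find and read). The names and the Map-based snapshot are my own assumptions, and I assume criteria and record arrays share the same schema/length:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class SearchSketch {
    // Keep a record only if every non-null criterion matches exactly.
    static List<Integer> exactMatches(Map<Integer, String[]> snapshot,
                                      List<Integer> candidates,
                                      String[] criteria) {
        List<Integer> result = new ArrayList<>();
        for (int recNo : candidates) {
            String[] rec = snapshot.get(recNo);   // second read
            if (rec == null) continue;            // deleted in the meantime
            boolean ok = true;
            for (int i = 0; i < criteria.length; i++) {
                if (criteria[i] != null && !criteria[i].equals(rec[i])) {
                    ok = false;                   // changed, or prefix-only match
                    break;
                }
            }
            if (ok) result.add(recNo);
        }
        return result;
    }
}
```

Of course, this still can't catch records that were modified to match the criteria after find ran - that's the non-repeatable-read gap from above.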
6. In my locking mechanism I am not using ReentrantLock, ConcurrentMap or any other class from java.util.concurrent, as I think a mechanism based on synchronized blocks should work fine. Am I making a mistake - should I rather use the java.util.concurrent.* classes to satisfy my assessor? Is using the Java API more valuable than using the mechanisms built into the language itself?
7. When a lock cannot be acquired at a given moment, I call Object.wait() in a while loop that checks whether the lock can be acquired. The problem is that I am waiting on the whole lockMap object, and the unlock method notifies all threads waiting on the lockMap whenever it releases ANY of the locks.
7.a This solution has a drawback: each unlock operation notifies all waiting threads, even if the released lock wasn't the one they were waiting for.
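This is roughly my current approach (a sketch with guessed names and naive cookie generation, not my literal code) - note the notifyAll on the whole map:

```java
import java.util.HashMap;
import java.util.Map;

class CoarseLocking {
    private final Map<Integer, Long> lockMap = new HashMap<>();

    long lock(int recNo) {
        synchronized (lockMap) {
            while (lockMap.containsKey(recNo)) {
                try {
                    lockMap.wait();        // woken by ANY unlock (drawback 7.a)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            long cookie = System.nanoTime();
            lockMap.put(recNo, cookie);
            return cookie;
        }
    }

    void unlock(int recNo, long cookie) {
        synchronized (lockMap) {
            Long stored = lockMap.get(recNo);
            if (stored != null && stored == cookie) {
                lockMap.remove(recNo);
                lockMap.notifyAll();       // wakes every waiter, even unrelated ones
            }
        }
    }

    boolean isLocked(int recNo) {
        synchronized (lockMap) {
            return lockMap.containsKey(recNo);
        }
    }
}
```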
7.b I hesitated to wait on a single lockMap value, as I wasn't sure whether it would be a design flaw: if the cookie value is created by autoboxing, two different cookies might end up in the lockMap with the same reference (because of the Long cache). I guess that if the developer always remembers to create the cookies with new Long(-) it should be fine - but then again, that is a potential risk...
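To illustrate the risk from 7.b: autoboxing goes through Long.valueOf(-), which is guaranteed to return the same cached object for values from -128 to 127, while new Long(-) always creates a fresh instance:

```java
public class LongCacheDemo {
    public static void main(String[] args) {
        Long a = 100L;              // autoboxed: Long.valueOf(100)
        Long b = 100L;
        System.out.println(a == b); // true: same cached object for -128..127

        Long c = 1000L;             // outside the guaranteed cache range
        Long d = 1000L;
        System.out.println(c == d); // typically false (caching beyond the
                                    // range is allowed but not required)

        Long e = new Long(100);     // deprecated, but always a fresh instance
        Long f = new Long(100);
        System.out.println(e == f); // false
    }
}
```

So if two small cookies were autoboxed and someone synchronized or waited on the cookie object itself, they could unknowingly share one monitor.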
7.c Another solution could be to create a LockManager class which holds a pair - a ReentrantLock and a lock value (cookie). The original lockMap would change from a simple [recNo => lockValue] into [recNo => lockManagerEntity]. This would make it possible to notify only those threads that are waiting for the specific locked record, instead of the notifyAll-on-the-whole-map approach above.
I'm not sure which version is more intuitive, or whether waking only the threads waiting on a specific record is important enough to be worth implementing ReentrantLock and complicating the lockMap a bit...
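Here is one possible sketch of the 7.c idea - a single ReentrantLock guarding the map, with a per-record Condition inside each entry so that unlock wakes only that record's waiters. All names are illustrative guesses (one could also put a separate ReentrantLock in every entry; this is just the variant I find simplest):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class PerRecordLocking {
    private static final class LockEntry {
        Long cookie;                      // null => record not locked
        final Condition available;        // waiters for THIS record only
        LockEntry(Condition c) { available = c; }
    }

    private final ReentrantLock guard = new ReentrantLock();
    private final Map<Integer, LockEntry> lockMap = new HashMap<>();

    long lock(int recNo) {
        guard.lock();
        try {
            LockEntry e = lockMap.computeIfAbsent(
                    recNo, k -> new LockEntry(guard.newCondition()));
            while (e.cookie != null) {
                e.available.awaitUninterruptibly();  // wait on this record only
            }
            e.cookie = System.nanoTime();            // naive cookie generation
            return e.cookie;
        } finally {
            guard.unlock();
        }
    }

    void unlock(int recNo, long cookie) {
        guard.lock();
        try {
            LockEntry e = lockMap.get(recNo);
            if (e != null && e.cookie != null && e.cookie == cookie) {
                e.cookie = null;
                e.available.signalAll();  // wakes only this record's waiters
            }
        } finally {
            guard.unlock();
        }
    }

    boolean isLocked(int recNo) {
        guard.lock();
        try {
            LockEntry e = lockMap.get(recNo);
            return e != null && e.cookie != null;
        } finally {
            guard.unlock();
        }
    }
}
```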
Gosh, that's one hell of a big post -
you should get a reward for getting this far (of course, only if you
didn't cheat and jump straight down here! :-)
A lot of questions came to my mind, but these are the most disturbing ones. Thanks in advance for all replies :-)
Cheers!