Possibility of Locking a Record That Has Just Been Deleted

Tony Collins
Ranch Hand

Joined: Jul 03, 2003
Posts: 435
When deleting records from the DB, I sync on the recordCache. So when locking a record, I sync on the recordCache to check that the record is there, then sync on my static list of lockedRecords to add the lock.
I see a case where a record could be found when locking, the thread could come out of the sync block, another thread could delete the record, and then a lock could be granted on a deleted record. Has anyone any idea how to avoid this, or do you think it is beyond the scope of the project? Here is my code.
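(A minimal sketch of the locking pattern described above, not the poster's actual code: recordCache is assumed to be a HashMap keyed by Integer record number, lockedRecords a static Set of locked record numbers, and RecordNotFoundException the assignment's exception. The existence check and the lock acquisition sit under two different monitors, leaving a window in between.)

    private static final Map recordCache   = new HashMap(); // Integer -> record data
    private static final Set lockedRecords = new HashSet(); // Integer record numbers

    public void lock(int recNo) throws RecordNotFoundException {
        Integer key = new Integer(recNo);
        // 1. existence check under the cache monitor
        synchronized (recordCache) {
            if (!recordCache.containsKey(key)) {
                throw new RecordNotFoundException("record " + recNo);
            }
        }
        // <-- window: another thread may delete recNo right here
        // 2. lock acquisition under a different monitor
        synchronized (lockedRecords) {
            while (lockedRecords.contains(key)) {
                try {
                    lockedRecords.wait();
                } catch (InterruptedException ignored) {
                }
            }
            lockedRecords.add(key); // lock granted, possibly on a deleted record
        }
    }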

Mark Spritzler
ranger
Sheriff

Joined: Feb 05, 2001
Posts: 17250
    

Yes, I think you are going way beyond the scope of the assignment. In the assignment there isn't any GUI that allows the user to actually delete a record, so in this version there won't be a time when one user deletes a record and another user then tries to lock it.
Mark


Tony Collins
Ranch Hand

Joined: Jul 03, 2003
Posts: 435
Though if a delete method were added, it could happen. I'd be interested to know if anyone else has found an elegant, simple way to avoid this.
I can't hold the monitor on my recordCache until I've gained the lock, as this would stop any update of the recordCache while one thread is suspended.
Tony
[ August 19, 2003: Message edited by: Tony Collins ]
Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
[Mark]: Yes, I think you are going way beyond the scope of the assignment. In the assignment there isn't any GUI that allows the user to actually delete a record, so in this version there won't be a time when one user deletes a record and another user then tries to lock it.
Well, with this logic there's really no obligation to provide any delete() implementation at all, is there? I might agree that this particular bug may be unlikely enough that you can live with it, if you can't find an elegant solution. But calling thread safety "way out of scope" seems a bit much, I think.
Tony - it seems you need to re-check whether the record has been deleted after you've established that no other threads hold a lock. In fact you may not even need the first check (the one currently coded), but you do need the second one. Take that synchronized (recordCache) block and put it inside the while loop. This raises the spectre of nested locks, so you've got to be careful. You can do it if you always acquire the locks in the same order - in this case, first sync on lockedRecords, then on recordCache. Make sure your synced code for recordCache never, ever tries to sync on lockedRecords though - that can lead to deadlock. I would enforce this by making a separate class for the RecordCache, putting the recordCache sync in that class, and making sure the RecordCache doesn't even have a reference to lockedRecords (or anything else it doesn't need).
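(One reading of that suggestion, as a sketch using the same assumed fields as above: the re-check sits inside the synchronized (lockedRecords) block once the wait loop has finished, so it runs only when no other thread holds or can grant the lock. Lock order is always lockedRecords first, recordCache second, and code synced on recordCache never syncs on lockedRecords.)

    public void lock(int recNo) throws RecordNotFoundException {
        Integer key = new Integer(recNo);
        synchronized (lockedRecords) {
            while (lockedRecords.contains(key)) {
                try {
                    lockedRecords.wait();
                } catch (InterruptedException ignored) {
                }
            }
            // no other thread holds the lock now - re-check that the record still exists
            synchronized (recordCache) {
                if (!recordCache.containsKey(key)) {
                    throw new RecordNotFoundException("record " + recNo);
                }
            }
            lockedRecords.add(key);
        }
    }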
Another possibility is to reduce the number of different things you sync on. In this case, the complexity of having separate sync for the cache and the locking may be too much (if you can't refactor as suggested above). If so, you could just sync on the data access instance instead - and have all relevant code sync on that same thing. This would probably increase sync lock contention; I assume you're trying to avoid that. But consider that there's a good chance (especially with caching) that the main limiting factor in performance for your system will actually come from the networking code - in which case the extra sync contention is not a big deal.
It seems that your cache contains all valid records, right? (Since you're throwing RecordNotFoundException if a record is not in the cache.) Then another option is using the retrieved Record object as the object you sync on, rather than lockedRecords. In fact, the whole concept of record locking can be implemented inside the Record class. Just add some fields to Record, such as boolean isLocked and long cookie. Then you don't need this other recordLocks object at all. This is basically what my own design does, as discussed in my last post (Aug 18) here. Maybe that's too alien to your current design, but I think it may be worth considering. The sync contention is less than in your current design, since most threads are working with different records. All threads still need to compete for a common sync on recordCache, but once they retrieve the Record (which should happen pretty quickly) they can drop that sync and sync on the Record instead.
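(A sketch of that approach; the field names and the cookie generation below are illustrative, not from the assignment.)

    public class Record {
        private String[] fields;        // the record data itself
        private boolean locked = false; // locking state folded into the record
        private long cookie = 0;

        public synchronized long lock() {
            while (locked) {
                try {
                    wait();
                } catch (InterruptedException ignored) {
                }
            }
            locked = true;
            cookie = System.currentTimeMillis() ^ hashCode(); // illustrative cookie
            return cookie;
        }

        public synchronized void unlock(long lockCookie) {
            if (!locked || lockCookie != cookie) {
                throw new SecurityException("wrong lock cookie");
            }
            locked = false;
            notifyAll();
        }
    }

Callers would sync on recordCache just long enough to fetch the Record, then drop that monitor and call lock() on the Record itself, so most contention is per-record.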
Good luck...
[ August 20, 2003: Message edited by: Jim Yingst ]

"I'm not back." - Bill Harding, Twister
Mark Spritzler
ranger
Sheriff

Joined: Feb 05, 2001
Posts: 17250
    

Jim, I agree thread safety is very important and not beyond the scope of the assignment.
I can only say that when they grade your assignment, they will not be looking to see whether you can delete a record and have a different client try to lock that record after it has been deleted.
Mark
Tony Collins
Ranch Hand

Joined: Jul 03, 2003
Posts: 435
I agree with both of you.
The problem with synchronizing on the data access object is that I have an instance of Data for each client, so that won't work. Maybe the only way is to have the nested syncs, but I'll have to look into my code very carefully, and I don't know if it warrants the extra complexity and the potential for bugs in future updates.
The API for Data almost suggests that you could end up locking a deleted record, as update throws RecordNotFoundException.
Tony
Tony Collins
Ranch Hand

Joined: Jul 03, 2003
Posts: 435
Well, I've realized I don't need to sync on the recordCache if I put the check in the code synced on the shared locks, after the lock number has been obtained - as the record can't be updated by another thread that doesn't hold its lock. The check must be there because a thread could release a lock after deleting the record.
I assume that another thread adding to the recordCache (a HashMap) concurrently won't affect a read with a different key. Or do I need to use a synchronized Map?
I assumed that even though HashMap is not thread-safe, a thread could update/add/delete a different element in the map concurrently without causing problems. Does anyone know if my assumption is correct?

Tony
Max Habibi
town drunk
( and author)
Sheriff

Joined: Jun 27, 2002
Posts: 4118
Hi Tony,
I'm just catching up here: but why not just have a static cache, shared by all instances of Data? That will guarantee that they are all on the same page regarding the number of records (assuming they synchronize access to the cache, of course).
updated: implicit assumption: there is only a single static structure, and it internally holds a flag or some such that tells you whether the record is deleted and/or locked - maybe even by whom. That would, it seems, make your issue disappear.
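(A sketch of that arrangement; CachedRecord and its field names are invented for illustration. Everything - existence, deletion flag, lock flag - lives in one static structure guarded by one monitor.)

    public class Data {
        // one cache shared by every Data instance
        private static final Map cache = new HashMap(); // Integer -> CachedRecord

        private static class CachedRecord {
            String[] fields;
            boolean deleted = false; // set by delete(), checked by lock()
            boolean locked  = false; // could also record who holds the lock
        }

        public void lock(int recNo) throws RecordNotFoundException {
            synchronized (cache) {
                CachedRecord rec = (CachedRecord) cache.get(new Integer(recNo));
                while (rec != null && !rec.deleted && rec.locked) {
                    try {
                        cache.wait();
                    } catch (InterruptedException ignored) {
                    }
                }
                if (rec == null || rec.deleted) {
                    throw new RecordNotFoundException("record " + recNo);
                }
                rec.locked = true; // delete()/unlock() would call cache.notifyAll()
            }
        }
    }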
M
[ August 20, 2003: Message edited by: Max Habibi ]

Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
I assumed that even though HashMap is not thread-safe, a thread could update/add/delete a different element in the map concurrently without causing problems. Does anyone know if my assumption is correct?
It's incorrect. As the API says, "If multiple threads access this map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally." "It" refers to the whole Map - they're saying that if just one thread modifies the map structurally (add/remove), then all threads that access any part of the map, whether reading or writing, must be synchronized. I realize the phrasing is not perfectly clear (maybe this is the source of the pernicious myth that you needn't sync while reading a map?). But consider: if just one thread does structural modifications, and you sync just that one thread (i.e. you sync the code that does the structural modifications, which is performed by only one thread), that sync will have no effect. (Well, there are some minor effects not relevant here.) If the other code isn't synced too, then the one synced thread can still run concurrently with other threads. Sync only prevents concurrent access if both (all) relevant threads are synced, using the same monitor.
In the case of HashMap, adding or removing data can cause the map to resize itself, which is a somewhat complex process. If a read occurs during the resize, you could easily end up getting back null even though the key really is in the Map, or getting some sort of RuntimeException, because the structure of the map is changing while you try to read it. Which is bad.
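(A sketch of what "synchronized externally" means in practice: readers and writers all take the same monitor. Wrapping the map with Collections.synchronizedMap(new HashMap()) achieves the same per-call locking.)

    private final Map recordCache = new HashMap(); // Integer -> record data

    public Object getRecord(int recNo) {
        synchronized (recordCache) {       // readers must take the monitor too
            return recordCache.get(new Integer(recNo));
        }
    }

    public void removeRecord(int recNo) {
        synchronized (recordCache) {       // structural modification
            recordCache.remove(new Integer(recNo));
        }
    }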
Morgan Bath
Ranch Hand

Joined: Jan 16, 2004
Posts: 196
In my solution deleted records are not removed; they are overwritten. So this is no different from the problem of two threads both updating the same record.
I am not worrying about it. I acknowledge the solution's limitation in this respect and give my reasons for leaving it so: it's not terminal for the application, it's a hugely unlikely scenario so it's not likely to frustrate the user, and the work required to prevent it is, IMHO, not worth the reward.
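(A sketch of that overwrite-instead-of-remove style of delete, assuming an illustrative Record holder class with public fields and deleted members and a List named records guarded by its own monitor; none of these names come from the assignment.)

    // delete(): keep the slot, just flag it
    public void delete(int recNo) {
        synchronized (records) {
            ((Record) records.get(recNo)).deleted = true;
        }
    }

    // create(): reuse the first flagged slot instead of growing the list
    public int create(String[] data) {
        synchronized (records) {
            for (int i = 0; i < records.size(); i++) {
                Record r = (Record) records.get(i);
                if (r.deleted) {
                    r.fields  = data;
                    r.deleted = false;
                    return i;
                }
            }
            records.add(new Record(data));
            return records.size() - 1;
        }
    }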


Morgan
 