Database Thread Safety, pls review

 
Ranch Hand
Posts: 555
Hi Phil,

Mmh... I am waiting for Mark to tell us that try ... finally constructs are too hard for a junior programmer to understand. Keep it simple, guys!



Well, let's be honest: if Mark or Max says so, we could not justify a design just by referencing their words. On the one hand they are right: it is the major design decision-making process that is being tested, not such details. On the other hand, I would feel uncomfortable leaving this issue open.
Hi Andrew, since you are online let me ask one question:
In my opinion there is one more problem/disadvantage in ThreadSafeDatabase 2/4 IN CASE of a cached database (the first was no concurrent reads): the find method MUST always be carefully developed and changed by the "junior programmer", otherwise it will work improperly with read() results (since read is synchronized, but find is not). Example:
find loops and calls read.
read returns an object representing a record from the Collection (cached db).
Now we have to compare the record with the criteria and so on.
In the meantime an update method can change the record (if the update is done not by putting an updated record into the collection, but by getting the record object from the collection and changing its attributes). Of course, the read method can clone the record object (String[] record; return record.clone();), but cloning objects can then really bring memory overhead.
Is that right?
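A minimal sketch of the hazard described above (class and method names are hypothetical, not the assignment's API): an unsynchronized find() holds a reference obtained from read() while a concurrent update() mutates the same cached array in place; a defensive clone() isolates callers at the cost of extra allocation.

```java
import java.util.ArrayList;
import java.util.List;

public class CachedData {
    private final List<String[]> cache = new ArrayList<>();

    public CachedData() {
        cache.add(new String[] {"Palace", "Smallville"});
    }

    // Unsafe: hands out the live cached array, which update() can mutate.
    public synchronized String[] readUnsafe(int recNo) {
        return cache.get(recNo);
    }

    // Safe: returns a defensive copy, at the cost of extra allocation.
    public synchronized String[] read(int recNo) {
        return cache.get(recNo).clone();
    }

    // An update() that mutates the record in place rather than replacing it.
    public synchronized void update(int recNo, int field, String value) {
        cache.get(recNo)[field] = value;
    }

    public static void main(String[] args) {
        CachedData db = new CachedData();
        String[] live = db.readUnsafe(0);
        String[] copy = db.read(0);
        db.update(0, 0, "Castle");     // mutates the cached array in place
        System.out.println(live[0]);   // the change is visible through the live reference
        System.out.println(copy[0]);   // the clone taken earlier is unaffected
    }
}
```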
I have decided to list all the advantages and disadvantages of each design (for a cached database) in order to reach a final decision.
Tx,
Vlad
 
Wanderer
Posts: 18671
OK, I've spent a bit more time looking at the rest of this thread, and have some miscellaneous comments. My net access will probably still be spotty for the next week or so, so you may or may not see further followups from me...
One addendum about ThreadSafeDatabase1 - I originally suggested moving all sync blocks to either the cache class or the record class. I should note that in my own code it turned out to be impractical to move the record locks into the Record class, since that requires the Record class to know where to write records to (i.e. must have access to the FileChannel), which requires either a new instance variable or a new parameter - which is possible, sure, but ultimately simpler to just do the sync in Data since we already have access to the FileChannel or whatever other data we use to write stuff.
[Vlad]: ... So far so good. If the write to the database fails, the cache is already updated. That is wrong. So I think it must be like this:
Good point; I tossed off that code too quickly. Although in truth, if the write throws an exception I don't think any of these designs have recovery mechanisms in place. E.g. how do we know the write error didn't occur midway through the write? Maybe due to a hardware/memory failure? I mean really, there shouldn't be any IOException for any "reasonable" cause that most of us would consider in-scope for this assignment. If something bad happens anyway, we really have no idea what the cause is, or whether the data in the file (and/or cache) has been left in a consistent state. If we needed more elaborate transactional security, I think all these designs will need some work.
Having said that though, sure, it makes sense to increase our chances of having consistent data by updating the cache after the write rather than before. So good catch.
[Vlad]: ThreadSafeDatabase2/4:
Ok,
I think Andrew is right: synchronizing on this is even simpler (but it will work only if the Data class is a singleton).

That was true for the original ThreadSafeDatabase2 as well. If you have multiple Data instances, each one has a separate mutex object. You could rewrite that part of course (more easily than you can if you sync on an implicit this) but don't overlook it.
Also, regarding the TreeSet in find(): Why TreeSet? I assume we want the results to be ordered, and unique. Well, they're integers generated by incrementing a counter. They're going to be ordered by increasing value, and unique. As such, using an ArrayList seems sufficient to me. Enforcing order and uniqueness with a TreeSet may communicate the intent better, if the junior programmers are completely clueless, but to me it just seems like we're making the JVM do extra work sorting, for no real reason. And if we're determined to coddle the junior programmers, well, most newbies I know are more likely to be familiar with an ArrayList or (ugh) Vector than they are a TreeSet anyway.
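Jim's point can be sketched as follows; the record data and the matching rule are illustrative stand-ins, not the assignment's real API. Because recNo only ever increments, the result list is unique and ascending by construction, so an ArrayList suffices where a TreeSet would re-sort for nothing:

```java
import java.util.ArrayList;
import java.util.List;

public class FindSketch {
    // Stand-in record data; the real version would come from the Data class.
    private static final String[][] RECORDS = {
        {"Palace"}, {"Castle"}, {"Plaza"}
    };

    // Stand-in criteria check.
    static boolean matches(String[] record, String criterion) {
        return record[0].startsWith(criterion);
    }

    public static List<Integer> find(String criterion) {
        List<Integer> foundKeys = new ArrayList<>();
        for (int recNo = 0; recNo < RECORDS.length; recNo++) {
            if (matches(RECORDS[recNo], criterion)) {
                foundKeys.add(recNo); // added in increasing order by construction
            }
        }
        return foundKeys;
    }

    public static void main(String[] args) {
        System.out.println(find("P")); // record numbers, unique and ascending
    }
}
```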
 
Vlad Rabkin
Ranch Hand
Posts: 555
Hi Jim,
Tx for your comments.

Although in truth, if the write throws an exception I don't think any of these designs have recovery mechanisms in place.


Yes, that is right, but we don't want to develop our own transaction protocol...

it turned out to be impractical to move the record locks into the Record class


Agreed.

That was true for the original ThreadSafeDatabase2 as well. If you have multiple Data instances, each one has a separate mutex object.


??? mutex is static, so all instances of Data have one mutex. But it is not so important; most of us have a single Data instance.

find(): Why TreeSet?


If a List is used containing all records (deleted included), you are absolutely right. If a Map is used (to avoid holding deleted records in the cache), there is no choice other than to use a TreeSet.
I wanted to avoid having a Record object (which would allow a method like isDeleted()); that is why I used a Map.
Anyway, I guess your idea is better: having a List makes some things simpler.
Tx,
Vlad
 
Jim Yingst
Wanderer
Posts: 18671
??? mutex is static
Ah, I see now - you confused me by writing "static private" rather than "private static". Nevermind then...
If a List is used containing all records (deleted included), you are absolutely right. If a Map is used (to avoid holding deleted records in the cache), there is no choice other than to use a TreeSet.
Mmmm, actually I was talking about ThreadsafeDatabase2 and 4, not the cached ThreadsafeDatabase1. Look at your own initial code for ThreadsafeDatabase2. Where do the record numbers come from? You start at recNo = -1 and increment. You use the Map to look up a record and decide if it should be included or not, but even if all records are included, you've already guaranteed that each recNo is a different Integer, and they are added to foundKeys in increasing numeric order. If some are missing (because the Map lookup determines a record was deleted, or because the record does not meet the criteria), then the recNos are still unique and in increasing order - there are just fewer of them.
Hum, looking again at the code shown for ThreadsafeDatabase1, the same arguments apply. For the code shown, the recNos are unique and in order. Whether you subsequently look the recNos up in a Map or not.
Incidentally it's possible to represent deleted records in a cache using nulls at appropriate List positions rather than an isDeleted flag. The size of the List would still have to equal the number of available record slots, deleted or no, but the deleted records wouldn't have to have Record objects taking up memory. You'd need to watch the sync on the create() to ensure that two simultaneous create()s could never create a new Record for the same slot. But you have to do that anyway, right? (Actually, I'm not sure why I didn't just use nulls rather than an isDeleted field myself; I think I may refactor that now.)
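A minimal sketch of the null-slot idea (names are illustrative): deleted records become null entries, so the List keeps one slot per record number while dead records hold no field data, and a synchronized create() guarantees that two threads can never claim the same free slot.

```java
import java.util.ArrayList;
import java.util.List;

public class NullSlotCache {
    private final List<String[]> records = new ArrayList<>();

    public synchronized int create(String[] data) {
        // Reuse the first deleted (null) slot, if any.
        for (int recNo = 0; recNo < records.size(); recNo++) {
            if (records.get(recNo) == null) {
                records.set(recNo, data);
                return recNo;
            }
        }
        records.add(data);
        return records.size() - 1;
    }

    public synchronized void delete(int recNo) {
        records.set(recNo, null); // frees the record's field data, keeps the slot
    }

    public synchronized boolean isDeleted(int recNo) {
        return records.get(recNo) == null;
    }

    public static void main(String[] args) {
        NullSlotCache cache = new NullSlotCache();
        int a = cache.create(new String[] {"Palace"});
        cache.create(new String[] {"Castle"});
        cache.delete(a);
        int c = cache.create(new String[] {"Plaza"}); // reuses the freed slot
        System.out.println(a == c);
    }
}
```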
One more note. The concept of record-level synchronization and the concept of full caching need not go together. My first design had record-level sync and no caching - everything was read from the DB as necessary. There does have to be some sort of in-memory structure to hold some sort of instances which serve as mutexes for the individual records. But these mutexes do not need to take up any more memory than a new Object(). They don't have to have memory for all the fields as well, which is what takes most of the memory. Well, I'd at least have an instance field to hold a cookie value. You could even keep the things in a WeakHashMap so they could be GC'd any time that particular record is not in use. Though that could get a bit tedious, since you'd need to recreate a new mutex on every read() (assuming any previous mutex was GC'ed), and it seems simpler to just keep one tiny Object in memory per record, period. Of course that's nothing next to the effort required to actually read every single record from the DB for find() (assuming the data's not cached), which is what most people are doing anyway, so if memory-overhead avoidance is your goal, well, go right ahead then.
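A sketch of record-level locking without a full cache, along the lines Jim describes (the class names are hypothetical): one small mutex object per record, holding only a cookie field, so locking costs little memory while record data still lives only in the file.

```java
import java.util.ArrayList;
import java.util.List;

public class RecordMutexes {
    // One tiny lock object per record slot; no field data cached.
    static class RecordLock {
        long cookie; // owner cookie, per Jim's note
    }

    private final List<RecordLock> locks = new ArrayList<>();

    // Grow the lock list on demand; synchronized so two threads
    // can never create two locks for the same record.
    public synchronized RecordLock lockFor(int recNo) {
        while (locks.size() <= recNo) {
            locks.add(new RecordLock());
        }
        return locks.get(recNo);
    }

    public void update(int recNo, long cookie) {
        RecordLock lock = lockFor(recNo);
        synchronized (lock) {
            lock.cookie = cookie;
            // ... write the record bytes via the FileChannel here ...
        }
    }

    public static void main(String[] args) {
        RecordMutexes db = new RecordMutexes();
        db.update(3, 42L);
        System.out.println(db.lockFor(3).cookie);
    }
}
```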
 
Vlad Rabkin
Ranch Hand
Posts: 555
Hi Jim,

Mmmm, actually I was talking about ThreadsafeDatabase2 and 4, not the cached ThreadsafeDatabase1.


Ok. In that case it is true. I just automatically copied the code of find from TSData1 to TSData2.

..., the same arguments apply (for ThreadSafeData1). For the code shown, the recNos are unique and in order.


Nope. Unfortunately HashMap.keySet() returns an unsorted set of keys.


Incidentally it's possible to represent deleted records in a cache using nulls at appropriate List positions rather than an isDeleted flag


True. That is what I have decided to do if I use a List instead of a Map.


The concept of record-level synchronization and the concept of full caching need not go together.


True. My first design was also without a cache; then I switched to caching. And I have decided: NO MORE design changes, otherwise I will never finish the assignment!
Synchronization design is the only issue left for me to take a decision about.
I like your design because it is fine-grained, but you still cannot afford concurrent reads, which would probably make the ReadWriteLock TSData3 (which allows concurrent reads, but is not as fine-grained as yours) faster.
Tx,
Vlad
[ August 28, 2003: Message edited by: Vlad Rabkin ]
 
Vlad Rabkin
Ranch Hand
Posts: 555
Hi Phil,
I found an article about interruption. It may be interesting for you:

In the examples illustrating the sleep(), join(), and wait() methods, you may have noticed that calls to each of these methods are wrapped in a try statement that catches an InterruptedException. This is necessary because the interrupt() method allows one thread to interrupt the execution of another. Interrupting a thread is not intended to stop it from executing, but to wake it up from a blocking state.
If the interrupt() method is called on a thread that is not blocked, the thread continues running, but its "interrupt status" is set to indicate that an interrupt has been requested. A thread can test its own interrupt status by calling the static Thread.interrupted() method, which returns true if the thread has been interrupted and, as a side effect, clears the interrupt status. One thread can test the interrupt status of another thread with the instance method isInterrupted(), which queries the status but does not clear it.
If a thread calls sleep(), join(), or wait() while its interrupt status is set, it does not block but immediately throws an InterruptedException (the interrupt status is cleared as a side effect of throwing the exception). Similarly, if the interrupt() method is called on a thread that is already blocked in a call to sleep(), join(), or wait(), that thread stops blocking by throwing an InterruptedException.
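The flag semantics the article states can be verified in a few lines; this is just a demonstration, not assignment code. interrupt() on a running thread only sets a flag, Thread.interrupted() reads and clears it, and isInterrupted() reads without clearing:

```java
public class InterruptFlagDemo {
    public static void main(String[] args) {
        Thread current = Thread.currentThread();
        current.interrupt();                          // set the interrupt status
        System.out.println(current.isInterrupted());  // flag is set, not cleared
        System.out.println(Thread.interrupted());     // reads the flag and clears it
        System.out.println(current.isInterrupted());  // flag is now cleared
    }
}
```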


So, theoretically we can really ignore InterruptedException, by saying we don't call interrupt(), with the remark "if we do in the future, we will change the code to throw InterruptedException, or do as you have suggested". Then our try/finally block will look a bit better.
Best,
Vlad
 
Bartender
Posts: 1872
Hi Vlad,
Thanks for that article. It seems to confirm what I wrote myself, especially this excerpt:

If a thread calls sleep(), join(), or wait() while its interrupt status is set, it does not block but immediately throws an InterruptedException (the interrupt status is cleared as a side effect of throwing the exception).


So it seems to be good practice to call Thread.currentThread().interrupt() from within the catch block, to keep the caller thread's state unaffected by our own code.
But as nobody other than the two of us has given an opinion about it yet (Andrew? Max?), I am still not sure about it.
Best,
Phil.
 
Vlad Rabkin
Ranch Hand
Posts: 555
Hi Phil,
I asked Max to look at the topic, but he didn't find it. I have sent him a link.
So I hope that he will appear in this topic tonight.

So it seems to be good practice to call Thread.currentThread().interrupt() from within the catch block, to keep the caller thread's state unaffected by our own code.


Yes, but it makes for headaches later: you have to check whether the thread is interrupted, and so on. Since we don't need to interrupt, I believe we can just ignore it. Let's see what our gurus say about all of it...

Best,
Vlad
 
author and jackaroo
Posts: 12200
Hi Philippe,

But as nobody other than the two of us has given an opinion about it yet (Andrew? Max?), I am still not sure about it.


I have been avoiding this, because I don't really have strong feelings one way or the other as to how InterruptedException should be handled. I can see the points in several arguments.

So it seems to be good practice to call Thread.currentThread().interrupt() from within the catch block, to keep the caller thread's state unaffected by our own code.



Why are you doing this? I can see two basic possibilities for handling InterruptedException:
  • Ignore it. We don't support interruptions; we don't ever throw them. Go back to waiting for the lock.
  • Catch it and throw it to the calling class. This is what I did in my submission. Possibly throw it as a RuntimeException, since it should not have occurred in the first place.


  • In either case I cannot see why you would interrupt yourself.
    In the first case, it would cause big problems. Thread goes into wait state. Something else interrupts it, throwing the Exception. As part of the Exception handling you interrupt the thread again, then go back to wait state which is immediately interrupted because of our interrupt which goes into the Exception handler which interrupts .....
    In the second case, we are throwing our own Exception. That should be all that is needed. Setting the interrupted state a second time seems to be overkill. If you want the user to know that the exception was caused by an interrupt, and it is not clear from the type of exception you are throwing, then I would consider chaining the exception to be a more "standard" way of doing this.
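A minimal sketch of the second option under discussion (all class and method names are hypothetical): a lock method that treats an unexpected InterruptedException as a programming error and rethrows it unchecked, chaining the original as the cause so the interrupt stays visible.

```java
public class LockManagerSketch {
    private final Object mutex = new Object();
    private boolean locked = false;

    public void lockRecord() {
        synchronized (mutex) {
            while (locked) {
                try {
                    mutex.wait();
                } catch (InterruptedException e) {
                    // We never call interrupt() in this application, so this
                    // is unexpected; chain it so the cause stays visible.
                    throw new IllegalStateException("Unexpected interrupt", e);
                }
            }
            locked = true;
        }
    }

    public void unlockRecord() {
        synchronized (mutex) {
            locked = false;
            mutex.notifyAll();
        }
    }

    public static void main(String[] args) {
        LockManagerSketch lm = new LockManagerSketch();
        lm.lockRecord();
        lm.unlockRecord();
        System.out.println("lock/unlock completed");
    }
}
```

The callers then need only a plain try/finally around lockRecord()/unlockRecord(), with no checked exception to catch.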

    (From a much earlier post) If I remember correctly, you just waited 48 hours to get the result, an amazingly short time! So yes, it's simply impossible to read everything so fast.


    Are you implying that I write a lot?
    Regards, Andrew
     
    Philippe Maquet
    Bartender
    Posts: 1872


    Yes, but it makes for headaches later: you have to check whether the thread is interrupted, and so on.


    No Vlad, with my MultiReadSingleWriteSynchronizer class I don't need to care about it. My point is that I don't need to know anything about the threads which call my synchronizing service, nor do I want to put any constraints on how those threads are coded. It means that if such a caller thread needs to check its interrupted status, as I cannot interfere with its own job, I must leave its interrupted status uncleared. Is it clearer?

    I asked Max to look at the topic, but he didn't find it. I have sent him a link.
    So I hope that he will appear in this topic tonight.


    I hope so too. It would be great if Max could give us his opinion on this.
    Best,
    Phil.
    [ August 28, 2003: Message edited by: Philippe Maquet ]
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Andrew,
    Thank you for coming in.

    In either case I cannot see why you would interrupt yourself.
    In the first case, it would cause big problems. Thread goes into wait state. Something else interrupts it, throwing the Exception. As part of the Exception handling you interrupt the thread again, then go back to wait state which is immediately interrupted because of our interrupt which goes into the Exception handler which interrupts .....


    Well, no, because interrupt() causes an InterruptedException only if the interrupted thread is in a wait state. As that is no longer the case by the time you get an InterruptedException (of course), calling interrupt() from the catch block won't throw a new InterruptedException.
    Now, from the article (but I knew it from my own tests), when an InterruptedException is thrown, the interrupted status is cleared:

    the interrupt status is cleared as a side effect of throwing the exception


    In short :
  • interrupt() tells a thread that it is "interrupted". What the given thread will do with that piece of information is purely conventional.
  • While running, a thread may check whether it has been interrupted by calling the interrupted() method. This method is static, acts on Thread.currentThread() and clears the interrupted flag.
  • Any thread may check whether a given thread has been interrupted by calling its isInterrupted() method. This method doesn't clear that thread's interrupted flag.
  • Now, how could a given thread check its interrupted flag when it is in a wait state? It cannot, of course, so the interruption is signaled to it by an InterruptedException, but with the side effect that its interrupted flag is cleared.


  • So, if you want to avoid throwing InterruptedException from your beginXXX() methods (for simplicity of the calling code, which would have to catch it if you threw it), and you don't want to make any assumptions about how calling threads deal with interruptions, then you need to re-arm the interrupted flag yourself (by calling Thread.currentThread().interrupt()) if an InterruptedException is thrown.
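The pattern Philippe describes can be sketched as follows; beginRead/endRead and the writers counter are illustrative stand-ins for his MultiReadSingleWriteSynchronizer, not its actual code. Note that the flag is re-armed only after the wait loop exits, which is why Andrew's infinite-interrupt scenario cannot occur:

```java
public class ReadLockSketch {
    private int writers = 0; // stand-in for real writer bookkeeping

    public synchronized void beginRead() {
        boolean wasInterrupted = false;
        while (writers > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                wasInterrupted = true; // remember the interrupt, keep waiting
            }
        }
        if (wasInterrupted) {
            // Re-arm the flag that throwing the exception cleared, so the
            // caller's own interrupt handling is unaffected by our code.
            Thread.currentThread().interrupt();
        }
    }

    public synchronized void endRead() {
        notifyAll();
    }

    public static void main(String[] args) {
        ReadLockSketch lock = new ReadLockSketch();
        lock.beginRead();
        lock.endRead();
        System.out.println("read completed, interrupt flag: "
                + Thread.currentThread().isInterrupted());
    }
}
```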
    Mmh ... I think I run out of arguments now. MAAaaAaax !!! Are you out there ?
    Best,
    Phil.
    [ August 28, 2003: Message edited by: Philippe Maquet ]
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Phil,
    I will return to the sync design discussion (our interruption-problem fight must wait for Max).
    I can see one more disadvantage of RWSync: there are some nested locks in the code.
    They are not dangerous, I have tested it, but still it is not nice.
    Example:

    Do you have something similar, or did you manage somehow to avoid it?
    The only idea I have is to call checkAccess first, which would return a boolean (false if no such lock/cookie exists), and then begin the RWLock synch. If the record is then found and the result of checkAccess was false, throw a SecurityException, but it looks weird:

    Best,
    Vlad
    [ August 28, 2003: Message edited by: Vlad Rabkin ]
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Vlad,
    In your second code snippet, your first comment is false: either your caller owns the lock, and then another caller can neither lock nor unlock it, or your caller does not own the lock and a SecurityException is thrown. So it's OK.


    I can see one more disadvantage of RWSync. There are some nested locks in the code.
    They are not dangerous, I have tested it, but still it is not nice.


    That nested-locks issue does not relate to our way of synchronizing. I have a similar situation for cache access, because even for a read operation on Data I may have a write operation on the cache (if the record to be read was not in the cache). And if it was in the cache, I have a write operation on the cache item found (I just "synchronize" on it) to increase its hit count and update its last access time.
    We cannot avoid all nested locks. It's more a question of being reasonable in nesting, being aware of them, and being very, very cautious.
    Best,
    Phil.
    [ August 28, 2003: Message edited by: Philippe Maquet ]
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Phil,

    In your second code snippet, your first comment is false


    Logically you are right. I still have to think about whether technically it could be a problem, but at the moment I agree with you.

    It's more a question of being reasonable in nesting, being aware of them, and being very, very cautious


    On the one hand I agree with you; on the other hand, Max has strongly criticized having multiple and/or nested locks. The best example was the lock() method (if I had not tested it, I would never have realized that it leads to deadlock).
    I really hope that Max/Andrew/Jim would review and comment this point also.
    Tx,
    Vlad
    [ August 28, 2003: Message edited by: Vlad Rabkin ]
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Vlad,

    Max has strongly criticized having multiple and/or nested locks


    Well, anybody would criticize the principle of having multiple and/or nested locks, because they bring a risk of deadlock, which is the worst flaw you can have in any program.
    But if there is no reasonable way to avoid them in a given context, they are defensible IMO.
    Here are all the possible nested synch paths from a Data.readRecord() method in my own design :
    If the record was in cache : (BR=beginRead ER=endRead BW=beginWrite EW=endWrite)
    Data.BR (1) - Cache.BR (2) - synchronized(cacheItem) (3) - Cache.ER - Data.ER
    If the record was not in cache :
    Data.BR (1) - Cache.BR (2) - Cache.ER - Cache.BW (2') - Cache.EW - Data.ER
    and reading them, I know that I cannot get a deadlock. Now, are those nested locks justified? I think so:
    (1) = concurrent read access on Data
    (2) = concurrent read access on Cache
    (2') = exclusive write access on Cache (a record is added if not found in (2), and sometimes the cache is cleaned up)
    (3) = exclusive write access on cacheItem (to update hits and last access time)
    Because how else could we avoid them while keeping some read concurrency (and the far better performance it brings)?
    I don't see another way of doing it.
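The two paths above can be sketched with a fixed lock-acquisition order. This is only an illustration of the ordering argument (using the java.util.concurrent read-write locks that later shipped with the JDK, and a fake cache lookup), not Philippe's actual classes: locks are always taken in the order Data before Cache, and the cache read lock is released before the cache write lock is requested, so no cycle can form.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class NestedLockOrder {
    private static final ReentrantReadWriteLock DATA  = new ReentrantReadWriteLock();
    private static final ReentrantReadWriteLock CACHE = new ReentrantReadWriteLock();

    public static String readRecord(int recNo) {
        DATA.readLock().lock();                  // (1) Data.BR
        try {
            CACHE.readLock().lock();             // (2) Cache.BR
            boolean hit;
            try {
                hit = (recNo % 2 == 0);          // fake cache lookup
            } finally {
                CACHE.readLock().unlock();       // Cache.ER, before any write lock
            }
            if (!hit) {
                CACHE.writeLock().lock();        // (2') Cache.BW
                try {
                    // read from the file and add the record to the cache here
                } finally {
                    CACHE.writeLock().unlock();  // Cache.EW
                }
            }
            return hit ? "cached" : "loaded";
        } finally {
            DATA.readLock().unlock();            // Data.ER
        }
    }

    public static void main(String[] args) {
        System.out.println(readRecord(0));
        System.out.println(readRecord(1));
    }
}
```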
    Regards,
    Phil.
    [ August 28, 2003: Message edited by: Philippe Maquet ]
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Phil,

    (1) = concurrent read access on Data
    (2) = concurrent read access on Cache


    Ok.

    I don't see another way of doing.


    There are two other designs ThreadSafeData1 (Jim) and ThreadSafeData4 (Andrew)
    Well, I don't want to make you angry by arguing the issue. I use RWLock and nested locks too. I just want to hear from as many people as possible that it is cool. Otherwise I'll seriously consider switching to one of the other two designs.
    Best,
    Vlad
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Vlad,

    There are two other designs ThreadSafeData1 (Jim) and ThreadSafeData4 (Andrew)


    Either I missed something, or neither ThreadSafeData1 (you or Jim?!) nor ThreadSafeData4 (Andrew) allows concurrent reads on both Data and Cache...

    I just want to hear from as many people as possible that it is cool. Otherwise I'll seriously consider switching to one of the other two designs


    Mmh... I see even other ways of deciding which would be the best design :
  • organize some opinion poll on the subject and follow the majority
  • toss a coin
  • ...


  • Best,
    Phil.
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Phil,

    organize some opinion poll on the subject and follow the majority



    Neither TS1 nor TS4 provides concurrent reads, but:
    TS1 allows fine-grained locking (record-based, not database-based);
    TS4 is very simple, which is a major issue...
    Now, I suppose that our discussion about interruption makes no sense, since I believe both solutions are acceptable. Just look at this link (I assume you know who Doug Lea is):
    http://gee.cs.oswego.edu/dl/concurrent/dist/docs/java/util/concurrent/locks/ReentrantLock.html
    There are two methods:
    - lock()
    - lockInterruptibly()
    But then you can read some remarks:

    This implementation supports the interruption of lock acquisition and provides a Condition implementation that supports the interruption of thread suspension. It also favors interruption over normal method return.


    Sources are here: http://gee.cs.oswego.edu/dl/concurrent/dist/
    They are compiled, but I guess we can find the sources somewhere here:
    http://gee.cs.oswego.edu/dl/
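For illustration, here is roughly how the two methods Vlad lists differ in use. This uses the java.util.concurrent.locks version that was later folded into J2SE 5.0, so treat it as a sketch of the API the linked page documents rather than assignment code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockFlavors {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();              // keeps waiting even if the thread is interrupted
        try {
            // critical section
        } finally {
            lock.unlock();
        }

        lock.lockInterruptibly(); // throws InterruptedException if interrupted
        try {                     // while acquiring (favoring interruption)
            // critical section
        } finally {
            lock.unlock();
        }
        System.out.println(lock.isLocked());
    }
}
```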
    Best,
    Vlad
    [ August 28, 2003: Message edited by: Vlad Rabkin ]
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi all,
    I will leave you after tomorrow for a week, because I am going on vacation. I deserve some rest from hard arguing with Phil (just a joke).

    I will try again to summarize the results of our discussion (as I understand them). Sorry for the big mail, but it is not easy to summarize a discussion containing 57 messages.
    There are 3 possible synchronization designs which have been discussed:
    ThreadSafeData1 (rough representation of Jim's idea):
    (1) one db-level sync lock to access an element from the ArrayList - this lock is dropped at the end of this step
    (2) one record-level sync lock to do anything with a record - this is also dropped at the end of this step.
    Advantages:
    - it is fine-grained synchronization,
    - it is relatively simple.
    Disadvantages:
    - no concurrent reads are possible,
    - a bit (but not much) of lock overhead because of the record locks,
    - requires a BIT more care in some methods (add/update/find, in my opinion) to avoid mistakes.
    ThreadSafeData2/4 (rough representation of Andrew's idea):
    All methods (lock/unlock/update/add/delete) except find are synchronized on the same object (e.g. this, a static Object mutex, or the Collection cachedDB).
    find is not synchronized, but each readRecord it invokes is. It means dirty reads are allowed, but no garbled data can result.

    Advantages:
    - it is extremely simple, which is one of the major issues,
    - find does not block the database for a long time, since it is divided into many separate locks for reads.
    Disadvantages:
    - no concurrent reads are possible,
    - it is not fine-grained synchronization, since the whole database is locked during any operation except find,
    - the total response time of find can be long.
    ThreadSafeData3:
    ReadWriteLock synchronization (with preferred write lock):
    Advantages:
    - allows concurrent reads (e.g. even the find method can be synchronized, since it still allows other read/lock/unlock operations to proceed),
    - no real need for record locks, since the concurrency level is very high,
    - a pretty elegant solution where most of the synchronization logic is hidden in the RWLock class, which can be changed without affecting the business logic.
    Disadvantages:
    - requires at least one try/finally block in the Data class to make sure the lock will be released,
    - a bit more complicated than ThreadSafeData2/4,
    - requires some caution to avoid deadlocks.
    There are some issues still open:
    1) Question to Andrew:
    Andrew, you have said that find() in ThreadSafeData4 is thread-safe: if another thread updates/deletes a record, the find method still holds a reference to the old copy.
    In my opinion there are some details which the developer should never forget, to avoid concurrency problems:
    In this case Andrew is right:
    record = new Record(.., ..., ..);
    list.set(recNo, record);
    , but this will not work:
    Record record = (Record) list.get(recNo);
    record.setX(..);
    Another solution is for read to always return a clone of the record, but this can cause memory overhead.

    Andrew, is that right?
    2) Question to Max (Andrew has already replied, and I support his point).
    ReadWriteLock.acquireReadLock()/acquireWriteLock() can theoretically throw an InterruptedException (which should never happen in our application, because we never call Thread.interrupt()). The options are:
    - we can leave the interrupted status of the thread uncleared and not throw an InterruptedException, as Phil suggests
    (advantage: the caller doesn't have to catch InterruptedException, which allows a single try/finally construct instead of the ugly try/catch + try/finally combination)
    - we can rethrow an IOInterruptedException instead of the InterruptedException, as in your application (DVD.reserveDVD())
    (advantage: if we later decide to use a time-out or similar, we would not have to change any code in the Data class; disadvantage: ugly try/catch + try/finally constructs)
    - we can rethrow a RuntimeException (as Andrew suggested, and the solution I probably like the most), since an InterruptedException would indicate an implementation error, because we never invoke Thread.interrupt()
    (advantage: the caller doesn't have to catch it, so a simple try/finally suffices; disadvantage: extensibility: if we later decide to invoke interrupt() for time-outs and force the client to catch the exception, the Data class would have to be changed)
    Max, would you comment on each of these options?
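    The third option might look like the following sketch (class and method names are illustrative assumptions, not Max's or Andrew's actual code): the theoretically impossible InterruptedException is wrapped in an unchecked exception, so callers get by with a single try/finally.

    ```java
    // Minimal read-write lock sketch. The InterruptedException branch
    // should be unreachable because this application never calls
    // Thread.interrupt(), so it is rethrown as an unchecked exception.
    public class ReadWriteLock {
        private int readers = 0;
        private boolean writerActive = false;

        public synchronized void acquireReadLock() {
            try {
                while (writerActive) {
                    wait();
                }
            } catch (InterruptedException e) {
                // Reaching this point would indicate a programming error.
                throw new IllegalStateException("Unexpected interrupt", e);
            }
            readers++;
        }

        public synchronized void releaseReadLock() {
            readers--;
            notifyAll();
        }

        public synchronized void acquireWriteLock() {
            try {
                while (writerActive || readers > 0) {
                    wait();
                }
            } catch (InterruptedException e) {
                throw new IllegalStateException("Unexpected interrupt", e);
            }
            writerActive = true;
        }

        public synchronized void releaseWriteLock() {
            writerActive = false;
            notifyAll();
        }

        public synchronized int activeReaders() {
            return readers;
        }
    }
    ```

    A caller then needs only `lock.acquireReadLock(); try { ... } finally { lock.releaseReadLock(); }` with no checked exception to handle.
    
    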
    3) Question to Andrew, Max, Jim (Phil already replied):
    Especially in the ReadWriteLock case, I believe it is nice to separate the synchronization from the database logic (and I don't mean the already-1000-times-discussed LockManager): a Data class without any synchronization at all (except the lock/unlock methods) and a ThreadSafeData class, built by composition or inheritance, which contains the synchronization logic.

    Advantage: one developer can spend his efforts on the IO while another works on the synchronization, independently.
    Disadvantage: methods like checkAccess (which throws a SecurityException if no such lock exists), called from update/delete, are synchronized (in my case on the cached DB object).
    So having these in two separate classes would force me to have nested locks. The good news is that these locks are on different objects and do not produce deadlocks (I have tested it). The bad news is that, I guess, we should avoid nested locks anyway.
    With everything in the Data class I can avoid nested locks by the tricks I described in previous messages.
    What would you suggest guys?
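    The composition variant could be sketched like this (the DBAccess-style interface and the method set are simplified assumptions, not the real assignment interface):

    ```java
    // Sketch: Data holds only the IO/cache logic; ThreadSafeData wraps it
    // by composition and adds all the synchronization.
    interface DBAccess {
        String[] read(int recNo);
        void update(int recNo, String[] record);
    }

    class Data implements DBAccess {
        private final java.util.List<String[]> cache = new java.util.ArrayList<>();

        public int create(String[] record) {   // no locking anywhere in Data
            cache.add(record);
            return cache.size() - 1;
        }

        public String[] read(int recNo) {
            return cache.get(recNo);
        }

        public void update(int recNo, String[] record) {
            cache.set(recNo, record);
        }
    }

    class ThreadSafeData implements DBAccess {
        private final Data delegate;

        ThreadSafeData(Data delegate) {
            this.delegate = delegate;
        }

        // All synchronization lives in this wrapper, not in Data.
        public synchronized String[] read(int recNo) {
            return delegate.read(recNo);
        }

        public synchronized void update(int recNo, String[] record) {
            delegate.update(recNo, record);
        }
    }
    ```

    The nested-lock concern above arises when a wrapped method must itself synchronize on another object (e.g. the cache) while the wrapper's monitor is already held.
    
    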
    4) Question to Max/Andrew (Phil, I know your opinion already ):
    Do you still think that ReadWriteLockSynchronization is a bad idea?

    Thanx all of you!
    Best,
    Vlad
    [ August 28, 2003: Message edited by: Vlad Rabkin ]
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Vlad,

    I will leave you the day after tomorrow for a week's vacation. I deserve some rest from the hard arguing with Phil (just a joke).


    Have a good vacation !
    I'll appreciate further discussions with you after you come back, Vlad; you seem to be doing a very good job and your posts are very interesting.
    Best,
    Phil.
    [ August 29, 2003: Message edited by: Philippe Maquet ]
     
    Andrew Monkhouse
    author and jackaroo
    Posts: 12200
    Hi Vlad,
    This has been a great discussion. Thanks for running with it.
    The ReadWriteLockSynchronization is very nice, and I have bookmarked this thread for future reference. But (you knew there was going to be a but, didn't you? ) I think that some of the solutions presented here are far too complex for such a simple assignment.
    Sure, the simpler solutions do not allow simultaneous reads/writes. But why should this be a problem? The amount of data being read or written for any one record is so small that any blocking is going to be extremely short. And we are not anticipating thousands of clients or thousands of records.
    So my preference remains with the simpler solutions.
    Have a great holiday - we can always take this up again when you get back (if Philippe allows us all to have a break (just kidding Phil)).
    Regards, Andrew
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Andrew,

    I think that some of the solutions presented here are far too complex for such a simple assignment.
    (...)
    if Philippe allows us all to have a break (just kidding Phil)).


    Don't worry about it Andrew. It will take time before I come back on this forum with any complex stuff.
    I even removed all references to my own design from Kim's thread about locking. They just brought some confusion into it, while I couldn't get the feedback I had hoped for. Writing posts takes time, so writing ones which benefit neither the readers nor the writer is just a loss of time. It's far more beneficial to save that time, invest it in developing the assignment, and finally get it done faster.
    Best,
    Phil.
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Andrew,

    you knew there was going to be a but didn't you


    Yep . Honestly speaking, it was pretty calm in the beginning, before Phil came in!

    Have a great holiday - we can always take this up again when you get back (if Philippe allows us all to have a break (just kidding Phil)).


    Tx Andrew. Phil, could you please do me a favour ... ?
    Andrew, honestly speaking, I am almost won over to your design.
    Could you try to answer my question 1 from the last message? I just want to think about implementing the design details instead of still choosing a design on the way to Italy
    Phil,
    I guess there is an answer from Max about the InterruptedException here:
    https://coderanch.com/t/183941/java-developer-SCJD/certification/NX-data-consistent

    Just as an FYI, I would argue that the InteruptedException should cause an explicit fatal exception to thrown: one that ripples up to middle tier as a FatalSystemException, and eventually to the user as a FatalGUIException, and which cause the system to shutdown. This is how most apps deal with this sort of thing in the real world.


    , but as I said, I really believe both of our ideas are acceptable. (I have decided to throw a RuntimeException and avoid the two try constructs.)

    Thanks again many times, guys,
    Have fun there!
    Vlad
     
    Andrew Monkhouse
    author and jackaroo
    Posts: 12200
    Hi Vlad,

    Andrew, you have said that find() in ThreadSafeData4 is thread safe: if another thread updates/deletes a record, the find method still holds a reference to the old copy.


    Correct, so the client will still need to validate the records it reads as a result of this find. I do not consider this a problem, as you have to validate anyway (to get around the problem where the client expects an exact match, but the find performs a "starts with" match).

    In this case Andrew is right:
    record = new Record(.., ..., ..);
    list.set(recNo, record);


    And that is what is happening in this case.

    Another solution is for read() to always return a clone of the record, but that could cause memory overhead.


    I don't think we need to be too concerned about the memory issues of the clone - the cloned structure should be very short-lived on the server side.
    Regards, Andrew
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Andrew,
    Tx for your reply,

    I don't think we need to be too concerned about the memory issues of the clone


    Great point. I have just read an article at javaworld.com which says that cloning objects is a good way to avoid synchronization. In that case I don't need to lock on a record after it has been retrieved (by the read method) in the find method (in the case of a cached database). So, since memory is not a big deal, I have a wonderful excuse to clone and thereby avoid problems with the cached database.
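    The defensive-copy idea could be sketched like this (a record is assumed to be a String[], as in the assignment; the class and method names are illustrative, not the real assignment interface):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: read() hands out a clone, so find() can inspect its snapshot
    // without holding any lock while update() changes the cache.
    public class CachedDb {
        private final List<String[]> cache = new ArrayList<>();

        public synchronized int create(String[] record) {
            cache.add(record);
            return cache.size() - 1;
        }

        public synchronized String[] read(int recNo) {
            return cache.get(recNo).clone();  // caller gets an independent copy
        }

        public synchronized void update(int recNo, String[] record) {
            cache.set(recNo, record);
        }
    }
    ```

    Since String[] contains immutable elements, a shallow clone() is a sufficient snapshot; each copy lives only for the duration of one find() comparison, which matches Andrew's point that the memory is short-lived.
    
    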
    Tx Andrew,
    Vlad
    P.S. Could you take a look at this topic: Separation of data access and synchronization logic.
     