Synchronizing read and find

 
Vlad Rabkin
Ranch Hand
Posts: 555
Hi all,
I am almost done with my assignment.
Analyzing the results of others, I can see that almost all people preparing the URLyBird assignment have lost a lot of points on locking. None of them knows the reason.
I first tried to use a ReadWriteLock, allowing simultaneous reads as long as nobody writes. Having two objects for synchronization
(one for the read-write lock and one for the lock/unlock functionality), I ran into deadlock problems. I decided to simplify everything, and now I synchronize everything on the cache map containing all records (I use database caching): read, find, update, create, delete, lock, unlock.


All the certification preparation books do the same (they just synchronize on this, which is almost the same as what I do).
However, I believe that I have to optimize read/find somehow.
Some people do not synchronize read/find at all, saying they allow dirty reads.
I don't really understand how that is supposed to work: I believe it is unacceptable to deliver corrupted data to the client (e.g. instead of the hotel name Sheraton -> Shera). To find the records I have to iterate over my HashMap, which can lead to an exception if any other thread changes any record, because of the fail-fast mechanism of the HashMap Iterator.
Max has said that threads make their own local copies of this object, so neither a fail-fast exception nor inconsistent values can occur.
Honestly, I don't really understand why: if he is right, what is the point of the Iterator's fail-fast mechanism (when could it ever happen)?
Could somebody help me understand this issue:
1) Is it OK to synchronize read/find, blocking the whole database each time, but keeping the code simple and stable?
2) If not, has anybody used a read-write lock, and how was it integrated with the locks used for lock/unlock?
3) If you allow dirty reads and also cache the database, how do you iterate over this HashMap or List?
Thanx,
Vlad
 
Tony Collins
Ranch Hand
Posts: 435
My code writes to the db file as shown below. As I write one whole record out to the file in a single write method, I don't see how a dirty read can occur. The record is either updated or it isn't.
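The code referred to above is not reproduced in the thread; a minimal sketch of the idea, assuming a RandomAccessFile and fixed-length records (the layout constants are illustrative), might look like this:

import java.io.IOException;
import java.io.RandomAccessFile;

// A minimal sketch (not the original poster's code): one whole record is
// written by a single synchronized method, so a record is either fully
// written or not written at all.
public class RecordWriter {
    private static final int HEADER_LENGTH = 70;   // assumed header size
    private static final int RECORD_LENGTH = 160;  // assumed record size
    private final RandomAccessFile file;

    public RecordWriter(RandomAccessFile file) {
        this.file = file;
    }

    public synchronized void writeRecord(long recNo, byte[] recordBytes) throws IOException {
        file.seek(HEADER_LENGTH + recNo * RECORD_LENGTH);
        file.write(recordBytes);   // a single call writes the complete record
    }
}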

 
Philippe Maquet
Bartender
Posts: 1872
Hi Vlad,
I think that trying to ensure maximum concurrency is a good thing when designing a database system. Even if many people here (including Max, I think) wouldn't care about performance and concurrency in this assignment, it's up to you to decide which level of attention you'll pay to it.
I've written a separate class which may be used as a multiple-readers / single-writer synchronizer, and I use three instances of that class in Data (a sketch of such a class follows the list below):
  • All public methods coming from DBAccess fall into two categories: they are read entry points (readRecord, findByCriteria) or write entry points (createRecord, updateRecord, deleteRecord). They all use one "main" such synchronizer to allow concurrency of the "read" methods.
  • But even a global "read" operation may have to perform some "write" operation. When I try to read a record from the cache and it is not found, it must be added to the cache. So my cache class uses its own multiple-readers / single-writer synchronizer.
  • Finally, as I also use a write cache, which performs the actual writes in a separate thread, and as the FileChannel doc seems unclear to me as far as concurrency is concerned ("Other operations, in particular those that take an explicit position, may proceed concurrently; whether they in fact do so is dependent upon the underlying implementation and is therefore unspecified."), I decided to use a third synchronizer to allow multiple concurrent reads of the file (in case the underlying implementation allows it) while guarding against any dirty reads.

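For what it's worth, a minimal sketch of such a multiple-readers / single-writer synchronizer (the class name and details are illustrative, not Phil's actual code) could look like this:

// A minimal multiple-readers / single-writer synchronizer: any number of
// readers may proceed together, but a writer runs alone. This simple version
// gives readers preference, which is enough to show the idea.
public class ReadWriteSynchronizer {
    private int activeReaders = 0;
    private boolean writerActive = false;

    public synchronized void beginRead() throws InterruptedException {
        while (writerActive) {
            wait();                 // wait until the current writer finishes
        }
        activeReaders++;
    }

    public synchronized void endRead() {
        activeReaders--;
        if (activeReaders == 0) {
            notifyAll();            // a waiting writer may now proceed
        }
    }

    public synchronized void beginWrite() throws InterruptedException {
        while (writerActive || activeReaders > 0) {
            wait();                 // wait until all readers and any writer are done
        }
        writerActive = true;
    }

    public synchronized void endWrite() {
        writerActive = false;
        notifyAll();                // wake up waiting readers and writers
    }
}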

Best,
Phil.
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Phil,

    it's up to you to decide which level of attention you'll pay to it


That doesn't seem to be true, because one of us got 22 points for locking (out of 80) for synchronizing read and write....
Thanks for describing your design. I have done it a bit differently,
but it made me think that I have to optimize my synchronization.
Tx!
    Vlad
     
    Tony Collins
    Ranch Hand
    Posts: 435
Why don't you cache your DB, Vlad? Have it as a write-through cache, so when a client requires an update, you update the cache and then write to the DB. That way your find/read would not require synchronisation.
It's quite a quick change.
    Tony
    [ August 01, 2003: Message edited by: Tony Collins ]
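A minimal sketch of the write-through idea (illustrative only, not Tony's actual code; the file layout constants are assumptions):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

// A write-through cache: an update goes to the in-memory map first and is
// then written straight through to the database file.
public class WriteThroughCache {
    private static final int HEADER_LENGTH = 70;   // assumed header size
    private static final int RECORD_LENGTH = 160;  // assumed record size
    private final Map<Long, String[]> cache = new HashMap<Long, String[]>();
    private final RandomAccessFile file;

    public WriteThroughCache(RandomAccessFile file) {
        this.file = file;
    }

    public synchronized void update(long recNo, String[] fields, byte[] rawBytes)
            throws IOException {
        cache.put(recNo, fields);                        // update the cache...
        file.seek(HEADER_LENGTH + recNo * RECORD_LENGTH);
        file.write(rawBytes);                            // ...then write through to the file
    }

    // Whether this read needs synchronization as well is exactly what the
    // rest of the thread debates.
    public String[] read(long recNo) {
        return cache.get(recNo);
    }
}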
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Tony,

Why don't you cache your DB, Vlad?


    Vlad uses a cache already.

Have it as a write-through cache, so when a client requires an update, you update the cache and then write to the DB. That way your find/read would not require synchronisation.


If you write your updates to the cache (which is common sense), you need to synchronize access to the cache. And just as you do for writes, you also need to synchronize reads from the cache, which prevents multiple concurrent reads...
    Best,
    Phil.
    [ August 01, 2003: Message edited by: Philippe Maquet ]
     
    Tony Collins
    Ranch Hand
    Posts: 435
Aren't we just worried about reading a record that is halfway through being written? And if we are using a Hashtable and using its synchronized put method to add a new/updated record, won't we be OK, or am I missing something?
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Tony,

Aren't we just worried about reading a record that is halfway through being written?


Yep.

And if we are using a Hashtable and using its synchronized put method


No, I don't use a Hashtable, because Hashtable has all of its get/put methods synchronized (which is not performant).
I use a HashMap, and that was my problem.
We had long discussions earlier about FileChannel etc. I don't rely
on FileChannel synchronization; moreover, since I use a cached database,
it has nothing to do with FileChannels.
So, either you explicitly synchronize all read/write methods on the HashMap object, or you use a Hashtable. Neither solution is good: both are simple and stable, but not performant, since each read blocks all other reads.
I did use a ReadWriteLock mechanism (it has nothing to do with the lock/unlock LockManager). I had some deadlock situations because of conflicts between
the ReadWriteLock synchronization manager and the LockManager (for lock/unlock), since I permanently had to work with two different locks.
I have now fixed the problem with the ReadWriteLock manager. Still,
I believe that Max is right: we should try to work with only one kind of lock (otherwise there is always a risk of deadlock); I just don't have any idea how I can do all these things with only one lock while avoiding complexity.
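For reference, the first option above (synchronizing every read and every find on the cache map itself) might look roughly like this; the class and field names are illustrative, not Vlad's actual Data class:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// "Synchronize everything on the cache map": every read and every iteration
// holds the same monitor, so reads block other reads, but the fail-fast
// iterator can never see a concurrent modification.
public class SynchronizedOnMapExample {
    private final Map<Long, String[]> cache = new HashMap<Long, String[]>();

    public String[] readRecord(long recNo) {
        synchronized (cache) {
            String[] record = cache.get(recNo);
            return record == null ? null : record.clone();   // hand out a copy
        }
    }

    public long[] findByName(String prefix) {
        synchronized (cache) {
            List<Long> hits = new ArrayList<Long>();
            for (Map.Entry<Long, String[]> e : cache.entrySet()) {
                if (e.getValue()[0].startsWith(prefix)) {     // field 0 assumed to be the name
                    hits.add(e.getKey());
                }
            }
            long[] result = new long[hits.size()];
            for (int i = 0; i < result.length; i++) {
                result[i] = hits.get(i);
            }
            return result;
        }
    }
}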
    Cheers,
    Vlad
     
    Ranch Hand
    Posts: 49
I am debating the same issue for URLyBird. I am planning to synchronize the "read" method but not the find method. The find method has to loop through all the records, and that would affect performance. As for dirty reads,
we have to handle them with RecordNotFoundException, in case a user tries to update such a record.
I do have a question for everybody: if we use a HashMap to store the locks, then we need to sync on the HashMap to do any modification to the locks, in order to stay safe with its fail-fast behaviour. This is equivalent to taking a database-wide lock.
Is it better to use a Hashtable, since we could then work on individual locks without sync-ing the collection? But URLyBird does say that we need not be concerned with concurrency at the database level. Is there an alternative that lets us work with individual locks?
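For illustration, a common sketch of a lock manager backed by a HashMap (names and details are assumptions, not anyone's actual code): the map is only touched while holding its monitor, and a client that finds a record already locked waits on that same monitor until it is notified.

import java.util.HashMap;
import java.util.Map;

// Record locks kept in a HashMap guarded by its own monitor.
public class LockManager {
    private final Map<Long, Long> locks = new HashMap<Long, Long>(); // recNo -> owner cookie

    public void lock(long recNo, long owner) throws InterruptedException {
        synchronized (locks) {
            while (locks.containsKey(recNo)) {
                locks.wait();          // record is held by someone else
            }
            locks.put(recNo, owner);
        }
    }

    public void unlock(long recNo, long owner) {
        synchronized (locks) {
            Long current = locks.get(recNo);
            if (current != null && current.longValue() == owner) {
                locks.remove(recNo);
                locks.notifyAll();     // wake up threads waiting for a record
            }
        }
    }
}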
     
    Tony Collins
    Ranch Hand
    Posts: 435
    Vlad,
I use a Hashtable so the update of my cache is atomic. I create the new record and then add it to the Hashtable. All the put method does is add the reference to my record to the map; this is a quick operation and therefore holds the semaphore (or whatever it's called in Java) for only a short time. So I don't see that it is a major issue. The atomicity of the underlying DB write mechanism is no longer an issue.
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi,
First, it is not the atomicity of putting/getting to/from the Hashtable that must be guaranteed.
The atomicity of the whole transaction must be guaranteed: write-to-file plus put into the map. That is why I use a HashMap and put both the file write and the map put into one synchronized block.
Second, it is OK to synchronize writes completely, but it is not OK (this is only my personal opinion) to synchronize read or find:
one read should not block another read! It neither makes sense nor gives satisfactory performance. So I optimized it with a ReadWrite lock mechanism, which allows read and find to run concurrently at any time, as long as nobody writes.
It made the code more complicated, but more reasonable.
Again, this is my personal opinion. That was exactly the subject of this topic:
whether we should do it.
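A sketch of the synchronized block described above (illustrative only; 'cache', 'file', and the layout constants are assumed fields of the surrounding Data class): the file write and the cache put happen under one lock, so other threads see the record either before the update or after it, never halfway.

public void updateRecord(long recNo, String[] fields, byte[] rawBytes) throws IOException {
    synchronized (cache) {
        file.seek(HEADER_LENGTH + recNo * RECORD_LENGTH);  // assumed layout constants
        file.write(rawBytes);       // write the record to the file
        cache.put(recNo, fields);   // then publish the new value in the cache
    }
}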
    Cheers,
    Vlad.
     
    Tony Collins
    Ranch Hand
    Posts: 435
    Yeah,
it is a good question. I think maybe we can use Collections.synchronizedMap to sync writes to the cache and access the map un-sync'ed for reads.
But the question that comes to mind is: what happens if you read a collection whilst it is being updated? I would expect you could receive an invalid reference to an object; could this then cause a runtime exception?
Anyone?
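For reference, a small illustration of the Collections.synchronizedMap idea raised above (the record values are made up): individual get/put calls on the wrapper are thread-safe, but iteration must still be synchronized on the wrapper, which is the documented contract of that class.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapExample {
    public static void main(String[] args) {
        Map<Long, String[]> cache =
                Collections.synchronizedMap(new HashMap<Long, String[]>());

        cache.put(1L, new String[] {"Sheraton", "Smallville"});  // atomic put
        String[] rec = cache.get(1L);                             // atomic get
        System.out.println(rec[0]);

        synchronized (cache) {                                    // required when iterating
            for (Map.Entry<Long, String[]> e : cache.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue()[0]);
            }
        }
    }
}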
     
    Tony Collins
    Ranch Hand
    Posts: 435
I think the solution is simple!
You only have to update the cache (update the reference to a record) when you create a new record, therefore the cache can be unsynchronized. A write only occurs when a lock is held, so a dirty write can't happen. Dirty reads aren't a danger to the system. So: no synchronization, but safe.
     
    Vlad Rabkin
    Ranch Hand
    Posts: 555
    Hi Tony,
The lock guarantees atomicity for a write (update).
But what happens if you update the HashMap and read from it concurrently?
A dirty read is OK, but I am concerned not about a dirty read, but about an inconsistent value:
Record #1 has the value ABC,
the updated value should be DEF,
but you read DEC!
Again, Max meant that threads make a local copy of the HashMap, so this cannot happen,
but I don't understand why.
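A small illustration of the kind of inconsistency described above (not from the thread; the record layout is made up): if a writer changes the fields of a shared String[] record one by one while a reader looks at it without any common lock, the reader can see a mix of old and new field values.

public class TornReadExample {
    private static final String[] record = {"ABC", "ABC"};   // shared record {field0, field1}

    public static void main(String[] args) {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 1000000; i++) {
                    // unsafe: the two fields are updated one after the other
                    record[0] = "DEF";
                    record[1] = "DEF";
                    record[0] = "ABC";
                    record[1] = "ABC";
                }
            }
        });
        writer.start();

        for (int i = 0; i < 1000000; i++) {
            String a = record[0];
            String b = record[1];
            if (!a.equals(b)) {       // both fields should always match, but...
                System.out.println("Inconsistent read: " + a + " / " + b);
                return;
            }
        }
        System.out.println("No inconsistency seen this run (it is a race, after all).");
    }
}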

    Vlad
     