criteria search concurrency

 
Arjan Broer
Greenhorn
Posts: 20
Hi all,
For criteriaFind I was just going through the doom scenarios.
When the search starts (after parsing), I first load all records into an array. While reading all the records, another thread could delete a record. Now my record count has changed while I'm in the loop, and an exception is bound to be thrown. Should I first acquire a database lock before reading all the records?
Please help me with this doom scenario.
Kind regards
Arjan Broer
 
Peter den Haan
author
Posts: 3252

Originally posted by Arjan Broer:
While reading all the records, another thread could delete a record. [...]

There's that. Depending on how you implement criteriaFind, there are concurrency issues with add() and write() as well. The simplest and most sane approach is to take your cue from the find() method, which does, after all, exactly the same job that criteriaFind() does, if in a more limited way.
Note that find() is synchronized and uses the primitive, non-synchronized seek() and readRecord() methods to access the database.
Note also that your criteriaFind() method will probably consist of two halves: (1) parse the criteria String; (2) loop through all the database records and apply the criteria to them. In most implementations, only (2) needs to be synchronized.
- Peter
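
As a rough sketch of the pattern Peter outlines, a criteriaFind that parses outside the lock and only synchronizes the record scan might look like the class below. This is not the assignment's actual Data class: the flat file is replaced by an in-memory array, and the add/matches helpers and the comma-separated criteria format are invented purely for illustration.

import java.util.ArrayList;
import java.util.List;

// Sketch only: an in-memory stand-in for the flat-file Data class, used to show
// where the synchronization goes. Names and record layout are illustrative.
class SynchronizedCriteriaFindSketch {
    private String[][] records = new String[0][];            // shared state, guarded by 'this'

    public synchronized void add(String[] record) {
        String[][] grown = new String[records.length + 1][];
        System.arraycopy(records, 0, grown, 0, records.length);
        grown[records.length] = record;
        records = grown;
    }

    public String[][] criteriaFind(String criteria) {
        // (1) Parse the criteria String; this touches no shared state, so no lock is needed.
        String[] wanted = criteria.split(",", -1);

        List<String[]> matches = new ArrayList<String[]>();
        // (2) Only the scan over the shared records holds the monitor, mirroring find().
        synchronized (this) {
            for (String[] record : records) {
                if (matches(record, wanted)) {
                    matches.add(record);
                }
            }
        }
        return matches.toArray(new String[0][]);
    }

    // Empty criterion fields act as wildcards; non-empty fields must match exactly.
    private boolean matches(String[] record, String[] wanted) {
        for (int i = 0; i < wanted.length && i < record.length; i++) {
            if (wanted[i].length() > 0 && !wanted[i].equals(record[i])) {
                return false;
            }
        }
        return true;
    }
}

The point of this shape is that parsing is pure CPU work on the caller's own data, so keeping it outside the synchronized block shortens the time the whole database is locked.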
 
Arjan Broer
Greenhorn
Posts: 20
Peter,
I don't see how I could use the find method since I don't have the value for the key field. So I should retrieve the records another way, but then I'm stuck with the same problem.
Regards (groeten)
Arjan Broer
 
Matt Ghiold
Ranch Hand
Posts: 213
Use the get method, as it takes a record number and is synchronized for thread safety.
 
Bernhard Woditschka
Ranch Hand
Posts: 89
I think not synchronizing criteriaFind at all works too:
I parse the criteria, then loop through all record ids (1 to max), get each record through the synchronized getRecord(idx), apply my criteria and add it to the response (if not excluded).
After all, when the underlying db changes I still just get a snapshot of what's there while I access it.
As far as I can see, the atomic unit is a record and not the whole criteriaFind result.
What do you think?
Bern
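
Read literally, Bernhard's approach might look like the sketch below. Again, this is not the real Data class: getRecordCount(), getRecord() and the in-memory list are assumptions standing in for whatever synchronized accessors the actual implementation provides.

import java.util.ArrayList;
import java.util.List;

// Sketch of the per-record-lock variant: criteriaFind itself holds no lock;
// each record is fetched through a synchronized accessor. Names are illustrative.
class PerRecordLockSketch {
    private final List<String[]> records = new ArrayList<String[]>();  // guarded by 'this'

    public synchronized void add(String[] record) {
        records.add(record);
    }

    public synchronized int getRecordCount() {
        return records.size();
    }

    public synchronized String[] getRecord(int recNo) {       // record numbers are 1-based here
        return (recNo >= 1 && recNo <= records.size()) ? records.get(recNo - 1) : null;
    }

    // Unsynchronized: each getRecord() call returns a consistent record, but the
    // overall result is a per-record snapshot, not an atomic view of the database.
    public List<String[]> criteriaFind(String criteria) {
        String[] wanted = criteria.split(",", -1);             // parsing needs no lock
        List<String[]> matches = new ArrayList<String[]>();
        int max = getRecordCount();                            // read once, up front; may go stale
        for (int recNo = 1; recNo <= max; recNo++) {
            String[] record = getRecord(recNo);
            if (record != null && matches(record, wanted)) {
                matches.add(record);
            }
        }
        return matches;
    }

    private boolean matches(String[] record, String[] wanted) {
        for (int i = 0; i < wanted.length && i < record.length; i++) {
            if (wanted[i].length() > 0 && !wanted[i].equals(record[i])) return false;
        }
        return true;
    }
}

Each getRecord() call returns one consistent record, but the result as a whole is only a per-record snapshot, which is exactly the atomicity trade-off Bernhard describes.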
 
Peter den Haan
author
Posts: 3252

Originally posted by Arjan Broer:
I don't see how I could use the find method since I don't have the value for the key field.

Actually I'm not suggesting you do; just that you draw your inspiration from it. They're quite similar.
- Peter
 
Peter den Haan
author
Posts: 3252

Originally posted by Bernhard Woditschka:
I parse the criteria, then loop through all record ids (1 to max), get each record through the synchronized getRecord(idx), apply my criteria and add it to the response (if not excluded) [...] What do you think?

That it's very tempting, as it improves the level of concurrency supported by the Data class. It is a questionable approach though, for the following reasons.
First, its concurrent behaviour will be different on different machines or JVMs. The issue is quite subtle. It revolves around what happens to the recordCount variable when another thread adds a record. You are probably accessing this variable outside a synchronized block; due to the way the Java language specification defines thread semantics, you may or may not be seeing the recordCount increment until after the next synchronized method call. If you'd just reached the end of the loop, this may make the difference between seeing the newly added record or not.
You get into more serious problems if a later enhancement made it possible for recordCount to decrease: then you immediately face the possibility of your criteriaFind throwing exceptions. If you choose to stick with this implementation, this issue should really be documented in the code.
Second, if criteriaFind is part of the Data class, it also introduces an inconsistency of approach. Arguably, find() and criteriaFind() ought to work in the same way, since they're doing much the same thing.
It's not a black-and-white right-or-wrong issue. As I said, your approach will work for the moment and have better concurrency than an approach which executes the entire search within a synchronized method. But it is no simpler than using seek() and readRecord() and I have my reservations as indicated above. Your call.
- Peter
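
The visibility point is easy to miss, so here is a minimal sketch of it in isolation; the class and its method names are invented for illustration and are not part of the assignment's API.

// Sketch of the visibility issue Peter points out. If recordCount is read
// directly, outside any synchronized block, the Java memory model gives no
// guarantee that the reading thread sees another thread's update; reading it
// through a synchronized accessor (or declaring the field volatile) does.
class RecordCountVisibilitySketch {
    private int recordCount;                       // guarded by 'this'

    public synchronized void recordAdded() {
        recordCount++;                             // writer publishes the new count under the lock
    }

    public synchronized int getRecordCount() {
        return recordCount;                        // reader takes the same lock, so it is
    }                                              // guaranteed to see the latest value

    public int unsafeCount() {
        return recordCount;                        // no lock: may return a stale value
    }
}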
 
Bernhard Woditschka
Ranch Hand
Posts: 89
I see your point. I assumed the record count can only increase, but this 'contract' may change at any time, which could cause my code to fail. Hmm.
It just feels wrong to lock the whole db (which is what actually happens when synchronizing) for read access.
I did a test on my record locking mechanism with concurrent applications, and it was amazing to see how even a single lock held too long slows down all of the other apps (not to mention a table lock).
I guess it boils down to finding some solution with a minimum of downsides here.
Bern
 
Matt Ghiold
Ranch Hand
Posts: 213
I do not know that I agree. The flat file is the poorest solution I could think of for this application, but because we are stuck with it we use it and take it for what it's worth. Most of the issues mentioned in this thread are solved by databases, which are much more efficient than a flat file; with the number of good free databases out there, this would be a no-brainer and a flat file would never see the light of day inside a real Java shop.
I do not think Sun intended for us to rewrite a flat-file database; instead we needed to take what we were given and do the best we could with what we had, which was not much.
Just my $0.02
 
Ranch Hand
Posts: 2545
Ideally, you should lock all the records while doing a search, but I didn't do it. And I passed anyway.
 
Mark Spritzler
ranger
Posts: 17347

Ideally, you should lock all the records while doing a search, but I didn't do it. And I passed anyway.


Huh? Boy am I really glad that Oracle does not do this, otherwise users would be waiting forever to see their data.
When reading data you should not lock records. And doing a search is reading data, not updating.
Mark
 
Arjan Broer
Greenhorn
Posts: 20
Mark,
I do not agree with you. When reading data you do not want anyone changing the data while you read. Oracle does lock records on a read. But Oracle has share locks, write locks, exclusive share locks, exclusive write locks and a whole bunch of intent locks.
Though I do agree with you that this could be slightly out of scope for this assignment.
Regards
 
Peter den Haan
author
Posts: 3252

Originally posted by Arjan Broer:
Oracle does lock records on a read. But Oracle has share locks, write locks, exclusive share locks, exclusive write locks and a whole bunch of intent locks.

While Oracle does have all those locks, it does not normally lock records for a plain, run-of-the-mill read! The Oracle Concepts document from the Oracle 8i documentation talks a bit about read consistency:
  • Guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency)
  • Ensures that readers of database data do not wait for writers or other readers of the same data
  • Ensures that writers of database data do not wait for readers of the same data
  • Ensures that writers only wait for other writers if they attempt to update identical rows in concurrent transactions
(My emphasis). In other words, Oracle implements a versioning/snapshot mechanism rather than locks to achieve basic statement-level read consistency. Writes, of course, do acquire a shared update lock that can be shared with readers but not with other writers.

Originally posted by Arjan Broer:
Though I do agree with you that this could be slightly out of scope for this assignment.

Why do you say that only now! I've just gotten locks to work on my FBN relational database with the clustering feature, and was about to upload the two DVDs with my assignment to Sun...
- Peter
[ January 15, 2003: Message edited by: Peter den Haan ]
     
Mark Spritzler
ranger
Posts: 17347

Thanks Peter.
Mark