Hi Sean, I think option 2 performs better. If you choose option 1, then every client search would load the whole db file into memory, search it there, and then release the data it just loaded. That is more expensive than keeping the whole db file in memory, as your option 2 does. Also, one client can only lock one record at a time, so "lock all of the matching records for a request" will not happen. Hope this helps!
It sounds to me like you are trying to decide whether or not to implement a cache for your data. I personally did not, choosing instead to use RandomAccessFile. The performance hit for I/O is negligible, and you don't have to worry about persisting data.
You can really do it either way and you will be fine. Search this forum for "cache" and you will find plenty of threads discussing the pros and cons.
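For what it's worth, the no-cache approach is just a seek and a read per record. Here's a minimal sketch, assuming a fixed-length record layout; the class name, record size, and header size are all illustrative, not from any particular assignment spec:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of reading a record straight from disk with no in-memory cache.
// Assumes fixed-length records following a fixed-length file header.
public class RecordReader {
    static final int HEADER_LENGTH = 0;   // assumption: no header
    static final int RECORD_LENGTH = 32;  // assumption: 32-byte records

    // Seek to the record's offset and read it directly from the file.
    static byte[] readRecord(RandomAccessFile file, int recNo) throws IOException {
        byte[] data = new byte[RECORD_LENGTH];
        file.seek(HEADER_LENGTH + (long) recNo * RECORD_LENGTH);
        file.readFully(data);
        return data;
    }
}
```

Since the OS already buffers file pages, this tends to be fast enough in practice, which is why the I/O hit is usually negligible.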
“Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.” - Rich Cook
Joined: May 24, 2004
If I go for option 1, is it OK to lock all of the matching records for a request?
Only if you are implementing separate readLock/writeLock methods (which is not hard to do, but probably outside the scope of this assignment). Otherwise you will lose concurrent access to the records.
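To illustrate what separate readLock/writeLock methods might look like: a sketch using `ReentrantReadWriteLock`, one lock per record, so many clients can read a record concurrently while writers get exclusive access. The class and method names are made up for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Per-record read/write locking sketch: readers share a record's lock,
// a writer holds it exclusively. Names are illustrative only.
public class RecordLocks {
    private final Map<Integer, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(int recNo) {
        return locks.computeIfAbsent(recNo, n -> new ReentrantReadWriteLock());
    }

    public void readLock(int recNo)    { lockFor(recNo).readLock().lock(); }
    public void readUnlock(int recNo)  { lockFor(recNo).readLock().unlock(); }
    public void writeLock(int recNo)   { lockFor(recNo).writeLock().lock(); }
    public void writeUnlock(int recNo) { lockFor(recNo).writeLock().unlock(); }

    // Non-blocking attempt at the exclusive lock.
    public boolean tryWriteLock(int recNo) {
        return lockFor(recNo).writeLock().tryLock();
    }
}
```

With only a single exclusive lock per record (the usual assignment scope), locking every matching record for a search would indeed block all other clients from those records, which is the concurrency loss mentioned above.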
sean mc cusker
Joined: Mar 15, 2005
OK, thanks all. So I will assume that the file does not grow enormously large. On the server I will load the file into memory (using RandomAccessFile?), persisting any changes to the file immediately (I believe I can also assume only one client will be accessing this at any one time). Also, I think it would be better to have multiple instances of the "database" on the server (i.e. one per client) into which the file gets loaded, and to synchronize these, instead of having just one instance shared by all. Am I on the right track here, guys, or do I need to give it a serious re-think?
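The load-then-write-through plan described above could be sketched roughly as follows. This is a minimal single-shared-instance version for comparison (per-client instances would each hold their own copy of the cache, which makes keeping them consistent harder); all names, the record length, and the headerless layout are assumptions:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Write-through cache sketch: load all records into memory once, then
// persist every update to the file immediately, under synchronization.
public class Database {
    private static final int RECORD_LENGTH = 32;  // assumption
    private final List<byte[]> cache = new ArrayList<>();
    private final RandomAccessFile file;

    public Database(RandomAccessFile file) throws IOException {
        this.file = file;
        file.seek(0);
        long count = file.length() / RECORD_LENGTH;
        for (long i = 0; i < count; i++) {
            byte[] rec = new byte[RECORD_LENGTH];
            file.readFully(rec);
            cache.add(rec);
        }
    }

    // Update the cached copy, then flush the change to disk before returning.
    public synchronized void update(int recNo, byte[] data) throws IOException {
        cache.set(recNo, data.clone());
        file.seek((long) recNo * RECORD_LENGTH);
        file.write(data, 0, RECORD_LENGTH);
    }

    public synchronized byte[] read(int recNo) {
        return cache.get(recNo).clone();  // defensive copy
    }
}
```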