
reading .db file records

 
sean mc cusker
Greenhorn
Posts: 4
Hi all,
I am doing the B&S assignment and I have a question about reading the records from the db file.
I have identified two ways to do this but am not sure which to choose:

1. Each request queries the file, and each match is sent to my database and then displayed on my GUI.

2. Load the whole file into the database up front and then update the file as changes are made.

At the moment I'm more inclined to go for option 1, as the file could possibly grow hugely and loading the whole thing could be very expensive; plus data integrity would become an issue.

If I go for option 1, is it OK to lock all of the matching records for a request? If so, when should they be unlocked?

Is there another way that would be more suitable?

Thanks in advance
 
joe lin
Greenhorn
Posts: 28
Hi Sean,
I think option 2 performs better. If you choose option 1, every client search request will load the whole db file into memory, search it there, and then release the data it just loaded. That is more expensive than keeping the whole db file in memory, as your option 2 does.
Also, one client can only lock one record at a time, so "lock all of the matching records for a request" should not happen.
Hope this helps!
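
A rough sketch of the "load everything up front" approach described above, assuming records are keyed by record number and read into a map once at server start-up. The field layout (name assumed to be field 0) and the file I/O helpers named in the comments are placeholders for your own code:

import java.util.HashMap;
import java.util.Map;

// Sketch only: all records are read into a map when the server starts,
// searches run against the map, and every write goes to both the map and
// the db file so the two stay in sync. readAllRecordsFromFile() and
// writeRecordToFile() are hypothetical helpers for your own file I/O.
public class RecordCache {

    private final Map<Integer, String[]> records = new HashMap<Integer, String[]>();

    public RecordCache(Map<Integer, String[]> loadedFromFile) {
        records.putAll(loadedFromFile);   // e.g. readAllRecordsFromFile(dbPath)
    }

    /** Searches the in-memory copy only; the file is never touched on reads. */
    public synchronized Map<Integer, String[]> findByName(String name) {
        Map<Integer, String[]> matches = new HashMap<Integer, String[]>();
        for (Map.Entry<Integer, String[]> e : records.entrySet()) {
            if (e.getValue()[0].equalsIgnoreCase(name)) {   // assumes field 0 is the name
                matches.put(e.getKey(), e.getValue());
            }
        }
        return matches;
    }

    /** Updates the cache and would also persist to the file immediately. */
    public synchronized void update(int recNo, String[] data) {
        records.put(recNo, data);
        // writeRecordToFile(recNo, data);  // keep the db file in step with the cache
    }
}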
 
Paul Bourdeaux
Ranch Hand
Posts: 783
Hi Sean,

It sounds to me like you are trying to decide whether or not to implement a cache for your data. I personally did not, choosing instead to use RandomAccessFile. The performance hit for I/O is negligible, and you don't have to worry about persisting data.

You can really do it either way and you will be fine. Search this forum for "cache" and you will find plenty of threads discussing the pros and cons.
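
To make that concrete, here is a minimal sketch of reading a fixed-length record straight off the db file with RandomAccessFile instead of caching it. The header length, field lengths and record length below are assumptions; substitute the values from your own assignment's data file description:

import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: reads one fixed-length record from the db file on demand.
public class DbFileReader {

    private static final int HEADER_LENGTH = 70;                     // assumed header size
    private static final int[] FIELD_LENGTHS = {32, 64, 64, 6, 8, 8}; // assumed schema
    private static final int RECORD_LENGTH = 1 + 32 + 64 + 64 + 6 + 8 + 8; // flag + fields

    private final RandomAccessFile file;

    public DbFileReader(String path) throws IOException {
        this.file = new RandomAccessFile(path, "r");
    }

    /** Reads record number recNo (0-based) and splits it into its fields. */
    public String[] readRecord(int recNo) throws IOException {
        file.seek(HEADER_LENGTH + (long) recNo * RECORD_LENGTH);
        byte[] raw = new byte[RECORD_LENGTH];
        file.readFully(raw);

        String[] fields = new String[FIELD_LENGTHS.length];
        int offset = 1;                                   // skip the deleted-record flag byte
        for (int i = 0; i < FIELD_LENGTHS.length; i++) {
            fields[i] = new String(raw, offset, FIELD_LENGTHS[i], "US-ASCII").trim();
            offset += FIELD_LENGTHS[i];
        }
        return fields;
    }
}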
 
Paul Bourdeaux
Ranch Hand
Posts: 783
If I go for option 1, is it OK to lock all of the matching records for a request?
Only if you are implementing separate readLock/writeLock methods (which is not hard to do, but probably outside the scope of this assignment). Otherwise you will lose concurrent access to the records.
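
For what it's worth, a bare-bones sketch of what separate per-record read/write locks might look like, using ReentrantReadWriteLock from java.util.concurrent. This is just an illustration, not the assignment's required lock/unlock interface:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: many clients may hold the read lock on a record at once,
// but a client updating the record needs exclusive access via the write lock.
public class RecordLockManager {

    private final ConcurrentMap<Integer, ReadWriteLock> locks =
            new ConcurrentHashMap<Integer, ReadWriteLock>();

    private ReadWriteLock lockFor(int recNo) {
        ReadWriteLock lock = locks.get(recNo);
        if (lock == null) {
            ReadWriteLock candidate = new ReentrantReadWriteLock();
            ReadWriteLock existing = locks.putIfAbsent(recNo, candidate);
            lock = (existing != null) ? existing : candidate;
        }
        return lock;
    }

    public void readLock(int recNo)    { lockFor(recNo).readLock().lock(); }
    public void readUnlock(int recNo)  { lockFor(recNo).readLock().unlock(); }
    public void writeLock(int recNo)   { lockFor(recNo).writeLock().lock(); }
    public void writeUnlock(int recNo) { lockFor(recNo).writeLock().unlock(); }
}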
 
sean mc cusker
Greenhorn
Posts: 4
OK, thanks all. So I will assume that the file does not grow humongously large.
On the server I will load the file into memory (using RandomAccessFile?), persisting any changes to the file immediately (I believe I can also assume only one client will be accessing the file at any one time).
Also, I think it would be better to have multiple instances of the "database" on the server, i.e. one per client, into which the file gets loaded, and to synchronize these instead of having just one shared by all.
Am I on the right track here, guys, or do I need to give it a serious re-think?
 