JavaRanch » Java Forums » Certification » Developer Certification (SCJD/OCMJD)
reading .db files records

sean mc cusker
Greenhorn

Joined: Mar 15, 2005
Posts: 4
Hi all,
I am doing the B&S one and I have a question about reading the records from the db file.
I have identified two ways to do this but am not sure which to choose:

1. Each request queries the file, and each match is sent to my database and then displayed on my GUI.

2. Load the whole file into the database and then update the file as changes are made.

At the moment I'm more inclined to go for option 1, as the file could grow hugely and loading the whole thing could be very expensive; data integrity would also become an issue.

If I go for option 1, is it OK to lock all of the matching records for a request? If so, when should they be unlocked?

Is there another way that would be more suitable?

Thanks in advance
joe lin
Greenhorn

Joined: Dec 07, 2004
Posts: 28
Hi Sean,
I think option 2 performs better. If you choose option 1, every client search request will load the whole db file into memory, search it, and then release the data it just loaded. That is more expensive than keeping the whole db file in memory, as option 2 does.
Also, one client can only lock one record at a time, so "lock all of the matching records for a request" should not happen.
Hope this helps!
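In code, a minimal sketch of that in-memory cache might look like the class below. The record layout here is an assumption for illustration only (fixed 32-byte records, first byte is a deleted flag); the real assignment file declares its own header, schema section and field lengths, which you would need to parse first.

```java
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

// Hypothetical simplified layout: every record is RECORD_LENGTH bytes;
// the first byte is a deleted flag (0 = live), the rest is field data.
public class DbCache {
    static final int RECORD_LENGTH = 32;

    // Load every live record into memory, keyed by record number.
    public static Map<Integer, byte[]> load(String path) throws Exception {
        Map<Integer, byte[]> cache = new HashMap<>();
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            int recNo = 0;
            byte[] buf = new byte[RECORD_LENGTH];
            while (raf.getFilePointer() < raf.length()) {
                raf.readFully(buf);
                if (buf[0] == 0) {              // live record
                    cache.put(recNo, buf.clone());
                }
                recNo++;
            }
        }
        return cache;
    }
}
```

Once loaded, searches run against the map; writes would go to both the map and the file so the cache never drifts from disk.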


Looking for better solution...
SCJP1.4
Paul Bourdeaux
Ranch Hand

Joined: May 24, 2004
Posts: 783
Hi sean,

It sounds to me like you are trying to decide whether or not to implement a cache for your data. I personally did not, choosing instead to use RandomAccessFile. The performance hit for I/O is negligible, and you don't have to worry about persisting data.

You can really do it either way and you will be fine. Search this forum for "cache" and you will find plenty of threads discussing the pros and cons.
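For the no-cache approach, reading a record is just a seek plus a read. A sketch, again assuming a made-up fixed record length and no file header (the real file has both a header and a schema you must account for in the offset):

```java
import java.io.RandomAccessFile;

// Reading one record straight from disk with RandomAccessFile, no cache.
// RECORD_LENGTH and the zero header offset are assumptions; a real
// assignment file starts with a header and schema section.
public class DirectReader {
    static final int RECORD_LENGTH = 32;
    static final int HEADER_OFFSET = 0;   // assumed: no header

    public static byte[] read(String path, int recNo) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(HEADER_OFFSET + (long) recNo * RECORD_LENGTH);
            byte[] buf = new byte[RECORD_LENGTH];
            raf.readFully(buf);
            return buf;
        }
    }
}
```

Because records are fixed-length, the seek is O(1) regardless of file size, which is why the I/O hit stays small.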


“Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.” - Rich Cook
Paul Bourdeaux
Ranch Hand

Joined: May 24, 2004
Posts: 783
If I go for option 1 is it ok to lock all of the matching records for a request?
Only if you are implementing separate readLock/writeLock methods (which is not hard to do, but probably outside the scope of this assignment). Otherwise you will lose concurrent access to the records.
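The usual single-lock scheme — one record locked per client at a time, with waiters blocking until the holder releases — can be sketched like this. The class and method names are illustrative, not the assignment's required interface:

```java
import java.util.HashSet;
import java.util.Set;

// Per-record locking: lock() blocks while another client holds the
// record; unlock() releases it and wakes any waiting clients.
public class LockManager {
    private final Set<Integer> locked = new HashSet<>();

    public synchronized void lock(int recNo) throws InterruptedException {
        while (locked.contains(recNo)) {
            wait();                 // another client holds this record
        }
        locked.add(recNo);
    }

    public synchronized void unlock(int recNo) {
        locked.remove(recNo);
        notifyAll();                // wake waiters for any record
    }

    public synchronized boolean isLocked(int recNo) {
        return locked.contains(recNo);
    }
}
```

Note the `while` (not `if`) around `wait()`: a woken thread must re-check the condition, since `notifyAll()` wakes waiters for every record.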
sean mc cusker
Greenhorn

Joined: Mar 15, 2005
Posts: 4
OK, thanks all. So I will assume that the file does not grow humongously large.
On the server I will load the file into memory (using RandomAccessFile?), persisting any changes to the file immediately (I believe I can also assume only one client will be accessing this at any one time).
Also, I think it would be better to have multiple instances of the "database" on the server, i.e. one per client, into which the file gets loaded, and to synchronize these instead of having just one shared by all.
Am I on the right track here, guys, or do I need to give it a serious re-think?
 