Bodgitt and Scarper 2.2.1: Caching

Adam Dray
Greenhorn

Joined: Sep 14, 2009
Posts: 7
To make my life easier, I've decided to cache the entire database in memory. I've read a bit about others' success with similar approaches.

I'm trying to keep the file-level class (ContractorFileAccess) separate from the memory-caching class (ContractorFileCacheAccess). ContractorFileCacheAccess extends ContractorFileAccess with cached versions of its methods.

It seems I can get away with not creating sequential record numbers at all at the db level. I can just use the (long) offset in the file for each record as the recNo. The memory cache doesn't care at all. It can have its own numbering scheme. Whatever.

When I write the database to disk from memory at shutdown, is there any compelling reason to write records back into the same slots they occupied before? I could just rewrite the file from scratch, "compressing" the removed records as I go.

Can anyone think of a reason not to do that?
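
Roughly, the layering described above might look something like this (the two class names come from the post, but the method names and the String[] record representation are only illustrative assumptions):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ContractorFileAccess {
    // talks to the data file directly; recNo is the record's offset in the file
    public String[] readRecord(long recNo) {
        // ... RandomAccessFile seek + read would go here ...
        return new String[0];
    }

    public void writeRecord(long recNo, String[] data) {
        // ... RandomAccessFile seek + write would go here ...
    }
}

class ContractorFileCacheAccess extends ContractorFileAccess {
    // the whole database kept in memory after start-up
    private final Map<Long, String[]> cache = new ConcurrentHashMap<Long, String[]>();

    @Override
    public String[] readRecord(long recNo) {
        return cache.get(recNo);   // no file I/O per read
    }

    @Override
    public void writeRecord(long recNo, String[] data) {
        cache.put(recNo, data);    // flushed back to disk at shutdown
    }
}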


SCJP5, SCJD6
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5126

Hi Adam,

Welcome to JavaRanch!

I also used the cache approach (and it made my life a whole lot easier). I didn't compress the file like you are planning to do. The only reason I can think of not to go that way is a possible future enhancement. Because you use a cache, you could lose all data if the server crashes, for example. To lose as little data as possible, you would write to the data file every hour, for example (instead of only at server shutdown). And if you use a compressing algorithm there, you will end up with a mess.

Another thing, of course: it's not required, so don't do it.
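
As a minimal sketch of the hourly-write idea mentioned above (the class name, the Runnable parameter and the one-hour interval are all assumptions, not something the assignment asks for):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PeriodicFlush {
    // flushCacheToFile is assumed to write every cached record back to its
    // original slot in the data file, without any compacting
    static void start(final Runnable flushCacheToFile) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(flushCacheToFile, 1, 1, TimeUnit.HOURS);
    }
}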

Kind regards,
Roel


SCJA, SCJP (1.4 | 5.0 | 6.0), SCJD
http://www.javaroe.be/
Adam Dray
Greenhorn

Joined: Sep 14, 2009
Posts: 7
Thanks for the reply, Roel!

I don't mean literal compression. I mean database compression, where you remove logically deleted records from the physical file. I just mean writing out the good records and not the deleted records.

The main issue is that I can't guarantee the order of the records without a little work, so the 10th record might end up getting written where the 1st record used to be. Since there is no primary key stored on the record (the recNos are purely an internal thing), it doesn't seem to matter, but the recNo could differ from one run to the next.
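
A small sketch of the kind of compaction being described, assuming a fixed record length and null values standing for logically deleted records; note that every surviving record can come out with a different recNo than it had before:

import java.util.Map;
import java.util.TreeMap;

class Compactor {
    // Rewrites the cache with new, consecutive offsets, dropping records that
    // are logically deleted (represented here as null values).
    static Map<Long, String[]> compact(Map<Long, String[]> cache, long recordLength) {
        Map<Long, String[]> compacted = new TreeMap<Long, String[]>();
        long offset = 0;
        for (String[] record : new TreeMap<Long, String[]>(cache).values()) {
            if (record == null) {
                continue;                    // skip deleted records entirely
            }
            compacted.put(offset, record);   // record gets a brand-new recNo
            offset += recordLength;
        }
        return compacted;
    }
}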
Roberto Perillo
Bartender

Joined: Dec 28, 2007
Posts: 2258

Howdy, Adam!

Champion, what is the data structure you are using for caching? For instance, I used Map&lt;Integer, Room&gt;, where Integer is the record number and Room is a class that represents a database record. You can treat the position of the record in the database file as the record's primary key. Also, I kept deleted records in the Map with a null value. So, when there's an entry with a null value, it means the record is deleted.
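
A minimal sketch of that null-means-deleted convention (Room and RecordNotFoundException here are simple stand-ins for the assignment's own classes):

import java.util.HashMap;
import java.util.Map;

class Room { /* fields of one database record */ }
class RecordNotFoundException extends Exception { }

class RoomCache {
    private final Map<Integer, Room> cache = new HashMap<Integer, Room>();

    public Room read(int recNo) throws RecordNotFoundException {
        Room room = cache.get(recNo);
        if (room == null) {
            // either the key was never there or the record was deleted
            throw new RecordNotFoundException();
        }
        return room;
    }

    public void delete(int recNo) throws RecordNotFoundException {
        read(recNo);              // fails if missing or already deleted
        cache.put(recNo, null);   // keep the key, mark the slot as deleted
    }
}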


Cheers, Bob "John Lennon" Perillo
SCJP, SCWCD, SCJD, SCBCD - Daileon: A Tool for Enabling Domain Annotations
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5126

Hi Adam,

I know you meant that, but it makes no difference. If you have 3 records in the JTable, the 2nd one is deleted, and then you write them back to the file (as the future enhancement of saving to the database file every hour, so you lose less data if the server crashes), the offset of the 3rd record will change and the user will get an error that the record doesn't exist anymore (because it has become the 2nd record).

Kind regards,
Roel
Adam Dray
Greenhorn

Joined: Sep 14, 2009
Posts: 7
Roberto,

My cache data structure is a Map&lt;Long, Contractor&gt; (backed by a HashMap). The Long is the record number and the Contractor is the full record value object. I guess you're right that I can just keep my record offset as the record number up through the caching layer and then use it to write back into the same position. It is so obvious now that you've said it. ;)
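
A rough sketch of writing each cached record back into the slot it came from, assuming the recNo is the record's file offset (the Contractor stub and its toRecordBytes() method are illustrative placeholders, and deleted-record handling is omitted):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Map;

class Contractor {
    byte[] toRecordBytes() {
        return new byte[0];   // placeholder; real serialization follows the db schema
    }
}

class CacheWriter {
    // Writes each cached record back to the exact offset it was read from,
    // so recNos stay valid across runs and no compacting takes place.
    static void writeBack(Map<Long, Contractor> cache, String dbPath) throws IOException {
        RandomAccessFile file = new RandomAccessFile(dbPath, "rw");
        try {
            for (Map.Entry<Long, Contractor> entry : cache.entrySet()) {
                file.seek(entry.getKey());                     // recNo == file offset
                file.write(entry.getValue().toRecordBytes());
            }
        } finally {
            file.close();
        }
    }
}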



Roel,

That makes perfect sense. I get you now.


Thanks to both of you. I'm sure I'll have more questions later on!