Could anyone suggest the best approach for the server? Should data be held in memory (e.g. as a Vector) until the server is shut down and then written to the db file, or should the db file be updated immediately whenever the delete/create/update functions are called on the server?
I would prefer holding the server data in memory until the server is shut down. I was going to justify this design choice with the following points in my choices.txt:

- Memory is cheap (a couple of thousand entries should not be a problem)
- Simpler design (more maintainable)
- Faster access times

Do you think this would be an acceptable solution? Has anyone heard whether keeping all the data in memory until shutdown is a no-no?
Welcome to JavaRanch.
How about doing both - keeping it in memory for fast access, and writing it to disk for safety? Imagine someone tripping over a wire while nothing has been written to disk yet (yes, it does happen, even in hosting centers).
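The "do both" idea could be sketched as a write-through store: reads are served from an in-memory map, while every update is flushed to disk before returning, so a crash loses nothing. This is only a minimal sketch; the class name, the comma-separated record layout, and the rewrite-the-whole-file persistence are illustrative assumptions, not the assignment's actual file format.

```java
import java.io.*;
import java.util.*;

// Hypothetical write-through store: memory for speed, disk for safety.
public class WriteThroughStore {
    private final Map<Integer, String> cache = new HashMap<>();
    private final File dbFile;

    public WriteThroughStore(File dbFile) {
        this.dbFile = dbFile;
    }

    // Reads are served entirely from memory.
    public synchronized String read(int recNo) {
        return cache.get(recNo);
    }

    // Updates hit the cache first, then are persisted before returning,
    // so the file on disk is never behind the in-memory state.
    public synchronized void update(int recNo, String data) throws IOException {
        cache.put(recNo, data);
        persist();
    }

    // Naive persistence: rewrite the whole file on every change. A real
    // implementation would seek to the record's fixed offset instead.
    private void persist() throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(dbFile))) {
            for (Map.Entry<Integer, String> e : cache.entrySet()) {
                out.println(e.getKey() + "," + e.getValue());
            }
        }
    }
}
```

The point of the sketch is the ordering: the disk write happens inside `update`, not at shutdown, so tripping over that wire costs nothing.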
In order to prevent data loss I keep everything on disk. I don't even use a disk cache at the moment (though I could implement one) because it's IMO beyond the scope of the assignment (a small-scale system).
When and if performance becomes an issue (which isn't listed in the requirements), a cache could easily be added.
I keep in memory only a Map of record indexes (valid and active records).
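That index-only approach might look something like the sketch below: the record data stays on disk, and memory holds just a map from record number to the record's byte offset in the file. Deleted records are simply absent from the map. The class and method names here are illustrative assumptions, not anyone's actual submission.

```java
import java.util.*;

// Hypothetical index map: record number -> byte offset of a valid record.
public class RecordIndex {
    private final Map<Integer, Long> offsets = new HashMap<>();

    public void put(int recNo, long offset) {
        offsets.put(recNo, offset);
    }

    // Deleting a record just drops its index entry.
    public void delete(int recNo) {
        offsets.remove(recNo);
    }

    // A record is valid and active iff it is present in the index.
    public boolean isValid(int recNo) {
        return offsets.containsKey(recNo);
    }

    // Where to seek in the db file before reading the record's bytes.
    public long offsetOf(int recNo) {
        Long off = offsets.get(recNo);
        if (off == null) {
            throw new NoSuchElementException("record " + recNo);
        }
        return off;
    }
}
```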
Jeroen, you must do something similar? What do you mean by:
I keep everything on disk.
Jeroen T Wenting
The only things I keep in memory are the locks (of course) and a list of records that were deleted during the current session (just as a slight performance boost when inserting a new record, to avoid a complete table scan for empty space).
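The deleted-records trick could be sketched as a small free list: remember the offsets of records deleted this session, and on insert reuse one of those slots instead of scanning the file for empty space, falling back to appending at end-of-file. This is a sketch under the assumption of fixed-length records; the class name and fields are illustrative, not from the assignment.

```java
import java.util.*;

// Hypothetical free list of record slots, assuming fixed-length records.
public class FreeSlotList {
    private final Deque<Long> freeSlots = new ArrayDeque<>();
    private long endOfFile;          // offset where an appended record would go
    private final long recordLength; // fixed size of one record on disk

    public FreeSlotList(long endOfFile, long recordLength) {
        this.endOfFile = endOfFile;
        this.recordLength = recordLength;
    }

    // Called when a record is deleted: its slot becomes reusable.
    public synchronized void recordDeleted(long offset) {
        freeSlots.push(offset);
    }

    // Offset for a new record: a freed slot if one exists, else append.
    // No table scan needed in either case.
    public synchronized long allocate() {
        if (!freeSlots.isEmpty()) {
            return freeSlots.pop();
        }
        long off = endOfFile;
        endOfFile += recordLength;
        return off;
    }
}
```

The win is exactly what the post describes: inserting never requires walking the whole file looking for a deleted slot.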