In my current code, every read, update, create, find, and delete operation uses RandomAccessFile to access the database flat file. Part of the reason I went with this approach is that I knew the database file would always be up to date upon server shutdown, since all update/delete/create operations immediately hit the flat file.
However, I do not like having to handle IOExceptions on every read/write operation, so I was thinking of reading the database into memory once and having all read/write operations interact with the in-memory copy. Then, when the server shuts down (the VM terminates), I would have it persist the data back to the database file.
Is the first solution OK (and how should I handle the IOExceptions)? I like the second solution better, but I am worried about problems occurring while I write the data to disk in the shutdown hook thread.
My project states that there should only be one server or one standalone client accessing the datafile so this should work in theory.
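To make the second option concrete, here is a minimal sketch of an in-memory store flushed from a JVM shutdown hook. All class and method names are invented for illustration, and the one-record-per-line layout is an assumption, not the assignment's actual format; note the comment about the cases a shutdown hook cannot cover.

```java
import java.io.*;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: load the whole database into memory at startup,
// serve all reads/writes from the map, and flush back to disk from a
// shutdown hook when the VM terminates.
public class InMemoryDb {
    private final Map<Integer, String> records = new ConcurrentHashMap<>();
    private final File file;

    public InMemoryDb(File file) throws IOException {
        this.file = file;
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            int recNo = 0;
            while ((line = in.readLine()) != null) {
                records.put(recNo++, line);   // assumed: one record per line
            }
        }
        // Flush the cache on normal VM termination (including Ctrl-C).
        Runtime.getRuntime().addShutdownHook(new Thread(this::persist));
    }

    public String read(int recNo)              { return records.get(recNo); }
    public void update(int recNo, String data) { records.put(recNo, data);  }

    // Caveat: this never runs if the VM crashes or is killed with kill -9,
    // in which case every in-memory update since startup is lost.
    void persist() {
        try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
            for (String rec : new TreeMap<>(records).values()) {
                out.println(rec);
            }
        } catch (IOException e) {
            e.printStackTrace();   // a failed final write also loses everything
        }
    }
}
```

The IOException handling is pushed into the constructor and the single persist() call, which is the appeal of this design; the comments mark exactly where its risk lives.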
I suppose the problem comes if the application crashes unexpectedly and you have potentially lost hundreds of user updates. Also the same is true if the shutdown write fails. If you are writing the data out constantly and there are I/O problems you will probably only lose a handful of updates but if you are writing the data out once and there are I/O problems then you lose everything.
I'm sure you can think of clever ways to code around this and make it safer but I'd worry about the complexity of such an application. The oft-repeated mantra on this forum is "keep it simple".
I chose just to read/write the file directly, and not to cache. First of all, I thought caching would make the application more complicated (and the assignment sort of tells you to keep it simple). Secondly, I figured that most reads would be followed by a write to the file, as people would usually only do a search if they also plan to book. Since caching only makes sense when reads are frequent and writes relatively scarce, I decided it is not appropriate here.
Cless, I can see big trouble if you only "commit" all operations when the server shuts down. If it terminates normally, or via an Exception, you can manage this with a finally block or some other strategy; but if your server goes down with an Error, you cannot manage it, and you lose every operation made against the server.
There are other threads on how to handle exceptions not declared in the supplied interface. I think this is an important issue in the assignment, because you can learn good and bad practices for handling it.
I don't think caching records complicates the assignment too much. I first wrote all the classes that read and write records in the data file, and then I saw that a cache for reads alone could speed up my automated tests (they were a bit slow). You can omit it; as far as I can see, the most important thing is that whatever you decide, you document your design choice and argue why you made it. I hope this helps.
Thank you all for your responses. I have decided to keep my current design with constant read/writes directly to file. I do not think that I will implement a caching system as it will only complicate this project (the guidelines say that a simpler but less efficient solution is better than a complicated, efficient one).
I will take a look at topics on exception handling.
I find this topic very interesting. I am still working on my system for 5.0 (does anyone know the end date for 5.0 submissions?) and I implemented my server with a write-through cache. I read the entire file at server startup and organized the heavily searched data into TreeMaps/ArrayLists, trying to minimize data redundancy. This did complicate the server a bit, but it should make searching and obtaining certain lists (e.g. all unique locations) much faster. Writes update the cache, and the cache updates the file.
Of course this slows server startup, but hopefully that is a fairly rare event. I feel that, because of all the searching, there will be far more reads than writes, so this scheme does file I/O only when it has to. I did not cache the writes because of potential server crashes; I feel it is not worth the risk.
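For anyone curious what that write-through idea looks like, here is a rough sketch under assumptions of my own: one line per record, a whole-file rewrite on flush, and invented class/method names. The real thing would use the assignment's record format, but the control flow is the point: reads never touch the disk, and every write goes through to the file before returning.

```java
import java.io.*;
import java.util.*;

// Hypothetical write-through cache sketch: reads and searches are served
// from memory; writes update the cache and then immediately the file, so
// the file never lags behind the cache even if the VM dies later.
public class WriteThroughCache {
    private final List<String> cache = new ArrayList<>();
    private final File file;

    public WriteThroughCache(File file) throws IOException {
        this.file = file;
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = in.readLine()) != null) {
                cache.add(line);   // slow startup, but only once
            }
        }
    }

    // Reads and searches never do file I/O.
    public synchronized String read(int recNo) { return cache.get(recNo); }

    public synchronized List<Integer> find(String prefix) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < cache.size(); i++) {
            if (cache.get(i).startsWith(prefix)) hits.add(i);
        }
        return hits;
    }

    // Writes hit the cache, then the file, before returning to the caller.
    public synchronized void update(int recNo, String data) throws IOException {
        cache.set(recNo, data);
        flush();   // write-through: disk stays in sync with memory
    }

    private void flush() throws IOException {
        try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
            for (String rec : cache) out.println(rec);
        }
    }
}
```

Rewriting the whole file per update is the simplest possible flush; a real implementation would more likely seek to the record's offset with RandomAccessFile and overwrite in place.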
Hopefully I am not too off-base here, in that I am still working on my project, but I have basically finished the server end. I will most likely lose points here, but it was fun to get it to work, and hopefully my written justification will blunt the blow. Thanks for a great post topic.
I've lost count of the number of times we've had to kill -9 rogue JBoss processes at my current place of work. If we were caching data, that would be hundreds, if not thousands, of customer transactions lost into the void.
Persistent data storage (be it a flat text file, a DB, or a quantum entangled crystal lattice) is designed to store persistent data. Any performance loss should always be carefully weighed against the possibility of data corruption. I'd rather have slow performance than irate customers, personally.
Hey! I have implemented a cache of the deleted records only. It saves some reads from the data file when you verify whether a record exists, and it also helps to re-use deleted record slots. You need to synchronize on the cache object, of course.
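A deleted-records-only cache can stay very small. Here is a minimal sketch of that idea, with names and the slot-reuse policy (lowest free slot first) chosen by me for illustration; all access is synchronized on the set, as suggested above.

```java
import java.util.*;

// Hypothetical sketch: cache only the record numbers of deleted slots.
// Existence checks and slot reuse then need no file read at all; the
// actual record bytes still live only in the data file.
public class DeletedRecords {
    private final SortedSet<Integer> deleted = new TreeSet<>();

    public void markDeleted(int recNo) {
        synchronized (deleted) { deleted.add(recNo); }
    }

    // A record exists if it is in range and not flagged as deleted.
    public boolean exists(int recNo, int totalRecords) {
        synchronized (deleted) {
            return recNo >= 0 && recNo < totalRecords && !deleted.contains(recNo);
        }
    }

    // Reuse the lowest deleted slot for a new record, or -1 if none is free
    // (in which case the caller appends a fresh record to the file).
    public int reuseSlot() {
        synchronized (deleted) {
            if (deleted.isEmpty()) return -1;
            int recNo = deleted.first();
            deleted.remove(recNo);
            return recNo;
        }
    }
}
```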