I finished all my back-end server code a while ago and have been working on the GUI. I started running through some scenarios in my head and got a little confused. In my B&S exam, the database provides no unique fields, so I keep a HashMap that maps (int) record number to (long) position in the file. Each time a record is deleted or created, I call a method that jumps to the beginning of each record and inspects the first two bytes. If the record is deleted, I skip it (I actually add deleted positions to a queue and check that queue before creating a new record); if it's a valid record, I add its position to my map under the current index and increment my index counter.
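To make sure I'm describing it clearly, here's a minimal sketch of that scan. The class and method names, the record length, the header size, and the deleted-flag value are all hypothetical placeholders, not the actual B&S spec values:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class RecordIndex {
    // Assumed layout: 2-byte flag followed by fixed-length record data.
    private static final long DATA_OFFSET = 70;    // hypothetical header size
    private static final int RECORD_LENGTH = 160;  // hypothetical record length
    private static final int DELETED_FLAG = 0x8000;

    private final Map<Integer, Long> positions = new HashMap<>();
    private final Deque<Long> freeSlots = new ArrayDeque<>();

    // Walk the file, mapping consecutive record numbers to the positions
    // of live records and queueing deleted positions for reuse.
    public void rebuild(RandomAccessFile file) throws IOException {
        positions.clear();
        freeSlots.clear();
        int recNo = 0;
        for (long pos = DATA_OFFSET; pos + RECORD_LENGTH <= file.length(); pos += RECORD_LENGTH) {
            file.seek(pos);
            int flag = file.readUnsignedShort();
            if (flag == DELETED_FLAG) {
                freeSlots.add(pos);          // reuse before growing the file
            } else {
                positions.put(recNo++, pos); // record numbers are consecutive
            }
        }
    }

    public Long positionOf(int recNo) { return positions.get(recNo); }
    public Long takeFreeSlot() { return freeSlots.poll(); }
}
```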
So if I have the following values in my map: 0 → 70, 1 → 238, 2 → 459, 3 → 690
And I delete record 2, record 3 now becomes record 2 and my map looks like this: 0 → 70, 1 → 238, 2 → 690
But what happens in the following scenario:
Thread A locks record 3. Thread B locks record 2. Thread B deletes record 2. Thread A updates record 3. ... but record 3 actually becomes record 2 as soon as thread B deletes the original record 2. So what happens to thread A's update of record 3? My first thought is that I would have to lock access to the entire database, not just a single record. Or does this just mean I should synchronize all my code on my index map?
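Here's the race boiled down to a tiny single-threaded sketch (names hypothetical). The point is that even if every map mutation is synchronized, the recNo thread A captured before the delete now names a different record, so synchronization alone doesn't fix a renumbering scheme:

```java
import java.util.TreeMap;

public class ShiftDemo {
    private final TreeMap<Integer, Long> index = new TreeMap<>();

    public ShiftDemo() {
        index.put(0, 70L);
        index.put(1, 238L);
        index.put(2, 459L);
        index.put(3, 690L);
    }

    // Delete compacts the numbering: every record after recNo shifts down.
    public synchronized void delete(int recNo) {
        index.remove(recNo);
        TreeMap<Integer, Long> renumbered = new TreeMap<>();
        int i = 0;
        for (long pos : index.values()) {
            renumbered.put(i++, pos);
        }
        index.clear();
        index.putAll(renumbered);
    }

    public synchronized long positionOf(int recNo) { return index.get(recNo); }
    public synchronized boolean containsRecNo(int recNo) { return index.containsKey(recNo); }
}
```

Even with every method synchronized, a caller holding recNo 3 from before the delete is now pointing at a record number that either no longer exists or names a different row.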
Am I way off base on the way I'm handling "recNo"s ?
Hi Joshua: I made the same primary-key choice as you; I also used the index as a primary key. In my experience, when I deleted a record I just set its flag to deleted and kept the other data (name, location) in the database file. When a new record is created, I reuse that record number (index) and write the new data to the same position. Shifting the record numbers (indexes) causes exactly the kind of trouble you mention.
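Here's a minimal sketch of that idea, with hypothetical names and layout constants: deleting never renumbers anything, it just marks the slot reusable, and create() hands the lowest reusable record number back out before appending a new one. Because record numbers stay bound to the same file position forever, the race in the earlier scenario can't happen:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

public class StableIndex {
    private static final long DATA_OFFSET = 70;    // hypothetical header size
    private static final int RECORD_LENGTH = 160;  // hypothetical record length

    private final Map<Integer, Long> positions = new HashMap<>();
    private final TreeSet<Integer> free = new TreeSet<>();
    private int nextRecNo = 0;

    // Deleting never renumbers: the slot just becomes reusable.
    public synchronized void delete(int recNo) {
        free.add(recNo);
    }

    // Creating reuses the lowest deleted slot (same file position),
    // otherwise appends a brand-new record at the end of the file.
    public synchronized int create() {
        Integer reused = free.pollFirst();
        if (reused != null) {
            return reused;  // position mapping was kept at delete time
        }
        int recNo = nextRecNo++;
        positions.put(recNo, DATA_OFFSET + (long) recNo * RECORD_LENGTH);
        return recNo;
    }

    public synchronized long positionOf(int recNo) { return positions.get(recNo); }

    public synchronized boolean isLive(int recNo) {
        return positions.containsKey(recNo) && !free.contains(recNo);
    }
}
```

With this scheme, deleting record 2 leaves record 3's number and position untouched, so a thread holding a lock on record 3 updates exactly the bytes it intended to.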