I am working on B&S. I use a HashMap to store the name+location ---> recNo mapping. But in my deleteRecord(recNo, cookie) method, I have to delete the entry in the Map whose value is the recNo. Can anyone suggest a good way to delete by value? In this case, the key is unknown.
I don't want to iterate the whole HashMap just to find the recNo and delete the entry. That is too slow and not scalable.
Another option is that in my deleteRecord method, I call readRecord(recNo) to get the record and then delete the entry in the map by key. But that is extra overhead.

Any suggestions? I posted this question here because I thought someone else might have run into the same issue when working on B&S.
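To make concrete what I mean by "delete by value": the best I can come up with is Collection.removeIf on the map's value view, which still walks the map internally, and that is exactly what I want to avoid. A minimal sketch (the key strings are just my own illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class RemoveByValue {

    // Removes every entry whose value equals recNo; returns true if anything was removed.
    // Note: removeIf still iterates over the value view under the hood, so this is O(n).
    static boolean removeByValue(Map<String, Integer> keyToRecNo, int recNo) {
        return keyToRecNo.values().removeIf(v -> v == recNo);
    }
}
```

So it reads nicely, but it does not buy me anything over an explicit entrySet iterator performance-wise.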
By the way, my Data class is a singleton, and I keep a HashMap for the above mapping there to facilitate things. In my createRecord, I check whether the key is already in the map and throw DuplicateKeyException if it is. In my lockRecord, I check containsValue(recNo) before locking.

I documented the assumption (name+location is the key) in my choices.txt.
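Roughly, the bookkeeping I mean looks like this (a simplified sketch only: the nested DuplicateKeyException stands in for the one declared in the supplied interface, and I model the name+location key as a plain String):

```java
import java.util.HashMap;
import java.util.Map;

public class KeyIndex {

    // Stand-in for the exception declared in the Sun-supplied interface.
    static class DuplicateKeyException extends Exception {
        DuplicateKeyException(String msg) { super(msg); }
    }

    private final Map<String, Integer> keyToRecNo = new HashMap<>();
    private int nextRecNo = 0;

    // createRecord: refuse a record whose name+location key already exists.
    public synchronized int createRecord(String name, String location)
            throws DuplicateKeyException {
        String key = name + "|" + location;
        if (keyToRecNo.containsKey(key)) {
            throw new DuplicateKeyException("Record already exists: " + key);
        }
        int recNo = nextRecNo++;
        keyToRecNo.put(key, recNo);
        return recNo;
    }

    // lockRecord-style check: does this record number exist in the index?
    public synchronized boolean exists(int recNo) {
        return keyToRecNo.containsValue(recNo); // O(n) scan, as noted above
    }
}
```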
Champion... what if you have a HashMap<Integer, Room>, where Room is a class that represents a particular hotel room, and Integer is its number in the database? This way, you can simplify things and access each record via its number.
My HashMap is not really a whole-record cache, just a key-to-recNo mapping. I thought about creating another HashMap that maps from recNo to key (or record). But the only place I would need this new HashMap is in deleteRecord(), where I need to clean out the first HashMap (the key-to-recNo mapping). I wouldn't be able to use it anywhere else. Might that just be overkill?

My implementation persists data changes immediately to the db file, so I really don't use (and don't have) a whole-record cache. My simple HashMap is light (only key to recNo), and I thought keeping it in sync would be more efficient than a whole-record cache.

I could just get rid of my key-to-recNo mapping and instead use a recNo-to-key (or record) mapping. But then I would not be able to throw DuplicateKeyException. Unless I choose not to implement the exception, I have to keep my map.
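For concreteness, the two-map version I am debating would look something like this (a sketch only; the class and method names are mine, and the synchronization is simplified). The reverse map makes the delete O(1) instead of a scan:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoWayIndex {

    private final Map<String, Integer> keyToRecNo = new HashMap<>();
    private final Map<Integer, String> recNoToKey = new HashMap<>();

    // Insert into both maps so they stay in sync.
    public synchronized void put(String key, int recNo) {
        keyToRecNo.put(key, recNo);
        recNoToKey.put(recNo, key);
    }

    // deleteRecord only knows the recNo: the reverse map yields the key in O(1).
    public synchronized void deleteByRecNo(int recNo) {
        String key = recNoToKey.remove(recNo);
        if (key != null) {
            keyToRecNo.remove(key);
        }
    }

    // Duplicate-key check for createRecord, still O(1).
    public synchronized boolean containsKey(String key) {
        return keyToRecNo.containsKey(key);
    }

    // Existence check for lockRecord, now O(1) instead of containsValue's scan.
    public synchronized boolean containsRecNo(int recNo) {
        return recNoToKey.containsKey(recNo);
    }
}
```

The cost is the second map's memory and the duty of updating both maps together; since both maps only hold keys and record numbers, that still seems light.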
Well, you won't be able to have name + location as key, because there may be multiple records with the same name and location.
I see you want a Map to know whether a particular record is deleted or not, right? Well... what if you had a Map<Integer, Boolean> where Integer is the record number and Boolean indicates whether the record is deleted? I'm not really sure if this would work for you... it's just that I had a HashMap<Integer, Room> keeping the database records, and I also had a List<Integer> keeping the available record numbers, that is, the numbers of the deleted records.
Anyway, in your case, you have to make sure that the key is unique. You can use, for instance, the record number.
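Something like this rough sketch of what I described (Room is stripped down to two fields, and reusing deleted record numbers for new records is just how I happened to do it):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoomCache {

    static class Room {
        final String name;
        final String location;
        Room(String name, String location) {
            this.name = name;
            this.location = location;
        }
    }

    private final Map<Integer, Room> records = new HashMap<>();
    private final List<Integer> freeRecNos = new ArrayList<>(); // numbers of deleted records
    private int nextRecNo = 0;

    // Reuse a deleted record number if one is available, otherwise allocate a new one.
    public synchronized int create(Room room) {
        int recNo = freeRecNos.isEmpty() ? nextRecNo++ : freeRecNos.remove(0);
        records.put(recNo, room);
        return recNo;
    }

    // Deleting a record frees its number for reuse.
    public synchronized void delete(int recNo) {
        if (records.remove(recNo) != null) {
            freeRecNos.add(recNo);
        }
    }

    // A record number is "deleted" simply if it is absent from the map.
    public synchronized boolean isDeleted(int recNo) {
        return !records.containsKey(recNo);
    }
}
```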
Yeah, I agree that assuming name+location as the key is risky. I have been searching the forum to see how other people made this decision.

My createRecord() needs to throw DuplicateKeyException, but the instructions did not tell me what the key is. I saw that some people chose not to implement it at all. I might just not implement the exception either, and I hope Sun is OK with my choice. I have been hesitating over this for several days.

Well, if there is no concrete key, there could be very similar records. There could even be exact duplicates if createRecord() performs no check. Have you guys ever tried to detect an exact duplicate during creation? If so, did you throw DuplicateKeyException when an identical record already existed?
You know, most of us around here chose not to throw the DuplicateKeyException, because there isn't really a field (or combination of fields) that can be taken as unique. When creating a record, I just don't allow null values (except for the customer ID field).
Since you have B&S, you may have a primary key. Those of us with URLyBird have a valid argument that there isn't one and chose not to throw the exception. You might want to limit your search to B&S threads to see how others handled it.
Oh, right! I forgot that. Before choosing not to throw DuplicateKeyException, verify if there is a field (or combination of fields) that can be taken as unique!
Thanks all for your comments.
After one more day of hesitation, I decided to use name+location as the key. For the HashMap, I decided to use readRecord(recNo) to get the key and update the map accordingly. It is a bit of overhead, but I will live with it.
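In case it helps someone else with B&S, the shape of my deleteRecord is roughly this (a sketch only: an in-memory Map stands in for the real db file, the locking cookie is omitted, and the field layout is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class Data {

    private final Map<String, Integer> keyToRecNo = new HashMap<>();
    // Stand-in for the real db file: recNo -> {name, location, ...}.
    private final Map<Integer, String[]> file = new HashMap<>();

    public synchronized String[] readRecord(int recNo) {
        return file.get(recNo);
    }

    public synchronized void createRecord(int recNo, String[] data) {
        file.put(recNo, data);
        keyToRecNo.put(data[0] + "|" + data[1], recNo); // name+location key
    }

    // Re-read the record to recover its key, then remove the map entry by key.
    public synchronized void deleteRecord(int recNo) {
        String[] data = readRecord(recNo);
        if (data != null) {
            keyToRecNo.remove(data[0] + "|" + data[1]);
            file.remove(recNo);
        }
    }

    synchronized boolean containsKey(String key) {
        return keyToRecNo.containsKey(key);
    }
}
```

The extra readRecord is the overhead I mentioned, but it keeps me down to a single light map and still lets createRecord throw DuplicateKeyException.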