
Using record cache and booking method

Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
I'm working on the UrlyBird assignment and have a question about the booking method.

I'm using a thin client which provides a booking method. The booking method updates a record cache, which will then be written to the database file on shutdown of the application. My concern: how do I handle two clients trying to book the same room?

Do I need a check in my write method which ensures that records with customer IDs associated with them don't get overwritten? How then do I communicate this to the end user, especially if there was an issue with more than one of their updates?

Thanks
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Had a look at the FAQ: http://www.coderanch.com/how-to/java/ScjdFaq#lockMethodNeeded

Previously my lock method just checked that a record was not deleted, but now I will also check whether it has already been booked.

Let me know if there is a better solution.

Cheers
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Hi Colin,

To be honest: checking in your lock method whether a room is already booked is not a good idea, and for more than one reason. First, your lock method should only check that a record exists and whether it is already locked (according to its contract); the contract says nothing about already booked records. Second, if you add code to check the customer id (to see if a record is booked), you make your Data class specific to handling room records and you can't reuse it for handling other database files.

There is a much better approach. What do you think about this implementation of the book method of your business service:
a) lock the record to be booked
b) read this record
c) check to see if the record is still available
d1) if record is available, update record with the appropriate customer id
d2) if record is not available, throw some business exception (which you can catch on the client and handle appropriately)
e) unlock the record
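
In code, the sketch could look something like this (the lock-cookie style methods on Data are as discussed in this thread; the owner-field index and BookingException are just placeholders for illustration, not the assignment's exact API):

// Sketch only: Data is the assignment's db-access class (lock/read/update/unlock
// with lock cookies). OWNER_FIELD and BookingException are assumptions, e.g.
// public class BookingException extends Exception { ... }
public class BookingService {

    private static final int OWNER_FIELD = 7; // index of the customer-id column (assumption)
    private final Data data = new Data();

    public void book(int recNo, String customerId)
            throws RecordNotFoundException, BookingException {
        long cookie = data.lock(recNo);                    // a) lock the record
        try {
            String[] record = data.read(recNo);            // b) read this record
            if (!record[OWNER_FIELD].trim().isEmpty()) {   // c) still available?
                throw new BookingException("Record " + recNo + " is already booked");
            }
            record[OWNER_FIELD] = customerId;              // d1) set the customer id
            data.update(recNo, record, cookie);
        } finally {
            data.unlock(recNo, cookie);                    // e) always unlock
        }
    }
}

The finally block guarantees the lock is released even when the business exception is thrown, so d2) and e) both happen.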

Hope it helps!
Kind regards,
Roel

SCJA, SCJP (1.4 | 5.0 | 6.0), SCJD
http://www.javaroe.be/
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Cheers Roel.

I sent that previous reply too quickly, without looking at the design. In the end I went with the same logic you described. Thanks again!
Jim Hoglund
Ranch Hand

Joined: Jan 09, 2008
Posts: 525
Colin wrote:
...a record cache, which will then be written to the database file on shutdown of the application.

Colin - this is a scary choice. I know we are discussing locking, but it seems that this design choice could result in a failing grade. How will you justify waiting until shutdown before persisting the data? I use a record cache too, but each record change is only reported as successful after it has reached the database.

Jim...


BEE MBA PMP SCJP-6
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Jim Hoglund wrote:this design choice could result in a failing grade.

I don't see why you could (or would) fail when using a record cache with persisting at shutdown. I used exactly the same approach and justified this decision extensively (also mentioning drawbacks of this approach, with possible solutions one could implement) in my choices.txt, and passed with flying colors.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
One problem I'm having with the cache is OutOfMemory errors being thrown when I run Roberto's test class with more than 8000 threads. The program does not hang; it just throws the error and continues until completion. The OutOfMemoryError gets thrown from a different method (create, delete, update, find) each time, and it's difficult to pinpoint where exactly the problem is. I'm going to try updating my record cache Map to use SoftReferences (maybe I will need to write updates/deletes and new records directly to file with this approach) to see if that resolves the problem.
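
Something along these lines, as a rough sketch (readFromFile is a placeholder for the real file access):

import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of a SoftReference-backed record cache: entries can be cleared
// by the GC under memory pressure, so a cache miss falls back to the db file.
public class SoftRecordCache {

    private final Map<Integer, SoftReference<String[]>> cache =
            new ConcurrentHashMap<Integer, SoftReference<String[]>>();

    public String[] getRecord(int recNo) {
        SoftReference<String[]> ref = cache.get(recNo);
        String[] record = (ref == null) ? null : ref.get();
        if (record == null) {                     // never cached, or cleared by the GC
            record = readFromFile(recNo);         // placeholder for the real file access
            cache.put(recNo, new SoftReference<String[]>(record));
        }
        return record;
    }

    private String[] readFromFile(int recNo) {
        throw new UnsupportedOperationException("file access not shown");
    }
}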

Any other suggestions as to how I could handle this OOM? Thanks
Jim Hoglund
Ranch Hand

Joined: Jan 09, 2008
Posts: 525
Roel: Can you share your reasons for leaving the booking data unsaved? It could be an extended period of time, certainly hours but possibly even days or weeks. The disadvantages seem quite clear and the cost to persist each record is small. Again, I use a record cache too, to avoid going to the database while moving around in the GUI. But I also confirm the save of each record to the user.

Jim ...
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Colin Duggan wrote:Any other suggestions as to how I could handle this OOM? Thanks

If you get an OutOfMemoryError there is little you can do. Your server/application can't recover from this error (that's why it is an error and not an exception). That's a possible drawback of using a record cache. Just mention it in your choices.txt as a possible drawback, maybe suggest some solutions, and you'll be fine.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
cheers Roel!
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Jim Hoglund wrote:Can you share your reasons for leaving the booking data unsaved?

Code simplicity (no need to handle IOException each time), no need to keep two resources (database file and record cache) synchronized, I/O operations can result in a performance loss, ... The drawbacks are clear indeed; I mentioned them in choices.txt and suggested several workarounds / solutions to get rid of these issues.

But there's nothing wrong with your approach, nor with mine. It's just a (design) decision you take (and defend). I just made the remark because you said that such an approach could result in a failing grade, and that's completely untrue!
Jim Hoglund
Ranch Hand

Joined: Jan 09, 2008
Posts: 525
Thanks Roel. It's just that I come from a business background and have supported mission-critical systems. Data integrity is king for us and technical arguments carry little weight when data is at risk.

Jim ...
Jonathan Elkharrat
Ranch Hand

Joined: Dec 31, 2008
Posts: 170

Roel De Nijs wrote:
Colin Duggan wrote:Any other suggestions as to how I could handle this OOM? Thanks

If you get an OutOfMemoryError there is little you can do. Your server/application can't recover from this error (that's why it is an error and not an exception). That's a possible drawback of using a record cache. Just mention it in your choices.txt as a possible drawback, maybe suggest some solutions, and you'll be fine.


There's something you can do: try running the tests with a bigger heap size (-Xmx1024m, for example).


SCJP 5, SCWCD 5, SCBCD 5
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Jonathan Elkharrat wrote:try running the tests with a bigger heap size (-Xmx1024m, for example)

Of course, but that's not handling it, just deferring it.

But what he certainly has to do is investigate the source of the OutOfMemoryError. There are two possibilities:
a) because you use a cache, all records are kept in memory; the more records are stored in the cache, the less memory is available, and when you run out of memory you get the error
b) you have a memory leak, and memory that should become available is not released.

Whereas a) is completely normal behavior, you don't want b) to happen, so that requires a bit of investigation.

And as a final remark: for testing purposes you can use a command line option (e.g. -Xmx1024m), but remember that for assessing the assignment no extra command line arguments may be used except for those mentioned in the instructions.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Cheers guys.

When I run Roberto's test with the expanded memory I can raise the iterations from 1300 to 2000 (so that's about 10000 threads) before I get the OOM error again. I've profiled the app (using YourKit) while running and nothing stands out (I could probably do with improving my profiling skills) except the map size. My machine only has 2 GB of RAM.
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Hi Colin,

I guess you are having some real memory issue. I remember running Roberto's data class test with 10000 iterations without any problems at all. But it's almost 2 years since I submitted my assignment, so my memory may not be that flawless anymore. Therefore I unzipped my project again, changed Roberto's test to fit my needs and gave it a few runs (number of iterations = 2500).

Each of these runs completed successfully (no OutOfMemoryError):
  • with always deleting record 1: total number of records at the end was 31 (the 1st record is always reused for the new records)
  • with deleting a record with index between 0 and 50: total number of records at the end was 49 (sometimes a delete fails because the record does not exist, so new records are just added)
  • with no deletion of records: total number of records at the end was 2531 (31 records to start with + 2500 newly created ones)


Remarks:
  • my application reuses deleted entries
  • each run was started with an original database file (containing 31 records in my case)
  • every run was started with -Xms4m -Xmx8m from the command line
  • I also tried 5000 iterations in the 3 situations described above and again each run completed successfully (with similar results)

Conclusion: I'm able to run more iterations (threads) with less memory, so I guess you clearly have some memory issue (a leak). I would put some investigation into this.

Kind regards,
Roel
Roberto Perillo
Bartender

Joined: Dec 28, 2007
Posts: 2265

Howdy, Colin.

Champ, aren't you instantiating objects somewhere in your locking mechanism without need? Maybe in your lock() method? Try to investigate this... I think that you may be instantiating more objects than necessary.


Cheers, Bob "John Lennon" Perillo
SCJP, SCWCD, SCJD, SCBCD - Daileon: A Tool for Enabling Domain Annotations
Jonathan Elkharrat
Ranch Hand

Joined: Dec 31, 2008
Posts: 170

Objects are really small, and even a million objects shouldn't get near 1 GB of memory...

Maybe each thread gets a complete cache of its own? Or each call causes the whole file to be loaded into memory again...

I guess it's something connected with the database; it's the biggest thing there...
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Cheers Roberto.

Yeah, I think I may have overcomplicated my locking strategy. I was managing read and write locks using two additional HashMaps (in addition to the lock map, which keeps track of the locked records). In these additional maps I track records waiting to be read and records waiting to be written, along with the number of threads waiting for each of these operations. I then decide in my lock method whether the incoming thread can obtain the lock cookie or whether it needs to wait. If the thread has to wait, I update the waiting-reads or waiting-writes map. When a thread releases a lock on a record, I notify the threads waiting to write, or if none exist I notify the threads waiting to read. With this strategy multiple threads can read at the same time, and obviously only one can write/update at a time.

Should I be looking at a more simplistic approach here?
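
For reference, a minimal sketch of the simpler scheme I'm wondering about: a single map of locked record numbers guarded by one monitor with wait/notifyAll (just an illustration, not the assignment's exact API):

import java.util.HashMap;
import java.util.Map;

// One map of locked record numbers to lock cookies, guarded by a single
// monitor. No separate bookkeeping for waiting readers/writers is needed:
// every waiter re-checks the map after notifyAll.
public class LockManager {

    private final Map<Integer, Long> locks = new HashMap<Integer, Long>();

    public synchronized long lock(int recNo) {
        while (locks.containsKey(recNo)) {
            try {
                wait();                          // block until the record is released
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting for lock", e);
            }
        }
        long cookie = System.nanoTime();         // simplistic cookie, for illustration only
        locks.put(recNo, cookie);
        return cookie;
    }

    public synchronized void unlock(int recNo, long cookie) {
        Long held = locks.get(recNo);
        if (held == null || held != cookie) {
            throw new SecurityException("record not locked with this cookie");
        }
        locks.remove(recNo);
        notifyAll();                             // wake all waiters; they re-check the map
    }
}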


Jonathan Elkharrat
Ranch Hand

Joined: Dec 31, 2008
Posts: 170

As long as there's only one instance of the map it's okay; it's the content of the map that can cause problems. Maybe the map gets bigger and bigger... did you implement the hashCode() of the data class correctly? (Try adding println(map.size()) in a method that is called often, so you can see its size on the console.)
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Why do you suggest I implement hashCode()? Is it less memory intensive to override and use equals() instead of containsKey()? Also, I just use the == operator to compare lock cookies (which I generate using System.currentTimeMillis()).
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Jonathan Elkharrat wrote:did you implement the hashCode() of the data class correctly?

Why would that be necessary?
Jonathan Elkharrat
Ranch Hand

Joined: Dec 31, 2008
Posts: 170

If you are using a HashMap but don't implement hashCode(), every object is inserted in a different hash bucket, even if it is equal to one that already exists, instead of replacing the old one. It's a thin possibility, but it can be the reason a map grows huge...
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

I know what the hashCode is for; I'm just wondering why you would use the Data class as a key in a map.
Jonathan Elkharrat
Ranch Hand

Joined: Dec 31, 2008
Posts: 170

Sorry, I confused it with HashSet...

Anyway, the hashCode() of the key should be implemented correctly (I don't know what <key, data> pair he is using...).

That was just a suggestion; a collection can quickly cause an OOM error if not managed correctly...
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Jonathan Elkharrat wrote:That was just a suggestion; a collection can quickly cause an OOM error if not managed correctly...

That depends on the objects you are adding to your collection. I could easily add 5000+ <Integer, String[]> entries to a map with just 8MB of memory.
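
A quick sketch to verify that kind of claim (run with -Xmx8m; the sample field values are made up):

import java.util.HashMap;
import java.util.Map;

// Fills a map with 5000 Integer -> String[] entries and reports heap usage.
// Run with -Xmx8m to reproduce the 8MB constraint mentioned above.
public class MapMemoryCheck {

    public static void main(String[] args) {
        Map<Integer, String[]> records = new HashMap<Integer, String[]>();
        for (int i = 0; i < 5000; i++) {
            // seven short fields, roughly the shape of a database row (made up)
            records.put(i, new String[] {
                    "Palace-" + i, "Smallville", "2", "Y", "$150.00", "2005/07/27", "" });
        }
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        System.out.println(records.size() + " records cached, ~" + usedKb + " KB heap used");
    }
}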
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
When the OOM error is thrown, my app is only using around 4.5MB of the 15MB of heap space available to it.

The hs_err file contains the message:
java.lang.OutOfMemoryError: requested 160008 bytes for Chunk::new. Out of swap space?
Internal Error (allocation.cpp:272), pid=15728, tid=11416
# Error: Chunk::new
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Maybe this link will be helpful. The release notes of Java 1.5 also mention this error message.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
From what I have read, it seems that exhaustion of the native heap prevents new objects being created on the Java heap. So if the native heap contains the running threads as well as the Java heap, then I'm still back to the issue of a leak in my code, right? Some suggestions I have seen for this issue mention increasing the swap space by reducing the Java heap size, but this isn't an option as I can't provide any additional arguments.

Do you know how much space Java allocates for each new thread? Is it possible I have created too many threads and my JVM can't handle it? E.g. if each thread is given 512KB and Eclipse allows up to 512MB, then once I create 1024 threads I should start to see problems. Does this make any sense?
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

I don't have a clue how much space is used for an object, but I do know that it's not that big. This is also confirmed by the tests concerning this issue (see one of my posts above): I was able to execute 5000 iterations (that's 25000 threads plus a bunch of other objects) with a maximum 8MB heap size.

Have you already tried printing some memory information (freeMemory, totalMemory, maxMemory in the Runtime class)? That might give a hint about what's going on. Or maybe use the Java heap analysis tool and see what's going on there, how many instances of each class are on the heap, and whether it makes sense to have that many.
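
For example, a tiny helper along these lines (the class and method names are my own):

// Sketch of the memory trace suggested above: call MemoryLog.log("create")
// from the methods under test to see how heap usage evolves during the run.
public final class MemoryLog {

    public static void log(String where) {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        System.out.printf("%s: used=%dKB total=%dKB max=%dKB%n",
                where, usedKb, rt.totalMemory() / 1024, rt.maxMemory() / 1024);
    }
}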

Good luck!
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Cheers Roel, I had a look at the profiler again and made some changes. Now I can get the test running 8000 iterations (40,000 threads) without throwing any OOM. I had to rework my lock method: previously I first checked whether the record was locked before checking whether it existed, so I swapped that around, and I also added another check to make sure the record still exists after the thread gets the lock object (I'm using a ReentrantLock).

I also went through the code and removed any unnecessary creation of Integer, Long or String objects. I also stripped out all the System.outs from the test class and just used the logger to highlight any issues. A bottleneck still exists in the lock method when a large number of threads (running 8000+ iterations) are waiting on the same lock object. I think this is just something I'm going to have to justify and provide a possible workaround for in my choices.txt?
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Colin Duggan wrote:A bottleneck still exists in the lock method when a large number of threads (running 8000+ iterations) are waiting on the same lock object. I think this is just something I'm going to have to justify and provide a possible workaround for in my choices.txt?

What exactly do you mean by a bottleneck? It's quite normal that every thread has to wait if they all want to lock the same record.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Yeah, I just mean the threads are stuck waiting while some other thread updates/deletes etc.
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Which one suits your situation best?
  • If they are waiting because other threads have locked the record they also need to lock, then that's the intended behavior.
  • If they have to wait (and can't lock an available record), then your locking mechanism isn't working as required and should be changed.
  • If they are just waiting to get some CPU time, that's also normal behavior.

In the 1st and last situations you can mention it in choices.txt, but there is no real software solution.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
I rule out the second situation because, no matter what happens, the test always runs to completion and ends without hanging; from this I assume that any thread which blocked waiting for a record lock was later able to acquire it and run to completion.

The first point you mention best describes what I'm seeing. The specification does mention that the system is only to support CSRs, so maybe I can ease up on the performance testing.
Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Colin Duggan wrote:The specification does mention that the system is only to support CSRs, so maybe I can ease up on the performance testing.

I think you can.
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Sorry to reopen this. I'm submitting in a couple of weeks and need to close this out. I did some more testing and noticed that my problems occur when writing to my memory cache in the create method. If I leave out the create thread in the test program I can run 1,000,000+ threads without issue, but when I add the create thread back in I can only run approximately 20,000 iterations before getting an OOM. This number decreases even more when I add my logic in the create method to reuse deleted records. Did you guys do anything different to make your maps more efficient? Any suggestions would be great! Thanks

Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Ok, let's use our common sense to think about the things you experience.

1/ When you disable the create thread, the record cache (the map with records) always stays the same size, because no records are created, just updated. Threads are created and, when they are finished, these objects are garbage collected... Nothing happens beyond some searches, reads and updates. So being able to run more than 1,000,000 threads when you disable the create thread is the expected behavior.

2/ When you enable the create thread (without reusing deleted entries), the size of the record cache increases with every create thread that's executed. So it seems just normal to me that you can run fewer iterations than when having no creates. When there is no memory left, you'll get an OOM. Again, that's normal. You can "solve" this by increasing the memory, but that's not a real solution, because if you keep increasing the number of iterations you'll get an OOM at some point (which is normal). This is one of the main drawbacks of using a record cache, and you should address it in your choices.txt.

3/ When you enable the create thread (with reusing deleted entries), the size of the record cache will increase, but because deleted entries are reused the increase will not be as high as in the previous point. So if the number of iterations decreases in this case, that's not normal, and it seems you have some issue with your "determine the next free record number" logic.

Hope it helps (and makes a bit of sense).
Colin Duggan
Ranch Hand

Joined: Feb 23, 2008
Posts: 41
Cheers Roel, makes perfect sense. The third point is odd for me, because my code to reuse an existing record is only a few lines long: I just get the first element, deletedRecords.get(0), from my list of deleted records, add it to my record cache along with the String[], and finally remove the deleted recordNo from my list, deletedRecords.remove(0), so there's not much room to mess up.


Roel De Nijs
Bartender

Joined: Jul 19, 2004
Posts: 5300

Why do you have a separate list with deleted records? Why not just store null in your record cache for a deleted record? Then you don't need this extra collection, and the String[] becomes eligible for GC.
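
A sketch of that idea (the class shape is my own; a TreeMap keeps record numbers ordered so the lowest free slot is reused first):

import java.util.Map;
import java.util.TreeMap;

// Deleted records stay in the cache as null values, so "next free record
// number" is just the first null slot and the old String[] becomes eligible
// for GC. No separate deleted-records collection is needed.
public class RecordCache {

    private final Map<Integer, String[]> cache = new TreeMap<Integer, String[]>();

    public synchronized int create(String[] record) {
        for (Map.Entry<Integer, String[]> entry : cache.entrySet()) {
            if (entry.getValue() == null) {      // a deleted slot: reuse it
                entry.setValue(record);
                return entry.getKey();
            }
        }
        int recNo = cache.size();                // no free slot: append at the end
        cache.put(recNo, record);
        return recNo;
    }

    public synchronized void delete(int recNo) {
        cache.put(recNo, null);                  // keep the key, drop the data
    }
}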