ch pravin wrote:I am running my program on a cluster that has sufficient memory. The outer loop iterates 16,000 times, and for each iteration of the outer loop the inner loop iterates about 300,000 times. The error is thrown at different points in the program, but it occurs only when I insert the code that stores the <String, Float> pair in a HashMap. Any suggestions will be greatly appreciated.
Henry Wong wrote:
ch pravin wrote:I am running my program on a cluster that has sufficient memory. The outer loop iterates 16,000 times, and for each iteration of the outer loop the inner loop iterates about 300,000 times. The error is thrown at different points in the program, but it occurs only when I insert the code that stores the <String, Float> pair in a HashMap. Any suggestions will be greatly appreciated.
Well, let's do some simple math... 16,000 iterations of the outer loop times 300,000 iterations of the inner loop gives you 4,800,000,000 iterations. Assuming the String of each pair is 80 characters long, putting the pair at about 84 bytes, and ignoring the memory needed for the map's nodes and other overhead, that gives you a total memory footprint of a bit over 400 GB.
Henry
ch pravin wrote:
I am clearing the HashMap every time the outer loop starts, so the HashMap contains only about 300,000 entries. Also, the string size is 8 characters at most.
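One detail worth knowing about this pattern (a sketch, not the OP's actual code): HashMap.clear() empties the entries but keeps the internal bucket array allocated, so a reused, pre-sized map avoids repeated rehashing across outer iterations. The names and loop bounds below are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

public class MapReuse {
    public static void main(String[] args) {
        // Pre-size for ~300,000 entries so the map never rehashes
        // (capacity must exceed size / 0.75 default load factor).
        Map<String, Float> reused = new HashMap<>(600_000);
        for (int outer = 0; outer < 3; outer++) {
            // clear() drops the entries but reuses the bucket array.
            reused.clear();
            for (int inner = 0; inner < 300_000; inner++) {
                reused.put("k" + inner, (float) inner);
            }
        }
        System.out.println("entries after last pass: " + reused.size());
    }
}
```

If the map itself were the leak, reusing one pre-sized instance like this would cap its footprint; the fact that the error persists suggests the retained memory is elsewhere.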
Henry Wong wrote:
It shouldn't be hard to confirm -- just put a check near the put to confirm that the count doesn't climb over some value, say 500k.
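The check Henry describes could look something like this (a hypothetical sketch; `checkedPut`, the key format, and the 500k ceiling are stand-ins, not the OP's code):

```java
import java.util.HashMap;
import java.util.Map;

public class GuardedPut {
    private static final int LIMIT = 500_000; // suggested sanity ceiling

    // Wrap the put so the program fails fast with a clear message
    // instead of grinding toward an OutOfMemoryError.
    static void checkedPut(Map<String, Float> map, String key, float value) {
        map.put(key, value);
        if (map.size() > LIMIT) {
            throw new IllegalStateException(
                    "Map grew past " + LIMIT + " entries: " + map.size());
        }
    }

    public static void main(String[] args) {
        Map<String, Float> map = new HashMap<>();
        for (int i = 0; i < 300_000; i++) {
            checkedPut(map, "key" + i, (float) i); // stays under the limit
        }
        System.out.println("final size: " + map.size());
    }
}
```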
Otherwise, if you are already convinced that this is not the source of the leak, then you are back to square one. Get a profiler, and see which object types are growing past your expectations.
Henry
ch pravin wrote:
The size of the HashMap grows up to 40,000 and then the error is thrown. I am not sure how to use a profiler. Can you shed some light on what it will do? I am executing the code as a jar file on a remote machine.
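When attaching a GUI profiler to a remote jar is impractical, one low-tech alternative (not a profiler, just a sketch) is to log coarse heap statistics from inside the loop using the standard `Runtime` API, and to launch with the HotSpot flag `-XX:+HeapDumpOnOutOfMemoryError` so a heap dump is written for later offline analysis. The `logHeap` helper and the 50 MB ballast below are illustrative only.

```java
public class HeapLog {
    // Print coarse heap statistics; call this every N iterations of the loop.
    static void logHeap(String where) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.printf("%s: used=%d MB, total=%d MB, max=%d MB%n",
                where, used >> 20, rt.totalMemory() >> 20, rt.maxMemory() >> 20);
    }

    public static void main(String[] args) {
        logHeap("startup");
        byte[] ballast = new byte[50 << 20]; // simulate the program allocating 50 MB
        logHeap("after allocating " + (ballast.length >> 20) + " MB");
    }
}
```

If the "used" figure keeps climbing across outer-loop iterations even though the map is cleared, something other than the map is retaining memory.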
Ilari Moilanen wrote:I trust that you have used Google to solve your problem? One of the first results is this
http://stackoverflow.com/questions/1393486/what-means-the-error-message-java-lang-outofmemoryerror-gc-overhead-limit-excee
which indicates that increasing the heap size could solve your problem. Of course, if there is a major problem and the heap memory is not freed correctly (as Henry Wong suggested), then increasing the heap space will not help in the long run.
Ilari Moilanen wrote:I trust that you have used Google to solve your problem? One of the first results is this
http://stackoverflow.com/questions/1393486/what-means-the-error-message-java-lang-outofmemoryerror-gc-overhead-limit-excee
which indicates that increasing the heap size could solve your problem.
Henry Wong wrote:
This is actually a good point. I just assumed, when the OP said that there was plenty of memory, that the JVM was configured to use all of that memory. Just how much heap space is the JVM configured for?
Henry
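One way to answer Henry's question without access to the launch script is to have the program report its own configured ceiling: `Runtime.maxMemory()` returns the limit the JVM will attempt to use (the `-Xmx` setting, or the default if none was given). A minimal sketch:

```java
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() reports the heap ceiling (e.g. set via -Xmx).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Configured max heap: " + (maxBytes >> 20) + " MB");
    }
}
```

The ceiling can then be raised explicitly when launching the jar, e.g. `java -Xmx8g -jar app.jar`.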
ch pravin wrote:I tried increasing the heap size to 8 GB; that made the HashMap grow up to 80K entries before the code threw a Java heap space error. You said that there could be a possible memory leak when each data point is processed. Can you suggest some steps to resolve the issue?
ch pravin wrote:
I am not sure how much heap space the JVM is configured for, since it's a remote machine. I tried increasing the heap size to 8 GB; that made the HashMap grow up to 80K entries before the code threw a Java heap space error.
Henry Wong wrote:
Keep in mind that, with the exception of the Azul JVM and its pauseless GC, most JVMs don't work well with a heap that big. A Sun JVM may take a long time to collect an 8 GB heap, and under certain conditions it may never finish.
Henry
ch pravin wrote:I am not sure how to use a profiler. Can you shed some light on what it will do and which one is the best?