The application I'm currently working on uses native C code to do all the searching and sorting, and returns the results back to the Java code.
When a large job is submitted, the application generally has had it (the damn thing crashes!).
The issue was traced to the native code: it appears to be running out of virtual address space when a large job is submitted, even though the available physical RAM and virtual memory both look fine:
Here is the log from inside the C code:
Failed to allocate handle of 131072 bytes for file <C:\zapOtherSrc\cpp\dev\mods\util1\STRMCACH\strmcach.cxx> at line 99.
Heap Manager statistics
*** 1 MEMORY ALLOCATIONS FAILED!
Currently allocated: 1,106,050 handles, 999,874k
Total allocated: 40,389,251 handles, 45,530,388k
Total freed: 39,283,201 handles
Total won't see free: 0 handles
Total realloced: 2,487,794 handles
Maximum amount allocated: 999,874k from file <C:\zapOtherSrc\cpp\dev\mods\base\STR\str.cxx> at line 1328.
Largest allocation: 27,321k from file <C:\zapOtherSrc\cpp\dev\prods\zap\engine\misc\lsconverter.cpp> at line 328.
(These numbers do not include MFC, CObject-derived and specialty allocs.)
(The Currently and Max amount allocated also do not include our 'new' allocs.)
System memory in use: 54%
Physical memory size: 2,047m, free: 1,861m
Page file size: 4,095m, available: 4,095m
Virtual address space size: 2,047m, available: 9m
End Heap Manager statistics
Exception message: Out of memory.
Exception thrown from file <C:\zapOtherSrc\cpp\dev\mods\base\HM\hm.cxx> at line 390
Now, is there a way to increase the memory allocated to the C code? As far as I've searched, the -Xmx and -Xms options only size the Java heap, and the native heap is separate, so increasing the Java heap size won't do anything.
Is there a way to start the JVM so that it allocates more memory for the native code, or are we up against the 32-bit restriction imposed by the OS?
Thanks in advance!
In theory there is no difference between theory and practice; in practice, there is!
There is no such thing (from Java's point of view) as a "native heap", so you don't have an equivalent of -Xmx and -Xms when using JNI.
I think you are facing an OS limitation here. I don't know if this is a 32-bit restriction (you are able to address 2^32 - 1 bytes, minus the kernel's, drivers', etc. memory usage).
If I were you, I would try to decrease your Java heap (in order to free some native memory) or increase the Windows virtual memory.
Keep us updated ... this is an interesting issue
After doing some searching, it really is starting to look like an OS restriction. The virtual address space per process that a 32-bit Windows OS allows is 2^32 bytes, meaning 4 GB.
Out of this 4 GB, 2 GB is allocated as user space and the rest is kernel space, which cannot be used for useful work by the process.
If you look at the logs posted above, physical memory is available and virtual memory is available; what is not available is the virtual address space! My guess is that the native C process swells up to the maximum level the OS permits and then demands more; this is where the OS intervenes and stops the memory allocation.
This results in an out-of-memory error, which is sent up to the Java layer.
I'm experiencing the same problem (32-bit Windows XP with 2GB of RAM, Java 1.6). Has anyone found a solution or an explanation?
From my Java app, I'm using JNI to dive down into C code and malloc() memory. Trying to allocate 500 MB fails. Allocating 1 MB 500 times works. The lower the Xmx setting, the larger a chunk I can allocate and the more small chunks I can allocate. Using small chunks, I can allocate much more than the Xmx setting, so the "native" allocations are not limited to the Java heap's size.
I should mention that on Mac 10.6 (32-bit and 64-bit JVM), I can run the same Java application (and JNI code) and allocate large chunks in sizes I would expect.
My speculation is that on 32-bit Windows the JVM is seriously fragmenting RAM and that the more memory one gives the JVM via Xmx, the more it'll fragment RAM. This is as far as I've gotten and am hoping there's some way to tell the JVM to be more compact in its allocations.