I actually never set Thread stack but this is what I found out about it:
Deep recursive algorithms can also lead to Out Of Memory problems. In this case, the only fixes are increasing the thread stack size (-Xss), or refactoring the algorithms to reduce the recursion depth or the amount of local data per call.
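For what it's worth, a sketch of how that flag is usually passed, assuming a standard Tomcat layout where you create a `setenv.sh` yourself (the 512k value is only illustrative, not a recommendation):

```shell
# Hypothetical setenv.sh in $CATALINA_HOME/bin -- Tomcat sources it on startup.
# -Xss sets the stack size *per thread*; bigger stacks allow deeper recursion,
# but fewer threads then fit in the same address space.
export CATALINA_OPTS="$CATALINA_OPTS -Xss512k"
```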
I believe that someone suggested 1024KB but I could not start the server with that size.
Any tips, hints, and suggestions would be greatly appreciated.
I'm not sure what you mean by "guaranteed performance tuneup".
Tomcat's defaults, for the most part, are set for production. Most of the changes I make slow things down but make the setup more friendly for production (reloadable, logging, etc.).
The memory changes that you mentioned are less Tomcat-specific and more JVM-specific. If you want to learn how to tweak the JVM, you might want to try some Google searches that don't include Tomcat.
This was my first find when I googled "Java JVM Tuning"
I run a Spring app server, and every time I reach 10,000 active connections the server runs out of resources and requires a reboot. I have been able to work around the issue by setting limits on the applications being run.
I don't know how to get past the 1024 MB maximum memory limit. I can bump the server up to 16 GB of RAM, and since it is in the cloud the processors would be no issue either, but I don't know enough about Tomcat, or more precisely the JVM, to start tuning this.
Therefore I'm looking for some sources to start learning about it, including spreading the View-Model processing across several Tomcat instances.
I've read that the 1024 MB limit is because 32-bit OSes can't give more than 1024 MB of contiguous RAM to a single application, and that JVMs require contiguous blocks. I don't know whether this is a Windows OS thing or an Intel chip thing.
If it's the latter, switching to Linux wouldn't help. The fix, I've read, is to upgrade to a 64-bit system.
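On a 64-bit OS with a 64-bit JVM, the contiguous-block ceiling stops being the limiting factor and the heap can be sized well past 1 GB. A sketch, again assuming options are passed via `CATALINA_OPTS` (the -Xms/-Xmx values are purely illustrative):

```shell
# Hypothetical heap settings for a 64-bit JVM on a 16 GB box.
# -Xms/-Xmx set the initial/maximum heap; leave headroom for the OS and for
# non-heap JVM memory (thread stacks, permgen, native buffers).
export CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx12g"
```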
I'm hitting the 1024 ceiling myself and am looking at a couple of options. One is to upgrade the hardware to a 64-bit system.
I'm also toying with the idea of taking some of the more memory-intensive operations (PDF merging and creation, for example) and forking a new process for them. Tomcat gives you the option to do this with JSP compilation.
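That forking idea could look something like this; `app.jar` and `com.example.PdfMergeJob` are hypothetical placeholders for your own code, not anything Tomcat provides. The point is that the heavy step runs in a throwaway JVM with its own heap, so its memory spike never lands on Tomcat's heap:

```shell
# Hypothetical: run the PDF merge in a separate, short-lived JVM.
# The child process gets its own -Xmx and returns all memory to the OS on exit.
java -Xmx512m -cp app.jar com.example.PdfMergeJob in1.pdf in2.pdf merged.pdf
```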
I wonder whether that would work with the Spring framework. I heard about Terracotta, but I'm not sure how that would work, since I haven't even seen clustered Tomcat or Spring yet.
I don't use much JSP. I'm running the front end on SiteMesh and FreeMarker.
I would imagine there would be some problems with the application context. I'm running most of the pages stateless, but some are stateful and that could be a problem.
The logs are standard vanilla stuff about Catalina not having enough memory.
I will install a 64-bit system so I can test the JVM with a larger memory allocation.
Did anyone try to run Tomcat with 16 GB of RAM on a 64-bit system?
Kees Jan Koster
Please note that there are a dozen or so forms of OOMEs. Please post the message part of the exception, so we can see what happened precisely.
I have no idea what OOME means. If you have a link to at least one forum that is in your opinion good, I would appreciate it.
Every once in a while I get java.lang.OutOfMemoryError: Java heap space.
I tweak the heap sizes and garbage collector settings each time, and it is good for a while after that. But there is only so much you can do on a 32-bit JVM.
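For reference, the kind of tweaking described above usually means flags along these lines; the exact values are illustrative, and on a 32-bit JVM the usable heap tops out well below the machine's physical RAM, with the exact ceiling depending on the OS:

```shell
# Illustrative 32-bit heap/GC settings (JDK 6 era).
# Setting -Xms equal to -Xmx avoids heap-resize pauses; CMS was the
# JDK 6 low-pause collector.
export CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC"
```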
I'm trying to find out if I can run 64-bit CentOS with a 64-bit JVM (JDK 6.0). But I found out that it may be a problem with Intel processors; the AMD ones seem to be the only processors that will work with the 64-bit JDK 6.0.
At this point it seems that I should open a post about the 64-bit JDK.
Are you running it on your own server, or do you have dedicated hosting, or do you run on cloud space?
The Saloon on Javaranch is a great forum. Probably the best. I like http://java-monitor.com/forum/ too, but I am biased, since I operate that forum. ;-)
The error you posted confirms this is a heap space issue and not something else.
I run JDK 1.6 in 64-bit mode on both AMD and Intel processors. It just works. I have my own hardware.
I would suggest you also make heap dumps and use profiling to try to reduce the memory footprint of your application. That will give you more time to buy hardware.
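A low-effort way to get those heap dumps on a HotSpot JVM is to have the JVM write one automatically at the moment of the OOME, then open the file in an analysis tool offline (`jhat` shipped with JDK 6; Eclipse MAT is another common choice). The path shown is only an example:

```shell
# Real HotSpot flags: write a heap dump to the given directory whenever an
# OutOfMemoryError is thrown, for offline inspection.
export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat"

# ...or take a dump on demand from a running JVM (JDK 6 jmap syntax):
# jmap -dump:format=b,file=/tmp/heap.hprof <tomcat-pid>
```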
Gotta admit I was scratching my head about "OOME" myself. I have enough trouble with NPE.
Getting good stack info is critical. I lost nearly a month of my life on an, ahem, OOME that was only reproducible by allowing the system to run for 3-5 days. Turned out to be a "cache" that didn't understand that part of the difference between a cache and a bottomless pit is that a cache is expected to have a limit beyond which it starts discarding stuff.
The worst thing was, this "cache" was from a major vendor, who had even explicitly documented its lack of a discard mechanism. But not very well.
Customer surveys are for companies who didn't pay proper attention to begin with.