We are using WAS 5.0.2 for a J2EE application. OS: Solaris 8 with 8 GB total memory.
I've read that the recommended WAS max heap size is 1/4 of total memory, so it is set to 2 GB. Lately we have been experiencing performance issues, and I am wondering if we need to increase the heap size to, say, 4 GB. I will be doing load tests at various heap sizes, but I'd like to know from your experience what percentage of total memory the WAS heap should consume. Also, what percentage of the max should the min heap size be? I've read recommendations ranging from 1/4 of the max up to the same as the max.
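Whatever sizes you settle on, it's worth confirming what the JVM actually received. This is a minimal standalone sketch (a hypothetical `HeapCheck` class, not part of WAS); run it with the same `-Xms`/`-Xmx` flags you give the server, e.g. `java -Xms512m -Xmx2048m HeapCheck`:

```java
// Prints the heap limits the JVM actually got from -Xms/-Xmx.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // roughly the -Xmx ceiling
        long committedMb = rt.totalMemory() / (1024 * 1024); // heap committed so far
        System.out.println("max heap  (MB): " + maxMb);
        System.out.println("committed (MB): " + committedMb);
    }
}
```

Note that `totalMemory()` starts near `-Xms` and grows toward `maxMemory()` as the heap expands.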
You might already have this in mind, but just to add my view: I think guidelines like "heap = 1/4 of memory" are helpful only up to a point.
It is really our application we have to analyse to figure out what to change in order to solve the issue, because what if increasing the heap doesn't solve the problem and it comes back after 4 months?
Finding the issue will be application specific, but some things to consider:
1. Did a previous version of the code (if any) have this issue? If yes, why? If no, why not?
2. Has anything changed in the way we access data that makes something perform badly, so that overall we see a performance issue?
3. Has anything changed that introduces a "Java memory leak" sort of problem?
4. Has anything new been introduced that creates many more short-lived objects? Do we need caching?
5. Look into the GC pattern of our application to see what we can change there that will help...
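On point 4, the usual fix for short-lived-object churn is to build the expensive value once and reuse it. A minimal sketch of that idea (`CachedLookup` and `loadFromDatabase` are made-up names for illustration; raw collections used since WAS 5 predates generics):

```java
import java.util.HashMap;
import java.util.Map;

// Instead of rebuilding the same expensive value on every request
// (creating lots of short-lived garbage), keep one copy in a cache.
public class CachedLookup {
    private final Map cache = new HashMap();

    public synchronized Object get(String key) {
        Object value = cache.get(key);
        if (value == null) {
            value = loadFromDatabase(key); // expensive call runs once per key
            cache.put(key, value);
        }
        return value;                      // later calls reuse the same object
    }

    private Object loadFromDatabase(String key) {
        return "value-for-" + key;         // stand-in for the real lookup
    }
}
```

The trade-off, of course, is that a cache holds memory instead of churning it, so it has to be bounded or invalidated sensibly.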
These are some ideas I came up with at 12 at night on a Saturday, when I should be out drinking and dining with my friends, so forgive me if I sound too obvious...
Why do you think increasing the heap will resolve your performance issues?
I would investigate what is causing the slowness. Depending on the problem, increasing the heap may provide some temporary relief, but you'll be better off long term if you identify and fix the root cause.
How often, and for how long, is the JVM spending time doing garbage collection? Check the native stderr logs to find out. How much memory is freed on each cycle? (If you are not seeing the GC log messages, you may need to enable verbosegc.)
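The "memory freed per cycle" figure can be pulled straight out of the log lines. This sketch assumes Sun HotSpot `-verbose:gc` format (lines like `[GC 32768K->12000K(65536K), 0.0123 secs]`); IBM JDKs log in a different format, so adjust the parsing accordingly:

```java
// Rough parser for one HotSpot -verbose:gc line: returns KB freed by the cycle.
public class GcLineParser {
    // e.g. "[GC 32768K->12000K(65536K), 0.0123456 secs]" -> 20768
    public static long freedKb(String line) {
        int arrow = line.indexOf("->");
        int beforeEnd = line.lastIndexOf('K', arrow);          // 'K' before the arrow
        int beforeStart = line.lastIndexOf(' ', beforeEnd) + 1;
        long before = Long.parseLong(line.substring(beforeStart, beforeEnd));
        int afterStart = arrow + 2;
        int afterEnd = line.indexOf('K', afterStart);           // 'K' after the arrow
        long after = Long.parseLong(line.substring(afterStart, afterEnd));
        return before - after;                                  // occupancy drop = freed
    }
}
```

Feeding every GC line through this and watching the freed amount shrink over time is a cheap first leak indicator.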
If you are seeing frequent GCs that aren't freeing much memory, and/or the heap continues to grow over time, take heap dumps and analyze the contents to see which objects are using memory. Comparing heap dumps taken over a period of time can help identify objects that are leaking.
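For context, this is a minimal sketch of the kind of leak a heap-dump comparison exposes (`LeakyRegistry` is a made-up class name for illustration): a static collection that only ever grows, so everything it references survives every GC and shows up larger in each successive dump.

```java
import java.util.ArrayList;
import java.util.List;

// Classic leak pattern: entries are added on every request,
// but nothing ever removes them, so the GC can never reclaim them.
public class LeakyRegistry {
    private static final List SESSIONS = new ArrayList();

    public static void register(Object session) {
        SESSIONS.add(session);   // grows forever -- visible in heap dumps
    }

    public static int size() {
        return SESSIONS.size();
    }
}
```

In two dumps taken an hour apart, the `ArrayList` (and everything hanging off it) is the object whose retained size keeps climbing.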
How many threads are in the web container thread pool and what percent is utilized? A large number of executing threads can consume a lot of memory. A pool with too few threads is a bottleneck. If you have available CPU/memory/database resources, you may be able to increase the number of threads in the pool.
One indication that you need more threads (or more WebSphere instances) is that requests are processing through WebSphere very quickly, but users are seeing slow response times and the web container thread pool is maxed. (The requests are queueing on the web server.)
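The queueing effect described above is easy to reproduce in miniature. This sketch uses `java.util.concurrent` (newer than WAS 5's JDK, but the behaviour is the same): with only 2 worker threads and 10 slow "requests", 8 requests sit waiting for a thread, so users see slow responses even though each request runs quickly once it starts.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstrates requests queueing behind an undersized thread pool.
public class PoolQueueDemo {
    public static int queuedAfterSubmit(int poolSize, int requests) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        for (int i = 0; i < requests; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(200); }      // a "slow request"
                    catch (InterruptedException e) { }
                }
            });
        }
        Thread.sleep(50);                    // let the workers pick up their tasks
        int queued = pool.getQueue().size(); // requests still waiting for a thread
        pool.shutdownNow();
        return queued;
    }
}
```

The same arithmetic applies to the web container pool: in-flight capacity is the pool size, and everything beyond that waits.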
Take Java thread dumps. Look for long-running threads and/or many threads in the same method, and investigate why they are slow (database slowness, etc.). Are threads blocked waiting for a particular resource (database connections, synchronized methods, etc.)?
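As a minimal sketch of the "blocked on a synchronized method" case a thread dump reveals (note `Thread.getState` needs Java 5, newer than WAS 5's JDK; in a WAS 5 dump you'd see the same thing as a thread waiting on a monitor):

```java
// One thread holds a monitor; a second thread trying to enter the same
// synchronized block goes BLOCKED -- exactly what shows up in a thread dump.
public class BlockedThreadDemo {
    private static final Object LOCK = new Object();

    public static Thread.State stateOfWaiter() throws InterruptedException {
        synchronized (LOCK) {                      // this thread holds the monitor
            Thread waiter = new Thread(new Runnable() {
                public void run() {
                    synchronized (LOCK) { }        // must wait for the monitor
                }
            });
            waiter.start();
            // spin until the waiter has actually parked on the monitor
            while (waiter.getState() != Thread.State.BLOCKED) {
                Thread.sleep(1);
            }
            return waiter.getState();
        }                                          // monitor released; waiter proceeds
    }
}
```

Many threads stacked up in this state on the same lock is a strong hint that the synchronized resource is your bottleneck.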
Tools like Tivoli Performance Viewer (TPV) or WebSphere Studio Application Monitor (WSAM) can help identify long-running requests and connection pool and thread pool utilization in real time.
You will do well to investigate and resolve the root cause of the slowness rather than simply trying different heap sizes.
What I've mentioned above just scratches the surface of performance tuning a web app, but it's a start. Once you have the results from the above steps, I think you'll have a better sense of where to go next.