I'm having issues with a custom server application, and I'm hoping someone can point me in the right direction with some profiling tips.
The issue is that when I run embedded Tomcat 7 inside my server, the CPU for my java process goes very high and stays very high for a long time. I'm running in my dev environment: Windows 8 running in Parallels for Mac on a dual-core machine, with everything (client, server, MSSQL Server, plus Eclipse and other applications) running on the same machine.
The server listens for connections on a couple of ports. A custom client connects to it on these ports. Embedded Tomcat adds the ability for the server to listen to HTTP traffic on another port.
The test case I've used is as follows:
1. Start the server
2. Load a client and log in (connects to the server, the server retrieves data from a local MSSQL Server database, builds some objects, replies to the client)
Running without embedded Tomcat: CPU spikes at 70% but quickly falls back to 0%. Client login takes <2s.
Running without embedded Tomcat but with standalone Tomcat running (for comparison): CPU spikes at 70% but quickly falls back to 0%. Client login takes <2s.
Running with embedded Tomcat: CPU goes up to 50% and stays there until the login finishes, at which point it falls back to 0%. Client login takes 90s.
I should also point out that, even when embedded Tomcat was running, I wasn't hitting its ports. I was running exactly the same client with exactly the same user login, so embedded Tomcat *shouldn't* have been doing anything.
I've tried profiling with VisualVM and found it difficult to find anything meaningful (which is why I need some help). Memory usage is as expected (a nice saw-tooth pattern, no overall rise over time). Thread dumps don't show me anything unexpected: just the things I expect to be happening, taking much, much longer than usual. I couldn't figure out a way to show exactly which of the threads was eating the CPU.
So, any tips / hints / ideas as to how I should proceed? Any suggestions very gratefully received.
You could be experiencing a GC problem: not with the heap (since you have a saw-tooth pattern) but with the permanent generation, which can make the GC run a lot more often. You can install the VisualGC plugin in VisualVM to get more information, or run with verbose GC logging to see the frequency and duration of GC runs.
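If attaching VisualVM is awkward, the same collection counts and times are also available in-process through the standard GarbageCollectorMXBean. A minimal sketch (just a stand-alone dump; in your server you'd log this periodically during the login):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector (e.g. young generation vs old generation)
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();  // number of collections so far (-1 if unavailable)
            long timeMs = gc.getCollectionTime();  // cumulative collection time, in ms
            System.out.printf("%-25s collections=%d totalTime=%dms%n",
                    gc.getName(), count, timeMs);
        }
    }
}
```

Comparing two snapshots taken a few seconds apart gives you the frequency and average duration directly, which is exactly the comparison you want between 'bad' and 'working' mode.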
Another possibility is that the JVM is partially swapped out, which makes the GC crawl, but you don't mention having different memory settings (or reaching a different maximum in the cases where it runs normally).
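On your other question (finding which thread is eating the CPU): the standard ThreadMXBean can report per-thread CPU time, so you can dump it from inside the process and compare snapshots. A minimal sketch (thread names and the millisecond formatting are just illustrative choices):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isThreadCpuTimeSupported()) {
            System.out.println("Per-thread CPU time not supported on this JVM");
            return;
        }
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            long cpuNanos = threads.getThreadCpuTime(id); // -1 if the thread has died
            if (info != null && cpuNanos > 0) {
                System.out.printf("%-35s %d ms%n",
                        info.getThreadName(), cpuNanos / 1000000L);
            }
        }
    }
}
```

Run it (or an equivalent snippet triggered from within your server) once at the start of the login and once near the end; whichever thread's CPU time jumped is your culprit.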
Thanks for the reply Frank.
I've installed the VisualGC plugin and looked at some of the memory settings, and here's what I've found:
The JVM has a max memory setting of 910M in both 'bad' mode and 'working' mode.
Perm gen is around 29M in 'bad' mode, vs 24M in 'working' mode.
GC frequency during login started off the same, but as time dragged on the frequency increased in 'bad' mode until collections were happening every few seconds. The duration of individual GCs appeared to be the same in both cases, and didn't increase as they became more frequent. The average time per GC (total GC time / number of collections) was about the same in both modes.
Class Loader Time looked different in the two cases. In 'bad' mode there was a saw-tooth pattern throughout the login, with approx. 3800 classes loaded and approx. 10s spent loading them. In 'good' mode the bar was solid, and the numbers didn't go beyond 3200 classes and 2s.
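To see whether those extra class loads actually line up with the slow part of the login, one thing I may try next is polling the standard ClassLoadingMXBean while the login runs. A rough sketch (the polling interval and iteration count are arbitrary; in the real server this would run alongside the login):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadWatcher {
    public static void main(String[] args) throws InterruptedException {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        long last = cl.getTotalLoadedClassCount();
        for (int i = 0; i < 3; i++) {
            Thread.sleep(200);
            long now = cl.getTotalLoadedClassCount();
            // Delta tells you whether class loading is still churning
            System.out.printf("loaded=%d (+%d since last poll)%n", now, now - last);
            last = now;
        }
    }
}
```

If the delta stays non-zero all the way through the 90s login in 'bad' mode, that would suggest something (perhaps embedded Tomcat's classpath scanning) keeps loading classes, rather than a one-off burst at startup.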
I also checked the page in / page out monitor (in Mac OS X). I could see a few page-ins happening in both modes, but they were very infrequent.
What I'm seeing looks (to my inexperienced eye) more like a symptom than a cause.