
Industry standard for max CPU usage

 
Nitin Gaur
Greenhorn
Posts: 27
Hello,

My question is: is there any industry standard for maximum CPU utilization for an enterprise Java app? Say 50%?

We have a Java EE 5 app running on WebLogic 9.2. One particular request was taking a long time to process and shooting CPU usage up to 40%. I was able to reduce that to 10% by rewriting the code and doing some manual profiling (checking start/end timings). But with 5 concurrent requests it still goes up to 35%, and our NFR calls for 20 concurrent users, at which point CPU usage will surely shoot up to 75%, which is far too high.
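By "manual profiling" I just mean logging start/end timings around the suspect code. Roughly like the sketch below (the helper class and names are illustrative, not our real code; the Callable wraps whatever step is under test):

    import java.util.concurrent.Callable;

    // Minimal start/end timing probe; the Callable wraps the code under test.
    public class TimingProbe {
        public static <T> T timed(String label, Callable<T> work) throws Exception {
            long start = System.nanoTime();
            try {
                return work.call();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1000000L;
                System.out.println(label + " took " + elapsedMs + " ms");
            }
        }
    }

Wrapping each candidate step in something like timed("stepName", ...) narrows down where the time actually goes.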

Thanks
-Nitin
 
steve souza
Ranch Hand
Posts: 862
I'd bet there isn't a standard.

You might even have CPU utilization of 100% and that could be fine. It all comes down to what your bottleneck is (if anything). If we can run at high CPU utilization and still meet our users' needs, that simply means we haven't bought a more powerful computer than we need. Why would you think you have a problem if your CPU utilization is high but you are meeting all of your users' needs?
 
Nitin Gaur
Greenhorn
Posts: 27
Steve,

CPU utilization of 100% is OK? The server admin won't appreciate that! But you've helped me rephrase my question:
I want to know the upper threshold of CPU utilization above which we should consider buying a more powerful computer, and I am sure it is something less than 100%.

-Nitin
 
Paul Clapham
Sheriff
Posts: 20980
That's because the CPU usage is not the measure you should be looking at. You should be asking whether your users are getting good response time. (Of course you have to have some idea of what is "good" in this context.) If they are, then it doesn't matter how much of the CPU you are using. If they aren't, then you have to find out why. The answer may or may not involve CPU usage.
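One cheap way to get that number for a web app is a servlet filter that logs how long each request takes. A minimal sketch (the class name and log format are just illustrative):

    import java.io.IOException;
    import javax.servlet.*;

    // Logs wall-clock time per request, so you can judge response times
    // directly instead of inferring anything from CPU usage.
    public class ResponseTimeFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long start = System.currentTimeMillis();
            try {
                chain.doFilter(req, res);
            } finally {
                System.out.println("request took "
                    + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }

Map it in web.xml to the URL patterns you care about and you have real response-time data to compare against whatever "good" means for your users.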
 
Pat Farrell
Rancher
Posts: 4678
In the olden days of mainframes, the whole operation was designed to keep the "system" as close to 100% utilization as possible, because the system was expensive and the boss did not want to buy a bigger mainframe.

In these days of nearly free servers, this is not how things are run.

Generally, the marketing department says something like: our customers will leave us and go to Google (or Slashdot or whatever) if the response to trivial requests is not within 100 milliseconds and the response to big requests is not under 1 second.

The engineers' mouths drop to the floor, and the engineering manager says "then get me 100 thousand servers" to which the marketing guys say "we don't have that kind of money".

What is then typically done is to negotiate a Service Level Agreement. You say "we want transactions X, Y, and Z to be completed in under 100 milliseconds 95% of the time, and transactions A, B, and C to be completed in under 1.5 seconds 95% of the time."
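Checking measured numbers against a target like that is simple arithmetic. A sketch using the nearest-rank percentile (the sample data is made up):

    import java.util.Arrays;

    // Does the given percentile of measured latencies meet the SLA limit?
    public class SlaCheck {
        static boolean meetsSla(long[] latenciesMs, long limitMs, double percentile) {
            long[] sorted = latenciesMs.clone();
            Arrays.sort(sorted);
            // nearest-rank: the value below which `percentile` of samples fall
            int idx = (int) Math.ceil(percentile * sorted.length) - 1;
            return sorted[Math.max(idx, 0)] <= limitMs;
        }

        public static void main(String[] args) {
            long[] sample = { 40, 55, 60, 70, 80, 90, 95, 110, 120, 400 };
            // "completed in under 100 milliseconds 95% of the time"
            System.out.println(meetsSla(sample, 100, 0.95)); // false for this data
        }
    }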

From that, you can engineer a solution. But to engineer it, you need to know how many simultaneous users to design for. It's one thing to handle 100 simultaneous users; it's quite another to keep responses fast when you are on the front page of Digg.com or Slashdot.com.
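Even a crude load driver gives you a feel for that before the front page of Digg does. A sketch that fires N concurrent simulated users at one operation (callMyApp() here is just a placeholder for issuing a real request):

    import java.util.concurrent.*;

    // Crude load test: N threads each time one call to the app.
    public class LoadDriver {
        static void callMyApp() throws Exception {
            Thread.sleep(50); // stand-in for a real request
        }

        public static void main(String[] args) throws Exception {
            final int users = 20;
            ExecutorService pool = Executors.newFixedThreadPool(users);
            final CountDownLatch done = new CountDownLatch(users);
            for (int i = 0; i < users; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            long start = System.currentTimeMillis();
                            callMyApp();
                            System.out.println(
                                (System.currentTimeMillis() - start) + " ms");
                        } catch (Exception e) {
                            e.printStackTrace();
                        } finally {
                            done.countDown();
                        }
                    }
                });
            }
            done.await();
            pool.shutdown();
        }
    }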

 
R van Vliet
Ranch Hand
Posts: 144
Nitin Gaur wrote:Steve,

CPU utilization of 100% is OK? The server admin won't appreciate that! But you've helped me rephrase my question:
I want to know the upper threshold of CPU utilization above which we should consider buying a more powerful computer, and I am sure it is something less than 100%.

-Nitin


Frankly, the solution is to hire a better server administrator. Machines that do *not* use the vast majority of their processing power are being wasted. Having two machines running apps with a peak CPU load of 40% pretty much means you've wasted the money on the second server, plus the energy overhead of running two machines rather than one at full capacity. The reality is obviously not quite that black and white, but the point stands. Basically, I'd fire a server admin who forces me to buy twice the hardware the actual demand requires.

Thresholds for buying new or better hardware should be based almost exclusively on current or, much preferred, projected degradation in quality of service. Hardware usage metrics can obviously be an indicator of this, but in my opinion they aren't what you should base these decisions on.
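If you do want a usage number to watch alongside your quality-of-service metrics, the standard JDK exposes a coarse one. A sketch (getSystemLoadAverage() is available since Java 6 and returns -1.0 on platforms where it isn't supported):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    // System load average as a secondary indicator, to be read alongside
    // response-time data rather than instead of it.
    public class LoadIndicator {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            System.out.println("1-minute load average: " + os.getSystemLoadAverage()
                + " over " + os.getAvailableProcessors() + " processors");
        }
    }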
 