I could not easily tell from your posts whether you are trying to run the tests on the same computer as the Tomcat server. I think you are running them on a different box, but I am not sure. For the benefit of anyone else reading this post who is not sure why this matters: rule 1 of stress testing is not to run anything else on the computer that is running the system under stress. In other words, if you were testing a Tomcat application that has database connections and sends emails out, you would have multiple servers in the mix: one that runs only Tomcat and your web application, one that runs the database (and possibly the email sink), and one (or more) that runs the testing software.
You mentioned that all you were able to find on JMeter was for testing on a local server - there is far more to JMeter than that. JMeter has capabilities that were designed specifically for stress testing. When doing a stress test you would normally set JMeter up to write the most basic statistics out to a file and process them later - processing in real time is a bad idea. Normally you would also want to run JMeter in headless (non-GUI) mode. You may also want to run multiple JMeter instances in server mode, all controlled by one master JMeter application.
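As a rough illustration (the test plan name, results file, and host addresses here are invented), a headless run and a distributed run driven from one controlling machine look like this:

# headless (non-GUI) run, writing raw results to a file for later processing
jmeter -n -t test-plan.jmx -l results.jtl

# distributed run: start jmeter-server on each load-generating box first,
# then drive them all from the controlling machine
jmeter -n -t test-plan.jmx -R 192.168.1.10,192.168.1.11 -l results.jtl

The results file can be opened in a listener afterwards, or processed with your own tooling.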
vinod kumar h v wrote:
Now tell me how to measure the stress? This is the current work I have done... what are the other things that I need to do?
Once again, I think you might want to look at JMeter - not necessarily to use it if you are already happy with what your program is doing, but to see what statistics it gives you.
Most people care about transactions per second (TPS), or throughput, and the elapsed time of each transaction. You probably also care about the number of successful transactions versus the number of failed ones. Minimum, maximum, average, and standard deviation are all good statistics for those metrics; a small sketch of computing them follows.
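Purely as an illustration (the class and method names are made up), here is one way to compute those numbers from the per-request latencies your own test client records:

import java.util.List;

public class StressStats {
    public static void report(List<Long> latenciesMillis, long testDurationMillis, long errors) {
        if (latenciesMillis.isEmpty()) return;

        long min = Long.MAX_VALUE, max = Long.MIN_VALUE, sum = 0;
        for (long l : latenciesMillis) {
            min = Math.min(min, l);
            max = Math.max(max, l);
            sum += l;
        }
        double avg = (double) sum / latenciesMillis.size();

        // standard deviation of the latencies
        double sqDiff = 0;
        for (long l : latenciesMillis) {
            sqDiff += (l - avg) * (l - avg);
        }
        double stdDev = Math.sqrt(sqDiff / latenciesMillis.size());

        // throughput: completed transactions per second over the whole run
        double tps = latenciesMillis.size() / (testDurationMillis / 1000.0);

        System.out.printf("TPS=%.1f min=%dms max=%dms avg=%.1fms stddev=%.1fms errors=%d%n",
                tps, min, max, avg, stdDev, errors);
    }
}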
JMeter also has plugins that allow you to monitor CPU and memory load. It is good to monitor these on both the computer under load and the computer running the test. I have been caught before in a situation where we ramped up the load to such a point that the computer running the test could no longer handle the TPS, so we were seeing artificially low numbers. Once we spread the load across 4 test clients, our throughput became reasonable.
Unlike a plain load test tool, I would expect any decent stress test tool to have a configurable "ramp" for the test. As an example, I might want to run 1000 loops, where each loop starts at 500 TPS and ramps up to 1500 TPS. Then I would graph those results so that I can see where we consistently reach the edges of reasonable load.
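For what it is worth, here is a bare-bones sketch of such a ramp (all the constants and sendRequest() are invented; a real driver would use a pool of threads and account for the time each request itself takes):

import java.util.concurrent.TimeUnit;

public class RampDriver {
    static final int START_TPS = 500;
    static final int END_TPS = 1500;
    static final int STEP_TPS = 100;
    static final long STEP_DURATION_MS = 10_000; // hold each rate for 10 seconds

    public static void main(String[] args) throws InterruptedException {
        for (int tps = START_TPS; tps <= END_TPS; tps += STEP_TPS) {
            long intervalNanos = 1_000_000_000L / tps; // gap between sends at this rate
            long stepEnd = System.currentTimeMillis() + STEP_DURATION_MS;
            while (System.currentTimeMillis() < stepEnd) {
                sendRequest();
                TimeUnit.NANOSECONDS.sleep(intervalNanos);
            }
            System.out.println("completed step at " + tps + " TPS");
        }
    }

    static void sendRequest() {
        // placeholder: fire one request at the system under test
    }
}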
Note that it might not be sensible to try to fix performance issues at the failure point. What you will typically find is that an application's throughput scales almost linearly with load up to a certain point, then starts leveling off. Find that point, fix the bottleneck there, then go back and run your tests and fixes all over again. This way you will be fixing the problems that actually affect your users first. If you stress your application to breaking point and then try to fix the biggest bottleneck, you may be fixing a problem that only affects a small number of users.
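To make "find that point" concrete, here is a naive sketch: given (offered load, measured TPS) pairs from your ramp runs, flag the first step where measured throughput falls below 90% of what linear scaling from the first step would predict. The 90% threshold and the sample data are made up for illustration.

public class KneeFinder {
    public static void main(String[] args) {
        int[] offered     = { 500, 600, 700, 800, 900, 1000 };
        double[] measured = { 498, 597, 695, 760, 790, 800 };

        double perUnit = measured[0] / offered[0]; // TPS achieved per unit of offered load
        for (int i = 1; i < offered.length; i++) {
            double expected = perUnit * offered[i];
            if (measured[i] < 0.9 * expected) {
                System.out.println("Throughput levels off around " + offered[i]
                        + " offered TPS (measured " + measured[i]
                        + ", linear trend predicted " + expected + ")");
                return;
            }
        }
        System.out.println("No leveling off detected in this range");
    }
}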