Hi all, this is Vijay, with a question about performance in a Java web application. We are using Ext JS as the front end, JBoss as the application server, and MySQL as the back end, with Mule as the ESB. The grid loads 100,000 rows from the database: for a single user the query takes 4 seconds, but for 100 concurrent users it takes 50 seconds. Why isn't it under 10 seconds for 100 users? We need to bring the time below 10 seconds. For performance testing we are using JMeter.
First you should find out why performance degrades like this at 100 users. Use a JVM monitoring tool (JConsole, JProfiler, VisualVM) to watch both your web app and the ESB, and try to determine whether the bottleneck is in your application or in the ESB.
One reason performance can drop with more users is running out of memory. When the heap fills up (which is more likely with more users), the GC runs more often and consumes a lot of CPU time. In that case you could try increasing the JVM memory settings.
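For example, JBoss's startup scripts pick up options from `JAVA_OPTS`; something like this would raise the heap limits and log GC activity so you can see whether GC is really the problem (the sizes here are illustrative only — tune them against what the monitoring tools show you):

```shell
# Larger fixed-size heap plus GC logging (standard HotSpot flags;
# the 1024m/2048m values are example sizes, not recommendations)
JAVA_OPTS="-Xms1024m -Xmx2048m -XX:+PrintGCDetails -Xloggc:gc.log"
```

If `gc.log` shows frequent full collections under load, memory is at least part of your problem.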
Your first step is always to determine where the bottlenecks are. Is your DB slow? Is it the computation you are doing? Is your algorithm O(n^2) (or something even worse)?
You can spend months and thousands of dollars chasing what you think is the problem and making incremental improvements, or you can figure out where the real issue is and focus on that. To do this, you need some kind of profiler that tells you where the delays are.
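Even without a full profiler, you can get a first impression by timing the suspect stages yourself. A minimal sketch — the stage names and the sleeps standing in for real work are made up for illustration:

```java
// Crude stage timing: wrap each suspect stage and compare elapsed times.
public class StageTimer {

    // Runs a stage, prints how long it took, and returns the elapsed milliseconds.
    static long time(String label, Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + " took " + elapsedMs + " ms");
        return elapsedMs;
    }

    // Placeholder for real work (a DB query, serialization, etc.).
    static void busyWork(int millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Stand-ins for "run the query" and "serialize the result".
        time("query", () -> busyWork(50));
        time("serialize", () -> busyWork(5));
        // The bigger number tells you where to dig deeper.
    }
}
```

Run the same measurements under JMeter load; a stage whose time balloons from 1 user to 100 is your bottleneck.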
I'm not exactly sure whether that's 100,000 rows; but if it is, might this not be a place to start looking?
100,000 rows is an awful lot of data; and if you multiply that by 100 users you may be talking about a LOT of network traffic.
As the others have said, tracking down bottlenecks is not easy; but that's the statement that leapt out at me immediately.
Is there some reason you need to get 100,000 rows?
Does every user need to get all 100,000 rows every time?
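If they don't, paging is the usual fix: Ext JS grids support paged stores, and the MySQL side then only fetches one page per request. A sketch of building the paged query — the table and column names here are made up:

```java
// Build a paged query so each grid request fetches one page,
// not all 100,000 rows. "orders", "id", "name" are hypothetical names.
public class PagedQuery {

    // pageSize rows starting at a 0-based pageNumber.
    static String pageSql(int pageNumber, int pageSize) {
        int offset = pageNumber * pageSize;
        return "SELECT id, name FROM orders ORDER BY id"
             + " LIMIT " + pageSize + " OFFSET " + offset;
    }
}
```

With a page size of 50, each of your 100 users pulls 50 rows per request instead of 100,000 — a 2000-fold reduction in rows on the wire.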
Could query results be cached in some way? Most DBs have techniques for throttling or caching the amount of data returned for any given query; or maybe your application could consolidate requests in some way.
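As a sketch of the application-side caching idea: if 100 users ask for the same result, only the first request should hit the database. The key scheme and the `String` row type here are illustrative assumptions, not part of any framework:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Toy result cache keyed by a query identifier. Real caches also need
// expiry/invalidation, which is omitted here for brevity.
public class QueryCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    // Returns the cached result, running the loader only on the first miss.
    public List<String> get(String queryKey, Supplier<List<String>> loader) {
        return cache.computeIfAbsent(queryKey, k -> loader.get());
    }
}
```

The hard part in practice is invalidation: you must evict or refresh an entry whenever the underlying data changes, or users will see stale rows.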
At the end of the day, if you absolutely must do it, then you must; and perhaps throwing some money at a bigger network pipe or more server memory might ease the problem. However, when I see figures like that, I always worry about scalability.
My 2 cents. FWIW.