How to code review for performance?

 
Vikram Chandna
Greenhorn
Posts: 9
Hi,
I have been assigned the task of reviewing the code for performance issues. I have been handed a checklist (with suggestions like StringBuffer and ArrayList usage, setting object handles to null, no print statements, etc.). Beyond this I need to find redundant processing as well. I am a performance tester with limited development experience, though I am adept with the Java language. I am not happy with the way it has gone so far. I have begun with the foundation classes that are used most commonly. The classes are long, with about 50 methods each, and I am just quickly running through the code. I am not convinced that this is the best way to handle a code review. I am looking for suggestions on how the code review process could be improved and sped up.
The kinds of things I have found are StringBuffer use cases, references not set to null, etc. I have found just a few code-level things (not on the checklist, and requiring more than a cursory look), like fetching entire table rows to do a row count when a simple count(PK) would have sufficed. Just to emphasise that such findings are few because: a. my application design knowledge is limited, and b. I have to run through a large number of lines. If I start trying to understand every line of code to check for scope for efficiency, I will spend more time learning the application and its coding techniques than reviewing it.
Thanks for your thoughts.
Vikram
 
Stan James
(instanceof Sidekick)
Posts: 8791
Generally, eyeballing code for details like string concatenation will not find the real bottlenecks. There are profilers around that will watch your code run and tell you how long various methods take or which methods are chewing up the most time. These often point out algorithmic "opportunities for improvement" that you'd never spot intuitively.
Say you were asked to visually compare a bubble sort to a quick sort. Hmmm, ten lines of code, simple nested loops vs maybe a hundred lines, recursion, very complex logic. Which would you choose in your code reviews?
There are static source code analysis tools that automate an amazing array of checks on correctness, style, performance, standards, etc. JTest from Parasoft is a good commercial example. It can do things like suggest StringBuffer instead of string concatenation.
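For example, the classic rewrite such tools flag might look like this (just a sketch, in pre-JDK-1.5 style, hence StringBuffer):

    public class ConcatDemo {
        // Concatenation in a loop copies the growing result on every
        // pass, so the whole thing is O(n^2) in the total length.
        static String joinSlow(String[] words) {
            String result = "";
            for (int i = 0; i < words.length; i++) {
                result += words[i];
            }
            return result;
        }

        // A single StringBuffer appends in place: O(n) overall.
        static String joinFast(String[] words) {
            StringBuffer buf = new StringBuffer();
            for (int i = 0; i < words.length; i++) {
                buf.append(words[i]);
            }
            return buf.toString();
        }
    }

Whether this matters in practice depends entirely on whether the loop is hot, which is exactly what a profiler tells you.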
 
Ilja Preuss
author
Posts: 14112

Originally posted by Vikram Chandna:
I have been assigned the task of reviewing the code for performance issues.


In my opinion, trying to guess at performance issues by looking at the code doesn't work well. Using a profiler to *measure* performance is much more effective.
What performance problems do you encounter? What exactly is the goal of your code review?

I have been handed a checklist (with suggestions like StringBuffer and ArrayList usage, setting object handles to null, no print statements, etc.).


With all due respect, to me this list seems to be totally useless. In most cases, using StringBuffer, setting references to null, etc. will make virtually *no* difference in performance. You need to apply those techniques in a very well-directed way to see any effect. Most often the algorithms used, intelligent caching, etc. will have a much more direct impact.

fetching entire table rows to do a row count when a simple count(PK) would have sufficed


Just as an aside, a "count(*)" is way faster than a count on specific columns, as far as I know.
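To make the row-count example concrete, here is a sketch of the two approaches in JDBC (the "orders" table name is invented for illustration):

    import java.sql.*;

    public class RowCount {
        // Wasteful: drags every row across the wire just to count them.
        static int countByFetching(Connection con) throws SQLException {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT * FROM orders");
            int n = 0;
            while (rs.next()) {
                n++;
            }
            rs.close();
            st.close();
            return n;
        }

        // Lets the database do the counting; a single value comes back.
        static int countInDatabase(Connection con) throws SQLException {
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders");
            rs.next();
            int n = rs.getInt(1);
            rs.close();
            st.close();
            return n;
        }
    }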

Just to emphasise that such findings are few because: a. my application design knowledge is limited, and b. I have to run through a large number of lines. If I start trying to understand every line of code to check for scope for efficiency, I will spend more time learning the application and its coding techniques than reviewing it.
Thanks for your thoughts.


I would propose something along the lines of the following process:
- identify a performance problem (like "when I click this button, the system takes 5 seconds to respond. It is required to only take half a second at max.")
- sit down with a developer intimate with the code. Write a performance test highlighting the problem. Profile the test to find where the code spends its time (a minimal sketch of such a test follows this list).
- try to fix the performance problem. If a fix you tried didn't help, but made the code more complex, undo it.
- once you fixed the problem, reflect on what you learned. Communicate your learnings to your teammates.
rinse, repeat.
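Here is the sketch mentioned above (the OrderScreen class and the half-second budget are invented for illustration):

    // A crude performance test that makes the problem reproducible.
    // Run it under a profiler to see where the time actually goes.
    public class ButtonResponseTest {
        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            new OrderScreen().refresh();   // hypothetical slow operation
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("refresh() took " + elapsed + " ms");
            if (elapsed > 500) {           // the stated requirement
                throw new RuntimeException("Too slow: " + elapsed + " ms");
            }
        }
    }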
Did that help?
 
Jim Yingst
Wanderer
Posts: 18671
I agree with Stan and Ilja. (Which is good practice in general, I think.) If you do a code review for performance that's not driven by observations of specific performance problems you have, and not heavily based on the results of profiling tools, then most likely you will generate a lot of nitpicking advice which may make you feel like better programmers, but which has very little actual effect on performance.
I suppose if you're dealing with novice or even intermediate programmers, there may be some worthwhile things you can say about how to improve their code without actually measuring its performance with a profiler - but if so, it usually falls under the category of simplifying to improve readability and maintainability. Often, making things simpler and easier to understand also makes them faster. If so, great; if not, you really need profiling data before you can give any other useful advice. So doing a code review for readability and maintainability makes sense; reviewing for performance probably does not.
[VC]: fetching entire table rows to do a row count when a simple count(PK) would have sufficed
[IP]: Just as an aside, a "count(*)" is way faster than a count on specific columns, as far as I know.

This may depend on the DB. I remember once getting a performance boost by replacing count(*) with count('x'), since using a literal made it clearer that there was no need to retrieve any data at all. This was something like five years ago, and I don't recall what DB I was using - I suspect that most quality commercial products these days have SQL interpreters smart enough not to retrieve any unneeded data in either case, but maybe not. Alternatively, some DBs may have a standard optimization for count(*), but may not recognize count('x') as something they could optimize the same way. So I'd hesitate to give any absolute rules here - if the performance of this particular query is worth optimizing, try both count(*) and count('x') for your particular application, and see if it makes a difference. Which is the way it goes for most performance optimizations.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Jim Yingst:
[IP]: Just as an aside, a "count(*)" is way faster than a count on specific columns, as far as I know.
This may depend on the DB. I remember once getting a performance boost by replacing count(*) with count('x'), since using a literal made it clearer that there was no need to retrieve any data at all.


That's interesting - I didn't even know that was legal SQL.
"count(column)", on the other hand, counts the number of rows with non-null values in the specified column, so I guess it must be invariably slower than "count(*)". I could be wrogn.
 
Jim Yingst
Wanderer
Posts: 18671
Sorry, when I said "this may depend on the DB" I really meant that what I was about to say might depend on the DB. (Regarding the select count('x')). I agree that it seems pretty likely that count(*) will be faster than count(column_name).
I could be wrogn.
 
Vikram Chandna
Greenhorn
Posts: 9
Stan, Ilja & Jim, thanks for spending your precious time in helping me out.
I will synchronize my code review activities with the profiling activities. It does make sense to have tools do the code check. While I haven't used any tool thus far, I would say the tool should work as follows:
a. It finds the easy-to-discover things like string concatenation, Vector usage, etc. Further, it should be rule-based and allow us to build our own rules.
b. It selectively raises a red flag for methods that appear to have scope for being more efficient. I am not exactly sure how to determine the rules for raising the red flag, but I guess a few checks could be evolved.
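Just to make the rule-based idea concrete, a toy version of such a checker might look like the following (two invented rules; real tools like JTest parse the source properly instead of pattern-matching lines):

    import java.io.*;

    // Usage: java ToyChecker Foo.java
    public class ToyChecker {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            int lineNo = 0;
            while ((line = in.readLine()) != null) {
                lineNo++;
                // Rule 1: Vector is synchronized; ArrayList is usually enough.
                if (line.indexOf("new Vector") >= 0) {
                    System.out.println(lineNo + ": consider ArrayList instead of Vector");
                }
                // Rule 2: crude hint of String concatenation in a loop.
                if (line.indexOf("+=") >= 0 && line.indexOf("String") >= 0) {
                    System.out.println(lineNo + ": possible String concatenation; consider StringBuffer");
                }
            }
            in.close();
        }
    }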
Thanks anyway. If I make any discovery on how to better manage code review, I will share it with this group.
Regards,
Vikram
[ September 30, 2003: Message edited by: Vikram Chandna ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Vikram Chandna:
Stan, Ilja & Jim, thanks for spending your precious time in helping me out.


You're welcome!


Thanks anyway. If I make any discovery on how to better manage code review, I will share it with this group.


I would still be very interested in the reason for doing the review at all. If you'd like to elaborate on the circumstances...
 
Stan James
(instanceof Sidekick)
Posts: 8791
I guess we all said there's little point in visually examining for little things like StringBuffers. But there is value in visually examining designs at a higher level. Your SQL example was a good one. I work on an n-tier app: fat client, server, MQ-Series to legacy systems, etc. An easy thing to look for is too many trips across the wire between boxes, or even between processes on the server (Forte 4GL, not Java). For example, AFTER a particular web page was found to be too slow, we looked at the code and found it retrieved a complex object graph, appended one or two little nodes, and sent the whole graph back to the server, which rewrote all kinds of database stuff because it didn't know exactly what had changed. A new server API to simply append the new nodes knocked seconds off this thing. But the original programmer was probably correct to try the existing APIs (which happen to be perfect for the fat client) before optimizing. Especially because "deliver on time" was way up the list from "run fast" in the management priorities at the time.
 
Author and all-around good cowpoke
Posts: 13078
The Java Performance Tuning web site is a great resource.
 
John Smith
Ranch Hand
Posts: 2937
a. It finds the easy-to-discover things like string concatenation, Vector usage, etc. Further, it should be rule-based and allow us to build our own rules.
b. It selectively raises a red flag for methods that appear to have scope for being more efficient. I am not exactly sure how to determine the rules for raising the red flag, but I guess a few checks could be evolved.

A Java profiler (such as OptimizeIt) actually doesn't work in that manner. Instead of scanning the source code for potential bottlenecks and "raising the red flag", the profiler simply records the time whenever a method is called, while your application is running. This approach makes it much more objective and easily reveals the real bottlenecks, as opposed to the speculative/imaginary ones. After the run, the profiler builds a tree where you can see the sequence of method calls, the number of times each method was called, and the total time spent in each method. The rest is straightforward: identify the 5% of the code where your app spends 95% of its time, and refactor.
 
Vikram Chandna
Greenhorn
Posts: 9
Ilja, the circumstance is that our system has gone live and there are reports flying in from everywhere of slow response times. Now we do have developers being asked to look at their respective code to make it more efficient performance-wise. Only one person, though, is set up for JProbe, as we have only one license. I am a wild-card resource and have been asked to review the util classes, which are common to most transactions.
My limitations are very limited functional knowledge and little J2EE development experience, though I have no problem with the Java language per se.
What I did today: I went to different developers and asked them each to review one method, and I observed their approach. What I found was that the person dedicated to performance analysis was able to find the most issues, though he himself doesn't have much idea about the business logic and has limited development experience. But I found he had a 'sharp' eye. The other developers I went to were more keen on trying to understand what the method was doing and where it was being called. They couldn't see much because they were looking for efficiency at a higher level. The kinds of things the performance analyst found were declaring a variable inside a loop, checking a value inside a loop when it could have been done outside, and evaluating the upper limit of a for loop on every iteration. His style did rub off a bit on me and I am doing better than yesterday. Apart from that, the checklist has become a bit more ingrained in my mind, so I am doing a bit better.
At my level it's all about speed and efficiency, and not necessarily finding the low-hanging fruit that takes care of 80% of our problems. The faster I can compile findings at the class/method level and disseminate them to the developers, who are in rectification mode, the more we will achieve.
Eugene, naturally a profiler is transaction-specific and will not get the kind of stats that I am looking for. A lot of the stuff I am doing could be automated. That's where JTest could have come into play. I was just imagining what an ideal tool could have done for us: I could have delegated a lot of the routine work to it, while I spent more time looking at logic and design. Since I don't have it, I am trying to be as fast as I can while catching as many performance bugs as I can.
Regards,
Vikram
 
John Smith
Ranch Hand
Posts: 2937
What I am trying to say is that trying to optimize the code without a profiler, by just looking at one method, one class at a time, is a waste of time. You just don't know where the performance issues are, and it is unlikely that you will pinpoint them without actually running your app with a profiler.
I spent a lot of time optimizing a large app that we have (about 4,000 classes), and the biggest surprise was that the performance problems were not due to the well-known issues, such as String concatenation or I/O buffering, but rather to using the wrong data structures and algorithms. After running the profiler and isolating the biggest offenders in the code, we refactored them and achieved enormous performance gains (many parts of the system now run as much as 50 times faster).
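To illustrate the kind of data-structure fix that shows up in such refactorings (a generic sketch, not code from that app):

    import java.util.*;

    public class LookupDemo {
        // O(n*m): List.contains scans the whole list for every candidate.
        static boolean anyKnownSlow(List candidates, List known) {
            for (Iterator it = candidates.iterator(); it.hasNext();) {
                if (known.contains(it.next())) {
                    return true;
                }
            }
            return false;
        }

        // Build a HashSet once; each lookup then takes constant time.
        static boolean anyKnownFast(List candidates, List known) {
            Set lookup = new HashSet(known);
            for (Iterator it = candidates.iterator(); it.hasNext();) {
                if (lookup.contains(it.next())) {
                    return true;
                }
            }
            return false;
        }
    }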
 
Ilja Preuss
author
Posts: 14112
Vikram, thanks for your detailed report!

Originally posted by Vikram Chandna:
Ilja, the circumstance is that our system has gone live and there are reports flying in from everywhere of slow response times. Now we do have developers being asked to look at their respective code to make it more efficient performance-wise.


How do you know which part of the code is responsible for the bad performance? BTW, is it really overall bad performance? Is good performance equally important for all use cases? Or could you start by optimizing the small part of the system which is used 80% of the time by your users?

Only one person, though, is set up for JProbe, as we have only one license.


Then get some free profilers for the other developers. Use the one built into the JDK if nothing else.

The kinds of things the performance analyst found were declaring a variable inside a loop, checking a value inside a loop when it could have been done outside, and evaluating the upper limit of a for loop on every iteration.


Did you run before-and-after performance tests? It's likely that those "optimizations" didn't do much for overall performance. Especially the "declaring a variable inside a loop" thing - as far as I know, it even generates identical bytecode! In other words, the declaration of a local variable is a no-op.
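A quick way to convince yourself (a minimal demo; compare the compiled methods with "javap -c LoopDeclarationDemo"):

    public class LoopDeclarationDemo {
        static void process(String s) { /* stand-in for real work */ }

        // Declared inside the loop...
        static void declaredInside(String[] items) {
            for (int i = 0; i < items.length; i++) {
                String s = items[i];
                process(s);
            }
        }

        // ...and declared outside: the compiled bytecode comes out
        // essentially identical, so hoisting the declaration buys nothing.
        static void declaredOutside(String[] items) {
            String s;
            for (int i = 0; i < items.length; i++) {
                s = items[i];
                process(s);
            }
        }
    }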

At my level it's all about speed and efficiency, and not necessarily finding the low-hanging fruit that takes care of 80% of our problems. The faster I can compile findings at the class/method level and disseminate them to the developers, who are in rectification mode, the more we will achieve.


I don't buy this. To me this is like saying "I don't have time to study satellite photos for hints of oil reserves - it will be more effective to use the time to send out hordes of drill teams and just collect the oil they find by sheer luck."

A lot of the stuff I am doing could be automated. That's where JTest could have come into play. I was just imagining what an ideal tool could have done for us.


I actually think that it wouldn't have made a big difference. I'd *bet* that your performance problem lies somewhere else!
Keep us up-to-date!
 
Mani
Ranch Hand
Posts: 1140

Originally posted by Vikram Chandna:
Ilja, the circumstance is that our system has gone live and there are reports flying in from everywhere of slow response times....


deja vu!
From my experience:
Some time back, we were asked to do something exactly like what you are doing now. We started reviewing the code to optimize it. After 3 weeks, the result was zero. Some methods were responding a bit faster, but there was no improvement in the application as such. Even using the profilers didn't help much.
The idea is simple: the strength of any chain lies in its weakest link.
Say your methods take 4 seconds to complete, but the slow network takes 5 seconds; the result for the end user is 5 seconds, no matter how much you tune your code! You should start tuning the network to see any performance gains.
Only profilers can tell you this, not a code review. And better not to stick only to Java code profilers; try using something to monitor the operating system, network, database, external interfaces, etc.
Just for the statistics:
Initial condition: Avg. time = 15 secs (user acceptable level < 8 secs)
After code reviews and tuning: Avg. time = 15 secs!
After network & OS tuning: Avg. time = 3 secs
After network & OS tuning, but with the initial un-optimized code: Avg total time = 3.5 secs
To conclude: five of us spent 3 weeks and optimized away 0.5 secs, which is of practically no use compared to the user acceptance level!
 
Vikram Chandna
Greenhorn
Posts: 9
Ilja, my code review could make a difference. When I started the work I also was not sure what I could get out of it. I argued with the performance analyst about the efficacy of generic code reviews. The presumption behind 'code reviews do not help' is that the code is reasonable. We have had a lot of instances of poor programming, so at least a review of the foundation classes might help. Anyway, we do have people working on specific cases. At this point in time I think the best thing we could have done for performance would have been to distribute a performance checklist to developers at the coding stage and have peer reviews look out for those items. That would have been far more effective than anything else.
Mani, it seems like every case is different. Most analysts have told me to always look at the code and design first and then get on to OS and web server tuning, etc. Thanks for your input anyway. Could you point out the OS parameters that helped in tuning? I have been keen to find a composite reading that tells whether the OS is in a healthy state or not. For example, we know an Oracle database is well tuned when it is able to fetch the data for most of its requests from the cache (I don't recall the counter name).
Any free profilers out there?
Our performance testing and analysis activities are disjoint, so we don't measure the impact. I very much wanted a coordinated approach, but the performance testing is controlled by the client, and their QA told me that his job is to verify performance against the SLA requirements.
The problems are coming from everywhere and I suspect it is due to high CPU usage. As such, everyone is on the job. We are taking up cases as they get reported. So first it was orders, then pricing, then returns; now an EJB-based batch module is being redone with JDBC. It's pretty much all hands on deck.
Just for your info: our application is replacing mainframes. We have a stateless design (meaning no session is maintained; all info travels between client and server), all communication is XML-based (all code pieces write to XML, then the EJB does the job of writing to the DB), the network bandwidth is only 1Mb and is also shared with email and sysadmin activities, the stack is a Swing client with AIX, WebSphere, and DB2 on a 4-CPU, 4GB box, there are many external interfaces for order processing, and we have implemented compression for requests and responses to help reduce the traffic (the average response size is 10KB). The target is 200 users.
Six months ago I was in the project for a month doing performance testing when they pulled me out for other projects. I have just returned to assist. My initial observation was that at 5 users a dual CPU was hitting 100%.
Now this is what interests me most in the project: I very much feel that the current infrastructure will not support 200 users. I am keen to see after how many days my project manager will put his hands up and say that is all we can achieve on this box. This decision is very important, because for one week's salary of the performance developers and testers we could purchase an additional box and maybe achieve the elusive performance figures. Although to come to this decision we would have to spend some time on scalability testing. My project manager doesn't realise this because this is his first major performance tuning exercise.
I am keen to contribute the most I can on my assignment, and also to carefully watch the effectiveness of the exercise and learn from it.
regards,
Vikram
[ October 01, 2003: Message edited by: Vikram Chandna ]
[ October 01, 2003: Message edited by: Vikram Chandna ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Vikram Chandna:
The presumption behind 'code reviews do not help' is that the code is reasonable.


I wouldn't say that code reviews are useless. Actually, I personally prefer Pair Programming.
I just think that code reviews are far from being the most effective technique for fixing a performance problem.
For free Java profilers, google for, well, "free java profiler". Also note that the JDK already comes with one: http://developer.java.sun.com/developer/TechTips/2000/tt0124.html#tip2
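For example, the hprof agent that has shipped with the JDK for years can be enabled like this (MyApp is a placeholder for your main class):

    java -Xrunhprof:cpu=samples,depth=8 MyApp

When the VM exits it writes a java.hprof.txt file that ranks the methods in which the most CPU samples landed - crude compared to JProbe, but free and surprisingly useful.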
 
Jim Yingst
Wanderer
Posts: 18671
The presumption behind 'code reviews do not help' is that the code is reasonable.
I think we (or at least I) have been saying that code reviews targeted specifically at performance are unlikely to help much (or are likely to be quite inefficient in terms of where you spend your time). A code review could also be targeted more generally, putting more emphasis on other elements like readability and program logic. If code is "unreasonable" it probably isn't very readable; if it were more readable, its other problems would be more obvious. Not always, but very often, in my experience. So if you want code reviews which emphasize readability, I say great; this will probably help performance too, as well as any other problems down the road. If you target performance only (without profiling first), I think you'll waste most of the time spent.
 
Peter den Haan
author
Posts: 3252

Originally posted by Ilja Preuss:
"Kind of things that performance analyst found out were- declaring variable inside a loop, checking a value inside the loop when it could have been done outside and evaluation the value of upper limit in for loop for every iteration."
Did you run before- and after-performance tests? It's likely that those "optimizations" didn't do much to overall performance. Especially the "declaring variable inside loop" thing - as far as I know, it even generates identical byte code! With other words, declaration of a local variable is a no-op.

Quite. Congratulations to the keen-eyed performance analyst -- (s)he has just succeeded in making the code less readable and maintainable for exactly zero gain. Started life as a C programmer, perchance, predating properly optimizing compilers?
The entire enterprise strikes me as total madness. If your company were a hospital, it would probably be trying to cure its patients by sending them straight into the operating theatre with a dozen blindfolded surgeons making arbitrary incisions in the hope that one of them will hit whatever organ it is that's causing the problems. Dear Vikram, please tell your manager that this is not how hospitals are usually run. The procedure I'm used to is that they take blood tests, make X-rays and the like, and only operate once a clear diagnosis has been made on the basis of the evidence thus gathered.
In the software development trade, your X-ray machine is the profiler. I can only repeat the point that others have made above.
Not that it isn't worth reviewing the codebase; if it is as poor as you indicate, it will be a major source of bugs and maintenance problems. You will want to review it for algorithms and design, though, not for performance. But that comes later; if the immediate problem is performance, then that's where the effort should be directed right now.
- Peter
[ October 09, 2003: Message edited by: Peter den Haan ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Peter den Haan:
The entire enterprise strikes me as total madness. If your company were a hospital, it would probably be trying to cure its patients by sending them straight into the operating theatre with a dozen blindfolded surgeons making arbitrary incisions in the hope that one of them will hit whatever organ it is that's causing the problems.


ROFL

Not that it isn't worth reviewing the codebase; if it is as poor as you indicate, it will be a major source of bugs and maintenance problems.


Fully agreed. I have seen bug lists drop by significant amounts just due to appropriate refactoring (removing duplicated code, splitting "god classes", etc.).
 
Chris
Greenhorn
Posts: 12
All good comments above.
I guess I could add: make sure you know what your target performance requirements are. Otherwise, how will you know when you are finished?
Don't let the project manager say, "As fast as possible." Really sitting down and thinking about an acceptable response time for different transactions may help a great deal.
You may not need to optimize the code at all as far as processing speed goes, but instead thread things in such a way that the application "responds" more smoothly. Perhaps you can initialize some things up front or make some calculations earlier on, which will make the application seem to perform better from the user's perspective. Perhaps calculating something in the background with a status bar will improve the user's experience and perception of the product.
Some calculations over large amounts of data simply take a long time, and there's nothing you can do about that... but improving the user experience and knowing your audience can help.
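A rough sketch of that background-calculation idea in Swing (the Thread.sleep stands in for a real calculation):

    import javax.swing.*;

    public class BackgroundCalc {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Report");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            final JProgressBar bar = new JProgressBar(0, 100);
            frame.getContentPane().add(bar);
            frame.pack();
            frame.setVisible(true);

            // Do the slow work off the event thread so the UI stays live.
            new Thread(new Runnable() {
                public void run() {
                    for (int pct = 0; pct <= 100; pct += 10) {
                        final int done = pct;
                        // Swing components must be updated on the event thread.
                        SwingUtilities.invokeLater(new Runnable() {
                            public void run() { bar.setValue(done); }
                        });
                        try { Thread.sleep(200); } // stands in for real work
                        catch (InterruptedException e) { return; }
                    }
                }
            }).start();
        }
    }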
Chris
 
Vikram Chandna
Greenhorn
Posts: 9
OK, the final status.
I have been moved into another project, on mainframes, which I do not like, so I will resign.
My findings went in a mail to the manager and are probably resting in peace.
The performance effort chugs along, as suddenly functional bugs have become more severe than performance. Only a fraction of users are using the new J2EE system. Those who do are frightened off by the performance and drop out, which helps our systems be faster.
OK, one thing I would like to say about the performance community is that they are enamoured with 'low-hanging fruit' findings. There are tales of firing a query for each record, sequential operations at some level, etc. What if there are no obvious mistakes, and instead it is a case of hundreds of places with scope for minor improvement, coupled with inadequate hardware? I guess most would continue the search until they find one spectacular issue.
Any take on this?
regards,
Vikram
 
Stan James
(instanceof Sidekick)
Posts: 8791
Don't give "low hanging fruit" too bad a connotation. It just means finding the fixes with obvious benefit. More cost effective than shaving a thousandth of a second off every method in the system. Though you may still get there evenually!
My team's main chosen path for finding server performance / scalability (the same coin) problems is stress testing. As our product rolls out across the company to one user group after another, we simulate loads realistic for the next release and for the end-state user population. It took a long time to get the stress scripts to be realistic, and a long time to learn to read the results. We have been able to predict several stress points. A few we did not predict were evident after we knew what to look for, and were at least valuable in knowing when we had solved them.
A second approach is to use production logging of method entry and exit timings, again on the server. We have scripts that grep the logs periodically and tell us if something is out of spec.
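A bare-bones version of that idea (the threshold, method name, and log format are invented; a real system would use its logging framework):

    // Log method entry/exit timings so scripts can grep for outliers.
    public class Timed {
        public static long enter() {
            return System.currentTimeMillis();
        }

        public static void exit(String method, long start, long specMillis) {
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed > specMillis) {
                System.err.println("SLOW " + method + " " + elapsed
                        + "ms (spec " + specMillis + "ms)");
            }
        }
    }

    // Usage inside a server method:
    //   long t = Timed.enter();
    //   try { doWork(); }
    //   finally { Timed.exit("OrderService.doWork", t, 250); }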
The worst way to find performance problems is to have a user call to complain, but this happens when they try things we did not predict. Like using a work history search as an MIS tool to see how many whatsits we processed today. That query got thousands of rows back and essentially took out the whole web server.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Vikram Chandna:
What if there are no obvious mistakes, and instead it is a case of hundreds of places with scope for minor improvement, coupled with inadequate hardware?


Then you still need to find out which places those are and which will give you the most benefit, so that you can tackle them first.
But fortunately, most often there *are* (more or less obvious) low-hanging fruits. I just reduced the startup time of a program from 40s to 8s by removing one SQL select statement that wasn't necessary anymore. The customer really liked that...
 
John Smith
Ranch Hand
Posts: 2937
What if there are no obvious mistakes, and instead it is a case of hundreds of places with scope for minor improvement, coupled with inadequate hardware?
The rule of the game is that 20% of the code runs 80% of the time. In the case of the app in our shop, it's actually more like 10%/90%. I guess it is possible that it may also be 50%/50%, but then again, the best way to identify these inefficiencies is to run your app through a profiler, instead of making guesses based on visual inspection of the code.
 
Vikram Chandna
Greenhorn
Posts: 9
Stan, Ilja & Eugene,
Thanks for sharing the insight.
Just a brief note about me: I have been working as a performance tester. With my new employer I will be assigned a performance analyst job, so I have some challenging times in front of me. Your advice will help.
 
Stan James
(instanceof Sidekick)
Posts: 8791
This must be a fun topic ... it seems to be living forever.
Your remark about one person with a sharp eye for potential performance problems (you probably didn't say "potential", but I sure would) reminded me of the old COBOL mainframe days. We stored a lot of 6-digit dates as YYMMDD. (On my first day at work in 1978 somebody warned me about the y2k problem, but laughed because they planned to be long gone.) The way mainframes packed numbers, there was room for 7 digits, and if you declared 7 instead of 6 the compiler used one less assembler instruction. So they declared 7. And for unsigned numbers the compiler inserted one more assembler instruction to guarantee the number was positive. So they made the field signed. The whole mission of COBOL was self-documenting code (compared to assembler), and these guys documented an unsigned 6-digit number as a signed 7-digit number to shave off two machine instructions. Bad choices from those "experts".
Also in those days we were fanatical about avoiding physical IO. We perverted data structures to combine data into one read instead of two, adding all kinds of complexity to the code. Then I worked on a vendor product that did full file scans for data and online ad-hoc reports, and might do 20,000 reads in one transaction. And the response time was nearly the same as our torturously tuned code. D'oh! A lesson learned.
 