JavaRanch » Java Forums » Java » Performance

Performance Data

Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
As I've noted in many threads in this forum on using profilers, it can often be confusing at first, because you have no benchmarks. I remember when I first used one, and saw thousands of Strings and ints. Many of these were from internal Java classes (i.e. those in java.* packages). Still, even then, it's hard to know how much is too much.
Does anyone know of any sites listing some generic benchmarks? Obviously all programs differ, but it seems to me some general metrics would be tremendously useful to the community. You could take some basic measurements like SLOC or Cyclomatic Complexity or other basic measurements (and maybe some other notes like stand-alone, distributed, etc.) and record some basic profiler data on them. It won't be ideal, but it might be a good starting point.
Does anyone know of a data set like this?

--Mark

PS If someone was motivated enough to start one, there's plenty of open source code which could be used for this purpose.
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
I think what you want is really a tutorial. Most commercial profilers come with a tutorial as part of the help. For the freebies, you're paying for the freeness by needing to learn how to use it yourself.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
I've used OptimizeIt extensively and used JProbe briefly. The tutorial was useful, but not for the need I describe above. Yes, within 30 minutes I was able to use the tool and gather data, but I had no context for my data. I had no benchmark against which to base my numbers.
Consider, for example, a thermometer. You build a car engine and run it. After 2 minutes the engine temperature reads 400F. Is that good? Is it too high? Is it OK, but close to the high end? For this type of engine, does it tend to jump up to near operating temperature right away, or does it slowly get up there, such that we can expect it to continue to climb? In this case, a single data point doesn't help. Now, having some thermodynamics training will help you ballpark it (just like a general software background might help you ballpark the data), but if you really want to fine-tune it, you'll need a larger set of empirical data.
--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
I can see what you're saying. But I still don't see the usefulness of profiling benchmarks, other than as essentially more tutorials.
Method profiling: You are looking for the methods which take the longest time. Ideally CPU time, since elapsed time includes waits on monitors and I/O. And you want the time for execution within the method code and not calls to external code. Try to speed up the method or change the app to avoid using the method (or use it less often). If one of the methods high up in the profile is the garbage collector executing, then you know there is also an object creation problem.
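The CPU-time-versus-elapsed-time distinction can be made concrete with the standard `ThreadMXBean` API. This is a minimal sketch, not a real profiler; the class name `CpuTimeProbe` and the sample workloads are invented for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeProbe {
    static volatile long sink;  // prevents the JIT from discarding the busy loop

    // CPU time (in ns) the current thread spends running `task`, as opposed to
    // wall-clock time, which would also count waits on monitors and I/O.
    public static long cpuTimeNanos(Runnable task) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long start = mx.getCurrentThreadCpuTime();
        task.run();
        return mx.getCurrentThreadCpuTime() - start;
    }

    public static void main(String[] args) {
        long busy = cpuTimeNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 5_000_000; i++) sum += i;  // CPU-bound work
            sink = sum;
        });
        long idle = cpuTimeNanos(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        });
        // A sleeping thread burns 100ms of wall-clock time but almost no CPU time.
        System.out.println("busy CPU ns: " + busy + ", sleeping CPU ns: " + idle);
    }
}
```

Note that `getCurrentThreadCpuTime` returns -1 on JVMs where thread CPU timing is unsupported or disabled, so a robust tool would check `isThreadCpuTimeSupported()` first.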
Memory profiling: Find which objects are created and dumped the most. Memory snapshots help. Track down the methods which create those objects. See if you can reduce the amount of object creation.
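As a small illustration of reducing object creation, compare building a string by repeated concatenation against a single `StringBuilder`; this is a standard textbook example, not one from the thread. The naive loop is exactly the kind of allocation hot spot a memory profiler surfaces:

```java
public class StringBuildDemo {
    // Naive version: each += allocates a new String (plus a temporary builder),
    // so an n-iteration loop creates O(n) intermediate objects.
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    // Tuned version: one StringBuilder grows in place, so allocation is amortized.
    static String concatBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append(i);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatNaive(5));    // prints "01234"
        System.out.println(concatBuilder(5));  // prints "01234"
    }
}
```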
Benchmark Application 1 says Object Y was created the most, and could be reused to eliminate a bottleneck.
Benchmark Application 2 says Strings were used the most and could be improved by converting the String creation process in method X to do ...
How do these benchmark applications help? As tutorials, certainly they give you experience in finding and getting rid of bottlenecks. I do this in my book a lot. But as benchmarks to compare your application against, I can't see their usefulness unless your application happens to do very similar things.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
What you describe is relative performance improvements, i.e. find the worst offenders and fix them. This is certainly useful.
Now let's consider a team of junior developers (yes, even in today's market, they exist). They do things like recreate instances instead of pooling them, or make a bunch of objects "global." An "objective" data set might help them realize their code is more than 1 standard deviation away from comparably sized code (in terms of SLOC) with respect to memory usage.
An auto manufacturer can look at his assembly line, find the bottleneck, and speed it up. That adds value. But when it turns out his competitors all produce 3 times as many cars, a 5% performance improvement in his own line misses the bigger picture.
--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
Recreating instances is fine. In most cases with the latest JVMs you get better performance than pooling. There are no performance problems with globally accessible objects. This is a maintenance or security problem. Huge code bases are not necessarily more likely to be a performance problem (except for the overhead of extra classloading, so startup may need tuning).
The example you give of the auto manufacturer is a design bottleneck. Profilers don't tell you anything about design bottlenecks. One of the reasons patterns are popular is because they can guide you to efficient designs.
These are implementation and design issues you are describing. Design should be addressed at an early stage with performance as a focus. Patterns are the closest you'll come to benchmarks for efficient designs. Implementation should focus on functionality until stable. Implementation performance should not be considered until components are functionally stable. Then profiling will get you your improvements.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Jack Shirazi:
Recreating instances is fine...

Ok, bad examples, I was trying to come up with some quickly off the top of my head.

Originally posted by Jack Shirazi:
The example you give of the auto manufacturer is a design bottleneck. Profilers don't tell you anything about design bottlenecks. One of the reasons patterns are popular is because they can guide you to efficient designs.
...
These are implementation and design issues you are describing.

Not at all. I simply said the assembly line was slower; I didn't say why. One reason could be that every other stage of the assembly line is on the opposite side of the building, so they constantly have to move the unit back and forth--bad design. Another possible reason is that, by law, each step of the manufacturing process must be approved by a certified inspector, but the company only hired one inspector for the whole line--inefficiency.
I've seen quite a few production lines, and both reasons are common. While detailed analysis can turn up both, it's not always practical, i.e. there's a high cost to inspecting a very complex process. Software happens to be extremely complex, with thousands of individual steps. Aggregate numbers help to give indications of whether you are in the right range, just as on the assembly line, knowing that all competitors are 3 times as productive gives you a clue. In fact, in many cases, simply knowing that you can do something is very valuable.
Originally posted by Jack Shirazi:
Implementation should focus on functionality until stable. Implementation performance should not be considered until components are functionally stable. Then profiling will get you your improvements.

I disagree for reasons I note here. Heck, most projects aren't "fully stable" until 97% into the project, usually just shortly before shipping when it's in final QA. I suspect you meant "mostly stable," at which point we're just quibbling over where to draw the line of when to start testing.
--Mark
Wouter Zelle
Ranch Hand

Joined: Apr 12, 2002
Posts: 30
Mark, an important problem with the data that you'd like to have is the diversity in software code. Cars are very simple things; they all have basically the same requirements with a few variables like safety, passenger space, cargo room, mileage, etc. Software is much more complex. A Java3D game is totally different from a J2EE server app. I can't see how you can collect meaningful high-level profiling indicators that work for both domains. Even within a domain there can be vast differences in the code. If you run the various tests that you described, you will probably find yourself with a lot of data that can't be linked to specific problem spots. The best you can do is probably low-level indicators (massive object creation, dependency loops, etc.).
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
The auto manufacturer example of layout being design and inspector numbers being implementation is nice. And if he "profiled" his production line in the latter case, he would quickly identify that the major bottleneck is waiting for the inspector to inspect each step. It may be complex to profile a production line, but not that complex for a Java app.
Let's look again at the example of that auto manufacturer. Your way to help performance tuning is to look for the benchmark. In this case other auto manufacturers run their assembly lines 3 times faster, so he knows he has to improve. My way is to performance tune to targets. Well, he needs initial targets. Of course in this case that would be the rival manufacturers' production rate. So he starts with the target "throughput", and keeps repeating profile/fix until he gets there. In terms of Java applications, I always emphasize that you must have targets before you start tuning, otherwise you have no idea how much tuning you need to get done. So this doesn't separate us.
But my understanding is that you aren't really talking about throughput and response time targets. Every application should have these targets before you start tuning, or you're just blowing in the wind. My understanding is that you are talking about lower level benchmarks such as the types of objects and their frequency, lifetimes, etc. That's much lower level than the production line throughput. I think that's more equivalent to, say, statistics on the individual workers on the production line. Productivity statistics.
So our primary goal is the target throughput. A secondary goal is to optimize individual productivity levels. In the absence of specific time targets, you can use an ROI (return on investment) target. For performance tuning, this works by identifying inefficiencies and determining whether fixing them pays for itself. In tuning terms, this goes something like: it is almost always worth fixing bottlenecks taking 10% of application time; it is rarely worth fixing bottlenecks taking 0.1% of application time. The return on halving the time of a 10% bottleneck is a speedup of 5% of the app. The return on halving the time of a 0.1% bottleneck is a speedup of 0.05% of the app.
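That ROI arithmetic (an Amdahl's-law-style calculation) can be checked in a couple of lines; the helper name `timeSaved` is invented for illustration:

```java
public class TuningRoi {
    // Fraction of total application time saved by speeding up a bottleneck:
    // the bottleneck takes `fraction` of total time and becomes `factor`x faster.
    static double timeSaved(double fraction, double factor) {
        return fraction - fraction / factor;
    }

    public static void main(String[] args) {
        // Halving (factor 2) a 10% bottleneck saves 5% of total time...
        System.out.println(timeSaved(0.10, 2.0));  // prints 0.05
        // ...but halving a 0.1% bottleneck saves only 0.05% of total time.
        System.out.println(timeSaved(0.001, 2.0));
    }
}
```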
So your way says: I want individual productivity levels for workers at other factories. But I would say that just because they work at other factories, that doesn't mean their productivity levels are at all relevant. You might be lucky, and they are. Or you might get totally misleading benchmark productivity levels, and set targets that are too low, or impossible to achieve.
Instead, I say ignore the other individual productivity levels. Profile your factory and find the bottlenecks, looking for people waiting around (method execution bottlenecks) and for piles of discarded waste or unusable material (the closest analogy I could think of for object creation bottlenecks). You have reached your targets when the primary targets are reached in any case, but assuming your bonus depends on how much you beat that primary target, the secondary target will give the best way to the highest bonus. Basing your targets on low-level stats from other factories is misleading. Looking for benchmarks from low-level statistics of other apps is misleading.
--Jack Shirazi
JavaPerformanceTuning.com
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
BTW, I don't advocate (and never said) waiting until the project is "fully" stable before tuning. I said wait until components are functionally stable. Components of most applications are functionally stable when the functionality specified is working, but not fully QA'ed. At this stage the component still needs to have all its potential runtime paths tested and any bugs fixed, but the primary runtime paths are working and so can be profiled. This usually comes at the unit testing stage. That's way before the app is in pre-production. Then you have integration testing stages where you can exercise full application paths for the components you have, and often the pre-production QA phase runs performance testing in parallel.
And a good number of projects work out that the best time to do the implementation performance testing is at pre-production, and do schedule that. Performance testing unstable apps and unstable components will give you nothing at all but wasted resources more than half the time. Which is the kind of ratio that makes it very expensive.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Wouter Zelle:
Mark, an important problem with the data that you'd like to have is the diversity in software code.... A Java3D game is totally different from a J2EE server app.

I never said otherwise. We would definitely have to account for this with different categories, just like auto production lines are different from plastic injection molding for children's toy parts.

Originally posted by Jack Shirazi:
Let's look again at the example of that auto manufacturer. Your way to help performance tuning is to look for the benchmark. In this case other auto manufacturers run their assembly lines 3 times faster, so he knows he has to improve. My way is to performance tune to targets. Well, he needs initial targets. Of course in this case that would be the rival manufacturers' production rate. So he starts with the target "throughput", and keeps repeating profile/fix until he gets there. In terms of Java applications, I always emphasize that you must have targets before you start tuning, otherwise you have no idea how much tuning you need to get done. So this doesn't separate us.

That's the motivation--most companies do not have such targets! I would estimate well over 80% of software projects in the last 10 years didn't bother with significant performance requirements. Moreover, most companies/people wouldn't know how to derive them.
You mentioned how fixing a 10% bottleneck is worth it, but 0.1% is not. I agree. But in many cases, people don't know that it can be fixed. The auto manufacturer would love to increase his production 100x, but figures it would probably not be cost effective to do it. But knowing that his competitors get 3x productivity suggests that this may be financially viable. Basically, knowing where to look is quite valuable (in many endeavors).

Originally posted by Jack Shirazi:
But my understanding is that you aren't really talking about throughput and response time targets. Every application should have these targets before you start tuning, or you're just blowing in the wind. My understanding is that you are talking about lower level benchmarks such as the types of objects and their frequency, lifetimes, etc. That's much lower level than the production line throughput. I think that's more equivalent to, say, statistics on the individual workers on the production line. Productivity statistics.

No, I am talking about overall targets. I believe we should measure individual productivity of workers as well, but that's a different issue. A better analogy might be comparing an auto manufacturer not only to other auto manufacturers, but also to manufacturers of other large equipment, like construction equipment, buses, boats, etc. (auto manufacturers are best, but early on, they may be few and far between). Again, because so many companies do not have a clue where to peg their performance targets, this will help them get a back-of-the-envelope calculation.

--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
I've been to many customers who had no performance targets in place. It has never taken them more than two days to come up with basic response time, throughput, transaction rate, and concurrency level targets, and they knew before I needed to explain that those were the statistics that needed to be specified. You'd have to be pretty far out of it to be developing a J2EE app and not know that these are the statistics you need to target. Mostly they just needed someone to tell them to write down target numbers, and use them (yes, I get hired to state the obvious; it is so much cheaper to buy and read my book). They don't usually have too much trouble coming up with target numbers.
I have seen several game projects described. In every case the developers knew exactly what performance targets they were going for: frame rates and polygon counts. Every J2SE GUI application I've heard of knew that they needed a responsive GUI. That meant no frozen screens. Their problem wasn't in specifying the performance targets, it was in achieving them. They also knew that they needed to target response times for user-initiated activity.
Which development project have you been on or know of where they could not specify performance targets? I certainly agree that too many projects do not have those targets. But in my experience that has always been because they just didn't consider performance until the customer or director came along and said "My God, that's slow".
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Jack Shirazi:
Which development project have you been on or know of where they could not specify performance targets?

I can cite a couple different types of examples.

First, as you note, they often say "reasonable response time to the user," but they don't know what this means. Big companies like IBM do usability testing and determine what a reasonable response time is. Some companies who can't do the research themselves still keep up with reports. Other companies just guess. There is a difference between half a second and 2 seconds, and it's not always evident which is appropriate until you actually see it (and even then there is disagreement).
I used to work for a company in which the consultants would often do database tuning in conjunction with our work. In one case, a database calculation went from about 150 hours to 15 hours. The company had no clue what reasonable performance was until they met us (or rather, no clue that they could expect such performance).
I was involved with wireless systems back in 2000. Even today, let alone back then, most development houses have no clue what to expect from the network. Sure, I can call a wireless provider and get some statistics on the medium, but it's hard to come by traffic data. The system is affected by time and location, and sometimes even outside events (e.g. +/-3 around a football game, cell phone use goes up in a certain location, although this is different from temporal events like rush-hour calls). On top of all that, the carriers were all promising upgrades and new services. The bottom line is that when our software came out, it was anyone's guess as to how long things would take, and where to draw the line for making the user wait for a response. We took our best guesses and modified them as best we could.
In both of the cases above, one involving unfamiliar technology (albeit primarily a database) and the other the adoption of a new technology, some basic numbers would have been helpful as guideposts.
--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
If 150 hours was adequate performance, it doesn't matter even if it could have been done in 2 seconds. If 150 hours is inadequate performance, then they must have had a target time that the calculation needed to be done in. What difference what is achievable? Your business requirements should drive target performance times, not some arbitrary target of what can be achieved by other applications.
The fact that you didn't know how long things would take (wireless system) is one thing. That is not uncommon. That's what performance testing is for. Not having targets under different loads is another. Marketing's job is to do the customer research to find out what was acceptable to users. Leaving it to engineers to set times with no input from sales/marketing just means that you get them coming back saying "that's too slow".
Here are some figures for you:
User interface:
Response time is the primary performance target.
Primary guideline: make sure you set the users' expectation of how long any task will take. In the absence of a guide from the app, the user expects task time to be more or less instantaneous (and so is always disappointed).
The UI should remain responsive at all times, even when user-initiated activity is occurring. For any activity which will take more than a tenth of a second, status should be displayed. For less than half a second, status messages are sufficient, but beyond that a status bar is preferred.
In the absence of expectation, if an activity will take more than one second, user patience begins to wane. Over two seconds and the chance of abandoning the activity starts to increase. By eight seconds, the chances are very high that the user will abandon the task (various studies including IBM studies and more recent web page interactive studies).
Another study shows that the user's memory of the "average" response time is actually the response time that corresponds to approximately the 90th percentile, i.e. the response time which is higher than 90% of response times encountered for the task.
Sound streams and video streams are different. Users identify stalls and gaps in streams very easily. Lowering the resolution is better than losing time segments. I don't have hard figures in this area.
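The UI feedback thresholds above could be encoded as a tiny helper; the enum and method names here are invented for illustration and are not from any standard API:

```java
public class ResponseFeedback {
    enum Feedback { NONE, STATUS_MESSAGE, PROGRESS_BAR }

    // Maps an expected task duration (ms) to the feedback the UI should show,
    // following the rough thresholds quoted above: under a tenth of a second
    // nothing, under half a second a status message, beyond that a status bar.
    static Feedback feedbackFor(long expectedMillis) {
        if (expectedMillis <= 100) return Feedback.NONE;
        if (expectedMillis <= 500) return Feedback.STATUS_MESSAGE;
        return Feedback.PROGRESS_BAR;
    }

    public static void main(String[] args) {
        // prints NONE / STATUS_MESSAGE / PROGRESS_BAR
        System.out.println(feedbackFor(50) + " / " + feedbackFor(300)
                + " / " + feedbackFor(2000));
    }
}
```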
Server systems:
Throughput (number of requests served per second/minute); transactions rates (number of transactions completed per second/minute); response time; and concurrency levels (the number of simultaneous requests being handled by the server) are the primary performance targets.
Good performance (achievable currently with J2EE) has sub-second response times and hundreds of (e-commerce) transactions per second. Servlets running on an average single server configuration machine can serve tens of dynamically built pages per second.
Near real-time systems (e.g. telco systems):
If a response will take longer than 200 ms, that needs to be signalled to the caller. No response in 500 ms is essentially a lost request. This essentially means that round-trip response times should be 500 ms or less. http://www.ietf.org/rfc/rfc2543.txt
What other systems do you want to talk about?
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Jack Shirazi:
If 150 hours was adequate performance, it doesn't matter even if it could have been done in 2 seconds. If 150 hours is inadequate performance, then they must have had a target time that the calculation needed to be done in. What difference what is achievable? Your business requirements should drive target performance times, not some arbitrary target of what can be achieved by other applications.

You and I know this, but quite a number of people involved with software do not. Most engineers I know do not understand this concept (they think cool technology is sufficient justification for a product--the Mallory Motive). But it's not just engineers; there's no shortage of business folk who have no clue what the market demand really is.
As for the case above, was 150 hours sufficient? Of course it was. They worked that way for years. Was 15 hours better? Much better; now they can generate weekly reports and get faster feedback. But they didn't necessarily know that they could; i.e. they didn't come to us saying "we need it faster," they came to us asking for something else and we said, "we can make it faster, which would allow us to do something related to what you want, making what you want even better." All too often the cart is put before the horse. Speaking of carts, you know, technically, a cart *is* sufficient for many people's needs. But a truck is better. If cars hadn't been invented life would go on, but knowing that we can use cars, we do. The same with computers; most of human history was just fine without them. So when a certain piece of functionality or performance is missing from software, it's not necessarily that it isn't desired or useful; it may just be unknown.
--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
But this isn't performance tuning, this is adding value. Identifying that their performance targets could be improved to add value to their system.
Your basic point, that if you know what is possible from other projects then you can improve the system, is a value-adding proposition. As such, you are right. The kind of figures you are talking about are not particularly useful to the performance tuner. They are useful for management, for marketing, for product development, for specifying the business case. Not application development.
So now I don't know whether to concede that you are right - the information is useful; or insist that you are wrong - the information is not particularly useful for the performance tuning process. I guess since the information is useful I must concede you are right.
You won't find that kind of information in many places. Specifically because it is useful to rival product development. I can't even report on my improvements made for my own customers most of the time because they don't want to give out that kind of information. And this is not profile information (your very first post), it is performance target specifications.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Jack Shirazi:
Identifying that their performance targets could be improved to add value to their system... The kind of figures you are talking about are not particularly useful to the performance tuner.

So it sounds like semantics and where to draw the line. If I understand you correctly, a 10x speedup could be functionally useful (as in the DB example), whereas a 1% speedup might not be useful to the same degree.
However, this implies an inflection point (albeit most likely quite shallow). Perhaps gaining a 10% speedup allows the company to save on hardware costs, which may work in a step function. Speaking of step functions, it's not uncommon (although probably not the norm on most projects) to see a sharp degradation in server performance after a certain threshold (e.g. works fine up to 130 users, but then rapidly gets slow, becoming unusable at 150). In many cases this may just be a caching issue, and a little more memory helps--on the other hand, on a distributed system, or maybe embedded systems, this might not be an option.

Originally posted by Jack Shirazi:
You won't find that kind of information in many places. Specifically because it is useful to rival product development.

Yes, but herein lies my original motivation. Much like open source, the participants will get more out of it than they put in (we'll ignore the free-rider problem for now).

Originally posted by Jack Shirazi:
And this is not profile information (your very first post), it is performance target specifications.

It sounds like there's been some disagreement between us on what "performance" means. Could you give me your definition (and any other definitions you think are relevant)?

--Mark
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96

a 10x speedup could be functionally useful (as in the DB example), whereas a 1% speedup might not be useful to the same degree

Once a day, I run an analysis on my weblogs. It takes about 10 minutes. I run it in the background. I have not the slightest doubt that I could spend a day or two tuning the program and get it down to 1 minute. What would that gain me? Nothing functional. A couple of days wasted. I don't wait for a run to finish, I get on with other things (deleting junk mail, replying to other mail). A 10x speedup is functionally useless in this context. It would not be performance tuning, it would be playing around. This 10x speedup is worth nothing to me. I wouldn't pay a penny for it if someone offered to do it.
Does it help you to know that my weblog analysis program takes 10 minutes? Even if you were writing your own weblog analysis program?
It can take me one month to complete development of a training course. If I can improve that performance by 3%, I gain a day. It may not sound like much, but I'm desperate for all the time I can get. In some situations, a day of my time could be very valuable to me, worth paying for (because someone else is willing to pay me more).
It may well help someone to know that it takes me a month to develop a course. They can offer me a service to help me, which I might pay for. Or they can try and build a rival course, and knowing how long it takes means they know their lead, or lag. Or, if they hire someone to write a course, they have my benchmark to know what may be reasonable.
What if everyone else takes one week to develop a course? That benchmark is misleading someone.
The usefulness of a speedup doesn't depend on the degree of the speedup; it depends on the value of the time saved. The usefulness of a benchmark is very dubious. It may be of some help as product development information. It may be completely misleading. And it could be more expensive to find out it is misleading than to ignore it in the first place.
--Jack Shirazi
JavaPerformanceTuning.com
Mark Herschberg
Sheriff

Joined: Dec 04, 2000
Posts: 6037
Originally posted by Jack Shirazi:
Once a day, I run an analysis on my weblogs. It takes about 10 minutes...A 10x speedup is functionally useless in this context.
...
Does it help you to know that my weblog analysis program takes 10 minutes? Even if you were writing your own weblog analysis program?


I fully agree such a speedup would be useless to you. However, if my program took 12 hours to run and it was important to see hourly stats about my website, speeding it up to 10 minutes would be quite valuable. Now maybe I don't know much about weblog analysis, and I think 12 hours is the best I can do, so I throw my hands up in frustration and suffer because of it.
Granted, a weblog is a contrived example, but I believe there are cases where parts or all of programs could provide useful numbers. Heck, how many people have written chat applications or other starter programs of a few thousand lines? Imagine if they had a benchmark against which to compare themselves. They could discover that their application takes twice the memory of the average program and realize "maybe there's a better way." They would do this early in their careers and subsequently develop good habits, both in terms of realizing they did something wrong and in terms of learning to consider performance.
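For a rough self-measurement of the kind described above, the standard `Runtime` API gives a ballpark heap figure. This is only a sketch: `System.gc()` is a hint the JVM may ignore, and the class and field names are invented for illustration:

```java
public class HeapProbe {
    static int[][] sink;  // keeps the test allocation reachable during measurement

    // Rough estimate of heap bytes consumed by running `task`. Because GC timing
    // is nondeterministic, treat the result as a ballpark, not a benchmark.
    static long heapUsedBy(Runnable task) {
        Runtime rt = Runtime.getRuntime();
        System.gc();  // request a collection so `before` reflects mostly live data
        long before = rt.totalMemory() - rt.freeMemory();
        task.run();
        return (rt.totalMemory() - rt.freeMemory()) - before;
    }

    public static void main(String[] args) {
        long delta = heapUsedBy(() -> {
            sink = new int[100][];
            for (int i = 0; i < 100; i++) sink[i] = new int[10_000];  // ~4MB live
        });
        System.out.println("approx bytes allocated: " + delta);
    }
}
```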
--Mark
Thomas Paul
mister krabs
Ranch Hand

Joined: May 05, 2000
Posts: 13974
This has been a great conversation and I have enjoyed reading both your comments.
Mark, if you have a business need to reduce time from 12 hours to 10 minutes, then that is where you would do performance tuning. Maybe I don't know whether I can reduce the time at all, but I would concentrate my tuning efforts there. I would throw some tools at that problem, do some DB analysis, and try to determine exactly what was taking so long. Then I would try to see if there was any way to speed up the process. Maybe I would find that I was already at optimum performance and there is nothing that can be done to speed it up. But at least I applied my efforts to the right place, even if I got nothing out of it other than the ability to tell management that it can't be done any better.


Associate Instructor - Hofstra University
Amazon Top 750 reviewer - Blog - Unresolved References - Book Review Blog
Wouter Zelle
Ranch Hand

Joined: Apr 12, 2002
Posts: 30
Originally posted by Thomas Paul:
Maybe I would find that I was already at optimum performance and there is nothing that can be done to speed it up. But at least I applied my efforts to the right place even if I got nothing out of it other than the ability to tell management that it can't be done any better.
I don't think that optimum performance exists unless we are talking about a small computing core written in assembly. Performance optimization is always a tradeoff between effort spent, maintainability and speed.
Mike Curwen
Ranch Hand

Joined: Feb 20, 2001
Posts: 3695

I also have wondered about this question. To wit: is there a generic baseline performance 'number' against which I can compare "similar" applications? So far, I haven't defined 'similar'. Does it mean executable lines of code?

But what I've come up with is what Jack and others have mentioned. The answer is: "It really doesn't matter, because performance tuning is an exercise in fixing *ONLY* what is perceived as needing fixing."

I once worked for a place where we consulted on turning an old 2-tier app into a nice J2EE, 'portable-device'-enabled app.

We were evaluating embedded databases and ran each through performance testing: the same sequence of inserts and selects against every DB. The results ranged anywhere from 6 seconds to a full minute. These figures had us really worried, and we identified the 'embedded database component' as a critical failure area. For example, we concluded that filling in a drop-down on the GUI would take 4 seconds. There were an average of 4 drop-downs on each screen. That was 16 seconds before the screen would even be ready to draw!
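A sketch of the kind of harness this describes: run one fixed sequence of operations against each candidate database and compare wall-clock times. The `Db` interface and the in-memory stand-in below are invented so the sketch runs anywhere; in a real evaluation each implementation would wrap one of the embedded databases under test:

```java
import java.util.ArrayList;
import java.util.List;

public class DbBenchmark {
    /** Hypothetical minimal facade over whatever database is being evaluated. */
    interface Db {
        void insert(int key, String value);
        String select(int key);
    }

    /** Trivial in-memory stand-in so the harness is self-contained. */
    static class ListDb implements Db {
        private final List<String> rows = new ArrayList<>();
        public void insert(int key, String value) { rows.add(key, value); }
        public String select(int key) { return rows.get(key); }
    }

    /** Times one full insert-then-select sequence, in milliseconds. */
    static double runSequence(Db db, int rowCount) {
        long start = System.nanoTime();
        for (int i = 0; i < rowCount; i++) db.insert(i, "row-" + i);
        for (int i = 0; i < rowCount; i++) db.select(i);
        return (System.nanoTime() - start) / 1_000_000.0;
    }

    public static void main(String[] args) {
        runSequence(new ListDb(), 1_000);         // warm-up pass for the JIT
        double ms = runSequence(new ListDb(), 10_000);
        System.out.printf("10,000 inserts + selects: %.1f ms%n", ms);
    }
}
```

The key property is that the sequence is identical for every candidate, so the numbers are comparable to each other even though, as the thread argues, they mean little in isolation.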

We sweated it. We thought "oh my god, that's awful"... until we presented to the client. They were thrilled *THRILLED* with that response time, because their current app took 40 seconds per screen.

So we had already exceeded the client's expectations, and what was there to optimize? Answer: nothing.
Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
The 12 hour weblog analysis program exactly illustrates what I'm trying to say. Here are two scenarios:
1. You want hourly stats. So you set yourself a target of sub-hourly analysis processing time. Performance tune and get there or give up. No need for outside numbers.
2. You want hourly stats. But you haven't got a clue that any kind of speedup is possible. So you give up. Then you find out that someone else analyses their logs in 10 minutes. This tells you that performance tuning your app is possible. Now you adjust your expectations and perf tune (or hire someone to do it). But it was not the performance tuning process that benefited from that knowledge; it was your business targets that benefited.
This is what I meant by the information being useful for setting performance targets, which is a business-level activity, but not for performance tuning. The 10-minute (or sub-60-minute) target does not help the performance tuning process in any way. It is a constraint on the performance tuning: you need to carry on tuning until you reach the target. But 10 minutes doesn't tell you where the bottlenecks are in your app, nor what kinds of techniques will help you speed it up.
--Jack Shirazi
JavaPerformanceTuning.com
 
 