I find it very interesting how the degree of "objectification" can affect performance. You can see from my signature that I have a little chess program I've written. It's not at all advanced as chess programs go, but it works.
Anyways, I'm sure you can imagine that when you have a program that analyzes a few moves deep in chess, it needs to create and evaluate many thousands of possible positions, out of which only one will be selected, and the process more or less repeats itself again and again. And speed is everything.
A position typically consists of information that lends itself to simplified coding by using objects to represent pieces, squares, moves, positions, and so on. Another strategy is to store the information in strings that can be parsed easily because the format is very standardized: you know exactly how much space, or how many characters, it takes to represent each piece of information.
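To make the contrast concrete, here is a minimal sketch of the two data models. The names (Piece, the board layout) are purely illustrative, not taken from any particular engine:

```java
public class BoardModels {
    // Model 1: one small object per piece.
    static final class Piece {
        final char type;     // 'P', 'N', 'B', 'R', 'Q', or 'K'
        final boolean white;
        int square;          // 0..63
        Piece(char type, boolean white, int square) {
            this.type = type; this.white = white; this.square = square;
        }
    }

    // Model 2: a fixed-width string, one char per square, '.' for empty,
    // uppercase for white, lowercase for black. Because every field has a
    // known width, "parsing" is just indexing -- no objects are created.
    static char pieceAt(String board, int square) {
        return board.charAt(square);
    }

    public static void main(String[] args) {
        Piece knight = new Piece('N', true, 1);           // object model
        String board = "RNBQKBNR" + "PPPPPPPP"            // string model
                     + "........".repeat(4)
                     + "pppppppp" + "rnbqkbnr";
        System.out.println(knight.type + " on square " + knight.square);
        System.out.println("square 1 holds " + pieceAt(board, 1));
    }
}
```

The object model reads more naturally in the evaluation code; the string model allocates nothing per lookup, which is the trade-off the rest of this thread is about.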
Anyways, I don't really have a specific question, but...
I'm interested in how one evaluates the overhead involved in creating and discarding many thousands of relatively simple objects in a short time. And I was just wondering, in general terms, how the Java performance experts would go about making such decisions when program speed is a major consideration.
The evaluation process involves analysis of telemetry statistics, profiling and instrumentation. For starters, you can learn about these concepts hands-on with the latest edition of NetBeans.
James Clark wrote:The evaluation process involves analysis of telemetry statistics, profiling and instrumentation. For starters, you can learn about these concepts hands-on with the latest edition of NetBeans.
Thanks bud. Too bad NetBeans itself is such a resource hog. 'Course I'm stuck with a PIII with 512 MB, so probably I should upgrade. In the meantime, I'll stick with Notepad, write two versions of the program, and code a millisecond timer on my makeMove() method.
A single machine-code mov that moves 123 + 10 = 133 = 0x85 into %eax, which is the return register.
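For context, a method along these lines (a hypothetical example, not code from the thread) is the kind of thing that compiles down to that single mov, since both operands are compile-time constants and get folded into 133:

```java
public class ConstantFold {
    // javac already folds 123 + 10 into the constant 133 (0x85), so the
    // JIT-compiled method body is just a move of 133 into the return
    // register -- no add instruction survives.
    static int sum() {
        return 123 + 10;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(sum())); // prints "85"
    }
}
```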
Modern JVMs usually have a lot of different strategies for minimizing the pressure on the memory system: thread-local allocation, nurseries, compressed references, etc. If you want more detailed information about the allocation pattern in your application, you really need to profile it. You could try the profiler that comes with JRockit. It will give you statistics about garbage-collection pauses, where in the code most of the allocation occurs, allocations per second, and the size of the thread-local areas. Start up your application like this:
and when the recording has finished, open the recording file with JRockit Mission Control. You will probably need at least 100-200 MB of memory to look at the recording, but not 400-500 MB as with NetBeans. See this blog for more information:
Thanks for that. To be honest, both suggestions are a little too advanced for my skill level, but I'll file this all away for future reference.
I'm kind of feeling my way with this project, and the only person I have to please is myself, so I will probably experiment with a couple of different data models and do some really basic testing involving some kind of timer based on a Date() function, or something like that. I should probably post the results in this thread as soon as I have a couple of different models to compare.
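A really basic harness along those lines could look like this. It's only a sketch: the makeMove() here is a stand-in that just allocates a small object per call, and it uses System.nanoTime() rather than Date, since nanoTime() is the usual choice for measuring elapsed intervals:

```java
public class MoveTimer {
    // Stand-in for a real makeMove(); allocates one small object per call,
    // the way a move generator might.
    static int[] makeMove(int from, int to) {
        return new int[] { from, to };
    }

    public static void main(String[] args) {
        final int RUNS = 1_000_000;
        // Warm-up pass so the JIT has compiled makeMove before we measure.
        for (int i = 0; i < RUNS; i++) makeMove(i & 63, (i + 1) & 63);

        long start = System.nanoTime();
        long checksum = 0;
        for (int i = 0; i < RUNS; i++) {
            checksum += makeMove(i & 63, (i + 1) & 63)[1];
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Print the checksum so the JIT can't discard the loop as dead code.
        System.out.println(RUNS + " calls in " + elapsedMs
                + " ms (checksum " + checksum + ")");
    }
}
```

Swapping in a string-based makeMove() and comparing the two timings would give a first rough answer, though numbers from a micro-benchmark like this are only indicative.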