


Please welcome Jack Shirazi, author of "Java Performance Tuning"

paul wheaton
Trailboss

Joined: Dec 14, 1998
Posts: 20495

Jack Shirazi's book "Java Performance Tuning" has recently been published by O'Reilly.
Jack has graciously offered to hang out here and talk a bit with us about performance and getting the most out of the Java VM.
My first question has less to do with coding techniques and more to do with the new 1.3 VM. I saw some benchmarks and the 1.3 VM was kicking butt on the 1.2.2 VM. What's up with that?


Jack Shirazi
Author
Ranch Hand

Joined: Oct 26, 2000
Posts: 96
*chuckle* Choosing your benchmarks carefully so that your VM kicks butt is always good.
The 1.3 VM uses a HotSpot engine, whereas 1.2.2 uses a pure JIT. In case you missed what HotSpot technology means, the basic idea is that the VM profiles the code while it's running, then generates native code only for those bits of the app that are bottlenecked. The VM does this by running the app in interpreted mode with an internal profiler running at the same time. The app profile is constantly monitored, and if some code (a method or loop) stays too long at the top of the execution stack (the "hot spots" in the app), the VM generates native code for that method/loop and swaps the interpreted bytecode for the native code. In HotSpot 1.0, the VM had to wait until a method completed before the swap could happen, but in HotSpot 2.0 (which is the engine used in 1.3) the swap can happen while a method/loop is running.
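To make that concrete, here is a minimal, hypothetical sketch (mine, not from the book or the VM source) of the kind of CPU-bound method the HotSpot profiler will flag as hot and compile to native code. On VMs that support it, running with the -XX:+PrintCompilation option lets you watch the compilation kick in once the method has run enough times.

public class HotLoopDemo {
    // A deliberately CPU-bound method; after enough calls the HotSpot
    // profiler identifies it as a hot spot and compiles it to native code.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Repeated calls give the interpreted-mode profiler time to notice
        // that sumOfSquares dominates the execution profile.
        for (int call = 0; call < 10000; call++) {
            result = sumOfSquares(100000);
        }
        System.out.println("result = " + result);
    }
}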
Unlike the server-side HotSpot VMs (called HotSpot 1.0 and 2.0), the 1.3 VM is tuned for client-side running, which basically means "don't hang about as long before generating native code, and don't apply as many optimizations when the native code is generated, so that the code is not held up as long." If you have a long-running process, you are probably better off using the server-side HotSpot, since the VM can take advantage of the longer running time.
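If you want to see the difference for yourself, here is a rough, hypothetical sketch of the kind of long-running, CPU-bound workload where the server-side VM tends to pay off. Launch the same class with the -client and -server options (where those options and a server VM are available) and compare the reported times.

public class ClientVsServerTimer {
    // A simple CPU-bound workload, so the VM has something worth optimizing.
    static double work(int iterations) {
        double x = 0.0;
        for (int i = 1; i <= iterations; i++) {
            x += Math.sqrt(i) / i;
        }
        return x;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        double result = work(20000000);
        long elapsed = System.currentTimeMillis() - start;
        // Run once as:  java -client ClientVsServerTimer
        // and again as: java -server ClientVsServerTimer
        System.out.println("result=" + result + "  elapsed=" + elapsed + "ms");
    }
}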
The upshot is that the 1.3 VM effectively acts like it has a low-level performance tuning expert running inside it, one that speeds up the bits of code that need it most but only ever applies a limited set of optimizations. The result is that some things run quite a bit faster - I've seen double the speed for some tasks.
On occasion the VM can get it wrong, but not often. However, 1.1.6 and later 1.1.x JIT VMs can outperform the 1.2 and 1.3 VMs for some tasks, because those VMs have different task loads. In addition, people are pretty clever, and manual optimizations can often outperform the HotSpot optimizations. I have an article at http://java.oreilly.com/news/javaperf_0900.html which runs through a basic tuning exercise on a query against a collection. The article shows how the HotSpot VMs start out ahead, but can end up lagging once manual optimizations are applied.
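This is not the code from that article, just a hypothetical sketch of the flavour of manual optimization it describes: querying a collection with indexed access and hoisting work out of the loop instead of relying on an iterator.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class QueryTuningSketch {
    // Naive version: an Iterator and a cast per element.
    static int countMatchesNaive(List names, String prefix) {
        int count = 0;
        for (Iterator it = names.iterator(); it.hasNext(); ) {
            String name = (String) it.next();
            if (name.startsWith(prefix)) {
                count++;
            }
        }
        return count;
    }

    // Hand-tuned version: indexed access, with size() hoisted out of the loop.
    static int countMatchesTuned(List names, String prefix) {
        int count = 0;
        int size = names.size();   // hoisted: not re-evaluated each pass
        for (int i = 0; i < size; i++) {
            if (((String) names.get(i)).startsWith(prefix)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List names = new ArrayList();
        for (int i = 0; i < 100000; i++) {
            names.add("name" + i);
        }
        // Both versions return the same count; time them to compare.
        System.out.println(countMatchesNaive(names, "name9"));
        System.out.println(countMatchesTuned(names, "name9"));
    }
}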
 
 