A little while ago Larry Osterman (http://blogs.msdn.com/larryosterman/) wrote a post about optimisation tricks for counting the number of set bits in an int. I decided to try a few out of curiosity about what Java would do. The first one was the naive implementation.
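The original snippet isn't shown here, so this is a sketch of what a naive implementation typically looks like: test each of the 32 bits in turn (the class and method names are my own):

```java
public class NaiveBitCount {
    // Naive approach: check every bit position individually.
    static int countBits(int v) {
        int count = 0;
        for (int i = 0; i < 32; i++) {
            if ((v & (1 << i)) != 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countBits(0xFF)); // 8
    }
}
```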
This takes about 11 seconds for 500,000,000 runs.
The most interesting result was with a lookup-table-based solution.
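Again the original code isn't shown, so this is an assumed version of the standard lookup-table technique: precompute the bit counts for all 256 byte values once, then sum four table lookups per int (names are mine):

```java
public class TableBitCount {
    // Precomputed bit counts for every 8-bit value.
    static final int[] TABLE = new int[256];
    static {
        for (int i = 0; i < 256; i++) {
            // Count of i = lowest bit of i + count of i without that bit.
            TABLE[i] = (i & 1) + TABLE[i >> 1];
        }
    }

    // Sum the counts of the four bytes of the int.
    static int countBits(int v) {
        return TABLE[v & 0xFF]
             + TABLE[(v >>> 8) & 0xFF]
             + TABLE[(v >>> 16) & 0xFF]
             + TABLE[(v >>> 24) & 0xFF];
    }
}
```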
This seemed horribly slow: around 34 seconds for the same number of iterations.
But then I tried the server VM. With that one, the first 500,000,000 iterations took around 12 ms, and from then on 0 ms for the same number!
So in this case the client VM is over 24,000 times slower!
Anyone have any idea what is going on?