I have experimented a bit with the Timer and TimerTask classes from java.util and found something odd. I wrote the following test code:
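The original listing did not survive in the post. Below is a minimal reconstruction based on the description and the output shown further down; the class names TestTimerTask and TestTimer, the delays, and the run counts come from the thread, but the nested-loop body (a Math.sqrt workload) is an assumption:

```java
import java.util.Timer;
import java.util.TimerTask;

// Sketch of the task: burns CPU in nested loops, parameterized by 'runs'
// (100 or 500 in the thread's output); prints elapsed time in ms when done.
class TestTimerTask extends TimerTask {
    private final String name;
    private final int runs;

    TestTimerTask(String name, int runs) {
        this.name = name;
        this.runs = runs;
    }

    @Override
    public void run() {
        System.out.println("Task " + name + " starting, " + runs + " runs...");
        long start = System.currentTimeMillis();
        double sink = 0; // keep the work from being optimized away entirely
        for (int i = 0; i < runs; i++) {
            for (int j = 0; j < runs; j++) {
                for (int k = 0; k < runs; k++) {
                    sink += Math.sqrt(i + j + k);
                }
            }
        }
        System.out.println("Task " + name + " finished: "
                + (System.currentTimeMillis() - start));
    }
}

public class TestTimer {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        System.out.println("Job 1 starting in 2.5s...");
        timer.schedule(new TestTimerTask("Job 1", 100), 2_500);
        System.out.println("Job 2 starting in 7.5s...");
        timer.schedule(new TestTimerTask("Job 2", 500), 7_500);
        System.out.println("Job 3 starting in 12.5s...");
        timer.schedule(new TestTimerTask("Job 3", 500), 12_500);
        Thread.sleep(45_000);
        System.out.println("I have waited for 45s, finishing now!");
        timer.cancel();
    }
}
```

Note that java.util.Timer runs all its tasks on a single background thread, which is why a long-running task delays the ones scheduled after it.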
The TestTimerTask just simulates a longer-running job. In my tests (without using a Timer yet), it took ~9s to execute the run method with 500 as the init parameter (tested multiple times). But when I execute TestTimer, I get the following output:
Job 1 starting in 2.5s...
Job 2 starting in 7.5s...
Job 3 starting in 12.5s...
Task Job 1 starting, 100 runs...
Task Job 1 finished: 97
Task Job 2 starting, 500 runs...
Task Job 2 finished: 9409
Task Job 3 starting, 500 runs...
Task Job 3 finished: 564
I have waited for 45s, finishing now!
Job 2 takes too long, so Job 3 cannot be started in time; it starts immediately once Job 2 finishes. So far I understand things, but why does Job 3 suddenly take only ~550ms to run instead of ~9000ms? Both get initialized with the same value, so their run times shouldn't differ THAT much! Or have I discovered the secret to accelerating Java? ;)
Run the code multiple times ... this may not be as unusual as you think.
The Java runtime and the JIT can be very smart about the code they execute. After a first pass, the JIT can find ways to optimize the code to run faster. The core of your code is a set of nested for loops, which is exactly the kind of target the JIT can 'unroll' and otherwise optimize. This could be what is happening in your code. You can test it by running the class using:
and seeing if you get a different outcome.
Thanks for the answer, I tried your suggestion. I suppose that option is equivalent to -Xint (interpreted mode)? I tried it with that option too and indeed got the same execution time now for both jobs.
I also tried to run the nested for loop multiple times after each other in the same program, like this:
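The listing is missing from the post; here is a minimal sketch of what is described, with the same nested loop pasted three times in a row and each copy timed separately (the loop body and N = 500 are assumptions):

```java
public class NestedLoops {
    static final int N = 500;
    static double sink = 0; // accumulator, so the loops aren't dead code

    public static void main(String[] args) {
        // First copy of the loop
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    sink += i + j + k;
        System.out.println("First run:  " + (System.currentTimeMillis() - t0) + " ms");

        // Second copy, pasted verbatim -- different bytecode, same logic
        long t1 = System.currentTimeMillis();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    sink += i + j + k;
        System.out.println("Second run: " + (System.currentTimeMillis() - t1) + " ms");

        // Third copy
        long t2 = System.currentTimeMillis();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    sink += i + j + k;
        System.out.println("Third run:  " + (System.currentTimeMillis() - t2) + " ms");
    }
}
```

Running this normally and then with -Xint should show whether the timing differences disappear when the JIT is disabled.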
Without -Xint, the first run was quick, but the following two weren't (with -Xint, all three took the same time). This seems like very odd behaviour: why does the compiler optimize only the first loop, but not the others?
If the code is linear like that, then there is no repetition, so you can't really talk about 'the first run' and 'the following two' runs: they are three separate for loops, each executed just once. Code gets optimized as it runs for the first time, so successive runs of the same code should be faster. But for a non-repeated stretch of code like the one you showed, there would be little or no benefit: the loops may look identical, but they are different lines of code, each touched just once, so none of them profits from another being optimized.
So why does the first loop run faster than the other ones? No idea... There are a lot of optimizations that can occur. Perhaps the compiler can peek ahead a little and preemptively optimize code at the beginning of a method, but I don't know.
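One way to probe this further (my suggestion, not from the thread) is to put the workload in a method and call it repeatedly, timing each call: once the JIT compiles the method, every later call reuses the compiled version, so warm-up effects become visible. The workload below is an assumption, not the original code:

```java
public class WarmupProbe {
    // Same kind of nested-loop workload, but in a method so the JIT can
    // compile it once and reuse the compiled version on later calls.
    static double burn(int n) {
        double sink = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                sink += i * 0.5 + j;
        return sink;
    }

    public static void main(String[] args) {
        for (int call = 1; call <= 5; call++) {
            long t = System.nanoTime();
            burn(2_000);
            System.out.printf("call %d: %d ms%n", call,
                    (System.nanoTime() - t) / 1_000_000);
        }
    }
}
```

Running this with -XX:+PrintCompilation shows when burn actually gets JIT-compiled; with -Xint, all five calls should take a similarly long time.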