Because CPUs are fast. I had to go up to 40,000 to see any noticeable time taken. And even then, it was only 2 milliseconds. For 400,000, it was 8 milliseconds - but the answer exceeded the capacity of an int, so it wasn't valid.
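The loop being timed isn't shown in this thread, but a sketch of an iterative version (the class and method names here are hypothetical) reproduces both behaviors described above: fast even for large n, and a wrapped (invalid) int result well before n = 400,000.

```java
public class FibTiming {
    // Iterative Fibonacci. The int accumulator silently wraps on
    // overflow, which is why the answer for very large n isn't valid.
    static int fib(int n) {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            int next = a + b;   // wraps around past Integer.MAX_VALUE
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int result = fib(40_000);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fib(40000) = " + result + " in " + elapsed + " ms");
    }
}
```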
Think about all the work your computer is doing every few milliseconds. It is running a bunch of programs, refreshing the screen, handling mouse/keyboard input, etc. The Fibonacci program is nothing in the scheme of all that.
And in fact we can even take a stab at calculating what we'd expect it to take. If we look at the JVM instructions executed:
The repeated part of the loop body is from line 6 of the javap output through line 24. It's 13 JVM instructions. Some JVM instructions may map directly to a single corresponding hardware instruction, but a lot of them will probably take several. To be on the conservative side, let's say that each JVM instruction is 20 CPU instructions. That means that there are 260 CPU instructions per iteration of the loop, but let's call it 250 to simplify the math.
Let's further assume that we have a 1 GHz CPU, and that due to pipeline stalls and what-have-you, it takes 4 clock cycles to execute 1 CPU instruction. So 250,000,000 CPU instructions per second.
Let's also assume that there are 20 programs running on our computer, and that each one gets an even slice of the time, and here we'll cheat a little and assume that context switching costs nothing.
So we get to execute 1/20th of 250,000,000 CPU instructions / sec., which is 12,500,000 CPU instr / sec., and at 250 CPU instr / iteration, that's 50,000 Fibonacci iterations per second.
A very conservative estimate says that your 40 iterations would take just under a millisecond.
If we're less conservative, and assume:
1 JVM instr = 1 CPU instr,
a 2 GHz CPU,
no pipeline stalls, so 1 CPU instr / clock cycle,
no context switch between when we start timing and when we end,
then we get 2,000,000,000 instr / sec * 1 iteration / 13 instr ~= 150,000,000 iterations per second.
A very optimistic estimate says that your 40 iterations would take about a quarter of a microsecond.
So in the end, a few microseconds for 40 iterations seems to be in the ballpark.
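The two estimates above can be checked with a few lines of arithmetic. This snippet just redoes the calculation from the posts (same assumed numbers: 250 vs. 13 instructions per iteration, 1 GHz vs. 2 GHz, and so on); it isn't measuring anything real.

```java
public class BallparkEstimate {
    public static void main(String[] args) {
        int iterations = 40;

        // Conservative: 20 CPU instr per JVM instr -> ~250 instr/iteration,
        // 1 GHz CPU, 4 clock cycles per instruction, a 1/20th time slice.
        double conservativeInstrPerSec = 1e9 / 4.0 / 20.0;               // 12,500,000
        double conservativeItersPerSec = conservativeInstrPerSec / 250.0; // 50,000
        System.out.printf("conservative: %.2f ms%n",
                iterations / conservativeItersPerSec * 1000.0);           // ~0.80 ms

        // Optimistic: 1 CPU instr per JVM instr (13 per iteration),
        // 2 GHz CPU, 1 cycle per instruction, no context switches.
        double optimisticItersPerSec = 2e9 / 13.0;                        // ~153,800,000
        System.out.printf("optimistic: %.2f us%n",
                iterations / optimisticItersPerSec * 1e6);                // ~0.26 us
    }
}
```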
Besides what Jeanne and Jeff suggested (your computer is a lot faster than you expect), you should also keep in mind that System.currentTimeMillis() isn't ultra-precise. As the API documentation of that method says:
Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
So it is not very well suited for timing very short amounts of time (less than a few hundred milliseconds).
Jesper de Jong wrote:Besides what Jeanne and Jeff suggested (your computer is a lot faster than you expect), you should also keep in mind that System.currentTimeMillis() isn't ultra-precise.
For sure, on many versions of Windows, the finest resolution you could get was about 18 milliseconds. It might report a number that was not a multiple of 18, but those extra digits were essentially random bits: 18, 19, 22, 25, and 30 were all effectively the same number, 18.
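You can see your own system's clock granularity with a small probe: spin until the value reported by currentTimeMillis() changes, and print the size of each jump. (This is just an illustrative sketch; the jump size you see depends on your OS, as the API docs quoted above say.)

```java
public class ClockGranularity {
    public static void main(String[] args) {
        // Record the size of each observed tick of the millisecond clock.
        // On modern systems this is typically 1 ms; on older Windows
        // versions it could be ~10-18 ms, as described above.
        long last = System.currentTimeMillis();
        for (int observed = 0; observed < 5; ) {
            long now = System.currentTimeMillis();
            if (now != last) {
                System.out.println("clock advanced by " + (now - last) + " ms");
                last = now;
                observed++;
            }
        }
    }
}
```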
james lei wrote:I was skeptical how it took 0 milliseconds (to be exact, 3 microseconds) to complete?
And just to add a piece of practical advice to all the good general stuff you've received: you're much better off using System.nanoTime() to time pieces of code.
For one thing, the docs guarantee that it will be the most precise timing available - but don't mistake that for meaning that it will necessarily be any better than currentTimeMillis() (although it usually is).
Second: both methods take a significant amount of time to execute themselves (and nanoTime() takes longer than currentTimeMillis()), so you need to factor that into your tests - e.g., by running your test MANY times and calculating the average - otherwise you're likely to run into the effects of the Heisenberg principle (simply put: the accuracy of your timing is affected by the time it takes to measure the time it takes).
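Both pieces of advice can be combined in one sketch: measure the cost of nanoTime() itself, then time many runs and report the average instead of a single reading. (The fib method and the run count here are just placeholder assumptions; substitute your own code under test.)

```java
public class NanoTiming {
    // Placeholder method under test; substitute your own loop.
    static int fib(int n) {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++) { int t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        final int RUNS = 1_000_000;

        // First, estimate the cost of a nanoTime() call itself,
        // so you know the floor below which timings are meaningless.
        long overheadStart = System.nanoTime();
        for (int i = 0; i < RUNS; i++) System.nanoTime();
        double overheadPerCall = (System.nanoTime() - overheadStart) / (double) RUNS;

        // Then time MANY runs and average, rather than trusting one reading.
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < RUNS; i++) sink += fib(40);
        double avgNanos = (System.nanoTime() - start) / (double) RUNS;

        System.out.printf("avg fib(40): %.1f ns (nanoTime overhead ~%.1f ns/call)%n",
                avgNanos, overheadPerCall);
        // Use the result so the JIT can't eliminate the timed loop entirely.
        if (sink == 42) System.out.println();
    }
}
```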