
A double has higher precision than a float. The number 1 can be represented exactly in binary (in just 1 bit), so there is no problem comparing it. The number 0.1, however, does not have an exact representation in binary: it is an infinitely repeating binary fraction (0.000110011001100…), so both a float and a double store a rounded approximation of it. The double's higher precision just gives a closer approximation than the float's. Essentially, the difference you see is rounding error.
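A quick sketch of the point above (the class name is my own). Comparing 1.0 across the two types works because its binary representation is exact, while 0.1 rounds differently at float and double precision, so the widened float no longer equals the double:

```java
public class ExactVsInexact {
    public static void main(String[] args) {
        // 1.0 is exactly representable in binary, so float and double agree
        System.out.println(1.0f == 1.0);   // prints true

        // 0.1 is a repeating binary fraction; the float and double
        // approximations differ, so the comparison (after the float is
        // widened to double) is false
        System.out.println(0.1f == 0.1);   // prints false
    }
}
```

Note that in `0.1f == 0.1` Java widens the float to a double before comparing, which preserves the float's rounding error rather than hiding it.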

I tried this, and expanding 1.1 to 12 decimal places I got:

double = 1.100000000000
float  = 1.100000023842

A float is good for about 7 significant decimal digits and a double for about 15 to 16, which is why the difference shows up after 8 decimal places here.
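The expansion above can be reproduced with `printf` and the `%.12f` format specifier, which prints twelve digits after the decimal point (the class name is my own; output assumes the default English-style locale):

```java
public class ExpandDecimals {
    public static void main(String[] args) {
        float f = 1.1f;
        double d = 1.1;

        // Print both values to 12 decimal places. The float is widened to
        // double for formatting, which exposes its rounding error.
        System.out.printf("double = %.12f%n", d);  // 1.100000000000
        System.out.printf("float  = %.12f%n", f);  // 1.100000023842
    }
}
```

The double's error is still there, but it first appears around the 17th decimal place, so it rounds away at 12 digits; the float's error appears at the 8th.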

Franz Fountain
Ranch Hand

Joined: Nov 15, 2006
Posts: 58


Hi James,

I got interested in this problem and wrote a program to see what's going on. Here's the code.