A double has higher precision than a float. The number 1 can be represented exactly in binary (it needs just one bit), so there is no problem comparing it. The number 0.1, however, has no exact binary representation: in binary it is the repeating fraction 0.000110011001100..., so any finite number of bits can only approximate it. Here the higher precision of the double gives a more accurate approximation than the float. Essentially the difference you see is a rounding error.
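You can actually see the rounding error for yourself. Here's a small sketch (class name is mine) that uses the `BigDecimal(double)` constructor, which preserves the exact binary value stored in the double rather than rounding it back to a short decimal string:

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact value the double holds,
        // unlike BigDecimal.valueOf(double), which goes via the decimal string.
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

So the double closest to 0.1 is actually a tiny bit bigger than 0.1, which is exactly the rounding error described above.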
I tried this and when expanding 1.1 to 12 decimal places I got: double = 1.100000000000, float = 1.100000023842
A float is good for about 7 significant decimal digits and a double for about 15 to 16, so it makes sense that the difference shows up after the 8th decimal place here.
I got interested in this problem and wrote a program to see what's going on. Here's the code.
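The original listing didn't come through here, but a minimal sketch of such a program might look like this (class name and formatting are my assumptions; it just prints both values expanded to 12 decimal places, as in the comparison above):

```java
public class FloatVsDouble {
    public static void main(String[] args) {
        double d = 1.1;
        float f = 1.1f;

        // Expand both to 12 decimal places; the float is widened to double
        // for printing, which exposes the garbage past its ~7 good digits.
        System.out.printf("double = %.12f%n", d);
        System.out.printf("float  = %.12f%n", (double) f);
    }
}
```

On my machine this prints `double = 1.100000000000` and `float  = 1.100000023842` (the decimal separator depends on your default locale), matching the numbers quoted above.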