You may notice a loss of precision going from int to float when you're dealing with large numbers. With small numbers the two match exactly - in fact every whole number up to 2**24 (that's 16,777,216, so comfortably past a million) has its own exact float. Beyond that, the match starts to slip.
Why is this? Well, if you're really techie, read on - but beware! I'm writing this past midnight with no reference books to hand.
Consider this: Both int and float are 32 bits. Now, 32 bits can take on 2**32 distinct "values" (that is, 2-raised-to-32nd power - ie. 2 times 2 times 2 ..... 32 times). Therefore, both int and float can only have this number of distinct values as well.
The int type holds these as precise integers up to (plus-or-minus) 2**31 - roughly -2,147,483,648 through +2,147,483,647.
The float type holds a fantastically wider range of numbers: roughly from 10**-38 through to 10**38 (positive and negative).
BUT.....
You can't get something for nothing. The float type has a wider range of values, but remember that it can still only hold 2**32 distinct values. That means, if there is a "good" match with integers around zero, there has to be a "worse" match with integers far from zero - ie. BIG numbers. Concretely, a float spends some of its 32 bits on an exponent and keeps only 24 bits of actual precision (the significand), so above 2**24 not every integer gets its own float - neighbouring integers start mapping to the same value.