Floating-point values are stored in a format resembling scientific notation. (For the precise details, search for the "IEEE 754" standard.) Basically, this means that some of the bits in a float are used to store a
value, and some of the bits are used to store an
exponent. As explained by Thomas De Vos in the original
thread you referenced, "A float contains as the most significant bit the sign, followed by 8 bits which represent the exponent and 23 bits for the value or called the mantissa."
Note: "...23 bits for the value."
Now, if the value is "normalized" (which means the stored exponent field is non-zero), then there is an
implicit "1-point" preceding the value. For example, a binary value of 1.1001 only needs to be stored as 1001, because the "1-point" at the beginning is implied.
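You can see this bit layout directly with `Float.floatToIntBits`. The sketch below (class name is mine) pulls apart the sign, exponent, and mantissa fields of the value 1.1001 in binary, which is 1.5625 in decimal; note that the stored mantissa is just 1001 padded with zeros, with the leading "1-point" dropped:

```java
public class FloatBits {
    public static void main(String[] args) {
        float f = 1.5625f;  // 1.1001 in binary
        int bits = Float.floatToIntBits(f);

        int sign     = bits >>> 31;         // 1 bit
        int exponent = (bits >> 23) & 0xFF; // 8 bits, biased by 127
        int mantissa = bits & 0x7FFFFF;     // 23 bits, implicit leading 1 dropped

        // prints: sign=0 exponent=127 mantissa=10010000000000000000000
        System.out.println("sign=" + sign
                + " exponent=" + exponent
                + " mantissa=" + Integer.toBinaryString(mantissa));
    }
}
```

The exponent prints as 127 rather than 0 because the 8-bit exponent field stores the true exponent plus a bias of 127.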
So, allowing for the implicit "1-point," an int value that requires 24 bits uses the full 23 bits allotted to the mantissa when stored as a float. Let's take a look at int values requiring more than 24 bits and see what happens when they are assigned to a float and then cast back to type int.
The code below starts at a given int value, then prints the next 100 int values for which no precision is lost in the conversion to float and back. As you will see from the output, all is well until the int value requires more than 24 bits. At that point, gaps (loss of precision) start to appear. In particular, note the
pattern at the far right of the binary representation.
To help see this pattern better, start with a higher int value, like (int)Math.pow(2, 27).
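A sketch of such a program (class and variable names are mine, and the starting value is just one reasonable choice near the 24-bit boundary):

```java
public class FloatGaps {
    public static void main(String[] args) {
        int start = (int) Math.pow(2, 24) - 3;  // just below the 24-bit boundary
        int printed = 0;

        for (int i = start; printed < 100 && i < Integer.MAX_VALUE; i++) {
            float f = i;          // widen int to float (may lose precision)
            if ((int) f == i) {   // survives the round trip unchanged?
                System.out.println(i + "  " + Integer.toBinaryString(i));
                printed++;
            }
        }
    }
}
```

Below 2^24 every value prints; above it, values start to be skipped, and the skipped values show up as trailing bits the float can no longer hold.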