I write about this JLS example at great length in "Java Rules". It was very upsetting for me the first time I realized that a "loss of precision" could mean a different whole number. The following three type conversions can result in a loss of precision:

int to float

long to float

long to double
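The int-to-float case is easy to demonstrate in a few lines. The value 16777217 (2^24 + 1) is just an illustrative pick: it is the smallest positive int that does not fit in a float's 24-bit significand, so the conversion rounds it to a different whole number.

```java
public class PrecisionLoss {
    public static void main(String[] args) {
        int i = 16777217;      // 2^24 + 1: one bit more than a float significand can hold
        float f = i;           // widening primitive conversion: compiles with no cast
        System.out.println((int) f);       // prints 16777216, a different whole number
        System.out.println(i == (int) f);  // prints false
    }
}
```

Note that the compiler accepts the assignment silently; a widening conversion never requires a cast, even when it loses precision.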

The explanation for why a loss of precision is not considered a narrowing (read "unsafe") type conversion is simply that the significand of a floating-point type is designed to be imprecise. That is what makes floating-point types useful to scientists, who work with very large and very small numbers whose magnitude is stored in the exponent field.

More precisely, an important distinction is made between a loss of magnitude and a loss of precision in the floating-point types. Primitive type conversions from integral to floating-point types are categorized as widening primitive conversions (that is, safe) because there is no loss of magnitude. A loss of magnitude is defined as a change in the exponent as a result of the type conversion. When discussing floating-point types, always remember:

Magnitude is to the exponent what precision is to the significand.

There is no potential for a loss of magnitude in the three type conversions under consideration, only a loss of precision. When converting from int to float or from long to float or double, the highest power of two is always stored in the exponent. Specifically, the highest power of two in Integer.MAX_VALUE can be stored in the exponent of a float. Likewise, the highest power of two in Long.MAX_VALUE can be stored in the exponent of a float or double. In fact, the exponents of the floating-point types can store powers of two that are much greater than Integer.MAX_VALUE and Long.MAX_VALUE. A loss of precision always refers to the significand.
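The distinction can be made concrete with `Math.getExponent`, which reads the exponent field of a floating-point value. The value 2^53 + 1 is just an illustrative pick: it is the smallest positive long that does not fit in a double's 53-bit significand.

```java
public class MagnitudeVsPrecision {
    public static void main(String[] args) {
        long l = (1L << 53) + 1;   // 9007199254740993: needs 54 significand bits
        double d = l;              // widening primitive conversion

        // The magnitude (the highest power of two in l) survives intact.
        System.out.println(Math.getExponent(d));   // prints 53

        // The low-order bit of the significand does not: a loss of precision.
        System.out.println((long) d);              // prints 9007199254740992
    }
}
```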

The conceptual difficulty here is that you cannot think of the floating-point types as simply numbers that include a decimal point. They have very specific uses. If "exact" results are required, Java offers BigDecimal. Java may be the first programming language in which programmers routinely use a numeric class (instead of a primitive data type) for exact results. In the past, various programming techniques (such as rounding) have been used to compensate for the loss of precision in floating-point types. I could go on and on about this subject, but would refer you to "Java Rules" for a more detailed discussion.
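A minimal sketch of the contrast, using the familiar 0.1 + 0.2 case: the double result carries the usual binary rounding error, while BigDecimal (constructed from Strings, not from doubles) gives the exact decimal answer.

```java
import java.math.BigDecimal;

public class ExactResults {
    public static void main(String[] args) {
        // Binary floating-point cannot represent 0.1 or 0.2 exactly.
        System.out.println(0.1 + 0.2);             // prints 0.30000000000000004

        // BigDecimal performs exact decimal arithmetic.
        BigDecimal exact = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(exact);                 // prints 0.3
    }
}
```

The String constructor matters: `new BigDecimal(0.1)` would faithfully capture the double's inexact binary value rather than the decimal 0.1.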