The implicit narrowing conversions that Java does permit, of integral constant expressions to smaller integral types, can never lose precision, because the compiler allows them only when the constant's value actually fits in the target type.
Casting a double constant to a float can lose precision, not only in obvious cases like "1.23456789012" but also in subtle cases like ".3". It is not good to have language rules whose effects can only be verified by running a program or by knowing binary fractions really well.
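To see the contrast concretely, here is a minimal sketch (the class name is my own invention):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        // Implicit narrowing of an int constant expression is legal
        // exactly when the value fits the target type, so nothing is lost:
        byte ok = 100;       // compiles: 100 fits in a byte
        // byte bad = 200;   // does not compile: 200 is out of range

        // Narrowing a double to a float is a different story. Even the
        // innocent-looking .3 loses precision, because the nearest float
        // to .3 is not the same number as the nearest double to .3:
        float f = (float) .3;
        System.out.println(f == .3);         // prints false
        System.out.println((double) f - .3); // a small positive gap
    }
}
```

The `byte` case is checked at compile time; the `float` case compiles silently and can only be discovered by running the program, which is exactly the complaint above.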
On the other hand, allowing implicit narrowing conversions of doubles to floats only when there is no possible loss of precision would make for lots of challenging exam questions and entertaining threads on this BB. Maybe I should suggest it to James Gosling if I ever meet him.