In Java, implicit casting during assignment is done only for the byte, char, and short primitive types: an integer literal defaults to int, and the compiler implicitly narrows it to those smaller types when the value fits their range. For floating-point literals the default type is double, so when you try to assign a floating-point value with something like float f = 2.5; the literal 2.5 is a double, which cannot be implicitly cast to float. You need to mark it explicitly as a float: float f = 2.5f;
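A minimal sketch of the rules above (the class name is just for illustration); the commented-out line is the one the compiler rejects:

```java
public class FloatLiterals {
    public static void main(String[] args) {
        // float bad = 2.5;    // compile error: 2.5 is a double literal
        float f = 2.5f;        // OK: the 'f' suffix makes it a float literal
        double d = 2.5;        // OK: floating-point literals default to double
        float g = (float) 2.5; // an explicit cast also works
        System.out.println(f + " " + d + " " + g);
    }
}
```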
ehn,... in case you are still confused, I suggest you go and study the K&B book for SCJP 1.4; there is a very good explanation of how byte, char, short, int, long, float, and double are constructed. Then you will understand how "implicit" casting during assignment is permitted.
Basically, it depends on the number of bits each data type holds. In your example, byte is 8 bits, with a data range of -128 to 127, so 1 (literally an int value) is within that range and can be assigned to the byte type.
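To illustrate the range check the compiler performs on constant values (class name is just for illustration):

```java
public class ConstantNarrowing {
    public static void main(String[] args) {
        byte b = 1;       // OK: the int constant 1 fits in byte's range -128..127
        // byte c = 128;  // compile error: 128 is out of byte's range
        short s = 1000;   // OK: fits in short's range -32768..32767
        char ch = 65;     // OK: 65 fits in char's range and becomes 'A'
        System.out.println(b + " " + s + " " + ch);
    }
}
```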
Your float question is answered in the book too: 1.0 (literally a double value) is of type double, so if you want to assign it to a float type, you should add "f" or "F" at the end of the value --> 1.0f
If you want to be more amused: the int constant 1 can be assigned to a byte variable only as you've written it. If you try this instead: int i = 1; byte b = i; // compile error
In this case you should cast explicitly: byte b = (byte) i;
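Putting those two cases together (the overflow variable is my own addition, to show what an explicit narrowing cast does when the value does not fit):

```java
public class ExplicitNarrowing {
    public static void main(String[] args) {
        int i = 1;
        // byte bad = i;          // compile error: i is a variable, not a constant
        byte b = (byte) i;        // explicit cast required, even though 1 fits
        int big = 300;
        byte overflow = (byte) big; // narrowing keeps the low 8 bits: 300 -> 44
        System.out.println(b + " " + overflow);
    }
}
```

The cast is required for variables because the compiler only guarantees the range of compile-time constants; a variable could hold any int value at runtime.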
Originally posted by Quintin Stephenson: I believe it is to do with precision and the way a value is constructed...
Yes, "precision" is the key.
Integral values (byte, short, etc.) are exact binary representations. They can be widened -- or even narrowed, if within range -- without losing any information. But floating-point values (float and double) are often binary approximations with different levels of precision.
Basically, this is because floats and doubles have a limited number of bits to store fractional values, and not all fractional values can be expressed in binary (sums of negative powers of 2) using the bits available.
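A small sketch of that approximation in action (the literal 0.1 has no exact binary representation, while 0.5 = 2^-1 does):

```java
public class FloatPrecision {
    public static void main(String[] args) {
        float f = 0.1f;
        double d = 0.1;
        // float and double store different approximations of 0.1,
        // so widening f to double does not recover d:
        System.out.println(f == d);   // false
        // the approximation error shows up in double arithmetic too:
        System.out.println(0.1 + 0.2); // 0.30000000000000004, not 0.3
        // 0.5 is an exact negative power of 2, so nothing is lost:
        System.out.println(0.5f == 0.5); // true
    }
}
```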