Hello, I do not understand why, when we assign a final variable of a primitive type (e.g. int) to a narrower type (e.g. byte), the compiler will NOT throw an error, as long as the value of the final variable is within range. In case 2 below, where the variable g is NOT final, we get the compiler error POSSIBLE LOSS OF PRECISION, and the same happens in case 3, where the final variable's value is larger than byte's range. MY QUESTION IS: why, with the final variable in case 1, do we NOT have to explicitly downcast? Many thanks in advance!
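Here is a minimal version of the three cases I mean (the variable names f, g, and h are just placeholders I picked):

    public class NarrowingDemo {
        public static void main(String[] args) {
            // Case 1: final int whose value fits in a byte --
            // compiles with no explicit cast.
            final int f = 100;
            byte a = f;            // OK: compile-time constant, fits in byte

            // Case 2: same value, but g is NOT final -- compiler error
            // "possible loss of precision" (newer javac says
            // "incompatible types: possible lossy conversion from int to byte").
            int g = 100;
            // byte b = g;         // ERROR without an explicit cast
            byte b = (byte) g;     // explicit downcast required

            // Case 3: final, but the value is outside byte's range
            // (-128..127) -- compiler error even though h is final.
            final int h = 200;
            // byte c = h;         // ERROR: 200 does not fit in a byte
        }
    }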
Because a final variable initialized with a constant expression is a compile-time constant: the compiler knows its exact value, that value can never change, and so it can verify at compile time that the value fits within byte's range. With a non-final int the value isn't known until runtime, so the compiler must assume it might be out of range and demands an explicit cast.
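One caveat worth adding: final by itself is not enough. The variable has to be a compile-time constant, meaning a final primitive variable initialized at its declaration with a constant expression. If the value only arrives at runtime, the implicit narrowing disappears, as in this sketch (the class name NotAConstant is a placeholder):

    public class NotAConstant {
        public static void main(String[] args) {
            // f is final, but its value comes from a runtime expression,
            // so it is NOT a compile-time constant.
            final int f = args.length;   // value unknown until runtime
            // byte b = f;               // ERROR: possible loss of precision
            byte b = (byte) f;           // explicit cast still required
        }
    }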