In the sample code below, only the short arithmetic is producing a compiler error. Does anyone know why that is? Someone told me that it could be because adding two shorts can produce a value bigger than the maximum a short can hold. But if that's true, doesn't that apply to all the other primitive data types?
C:\Test.java:30: possible loss of precision
found   : int
required: short
        short s = short1.shortValue() + short2.shortValue();
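For reference, here's a minimal sketch of the situation (I'm assuming short1 and short2 are Short wrapper objects, since that's what the shortValue() calls suggest):

```java
public class Test {
    public static void main(String[] args) {
        Short short1 = Short.valueOf((short) 1);
        Short short2 = Short.valueOf((short) 2);

        // Both operands are promoted to int before the addition,
        // so the result of the + is an int:
        int ok = short1.shortValue() + short2.shortValue(); // compiles fine

        // short s = short1.shortValue() + short2.shortValue();
        //     ^ error: possible loss of precision

        System.out.println(ok); // prints 3
    }
}
```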
One way to get around the compiler error is to do this, but I'm wondering if it will give unexpected results, e.g. when two maximum-value shorts are added together. Then again, I could be wrong. Any input?
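To make the worry concrete, here's what the cast-based workaround (which I'm assuming is what was meant by "do this") does at the extremes:

```java
public class ShortWrap {
    public static void main(String[] args) {
        short a = Short.MAX_VALUE; // 32767
        short b = Short.MAX_VALUE;

        // The addition happens in int; the cast then truncates to 16 bits
        short s = (short) (a + b);

        System.out.println(a + b); // 65534, the true int result
        System.out.println(s);     // -2, the wrapped-around short
    }
}
```

So the cast silences the compiler but doesn't prevent the overflow; it just truncates silently.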
[ August 31, 2004: Message edited by: David Duran ]
Java really prefers that you do arithmetic operations with ints. (It will be interesting to see what happens as more and more 64-bit native machines appear.)
Casting the result to a short can lead to unexpected results if the actual result is too big to fit in a short. What will happen is that the result will "wrap around" in binary, probably ending up negative. Similarly, a negative result with too large a magnitude to fit in a short will wrap around to something positive.
Of course, that's a risk with ints, too. Sometimes, when the application calls for it, I test for that possibility ahead of time and maybe throw an exception, but usually I just try to use a numeric type big enough to hold all foreseeable results.
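For what it's worth, the kind of ahead-of-time test I mean looks roughly like this (just a sketch; checkedAdd is a made-up helper name, not a library method):

```java
public class OverflowCheck {
    // Throws if a + b would overflow the int range.
    // The trick: compare against MAX/MIN *before* adding,
    // so the check itself can't overflow.
    static int checkedAdd(int a, int b) {
        if (b > 0 && a > Integer.MAX_VALUE - b) {
            throw new ArithmeticException("int overflow");
        }
        if (b < 0 && a < Integer.MIN_VALUE - b) {
            throw new ArithmeticException("int overflow");
        }
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(checkedAdd(1, 2)); // prints 3
        try {
            checkedAdd(Integer.MAX_VALUE, 1); // would wrap, so it throws
        } catch (ArithmeticException e) {
            System.out.println("caught overflow");
        }
    }
}
```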
Originally posted by Warren Dew: Java really prefers that you do arithmetic operations with ints.
So much so that it actually converts short1.shortValue() and short2.shortValue() to int before doing the addition (the spec calls this binary numeric promotion). It does this with byte and char too. Hence your error basically says, "got an int when expecting a short" and you go, "what, but I gave you two shorts!"
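A quick sketch to show the same promotion happening with byte and char:

```java
public class Promotion {
    public static void main(String[] args) {
        byte b1 = 10, b2 = 20;
        char c1 = 'A';

        // byte + byte and char + int both produce an int
        int bSum = b1 + b2; // compiles with no cast
        int cSum = c1 + 1;  // 'A' is 65, so cSum is 66

        // byte b3 = b1 + b2;       // error: possible loss of precision
        byte b3 = (byte) (b1 + b2); // explicit cast required, just like short

        System.out.println(bSum + " " + cSum + " " + b3); // prints 30 66 30
    }
}
```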
I'm sure this kind of messing with primitive types (there's more with method parameters and something called compile-time constants :roll: ) was responsible for a fair proportion of the 20% I dropped on my SCJP. Think I get it now though.