Why does this work?
final int x1 = 10;
short x2 = x1;
And if you take the 'final' qualifier off the x1 declaration, the compiler/IDE whines and it doesn't compile.
And you'd think that if the above works, then doing the same thing between a long and an int would work too, but it doesn't.
Oh, and a big right-on to this site; I've been cramming for a couple of months for the cert and this place has really helped!
Closing post; thanks for the info! That gives me more to dig into. Kb
It works with final because that makes x1 a compile-time constant, so it's as if you'd written short x2 = 10; directly.
It's called a narrowing primitive conversion, I think, and it happens automatically in some cases where the compiler can be sure the constant's value fits in the target type.
As for why the inconsistency between int --> short and long --> int, that I don't know. I'd guess that maybe they decided to make it automatic for narrowing from int A) because it's so common, and/or B) because in particular int constants are so common--the default for all integer constant types if you don't specify otherwise.
The second half of that guess means that we can do this without having to cast: byte b = 1;
Jeff Verdegan wrote: The second half of that guess means that we can do this without having to cast.
But presumably not:
byte b = 1L;
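Putting the thread's examples together, here's a sketch of what does and doesn't compile (the class name and the extra variables are mine):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        final int x1 = 10;   // compile-time constant
        short x2 = x1;       // OK: constant int value fits in a short
        byte b = 1;          // OK: int literal 1 fits in a byte
        char c = 65;         // OK: int literal 65 fits in a char

        // int y = 10; short z = y;  // won't compile: y isn't a constant
        // short s = 40000;          // won't compile: 40000 doesn't fit in a short
        // byte b2 = 1L;             // won't compile: the source is a long, and
        //                           // this implicit narrowing never applies to
        //                           // a long source, even a constant one
        System.out.println(x2 + " " + b + " " + (int) c);
    }
}
```

The commented-out lines are the ones the compiler rejects; everything else compiles without a cast.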
@Keith: The fact is that you should generally cast all narrowing conversions explicitly. For one thing, it tells whoever's reading your program exactly what's going on; and also that you haven't simply forgotten about it.
Furthermore, I'd get familiar with the different types of literal available. There are few things I find more annoying than:
double one = 1;
despite the fact that it's perfectly legal.
(It should, of course, be: double one = 1.0)
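That advice about explicit casts might look like this in practice (a quick sketch; the values and names are mine):

```java
public class CastDemo {
    public static void main(String[] args) {
        long big = 10_000_000_000L;
        int i = (int) big;         // explicit cast: the reader can see that
                                   // the high-order bits will be discarded
        System.out.println(i);     // 1410065408, not 10000000000

        double d = 3.99;
        int whole = (int) d;       // narrowing a double truncates toward zero
        System.out.println(whole); // 3

        double one = 1.0;          // a proper double literal, as suggested above
        System.out.println(one);   // 1.0
    }
}
```

The cast doesn't make the data loss go away; it just makes it visible and deliberate.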
Surely double one is 2.
Actually, there is another way to write a double literal which is a whole number:-
double one = 2d;
And you can even write it in hexadecimal, which might be:
double one = 0x1.0p1;
which is of course 2.0.
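For anyone who wants to check those literal forms, a quick sketch (the class name is mine):

```java
public class DoubleLiterals {
    public static void main(String[] args) {
        double a = 2d;        // the 'd' suffix makes an integral-looking double
        double b = 0x1.0p1;   // hex floating point: 1.0 times 2 to the power 1
        System.out.println(a == b);  // true: both are exactly 2.0
        System.out.println(0x1.8p1); // 3.0, since 0x1.8 is 1.5 and p1 doubles it
    }
}
```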