The width of an integer type should not be thought of as the amount of storage it consumes, but rather as the behavior it defines for variables and expressions of that type. The Java run-time environment is free to use whatever size it wants, as long as the types behave as you declared them. -------- from Java: The Complete Reference, Integers topic
Can anybody explain to me the difference between behaviour and use of memory as intended by the author?
I don't know much about this, but I'll give it a shot. You see, the size of an int is fixed only from our point of view; for the JVM it can be platform specific. But it is the JVM's responsibility to ensure that we get the same behavior no matter how much memory an int actually occupies. If it occupied more space on one OS than on another, that should not change the behavior of the program. In other words, the range of int stays the same regardless of how much memory it really uses. I've read something similar about floating-point calculations and strictfp, and I think it applies to this case too...
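To make that concrete, here is a small sketch of what "same behavior everywhere" means: the language spec pins int down as a 32-bit two's-complement value, so overflow wraps identically on every JVM, whatever the underlying hardware word size:

```java
public class IntBehavior {
    public static void main(String[] args) {
        // int is defined as 32-bit two's complement, so this
        // overflow wraps the same way on every platform.
        int max = Integer.MAX_VALUE;          // 2147483647
        int wrapped = max + 1;                // wraps to -2147483648
        System.out.println(wrapped);          // prints -2147483648
        System.out.println(wrapped == Integer.MIN_VALUE); // prints true
    }
}
```

The storage the JVM uses internally could be anything; the arithmetic above is what "behavior" guarantees.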
The integral types are byte, short, int, and long, whose values are 8-bit, 16-bit, 32-bit and 64-bit signed two's-complement integers, respectively, and char, whose values are 16-bit unsigned integers representing UTF-16 code units (§3.1).
So an int value is always 32 bits (= 4 bytes) wide. That may mean that on 64-bit machines the JVM leaves half a word vacant when storing an int. I believe that used to happen for byte, short, and char before Java 1.4, but I might be mistaken.
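You can even ask the standard library for this defined width. A minimal sketch (note that Integer.BYTES only exists from Java 8 on; Integer.SIZE has been there since Java 5):

```java
public class IntWidth {
    public static void main(String[] args) {
        // These constants report the width defined by the language
        // spec, not whatever the JVM's internal storage happens to be.
        System.out.println(Integer.SIZE);   // prints 32 (bits)
        System.out.println(Integer.BYTES);  // prints 4  (bytes)
    }
}
```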
But whatever amount of storage is used, we programmers ignore it. The idea behind using a high-level language is that we can forget about that sort of thing. We simply remember that an int behaves as if it has a "width" of 32 bits, and that's that. [Please can we have a "that's that" smilie ]