Since a long is 64 bits and a float is only 32 bits, how does the maximum long value fit into a float? Take a look at the following code. The maximum long value magically appears when saved in a float and reconverted into a long. How is this possible?
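The code from the original post wasn't preserved here, so here is a minimal sketch of the round trip being described (class and variable names are my own):

```java
public class LongFloatRoundTrip {
    public static void main(String[] args) {
        long max = Long.MAX_VALUE;        // 9223372036854775807
        float f = max;                    // widening conversion: no cast needed
        long back = (long) f;             // narrowing conversion: cast required
        System.out.println(max == back);  // prints true
    }
}
```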
Yes, 64 bits of data can effectively fit into 32 bits. Here's how:
How could you fit 64 bits of data into 32 bits? Well, you can't. However, you can represent the same data that is represented in 64 bits in 32 bits if you look at the bits in a different way. If you're familiar with scientific notation, you know that 10,000 is equivalent to 1x10^4. Note that we just wrote the same number two different ways. Floating point numbers are the scientific notation of the computer world. With a float, a small number of bits represents the significant digits of the number, while another group of bits represents the power of two by which that number is multiplied - it's effectively base 2 scientific notation.
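To make that "base 2 scientific notation" concrete, here is a small sketch (my own example, not from the thread) that pulls apart the IEEE 754 bit layout of a float using Float.floatToIntBits: 1 sign bit, 8 exponent bits, and 23 fraction bits.

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(10000.0f);
        int sign     = bits >>> 31;            // 1 bit
        int exponent = (bits >>> 23) & 0xFF;   // 8 bits, biased by 127
        int fraction = bits & 0x7FFFFF;        // 23 bits of significand
        // 10000 = 1.220703125 x 2^13, so the unbiased exponent is 13
        System.out.println("sign=" + sign
            + " exponent=" + (exponent - 127)
            + " fraction=0x" + Integer.toHexString(fraction));
    }
}
```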
So what does that mean as far as our conversions go? Well, using scientific notation, we can represent a far wider range of values using fewer bits than if we had not used scientific notation. The largest number a long can hold is 2^63 - 1. Meanwhile, a float, which is 32 bits shorter than a long, can hold up to (2 - 2^-23) x 2^127.
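You can check those limits directly with the constants the standard library provides:

```java
public class RangeLimits {
    public static void main(String[] args) {
        // 2^63 - 1
        System.out.println(Long.MAX_VALUE);   // prints 9223372036854775807
        // (2 - 2^-23) x 2^127
        System.out.println(Float.MAX_VALUE);  // prints 3.4028235E38
    }
}
```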
You see, any value that can be represented by a long can be approximated by a float. Therefore, if you convert a long to a float, you have no risk of the result falling out of range (though, as we'll see, you can lose precision). That means that a conversion from a long to a float, even though a long is "bigger" than a float, is considered a widening conversion. You see, whether a conversion is considered to be widening or narrowing is really about the "range" of the types, not the size. It just so happens that, if the types represent numbers in the same way (as ints and longs do), the comparison is the same - a larger size means a larger range. It's when the data types don't represent numbers in the same way that we have a problem, as is the case with longs and floats.
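Here's a quick sketch (my own example) of how the compiler treats the two directions: the widening conversion needs no cast, while the narrowing one must be explicit.

```java
public class CastDirections {
    public static void main(String[] args) {
        long l = 123L;
        float f = l;            // widening long -> float: implicit
        // long bad = f;        // won't compile: "possible lossy conversion"
        long back = (long) f;   // narrowing float -> long: explicit cast
        System.out.println(back);  // prints 123
    }
}
```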
Once again, we have a case in which the two data types represent data in different ways. A short is a "signed" data type, meaning that it can represent both positive and negative values. It contains 15 bits of numerical data and 1 sign bit to denote the sign. A char, on the other hand, is an "unsigned" data type; it holds only positive values. Therefore, a char has no need for a sign bit and contains 16 bits of numerical data. The range for a short is from -32768 to 32767. The range for a char is 0 to 65535.
As you can see, there are values that are representable by a short that can't be represented by a char; the value -12 is a good example. Likewise, there are values that are representable by a char that are not representable by a short; the value 64000 is a good example. Therefore, when we convert from a short to a char or vice versa, we run the risk of losing data. Therefore, converting in either direction is considered a narrowing conversion. In fact, converting a byte, which is only 8 bits, to a char is considered a narrowing conversion as well because a byte is signed. Char, as the only "unsigned" primitive data type in Java, is a little special.
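A short sketch (my own example, using the same -12 and 64000 values) of what those narrowing conversions actually do to the bits:

```java
public class ShortCharNarrowing {
    public static void main(String[] args) {
        short s = -12;
        char c = (char) s;           // cast required: narrowing
        System.out.println((int) c); // prints 65524: the sign is lost

        char big = 64000;
        short t = (short) big;       // cast required in this direction too
        System.out.println(t);       // prints -1536: out of short's range
    }
}
```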
Well, I think that's about enough for my discussion of narrowing and widening conversions. Just remember that these conversions are all about range, not about size.
OK. So I understand about scientific notation and the total range of a float being greater than that of a long. However, there is also the issue of precision, as in the number of significant figures. The max long value has 63 bits of precision (ignoring the sign bit), while a float has at most 32 bits (actually it's going to need some of those bits for a sign and an exponent).
Now I know that the people that came up with the way of representing large floating point numbers on computers are very clever, but I don't see how 64 bits of information can fit into 32 bits without some loss of information.
Just to give a very mundane example of what I'm talking about, to be absolutely clear, I'll ignore binary and just use decimal to simplify the discussion. Let's say I have an integer with 8 decimal digits: 87,654,321. Then I convert it to a number with 4 decimal digits of precision, expressed in scientific notation: 8.765E+7. I have just lost the last 4 digits. If I convert it back to a regular integer I get 87,650,000, which approximates the original number, but with less precision.
How does the float type in Java get around this limitation? That's my real question. Maybe I didn't state it properly in the beginning, but the example with the MAX long value gets to the point I think.
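For what it's worth, here is a sketch (my own reading, not from the original thread) of why the MAX_VALUE example looks exact even though a float can't hold 63 bits of precision: the value rounds up on the way out, and the cast back clamps on the way in.

```java
public class WhyItSeemsExact {
    public static void main(String[] args) {
        // An ordinary long really does lose low-order digits, just like
        // the decimal example above:
        long original = 87_654_321L;
        System.out.println((long) (float) original);  // prints 87654320

        // Long.MAX_VALUE (2^63 - 1) rounds UP to exactly 2^63 as a float;
        // casting that back to long would overflow, and Java clamps an
        // out-of-range result to Long.MAX_VALUE -- so the round trip
        // appears to be exact:
        float f = Long.MAX_VALUE;
        System.out.println((long) f == Long.MAX_VALUE);  // prints true
    }
}
```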
Can anyone please explain this? Even I would like to know the logic behind it.
I did some more experimenting with this and, as I originally thought, not all long values can be represented 100% accurately as floats. I just happened to pick one that comes back 100% accurate.
If you run code like the following, you'll see lots of numbers that are not translated with 100% accuracy from long to float.
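The code from this post wasn't preserved either; a loop along these lines (the starting value and names are my own) shows the effect:

```java
public class PrecisionSweep {
    public static void main(String[] args) {
        int lossy = 0;
        // Near one billion, consecutive float values are 64 apart, so
        // most consecutive long values do not survive the round trip.
        for (long value = 1_000_000_000L; value < 1_000_000_100L; value++) {
            if ((long) (float) value != value) {
                lossy++;
            }
        }
        System.out.println(lossy + " of 100 values changed");  // 98 of 100
    }
}
```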
What I find slightly amusing is that Java will give a "loss of precision" error when translating a float to a long, but will give no warning of "loss of precision" when translating from long to float.
I know, I know... the error relates to range not number of significant digits. :roll:
I guess this is an Alice Through the Looking Glass situation where a word means exactly what Java wants it to mean; in this case, the word "precision" has been given an altered meaning by Java. So now you know. Don't worry, I still like Java, even with all its quirks.