Hi. Why does the code below always evaluate to true / 0? Can anybody tell me how to do it correctly?

Float f = new Float(999999.99f);
Float f2 = new Float(1000000.00f);
System.out.println(f.compareTo(f2));
System.out.println(f.floatValue() == f2.floatValue());

Thanks,
Sanjay
What does System.out.println(f.floatValue()) show?
I think it should work if you use double instead of float. A float only has 24 bits to represent the binary digits of a number, plus an eight-bit exponent to specify where the (binary) point goes. Ignoring the point, you're trying to distinguish between 100 million and 99,999,999, which are 27-bit numbers, so the final three bits - which is where the numbers differ - are getting dropped. (I'm open to correction on this explanation.) A double has a 53-bit precision, which should be enough to distinguish between the two numbers.
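To illustrate (a throwaway demo class, not from the original posts): near one million, consecutive float values are 0.0625 apart, so 999999.99f rounds to exactly the same float as 1000000.00f, while a double keeps the two values distinct.

```java
public class FloatVsDouble {
    public static void main(String[] args) {
        float f = 999999.99f;
        float f2 = 1000000.00f;
        // Near one million the float ulp is 2^-4 = 0.0625,
        // so 999999.99 rounds up to exactly 1000000.0.
        System.out.println(f == f2);   // true
        System.out.println(f);         // 1000000.0

        double d = 999999.99;
        double d2 = 1000000.00;
        // A double's 53-bit significand easily separates the values.
        System.out.println(d == d2);   // false
        System.out.println(d2 - d);    // prints a value very close to 0.01
    }
}
```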
A rule of thumb: never compare two floating-point numbers (double, or some future "double double", included) with ==, equals, =, EQ, eq, or whatever notation a specific programming language (machine language, assembly, Fortran, LISP, Pascal, C, VB, C++, Java, C#, ...) uses for equality, even if it works sometimes. The results are mostly unpredictable or misleading.
That is computer science ABC, I'm sorry to say; I hope you are not offended. Read here for the reasons: Q. When I do floating-point operations, why do unwanted results come out? Even though that question sounds a little different from this topic, the reason is exactly the same. [ July 10, 2002: Message edited by: Roseanne Zhang ]
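A minimal sketch of the kind of tolerance-based comparison being recommended here (the method name and epsilon value are my own, not from the linked post):

```java
public class ApproxEquals {
    // Treat two doubles as equal when their difference is tiny
    // relative to their magnitude (relative-epsilon comparison).
    static boolean approxEqual(double a, double b, double eps) {
        return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b));
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.2 == 0.3);                  // false
        System.out.println(approxEqual(0.1 + 0.2, 0.3, 1e-9)); // true
    }
}
```

The right epsilon depends on the problem: it should reflect how many significant figures your data actually carries, not the full precision of the type.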
I think Roseanne's solution is going in the opposite direction from what Sanjay is asking. Roseanne is showing how to treat two floats which are "approximately equal" as if they are actually equal. Sanjay's problem is that the numbers he's using already are exactly equal when they're expressed as floats (or Floats). The float datatype is not capable of recognizing a distinction between numbers which are that close to each other. In the unlikely event that this distinction really is important, use double, or BigDecimal. Roseanne's solution is much more practical though - in the real world, numbers represented by floats are just not that precise, and it's much more useful to be able to recognize when numbers are approximately equal than it is to be able to detect differences that don't really mean anything.
My first part emphasized that exact equality of floating-point numbers in the computer world is actually meaningless, and why. Sanjay's question is in fact explained in the link I provided: after decimal-to-binary conversion, the two numbers are exactly equal at the available precision. More accurately, they become exactly equal because of the unavoidable round-off error introduced by decimal-to-binary conversion. (Yes, it is unavoidable even if you are using BigDecimal, since no matter how precise it is, it is still finite. Of course, we might need to find a different example.) My second part provided a practical way of checking floating-point numbers for equality, in the computer world and the real world. The answer obtained is not misleading, but more accurate for the problem you want to solve. [ July 10, 2002: Message edited by: Roseanne Zhang ]
Sorry Roseanne - I didn't look closely enough at both your posts. Incidentally, decimal-binary conversion error is avoidable in BigDecimal if you're careful. E.g. you usually can't use new BigDecimal(double) (not for non-integral values anyway), but you can use new BigDecimal(BigInteger, int) instead. Of course, for division you need to specify the desired scale, since a number like 1/3 will never be exactly representable in decimal - but that's not the same thing as binary-decimal conversion. For those who somehow think this level of precision is actually important, BigDecimal does fit the bill. It's cumbersome and slow, and usually unnecessary, but it works.
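A quick sketch contrasting the two constructors mentioned above (a demo class of my own, but the printed values are what the JDK's BigDecimal really produces):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class BigDecimalConstruction {
    public static void main(String[] args) {
        // new BigDecimal(double) preserves the binary rounding error
        // that 0.1 picked up when it became a double:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // new BigDecimal(BigInteger, int) builds unscaledValue * 10^-scale
        // exactly, with no binary conversion at all:
        BigDecimal exact = new BigDecimal(BigInteger.valueOf(99999999), 2);
        System.out.println(exact);  // 999999.99

        // Division needs an explicit scale and rounding mode, since 1/3
        // has no finite decimal representation:
        BigDecimal third = new BigDecimal(BigInteger.ONE, 0)
                .divide(new BigDecimal(BigInteger.valueOf(3), 0),
                        10, RoundingMode.HALF_UP);
        System.out.println(third);  // 0.3333333333
    }
}
```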
How about pi? Real numbers are fundamentally continuous, and computers are fundamentally discrete. Accuracy will be lost to some degree no matter what you do. That is the point I want to get through!! Even if you combined the power of all the computers in the entire world, it would still be finite!!! [ July 10, 2002: Message edited by: Roseanne Zhang ]
How about pi? Not exactly representable, of course - but again, this has nothing to do with binary-decimal conversion. The distinction is important because most people are used to the idea that pi cannot be represented exactly in decimal, and the same for 1/3. They're used to seeing these sorts of problems when they use calculators; it's considered "normal". But they become confused when 0.1 + 0.2 != 0.3, since most calculators (and some other programming languages) either hide the binary-decimal conversion by rounding before the user sees it, or avoid it entirely by using an internal format that is more decimal-based - e.g. binary-coded decimal, or an integer "mantissa" times a power of 10 (which is what BigDecimal does). These latter formats are slower, but they do remove a large class of "errors" which otherwise seem to confuse people.

I sometimes think float and double should only be allowed to programmers who understand the concept of significant figures. Unfortunately there doesn't seem to be a good way to implement this restriction. Naturally, your other comments are entirely correct. I just disagree with the original assertion that binary-decimal conversion is unavoidable in BigDecimal. [ July 10, 2002: Message edited by: Jim Yingst ]
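The 0.1 + 0.2 example, run in Java (a one-off demo class of my own):

```java
public class PointOneTwoThree {
    public static void main(String[] args) {
        // None of 0.1, 0.2, or 0.3 is exactly representable in binary,
        // and the rounding errors happen not to cancel:
        System.out.println(0.1 + 0.2 == 0.3);  // false
        System.out.println(0.1 + 0.2);         // 0.30000000000000004
    }
}
```

Double.toString prints the shortest decimal string that uniquely identifies the double, which is exactly why the stray 4 at the end becomes visible here.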
I don't disagree with your fundamental point. It's fairly obvious. I was just discussing the subpoint of BigDecimal and binary-decimal conversion, which was less obvious, yet nonetheless relevant to Sanjay's original question.