Hi,
In the test code below, the output precision varies depending on the value of the double used in the calculation.
What I really want is to shift the decimal point to the left, since I am dividing by 100 (converting a percentage value into a plain value).
In some cases it displays too many decimal places, and in other cases it behaves as expected.
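The test code itself was not included in the thread; a minimal sketch that reproduces the reported behavior (the class name and structure are my assumption, but the two values are the ones discussed in the posts) might look like:

```java
// Hypothetical reconstruction -- the original test code was not posted,
// so only the two values from the thread are taken as given.
public class PercentTest {
    public static void main(String[] args) {
        // Dividing by 100 to turn a percentage into a plain value.
        System.out.println(96.74 / 100); // prints a long tail of extra digits
        System.out.println(96.78 / 100); // prints 0.9678, as expected
    }
}
```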

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler
Please correct my English.

amit punekar
Ranch Hand

Joined: May 14, 2004
Posts: 544

posted


Thanks for your replies.
What I do not understand is why the result for 96.74 has so many decimal places, while the result for 96.78 has only 4, as expected.
That is precisely the part that puzzles me.

Yeah, I had read #20 and used BigDecimal to resolve this, as below, but I was trying to understand the issue explained above.
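The BigDecimal code referred to here ("as below") was not included in the extract; a sketch of what such a fix typically looks like (class and variable names are my guess) is:

```java
import java.math.BigDecimal;

// A guess at the shape of the BigDecimal fix the post refers to --
// the actual code was not included in the thread.
public class PercentFix {
    public static void main(String[] args) {
        // Build the BigDecimal from a String so the value is exact,
        // then shift the decimal point two places left instead of dividing.
        BigDecimal pct = new BigDecimal("96.74");
        System.out.println(pct.movePointLeft(2)); // prints 0.9674
    }
}
```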

amit punekar wrote:... Yeah, I had read #20 and used BigDecimal to resolve this, as below, but I was trying to understand the issue explained above...

That item also contains links to articles explaining why this happens.

amit punekar
Ranch Hand

Joined: May 14, 2004
Posts: 544

posted


Thanks once again for the replies.
I went through the beginner's article, but I could not understand why the problem does not show up for all double values. Why does the result vary for one double value (96.74) but not for the other (96.78)?
I might be missing some connection between the article and the scenario I posted earlier, due to my lack of knowledge on this topic. If someone can point me to the missing link, that would be great.

You aren't missing a link or anything. It is just the nature of using base 2 as opposed to base 10. I'm not a math genius, so I can't explain the nitty-gritty technical details, but basically it boils down to the fact that some numbers can be represented more precisely in base 2 than others.

I believe (and any super math-smart people will probably say I am wrong) that 1/4 vs. 1/3 is a good example. In decimal form, 1/4 is .25. That is much easier to translate into base 2 than 1/3, which translates into .33333333333333333...
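The 1/4 vs. 1/3 comparison can be made visible in Java: passing a double to the `new BigDecimal(double)` constructor prints the exact value that double actually stores. This is a small demo of my own, not code from the thread:

```java
import java.math.BigDecimal;

public class ExactDoubles {
    public static void main(String[] args) {
        // 0.25 is 1/4 = 2^-2, so base 2 stores it exactly.
        System.out.println(new BigDecimal(0.25));
        // 1/3 has no finite base-2 expansion, so the double holds only
        // the nearest representable approximation, and its exact value
        // is a long string of digits.
        System.out.println(new BigDecimal(1.0 / 3.0));
    }
}
```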

SCJA
When I die, I want people to look at me and say "Yeah, he might have been crazy, but that was one zarkin frood that knew where his towel was."

In representing 1/3 as a base-10 decimal, we might start with 0.3, which is 3/10. This is an approximation of 1/3. If we wanted to be more precise, we could add another digit, making it 0.33, which is 33/100. This is a better approximation (more precise), but still not exact. So we continue adding digits, getting closer and closer to 1/3 (adding precision), but never quite reaching an exact representation.

Now suppose we had a limit on the number of digits we can use. Say, for example, we could not use any more than 4 digits to the right of the decimal. So when we try to represent 1/3, the best precision we can get is 3333/10000, which is not equal to 1/3. On the other hand, we have no problem representing 1/4 exactly as 0.25 because it fits within that limit.

Potential loss of precision in a computer is similar: it uses a base-2 system subject to the bit limits of floats and doubles.
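Applied back to the values from the original question (my own demonstration, using the same `new BigDecimal(double)` trick as above): neither 96.74 nor 96.78 has a finite base-2 expansion, so both are stored as nearby approximations, and the division results can print differently:

```java
import java.math.BigDecimal;

public class StoredValues {
    public static void main(String[] args) {
        // Neither literal is stored exactly; printing the exact stored
        // value shows a long decimal expansion for each.
        System.out.println(new BigDecimal(96.74));
        System.out.println(new BigDecimal(96.78));
        // Whether the quotient prints with only 4 digits depends on
        // which double the rounded result of the division lands on.
        System.out.println(96.74 / 100);
        System.out.println(96.78 / 100);
    }
}
```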

amit punekar
Ranch Hand

Joined: May 14, 2004
Posts: 544

posted


Thanks, Marc and W. Joe Smith, for providing more explanation.
I appreciate your responses.