In our project, we make sure all floating-point numbers are kept to two decimal places only, and we are not using something like BigDecimal, to keep things simple.

So at every division I take care to round the result to two decimals. But now I've found a strange case where adding two numbers, each with two decimals, produces a result with more decimals, as below:

System.out.println(56.56 + 0.56); --- gives 57.120000000000005

System.out.println(56.56f + 0.56f); --- gives 57.120003

Ideally the result should just be 57.12. Can anyone explain why this happens?
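The short answer is that doubles and floats are binary fractions, and neither 56.56 nor 0.56 can be represented exactly in binary, so each literal is stored as the nearest representable value and the tiny errors show up in the sum. As a sketch, passing a double to the BigDecimal(double) constructor reveals the exact value actually stored (the class name FloatDemo is just for illustration):

import java.math.BigDecimal;

public class FloatDemo {
    public static void main(String[] args) {
        // The literal 56.56 is stored as the nearest binary fraction,
        // not as exactly 56.56; BigDecimal(double) shows the true value.
        System.out.println(new BigDecimal(56.56));
        System.out.println(new BigDecimal(0.56));
        // The two representation errors combine in the sum.
        System.out.println(56.56 + 0.56); // prints 57.120000000000005
    }
}

The first two lines print long decimal expansions that are close to, but not exactly, 56.56 and 0.56.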

Since you're only dealing with two decimal places all the time, you might want to consider keeping all of your numbers as ints. If it's possible, multiply by 100 and store as an int. Add your ints, assign the total to a double, and display that double divided by 100.

int i1 = 5656;
int i2 = 56; // note: 0056 would be an octal literal in Java (decimal 46)
double d1 = i1 + i2;
System.out.println(d1 / 100);

Originally posted by Darryl Failla: Since you're only dealing with two digits all the time, you might want to consider keeping all of your numbers as ints. If it's possible, multiply by 100 and store as an int.

This is good advice. And if, as I suspect from your use of two decimal places, you are doing financial calculations, then integer arithmetic is essential: floating point is unacceptably inaccurate for money.
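To make the point concrete, here is a minimal sketch (the class name CentsDemo and the use of long rather than int are my own choices) of the integer-cents approach: the addition is exact because only whole numbers are involved.

public class CentsDemo {
    public static void main(String[] args) {
        // Keep monetary amounts as whole cents in integer variables.
        long a = 5656; // represents 56.56
        long b = 56;   // represents 0.56
        long sum = a + b; // exact integer addition, no rounding error
        System.out.println(sum); // prints 5712, i.e. 57.12
    }
}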

I'm not sure about the part suggesting you display by dividing a double, though. That could still go wrong. [ July 19, 2006: Message edited by: Peter Chase ]
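One way to avoid that risk is to never convert to floating point at all, and instead build the display string with integer division and remainder. A possible sketch (the helper name format and the negative-amount handling are my own assumptions, not from the thread):

public class DisplayCents {
    // Format a cent amount as "units.hundredths" without using floating point.
    static String format(long cents) {
        long abs = Math.abs(cents);
        String sign = cents < 0 ? "-" : "";
        return sign + (abs / 100) + "." + String.format("%02d", abs % 100);
    }

    public static void main(String[] args) {
        System.out.println(format(5712)); // prints 57.12
        System.out.println(format(-5));   // prints -0.05
    }
}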
