Kevin Simonson wrote: So you're saying that Java does the division of 1.0 by 13.0, stores the quotient in binary format, losing a little precision as it does that, and then with the {println()} command converts it to decimal, losing a little more precision as it does that, and results in a decimal number that's off from what it should be in its least significant digit? And that's okay?
It's not a question of whether it is "okay" or not; it's simply how floating-point numbers work in Java and in most other programming languages.
float and double are not infinitely precise. If you think about it, they simply cannot be infinitely precise in principle, since they have a finite size (32 bits or 64 bits). To be able to store any arbitrary real number with exact precision, you would need an unlimited amount of memory. So a double stores the binary value closest to 1.0/13.0, and println() then prints the shortest decimal string that uniquely identifies that binary value, which is why the last digit looks "off".
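You can see this directly: printing the quotient gives the short rounded form, while constructing a BigDecimal from the same double reveals the exact binary value that is actually stored (the class name here is just for illustration):

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        double q = 1.0 / 13.0;

        // Shortest decimal string that round-trips back to the same double
        System.out.println(q);

        // The exact decimal expansion of the binary value actually stored;
        // it is much longer and only approximates 1/13
        System.out.println(new BigDecimal(q));
    }
}
```

Note that `new BigDecimal(double)` converts the double exactly, with no rounding, which is precisely why its output differs from what println() shows.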
If you need really precise numbers, use java.math.BigDecimal.
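For example, with BigDecimal you choose the precision explicitly. Since 1/13 has a non-terminating decimal expansion, you must supply a scale and rounding mode, or divide() throws ArithmeticException:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // 1/13 computed to 30 decimal places with explicit rounding
        BigDecimal q = BigDecimal.ONE.divide(
                BigDecimal.valueOf(13), 30, RoundingMode.HALF_UP);

        System.out.println(q);  // 0.076923076923076923076923076923
    }
}
```

The trade-off is that BigDecimal arithmetic is slower and more verbose than double, so it's worth it mainly when exact decimal results matter, such as for money.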