Because data types like float and double do not have infinite precision. This doesn't happen only in Java; it happens in almost every programming language that exists.
Have a look at
Wikipedia: Floating point. Here's a very technical explanation:
What Every Computer Scientist Should Know About Floating-Point Arithmetic.
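To see it for yourself, try a small sketch like this (the class name is just for illustration):

    public class FloatingPointDemo {
        public static void main(String[] args) {
            // 0.1 and 0.2 have no exact binary representation,
            // so their sum is not exactly 0.3
            double sum = 0.1 + 0.2;
            System.out.println(sum);        // prints 0.30000000000000004
            System.out.println(sum == 0.3); // prints false
        }
    }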
If you need to write software that deals with, for example, financial data (money), you shouldn't use float or double. You should use the class BigDecimal, which can store decimal values with an arbitrary number of digits.
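A rough sketch of how that might look (the class name is just for illustration; note that you should construct BigDecimal values from Strings, not from doubles, so they don't inherit a double's imprecision):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class MoneyDemo {
        public static void main(String[] args) {
            // Constructed from Strings, so the values are exact
            BigDecimal price = new BigDecimal("0.10");
            BigDecimal quantity = new BigDecimal("3");

            System.out.println(price.multiply(quantity)); // prints 0.30, exactly

            // Division can produce infinitely many digits (e.g. 1/3),
            // so you must give an explicit scale and rounding mode
            BigDecimal third = BigDecimal.ONE.divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP);
            System.out.println(third);                    // prints 0.33
        }
    }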