The problem isn't specific to Java or to any particular programming language. Remember that all numbers are stored in binary inside the computer. Many decimal values cannot be represented exactly in binary floating point, so round-off errors will creep into almost any calculation involving them (even simple addition).
A simple base-10 example may make this clearer. Take the fraction 1/3. In decimal, this is 0.333... repeating forever. Since nobody wants to write 3's forever, we settle for rounding to a fixed number of digits, say two digits after the decimal point. Then 0.33 + 0.33 + 0.33 = 0.99. But adding the original fractions gives 1/3 + 1/3 + 1/3 = 3/3 = 1. The rounding cost us 0.01.
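The same analogy can be reproduced in Java with `java.math.BigDecimal`, which lets us force the rounding to two decimal places explicitly (the class name here is just for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ThirdsDemo {
    public static void main(String[] args) {
        // 1/3 rounded to two decimal places, as in the analogy above
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal(3), 2, RoundingMode.HALF_UP);
        System.out.println(third);                       // 0.33
        // Summing three rounded copies loses the 0.01 that rounding discarded
        System.out.println(third.add(third).add(third)); // 0.99, not 1
    }
}
```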
The same thing happens when decimal values are converted to binary floating-point numbers: the loss of precision occurs during the conversion, before the converted numbers are ever added.
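You can observe this directly in Java (the class name is illustrative). The literal `0.1` has no exact binary representation, so the error is baked in before any addition runs, and the sum of three copies does not equal `0.3`:

```java
public class RoundOffDemo {
    public static void main(String[] args) {
        double sum = 0.1 + 0.1 + 0.1;
        System.out.println(sum);        // prints 0.30000000000000004
        System.out.println(sum == 0.3); // prints false
        // The BigDecimal(double) constructor reveals the exact value
        // that the literal 0.1 was actually converted to:
        System.out.println(new java.math.BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```

This is why equality comparisons on doubles are usually replaced by a tolerance check, or why `BigDecimal` (constructed from a `String`) is preferred for money.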