Can somebody point me to an explanation of why 0.1D is really 0.099999999999998590 and 1000 * 0.1 becomes 99.9999999999986? It's hard to accept a 'law' without a reason behind it.

Because you cannot represent all decimal values exactly in binary floating point. A double is only accurate to about 15-17 significant decimal digits. You can't actually store 0.1 itself; you can only store the nearest value the format can represent, which is usually close enough.
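For what it's worth, you can see exactly what gets stored. This is just a quick sketch: the `BigDecimal(double)` constructor preserves the double's exact binary value, and the loop reproduces the repeated-addition drift from the question:

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // new BigDecimal(double) converts the double's exact binary value,
        // revealing what is really stored for the literal 0.1
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // Adding 0.1 a thousand times accumulates the tiny error,
        // so the result is close to, but not exactly, 100
        double sum = 0.0;
        for (int i = 0; i < 1000; i++) {
            sum += 0.1;
        }
        System.out.println(sum);
    }
}
```

(Note the stored value is actually slightly *above* 0.1; the value in the question comes out below 100 because the rounding errors in the repeated additions happen to accumulate downward.)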

The reason (I think) is that Java stores fractions by adding up negative powers of 2. So to get something close to 0.1, you store 1/16 + 1/32 + 1/256 + 1/512 + ... etc. (note these are not necessarily the ACTUAL terms, I'm just trying to illustrate it).

You can get pretty close most of the time, but not exact.
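You can actually watch that sum converge. A small sketch under the assumption above: the binary expansion of 0.1 is 0.000110011001100... repeating, so the terms come in pairs 1/2^i + 1/2^(i+1) for i = 4, 8, 12, ...:

```java
public class PowersOfTwo {
    public static void main(String[] args) {
        // 0.1 in binary is 0.0001100110011... repeating, i.e. the sum
        // 1/16 + 1/32 + 1/256 + 1/512 + ... Each pair of terms halves
        // the remaining gap to 0.1, but the sum never lands on it exactly.
        double approx = 0.0;
        for (int i = 4; i < 60; i += 4) {
            approx += 1.0 / (1L << i);        // 1/2^i
            approx += 1.0 / (1L << (i + 1));  // 1/2^(i+1)
            System.out.println(approx);
        }
    }
}
```

The first pair alone gives 0.09375, the next gets you to 0.099609375, and so on: ever closer, never exact, because the expansion never terminates.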

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors

It's actually not specific to Java, it's the very binary-ness (if I may invent words) of computers. The fact that they base everything off of 1s and 0s means that they do their math in binary (base 2), which means they have the same problem with tenths that we have with thirds: 1/10 in binary, like 1/3 in decimal, just doesn't come out cleanly.