doubles don't really store the exact value. when you get to large numbers, like 1.2e16, do you really care about the least significant digits??? 12,000,000,000,000,000 vs. 12,000,000,000,000,001? if we were required to care, then the range of floats and doubles would be MUCH smaller. the non-technical way to describe it (since i can't give the technical one) is that as the numbers get bigger, there are gaps between the numbers that ARE represented. or: you only get about 15-16 significant digits. integers actually stay exact up to 2^53 (about 9e15), but past that the one's place is basically ignored (or assumed to be 0). if you have a number with 20 digits, the last 4 or 5 are assumed to be 0. even if you add 1 to it, it just gets rounded away. does that make sense?
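a quick sketch of the "gaps" in any language with IEEE 754 doubles (python here, using `math.ulp` to show the spacing between adjacent representable doubles):

```python
import math

big = 1e16  # past 2**53 (~9e15), where the gaps between doubles exceed 1

print(big + 1 == big)      # True: adding 1 is lost to rounding
print(math.ulp(big))       # 2.0: neighboring doubles here are 2 apart

# below 2**53 every integer is still exactly representable
small = 1.2e15
print(small + 1 == small)  # False: the one's place still fits
```

so the "gap" isn't a bug, it's the spacing of representable values growing with magnitude — same number of significand bits spread over a bigger range.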

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors