I'm a bit puzzled. I need to compare one double value with a sum of other double values.

I've read that this is because of rounding inaccuracy. As I understand it, this comes from how the numbers are stored (as binary values).

However, is there any way to successfully compare 0.2 + 0.2 + 0.2 to 0.6? Or similar sums; the numbers can be anything, not just 0.2.

With currency I could convert $1.34 to 134 cents and work with integers, but in my case that's not possible: I don't know the actual numbers in advance, so I can't just multiply the "source" by 100.

Or do I have to accept the inaccuracy and treat two computed decimals as equal if their difference is small enough (with the example above, 0.6000000000000001 - 0.6 = 0.0000000000000001, so the numbers can be considered "equal")?
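That tolerance-based approach is a common workaround. A minimal sketch (the helper name `nearlyEqual` and the epsilon value are my own choices, not from the thread) that uses a relative tolerance so it scales with the magnitude of the inputs:

```java
public class ApproxEquals {
    // Treat a and b as equal when their difference is tiny relative
    // to their magnitude. The epsilon value is up to the caller.
    static boolean nearlyEqual(double a, double b, double epsilon) {
        if (a == b) return true;              // exact match (also covers 0.0 == -0.0)
        double diff = Math.abs(a - b);
        double scale = Math.max(Math.abs(a), Math.abs(b));
        return diff <= epsilon * scale;       // relative comparison
    }

    public static void main(String[] args) {
        double sum = 0.2 + 0.2 + 0.2;         // not exactly 0.6 in binary
        System.out.println(sum == 0.6);                  // false
        System.out.println(nearlyEqual(sum, 0.6, 1e-9)); // true
    }
}
```

A relative tolerance behaves better than a fixed absolute one when values can be very large or very small, though any single epsilon is still a judgment call.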

Well, you should use BigDecimal if you want exact decimal numbers instead of binary approximations of them. But you should also read the documentation for the BigDecimal constructor you used. Not just the one-line summary, the full documentation. Really. It answers the question you are asking here.

You will see this effect if you run this:
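The original snippet didn't survive in this copy of the thread; given the output below, it was presumably something along these lines, contrasting the `BigDecimal(double)` constructor with `BigDecimal.valueOf(double)`:

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // new BigDecimal(double) captures the exact binary value
        // the double actually holds...
        System.out.println(new BigDecimal(0.2));
        System.out.println(new BigDecimal(0.6));
        // ...while BigDecimal.valueOf(double) goes through
        // Double.toString, yielding the short decimal form.
        System.out.println(BigDecimal.valueOf(0.2));
        System.out.println(BigDecimal.valueOf(0.6));
    }
}
```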
Result is:
0.200000000000000011102230246251565404236316680908203125
0.59999999999999997779553950749686919152736663818359375
0.2
0.6
