What do you mean, precision is lost when you convert a float value to a double? Or when you perform an operation with float or double values? This makes no sense to me. You would lose precision converting from double to float, but not the reverse. Do your calculations using double and you'll be fine: the roundoff error in your answers will be far too small to matter to anyone. Really.
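To illustrate the point, here's a quick sketch (class name is just for the example): float to double is a widening conversion, so every float value is exactly representable as a double and nothing is lost. Going the other way, double to float is a narrowing conversion, and the extra bits get rounded away.

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // float -> double is a widening conversion: every float value
        // is exactly representable as a double, so nothing is lost.
        float f = 0.1f;
        double widened = f;                // exact, no rounding
        System.out.println(widened == f);  // true

        // double -> float is a narrowing conversion: the double's extra
        // bits of precision are rounded away.
        double d = 0.1;                    // closest double to 0.1
        float narrowed = (float) d;        // rounded to the closest float
        System.out.println((double) narrowed == d);  // false: precision lost
    }
}
```

This is also why the compiler lets you assign a float to a double implicitly but requires an explicit cast for double to float.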
OK, there are a small number of applications that genuinely require more precision than double provides. If you really need it, you can use the BigDecimal class. But only use it if you have a good idea of what the term "significant digits" means, and you are certain that your application requires more of them.
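For the rare case where exact decimal results matter (currency is the classic example), here's a minimal sketch of the difference; the class name is made up for the illustration:

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // Binary doubles cannot represent 0.1 exactly, so repeated
        // addition accumulates a tiny roundoff error:
        double sum = 0.0;
        for (int i = 0; i < 10; i++) sum += 0.1;
        System.out.println(sum);    // 0.9999999999999999, not 1.0

        // A BigDecimal constructed from a String keeps the decimal
        // digits exactly, so the same loop is exact:
        BigDecimal exact = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) exact = exact.add(new BigDecimal("0.1"));
        System.out.println(exact);  // 1.0
    }
}
```

Note the String constructor: `new BigDecimal(0.1)` would bake the double's roundoff error into the BigDecimal, defeating the purpose.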