My server is doing a bunch of floating point calculations (typically summing a list of double x double products -- lots of sums and multiplications, far fewer divisions). We're seeing performance issues, but our profiler can't give us the level of detail necessary to answer the question I have.
The server is a Dell OptiPlex GX240 with an Intel Pentium 4 at 2.4GHz and 1 GB of RAM. We're running WebLogic 7 (against an Oracle 9i DB on another machine) on top of Win2000.
I know that chips traditionally have a special floating point processor, and that it's usually not as tightly integrated into the instruction path as the arithmetic processing unit. We take a performance hit both from the more complex calculations and from the "travel time." (Maybe modern chip architectures have changed this and it is no longer true.)
Quantities are integral values (so could be longs). Prices need only two digits of precision (so could be floats). Now it's possible that a float multiplied by a long would overflow a non-double value. For this and a few other reasons, we made everything doubles. Now I'm thinking that maybe I should switch to longs. Because I only need two decimals of precision (maybe 3 if I need to consider rounding issues), I could simply treat my values as pennies instead of dollars.
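To illustrate what I mean, here's a rough sketch of the penny-based approach (the class and method names are made up for this post, not our actual code):

    // Sketch only -- illustrative names, not our real classes.
    // Prices are held as long "pennies": $12.34 is stored as 1234.
    public class LineItemSketch {
        long quantity;       // integral quantity
        long priceInCents;   // price * 100

        long extendedCents() {
            return quantity * priceInCents;   // pure integer multiply, no FPU involved
        }

        static long totalCents(LineItemSketch[] items) {
            long sum = 0;
            for (int i = 0; i < items.length; i++) {
                sum += items[i].extendedCents();
            }
            return sum;
        }

        // Convert back to a dollars-and-cents string only at display time.
        static String formatDollars(long cents) {
            long abs = Math.abs(cents);
            String frac = (abs % 100 < 10 ? "0" : "") + (abs % 100);
            return (cents < 0 ? "-" : "") + (abs / 100) + "." + frac;
        }
    }

The idea is that all the hot-path sums and multiplications become long arithmetic, and division/formatting back to dollars happens only at the edges.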
However, one worry I'd have is whether this would negatively impact performance, because calculations previously done in the floating point processor would now contend for time on the arithmetic processing unit, along with instruction handling and other general bookkeeping work.
Does anyone have any experience with something like this? I'd prefer not to have to modify all my code just to test it (350+ classes). I'm also hesitant to simply write some test case, because I don't think I can easily replicate the loads on the two math processors.
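For what it's worth, the naive micro-test I could write would look something like this, but it only measures raw arithmetic in a tight loop, not the mixed integer/FP load our server actually generates (names and loop counts are just for illustration):

    // Naive micro-test sketch -- compares double vs. long arithmetic in isolation.
    public class MathMicroTest {
        public static void main(String[] args) {
            int n = 10 * 1000 * 1000;

            double dSum = 0.0;
            long t0 = System.currentTimeMillis();
            for (int i = 0; i < n; i++) {
                dSum += (i * 0.01) * (i + 1);      // double multiply + add
            }
            long doubleMillis = System.currentTimeMillis() - t0;

            long lSum = 0;
            t0 = System.currentTimeMillis();
            for (int i = 0; i < n; i++) {
                lSum += (long) i * (i + 1);        // long multiply + add
            }
            long longMillis = System.currentTimeMillis() - t0;

            // Print the sums so the JIT can't dead-code-eliminate the loops.
            System.out.println("double: " + doubleMillis + " ms (" + dSum + ")");
            System.out.println("long:   " + longMillis + " ms (" + lSum + ")");
        }
    }

I'm skeptical that numbers from something like this would tell me much about the real server, which is why I'm asking here instead.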
Any thoughts, comments, or ideas?
--Mark