I would appreciate some help on this topic as I have access to only one compiler...
In Brogden's Mock Exam #28, the
explanation of the correct answer hinges on the imprecise representation of rational fractions in floating point types; according to the explanation, ( ( 1.0F / 3.0F ) * 3.0F ) != 1.0F.
However, the code in the actual question uses:
( ( 1.0F / 3.0F ) * 3.0 ) == 1.0F
(note the sneaky insertion of a double literal)
Based on some
test code, the expression above
evaluates to false only because of the arithmetic promotion of float to double. That is,
( ( 1.0F / 3.0F ) * 3.0D ) == 1.0F evaluates to false but
( ( 1.0F / 3.0F ) * 3.0F ) == 1.0F evaluates to true.
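If my understanding of the IEEE 754 arithmetic is right, here is why: the nearest float to 1/3 is about 0.33333334, and multiplying that by 3.0F in float arithmetic gives a product (about 1.00000003) that is closer to 1.0 than to the next representable float, so it rounds back to exactly 1.0F. But when one operand is a double, the float is promoted to double, the product 1.00000003... is retained exactly, and it is not equal to 1.0.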
Does anyone else see this behavior, or is it something magical that the Mac does (besides the magical things they do already)?
For the code as written, the correct answer and the answer stated as correct do match, but the reason seems to hinge on floating point calculations involving type conversions, not just floating point imprecision by itself.
Test code follows.
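Something along the following lines shows the two results (the class name is arbitrary):

// Minimal sketch of the kind of test described above
public class FloatPromotionTest {
    public static void main(String[] args) {
        // All-float arithmetic: the product rounds back to exactly 1.0F
        System.out.println("float * float : " + ((( 1.0F / 3.0F ) * 3.0F) == 1.0F));

        // Mixed arithmetic: the float operand is promoted to double,
        // so the product keeps the rounding error and is not equal to 1.0F
        System.out.println("float * double: " + ((( 1.0F / 3.0F ) * 3.0D) == 1.0F));
    }
}

On my machine this prints true for the all-float case and false for the float/double case.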
Best Regards,
Steve Butcher
exceptionraised@aol.com