Why doesn't float f = 1/3 generate a compile-time error, while float d = .3333 gives a "possible loss of precision" compile-time error? I know for a fact that numbers with decimal places are double by default, but 1/3 = .333, right? I'm just confused. Thanks.
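For reference, here is a small sketch of both cases. The key point is that 1/3 is integer division (both operands are ints), so it evaluates to the int 0, which widens to float without any loss; .3333 on the other hand is a double literal, and assigning a double to a float is a narrowing conversion the compiler rejects. The class name below is just for illustration:

```java
public class LiteralTypes {
    public static void main(String[] args) {
        // 1/3 is integer division: int / int yields the int 0,
        // which then widens to float with no loss of precision.
        float f = 1 / 3;
        System.out.println(f); // prints 0.0

        // .3333 is a double literal; assigning it to a float is a
        // narrowing conversion, so this line would not compile:
        // float d = .3333;   // error: possible loss of precision

        // Suffix the literal with f (or add an explicit cast) to
        // make it a float literal instead:
        float d = .3333f;
        System.out.println(d); // prints 0.3333
    }
}
```

To actually get the fractional value, at least one operand must be a floating-point type, e.g. float f = 1f / 3;.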