Why is it that float f = 1/3 doesn't generate a compile-time error, while float d = .3333 gives a "possible loss of precision" compile-time error? I know that numbers with decimal places are double by default, but 1/3 = .333, right? I'm just confused. Thanks.
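The two cases hinge on different rules: 1/3 is integer division (both operands are ints), so it evaluates to the int 0, which widens to float implicitly; .3333 is a double literal, and assigning a double to a float is a narrowing conversion the compiler rejects. A small sketch (the class name FloatLiterals is arbitrary) shows both cases:

```java
public class FloatLiterals {
    public static void main(String[] args) {
        // 1/3 is int division: result is the int 0,
        // and int widens to float with no error.
        float f = 1 / 3;
        System.out.println(f);      // prints 0.0

        // .3333 is a double literal; double -> float is a narrowing
        // conversion, so the compiler rejects it as-is:
        // float d = .3333;         // error: possible lossy conversion

        float d = .3333f;           // OK: float literal (f suffix)
        float e = (float) .3333;    // OK: explicit cast
        System.out.println(d + " " + e);
    }
}
```

So the surprise is that 1/3 never produces .333 at all: the fractional part is discarded before the assignment ever happens.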