
Floating-point numbers: here the name "sign bit" is accurate.

Whole numbers: byte, short, int and long, but not char (char is unsigned). Here the name "sign bit" is inaccurate, because that bit has a value of its own.

Floating point: you get bits 0-22 (23 bits reading from the right), which represent the fractional part of a binary number like 1.01010101010101010101010. Count carefully and you will find 24 bits there. The leading 1 is the same for every normalised number, so it isn't stored; that is how you get the equivalent of 24 bits' precision into the space of 23. That is for a float; a double squeezes 53 bits' precision into 52 stored bits.
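You can see those 23 stored bits, and the implicit leading 1, by pulling a float apart with Float.floatToIntBits. A small sketch (the value 0.15625f is just a convenient example, since it is exactly 1.01 * 2^-3 in binary):

```java
public class FloatFraction {
    public static void main(String[] args) {
        float f = 0.15625f;                     // 1.01 * 2^-3 in binary
        int bits = Float.floatToIntBits(f);
        int fraction = bits & 0x007FFFFF;       // bits 0-22: the stored fraction
        // The leading 1 of 1.01... is implicit, so put it back to
        // recover the full 24-bit significand.
        int significand = fraction | 0x00800000;
        System.out.println(Integer.toBinaryString(significand));
        // prints 101000000000000000000000 - 24 bits: the "1.01" plus padding
    }
}
```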

The next 8 bits (11 for a double) represent the exponent. You know you can have 1.23*10^45, and you usually write it as 1.23E45; you get the same with floating-point numbers, only in powers of 2. Leaving out 00000000 and 11111111, those 8 bits give you 254 different values, from 00000001 to 11111110. You subtract 01111111 (127, the "bias") from that value to work out the exponent, so you can multiply the binary fractional number above by anything from 2^-126 to 2^127. 00000000 is reserved for zero and for subnormal numbers, which extend the range towards zero, and 11111111 for infinities and NaNs. Now you have the magnitude of the number.
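The same bit-twiddling shows the biased exponent at work. Here 12.0f is just a handy example: it is 1.1 * 2^3 in binary, so the stored exponent field should be 127 + 3 = 130:

```java
public class FloatExponent {
    public static void main(String[] args) {
        float f = 12.0f;                          // 1.1 * 2^3 in binary
        int bits = Float.floatToIntBits(f);
        int exponentField = (bits >> 23) & 0xFF;  // bits 23-30: the 8 exponent bits
        int exponent = exponentField - 127;       // subtract the bias
        System.out.println(exponentField + " -> 2^" + exponent);
        // prints 130 -> 2^3
    }
}
```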

Finally we get to the leftmost bit, bit 31 (bit 63 for a double). If it is 0 the whole number is positive, and if it is 1 the number is negative.
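You can confirm that a float's sign really is just that one bit: the bit patterns of 1.5f and -1.5f (an arbitrary example pair) differ only in bit 31.

```java
public class SignBit {
    public static void main(String[] args) {
        int posBits = Float.floatToIntBits(1.5f);
        int negBits = Float.floatToIntBits(-1.5f);
        System.out.println(posBits >>> 31);  // 0: positive
        System.out.println(negBits >>> 31);  // 1: negative
        // XOR leaves only the bits that differ - just bit 31:
        System.out.println(Integer.toBinaryString(posBits ^ negBits));
        // prints 10000000000000000000000000000000
    }
}
```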

Integers: these don't have a true sign bit; they work in two's complement. Imagine a byte, which has 8 bits. Bit 0 means 1, bit 1 means 2, bit 2 means 4, bit 3 means 8, bit 4 means 16, bit 5 means 32, bit 6 means 64, and bit 7 means minus 128. That isn't how the hardware actually does its arithmetic, but the weighted sum gives you exactly the right values for your numbers. Some people call the leftmost bit the sign bit, but it actually has a value.
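A quick check of that weighted sum, using an arbitrary bit pattern as the example. In 0b10010110, bits 1, 2, 4 and 7 are set, so the value should be -128 + 16 + 4 + 2:

```java
public class TwosComplement {
    public static void main(String[] args) {
        byte b = (byte) 0b10010110;      // bits 7, 4, 2 and 1 set
        // Bit 7 counts as -128; every other set bit counts as +its weight.
        int value = -128 + 16 + 4 + 2;
        System.out.println(b + " == " + value);
        // prints -106 == -106
    }
}
```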