Literals like -10 are ints (4 bytes). That won't fit into a char, which is only 2 bytes, so you have to cast it to a char. What happens then? The extra bits are thrown away. If you represent -10 in binary, it looks like this: 11111111111111111111111111110110 (0xFFFFFFF6). Casting to a char cuts off the first two bytes, so it becomes: 1111111111110110 (0xFFF6). This represents the Unicode character whose code point is 0xFFF6. If you look it up in a Unicode table (like here), you'll see that 0xFFF6 is not assigned to any character, which is why a '?' (garbage) is printed.
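You can see this truncation for yourself with a small sketch (class name is just for illustration): the cast keeps only the low 16 bits of the int, and printing the char's numeric value shows 0xFFF6.

```java
public class CharCastDemo {
    public static void main(String[] args) {
        // -10 as an int is 0xFFFFFFF6; the cast keeps only the low 16 bits
        char c = (char) -10;
        System.out.println((int) c);              // prints 65526
        System.out.println(Integer.toHexString(c)); // prints fff6
    }
}
```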
I think that since the char data type is 16-bit unsigned, it should not allow any negative numbers.
A four-byte unsigned value and a four-byte signed value are both four bytes. They hold the same number of bits (32 bits in this case). How those bits are interpreted depends on whether the value is signed or unsigned. For example, take a one-byte value in binary: 11111111. For a signed value, the first bit is interpreted as a sign flag; 1 means that the value is negative. So, for a signed byte, 11111111 = -1 (invert all bits: 00000000, add 1: 00000001, so minus one). For an unsigned byte, 11111111 = 255 (= 2^8 - 1).
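In Java, byte is always signed, but you can still view the same 8 bits "unsigned" by masking with 0xFF. This sketch (class name is mine) shows the bit pattern 11111111 read both ways:

```java
public class SignedVsUnsigned {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;        // bit pattern 11111111
        System.out.println(b);       // signed interpretation: -1
        System.out.println(b & 0xFF); // unsigned interpretation: 255
    }
}
```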
To sum up: both signed and unsigned bytes have the same number of bits, so you can store a signed byte's bit pattern in an unsigned byte. What matters is how the value is interpreted. [ January 28, 2008: Message edited by: Christophe Verre ]
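The same point shows up directly with Java's 16-bit types: short is signed and char is unsigned, so the identical bit pattern 0xFFFF prints differently depending on the type. A minimal sketch:

```java
public class SameBitsDemo {
    public static void main(String[] args) {
        short s = -1;            // bit pattern 0xFFFF, signed view
        char c = (char) s;       // same 16 bits, unsigned view
        System.out.println(s);       // prints -1
        System.out.println((int) c); // prints 65535
    }
}
```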