A char holds a 16-bit Unicode character. Under the covers it is really a 16-bit unsigned integer, so it can be assigned directly from an integer literal, as in your examples. The catch is that the value must be in the char's range, or you get a compile error.
The largest number it can hold is 2^16 - 1 (^ as in power, not the XOR operator), which is 65535.
When you see a hex assignment it is simple: the largest hex value is 0xFFFF, because we only have two bytes. So 0xbeef is fine.
For an octal literal I don't know of a shortcut other than converting the number to decimal and comparing it to 65535. You could memorize that 65535 in octal is 177777, so any octal number <= 0177777 is OK.
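To see the range check in all three bases, here is a small sketch (the class name is mine):

```java
public class CharRange {
    public static void main(String[] args) {
        char a = 65535;     // decimal: the maximum value, compiles fine
        char b = 0xFFFF;    // hex: same maximum, exactly two bytes
        char c = 0177777;   // octal: 177777 octal == 65535 decimal
        // char d = 65536;  // one more and it won't compile:
        //                  // "possible loss of precision"
        System.out.println((int) a + " " + (int) b + " " + (int) c);
    }
}
```

All three print 65535 when widened back to int, since they are the same bit pattern.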
Any numeric value can be cast to char and get past the compiler; in that case the char keeps only the low-order two bytes.
char c = (char)0xFFFFFFFFFF0041L; // what char does c hold?
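You can check the answer by running a small sketch (the class name is mine): the cast throws away everything except the low-order 16 bits.

```java
public class CharCast {
    public static void main(String[] args) {
        // Only the low-order 16 bits of the long survive the cast.
        char c = (char) 0xFFFFFFFFFF0041L; // low 16 bits are 0x0041
        System.out.println(c);             // prints 'A' (0x41 is 'A')

        char d = (char) -1;                // low 16 bits are 0xFFFF
        System.out.println((int) d);       // prints 65535
    }
}
```

So c holds 'A', because the rightmost two bytes of the long are 0x0041.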
HTH [ September 08, 2006: Message edited by: Tom Adams ]
Thanks a lot, Tom, for such a clear description. I really appreciate it. Thanks again.