Because these types also represent negative numbers, the first bit represents the **sign** of the number, so

1xxx xxxx is negative for a byte

1xxx xxxx xxxx xxxx is negative for a short

our 130 is 1000 0010 as a byte (-126)

and 130 is 0000 0000 1000 0010 as a short (130). Even though short represents negative numbers as well, the first bit is not on.
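Here's a quick sketch you can run to see this (the class name is just mine): the same value 130 narrowed to a byte versus a short, printed alongside their bit patterns.

```java
public class SignBitDemo {
    public static void main(String[] args) {
        byte b = (byte) 130;   // narrowing keeps the low 8 bits: 1000 0010
        short s = (short) 130; // fits easily in 16 bits: 0000 0000 1000 0010

        System.out.println(b); // -126: the first bit of the byte is on
        System.out.println(s); // 130: the first bit of the short is off

        // b & 0xFF widens to int while keeping only the original 8 bits,
        // so we can print the raw pattern (leading zeros are dropped)
        System.out.println(Integer.toBinaryString(b & 0xFF)); // 10000010
        System.out.println(Integer.toBinaryString(s));        // 10000010
    }
}
```

Same eight bits both times; only the type decides whether that leading 1 is a sign bit or just another value bit.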

It is the first bit that matters, **prior** to doing any calculations. If this were an unsigned data type then its values would all be positive and there would be no need for any two's-complement conversion to read the bits, as is the case with char.

so if byte represented positive numbers only, [0, 255] instead of [-128, 127],

our 130 as 1000 0010 would be (130). No inverting bits or anything... just reading the number directly.

However, since bytes do represent both positive and negative numbers, the first bit effectively allots half the range to negative numbers and half to positive numbers, with zero being counted on the positive side.
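You can actually get that "unsigned byte" reading in Java by masking with 0xFF, and char shows what a genuinely unsigned type looks like. A small sketch (names are mine):

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        byte b = (byte) 130;       // stored bits 1000 0010, reads as -126

        // Mask with 0xFF to treat the same 8 bits as unsigned [0, 255]
        int unsigned = b & 0xFF;
        System.out.println(unsigned);  // 130

        // char is Java's only unsigned integral type: 16 bits, [0, 65535]
        char c = (char) 130;
        System.out.println((int) c);   // 130, no sign bit to interpret
    }
}
```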

Originally posted by nagaraju uppala:

the most significant bit is 0 after doing 2's complement.

You only need to invert bits if the number is negative... and if you inverted the bits because it was negative (which effectively always turns the first bit to zero), don't confuse yourself with "hmm, the first bit is zero now, so it must not be negative."
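That hand calculation (invert, add 1, attach the minus sign) can be written out directly. A sketch, just to check the arithmetic against what Java prints:

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        byte b = (byte) 0b10000010; // 1000 0010: sign bit is 1, so negative

        // Read the magnitude by hand: invert the 8 bits, then add 1
        int inverted = (~b) & 0xFF;     // 0111 1101 = 125
        int magnitude = inverted + 1;   // 126

        System.out.println(-magnitude); // -126, computed by hand
        System.out.println(b);          // -126, what Java says
    }
}
```

The inverted pattern starts with 0, but that zero is an artifact of the conversion, not a claim that the original number was positive.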

I think of it like this... you go from the range of positive numbers to the range of negative numbers. You start with the smallest positive number and go to the highest positive number, 0000 0000 to 0111 1111 [0, 127], then you go to the smallest negative number and work your way to the largest negative number, 1000 0000 to 1111 1111 [-128, -1].
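That ordering is also why a byte wraps around at the boundary: one step past 0111 1111 lands on 1000 0000. A small sketch:

```java
public class RangeDemo {
    public static void main(String[] args) {
        byte b = 127;          // 0111 1111, the highest positive byte
        b++;                   // one step past the positive range...
        System.out.println(b); // -128: 1000 0000, the smallest negative byte

        byte minusOne = (byte) 0b11111111; // all eight bits on
        System.out.println(minusOne);      // -1, the largest negative byte
    }
}
```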

[ December 12, 2008: Message edited by: Paul Yule ]