Trying to understand what happens when converting from decimal to binary. I understand how to get a positive number into binary, but a negative decimal number is throwing me off. The number I am using is -42. I have illustrated a positive 42 below. Can someone help me understand what is happening?

128 64 32 16  8  4  2  1
  0  0  1  0  1  0  1  0

But what I'm seeing as -42 looks like this:

128 64 32 16  8  4  2  1
  1  1  0  1  0  1  1  0

When looking at a negative decimal, how do I know whether the leading 1 is a sign bit or actually part of the value?

My confusion stemmed from adding one to the result of 42 after inverting the bits:

42:  00101010
inv: 11010101
+1:  11010110 = -42
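That invert-then-add-one recipe can be checked directly in Java; `Integer.toBinaryString` prints the raw bit pattern of an `int` (all 32 bits for a negative value, so the low byte matches the table above):

```java
public class TwosComplement {
    public static void main(String[] args) {
        // 42 as raw bits (leading zeros are not printed)
        System.out.println(Integer.toBinaryString(42));   // 101010

        // -42: all 32 bits are printed; note the low byte is 11010110
        System.out.println(Integer.toBinaryString(-42));  // 11111111111111111111111111010110

        // the recipe done explicitly: invert every bit, then add 1
        int inverted = ~42;          // ...11010101
        int minus42  = inverted + 1; // ...11010110
        System.out.println(minus42 == -42);               // true
    }
}
```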

How is infinity expressed in binary format? Such as double1 / double2, where double2 = 0.0

leo donahue (Ranch Hand; joined Apr 17, 2003; posts: 327) posted:
Getting more confused. I thought I knew this until I tried to use << on an integer value of 1:

int i = 1;
int result = i << 32;

This yields result = 1. I expected shifting left by 32 to act like multiplying by 2^32, which should overflow the int to 0, not leave the value unchanged. What is happening in the background besides simply shifting bits around?
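What's happening here is that Java masks the shift distance: for an `int`, only the low five bits of the distance are used (JLS §15.19), so `i << 32` is effectively `i << 0`. For a `long`, the low six bits are used. A small sketch:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int i = 1;
        System.out.println(i << 32);   // 1: distance 32 & 0x1f == 0, so no shift
        System.out.println(i << 33);   // 2: distance 33 & 0x1f == 1

        long l = 1L;
        System.out.println(l << 32);   // 4294967296: long shifts use distance & 0x3f
    }
}
```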

It sounds like part of what is confusing you is the two's complement form. This is the way computers store integers, and it is not quite the plain binary you write on paper. On paper, the binary form of -2 (base 10) is simply -10. A computer could represent the sign with a bit, 0 for + and 1 for - (this is called sign-magnitude), but that leads to a problem: 10000000 and 00000000 would both mean zero, because one is -0 and one is +0. Besides wasting a bit pattern, having two forms of 0 complicates arithmetic. Therefore, computers use two's complement: to negate a number, you invert every bit and then add 1. This makes arithmetic surprisingly easy, but you'll probably have to work through a couple of examples to see why. Your textbook probably has a clearer discussion of the subject than anything I can give here.

As for your infinity question: there is no representation of infinity in the integer types (in Java, integer division by zero simply throws an ArithmeticException). For floating-point types, dividing a nonzero number by 0.0 gives a signed Infinity, and only the indeterminate case 0.0 / 0.0 gives NaN (not a number). This mirrors the mathematical distinction between n/0, where n != 0, and 0/0. Infinity and NaN each have their own specific bit patterns, and these patterns, along with the other features of floating-point numbers, are specified by the IEEE 754 standard, which nearly all computers and languages follow.
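You can see both special values, and their underlying bit patterns, directly in Java; `Double.doubleToLongBits` exposes the IEEE 754 representation:

```java
public class FloatSpecials {
    public static void main(String[] args) {
        double d1 = 1.0, d2 = 0.0;

        System.out.println(d1 / d2);    // Infinity  (nonzero / 0.0)
        System.out.println(-d1 / d2);   // -Infinity (sign bit set)
        System.out.println(d2 / d2);    // NaN       (0.0 / 0.0)

        // the raw IEEE 754 bit patterns, as hex
        System.out.println(Long.toHexString(
            Double.doubleToLongBits(Double.POSITIVE_INFINITY))); // 7ff0000000000000
        System.out.println(Long.toHexString(
            Double.doubleToLongBits(Double.NaN)));               // 7ff8000000000000
    }
}
```

In both patterns the exponent field is all ones; an all-zero fraction means infinity, while any nonzero fraction means NaN.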