# question from literals

Pallavi ch

Ranch Hand

Posts: 76

posted 10 years ago


hi all,

I have a basic doubt about literals. Here it is:

**float b = 1L;**

How is it not generating a compile error? Please don't just tell me that it's a compile-time constant.

I tried it the other way too, assigning a long variable to a float:

long a = 999999999999999999L;

float b = a;

How can a long variable (8 bytes) be assigned to a float (4 bytes)? Is it something to do with the storage type? Because I observed from the output that float has a different storage format than long.

Thanks in Advance

Lavanya


Parameswaran Thangavel

Ranch Hand

Posts: 485


sai Venka

Greenhorn

Posts: 9

posted 10 years ago


I hope this explanation helps you understand it well.

The long data type ranges from -9223372036854775808 to 9223372036854775807, inclusive.

The float data type ranges from about -3.4 * 10^38 to 3.4 * 10^38 (in Java, float follows IEEE 754 single precision, so this is not machine-dependent).

So any long value fits into the range of a float variable. You can lose precision, but that doesn't matter for compilation: long to float is one of Java's widening primitive conversions, so the compiler allows the assignment without a cast.
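To see this in action (a minimal sketch; the class name is mine), the assignment below compiles without a cast even though a float has fewer bytes than a long:

```java
public class LongToFloat {
    public static void main(String[] args) {
        long a = 999999999999999999L;
        float b = a;  // no cast needed: widening primitive conversion (long -> float)

        System.out.println(a);  // the exact long value
        System.out.println(b);  // the nearest float, with low-order digits lost

        // Converting back does not recover the original value, because the
        // float's 24-bit significand cannot hold 18 decimal digits.
        System.out.println(a == (long) b); // false
    }
}
```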

First, consider that whenever we are working with floating point numbers, we are going to have to accept approximations. The reason is that many values have infinite decimal representation -- either with a repeating pattern (for example, 1/11 = 0.090909... or 1/3 = 0.3333...), or with an irrational, non-repeating pattern (for example, pi = 3.14159...). From a practical standpoint, we have to cut these representations off somewhere; and as soon as we do, we have an approximation. Or, in other words, we lose precision.
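As a quick Java illustration (class and variable names are mine), 1/3 cannot be stored exactly in binary either, so both float and double hold cut-off approximations -- just cut off at different points:

```java
public class RepeatingFractions {
    public static void main(String[] args) {
        float third = 1.0f / 3.0f;  // true value 0.3333... repeats forever
        double thirdD = 1.0 / 3.0;  // double keeps more bits, but is still inexact

        System.out.println(third);   // a truncated binary approximation of 1/3
        System.out.println(thirdD);  // a longer, but still truncated, approximation

        // The two approximations differ, because each was cut off after a
        // different number of binary digits.
        System.out.println((double) third == thirdD); // false
    }
}
```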

So, for the sake of a simple illustration, let's say that we decide to cut them off at the 3rd decimal place (without rounding). That is, we store 1/3 as 0.333, and 1/11 as 0.090, and pi as 3.141. None of these values is exact anymore, but we can easily store them. We've traded precision for (some degree of) practicality.

Under this standard (assuming a decimal point after the first digit), our range is only 0.000 to 9.999. So to increase range, let's add just a few more digits and use scientific notation. For simplicity, we'll use base 10 (although in a computer, this would be binary). Now, when we store the 7-digit number 1234567, we'll understand it to mean 1.234 x 10^567. Suddenly, we've greatly expanded our range. But the trade-off is in precision: most of these digits are just place-holding zeros (implied by the exponent) that convey magnitude rather than an exact quantity.

This is great for really big numbers, but what about really small numbers? Well, suppose we agree that this 3-digit exponent will automatically have a "bias" of 500 built into it. In other words, we'll always subtract 500 from whatever value is stored. So if we want an exponent of 234, we'll store 734. Why is this helpful? Because this allows us to imply negative exponents. If we store an exponent value of 123, then subtracting 500 will give us -377. Recall that a negative exponent will "move" the decimal to the left, so now we can represent extremely small numbers, as well as extremely large numbers.

We'll add one more refinement: a new digit at the beginning to indicate sign, with 0 indicating positive and 1 indicating negative.

So now in a simple 8-digit representation, we can store numbers as small as (+/-) 1.000 x 10^(-500) or as large as (+/-) 9.999 x 10^499. So we've got an enormous range to work with -- far more than what we would have with any simple 8-digit representation of a whole number -- BUT our precision is limited to those 4 digits that aren't the sign or the exponent.
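Here is the toy format above as a sketch in Java (the method name and string layout are my own illustration): one sign digit, four significand digits read as D.DDD, then a three-digit exponent carrying the bias of 500. Note that the full +/- 10^499 range of the toy format exceeds what a real double can hold, so the examples stay well inside double's range:

```java
public class ToyFloat {
    // Decode an 8-digit string laid out as [sign][DDDD][EEE]:
    //   sign digit: '0' = positive, '1' = negative
    //   DDDD: significand digits, read as D.DDD (e.g. "1234" -> 1.234)
    //   EEE:  exponent with a bias of 500 (e.g. "510" -> 510 - 500 = 10)
    static double decode(String digits) {
        int sign = digits.charAt(0) == '0' ? 1 : -1;
        double significand = Integer.parseInt(digits.substring(1, 5)) / 1000.0;
        int exponent = Integer.parseInt(digits.substring(5, 8)) - 500; // remove the bias
        return sign * significand * Math.pow(10, exponent);
    }

    public static void main(String[] args) {
        System.out.println(decode("01234510")); // +1.234 x 10^10, a large number
        System.out.println(decode("11234490")); // -1.234 x 10^-10, a tiny negative number
    }
}
```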

