IEEE floating point format

Barry Margolin barmar at think.COM
Mon Aug 7 12:57:27 AEST 1989


In article <9740 at alice.UUCP> ark at alice.UUCP (Andrew Koenig) writes:
>I don't see, though, why you describe denormalized numbers as `the
>loss of precision'.  Compared with the alternative, it's a gain in
>precision.  After all, the only other thing you could do would be
>to underflow to 0, which would lose all precision.

Denormalized numbers have less precision than normalized numbers.  In
a denormalized number, the leading zero bits of the mantissa carry no
information, so the smaller the denormalized value, the fewer
significant bits it retains.
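
To make that concrete, here is a minimal sketch in C.  It assumes IEEE
double precision; the test value DBL_MIN / 1e10 is just my choice of a
convenient way to force a result well into the denormalized range.

#include <stdio.h>
#include <float.h>

int main(void)
{
    double normal = DBL_MIN;         /* smallest normalized double, 53 significant bits */
    double denorm = DBL_MIN / 1e10;  /* well below the normalized range */

    /* Dividing by 1e10 shifts the value down by about 33 binary places.
       The exponent is already at its minimum, so those become leading
       zero bits of the mantissa and only about 20 significant bits
       survive. */
    printf("normalized:   %.17g\n", normal);
    printf("denormalized: %.17g\n", denorm);

    /* Scaling back up does not recover the lost bits. */
    printf("round trip:   %.17g (was %.17g)\n", denorm * 1e10, normal);
    return 0;
}

The round-trip value comes back differing from DBL_MIN after eight or
nine significant digits, which is exactly the precision left in a
denormalized number that small.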

You are confusing accuracy with precision.  Think back to your high
school and college science courses, where you had to state the
precision of experimental results explicitly.  When you write 1.3, it
implies that you have only two digits of precision (and you might write
1.3+/-.05); however, if you use a high-precision device you might
measure something as 1.3000, which is +/-.00005.  Precision,
therefore, is the number of significant digits you are sure of.

A denormalized number is more accurate than underflowing to zero, but
it isn't necessarily more precise than zero.
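
A short sketch of that accuracy side, again under IEEE double precision
and again using DBL_MIN / 1e10 only as an illustrative value:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    double denorm = DBL_MIN / 1e10;  /* kept as a denormal, ~20 significant bits */

    /* Estimate the relative error the division introduced by scaling
       back into the normalized range, where DBL_MIN itself is exact. */
    double relerr = fabs(denorm * 1e10 - DBL_MIN) / DBL_MIN;

    printf("denormalized result: %.17g\n", denorm);
    printf("relative error:      %.2g  (accurate, though only ~20 bits wide)\n", relerr);
    printf("underflow to zero:   relative error 1 (the whole value is lost)\n");
    return 0;
}

The denormalized quotient lands within about one part in a million of
the true value, while flushing to zero would miss by the entire value;
that is the accuracy gain, and it comes with reduced precision.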

Barry Margolin
Thinking Machines Corp.

barmar at think.com
{uunet,harvard}!think!barmar


