How not to write a loop, revisited

Bill Gibbons bgibbons at Apple.COM
Sat Jun 25 04:55:19 AEST 1988


>>I believe that floating point arithmetic is exact as long as all the values
>>are integers with not too many bits ....
>>...

>Ahem, I hope you are using tests on the loop that do not depend on exactness.
>If not, two things can go wrong:
>1)  somewhere in your computation, a roundoff error creeps in, and
>  your results are no longer exact (example: raise a to the power b: for
>  int's this is typically done by multiplication (exact), for floats by
>  logarithms or similar (not exact, even if both a and b have integer
>  values)).
>2)  you port to a machine with a slightly different floating point
>  representation, and what used to be exact is no longer.
>...nobody I know *guarantees* that integers are representable (i.e. the
>closest approximation to 2 might be 1.999999)

If you press them for it, virtually everyone *guarantees* that integers are
representable, as long as they fit within the MANTISSA portion of a floating-
point number.  Almost everyone guarantees that add, subtract and multiply 
(of integer-valued floating-point numbers) produce exact results, as long as
the result is representable (i.e. the exact integer value still fits).  Divide
is less reliable, but everyone with hardware floating-point will give you the
right result for "trunc(x/y)".

Since add and subtract are safe, so are comparisons.

I agree that exponentiation is not safe in general, but raising a floating-point
value to an integer (typed as integer) power is almost always done with repeated
multiplies (in about log2(exponent) multiplies), and since those multiplies are
exact, so is this type of exponentiation.

The log function is NOT accurate enough, but the usual reason for taking log
of an integer is to get the number of bits or decimal digits.  The bit length
can be computed from the floating-point exponent (using the IEEE logb()
function).  The number of decimal digits can be computed as
     (int) (log10(x)*(1.0+epsilon)) + 1
where epsilon is about 2**(-n+3), and floating-point numbers have "n" bits
in the mantissa.  This is pretty safe as long as the integer is less than (n-5)
bits long; the log() function on any given machine is almost always accurate 
enough to get this right.

As for converting between different machines, this is safe as long as both
machines have enough bits in the mantissa and a binary conversion is used.
Going through ASCII is usually safe, as long as you print enough digits and you
round the result after reading it back.

Bill Gibbons
(consulting to Apple)



More information about the Comp.lang.c mailing list