Inherent imprecision of floating point variables

Jim Giles jlg at lambda.UUCP
Sat Jun 30 09:39:32 AEST 1990


From article <b3f.2688bfce at ibmpcug.co.uk>, by dylan at ibmpcug.co.uk (Matthew Farwell):
> [...compressed example ...]
> main() {
> 	float f; f = 0.0;
> 	while (1) {
> 		if (f == 10.0) break;
> 		printf("%f\n", f);
> 		f += 0.1;}
> 	printf("Stopped\n");}
> 
> If it's all to do with conversion routines, why doesn't this stop when f
> reaches 10?

It doesn't (or shouldn't) have anything to do with the conversion routines.
The job of the conversion routines is to convert the decimal into the internal
representation and vice-versa.  They _should_ do this job as accurately as
possible - if the number is exactly representable in both bases you have a
right to expect exact conversion.

The above routine fails (on some machines) to terminate because 0.1(decimal)
is not exactly representable in base 2 (or many other bases for that matter).
The value of the increment is therefore approximated.  Depending upon the
precision and rounding mode of the machine in question, the above program
may or may not terminate.  This fault exists even if the conversion was
carried out to the nearest possible internal representation.

The value of 0.1(decimal) is 0.000110011001100110011...(binary).  This is
a repeating fraction, where the group 0011 repeats forever.  No
matter what precision your machine carries, it can't represent the whole
thing (not as a floating-point number anyway).  A hundred repeated adds
of the approximate value _can't_ be relied on to equal 10.0 - though it
might coincidentally match if the precision and rounding mode happen to
cancel the error out.

Solutions to this problem are to introduce new data types (like fixed-point
or rational) or to use a base 10 machine.  Both these solutions carry high
price-tags (in terms of speed or hardware or both).

J. Giles
