C Floating point arithmetic

Eugene D. Brooks III brooks at lll-crg.ARpA
Tue Dec 3 17:01:02 AEST 1985


>I hope never to have the misfortune to have to use one of your programs.
Let's be nice now; I am not picking on you, nor am I trying to sell you
one of my programs.  I write them for my own use and they work fine for me.

>The notion that "trying it both" ways guarantees a program that will never
>blow up is patently absurd and in fact accounts for many of the horror
I didn't say that trying it both ways guarantees a program will never blow
up.  It demonstrates that the loss of precision from using single instead of
double is not important for the particular run in question.  I said no more
and no less than that, and I don't think that statement can be argued with.
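
For concreteness, a minimal sketch (my own, in modern C terms, not part of
the original exchange) of what "trying it both ways" amounts to: run the same
calculation once in single and once in double, and compare the answers for
the particular input at hand.

	#include <stdio.h>
	#include <math.h>

	int main(void)
	{
		float  sf = 0.0f;	/* single precision accumulator */
		double sd = 0.0;	/* double precision accumulator */
		int i;

		for (i = 1; i <= 100000; i++) {
			sf += 1.0f / (float)i;
			sd += 1.0 / (double)i;
		}
		/* If the difference is negligible next to the answer, single
		   precision was good enough for this particular run. */
		printf("single %.7g  double %.15g  diff %g\n",
		    sf, sd, fabs(sd - (double)sf));
		return 0;
	}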

The statement that you argue with is of course absurd, and I didn't make it.

Trying a code in double precision and finding that it produces the same
results as single does not prove the stability or robustness of the code, if
that is what you think I said.  I will, however, argue that an unstable code
can be useful.  If you have an algorithm that simulates a physical system in
time, it can be useful, and even be the "best" algorithm for your
application, if at a time step size that produces good enough accuracy it is
more efficient than an absolutely stable one; that is, you get your answers
before the code blows up.  The fact that there is an exponentially growing
error is not a problem if it does not get big enough for you to see before
the run completes.  I know it's living on the ragged edge, but if it means
you get the job done in one day instead of a week, you go for it.
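
As an illustration of the trade-off (a sketch of my own, not a production
code): forward Euler applied to a harmonic oscillator is formally unstable,
since the amplitude grows by a factor of sqrt(1 + (h*w)^2) every step, yet
for a small enough step and a short enough run the growth never gets big
enough to matter.

	#include <stdio.h>

	int main(void)
	{
		double x = 1.0, v = 0.0;	/* position and velocity */
		double w = 1.0;			/* oscillator frequency */
		double h = 0.001;		/* time step */
		int i;

		for (i = 0; i < 10000; i++) {	/* integrate 10 time units */
			double xn = x + h * v;
			double vn = v - h * w * w * x;
			x = xn;
			v = vn;
		}
		/* The exact energy is 0.5; the excess is the exponentially
		   growing error, still too small to see on a run this short. */
		printf("energy = %g\n", 0.5 * (v * v + w * w * x * x));
		return 0;
	}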

It is very much like doing data collection on a down-hole shot at Nevada:
the bomb of course destroys all of the measuring equipment, and the trick is
to get your answers before that happens, and of course to know when it has
happened.  With a down-hole shot there is never any doubt that things have
blown up on you.  A numerical algorithm is usually more subtle.

Doing things this way of course gets the noses of numerical analysts out
of joint.  They take offense at the idea of using an algorithm that is
numerically unstable and therefore "garbage".  All I can say is that there
are a lot of useful things to be found in another man's garbage, and when
they are useful they are usually a bargain.


I think that the issues which started this chain of postings have gotten
sidetracked.  This started as a discussion of whether or not computation on
floats should be done in single precision.  If the programmer had wanted
double precision he would have declared the variables in question to be
double.  The value of doing computation on floats in double is very dubious,
especially if the IEEE standard for 32 bit floating point is being used.
Having the default type of the constant 1.0 be double is also very dubious
if it causes the floats summed with it to be promoted to double.

	float a,b;

	a = b + 1.0;	/* Gets done in double because 1.0 is a double.
			Gag me with a spoon. */
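
For what it is worth, a compiler that accepts a float constant suffix (an
ANSI-style feature, so take this as an aside of mine rather than part of the
original complaint) lets the programmer say what he means:

	float a,b;

	a = b + (float)1.0;	/* or b + 1.0f on an ANSI-style compiler;
				   either way the intent is a single
				   precision sum, if the compiler honors it */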


