Float Double Parameters

herndon at umn-cs.UUCP
Tue Apr 8 14:34:00 AEST 1986


  [Spare bits.]

  Oy!  Floats & Doubles have certainly caused their share of
troubles in both C & Unix.  I believe the practice of doing
all arithmetic in double precision originated with
the PDP-11, which had some mis-features in its floating point
processor.  It seems that on the PDP-11 (/45 certainly, and
I believe the /70 and others too) there was no way to write
the floating point processor status word.  Thus if two or more
processes were using the FPP, one of them generating
overflows and another generating underflows, the processes
would get each other's errors after context
switches.  I also recall that there was an instruction (SETD)
which caused all FPP instructions to be performed in double
precision, and that there was no way (I'm not sure here) to
query the FPP to see whether it was in double or single precision
mode.  Since double precision arithmetic was not much slower
than single, and floating point wasn't used that much anyway, it
was decided that all arithmetic would be done in double precision,
and the kernel would assume that all processes ran in double
precision mode.  During context switches, the kernel would save
all FPP registers, reload them for the new process, and leave
the FPP in double mode.  (Screwing any process which wanted to
use single precision arithmetic.)
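  By way of illustration, here is a small sketch of how that
decision shows up at the C source level (assuming a compiler that
accepts function prototypes and 'f'-suffixed float constants,
which classic K&R C does not): float operands were widened to
double before arithmetic, and a float handed to a function with
no prototype in scope, or through "...", still goes across as a
double.

	#include <stdio.h>

	int
	main(void)
	{
		float f = 0.1f;

		/* Under the old rules f * f was computed in double;
		 * the proposed ANSI rules let it stay a float.  Either
		 * way, a float passed through "..." is promoted to
		 * double, which is why %g below is the right format. */
		printf("sizeof f       = %ld\n", (long) sizeof f);
		printf("sizeof (f * f) = %ld\n", (long) sizeof (f * f));
		printf("f promoted     = %.17g\n", f);
		return 0;
	}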
  As far as I know (admittedly, not far) there is no longer any
good reason for the current schizophrenia about single/double
precision arithmetic, other than historical precedent.  Does
anyone know any other good reasons, such as design faults in
the VAX FPA?

				Robert Herndon
				...!ihnp4!umn-cs!herndon
				herndon at umn-cs
				herndon.umn-cs at csnet-relay.ARPA
				Dept. of Computer Science,
				Univ. of Minnesota,
				136 Lind Hall, 207 Church St. SE
				Minneapolis, MN  55455


