x/10.0 vs x*0.1

Chris Torek chris at mimsy.UUCP
Thu Oct 13 16:44:27 AEST 1988


In article <1700 at dataio.Data-IO.COM> Walter Bright suggests that one
>>>Try very hard to replace divides with other operations, as in:
>>>		x / 10
>>>	with:
>>>		x * .1

>In article <10332 at s.ms.uky.edu>, aash at ms.uky.edu (Aashi Deacon) notes:
>>According to theory, '.1' cannot be represented exactly as a floating
>>point number because in base2 it is irrational.  Wouldn't then the
>>first be better in this case?

Yes (subject to the usual constraints, i.e., that you know what you are
doing: if your input data has little precision, you can afford minor
degradations in computations).
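
For instance, on a machine with IEEE-format doubles (an assumption;
your hardware may differ), the two expressions can produce visibly
different results:

        #include <stdio.h>

        int
        main()
        {
                double x = 3.0;

                /*
                 * .1 is rounded when it is converted to binary, and the
                 * multiply inherits that error; the divide starts from
                 * the exact operands 3.0 and 10.0 and is rounded once.
                 */
                printf("x / 10.0 = %.17g\n", x / 10.0);
                printf("x * 0.1  = %.17g\n", x * 0.1);
                if (x / 10.0 != x * 0.1)
                        printf("the two differ\n");
                return 0;
        }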

In article <711 at wsccs.UUCP> dharvey at wsccs.UUCP (David Harvey) writes:
>For that matter, it is also very difficult to represent 10.0 (I am
>assuming you are working with floating point) in any floating point
>representation.

Not so.  `.1' is a repeating binary fraction; but `10.0' in base 2 is
merely `1.01 E 11' (the exponent here is base 2 as well, 11_2 = 3_10):
1*8 + 0*4 + 1*2.  (In conventional f.p., one uses .101 E 100 rather
than 1.01 E 11, but it amounts to the same thing.)  Think about it a
while, and you will see that any integer that needs no more than M bits
to be represented in binary can be represented exactly in binary
floating point whenever that f.p. representation has at least M bits of
mantissa.  (Then, since the first bit after the binary point of such a
`normalised' number is always a 1, you can drop it from the stored
representation, so M-1 explicit mantissa bits suffice.)
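
One easy way to see this (assuming IEEE-style doubles with a 53-bit
mantissa; adjust the constants for other formats) is to print the
constants to more digits than the format actually holds:

        #include <stdio.h>

        int
        main()
        {
                printf("10.0 = %.20g\n", 10.0); /* short binary fraction: exact */
                printf(" 0.1 = %.20g\n", 0.1);  /* repeating: rounded on conversion */

                /* 2^53 - 1 is 53 one-bits, still exact in a 53-bit mantissa. */
                printf("%.20g\n", 9007199254740991.0);
                return 0;
        }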

>Also, if the operation is done in emulation mode (no
>floating point in MPU or if math coprocessor it is not in machine) the
>advantage will be nonexistent.

Again, not so: f.p. multiplication of normalised (ugly word, that)
numbers is actually the simplest f.p. operation, as you need re-
normalise only once, and that is only 1 bit and in a known direction
(down)% and can be done during the integer multiply phase.  The rest is
just integer multiplication and addition.  (A sketch in C follows the
footnote below.)
-----
% The number of bits in the result is the sum of the number of bits in
  the multiplier and in the multiplicand.  Since the first bit of both
  multiplier and multiplicand is always a 1, the first two bits of this
  result are 11, 10, or 01.  If 01, normalisation consists of shifting
  the result left 1 bit and decrementing the resulting exponent.
-----
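
The sketch just mentioned, as C for concreteness: a toy model using
32-bit mantissa fractions with the top bit set, not any real machine's
format.

        #include <stdint.h>

        /*
         * Multiply two normalised 32-bit mantissa fractions and
         * renormalise.  Both operands lie in [2^31, 2^32), so their
         * 64-bit product lies in [2^62, 2^64): at most one left shift
         * is needed, and the exponent can only move down.  *exp holds
         * the sum of the operands' exponents on entry.
         */
        uint32_t
        fmul_mant(uint32_t a, uint32_t b, int *exp)
        {
                uint64_t p = (uint64_t)a * b;

                if ((p >> 63) == 0) {   /* top bit clear: the `01' case */
                        p <<= 1;
                        --*exp;
                }
                return (uint32_t)(p >> 32);     /* high 32 bits, truncated */
        }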

>Even with the coprocessor (math ops) a MUL takes approximately the same
>amount of clock cycles a DIV does.

Only in poorly-implemented coprocessors.  (Your phrase `*the* coprocessor'
makes me wonder of which coprocessor you are thinking.)
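
Timings vary widely from chip to chip, so measure on your own machine;
a crude sketch in C (the loop count and the use of clock() are just
for illustration, and the results depend on compiler and coprocessor):

        #include <stdio.h>
        #include <time.h>

        #define N 1000000L

        int
        main()
        {
                volatile double a = 3.0, s = 0.0;  /* volatile keeps the loops honest */
                long i;
                clock_t t0, t1, t2;

                t0 = clock();
                for (i = 0; i < N; i++)
                        s += a * 0.1;
                t1 = clock();
                for (i = 0; i < N; i++)
                        s += a / 10.0;
                t2 = clock();

                printf("mul: %g sec, div: %g sec\n",
                    (t1 - t0) / (double)CLOCKS_PER_SEC,
                    (t2 - t1) / (double)CLOCKS_PER_SEC);
                return 0;
        }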

>You would be much better served by making variables that are used
>constantly registers (if you have float registers) than some of this stuff.

That depends on your inner loop.

>Also, making Fortran indexing go backwards and C's go forwards ... for
>multiply dimensioned arrays does wonders to reduce the page faulting
>that normally occurs with multitasking/multiuser machines.

True.
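
For C, whose arrays are stored row-major, that means letting the
rightmost subscript vary fastest in the inner loop; Fortran, being
column-major, wants the opposite.  A small illustration (the array
size and the page-size remark are just assumptions for the example):

        #define ROWS 1000
        #define COLS 1000

        double a[ROWS][COLS];

        /*
         * Good order for C: consecutive inner-loop references touch
         * adjacent memory, so each page is used up before moving on.
         */
        void
        clear_by_rows(void)
        {
                int i, j;

                for (i = 0; i < ROWS; i++)
                        for (j = 0; j < COLS; j++)
                                a[i][j] = 0.0;
        }

        /*
         * The natural Fortran order, bad for C: consecutive references
         * are a whole row apart (8000 bytes here), so nearly every one
         * lands on a different page.
         */
        void
        clear_by_cols(void)
        {
                int i, j;

                for (j = 0; j < COLS; j++)
                        for (i = 0; i < ROWS; i++)
                                a[i][j] = 0.0;
        }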
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris at mimsy.umd.edu	Path:	uunet!mimsy!chris


