"wobble" in Floating Point (LONG) (was Re: comp_t)

j chapman flack chap at art-sy.detroit.mi.us
Sat May 18 23:23:35 AEST 1991


In article <599 at eskimo.celestial.com> nanook at eskimo.celestial.com (Robert Dinse) writes:
>In article <9105060921.aa11316 at art-sy.detroit.mi.us>, chap at art-sy.detroit.mi.us (j chapman flack) writes:
># 
># mantissa = comp_t & 0x1FFF
># exponent = comp_t & 0xE000

Yow!  Did I actually write that?  I meant
exponent = ( comp_t & 0xE000 ) >> 13, of course.
># 
># double ticks = ldexp( (double)mantissa, 3*exponent)
># 
># The extra factor of 3 in the exponent bothers me, ...
>
>Specifically they stated that the exponent was base 8. Now, ldexp takes

Well, that confirms it, I guess.  It must have bothered me because I didn't
think about it long enough.  When I'm thinking about a problem, I sometimes
try to simplify it by thinking about smaller quantities and then generalizing.
This can get silly when the topic is floating point.  I should know by now.

What bothered me was the idea that as soon as you use up your 13 bits of
significand the first time, you have to jump from counting by 1 to counting
by 8.  You'd lose less resolution if you just jumped to counting by 2 at that
point.

My intuition stopped there; the actual useful fact about that
phenomenon is that you can define a measure called "wobble" which is the
amount by which the relative error of a floating-point representation can
vary for a fixed error of x "units in the last place."  For example, a
comp_t can count up to 8191 ticks without missing a tick.  From 8K to
64K-1 ticks, you count by 8.  So if a process actually used 8193 ticks,
the closest comp_t would be 8192, which is off by one-eighth of an ulp,
and differs from the real value by 0.0122%.  If a process used 65529 ticks,
the closest comp_t would be 65528, which is still off by 1/8 ulp, but only
0.00153%.  The relative error corresponding to a certain error in ulps can
vary by a factor of 8!

In general, if a floating-point representation uses base b, the relative
error corresponding to a fixed error in ulps can wobble by a factor of b.
So a floating-point representation with a binary exponent minimizes wobble,
which is why it intuitively appealed to me, though I didn't work it all out.

I just found the information above in David Goldberg, "What Every Computer
Scientist Should Know About Floating-Point Arithmetic" in the March 1991
_ACM Computing Surveys_.  Lots of other good stuff in there too.

Of course, the AT&T folks just traded off wobble for storage size and
dynamic range.  They wanted to stay in 16 bits.  With a binary exponent,
they could only measure about 2.9 hours on a 100Hz machine.  Using base 8,
a comp_t can measure about 5.4 years (with resolution of about 6 hours toward
the end).  If they had chosen base 4, the range would be about 15 days,
which would still result in some overflows for anybody who doesn't
remember to panic the system every two weeks.  ;-)

Now if they put all of that thought into the DESIGN, why couldn't they
have put some of it into the COMMENTS ??

>
>     I am curious how you arrived at that formula, but now all things
>considered it makes sense. Thanks for following up on my post!

I assumed the exponent would be either the high 3 bits or the low 3 bits, not
somewhere in the middle.  I ran sleep(1) a few times for periods that
varied by a few powers of two (in ticks).  Look at the values and see which
bits change when.  Make theories.  Compare to what acctcom says.  Repeat....
-- 
Chap Flack                         Their tanks will rust.  Our songs will last.
chap at art-sy.detroit.mi.us                                    -MIKHS 0EODWPAKHS

Nothing I say represents Appropriate Roles for Technology unless I say it does.



More information about the Comp.unix.programmer mailing list