real-time and 3D workstations.

"Kevin R. Martin" -578-4316, 213 DSLKRM%rsgate.rsg.hac.com at BRL.MIL
Fri Sep 15 08:26:36 AEST 1989


     Please excuse my delay in reading and contributing to the engaging
discussion on realtime unix/graphics.  All the comments were great.  I couldn't
resist adding a thing or two.

     First, on the discussion on realtime.  Realtime is what you need it to be,
no more or no less.  From James Martin, Design of Real-Time Computer Systems:
"A real-time computer system may be defined as one which controls an
environment by receiving data, processing them, and taking action or returning
results **sufficiently** quickly to affect the functioning of the environment
at that time."  Milliseconds and microseconds are interesting, but they don't
define realtime.  They just define our current limits in quantizing time and
reacting to it via computers.  (Of course the existence of "real-time
features" makes the job easier and helps to "classify" an operating system).

     Let's not forget that the Voyager sent back 'realtime' video, traveling at
the speed of light, which was received several hours after it encountered its
subjects :-)!

     Second, out of interest in the subject of realtime workstations, let me
refer to a case involving realtime unix/graphics and the measurement of time.

     In any discussion on measuring time we usually refer to these terms (among
others):

Resolution:	Minimum time interval that can be measured.
Accuracy:	The degree of conformity of the measure to a true value.
Precision:	The degree of refinement with which a measurement is stated.
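     As an aside, resolution is the only one of the three you can get a feel
for from inside the machine.  A minimal sketch of the usual trick (my own
illustration, not anything from the original discussion) is to spin on
gettimeofday() until the value it returns changes; the smallest step you ever
observe bounds the clock's effective resolution.  Accuracy and precision need
a reference outside the box.

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval t0, t1;
        long step;

        /* Spin until the clock value changes; the difference is the
         * smallest tick this clock will ever show us.               */
        gettimeofday(&t0, (struct timezone *) 0);
        do {
            gettimeofday(&t1, (struct timezone *) 0);
        } while (t1.tv_sec == t0.tv_sec && t1.tv_usec == t0.tv_usec);

        step = (t1.tv_sec - t0.tv_sec) * 1000000L
             + (t1.tv_usec - t0.tv_usec);
        printf("smallest observed clock step: %ld usec\n", step);
        return 0;
    }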

     Now let's say the unix/graphics task is to measure the amount of time it
takes for an observer to react to an event (stimulus) in the graphics. 
Ignoring the speed of light (a valid assumption this time :-)), we would want
to measure the time from when the event occurs on the screen, until the
observer taps a response button.  To get a feeling for the time and distances
involved-- imagine a car appearing out of the fog, in your lane, heading in
your direction.  If you're both traveling at about 35 mph, ten milliseconds
represents about one foot of closing distance.
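     (Checking the arithmetic: the two cars close at 70 mph, which is
70 * 5280 / 3600, or roughly 103 feet per second, so ten milliseconds works
out to just over a foot.)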

     Let's assume the event (say a traffic light changing this time) takes place
in the middle of a non-interlaced display.  Since I presume I can't call
gettimeofday() at the exact point in time on the display when the event becomes
visible (say when half the pixels representing the event are turned on, or even
when the middle scan line of those representing the event comes on), I'll end
up calling it some time nearby.  The value I read may very well have
microsecond resolution, and accuracy of 1 millisecond compared to a 'real'
world clock.  However, because of the latency (I couldn't access the clock at
the desired point in time), the value I have to work with may need to be stated
with a precision somewhere around half a frame time (+/- 16.667 msec on a 30Hz
noninterlaced monitor).  And this is just the time measurement needed to
represent the start of the interval being measured!  Similar difficulties
result in measuring the end of this interval (and any interim points), and only
compound the precision loss.  Despite these limitations, we ARE measuring these
intervals with significantly better precision, WITH Silicon Graphics machines,
and some custom hardware.
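     To make the bookkeeping concrete, here is a rough sketch of how that
start-of-interval timestamp might be taken.  The drawing and buffer-swap calls
are stand-ins for whatever the graphics library provides; only gettimeofday()
and the 30Hz frame time come from the discussion above, and the value it
prints carries the same half-frame uncertainty.

    #include <stdio.h>
    #include <sys/time.h>

    #define FRAME_USEC  (1000000.0 / 30.0)   /* one 30Hz frame, ~33333 usec */

    int main(void)
    {
        struct timeval ev;

        /* draw_stimulus();   -- placeholder: render the changed light    */
        /* swapbuffers();     -- placeholder: make the new frame visible  */

        /* Read the clock as close to the swap as possible; the stimulus
         * actually becomes visible somewhere within the next frame, so
         * this value carries roughly a half-frame uncertainty.           */
        gettimeofday(&ev, (struct timezone *) 0);

        printf("event start: %ld.%06ld sec, +/- %.1f msec\n",
               (long) ev.tv_sec, (long) ev.tv_usec,
               FRAME_USEC / 2.0 / 1000.0);
        return 0;
    }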

     And yes, it is true that human-machine interaction can be considered to be
on the order of tenths of seconds.  But we would like to know just how good we
humans are and how much time we're losing.  Perhaps we can't always rely on
humans to be our only interface to machines...


Kevin R. Martin                         Internet: dslkrm at sccr50.dnet.hac.com
Radar Systems Group                     U.S. Mail:
Hughes Aircraft Company                 Loc: RC, Bldg: R50, M/S: 2723
GM Hughes Electronics Corp.             P.O. Box 92426
General Motors Corp.                    Los Angeles, CA 90009
