C I/O Question

guy at sun.UUCP
Thu Aug 7 20:17:24 AEST 1986


(Followups redirected to net.unix only, as this no longer has anything to do
with C I/O.)

> I've had a problem using this on an AT&T 3b5 running Syst 5R2.
> It seems that if you set the min character and timeout value,
> the timeout does NOT occur until the process receives at least
> ONE character.

That is exactly what's supposed to happen; there's no problem.  The
"c_cc[VTIME]" value was NOT originally intended as a read timeout.  It was
intended to work with the "c_cc[VMIN]" value in a fashion similar to the way
that some terminal drivers handle input silos on some terminal multiplexers.

The intent is that if data is coming in at a high rate, you don't want to
process each character as it comes in; you want to wait until a reasonable
number of characters have come in and process them as a group.  (In the case
of the terminal driver servicing interrupts, this means you get one "input
present" interrupt for the entire group, rather than one for each character;
in the case of a user program reading from a terminal, this means you do one
"read" system call for the entire group, rather than one for each
character.)

However, if data is not coming in at a high rate, you don't want to wait for
more than some maximum length of time before processing the input;
otherwise, the response of the program to the input will be bursty.  If the
data rate is highly variable, you want the system to handle both periods of
high and low data rates without having to explicitly switch modes or tweak
some parameter.

In the terminal driver, this is done by setting the "silo alarm" level to
the size of the group, which means that an "input present" interrupt will
occur when at least that many characters are available.  A timer will also
call the "input present interrupt" routine periodically.  That routine will
drain the silo.

This does mean that the "input present" interrupt routine may be called if
no input is present, since the timer goes off whether the silo is empty or
not.  One way of solving this is to adjust the silo alarm level in response
to the most recent estimate of the input data rate; in periods of low data
rate, the silo alarm level will be set to 1 and the timer can be disabled,
since the "input present" interrupt will occur as soon as a character
arrives.

Another way of solving this is to have the timer be part of the terminal
multiplexer, and have it go off only if the silo is not empty.

The equivalent of the silo alarm level is the "c_cc[VMIN]" value, and the
equivalent of the timer is the "c_cc[VTIME]" value.  The S3/S5 terminal
driver chooses the equivalent of the second solution to the problem of
spurious "input present" indications.  In the case of "read"s from the
terminal, it is necessary that some way of blocking until at least one
character is available be provided.  Most programs do not want to repeatedly
poll the terminal until input is available; they want to be able to do a
"read" and get at least one character from every read.

The System III driver did not support the use of the "c_cc[VTIME]" value as a
timeout.  The System V driver does; my suspicion is that somebody read the
documentation, thought it *did* act as a timeout, and filed a bug report
when it didn't.  Somebody then went off and "fixed" this "bug".  If you want
a real timeout, so that the read will complete if any data comes in *or* if
some amount of time has elapsed since the "read" was issued (rather than
since the last byte of data came in), you have to set "c_cc[VMIN]" to zero;
in that case, the "c_cc[VTIME]" value acts as a read timeout.
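
Again as an illustrative sketch only (the 5-second figure is made up),
that case looks like:

/*
 * Sketch: a true read timeout.  With "c_cc[VMIN]" set to zero, the
 * timer runs from the time the "read" is issued; the "read" returns
 * as soon as any data is available, or returns 0 after VTIME tenths
 * of a second with no data at all.
 */
#include <termio.h>
#include <sys/ioctl.h>

int
set_read_timeout(int fd)
{
	struct termio tio;

	if (ioctl(fd, TCGETA, &tio) < 0)
		return -1;
	tio.c_lflag &= ~ICANON;	/* non-canonical mode, as above */
	tio.c_cc[VMIN] = 0;	/* don't insist on any characters at all... */
	tio.c_cc[VTIME] = 50;	/* ...give up after 5 seconds of silence */
	return ioctl(fd, TCSETAW, &tio);
}

(A "read" return value of 0 in this mode means the timer expired, which
callers have to distinguish from a real end-of-file by other means.)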

This is explained in painful detail in the System V Interface Definition and
in Appendix C of the IEEE 1003.1 Trial-Use Standard; one hopes the
paragraphs devoted to explaining this migrate into the regular manual pages.
-- 
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy at sun.com (or guy at sun.arpa)


