Streams Loop-Around driver...

rki at apollo.UUCP
Tue Jan 27 06:11:25 AEST 1987


In article <287 at desint.UUCP> geoff at desint.UUCP (Geoff Kuenning) writes:
> 
> The sp driver performs the cross-link when an M_PROTO (I think that's
> the right one) control message is sent.  The control message contains
> a pointer to the other stream which is to be cross-linked;  this pointer
> is generated using the I_PASSFP (pass file pointer) ioctl.  (The details
> are undocumented;  what you need to know is that the message contains
> the file pointer at offset zero and nothing else.)
> 
> [A discussion of Geoff's use of the sp driver]

I don't know why I can't let this one pass like all the other strange
articles about STREAMS that have appeared over the last year, but here
goes.

Geoff has certainly been very clever in figuring out how the SP driver
works, but his method of using it is rather baroque.  The intended method
of use is:

(1) A server process (e.g. the RFS name server) opens any pair of minor
    devices via the clone interface and cross-connects them via the
    I_FDINSERT ioctl, issued on one minor device with the file descriptor
    of the other, using a NULL data part and a control part just big
    enough to hold a pointer, with an offset of 0.  This in effect causes
    the creation of an M_PROTO message containing the address of the read
    queue (I think; I forget which one) of the driver on the target
    stream, which is sent down the control stream.  Hence, the user
    process at no time needs to be in possession of a kernel address.
    (A sketch of this step appears after the list.)

(2) mknod() is used to create a name for the client end of the stream pipe.

(3) When a client process wants to obtain a private stream pipe to the
    server, it first performs step (1) itself to create a private stream
    pipe of its own.

(4) It then opens the client end of the server pipe and uses the I_PASSFD
    ioctl to pass a reference to one end of the private pipe to the server
    process (see the second sketch below).

(5) The server process then forks; the parent goes back to listening on the
    server pipe and the child has a private conversation with the client
    on the client pipe.
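
For the curious, step (1) boils down to something like the following.
Caveats: this is an untested sketch, not gospel; I'm assuming the clone
node for the SP driver is called /dev/spx (the name varies from system
to system), and sp_pipe() is just my name for the routine.

    #include <sys/types.h>
    #include <stropts.h>
    #include <fcntl.h>

    /*
     * Step (1): open two minors of the SP driver through the clone
     * interface and cross-connect them with I_FDINSERT.
     */
    int
    sp_pipe(fd)
    int fd[2];
    {
        struct strfdinsert ins;
        char *pointer;          /* room for the queue pointer the */
                                /* kernel drops into the message  */

        if ((fd[0] = open("/dev/spx", O_RDWR)) < 0)
            return (-1);
        if ((fd[1] = open("/dev/spx", O_RDWR)) < 0) {
            close(fd[0]);
            return (-1);
        }

        ins.ctlbuf.buf = (char *) &pointer;   /* control part just  */
        ins.ctlbuf.maxlen = sizeof(char *);   /* big enough to hold */
        ins.ctlbuf.len = sizeof(char *);      /* a pointer          */
        ins.databuf.buf = (char *) 0;         /* NULL data part */
        ins.databuf.maxlen = 0;
        ins.databuf.len = -1;                 /* -1 means "no data" */
        ins.fildes = fd[1];     /* whose queue address gets inserted */
        ins.offset = 0;         /* put the pointer at offset 0 */
        ins.flags = 0;

        if (ioctl(fd[0], I_FDINSERT, (char *) &ins) < 0) {
            close(fd[0]);
            close(fd[1]);
            return (-1);
        }
        return (0);
    }

Note that the user process merely supplies a buffer big enough for the
pointer; the kernel fills in the queue address itself.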

This was used very successfully with the RFS name server.
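
Steps (3) through (5) then look roughly like this on each side.  Again a
sketch: /dev/serverpipe stands in for whatever name mknod() created in
step (2), serve_client() is a made-up handler, and I've used the
I_SENDFD/I_RECVFD spellings from the <stropts.h> I know; substitute
whatever your release calls its pass-a-file-descriptor ioctl.

    #include <stropts.h>
    #include <fcntl.h>

    /* Client: make a private pipe, hand one end to the server. */
    client()
    {
        int priv[2], srv;

        if (sp_pipe(priv) < 0)               /* step (3), from above */
            return (-1);
        if ((srv = open("/dev/serverpipe", O_RDWR)) < 0)  /* step (4) */
            return (-1);
        if (ioctl(srv, I_SENDFD, priv[1]) < 0)   /* pass one end over */
            return (-1);
        close(priv[1]);
        close(srv);
        return (priv[0]);   /* private conversation happens on this end */
    }

    /* Server: accept private pipes on the well-known stream. */
    server(listenfd)
    int listenfd;
    {
        struct strrecvfd rfd;

        for (;;) {
            if (ioctl(listenfd, I_RECVFD, &rfd) < 0)  /* wait for one */
                continue;
            if (fork() == 0) {               /* step (5): child talks */
                serve_client(rfd.fd);        /* made-up handler */
                exit(0);
            }
            close(rfd.fd);                   /* parent keeps listening */
        }
    }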

> 
> One last note:  the stream buffer allocation scheme is simply stupid in
> combination with the high/low water mechanism.  I won't go into details,
> but if you send a lot of small packets that clog up, you will exhaust
> the stream buffer pool long before you hit the high-water mark.  In our
> case, the X clients used up the pool with line-drawing requests, and
> the server then couldn't get a buffer to post a mouse-click event that
> was necessary to terminate the line-drawing requests!
> -- 

I'll admit that I was not very satisfied with the flow control weightings;
by the time we had decided that the small-message problem was going to
cause real-life difficulties, political problems prevented us from fixing
them.  One easy way to alleviate the small-message flow control problem is
to change the flow control weighting tables so that no block receives a
weight of less than 128 (bytes).  I found this quite satisfactory on my own
system, where network tty traffic was chewing up all the small message
blocks.  The PC7300 solution was to do flow control based on the number
of messages rather than the size of the messages; this was done by
weighting ALL blocks as 1.  Unfortunately, these are not tunable
parameters, so you would have to recompile the system (ugh) to fix it.
[If you are really desperate, you can probably have your driver write new
weighting values into the array upon initialization (see the sketch
below), but you didn't hear this from me.]
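
Since it came up, here is the shape of that last hack.  Everything in it
is an assumption: the table name strweight[] and the class count NCLASS
are inventions of mine, so dig the real names out of your release's
STREAMS source before trying anything like this.

    /*
     * Hypothetical driver init hack: clamp the flow-control weighting
     * table so no buffer class counts for less than 128 bytes.  Both
     * "strweight" and "NCLASS" are ASSUMED names.
     */
    extern short strweight[];   /* ASSUMED name of the weighting table */
    #define NCLASS 9            /* ASSUMED number of buffer classes */

    spinit()
    {
        register int i;

        for (i = 0; i < NCLASS; i++)
            if (strweight[i] < 128)
                strweight[i] = 128;

        /*
         * The PC7300 variant, counting messages instead of bytes,
         * would instead set every strweight[i] to 1.
         */
    }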

> 
> 	Geoff Kuenning
> 	{hplabs,ihnp4}!trwrb!desint!geoff

Bob Israel
apollo!rki

Disclaimer: The above ramblings in no way represent the opinions, intentions,
or legal negotiating positions of my employer, or of any past employer that
may have a proprietary interest in the subject matter.  


