BSD:sockets::SVID:(what?)

Doug McCallum dougm at ico.ISC.COM
Thu Jul 7 01:10:03 AEST 1988


In article <1093 at nusdhub.UUCP> rwhite at nusdhub.UUCP (Robert C. White Jr.) writes:
...
>When people talk about "sockets" they always seem talk about an "internal"
>issue, relating to the selection of one from a group of related endpoints
>(as relates to a driver module c.f. /dev/starlan becomes /dev/stx00-32).
>This is present in STREAMS through the "clone open" where the open of
>a master device results in the actual opening of a device segment or
>channel.  Is this a "socket" or something like it?

Using TLI/Streams for comparison, sockets provide basically the
same functionality as TLI plus the clone open.  Sockets happen to do
all the work in the kernel and not partially in the application
library like TLI.  Also, sockets identify protocols by number and
not by a device.

Sockets don't necessarily go through the file system.  TCP/IP, for
example, never appears in the file system; the call to "socket"
does the equivalent of the "t_open" call.  Where sockets have an
advantage over current V.3 TLI is on a large system where you
might want more than 256 virtual circuits supported: a true socket
implementation would work, but TLI would have to have multiple major
devices or move to a larger minor device number size.

The socket abstraction also supports binding to addresses,
establishing connections, setting options, and so on.  In essence,
sockets do the same types of things that TLI does.  The differences
are more philosophy than anything else.

>
>From what I have gleaned, "sockets" are congruent to STREAMS "clone
>opens" but if this is the case, what would you need to change in
>streams?

Sockets provide for multiple protocol support and everything else that
TLI does.  It is possible to implement sockets (or something real
close) in the Streams environment, but the TLI (actually the Transport
Provider Interface) does not have the flexibility to implement all of
the semantics unless the underlying implementation has hooks to work
around the TPI limitations.

Some of the areas where TLI/TPI (assuming standard AT&T
implementation) won't do what sockets do are:

	provide a way for a child process which inherits a TLI
	descriptor to find out the address it is connected to.

	provide a way for a process to find out which protocol options
	are set.  (I don't care what the defaults are, I want to know
	what they are now.)

	set options (like TCP buffer size) after a connection is
	established.
	
	send datagrams (unit data) larger than 4K.

These are mostly nits, but sockets have these capabilities and I've
hit them all while trying to port applications from a socket
environment to TLI.  The one case where TLI doesn't have anything
even close to the socket mechanism is TCP urgent data.  This almost
maps to expedited data, but expedited data is in-band.  There is no
way to signal the process that this data has appeared in the data
stream until it reaches the front of the Stream head.  Sockets
provide a signal for async notification and an ioctl to query.  It
makes it difficult to use urgent data for things like data stream
flushing if you can't detect the urgent data until you read it.

TLI also has some advantages over sockets.  In a full implementation,
a server can accept or reject connections based on any criteria it
wants.  The connection requests are handed to the server before the
connections are established and are normally only established if the
server accepts.  TLI has more flexibility in handling connections and
options.


