Unified I/O namespace: what's the point?

Richard A. O'Keefe ok at goanna.cs.rmit.OZ.AU
Mon Oct 8 14:04:12 AEST 1990


Submitted-by: ok at goanna.cs.rmit.OZ.AU (Richard A. O'Keefe)

In article <13220 at cs.utexas.edu>, brnstnd at kramden.acf.nyu.edu (Dan Bernstein) writes:
> Now we're looking at another possible addition to UNIX that hasn't been
> widely tested: a unified namespace for opening all I/O objects. But we
> already have a unified file descriptor abstraction for reading, writing,
> and manipulating those objects, as well as passing them between separate
> processes. Why do we need more?

If you have to use different functions for creating file descriptors
in the first place, then you haven't got a unified file descriptor
abstraction.  Suppose I want to write a "filter" program that will
merge two streams.  It would be nice if I could pass descriptors to
a program, but that's not how most UNIX shells work; I have to pass
strings.  Now, my filter knows what it *needs* (sequential reading
with nothing missing or out of order, but if the connection is lost
somehow it's happy to drop dead) so it could easily do
	fd = posix_open(argv[n], "read;sequential;reliable;soft");
and then it can use any file, device, or other abstraction which will
provide this interface.  My program *can't* know what's available.
If someone comes along with a special "open hyperspace shunt" function,
my program can't benefit from it.  If hyperspace shunts are in the
global name space and posix_open() understands their name syntax, my
program will work just fine.

Surely this is the point?  We want our programs to remain useful when
new things are added that our programs could meaningfully work with.

I can see the point in saying "shared memory segments aren't much like
transput; let's keep them out of the global name space", but sockets
and NFS files and such *are* "transput-like".  Anything which will
support at least sequential I/O should look like a file.  If that
means that some things in the global name space are "real UNIX files"
with full 1003.1 semantics but some things aren't, that's ok as long
as my programs can find out whether they've got what they need.

One point to bear in mind is that application programs written in
C, Fortran, Ada, &c are likely to map file name strings in those
languages fairly directly to strings in the POSIX name space; to
keep something that _could_ have supported C, or Fortran, or Ada
transput requests out of the file name space is to make such things
unavailable to portable programs.  If some network connections can
behave like sequential files (even if they don't support full 1003.1
semantics), then why keep them out of reach of portable programs?

(I have used a system where a global name space was faked by the RTL.
Trouble is, different languages did it differently, if at all...)

Even shared memory segments *could* support read, write, lseek...

-- 
Fear most of all to be in error.	-- Kierkegaard, quoting Socrates.


Volume-Number: Volume 21, Number 190
