Future at Berzerkeley

Barry Shein bzs at bu-cs.BU.EDU
Sat Mar 18 23:44:26 AEST 1989


From: davidsen at steinmetz.ge.com (William E. Davidsen Jr)
>  Is there a future for BSD? Ignoring the issue of when new releases
>will be available, I get the impression that virtually all of the
>hardware vendors have joined OSF or UNIX International. Since both of
>these systems will be SysV based, will there be a demand for BSD in
>three years? Five?

The unasked question here is "Is there a need for an experimental,
research version of Unix?"

Note that a lot of what is becoming SYSVR4 and OSFIX is the
incorporation of BSD features which, when introduced, were
experimental (along with other experimental features over the years
which have since been dropped).

Right now there are three major sources of experimental Unix
implementations: Bell Labs, BSD, and Mach. They are already
incorporating experimental ideas which the standards people are
probably a few years away from standardizing.

Some examples:

1. What is the Unix parallel processing standard interface? How
should things like spinlocks and barriers be incorporated? What
exactly is the set of primitives the kernel should provide to support
such development? How is the scheduler to be affected? (A hypothetical
sketch of what such primitives might look like appears after this
list.)

2. Is network-wide virtual memory a good idea? How would you implement
it? How should it be presented to the application programmer?

3. How can Unix be "personalized"? Right now Unix's structure is
strongly oriented towards centralized time-sharing. Although it's not
that hard to use in a workstation environment, a lot of its facilities
presume knowledgeable system administrators and operations personnel.
Assumptions exist at other levels too (taking an idea from the very
good Mark Weiser/L. Peter Deutsch article in a recent Unix Review):
why can't I single-step a debugger from an application into the kernel
(I can do the equivalent on other workstation OS's)? Should I be able
to?  How would it work?

4. Is the Unix file system, unenhanced, the right view for personal
workstations with a few GB of disk? I would claim that the MacOS file
system view has collapsed as an abstraction with the popularity of
300MB or larger disks, as cute as it was with a few files. Is there a
similar threshold for the Unix system? It's 10PM, do you know where
your sources are?

5. Fujitsu claims they will be producing 64Mbit memory chips in a
couple of years. Assuming today's machines are built from 1Mbit parts,
that's a 64x jump in density: a 16Mbyte workstation, with the same
chip count, becomes a 1GB workstation. Does anything need to evolve to
utilize this kind of change? Is it really sufficient to treat it as
"more of the same"?

6. What exactly should we be doing with networking hardware which
runs at memory bus speeds?
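
To make the first question concrete, here is a purely hypothetical
sketch of what a user-visible spinlock and barrier primitive set might
look like. Every name in it (spin_t, barrier_t, and so on) is invented
for illustration; it is written in a later dialect of C (the
<stdatomic.h> atomics) purely so the idea is stated precisely, and it
deliberately says nothing about what the kernel and scheduler would
have to provide underneath, which is exactly the open part of the
question.

/* Hypothetical sketch only: one possible shape for user-level
 * spinlock and barrier primitives.  Not an existing or proposed
 * interface. */
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_flag locked; } spin_t;

static void spin_init(spin_t *s)   { atomic_flag_clear(&s->locked); }
static void spin_lock(spin_t *s)   { while (atomic_flag_test_and_set(&s->locked)) ; /* busy wait */ }
static void spin_unlock(spin_t *s) { atomic_flag_clear(&s->locked); }

/* A counting barrier: the last arrival releases everyone. */
typedef struct {
    int        nthreads;    /* parties expected at the barrier  */
    atomic_int arrived;     /* how many have arrived this round */
    atomic_int generation;  /* bumped when a round completes    */
} barrier_t;

static void barrier_init(barrier_t *b, int n)
{
    b->nthreads = n;
    atomic_init(&b->arrived, 0);
    atomic_init(&b->generation, 0);
}

static void barrier_wait(barrier_t *b)
{
    int gen = atomic_load(&b->generation);
    if (atomic_fetch_add(&b->arrived, 1) + 1 == b->nthreads) {
        /* last arrival: reset the count and start the next round */
        atomic_store(&b->arrived, 0);
        atomic_fetch_add(&b->generation, 1);
    } else {
        while (atomic_load(&b->generation) == gen)
            ;  /* spin until the round completes */
    }
}

int main(void)
{
    spin_t    lock;
    barrier_t bar;

    spin_init(&lock);
    barrier_init(&bar, 1);   /* single-threaded demo: trivially passes */

    spin_lock(&lock);
    puts("in the critical section");
    spin_unlock(&lock);

    barrier_wait(&bar);
    puts("past the barrier");
    return 0;
}

Even a toy like this raises the scheduler question: a spinning thread
burns a processor, so whether such primitives belong in user space, in
the kernel, or both is part of what any standard would have to decide.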

Who would answer these questions?  Standards organizations? I hope
not!

Granted, a lot of folks ran BSD simply as a production Unix system,
particularly in universities. Now that production systems are
available as commodity items from manufacturers, they wonder why,
given that their needs are fulfilled, anyone would continue with the
BSD releases.

But that's all wrong: you were running research versions of the
system.  If you're satisfied with being about 5 years behind the state
of the art (4.2 came out in 1983, and that's about where most
manufacturers are today; 4.3 is 1986 software, so that's only three
years behind), then by all means do so.

In short, standardizing what we have today should not mean abandoning
the future.

	-Barry Shein


