Future at Berzerkeley

Barry Shein bzs at bu-cs.BU.EDU
Sat Mar 25 23:19:05 AEST 1989


>  That's what I was asking. Ten years ago there was some reason to go
>with the latest version of BSD because you needed virtual memory, fast
>file system, etc. This stuff will now be in SysV, along with NFS, RFS,
>streams, and a bunch of realtime things which are supposed to come in
>V.4.1 as I recall. Will there still be a need for BSD or mach among the
>people who don't do kernel research? If the commercial vendors go with
>SysV, as it seems they will, will universities find it easier to get
>fund$ for research on what vendors are selling and the government is
>buying? I'm looking for good reasons other than kernel research, and I
>don't think you need a totally new kernel to do that.
>-- 
>	bill davidsen		(wedu at crd.GE.COM)

The point is that there's more to the experimental unix versions than
kernel research. For example, it's likely that BSD will be the first
major variant with an ISO stack and you might be interested in using
that as a platform. Mach already has a form of lightweight processes,
and if you're buying a parallel machine it's the closest thing you'll
find to a widely used interface for writing truly parallel programs.
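
To be concrete about what that interface buys you, here's a minimal
sketch assuming the Mach C-threads calls (cthread_fork/cthread_join);
the exact names and types here are from memory and may differ between
releases:

	/* Sketch only: workers as C-threads in one shared address
	 * space, so there is no per-worker fork/shared-memory setup
	 * to pay for. */
	#include <cthreads.h>

	#define NWORKERS 4

	any_t
	worker(any_t arg)
	{
		long id = (long) arg;
		/* ... compute slice `id' of the problem ... */
		return arg;
	}

	int
	main(void)
	{
		cthread_t t[NWORKERS];
		int i;

		for (i = 0; i < NWORKERS; i++)
			t[i] = cthread_fork(worker, (any_t) (long) i);
		for (i = 0; i < NWORKERS; i++)
			cthread_join(t[i]);
		return 0;
	}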

These are not just little goodies, these open whole vistas of
opportunity to those who need these things. Parallelism looks like the
best shot we have at delivering thousands of MIPS at reasonable costs
in the next few years.

In fact, I believe that latter example should be enough to answer the
question. How exactly are you going to exploit the parallel hardware
you're going to be screaming for soon (:-) with SysV or OSFix?

Sure, you can limit yourself to coarse-grained parallelism and get a
lot of benefit (e.g. piped shell commands run in parallel, the various
compiles in a make can run in parallel just by firing up more than one
cc command, and time-sharing obviously benefits without doing anything
special.)
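
That coarse-grained case really does need nothing special from the
kernel. A rough sketch (the file names are made up) of what a parallel
make amounts to, plain fork/exec/wait with one cc per source file:

	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int
	main(void)
	{
		static char *files[] = { "parse.c", "lex.c", "eval.c" };
		int nfiles = sizeof(files) / sizeof(files[0]);
		int i;

		for (i = 0; i < nfiles; i++) {
			pid_t pid = fork();
			if (pid == 0) {
				/* child: run one compile */
				execlp("cc", "cc", "-c", files[i], (char *) 0);
				perror("cc");
				_exit(1);
			} else if (pid < 0) {
				perror("fork");
				return 1;
			}
		}
		/* parent: wait for every compile to finish */
		for (i = 0; i < nfiles; i++)
			wait((int *) 0);
		return 0;
	}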

But what about data-driven programs where you need to fire up CPUs as
you run, probably dynamically calculating the optimal number of CPUs
to use for the next calculation? You can use vanilla Unix fork(), but
it's seriously lacking, and everyone I know who's thinking about the
problem has proposed at least some major change to fork semantics.
Otherwise the advantage you might have seen from parallelism goes down
the fork creation time rat-hole (actually, fork+shmem+signal setup
etc.)
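
Here's a rough sketch of what that rat-hole looks like with vanilla
SysV-style primitives: a shared segment, signal plumbing, and a full
fork() per worker before a single useful cycle of the calculation runs
(the sizes and worker count are made up):

	#include <stdio.h>
	#include <signal.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#define NCPUS  4	/* would really be computed on the fly */
	#define NITEMS 4096

	int
	main(void)
	{
		/* 1. a shared segment every worker will map */
		int shmid = shmget(IPC_PRIVATE, NITEMS * sizeof(double),
		    IPC_CREAT | 0600);
		double *data = (double *) shmat(shmid, (char *) 0, 0);
		int i;

		/* 2. signal plumbing for parent/worker coordination */
		signal(SIGUSR1, SIG_IGN);

		/* 3. one heavyweight process per CPU -- the setup cost
		 *    that swallows fine-grained parallel speedup */
		for (i = 0; i < NCPUS; i++) {
			if (fork() == 0) {
				/* child: compute its slice of data[] ... */
				_exit(0);
			}
		}
		for (i = 0; i < NCPUS; i++)
			wait((int *) 0);

		shmdt((char *) data);
		shmctl(shmid, IPC_RMID, (struct shmid_ds *) 0);
		return 0;
	}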

Where do you think this sort of thing will come from? It's really
quite fundamental, not something a vendor is likely to whip together
satisfactorily. And when it shows up and you need it you'll start
considering running one of these research versions.

Again, if you don't need it obviously you won't perceive the value.  I
know plenty of people who don't need computers also. I think research
in Unix futures has become MUCH more critical now that almost every
vendor is relying on it to plot their fate.

I'm just having trouble seeing your point. Do you think operating
systems are "finished" with the release of SysV/OSFix? Or do you just
expect a smaller percentage of folks to be running research versions?

If it's the latter, that's because of all the folks who are now running
Unix but used to be running VMS/DOS/PRIMOS/AOS/MVS/VM/CPM. Of course
the herd heads straight for the recent past; they never run research
versions, nor should they in most cases.

What was that Dennis Ritchie quote? "The number of Unix installations
has grown to 12 with more expected in the near future". Let's keep
some perspective here!

	-Barry Shein, Software Tool & Die


