Why not Multics? (was Re: BSD tty security, part 3: How to Fix It)

P E Smee exspes at gdr.bath.ac.uk
Sat May 4 03:05:18 AEST 1991


Gosh, a 'Multics' response from someone I don't recognize offhand. :-)
I agree with most of your points, but there are a couple I'd like to try
to 'clarify' -- or maybe put another viewpoint on.

In article <00673160066 at elgamy.RAIDERNET.COM> elg at elgamy.RAIDERNET.COM (Eric Lee Green) writes:
>Thus Multics was
>intimately tied to Honeywell's hardware, to the point where many portions
>of the system would munge on pieces of 80-bit pointers or, for that matter,
>were written in ALM (Multics assembly language, a truly horrendous beast...

72-bit pointers, actually. :-)  By the end, very little of the software
was in ALM; the vast majority was in portable PL/1.  Opinions varied on how
easily it would have ported to a different word length, but my view was
that making it work shouldn't be too bad, though making it efficient might be.
For example, any 36-bit ints would have been declared either fixed bin(35)
or fixed bin unsigned(36).  On a 32-bit machine, you just need a
compiler that will perform whatever kludges are necessary to make you a
36-bit integer field using (presumably) two 32-bit words.  Then, in
parallel with checking that it works at all, you could have a team
running through to convert such declarations as didn't REALLY need 36
bits.  (Most ints were declared to take 18 bits (fb(17) or fbu(18)),
so the waste in converting to a 32-bit machine would have been less
dramatic.)  The instruction set was nothing to write home about
(something only a mother could love), for all the reasons you
mentioned.
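
To make 'whatever kludges are necessary' concrete, here is roughly what a
two-word emulation of a fixed bin(35) field might look like if you spelled
it out in C.  This is purely illustrative -- the names are invented, it is
not anything a real Multics compiler did, and it cheats by using a 64-bit
type at the interface just to keep the example short:

    #include <stdint.h>

    /* Illustrative only: a 36-bit signed field ("fixed bin(35)")
     * carried in two 32-bit words on a 32-bit machine.            */
    typedef struct {
        uint32_t lo;                /* bits 0-31 of the value       */
        uint32_t hi;                /* bits 32-35, right-justified  */
    } fb35;

    /* Split a native integer into the two-word form. */
    static fb35 fb35_store(int64_t v)
    {
        fb35 f;
        f.lo = (uint32_t)((uint64_t)v & 0xFFFFFFFFu);
        f.hi = (uint32_t)(((uint64_t)v >> 32) & 0xFu);
        return f;
    }

    /* Reassemble it, sign-extending from bit 35. */
    static int64_t fb35_load(fb35 f)
    {
        uint64_t u = ((uint64_t)f.hi << 32) | f.lo;
        if (u & ((uint64_t)1 << 35))          /* negative?        */
            u |= ~(((uint64_t)1 << 36) - 1);  /* extend the sign  */
        return (int64_t)u;
    }

The packing and sign-extension on every reference is exactly the sort of
cost that makes 'efficient' harder than 'working', and why you'd want that
team converting the declarations that don't really need 36 bits.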

(The above, by the way, is a feature of PL/1 which I wish C had, as I
think it aids portability no end.  You declare a variable in a form
which actually tells how many bits or digits of precision you need, and
it's up to the compiler to find you a piece of the machine big enough
to hold it.  None of the worries about 'how many bits in an int', which
take up so much space in this group.  You say 'I need 19 bits', and it's
up to the compiler to either find it, or tell you that's too big for
the implementation.)
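
A minimal sketch of how that intent might be spelled in C, assuming a
compiler with <stdint.h> and, for the last part, C23's bit-precise
integers; the typedef names are invented for the example:

    #include <stdint.h>

    /* PL/1: dcl count fixed bin(17);   17 value bits plus a sign.
     * C has no "at least 18 bits" type, so you round up to the
     * nearest width someone thought to standardise.               */
    typedef int_least32_t count_t;

    /* PL/1: dcl word fixed bin(35);    35 value bits plus a sign.  */
    typedef int_least64_t word_t;

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 202311L
    /* C23's _BitInt lets you state the width directly, PL/1-style:
     * the compiler either provides it or refuses to compile.       */
    typedef _BitInt(19) nineteen_bits;  /* 18 value bits plus sign  */
    #endif

Only the last form really captures 'I need 19 bits, find it or tell me you
can't'; the least-width types just let you round up portably.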

>When Honeywell did the
>crash-and-burn with their Multics marketing, there wasn't anybody around to
>take up the slack. No Sun Microsystems equivalent to turn Multics into
>something ubiquitous in its market segment (like Sun did for Unix and
>workstations).

There were, however, people who expressed interest in trying...

>There did exist some other problems, of course. For one thing, some aspects
>of the Multics design were inherently less efficient than "normal" design
>practices (things such as the dynamic linking, where the first run of a
>program would produce a whole lot of traps to pull in routines from other
>segments). The OS was big, for another thing (one reason why it was so late
>to be released), and somewhat resource-hungry for its day (though "X" puts
>it to shame any time of the day). 

There were also compensations.  It was a point of pride for the dev
team that they managed to get so much efficiency out of such crude
hardware.  There was even a GCOS emulator, which would allow you to run
GCOS programs under Multics.  Many GCOS applications ran faster in the
emulator than under native GCOS, primarily because the 'segment'
mechanism DID allow simulations of GCOS I/O to run faster than real
GCOS I/O.

The mechanism also had a REALLY nice feature (which could bite): it
amounted to what are now called 'shared libraries', but ones which
could be personalised in whole or in part.  If you didn't like the way
printf made things look, for example, you could write your own version
of printf (making sure its args and return values worked the same), put
it somewhere in your PATH, and every time ANY program (your own or part
of the system) made a printf call in your process, it would use your
version rather than the standard one.
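
The nearest a shared-library Unix comes to this is symbol interposition;
a rough sketch (assuming an ELF system with LD_PRELOAD, and not pretending
it works the way the Multics search rules did -- it only catches
dynamically linked calls, for one thing):

    /* my_printf.c -- build it as a shared object and preload it:
     *
     *     cc -shared -fPIC -o my_printf.so my_printf.c
     *     LD_PRELOAD=./my_printf.so some_program
     *
     * Every dynamically linked printf call in that process now
     * comes through this version instead of the library's.         */
    #include <stdarg.h>
    #include <stdio.h>

    int printf(const char *fmt, ...)
    {
        va_list ap;
        int n;

        fputs("[mine] ", stdout);   /* the personal touch            */
        va_start(ap, fmt);
        n = vprintf(fmt, ap);       /* same args, same return value  */
        va_end(ap);
        return n;
    }

On Multics no special environment variable was needed: the per-process
search rules did this for every dynamic link, which is also what made the
'could bite' part possible.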

Conversely, if you found a bug in the date-printing routine (as a
developer) you could install a fixed version of the date-printing
routine into the system, and it would automatically, no muss, 'fix' the
problem in every program which called that routine.  (This capability
goes a long way to explaining why there were a LOT of system
subroutines supplied, which you were expected to use AS DOCUMENTED for
virtually any primitive task.  The goal was to minimize the number of
things you had to touch to fix any given problem.  Compare that with
what would be required to get a new version of, say, atoi() into every
program on a Unix system that uses it.)

We were REALLY happy when HIS announced it was finally going to design a
set of Multics hardware from scratch, and equally down when the project
was cancelled.

>Multics-inspired derivatives live on here and there. A friend describes
>Primos as "what Multics would have looked like if designed in the USSR".
>The folks at Stratus (fault-tolerant computing) made their system look a
>lot like Multics. I seem to recall that Apollo's Domain OS stole a few
>things here and there from Multics, also. 

All three of the named companies were, at least in their early stages,
staffed by tech people hired away from HIS.  (Most of whom left because
they liked Multics, but perceived earlier than the rest of us that HIS
would never be persuaded to like it.)  Not surprising you can find so
many Multicious concepts in them. :-)

This is all a bit far from unix.wizardry, but the issues are the sorts
of things that I think the Unix developers should at least be thinking
about.  Picking up a few more Multics concepts (the correct few, of
course) could really add to the power of the Unix beast.

-- 
Paul Smee, Computing Service, University of Bristol, Bristol BS8 1UD, UK
 P.Smee at bristol.ac.uk - ..!uunet!ukc!bsmail!p.smee - Tel +44 272 303132


