Signals after exec

Ziya Aral aral at pinocchio.Encore.COM
Thu Mar 2 07:56:45 AEST 1989


In article <963 at ncr-sd.SanDiego.NCR.COM>, greg at ncr-sd.SanDiego.NCR.COM (Greg Noel) writes:

[After referencing our USENIX paper on "variable-weight" processes]

> ... If anybody knows how this kind of semantics was worked out (my 
> copy of the Proceedings isn't here, and my fuzzy memory
> of the paper itself says that they didn't discuss it), I'd be interested
> in hearing about it.

In article <963 at ncr-sd.SanDiego.NCR.COM>, greg at ncr-sd.SanDiego.NCR.COM (Greg Noel) writes:

[After some much-appreciated praise of our paper, Greg again asks:]

> Encore is on the net, and I think Brown is, as well.  Do any of the
> authors (or others) wish to discuss the subject further?

Ok, Greg, you asked for it...

Actually, the work on variable-weight processes was motivated
by Parasight, a parallel debugging/programming environment for
shared-memory multiprocessors under MACH and BSD. Parasight runs
parallel applications and various development tools in the same
address space. Since this introduces a large degree of heterogeneity
among the threads of control sharing one address space, it also violates
the assumption that multiple light-weight threads of control will be
essentially homogeneous (i.e. that they will share their entire
environment).  That is really the underlying assumption of
"light-weight" threads.

Our experience indicates that with real parallel programs, the
assumption of an entirely shared environment with multiple identical
threads of control:

a) Is not always true.
b) Is almost never entirely true.

There are many cases of private file descriptors or private signal 
handlers being quite useful to otherwise similar threads of control.

At a higher level, the idea of "light-weight" threads also complicates
the semantics of UNIX. In place of a single fundamental unit of execution,
such as a process, we now have to deal with two, one "light-weight" and
the other "heavy-weight". A semantic "gap" seems to inevitably appear
between them. In MACH for example, a group of threads shares a common
signal handler. So which one takes the signal? The answer is that the
first available takes it. This is not always appropriate behavior
and the answer "don't use signals" is not very helpful.
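
Just to make the problem concrete, here is a small C sketch of the
shared-handler situation. It uses POSIX threads rather than MACH
threads (signal disposition is per-process in both), so treat it as
an illustration of the ambiguity, not as MACH or Parasight code:

    /* One handler is shared by every thread in the process; whichever
     * thread is able to take the signal is the one that handles it. */
    #include <pthread.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static void on_usr1(int sig)
    {
        static const char msg[] = "some thread took SIGUSR1\n";
        (void) sig;
        write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe */
    }

    static void *worker(void *arg)
    {
        (void) arg;
        for (;;)
            pause();        /* every worker is a candidate for delivery */
        return NULL;        /* not reached */
    }

    int main(void)
    {
        struct sigaction sa;
        pthread_t t1, t2;

        memset(&sa, 0, sizeof sa);
        sigemptyset(&sa.sa_mask);
        sa.sa_handler = on_usr1;
        sigaction(SIGUSR1, &sa, NULL);  /* one handler for the whole process */

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        sleep(1);

        kill(getpid(), SIGUSR1);        /* t1, t2, or main: first available */
        sleep(1);
        return 0;
    }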

We took a slightly different approach. Instead of introducing a light-
weight process or "thread", we asked what gives UNIX processes their
"weight". It turns out that very little does so inherently. 

UNIX carries its history around with it. When a process is created,
it defines its system context, or resource set, in-line. It assumes
that this resource set will be private to that process, in keeping
with its original time-sharing perspective. It then proceeds to
initialize this entire environment. This is the only thing that
makes a process "heavy": the need to initialize and then drag around
a private universe, a separate address space being by far the most
important component.

But if we separate a resource from its process and make it independently
usable by many processes, then we only have to initialize it once.
The idea of sharable resources is really identical to things like SysV
shared memory regions, supported at the process level. The result is that
processes become variably "weighted", their cost depending on the number of
private resources they initialize.
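
To make the analogy concrete, this is roughly what SysV shared memory
already gives you for one kind of resource: the segment exists on its
own, and processes simply link to it (error checks left out to keep
the sketch short):

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* The segment is a resource in its own right: created once,
         * then merely linked into whichever processes use it. */
        int   id  = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        char *mem = shmat(id, NULL, 0);

        if (fork() == 0) {              /* child inherits the attachment */
            sprintf(mem, "written by pid %d", (int) getpid());
            _exit(0);
        }
        wait(NULL);
        printf("parent reads: %s\n", mem);

        shmdt(mem);
        shmctl(id, IPC_RMID, NULL);     /* no links left: segment goes away */
        return 0;
    }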

In practice, the transformation is very easy. A level of indirection
is added to the PROC and USER data structures to reference resources
externally. Resources exist independently and have a link count 
associated with them. When the link count goes to zero, the resource
automagically disappears. Linking to resources is through inheritance.
A single new system call, resctl(), specifies whether the children of
a process will inherit links to its existing resources (or a subset of them),
or whether it will create new ones. The default is existing process
behavior.
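
Just to give the flavor of how this might be driven from user code
(the flag names and the exact signature below are made up for the
example, not taken from the real resctl() interface):

    #include <unistd.h>

    /* Hypothetical flags: which of the parent's resources the children
     * will link to rather than re-create. */
    #define RES_VM      0x01    /* share the address space      */
    #define RES_FILES   0x02    /* share the file descriptors   */
    #define RES_SIGNALS 0x04    /* share the signal handlers    */

    extern int resctl(int inherit_mask);    /* hypothetical declaration */

    int spawn_worker(void (*work)(void))
    {
        /* Children will link to the parent's address space and open
         * files but keep private signal handling: a "variable-weight"
         * process somewhere between a thread and a full UNIX process. */
        resctl(RES_VM | RES_FILES);

        int pid = fork();
        if (pid == 0) {
            work();
            _exit(0);
        }
        return pid;
    }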

The actual changes to the kernel are almost trivial, especially in
comparison to the very real work associated with making individual
resources multi-threaded and sharable. It is more a conceptual change
than a code change.
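
For the flavor of the indirection itself (again a sketch, not the
nUNIX source): resources live on their own, carry a link count, and a
proc simply points at the ones it uses.

    struct resource {
        int   links;    /* number of processes linked to this resource */
        void *data;     /* the resource proper: address space, fd table, ... */
    };

    struct proc_resources {
        struct resource *vm;        /* address space    */
        struct resource *files;     /* open-file table  */
        struct resource *sigs;      /* signal handlers  */
    };

    extern void free_resource(struct resource *r);  /* hypothetical reclaim */

    /* Drop one link; at zero the resource "automagically" disappears. */
    static void resource_unlink(struct resource *r)
    {
        if (--r->links == 0)
            free_resource(r);
    }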

The result is "heavy-weight" processes which have the same startup costs
and are identical to existing processes, "light-weight" processes which
have the same startup costs as MACH threads (somewhat faster actually,
we will publish some interesting numbers soon...), and every combination
in between with the added benefit of semantic uniformity across the whole
range.

Similar benefits should accrue to context switch times and so on. For
example, if 2 processes share the same memory resource, it should
not be necessary to reload the MMU, etc. when they are switched, provided
that the scheduler knows about it or that second-level scheduling is
available. Our nUNIX kernel does not yet do this optimization.

As to the amount of state an execution unit absolutely has to carry
around with it, that is largely fixed and will not change whether we call
it a process, a thread, or a banana. So why invent new paradigms?

To be honest, the idea that this whole thing might be useful to
uniprocessors and a conventional, co-operating processes model did not
occur to us till much later. In fact, it is just the inverse of
the parallel case; while conventional processes are largely private, they
can often benefit from a degree of sharing.

PHEW!!!

I will be happy to fill in any details if anyone is left awake.
This should teach Greg NEVER to ask an author to "elaborate" on his work :-)



Ziya Aral		aral at multimax.encore.com

My opinions are my own.


