another argument against shared libraries

jbray at bbn-unix
Sat Aug 13 03:40:43 AEST 1983


From:  James Bray <jbray at bbn-unix>

Most certainly, the way to do it is to add multiple-segment referencing, and
it is just as certainly non-trivial. Processes should be able to reference some
reasonably large number of symbolically-named objects, and the kernel should
take care of all the details of finding them, loading them if necessary, and
controlling the type (read-write-execute) of access. If one doesn't feel up to
hacking dynamic linking into the kernel, the references could instead be
interpreted and established at load time. That also makes the mechanism
optional, so that tasks which are paranoid about referencing potentially
changeable run-time libraries could use old-style libraries instead. The
kernel would then simply have to verify at exec time that every object which
may be referenced by a process is in fact present, and then establish the
correct bindings in the process's address space. The process would, for
example, carry in its header information telling the kernel "I expect object
<libc_version_12345> to be available, I intend to execute it, and I expect it
to begin at 0400000 in my address space". It is not worth trying to do this on
a machine that doesn't have reasonably sophisticated memory management.
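
  To make this concrete, here is a rough C sketch of what one entry in such
a header and the exec-time check might look like. Every name in it (struct
segref, lookup_object, map_object) is invented for illustration; this is the
idea, not any existing format:

    /*
     * One entry in the proposed header: the process names each object
     * it expects, the access it intends, and the address at which it
     * expects the object to appear. All names are hypothetical.
     */
    #define SEG_READ   04
    #define SEG_WRITE  02
    #define SEG_EXEC   01

    struct segref {
        char          sr_name[32];  /* e.g. "libc_version_12345" */
        unsigned int  sr_access;    /* SEG_READ|SEG_WRITE|SEG_EXEC */
        unsigned long sr_addr;      /* expected base, e.g. 0400000 */
    };

    /* Stubs standing in for the kernel services: find a named object,
     * and map it into the new image with the requested access. */
    extern int lookup_object(char *name);
    extern int map_object(char *name, unsigned long addr, unsigned int acc);

    /* At exec time, walk the table: refuse to exec if any declared
     * object is missing, otherwise establish each binding. */
    int
    bind_segrefs(struct segref *tab, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            if (lookup_object(tab[i].sr_name) < 0)
                return -1;      /* declared object not present */
            if (map_object(tab[i].sr_name, tab[i].sr_addr,
                tab[i].sr_access) < 0)
                return -1;
        }
        return 0;
    }
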
  This sort of scheme also gives one sharable memory, and one can probably
also think of other nice things to do with it. Full-scale dynamic linking is
the way to go, but this is the last stop along the road before you get there.
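
  For instance, the sharable memory falls out directly: two processes that
both declare the same named object with read-write access wind up mapped to a
single copy. A hypothetical entry, reusing the struct from the sketch above
(the name and address are again made up):

    /* Both cooperating processes carry this entry; the kernel maps
     * the one named segment into each address space read-write. */
    struct segref shared = {
        "scoreboard_v1",         /* agreed-upon symbolic name */
        SEG_READ | SEG_WRITE,    /* both sides read and write */
        0600000                  /* expected base in each image */
    };
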

  Granted, the implementation would take some doing...

  By the way, I will ask again: has anyone out there actually seen the
way System V does what it calls shared memory? I have yet to get my copy,
and am dying to know what it looks like. Did they in fact completely
redesign major chunks of the kernel, or is it some bizarre and ugly hack
involving disk files or worse...

--Jim Bray


