Performance of virtual memory systems

Ed Lee lee at fortune.UUCP
Wed Dec 14 04:02:46 AEST 1983


Much of the negative feeling about virtual memory comes from
comparing "smart" implementations of overlay memory against
"dumb" implementations of virtual memory.  "Smart" or "dumb"
refers to how much "knowledge" of program behavior and machine
architecture the system has.  For overlay memory, the compiler
supplies the "knowledge" by understanding the language
structures, the program's algorithms and the machine structure.
It is only fair to consider the case where equivalent
"knowledge" is applied in a virtual memory system.  Presumably,
a dynamic paging policy would be constructed by the operating
system from the past runtime history of paging.  Virtual memory
comes in handy for constructing and adapting the paging policy
as timing, input and stored data change.  How would you predict
these changes in an overlay memory system without recompiling
it once in a while?  ( If you have binary only, that's tough. )
Furthermore, given the opportunity to write programs with a
large memory space, most programmers do.
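
To make the idea concrete, here is a rough sketch of the sort of
adaptive policy I have in mind: each page keeps a short history
of its reference bit, and the pager evicts the page whose history
shows the least recent use.  This is purely illustrative C, not
any kernel's actual code, and every name in it ( struct page,
NPAGES, age_pages, choose_victim ) is made up for the example.

#include <stdio.h>

#define NPAGES 8

struct page {
	int referenced;		/* set when the page is touched   */
	unsigned char age;	/* history of recent references   */
};

static struct page core[NPAGES];

/* Periodic scan: fold each reference bit into the age history. */
static void
age_pages(void)
{
	int i;

	for (i = 0; i < NPAGES; i++) {
		core[i].age = (core[i].age >> 1)
		    | (core[i].referenced ? 0x80 : 0);
		core[i].referenced = 0;
	}
}

/* Victim selection: the smallest age means least recently used. */
static int
choose_victim(void)
{
	int i, victim = 0;

	for (i = 1; i < NPAGES; i++)
		if (core[i].age < core[victim].age)
			victim = i;
	return victim;
}

int
main(void)
{
	/* Touch a few pages, scan twice, and see who gets evicted. */
	core[2].referenced = 1;
	core[5].referenced = 1;
	age_pages();
	core[2].referenced = 1;
	age_pages();

	printf("victim: page %d\n", choose_victim());
	return 0;
}

The point is that nothing here needs to be recompiled into the
program; the history is gathered at run time and the policy
adapts as the reference pattern changes.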

4.2 is not a very "smart" implementation of virtual memory, and
it comes with many large programs ( csh, vi and the kernel
itself ) that do many things, such as sockets, job control and
autoconf ( some are useful, some are not ).  That is a problem
of abusing the implementation, not a problem of the
implementation itself.  If you think that 4.2 requires too much
memory, what about System V's hashing of I-nodes?  Doesn't that
also spend millions of transistors' worth of memory?
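
For what it is worth, the following is the general shape of what
I mean by hashing of I-nodes: an in-core table hashed by device
and I-number, trading extra memory for faster lookups.  This is
a guess at the idea, not actual System V source, and all of the
names ( ihashtab, iget and so on ) are invented for the example.

#include <stdio.h>
#include <stdlib.h>

#define NHASH 64
#define ihash(dev, ino)	(((dev) ^ (ino)) & (NHASH - 1))

struct inode {
	int i_dev;
	int i_number;
	struct inode *i_next;	/* hash-chain link */
};

static struct inode *ihashtab[NHASH];

/* Look up (dev, ino) in the cache; allocate and chain it on a miss. */
static struct inode *
iget(int dev, int ino)
{
	struct inode *ip;

	for (ip = ihashtab[ihash(dev, ino)]; ip != NULL; ip = ip->i_next)
		if (ip->i_dev == dev && ip->i_number == ino)
			return ip;	/* cache hit */

	if ((ip = malloc(sizeof(*ip))) == NULL)
		return NULL;
	ip->i_dev = dev;
	ip->i_number = ino;
	ip->i_next = ihashtab[ihash(dev, ino)];
	ihashtab[ihash(dev, ino)] = ip;
	return ip;
}

int
main(void)
{
	struct inode *a = iget(1, 42);
	struct inode *b = iget(1, 42);

	printf("same in-core inode: %s\n", a == b ? "yes" : "no");
	return 0;
}

The table is exactly the sort of memory-for-speed trade that 4.2
gets criticized for.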

	Ed Lee
	{amd70, ihnp4, harpo}!fortune!lee


