increasing RAM memory available to a large process--summary

Bruce Samuelson utacfd!utafll!bruce at central.sun.com
Fri Aug 25 01:51:32 AEST 1989


In Sunspots 8.89, I asked whether it is possible to "make more RAM memory
available to a memory-intensive process than Unix is willing to dole out,
in order to reduce its virtual memory paging."  I received one direct
response from Jordan Hayes (uunet!Morgan.COM!jordan).  We also discussed
reducing disk fragmentation.  Regarding allocating more physical memory to
a process, Jordan said it is not possible:

That's because the Unix paging strategy is to start paging (i.e., using
virtual memory) when you've used up something like 2/3 of available
memory, so that the degradation is smooth.  There's nothing you can do
except buy more memory or get a faster disk (so that paging isn't such a
big performance penalty) ... further, real memory gets fragmented, so
finding large chunks to give programs gets harder and harder ... rebooting
will help you for a while (if the first thing you do is start your
application), but in general you'll see about 2Mb out of 4Mb (after the
kernel) and about 4.5 out of 8Mb.

I exchanged further messages with him, with me wishing that Unix offered
some ways for users to tune its memory allocation strategy to the details
of their workload, and him saying that the strategy employed by SunOS is
the result of years of work and that "messing with it almost uniformly
brings disastrous results."  He did offer the following possible
solution:

My request: I'd like to lock a couple megabytes of my large process into
memory so it can't get paged out.

His solution: You can do this in SunOS (and probably Ultrix) -- try making
your data area a shared piece of memory -- then it has to stay in main
memory since multiple processes are now potentially depending on it, and
would croak if it got swapped out.

I complained that Unix was tuned to a timesharing machine with many users,
while on workstations it should be tuned to a single user environment.  He
countered: "4.3BSD, which runs mostly on Vaxes in a timesharing
environment, has a vastly different virtual memory subsystem than SunOS
4.0 does, which is used mainly in the manner that you describe."

We also discussed fragmentation in the BSD fast file system.  He cited
three references in considerable detail (one being the Leffler / McKusick /
Karels / Quarterman book "The Design and Implementation of the 4.3BSD UNIX
Operating System")
explaining how the file system minimizes fragmentation.  One constraint is
that if you try to cluster a file or related files on a cylinder group,
you could end up displacing files from this group to other non-local
groups, degrading their localization.

Our correspondence ended with me wistfully suggesting that Unix and
SunOS could still do better on memory allocation and disk fragmentation:

Memory allocation: 1) The user knows what programs he will run and should
be allowed more control over memory and cpu resource allocation.  I cited
realtime systems, and to some extent OS/2, as operating systems where this
can be done.  2) The cpu spends most of its time in an idle loop.  Why not
use this time constructively to reduce memory fragmentation?

Disk fragmentation: I had no suggestions for improving this for disks that
are busy all the time, but for ones that have a lot of idle time, there
must be some way to use it to improve fragmentation statistics.

Roger Boyle recently posted a message to Sun-Managers which also noted
that "according to 'ps', not all physical memory seemed to be in use".
However, "A call to Sun elicited the response that under V4 of the OS
(which we run), 'ps' and 'vmdisp' were likely to provide very inaccurate
figures (while under V3 they did not), and that this was the cause of the
problem."
