Swap size for large memory machines

Eric Bergan eric at pyramid.pyramid.com
Thu Jul 27 16:01:40 AEST 1989


In article <78696 at pyramid.pyramid.com> csg at pyramid.pyramid.com (Carl S. Gutekunst) writes:
>In article <61633 at uunet.UU.NET> rick at uunet.UU.NET (Rick Adams) writes:
>>The only rule that makes any sense at all is main memory + swap
>>space must be greater than the sum of the memory use of all the processes
>>on your system at peak load. (This is obvious if you think about it.)
>
>Almost. Swap space and main memory do *not* add together on a demand-paged
>virtual memory system; swap space alone must be greater than the sum of all
>processes running on the system.

	Since the original posting asked about an OSx 5.0 system, this is
not correct. In OSx 5.0, main memory and swap space are concatenated into
a single pool of storage for processes, so it is possible to run a machine
with less swap space than memory. Even with 256Mbytes of memory (or
1Gbyte after the 4Meg chips come out), you could still stick with a single
b partition (~30Mbytes) if you wanted to.
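
	To make the arithmetic concrete, here is a rough sketch in C of
the sizing rule under the OSx 5.0 scheme - since RAM and swap add
together, only their sum has to cover peak process usage. All of the
numbers below are made up:

	/*
	 * Rough sizing check, assuming OSx 5.0 semantics where RAM and
	 * swap are concatenated into one pool of process storage.
	 * Figures are illustrative, in megabytes.
	 */
	#include <stdio.h>

	int
	main()
	{
		long ram_mb  = 256;	/* physical memory		  */
		long swap_mb = 30;	/* a single b partition		  */
		long peak_mb = 200;	/* sum of process usage at peak	  */

		long pool_mb = ram_mb + swap_mb;  /* OSx 5.0: one pool	  */

		if (peak_mb > pool_mb)
			printf("need %ld more Mbytes of swap (or RAM)\n",
			    peak_mb - pool_mb);
		else
			printf("%ld Mbytes of headroom at peak\n",
			    pool_mb - peak_mb);
		return 0;
	}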

>These days in commercial installations, the goal seems to be to have enough 
>RAM so that you don't page, and/or so response time is minimized. A trans-
>action processing system with 100 users that does one database query per user
>every 15 seconds would probably want at least 100MB of swap space, but might
>see no difference between 16MB RAM and 32MB RAM.
>
>My rule of thumb: 1MB of swap space per login user. 1MB of RAM per *active*
>user, plus 4MB for the kernel. Remember that users with windowing terminals
>(AT&T 630 or X Server) count as several users. And make adjustments for mongo
>sized applications that eat a horrific amount of memory, like FrameMaker.

	For database applications, I would use a slightly different
formula for the RAM computation. Start with 4M for the kernel (remember
the good old days of 64K machines? Oh, well). Add a fixed amount of
memory equal to about twice the size of the frequently used database
indices. Add a per-user memory allowance that depends on the DBMS being
used and on the front-end application. (This can be anything from as
little as ~40K per user to as much as 2-3Mbytes.) Stir gently...
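
	In code, that recipe might look something like the sketch below.
The index size and per-user figures are placeholders; you would want to
substitute numbers measured on your own DBMS and front end:

	/*
	 * Sketch of the database RAM formula above. The numbers are
	 * illustrative, not measurements from any particular DBMS.
	 */
	#include <stdio.h>

	int
	main()
	{
		long kernel_kb   = 4 * 1024;	/* 4M for the kernel	     */
		long index_kb    = 8 * 1024;	/* frequently used indices   */
		long per_user_kb = 300;		/* DBMS + front end, per user*/
		long users       = 100;

		long total_kb = kernel_kb + 2 * index_kb + per_user_kb * users;

		printf("suggested RAM: about %ld Mbytes\n",
		    (total_kb + 1023) / 1024);
		return 0;
	}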

	The fixed amount can be varied - if the indices are too large to
all fit in memory, calculate the size of the index minus the bottom
level (assuming a B-tree or some variant), and use that instead. Or, if
disk I/O is a real bottleneck and the frequently accessed data is small,
add more memory and try to get the data itself into memory.
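
	One rough way to size that reduced figure, assuming a B-tree with
a known fanout (both numbers below are just placeholders):

	/*
	 * Sketch of the "index minus the bottom level" estimate for a
	 * B-tree. Page size and fanout are assumptions; plug in the
	 * values for your own DBMS.
	 */
	#include <stdio.h>

	int
	main()
	{
		long leaf_pages = 20000;	/* bottom level of the index */
		long fanout     = 100;		/* keys per interior page    */
		long page_kb    = 2;		/* index page size	     */

		long interior = 0;
		long level    = leaf_pages;

		while (level > 1) {		/* sum the non-leaf levels   */
			level = (level + fanout - 1) / fanout;
			interior += level;
		}

		printf("cache the non-leaf levels: about %ld Kbytes\n",
		    interior * page_kb);
		return 0;
	}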

	To answer one other frequently asked question: unlike some other
machines, a Pyramid does not store multiple copies of a shared memory
segment in swap space. Just one copy is stored, not one per process
attached to the segment. (In OSx 5.0, no copy need be stored in swap at
all.)
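
	For anyone unfamiliar with shared memory segments, here is a
minimal System V style sketch; the point is that every process attaching
the same id maps the one segment, which is why only one copy (at most)
needs backing store. Check your own manual pages for the exact interface
on your release:

	/*
	 * Minimal System V shared memory sketch. Any other process that
	 * attaches this id with shmat() maps the same single segment.
	 * (Not OSx-specific; the interface may differ on your system.)
	 */
	#include <stdio.h>
	#include <sys/types.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	int
	main()
	{
		int id;
		char *p;

		id = shmget(IPC_PRIVATE, 1024 * 1024, IPC_CREAT | 0600);
		if (id < 0) {
			perror("shmget");
			return 1;
		}
		p = (char *) shmat(id, (char *) 0, 0);
		if (p == (char *) -1) {
			perror("shmat");
			return 1;
		}
		p[0] = 'x';	/* another process attaching this id would  */
				/* see the same single copy of the segment  */
		shmdt(p);
		shmctl(id, IPC_RMID, (struct shmid_ds *) 0);
		return 0;
	}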

-- 

					eric
					...!pyramid!eric


