RAM disk vs paging + buffer cache

watson at convex.UUCP
Fri Aug 15 07:14:00 AEST 1986


I believe the buffer cache is the way to go. I have made extensive
measurements on the Convex system, and our cache hit rate is
over 95% in the normal case. Often the hit rate is 99% over
intervals of tens of seconds or more.
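As a rough illustration of what hit rates in this range buy you, here is a
back-of-the-envelope model of effective access time. The latency figures
are assumptions chosen for illustration, not Convex measurements.

```python
# Effective access time under a buffer cache:
#   t_eff = h * t_hit + (1 - h) * t_miss
# The latency figures below are illustrative assumptions only.
def effective_ms(hit_rate, t_hit_ms=0.05, t_miss_ms=30.0):
    return hit_rate * t_hit_ms + (1.0 - hit_rate) * t_miss_ms

for h in (0.90, 0.95, 0.99):
    print(f"hit rate {h:.0%}: {effective_ms(h):.3f} ms per access")
```

Note how the average cost is dominated by the miss term: going from 95% to
99% hits cuts effective access time by several times, which is why a large
cache pays off so dramatically.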

We dedicate 10% of memory to the buffer cache, so we often run
with caches in the range of 8 Mb (although this is user
configurable). We do very little physical I/O at
all on very large time sharing systems. Having such a large cache,
and multiple physical I/O paths, allows us to use larger-block
file systems (blocksize 64kb, fragsize 8kb), striped across
multiple disks, to achieve I/O source/sink rates on the order of
two megabytes per second or more. Of course the cache is not
useful for random reads.
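The striping arrangement can be sketched as a simple round-robin mapping
from logical file-system block to (disk, block-within-disk). The layout
below is an assumption for illustration, not necessarily the Convex scheme.

```python
# Round-robin striping: consecutive logical blocks land on different
# drives, so a sequential read keeps several spindles busy at once.
def stripe(logical_block, ndisks):
    """Map a logical block number to (disk index, block offset on disk)."""
    return logical_block % ndisks, logical_block // ndisks

ndisks = 4
for b in range(8):
    disk, off = stripe(b, ndisks)
    print(f"logical block {b} -> disk {disk}, offset {off}")
```

Because any run of `ndisks` consecutive blocks touches all the drives, the
aggregate sequential transfer rate approaches the sum of the individual
drives' rates, which is how multi-megabyte-per-second rates are reached.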

Conceptually, I agree that it would be nice if the file system buffer
cache and paging system cache were one. You could have one set
of kernel I/O routines, instead of two. You could dynamically
put pages to their best use, for file buffering or text
buffering. The problem is that you introduce serious coupling
between the I/O system and the VM system, which right now
are relatively uncoupled. You need to be very careful about
locking to avoid deadlocks between the I/O system and VM.
For example: you can't do I/O because there aren't any buffer
cache pages to be had, but you can't get pages because all
the processes in the VM system are locked doing I/O.
The other problem is that you want to avoid copying the data.
The Convex C-1 can copy large blocks of data at 20 Mb/s;
nevertheless, we try to avoid the copying if at all possible.
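One common way to break the deadlock described above is to hold back a
small reserve of pages that only the I/O path may draw on, so page-out can
always obtain a buffer and make forward progress. A minimal sketch, with
hypothetical names; this is not the Convex kernel's actual mechanism.

```python
# Reserved-pool allocation: ordinary requests may drain free memory only
# down to a floor; allocations on behalf of in-flight I/O may go below it.
# This guarantees the I/O needed to free pages can itself get a page.
class PagePool:
    def __init__(self, total, reserve):
        self.free = total
        self.reserve = reserve   # pages held back for the I/O path only

    def alloc(self, for_io=False):
        floor = 0 if for_io else self.reserve
        if self.free > floor:
            self.free -= 1
            return True
        return False             # ordinary caller must wait for a release

    def release(self):
        self.free += 1

pool = PagePool(total=8, reserve=2)
while pool.alloc():              # ordinary allocations drain to the floor
    pass
print(pool.free)                 # the reserve is still intact
print(pool.alloc(for_io=True))   # but the I/O path can still proceed
```

The point is ordering, not the data structure: as long as the resource that
I/O completion needs can never be exhausted by the VM side, the circular
wait in the example above cannot form.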

I haven't kept any statistics on text hit ratio.

One of the biggest problems we are currently facing is with
kernel NFS... I think it's nice to have stateless servers,
but you must effectively disable the buffer cache for writing.
The penalty isn't too bad on a Sun-class machine, but seems
really gross on a C-1-class machine. I am currently investigating
this issue, and I'd enjoy hearing from anyone with ideas on it.
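A quick model of why the write path suffers: with a stateless server, each
WRITE must reach stable storage before the reply goes out, so throughput is
bounded by the synchronous write latency rather than by memory speed. The
latency figures below are assumptions for illustration only.

```python
# With write-behind caching a write completes at memory speed; with
# stateless-server semantics it completes at disk speed. One synchronous
# operation per request, so throughput = block size / per-op latency.
def write_throughput_kb_s(block_kb, per_op_ms):
    return block_kb / (per_op_ms / 1000.0)

print(f"write-behind (memory-speed ack): "
      f"{write_throughput_kb_s(8, 0.5):.0f} KB/s")
print(f"write-through (disk-speed ack):  "
      f"{write_throughput_kb_s(8, 30.0):.0f} KB/s")
```

The ratio between the two is just the ratio of the latencies, which is why
the penalty feels tolerable on a slow client but gross on a machine whose
buffer cache would otherwise absorb the writes.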

Those of you who want to discuss I/O and performance issues specifically
can mail to me at ihnp4!convex!watson - I don't normally read
net articles.

Tom Watson
Convex Computers
Operating System Developer



More information about the Comp.unix.wizards mailing list