V.3 + top

Paul Fox fox at marlow.uucp
Sun Nov 13 10:16:11 AEST 1988


Well, after having been given the info about the region tables --
that is exactly what I was after. Top now displays the current
resident working set (as derived from the region table).

Now, is this a bug? I've been trying to find out why my system
(and the ISC V.3 systems we use) are so slow compared to Xenix/386.

This is what I have done.

I have an editor, GRIEF, which edits files by reading them into memory.
I created a really big file (~1MB) and read it all in. This essentially
caused everything else to be swapped out.

I then repeat this on another screen, so that the first GRIEF swaps
out. 

I then go back to the original GRIEF and tell it to go to the middle
of the file. This should involve paging in most of the code (~100KB) +
a small fraction of the data file. However, what happens is that the
2nd GRIEF swaps out in its entirety and about 95% of the 1st one swaps
back in.

I looked through my code to ensure that GRIEF isn't randomly walking all
over its virtual address map, and as far as I can see it doesn't.

(By the way this is a 4MB machine with 400 disk buffers.
The startup of /unix says I have avail memory = 2510848).

Anyway, so I decided to sdb GRIEF and watch what happens when it
tries to go to the middle of the file. (Going to the
middle of the file involves adding 'n' to the current line number, and
the display code working out where to find lines n..n+24).

If I single step GRIEF, then only about 10-20% of GRIEF swaps back in,
and the system performs nicely as expected.

The question is: why, when GRIEF runs at full speed, does the kernel
bring in the entire image?

Another thing I did was to use crash to look at the regions
allocated to this process. From my understanding, a region is a description
of a contiguous piece of memory, in units of whole pages. I don't
know how V.3 'coalesces' pages into a region. But GRIEF has regions
with r_pgsz set to 30-70 pages. What I presume is that
when a page fault occurs, V.3 swaps in the entire region even though
only one or two pages may be needed. This may have been added to V.3
as an 'optimisation', for example if all the pages happen
to be in consecutive sectors in the swap space. If so, I think
this is bloody stupid.

I think the performance issue is that V.3 swaps in too much, and
defeats the whole object of virtual memory. The V.3 virtual
memory system is, in effect, 60-70% of the way to a pure swapping system.

Is this a bug, or have ISC & Microport badly tuned the kernel?

PS. Can somebody tell me what good values for the high and low water
marks are for the page fault handler? I have them set very close to
each other to try to avoid long periods of the bdflush/vhand processes
writing to disk. However, I think having the high
water mark set higher may give a performance improvement.
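For concreteness, on the V.3 ports I've seen the water marks are the GPGSLO/GPGSHI tunables: vhand starts stealing pages when free memory drops below GPGSLO and stops once it climbs back above GPGSHI. The values below are purely hypothetical numbers for a 4MB machine, and the exact tunable names and the file they live in vary between ISC and Microport, so check your own release notes before rebuilding the kernel:

```
* /etc/conf/cf.d/stune -- hypothetical settings, adjust for your port
* vhand begins stealing pages when freemem < GPGSLO (pages)
* and stops once freemem > GPGSHI
GPGSLO    25
GPGSHI    100
```

A wider gap between the two means vhand runs less often but does more work per run; a narrow gap (like mine) makes it run in short, frequent bursts.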

Can somebody please respond? We don't have source code here, so
I can't look it up myself (I usually have to disassemble the kernel
to work out what's wrong).

Many thanks in advance.


=====================
     //        o      All opinions are my own.
   (O)        ( )     The powers that be ...
  /    \_____( )
 o  \         |
    /\____\__/        Tel: +44 628 891313 x. 212
  _/_/   _/_/         UUCP:     fox at marlow.uucp



More information about the Comp.unix.microport mailing list