How many users _really_?

Piercarlo Grandi pcg at thor.cs.aber.ac.uk
Sat Aug 26 06:42:04 AEST 1989


In article <32824 at apple.Apple.COM> leech at Apple.COM (Jonathan
Patrick Leech) writes:

>In article <1045 at aber-cs.UUCP> pcg at cs.aber.ac.uk (Piercarlo Grandi) writes:
>>Agreed. In most light applications, i.e. o.a. or general timesharing, at any
>>one time 1/10th of the users are running a process. A Vax 11/780, which is a
>>much less powerful machine than a suitably configured 386, could easily run
>>two dozen (and three dozen with some effort) users doing small compiles
>>etc...

>   Only if you *like* waiting 5 minutes for "hello.c" to compile, or
>several seconds for screen updates (at 4800 baud, yet).  We had a 780
>as the main student machine at Caltech some years back, and it could
>not comfortably handle more than 10-15 users or so.

Between comfortable use at the 10-15 user level and at the two dozen
user level there is a world of difference, and that difference is
suitable configuration (always assuming that no large jobs are run,
e.g. tex/lisp/gnu etc.).

Using the 10% rule, with 10-15 users one is likely to see maybe 1-2
processes active at any one time; with two dozen it is more likely
2-3. Not a terribly big difference.
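
A trivial back-of-the-envelope check of that arithmetic, in C; the
10% figure is just the rule of thumb above, not a measurement:

/* Expected concurrently active processes under the "one user in
 * ten is actually running something" rule of thumb quoted above. */
#include <stdio.h>

int main(void)
{
    double active_fraction = 0.10;     /* the 10% rule */
    int users[] = { 12, 24, 36 };      /* one, two, three dozen users */
    int i;

    for (i = 0; i < 3; i++)
        printf("%2d users -> about %.1f active processes\n",
               users[i], users[i] * active_fraction);
    return 0;
}

which prints roughly 1.2, 2.4 and 3.6: going from a dozen to two
dozen users adds only one runnable process or so.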

The real difference TO RESPONSE TIME lies in the I/O configuration.
If you have a single disc, or a single data path to peripherals, a
lot of full-screen edits over dumb (i.e. silo/FIFO-less) serial
lines, and too few buffers, then a 780 is on its knees even before
the 10-15 user mark.

Adding a second disc, raising the buffer cache to 20-25% of memory
(e.g. 1-2 megabytes), enabling pseudo-DMA or installing intelligent
serial lines, keeping /tmp off the disc where users' files are, and
putting swap/paging on a disc other than the one holding root and
/usr, all make a big, big difference. If you have independent data
paths for your discs (e.g. a MASSBUS one and a UNIBUS one), things
get that much better (and using 4.3 instead of 4.2, and 4.2 instead
of 4.1, of course improves things again).
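
To make the buffer cache figure concrete, here is a small sketch;
the 4 and 8 megabyte memory sizes are illustrative assumptions of
mine, not a description of any particular machine:

/* Buffer cache sized at 20-25% of physical memory, as suggested
 * above.  The memory sizes below are illustrative only. */
#include <stdio.h>

int main(void)
{
    long mem_kb[] = { 4096, 8192 };    /* assumed 4 MB and 8 MB machines */
    int i;

    for (i = 0; i < 2; i++)
        printf("%ld KB of memory -> buffer cache of %ld to %ld KB\n",
               mem_kb[i], mem_kb[i] / 5, mem_kb[i] / 4);
    return 0;
}

i.e. about 0.8-1 megabyte of cache on a 4 megabyte machine and
1.6-2 megabytes on an 8 megabyte one, which is where the "1-2
megabytes" figure above comes from.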

The same is true, with adjustments, on a 386. A 386 with one 300 meg
drive is MUCH slower than one with two 150 meg drives, preferably on
a multithreaded controller (e.g. AHA154x) or on two single-threaded
controllers, especially if the disc partitioning is done right.

	On a 386 it is also important to ensure that there is enough
	real memory that swapping never occurs, even if this means a
	huge (huge!) waste of the commodity, as the System V swap
	algorithm is badly, badly brain damaged (in particular the
	expansion swaps), even though paging ought to alleviate the
	occurrence of the problem.

With many users, CPU time is not that important, or at least not as
important as I/O latency and overhead (especially to temporary files
and swap).

Too bad that the default configurations of almost all commercial
systems are badly detuned (notably SUN's, by the way). I have
seen many a UNIX system or network (not to speak of MVS, VMS,
EXEC, etc...) running at a fraction of its potential load
because of obviously poor tuning.

	There is also the problem that many UNIX algorithms are simply
	unsuitable for today's large machines; one famous case is
	BSD's beautiful `clock' page replacement algorithm, which was
	carefully designed for performance ON A 512K SYSTEM (the
	original ucbvax). Too bad that on a machine with several
	megabytes the `clock' sweep may now take *minutes*, thus making
	it essentially useless... Similar is the case of the System V
	swap algorithm, as already noted.
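
For those who have not met it, here is a minimal sketch of the
`clock' (second chance) idea in C; this is NOT the 4BSD pagedaemon
code, just an illustration of why the sweep time grows with the size
of the frame table:

/* Simplified `clock' (second chance) page replacement.  A "hand"
 * sweeps the frame table clearing reference bits; the first frame
 * found with its reference bit already clear is reclaimed.  The
 * bigger the frame table, the longer a full revolution of the hand
 * takes -- hence the complaint about multi-megabyte machines. */
#include <stddef.h>

struct frame {
    int referenced;                    /* set when the page is touched */
};

/* Pick a victim frame, advancing the clock hand.  Terminates within
 * two full sweeps, since each pass clears reference bits. */
size_t clock_select(struct frame *frames, size_t nframes, size_t *hand)
{
    for (;;) {
        size_t i = *hand;
        *hand = (*hand + 1) % nframes; /* advance the hand */

        if (frames[i].referenced)
            frames[i].referenced = 0;  /* give it a second chance */
        else
            return i;                  /* unreferenced: reclaim it */
    }
}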
--
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk at nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg at cs.aber.ac.uk


