The performance implications of the ISA bus

Boyd Roberts boyd at necisa.ho.necisa.oz.au
Fri Dec 21 10:36:07 AEST 1990


In article <PCG.90Dec19145630 at odin.cs.aber.ac.uk> pcg at cs.aber.ac.uk (Piercarlo Grandi) writes:
|
|The *real* problem is that most (all, I think) 386 UNIX disc (and tape!)
|drivers are poorly written, as they do not use pseudo-DMA, a standard
|technique of PDP/VAX drivers (it is even mentioned in the 4.3BSD Leffler
|book). This is described a bit later in this article.

Very probably.

|
|This is mostly because the driver is written so that each IO transaction
|involves only one sector. Therefore for every sector the top half of the
|driver starts the transaction, then sleeps, the bottom half gets
|activated by the interrupt and wakes up the top half.
|

The standard technique is for xxstrategy() to sort the I/O onto a queue
of pending I/O operations and then call xxstart().  xxstart() peels
the next I/O off the queue and instructs the controller to do it.

When xxintr() is called it picks up the completed I/O and calls iodone()
on the buffer, waking up anyone who's waiting for the buffer (there may
or may not be anyone waiting).  xxintr() calls xxstart() and the process
is repeated until the queue of pending I/Os is empty.

This, of course, requires sane controllers, but it's the standard way to
do the job.  More than that, it's the _textbook_ way of doing the job.
Even if you have a dumb controller, and it requires several request/interrupt
cycles, you do it at interrupt time, unless it's _really_ expensive.  It's
all a trade-off.
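
To make that concrete, here's a rough sketch in C.  It's illustrative
only: xxgo() and the queue layout are made up, a real driver would sort
the queue by block number (a la disksort()) rather than append FIFO,
and error handling is omitted.

    /*
     * Sketch of the queue-driven strategy/start/intr pattern.
     */
    struct buf {
        struct buf *b_actf;     /* driver queue linkage */
        /* ... block number, byte count, memory address, flags ... */
    };

    static struct buf *xxhead, *xxtail; /* queue of pending I/O */
    static int xxbusy;                  /* controller active? */

    extern void iodone(struct buf *);   /* mark done, wake waiters */
    extern void xxgo(struct buf *);     /* program the controller
                                           (illustrative) */

    /* Top half: queue the request, kick the controller if idle. */
    void
    xxstrategy(struct buf *bp)
    {
        int s = splbio();       /* block disk interrupts while queueing */

        bp->b_actf = NULL;
        if (xxtail)
            xxtail->b_actf = bp;
        else
            xxhead = bp;
        xxtail = bp;
        if (!xxbusy)
            xxstart();
        splx(s);
    }

    /* Peel the next I/O off the queue and hand it to the controller. */
    void
    xxstart(void)
    {
        if (xxhead == NULL) {
            xxbusy = 0;
            return;
        }
        xxbusy = 1;
        xxgo(xxhead);           /* returns at once; completion arrives
                                   later as an interrupt */
    }

    /* Bottom half: completion interrupt.  Finish this one, start next. */
    void
    xxintr(void)
    {
        struct buf *bp = xxhead;

        xxhead = bp->b_actf;
        if (xxhead == NULL)
            xxtail = NULL;
        iodone(bp);             /* wakes anyone sleeping on the buffer */
        xxstart();              /* keep the controller busy */
    }

Note that nothing here sleeps: the top half queues and returns, and the
bottom half keeps the controller fed entirely at interrupt time.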

|The sleep/wakeup between the top and bottom halves involves, on a busy
|system, two context switches, which is already bad, and, most
|importantly, calls the scheduler. There is a paper that shows that under
|many UNIX ports the cost of a wakeup/sleep is not really that of the
|context switches, but of the scheduler calls to decide who is going to
|run next, as this takes 90% of the time of a process activation.

Modern UNIX systems use only one context switch.  The switch to the
scheduler's context is no longer done.  The scheduler was never called
to do high-level scheduling from the dispatcher.  The scheduler would
run periodically and _assist_ processes in running by swapping old
processes out and deserving processes in.

However, its context was `borrowed' to do the run queue search.  Its
_context_ and nothing more.  The search is cheap, although the switches
are usually expensive.  Modern UNIX systems search the run queue in the
context of the process that's giving up the CPU.
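
Roughly, in C, the idea is this.  A sketch only, not any particular
kernel's code: the proc layout and resume() are made up for
illustration, and interrupts are assumed blocked throughout.

    struct proc {
        struct proc *p_link;    /* run queue linkage */
        int          p_pri;     /* priority: lower is better */
        /* ... saved register context, etc. ... */
    };

    struct proc *runqueue;      /* head of the run queue */
    struct proc *curproc;       /* process currently on the CPU */

    extern void resume(struct proc *);  /* low-level context switch */

    /* Called in the context of the process yielding the CPU. */
    void
    swtch(void)
    {
        struct proc *p, *best = NULL, **pp, **bestpp = NULL;

        /* The cheap part: a linear search for the best runnable
           process, done right here, in the yielding process's
           context -- no switch into a scheduler context first. */
        for (pp = &runqueue; (p = *pp) != NULL; pp = &p->p_link) {
            if (best == NULL || p->p_pri < best->p_pri) {
                best = p;
                bestpp = pp;
            }
        }
        if (best == NULL)
            return;             /* nothing else to run; keep going */

        *bestpp = best->p_link; /* unlink the winner */
        curproc = best;
        resume(best);           /* the expensive part: one switch,
                                   directly into the chosen process */
    }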

|Ah yes! Devoting to the cache 25% of available memory seems to be a good
|rule of thumb.

Sure.

|dougp> and a couple-MB RAMdisk for /tmp if I have the memory available.
|
|But /tmp should not be on a RAM disk, it should be in a normal
|filesystem, even if it almost never causes IO transactions, as
|short-lived files under /tmp should exist only in the cache.
|

Oh dear, it's RAM disk time again.  Where is that revolver?

|Unfortunately the "hardening" features of the System V filesystem mean
|that even short-lived files will be sync'ed out (at least the inodes),
|but this can be partially obviated by tweaking tunable parameters, for
|example enlarging substantially the inode cache (almost as important as
|the block cache), and slowing down bdflush.  Overall, instead of having
|a RAM disk for /tmp, I would devote the core that would go to it to
|enlarging the buffer and inode caches.

Eh?  Writing things out doesn't cause them to be thrown away.


Boyd Roberts			boyd at necisa.ho.necisa.oz.au

``When the going gets weird, the weird turn pro...''
