ESDI controller recommendations

Vernon Schryver vjs at calcite.UUCP
Thu Aug 31 04:26:30 AEST 1989


In article <123922 at sun.Eng.Sun.COM>, plocher%sally at Sun.COM (John Plocher) writes:
> We have here the timeless tradeoff between software and hardware....
> This works well for a minimal system, but the high performance systems all
> have migrated to the "add CPUs/smarts to the I/O system" camp.  Examples
> here include Digiboard Com 8/i serial boards, TI/Intel graphics chips,
> Adaptec SCSI host adapter, and the above mentioned DPT hard disk controller.

Extrapolations from the PC-AT corner of the world to the universe are hazardous.

As has been true for the >20 years of my experience, the current trend
could also be said to be in the dumb direction.  Both trends are always
present.  The UNIX workstations which are delivering ~1MByte/sec
TCP/ethernet have dumb hardware.  More than one UNIX vendor is unifying the
UNIX buffer pool and page cache.  Some people would say the high end of
graphics performance is recently showing a lot more dumb hardware/smart
software.  In the low end, replacing CPU code or a hardware raster-op
engine with a DSP is not really an increase in controller
intelligence.  You don't need a CPU or even DMA for USARTs below T1 speeds,
let alone UARTs; all you need are reasonable FIFOs and/or reasonable care
for interrupt latency in the rest of the system.  Recall that the crazy
UNIX tty code is the descendant of smart hardware for doing line disciplines.
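A quick back-of-the-envelope calculation makes the point.  A sketch
only; the 16-byte FIFO depth below is an assumed example, not a claim
about any particular part:

    /* Rough FIFO arithmetic for a serial line at T1 speed.
     * The 16-byte FIFO depth is an assumption for illustration.
     */
    #include <stdio.h>

    int main(void)
    {
        double bps   = 1544000.0;       /* T1 line rate, bits/sec */
        double bits  = 10.0;            /* start + 8 data + stop */
        double usec_per_char = 1e6 / (bps / bits);
        int    depth = 16;              /* assumed FIFO depth */

        printf("one character every %.1f usec\n", usec_per_char);
        printf("a %d-deep FIFO buys about %.0f usec of latency slack\n",
               depth, depth * usec_per_char);
        return 0;
    }

At T1 rates a character arrives every 6 or 7 microseconds, so even a
modest FIFO leaves the rest of the system roughly 100 microseconds to
answer the interrupt; no on-board CPU required.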

> ...[various advantages of doing disk caching in hardware]... 
>   Ordered write back - Writes are cached and elevator sorted by the controller

This could be considered a bug.  If you are shooting for file system
reliability, the file system must be able to specify the order of some
writes.  For example, in UNIX-style file systems it is better to write the
data blocks before a new inode, and the new inode before the directory
data blocks containing the new name.  Any other order produces more chaos
if a crash occurs in the middle of whatever sequence you choose.
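The same discipline shows up at user level in the familiar
write-then-rename idiom.  A minimal sketch, with error handling
abbreviated and the file names invented for the example:

    /* Crash-safe file creation by explicit write ordering: the
     * data must reach the disk before the name that points to it,
     * and rename() then makes the name appear atomically.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *data = "precious bits\n";
        int fd = open("file.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
            return 1;
        write(fd, data, strlen(data));
        fsync(fd);                  /* force the data blocks out first */
        close(fd);
        rename("file.tmp", "file"); /* only now does the name exist */
        return 0;
    }

A controller that elevator-sorts those writes behind the driver's back
defeats exactly this sort of care; note that fsync(2) returns as soon
as the driver believes the data is down, so a write-back cache in the
controller can break even this.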

Perhaps considering the typical reliable database with "commit" operations
would be more convincing.  It is absolutely wrong for the controller to
re-order the writes of such a database manager.
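A write-ahead log only works if the commit record is known to be on
disk before the data pages it covers are overwritten.  A sketch, with
the file names and record layout invented for illustration:

    /* Write-ahead-log commit ordering: the log record must be
     * durable before the data page is rewritten.  A controller
     * that reorders these two writes can corrupt the database
     * if a crash lands between them.
     */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *rec  = "COMMIT txn 42\n";
        const char *page = "updated page contents\n";
        int log = open("db.log",   O_WRONLY | O_APPEND | O_CREAT, 0644);
        int dat = open("db.pages", O_WRONLY | O_CREAT, 0644);

        if (log < 0 || dat < 0)
            return 1;
        write(log, rec, strlen(rec));
        fsync(log);                 /* commit record on disk FIRST */
        write(dat, page, strlen(page));
        fsync(dat);                 /* then the data page */
        close(log);
        close(dat);
        return 0;
    }

If the controller silently sorts the data page ahead of the log
record, a crash between the two leaves half-written pages with no
record of how to undo them.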

> In general, pushing all this off onto the controller is a win because it simplifies
> the OS design and results in less main processor overhead to handle I/O to the
> disk.
>     -John Plocher

Putting more smarts in the controller is good, if the smarts cannot be used
for anything else (e.g. faster and longer-burst ECC, other strange error
recovery stuff, control loops to improve tracking, logic for lasers), if
there is no important information that the smarts cannot reach (e.g.
results of things like sync(2) or bdflush or binval()/btoss()/...  in the
UNIX buffer handling), and if it is not possible to use the smarts of the
operating system instead (e.g. no one with the time and source to improve
the UNIX buffer code).

I know no one I respect who is in "OS design".  The good people are in
"system design", which requires trying to put things where they will do the
most good for the entire system, without regard for artificial distinctions
derived from organization charts, such as the file system department and the
controller group and the application division.   Of course, politics,
budgets, and schedules often warp implementations of good designs, but the
"right" place to put intelligence is not a simple matter of dogma.

Vernon Schryver
vjs at sgi.com


