ESDI vs SCSI

neese at cpe.UUCP
Thu Apr 13 00:22:00 AEST 1989


Well, I guess I should jump in on this.  First, I have had experience
with ST-506, ESDI, and SCSI.  ST-506 and ESDI operate, at the driver level,
in much the same manner.  ESDI is a much better interface than ST-506 and
is much faster.  SCSI, on the other hand, is quite different from
either ST-506 or ESDI.  When one approaches SCSI, one should be thinking
in terms of the system implementation, as there are many interface cards
available for AT machines.  Given that, no one can categorically say that
any interface is better or worse than SCSI.

The key thing that SCSI brings to the party is multi-threaded I/O.  With
a good high-performance host adapter, you can easily get a minimum AT bus
data rate of 6.7MBytes/sec, and on some systems as high as 8MBytes/sec.
I know whereof I speak, as I have a host adapter in my machine that does
this.  The SCSI drives are the limiting factor, though, as they can only
achieve steady data rates of up to 3.2MBytes/sec on large transfers (>32K).

The real thing to look out for in any Xenix/Unix system is the overhead
associated with disk I/O.  These high data rates are wonderful, but they
don't really do you any good if the CPU is bogged down moving all the
data around, and even if you use DMA (in burst mode), the transfer still
halts the CPU to some degree.

The one thing that many people overlook in a disk implementation is that
while the kernel is in the driver code, everything else is dead: the
kernel cannot do any scheduling for user processes.  So with an
ST-506/ESDI driver, the kernel is dead until the driver gets the block(s)
that the kernel requested.
With a good SCSI implementation, the driver never has to wait for a
requested block, thanks to SCSI's multi-threaded capability.  This lets
the kernel allocate more CPU time to user processes.
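
To make that contrast concrete, a rough sketch in classic block-driver
style might look like this.  The entry points (esdi_start, esdi_done,
scsi_queue, and the strategy routines themselves) are invented for
illustration; this is not code from any real Xenix driver, just the
shape of the two approaches described above.

	/*
	 * Illustrative sketch only -- invented entry points, not code
	 * from any actual driver.
	 */

	/* ST-506/ESDI style: the driver holds the kernel's attention
	 * until the controller has delivered the requested block(s). */
	void
	esdi_strategy(struct buf *bp)
	{
		esdi_start(bp);         /* program the controller        */
		while (!esdi_done(bp))
			;               /* kernel tied up until the data */
					/* for this one request is in    */
		iodone(bp);
	}

	/* SCSI style: hand the command to the host adapter and return
	 * at once.  The adapter works on several commands at a time,
	 * and the interrupt routine completes each buf as it finishes,
	 * so the kernel is free to schedule user processes meanwhile. */
	void
	scsi_strategy(struct buf *bp)
	{
		scsi_queue(bp);         /* multi-threaded: just enqueue  */
	}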
This all sounds real good on paper, and I felt that way myself, so I
took it upon myself to write a benchmark that demonstrates how much
time the CPU spends on disk I/O.  The results were quite surprising.
I found that the CPU was able to get 35% more work done while heavy
disk I/O was going on with my SCSI implementation than with a good ESDI
implementation (1:1 interleave, read-ahead controller).  The data rate
for the ESDI drive was about 25% higher than the SCSI drive's, but at
the expense of reducing the CPU time available to other processes.
Just remember that there are good SCSI implementations and bad SCSI
implementations.  If anyone is interested in the benchmark, please
send E-mail and I will send you the source.  BTW, the benchmark
depends on the SCO shared memory implementation.
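
For anyone who just wants the general shape of such a benchmark, a
minimal sketch of the idea follows.  This is not the actual benchmark:
the test file name, block size, and pass count are placeholders, and it
assumes only standard System V shared memory (shmget/shmat), which is
the SCO dependency mentioned above.

	/*
	 * Minimal sketch of the general idea -- not the real benchmark.
	 * One process hammers the disk with large sequential reads while
	 * a forked child spins doing CPU work, bumping a counter kept in
	 * System V shared memory.  The final count shows how much CPU
	 * work got done while the disk I/O was in progress.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <signal.h>
	#include <sys/types.h>
	#include <sys/wait.h>
	#include <sys/ipc.h>
	#include <sys/shm.h>

	#define BLKSZ  (64 * 1024)      /* large transfers (>32K)        */
	#define PASSES 256              /* how much I/O to issue         */

	int main(void)
	{
		int shmid, fd, i, j;
		void *mem;
		volatile long *count;
		char *buf;
		pid_t pid;

		shmid = shmget(IPC_PRIVATE, sizeof(long), IPC_CREAT | 0600);
		mem = shmat(shmid, NULL, 0);
		count = (volatile long *)mem;
		*count = 0;

		if ((pid = fork()) == 0) {      /* child: pure CPU work  */
			for (;;)
				(*count)++;     /* spin until killed     */
		}

		buf = malloc(BLKSZ);            /* parent: heavy disk I/O */
		fd = open("/tmp/testfile", O_RDONLY);   /* placeholder   */
		if (fd < 0) {
			perror("open");
			return 1;
		}
		for (i = 0; i < PASSES; i++) {
			lseek(fd, 0L, SEEK_SET);
			for (j = 0; j < 16; j++)
				read(fd, buf, BLKSZ);
		}
		close(fd);

		kill(pid, SIGKILL);             /* stop the CPU child    */
		wait(NULL);
		printf("CPU work units done during disk I/O: %ld\n", *count);

		shmdt(mem);
		shmctl(shmid, IPC_RMID, NULL);
		return 0;
	}

Run the same program against a file on each drive under test and
compare the counts; a higher count means more CPU was left over for
user processes while the disk I/O was going on.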

					Roy Neese
				UUCP @ {killer,merch,texbell}!cpe!neese
					Tandy Computer Product Engineering


