Caching disk controllers and 386 multiprocessor

John Pettitt jpp at slxsys.specialix.co.uk
Wed Jun 14 22:00:05 AEST 1989


From article <1015 at aber-cs.UUCP>, by pcg at aber-cs.UUCP (Piercarlo Grandi):
> In article <4038 at slxsys.specialix.co.uk> jpp at slxsys.specialix.co.uk (John Pettitt) writes:
>     From the tests I ran the DPT wins over adding 2 MB of buffers to /xenix
>     (don't know about `real' unix).  The improvement gained by having more
>     buffers in /xenix was not great.  I think this was mostly due to the
>     caching code being rather old and not that well written (putting on
>     flame-proof suit :-).
> 
> Well, there is not much you can do to improve the caching algorithm once you
> have hashed cache access. If Xenix 2.3.2 does not have it, tough; all BSDs
> and later SystemVs do have it.

Xenix does have hashed cache access.
The point I was making was not about cache access methods but about cache
write-back strategies.
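
For anyone who hasn't poked around in the kernel, `hashed cache access'
just means getblk() finds a buffer by hashing the (device, block) pair
onto a short chain instead of scanning every buffer header.  A toy sketch
in C follows; the names and sizes are my own inventions, not the Xenix
source:

/* Toy sketch of hashed buffer-cache lookup, in the style of the
 * classic getblk()/incore().  Names and sizes are made up; this is
 * not the Xenix source. */
#include <stdio.h>
#include <stdlib.h>

#define NHASH 64                        /* number of hash chains    */
#define bhash(dev, blk) (((dev) + (blk)) & (NHASH - 1))

struct buf {
    int dev, blkno;                     /* device and block number  */
    struct buf *hnext;                  /* next buffer on the chain */
    char data[512];
};

static struct buf *hashtab[NHASH];

/* Walk one short hash chain instead of scanning every header. */
struct buf *incore(int dev, int blkno)
{
    struct buf *bp;
    for (bp = hashtab[bhash(dev, blkno)]; bp != NULL; bp = bp->hnext)
        if (bp->dev == dev && bp->blkno == blkno)
            return bp;                  /* cache hit  */
    return NULL;                        /* cache miss */
}

/* Push a buffer onto the head of its hash chain. */
void binsert(struct buf *bp)
{
    int h = bhash(bp->dev, bp->blkno);
    bp->hnext = hashtab[h];
    hashtab[h] = bp;
}

int main(void)
{
    struct buf *bp = calloc(1, sizeof *bp);
    bp->dev = 1;
    bp->blkno = 4711;
    binsert(bp);
    printf("block 4711: %s\n", incore(1, 4711) ? "cached" : "missed");
    return 0;
}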
> 
>     The basic problem seemed to be the cache write-back code:  when lots
>     of buffers are added to Xenix it fills almost all of them with dirty
>     data, then spends several seconds writing them all back, and
>     real-time response goes out the window.
> 
> But Unix does flush the cache to have less loss of data in the case of
> crashes. In particular, "hardened" filesystems implement directory and inode
> modifications with write-thru, instead of write-back, which kills the
> performance of dump restores when there are many small files.

You have missed the point.
The problem was that Xenix would sit around for up to 2 or 3 seconds, then
try to write over a megabyte to the disk in one go.  If you happened to want
a file that was on a block at the other end of the disk you had to wait,
and wait, and wait.
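
To put numbers on it (all assumed, purely for illustration): if 100
buffers have gone dirty and the kernel flushes them in one burst at, say,
20 ms per write, a read arriving just behind the burst waits about two
seconds, which is exactly the stall described above.  Trickling a few
writes per clock tick bounds the wait instead:

/* Toy model of burst flush vs. trickled write-back.  All numbers
 * are assumptions for illustration, not measurements. */
#include <stdio.h>

#define NDIRTY   100   /* dirty buffers queued when the flush fires */
#define WRITE_MS  20   /* assumed cost of one disk write            */
#define PER_TICK   5   /* buffers written per tick when trickling   */

int main(void)
{
    /* Burst: a read arriving just after the flush starts queues
     * behind every dirty buffer. */
    printf("burst flush: worst-case read delay %d ms\n",
           NDIRTY * WRITE_MS);

    /* Trickle: at most PER_TICK writes are ever ahead of the read. */
    printf("trickle:     worst-case read delay %d ms\n",
           PER_TICK * WRITE_MS);
    return 0;
}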
> 
> The DPT is far better balanced in this respect and so gives a much more
> even throughput.
> 
> Either the DPT has battery backing for its 2 MB of cache, or it is a very
> dangerous proposition. Even if it has non-volatile memory, it will delay,
> possibly by a lot, the moment at which data is written to the disc, which
> makes error reporting even more imprecise than it is; and, I hope it has
> an explicit flush command, if you want to be sure that disc contents
> actually reflect what you think is on them.

Non-volatile RAM in the controller buys you very little: the write-back
window on the DPT is only 0.25 seconds (from the write arriving at the
controller to the time the sector is added to the controller-to-disk
transfer queue).

And yes it does have a flush command.
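
Independent of the controller's flush command (whose exact invocation I
won't guess at here), the usual Unix-level idiom for `make sure it's
really on the disc' is to write and then call sync().  A minimal sketch,
with a made-up file name:

/* Minimal sketch: force buffered data toward the disc from user
 * level.  sync() schedules all dirty kernel buffers for writing;
 * note it may return before the writes have completed. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("logfile", O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return 1;
    write(fd, "critical record\n", 16);
    sync();          /* push kernel buffers toward the controller */
    close(fd);
    return 0;
}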

Given that XENIX (and that's what the question was about) can wait 2 or 3
seconds, the added danger is minimal; if you are that worried, buy a UPS.

> In practice you lose with cache in the controller vs. main memory also when
> you have two controllers, which is virtually a must (given that most are not
> multithreading, except SCSI ones) for high performance multiuser systems.

Before claiming that caching controllers lose, you should check the facts.

1) My posting was based on real, timed benchmarks. 

2) Why do you need two controllers?  I thought the point of having a cache
was to avoid that sort of hardware kludge.

3) The DPT 3011/70 has 10 (count 'em) LEDs to indicate what it's doing,
so it must be a `real computer' (it's got flashing lights :-) :-) :-)

4) The DPT is easy to use: you plug it in and it works; no drivers, no
broken installs.  Given that XENIX users (for the most part) would not know
where to start with any other solution, DPT has a good product.

> Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk at nsfnet-relay.ac.uk
> Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
> Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg at cs.aber.ac.uk
-- 
John Pettitt, Specialix, Giggs Hill Rd, Thames Ditton, Surrey, U.K., KT7 0TR
{backbone}!ukc!slxsys!jpp    jpp%slxinc at uunet.uu.net     jpp at specialix.co.uk
Tel: +44-1-941-2564       Fax: +44-1-941-4098         Telex: 918110 SPECIX G
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<