Any disk de-fragmenters out there?

Chris Torek chris at mimsy.umd.edu
Sun Dec 3 03:17:00 AEST 1989


In article <504 at shodha.dec.com> alan at shodha.dec.com
( Alan's Home for Wayward Notes File.) writes:
[a bunch of reasonable stuff deleted, although he consistently misspells
`effects' (this from the American boy who spells -ize words with -ise :-) )]

>One potential disadvantage of the block layout is that files
>get scattered all over the disk.  The files in a given directory
>may be close together, but two different directories (two users
>for example) may be far apart.  To help get around this Berkeley
>added request sorting to the disk drivers so that when the disk
>queues became full the requests would be served in such a way
>as to get the best global throughput from the disk.

Unix has had elevator sorting in the disk drivers since at least V7.
The BSD disksort code is not quite the same as the original, but it
was not `added', merely `preserved'.
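The elevator idea itself is simple enough to sketch.  Here is a toy
one-way-scan insertion in C; the `struct req' and the function name
are invented for illustration (the real disksort() works on struct
buf chains and also special-cases the transfer the drive is currently
doing), but the ordering rule is the same: requests at or past the
head position go first in ascending block order, then the wrapped-
around requests, again ascending.

```c
#include <assert.h>
#include <stddef.h>

/* Toy request: the real struct buf has many more fields. */
struct req {
	long b_blkno;		/* starting block of the transfer */
	struct req *next;
};

/*
 * Insert r into the queue so that the arm sweeps in one direction:
 * blocks >= headpos come first in ascending order, then the blocks
 * behind the arm, also ascending (picked up on the next sweep).
 */
void
elevator_insert(struct req **head, long headpos, struct req *r)
{
	struct req **pp;
	int new_ahead = r->b_blkno >= headpos;

	for (pp = head; *pp != NULL; pp = &(*pp)->next) {
		int cur_ahead = (*pp)->b_blkno >= headpos;

		if (new_ahead && !cur_ahead)
			break;	/* new req belongs in this sweep,
				   *pp already wrapped around */
		if (new_ahead == cur_ahead && r->b_blkno < (*pp)->b_blkno)
			break;	/* same sweep: keep ascending order */
	}
	r->next = *pp;
	*pp = r;
}
```

With the head at block 50 and requests for blocks 10, 70, 60, and 20
queued in that order, the queue comes out 60, 70, 10, 20: one sweep
outward, then the stragglers behind the arm.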

>... In ULTRIX-32 V1.0 the DSA (UDA50) driver still had the Berkeley
>sort routine in it.

Did it?  I thought the 4.2BSD uda.c (the Ultrix 1.x driver was a variant
thereof) never called disksort().

>It was removed in V1.1 in the belief that there was no need to sort the
>requests twice.

Indeed.  It is, however, worth pointing out that even on many
multi-user machines, the disk queue is rarely deeper than one entry.  I
put some statistics in our kernels when I was working on the KDB driver
for the 8250 for the BSD Tahoe release, and found that 85% of all I/O
was through the buffer cache, and that about 97% of all I/O consisted
of at most two requests (two is not surprising, as breada() generates
two).  One of them gets handed to the device immediately, and the
second gets queued (on hp/up/rk/... disks) or is also immediately
handed to the device (uda/kdb disks), but in either case one is being
done when the second is generated, so the queue depth is 1.
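The two-request pattern from breada() can be sketched with a toy
cache; the names (`cached', `start_read', `read_ahead') are all
invented here, and the real breada() of course operates on struct buf
and hands requests to the driver's strategy routine.  The point is
only that a miss on both the wanted block and its successor issues
two I/Os back to back, which is why the queue rarely gets deeper
than that.

```c
#include <assert.h>
#include <stdbool.h>

#define NBLOCKS 128

static bool cached[NBLOCKS];	/* toy buffer cache: is block resident? */
static int issued;		/* I/O requests handed to the driver */

/* Stand-in for the driver strategy call; pretend I/O completes at once. */
static void
start_read(int blkno)
{
	issued++;
	cached[blkno] = true;
}

/*
 * breada()-style read: fetch blkno synchronously if it missed the
 * cache, and start an asynchronous read of rablkno (the read-ahead
 * block) if it too is absent.
 */
static void
read_ahead(int blkno, int rablkno)
{
	if (!cached[blkno])
		start_read(blkno);	/* the block the caller wants */
	if (rablkno < NBLOCKS && !cached[rablkno])
		start_read(rablkno);	/* speculative read-ahead */
}
```

A sequential reader calling read_ahead(4, 5) on a cold cache issues
two requests; the next call, read_ahead(5, 6), issues only one, since
block 5 is already in.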

The only time the queue gets deep on a typical multi-user VAX is when
it starts paging heavily.  The pageout daemon can generate upwards of
20 write requests at once. . . .

It would be interesting to add disk queue statistics to large NFS file
servers.  A professor here is doing disk studies, including this sort
of thing, but I am not sure what results he has for queue depth.  (He
has other results that show that the disk stays on-cylinder 1/3 of the
time, which is pretty good.  Seek distance on the 2/3 misses is not as
good, however.)
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris at cs.umd.edu	Path:	uunet!mimsy!chris



More information about the Comp.unix.ultrix mailing list