System V file systems

The Beach Bum jfh at rpp386.Dallas.TX.US
Sun Oct 30 07:55:18 AEST 1988


In article <26599 at ucbvax.BERKELEY.EDU> bostic at ucbvax.BERKELEY.EDU (Keith Bostic) writes:
>In article <1988Oct27.173247.2789 at utzoo.uucp>, henry at utzoo.uucp (Henry Spencer) writes:
>> Or if you run your tests in a time-sharing environment, where the disk
>> heads are always on their way to somewhere else anyway.
>
>                                     If you have a system with an
>overloaded/limited number of disks, your paradigm is much more likely to be
>correct.

In the real world, where more than one process is accessing the disks at
any given time, the heads are always in the wrong place.

If you localize all of the information for a given file, as the Berkeley
Fast File System does, you need only access more than one file at a time to
break that locality: two files in different cylinder groups force a seek
every time the workload alternates between them.

I have never seen a realistic benchmark [ multi-process, multi-file, random
access ] validate the claims the BSD FFS puts forward - except to the extent
that its larger block size dictates.  And soon USG Unix will have 2K blocks,
so expect that advantage to diminish.
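
For concreteness, the kind of benchmark I mean looks roughly like the sketch
below: fork several readers, each doing random 1K reads against its own file,
so the heads must move between files on every scheduling switch.  The file
names (f0..f3), sizes, and process counts are all placeholders, not any
standard test; time the whole run with time(1) on each file system.

/* Multi-process, multi-file, random-access read benchmark (sketch only).
 * Assumes test files f0..f3 already exist, each FILESIZE bytes long;
 * the names, sizes, and counts here are made up for illustration.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

#define NPROC    4                      /* concurrent reader processes */
#define NREADS   1000                   /* random reads per process */
#define BLKSIZE  1024                   /* read size in bytes */
#define FILESIZE (8L * 1024 * 1024)     /* bytes per test file */

int main(void)
{
    int i;

    for (i = 0; i < NPROC; i++) {
        if (fork() == 0) {              /* each child reads its own file */
            char name[16], buf[BLKSIZE];
            long nblocks = FILESIZE / BLKSIZE;
            int fd, n;

            sprintf(name, "f%d", i);
            if ((fd = open(name, O_RDONLY)) < 0) {
                perror(name);
                exit(1);
            }
            srand(getpid());            /* distinct seek pattern per child */
            for (n = 0; n < NREADS; n++) {
                long blk = rand() % nblocks;

                lseek(fd, blk * BLKSIZE, SEEK_SET);
                read(fd, buf, BLKSIZE);
            }
            close(fd);
            exit(0);
        }
    }
    while (i-- > 0)                     /* wait for all readers to finish */
        wait(NULL);
    return 0;
}

Run single-process and the cylinder-group layout keeps seeks short; run with
NPROC > 1 and the files live in different groups, so the heads shuttle
between them - which is exactly the case the published numbers rarely show.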
-- 
John F. Haugh II                        +----Make believe quote of the week----
VoiceNet: (214) 250-3311   Data: -6272  | Nancy Reagan on Richard Stallman:
InterNet: jfh at rpp386.Dallas.TX.US       |          "Just say `Gno'"
UucpNet : <backbone>!killer!rpp386!jfh  +--------------------------------------


