How does filling a disk to capacity affect performance?

Bob Brankley brankley at usfvax2.EDU
Thu Apr 14 06:59:11 AEST 1988


In article <92 at iravcl.ira.uka.de>, fsinf at iravcl.ira.uka.de writes:
> > ... It goes on to say that having the
> > disk over 90% would affect its performance.  Does anyone know [...]
> > if there are any reasons not to keep a disk at 96-99%??????
> 
> Maybe the reasons are ...
> I've heard (but not verified) you can crash *every* unix-system using the
> CP-command when there is not enough space on disk. CP will not check
> whether the disk is full and overwrite blocks which are not free. The original
> data will be lost; also the machine is likely to go down.
> 
> If this isn't right, please correct.
> 


According to the paper "A Fast File System for UNIX," the reason for
keeping 10% of a file system free is performance.  The original studies
at Berkeley suggested that reserving 10% of a file system keeps the
BFF (Berkeley Fast File System) allocator efficient.  Filling a BFF
past the point where only 10% remains free causes most BFFs to switch
from minimizing seek time to minimizing wasted disk space.  This is the
reason "newfs" on Berkeley systems automatically reserves 10% of a file
system unless instructed otherwise.  Also, the only user allowed to
fill a BFF past the high-water mark (10% free) is "root."  When the
program "df" reports a file system at 100% capacity, root can still
fill it by an additional 10%.  The main effect of filling the file
system completely is a flood of "file system full" messages on the
system console.
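The gap between what root and an ordinary user can allocate shows up
directly in the file system statistics.  Below is a minimal sketch,
assuming a system that provides statvfs(3) (older BSDs expose the same
numbers through statfs(2) with similarly named fields); it prints the
total free blocks versus the blocks available to non-root users, and
the difference is the reserved space:

    /* Sketch only: report free vs. available blocks for a file system.
     * Assumes statvfs(3); f_bfree counts all free blocks (usable by
     * root), f_bavail counts free blocks usable by ordinary users. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv)
    {
        struct statvfs sv;
        const char *path = (argc > 1) ? argv[1] : "/";

        if (statvfs(path, &sv) != 0) {
            perror("statvfs");
            return 1;
        }

        printf("free blocks (root):     %lu\n", (unsigned long)sv.f_bfree);
        printf("free blocks (non-root): %lu\n", (unsigned long)sv.f_bavail);
        printf("reserved blocks:        %lu\n",
               (unsigned long)(sv.f_bfree - sv.f_bavail));
        return 0;
    }

On BSD systems the size of the reserve can also be inspected or changed
after the fact with tunefs(8)'s -m (minfree) option, though lowering it
below 10% buys back space at the cost of the time-optimizing allocation
policy described above.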


Bob Brankley
CSNET:  usfvax1!brankley at usf.edu
UUCP:   {ihnp4!codas, gatech}!usfvax2!usfvax1!brankley


