holes in files

Andrew Hume andrew at alice.att.com
Wed Dec 26 17:33:23 AEST 1990


In article <2809 at cirrusl.UUCP>, dhesi%cirrusl at oliveb.ATC.olivetti.com (Rahul Dhesi) writes:
~ In <8432:Dec1622:40:0790 at kramden.acf.nyu.edu>
~ brnstnd at kramden.acf.nyu.edu (Dan Bernstein) writes:
~ 
~    I want to make sure that blocks 17 through 22 (expressed in byte
~    sizes) will be guaranteed not to run out of space when I write to
~    them. You're saying that I should have no way to make this
~    guarantee.
~ 
~ Well, "df" works nicely.


	I rate this as a completely fatuous answer, devoid of
use and common sense. I had a similar problem: a network server
gets a request from a client to store a file of a given length.
It is not permissible to say yes and then say no halfway through
the file. I handle it by first writing zeros out to the given length,
then saying yes or no, and only then reading and writing the actual
data. When, exactly, would I do the df? Actually, from what this
thread has uncovered, it might be safer to write non-zero data, to
avoid smart filesystems that turn runs of zeros into holes. What
scares me more are hyperintelligent disk drives with built-in data
compression, which might accept 20 blocks of some values but then
be unable to overwrite them, because the new data compresses at a
different rate.

	andrew hume
	andrew at research.att.com



More information about the Comp.unix.internals mailing list