holes in files

John R. Levine johnl at iecc.cambridge.ma.us
Thu Dec 6 02:52:48 AEST 1990


In article <10960:Dec507:07:4190 at kramden.acf.nyu.edu> brnstnd at kramden.acf.nyu.edu (Dan Bernstein) writes:
>In article <1990Dec5.052124.28435 at erg.sri.com> zwicky at erg.sri.com (Elizabeth Zwicky) writes:
>> Unfortunately, you have to get pretty intimate with the disk to tell that
>> the 20 meg of nulls aren't there
>
>Hardly. You just look at the file size. Other than the file size, there
>is no way a portable program can tell the difference between a hole and
>an allocated block of zeros.

On every modern version of Unix that I know of, there is no way for an
application to tell the difference between a block of zeros and a hole other
than poking at the raw disk.  The file size is the logical file size
including the holes, e.g. if you seek out to byte 1000000 and write a
byte there, the file size will be 1000001 even though the file is mostly
holes.

For that reason, an entirely reasonable strategy is always to leave a hole
when writing a full block of zeros.  There may even be some versions of
Unix that do that automatically in the write() call.

-- 
John R. Levine, IECC, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl at iecc.cambridge.ma.us, {ima|spdcc|world}!iecc!johnl
"Typically supercomputers use a single microprocessor." -Boston Globe