ulimit hassles

Piercarlo Grandi pcg at aber-cs.UUCP
Fri Feb 2 23:38:30 AEST 1990


In article <15142 at bfmny0.UU.NET> tneff at bfmny0.UU.NET (Tom Neff) writes:
    In article <1990Jan31.034746.8408 at virtech.uucp> cpcahil at virtech.UUCP (Conor P. Cahill) writes:
    >BTW - when I use this kind of backup I use the following:
    >
    >	find ... | cpio ... | compress | dd of=/dev/rmt0 obs=2048k
    >
    >This will run much faster and most of the time will use less space on 
    >the tape since it will stream 2MB segments.	

You should be using 'ddd' (which has been posted to
comp.sources.misc) or, even faster, my own 'team' (which has
been posted to alt.sources; a slightly improved version will be
submitted to comp.sources.misc, so don't ask me for a copy
now). These use two (or more) processes with local buffering
to overlap input with output, and smooth out variations in the
speed with which data is read from the disc. I normally run
my own 'team' with four processes each with a 30K buffer, and it
always streams continuously.
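
The same overlap can be approximated with standard tools by putting an
extra buffering 'dd' stage between the compressor and the tape writer
(this is only a rough sketch of the idea, not the 'team' command line,
and the find/cpio arguments are elided as in the quoted post):

	# the middle 'dd' is a separate process with its own 30K buffer,
	# so reading from the pipe and writing to the tape can overlap;
	# 'team' does the same with several such buffers and processes.
	find ... | cpio ... | compress | dd bs=30k | dd bs=2048k of=/dev/rmt0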

    Actually on my box, using "bs=2048k" (both input and output) rather
    than just "obs=2048k" (output) runs even faster.  It seems as though
    with the one block size and no conversion, 'dd' reads and writes
    simultaneously with good efficiency.

Using 'obs' alone is a bad idea; 'dd' will allocate *two* buffers,
one 'ibs' (default 512?) bytes long and the other 'obs' bytes
long, and *copy* the data between the two before writing. If you
use 'bs' instead, 'dd' allocates just one buffer, and this is
vastly faster.
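
Concretely, the pipeline quoted above is better written with a single
block size (a minimal sketch, keeping the elided find/cpio arguments
from the original post):

	# with 'bs=' alone, dd allocates one 2MB buffer and reads and
	# writes through it directly, with no intermediate copy
	find ... | cpio ... | compress | dd bs=2048k of=/dev/rmt0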
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk at nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg at cs.aber.ac.uk
