dump on old 1.2 Ultrix...

George Robbins grr at cbmvax.UUCP
Wed Aug 2 06:54:33 AEST 1989


In article <11953 at ulysses.homer.nj.att.com> ggs at ulysses.homer.nj.att.com (Griff Smith) writes:
> In article <7492 at cbmvax.UUCP>, grr at cbmvax.UUCP (George Robbins) writes:
> ...
> > The actual limits here are probably based on Massbus 16-bit byte
> > count registers, which enforce transfer sizes of 1-65536 bytes.
> > tar seems to make a software decision to avoid 65536 byte blocks.
> > 
> > Now it's entirely possible that other I/O architectures / controllers
> > may allow larger transfers.
> 
> Large blocks may also cause a problem when doing error recovery.  For instance,
> if using 9-track tape at 6250 bpi a 64K block takes about ten inches of tape.
> If the device driver tries to skip over a bad spot on the tape by rewinding
> over a bad record and writing a three inch gap, the bad spot may be on the
> other seven inches.  If the driver gives up after three write attempts, it
> will never skip over an error in the last inch of the record.  Larger blocks
> make the problem worse.  64k blocks are already at 97% of maximum density for
> 6250, so larger blocks are silly.
> 
> Adjust these arguments for newer media, but 64k should usually be a good size;
> it's already 8 times larger than the disk block size used by most BSD-based
> systems.

This is basically true.  The ability to use large blocks reliably depends
on the "quality" or error density of the drive/media combination.  In a
"DP" environment, where tapes are heavily cycled, the media do wear out
and errors become more likely.  In the unix backup environment, where the
"big" backup tapes might be cycled weekly or monthly, there doesn't seem
to be any problem with 64K block sizes (at least at 6250 BPI; at 1600 BPI
you surely want smaller blocks).
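
For example, a level 0 dump piped through dd in 64K (128 x 512-byte)
records might look like the sketch below.  The device names are only
illustrative Ultrix-style examples, and whether your dump accepts "-"
for standard output varies by version:

    # level 0 dump, reblocked onto 9-track tape in 64K records
    # (bs=128b means 128 512-byte blocks = 65536 bytes)
    dump 0f - /dev/rra0a | dd of=/dev/rmt0h bs=128b

    # at 1600 BPI a smaller record, say 10K, is the safer choice
    dump 0f - /dev/rra0a | dd of=/dev/rmt0l bs=20b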

The issue is further confused by devices that don't share the "variable
length block" nature of the traditional tape drives.  Sun compatible
cartridges, for instance, write in 512 byte blocks regardless of the
nominal transfer size, so you can write at bs=126b and read back at
bs=13b if you want to.  Here the urge for large block sizes is to avoid
start/stop operation and keep the drive streaming, via the I/O clustering
implicit in requesting large "block" sizes.  Since error control/recovery
is done on a per hardware "block" basis, there's no physical downside to
this.
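
As a concrete (and purely illustrative) sketch, assuming a Sun SCSI
cartridge drive at /dev/rst0, the write and read request sizes need
not match on such a fixed-block device:

    # write a tar archive in 63K (126 x 512-byte) requests...
    dd if=backup.tar of=/dev/rst0 bs=126b

    # ...and read it back in 6.5K (13 x 512-byte) requests
    dd if=/dev/rst0 of=check.tar bs=13b

On a variable-length block 9-track drive the second command would
typically fail or truncate records instead, since each write there
defines one physical record on the tape.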

I'm not sure where the TK50 & TK70 fit into this spectrum.  They are
"preformatted", but I don't know the details of their error handling.

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr at uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)


