8mm tape length for dump

Chris Torek chris at mimsy.UUCP
Sat Aug 12 01:48:36 AEST 1989


>>>Can anyone tell me what the length for an 8mm tape drive is

>In article <18999 at mimsy.UUCP> I answered:
>>No.

In article <111 at engr.wisc.edu> laplant at engr.wisc.edu (dave laplant) writes:
>yes.
>
>according to the documentation i receive with our 8 mm tape drive, the tape
>is 346 feet long and has density of 4.4 megabits per inch. HOWEVER, the
>density is higher than dump can handle, and they recommend using a size
>of 6000 and a density of 54,000. this will specify the correct size.

Nope.  (Pretty close, though....)

Here are the magic calculations from the 4.3BSD dump (which is what
we use everywhere---if you are using the SunOS dump under SunOS,
you should switch to the 4.3BSD dump):

	if 'c' flag:
		float fetapes = 
		(	  esize		/* blocks */
			* TP_BSIZE	/* bytes/block */
			* (1.0/density)	/* 0.1" / byte */
		  +
			  esize		/* blocks */
			* (1.0/ntrec)	/* streaming-stops per block */
			* 15.48		/* 0.1" / streaming-stop */
		) * (1.0 / tsize );	/* tape / 0.1" */

where esize is the number of 1024-byte blocks dump thinks it is going to
dump; density is the value from the `d' flag divided by 10; ntrec is the
value from the `b' flag if given, otherwise 32 if `d' was given and the
resulting density is >= 625 (i.e., `dump d 6250' or more), otherwise 10;
and tsize is the tape size from the `s' flag (given in feet) converted to
units of 0.1 inch, i.e., s*120, defaulting to 2300*120 without the `c'
flag or 1700*120 with it.

(I will get to the `regular' method, i.e., non-cartridge, in a moment.
Just trying to be thorough.)
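
In case that prose is hard to follow, here is a tiny sketch of the same
flag handling (my own paraphrase and my own variable names, not the
actual dump source):

	#include <stdio.h>

	#define TP_BSIZE 1024		/* bytes per dump block */

	int density;			/* `d' value / 10 */
	int ntrec;			/* 1K blocks per tape record */
	long tsize;			/* tape length in 0.1" units */

	void
	setdefaults(int d, int dgiven, int b, int bgiven, int s, int sgiven,
	    int cflag)
	{
		density = d / 10;	/* (ignoring the no-`d' default) */
		if (bgiven)
			ntrec = b;
		else if (dgiven && density >= 625)
			ntrec = 32;	/* `d 6250' or denser */
		else
			ntrec = 10;
		if (sgiven)
			tsize = (long)s * 120;	/* `s' is in feet */
		else
			tsize = (cflag ? 1700L : 2300L) * 120;
	}

	int
	main(void)
	{
		/* e.g. `0ufdsc' with d=54000, s=6000, as below: */
		setdefaults(54000, 1, 0, 0, 6000, 1, 1);
		printf("density=%d ntrec=%d tsize=%ld\n", density, ntrec, tsize);
		return 0;
	}

That prints density=5400, ntrec=32, tsize=720000 (i.e., 6000*120), which
is what gets used in the next calculation.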

So if you used `0ufdsc' with d=54000 and s=6000, we have:

	(	  esize
		* 1024
		* (1.0/5400)
	 +
		  esize
		* (1.0/32)
		* 15.48
	) * (1.0 / (6000*120));

We want this to come out to 1.0 when esize represents approximately 2 GB;
since esize is counted in 1024-byte blocks, plug in esize=(2*1024*1024):

	bc << end
	scale=10
	(2*1024*1024 * 1024 * (1.0/5400) + 2*1024*1024 * (1.0/32) * 15.48) \
	* (1.0 / (6000*120))
	end

1.9612345480 tapes.  But then, we were not using the `c' flag anyway.
On to the other calculation.  All variables are as before:

	if not cflag:
		int tenthsperirg = (density == 625) ? 3 : 7;
		float fetapes =
		(	  esize		/* blocks */
			* TP_BSIZE	/* bytes / block */
			* (1.0/density)	/* 0.1" / byte */
		  +
			  esize		/* blocks */
			* (1.0/ntrec)	/* IRG's / block */
			* tenthsperirg	/* 0.1" / IRG */
		) * (1.0 / tsize );	/* tape / 0.1" */

so we have

	(	  esize
		* 1024
		* (1.0/5400)
	  +
		  esize
		* (1.0/32)
		* 7
	) * (1.0 / (6000*120))

and again we will plug in esize=(2*1024*1024), i.e., about 2 GB of data
in 1024-byte blocks:

	bc << end
	scale=10
	(2*1024*1024 * 1024 * (1.0/5400) + 2*1024*1024 * (1.0/32) * 7) \
	* (1.0 / (6000*120))
	end

1.1894155032.  Oops.  Not bad, though; a bit on the conservative side,
but that is generally best.

But---as I pointed out in my original posting---none of this matters
anyway!  The important question that has to be answered before you
can give a size for these tapes is:  How many long tape marks will
you write?  With a tape capacity of 2.2+ GB, and disks of 100 to 600
MB being common, obviously you will want to put many dumps on each
tape.  EVERY DUMP IS SEPARATED BY A TAPE MARK, and each long tape mark
consumes approximately 2 MB worth of tape!  (This is only 1/1024 of
the tape, but if each dump is ~100 MB, it is still a running 2% error.)
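
To put rough numbers on that (these are my back-of-the-envelope figures,
using the ~2.2 GB, ~100 MB, and ~2 MB values above, not anything dump
computes):

	#include <stdio.h>

	int
	main(void)
	{
		double tape_mb = 2200.0;	/* ~2.2 GB of tape */
		double dump_mb = 100.0;		/* one smallish disk dump */
		double mark_mb = 2.0;		/* one long tape mark */
		int ndumps = (int)(tape_mb / (dump_mb + mark_mb));

		printf("%d dumps fit; tape marks cost %.0f MB, i.e. %.0f%% per dump\n",
		    ndumps, ndumps * mark_mb, 100.0 * mark_mb / dump_mb);
		return 0;
	}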

If you use short tape marks, this question goes away, but then the
tapes get very hard to reposition.

Oh, just for fun, let me see how close my d=6250,s=32000 `WAG' came:

	bc << end
	scale=10
	(2*1024*1024 * 1024 * (1.0/625) + 2*1024*1024 * (1.0/32) * 3) \
	* (1.0 / (32000*120))
	end

Only .9459243103.  Maybe not conservative enough.  To check, plug in
2.28 GB:

	bc << end
	scale=10
	(2.28*1024*1024 * 1024 * (1.0/625) + 2.28*1024*1024 * (1.0/32) * 3) \
	* (1.0 / (32000*120))
	end

1.0783537137.  Whew!

( :-) )

(I got the 32000, of course, by solving the non-cartridge formula for
`s' at a given esize, with density=625 and ntrec=32, back when we
installed the drives.)
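
If you want to repeat that exercise, one way (my rearrangement of the
non-cartridge formula above, not anything from dump) is to compute the
tape length needed and divide by 120 to get feet for `s':

	#include <stdio.h>

	int
	main(void)
	{
		double esize = 2.0 * 1024 * 1024;	/* 1K blocks, ~2 GB of data */
		double density = 625.0;			/* `d 6250' / 10 */
		double ntrec = 32.0;
		double tenthsperirg = (density == 625.0) ? 3.0 : 7.0;
		double tenths = esize * 1024.0 / density	/* data */
		    + esize / ntrec * tenthsperirg;		/* record gaps */

		printf("s >= %.1f feet\n", tenths / 120.0);
		return 0;
	}

That comes out around 30271.5 feet; round up from there (or feed it a
bigger esize if you want some slop), which lands in the same ballpark as
the 32000 above.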
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain:	chris at mimsy.umd.edu	Path:	uunet!mimsy!chris


