query about multiple block write efficiency

utzoo!decvax!ucbvax!menlo70!sri-unix!mclure at SRI-UNIX
Thu Jan 7 23:17:46 AEST 1982


I have a question about write()ing in multiples of BUFSIZ chars vs.
write()ing in gigantic chunks. How much of a difference does it make?
The program takes pieces of text from one file, such as:

	text1:	400 chars
	text2:	1005 chars
	text3:	15332 chars
	text4:	566 chars
	text5:	712 chars
	etc.

and must write out text1, text3 and text5 to another file as efficiently
(and quickly) as possible.  My current scheme simply uses a gigantic
10*BUFSIZ char array: each selected textN is read from the input file into
the array and then written out to the other file with a single write().
How much more efficient would it be to guarantee that each write produces
a multiple of BUFSIZ?  Would this produce a noticeable speedup?  We're on
an 11/70 running the Berkeley Software Distribution.
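
For concreteness, here is a minimal sketch of what I'm doing now (not the
actual program): the "input"/"output" file names, the offsets, and the
piece table are made up for illustration, standing in for however the real
program knows where text1, text3 and text5 live.  Each piece goes out in
write()s of whatever size the piece happens to be, with no attempt at
BUFSIZ alignment.

/*
 * Sketch only: copy selected pieces of an input file to an output file
 * through one large buffer, writing each piece as it is read.
 */
#include <stdio.h>	/* BUFSIZ */
#include <stdlib.h>	/* exit */
#include <fcntl.h>	/* open, O_* flags */
#include <unistd.h>	/* read, write, lseek, SEEK_SET */
#include <sys/types.h>	/* off_t */

struct piece { off_t offset; size_t length; };

int
main(void)
{
	static char buf[10 * BUFSIZ];	/* the "gigantic" array */
	struct piece pieces[] = {	/* hypothetical input layout:  */
		{     0,   400 },	/* text1                       */
		{  1405, 15332 },	/* text3                       */
		{ 17303,   712 },	/* text5                       */
	};
	int in  = open("input", O_RDONLY);
	int out = open("output", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (in < 0 || out < 0)
		exit(1);

	for (size_t i = 0; i < sizeof pieces / sizeof pieces[0]; i++) {
		size_t left = pieces[i].length;

		lseek(in, pieces[i].offset, SEEK_SET);
		while (left > 0) {
			/* normally the whole piece fits in buf in one pass */
			size_t want = left < sizeof buf ? left : sizeof buf;
			ssize_t n = read(in, buf, want);

			if (n <= 0)
				break;
			write(out, buf, n);	/* size is whatever the piece */
			left -= n;		/* dictates, not BUFSIZ-aligned */
		}
	}
	return 0;
}

The alternative I'm asking about would instead accumulate the pieces in buf
and flush it only in BUFSIZ-sized chunks, so that every write() (except
perhaps the last) is a multiple of the block size.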

Has anyone done any studies on this?


