rsize, wsize benchmarks

Monty Mullig monty at nestor.bsd.uchicago.edu
Mon May 15 12:43:14 AEST 1989


i've been seeing a lot on the net about the importance of turning down
the rsize and wsize for nfs-mounted partitions when the client and server
are of significantly different speeds, and since we have that situation
here, i decided to do some (very) simple benchmarks.
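
(for reference: rsize and wsize are nfs mount options giving the read and
write transfer sizes in bytes, so "turning them down" just means remounting
with smaller values -- by hand it would look something like the lines below,
using the same host, path, and timeo as the trials that follow.)

	nestor# umount /u1
	nestor# mount -o rw,rsize=1024,wsize=1024,timeo=100 delphi:/u1 /u1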

here are the results:

>----------------------------------------------------------<

benchmark of read/write sizes for nfs mounted partitions

client: nestor
	a diskless sun 3/50, 4MB, no windowing systems running during the test.

server: delphi
	a sun 4/280, 32MB, 2x892MB drives, each with its own 451 controller.

conditions: performed on a sunday evening with minimal network traffic
	other than that generated by the test.  no active login sessions
	on the server, minimal competition for server cpu.

test: 1) perform a cp of a large file (9,517,094 bytes) from one
	location to another on the same partition, varying the rsize
	and wsize of that partition on the client.
      2) perform a wc on the same file, applying the same variations
	 (the per-trial procedure is sketched below).

	load averages prior to test:
		load average: 1.77, 1.35, 1.05 (nestor)
		load average: 0.10, 0.02, 0.00 (delphi)
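
to make the procedure concrete, each trial amounted to roughly the
following.  this is a sketch after the fact, not the actual keystrokes; it
assumes /u1 has already been remounted with the fstab entry shown for that
trial.

	#!/bin/csh -f
	# one trial: three timed cp/wc pairs against the nfs-mounted /u1
	set src = /u1/bsd/loadzone/sls.198904.raw
	cd /u1				# working directory on the same partition
	foreach i (1 2 3)
		time cp $src garbage	# nfs reads and writes
		time wc garbage		# nfs reads only
	end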

trial 1: read/write sizes using default

	fstab entry for /u1 partition:
		delphi:/u1 /u1 nfs rw 0 0

	observation 1:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 18.3s 1:33 19% 0+288k 1+1163io 1176pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.9u 16.7s 1:47 89% 0+40k 1163+1io 1176pf+0w

	observation 2:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 18.3s 1:33 19% 0+296k 1+1161io 1178pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.1u 16.9s 1:45 90% 0+40k 1164+0io 1185pf+0w

	observation 3:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 19.2s 1:35 20% 0+280k 2+1162io 1187pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.7u 16.5s 1:45 90% 0+40k 1164+0io 1185pf+0w

	average cp: 1:33.6 (93.6s)
	average wc: 1:45.6 (105.6s)

trial 2: read/write sizes of 2048, timeo=100

	fstab entry for /u1 partition:
		delphi:/u1 /u1 nfs rw,rsize=2048,wsize=2048,timeo=100 0 0

	observation 1:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 23.5s 4:41 8% 0+256k 1+1163io 1189pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		84.0u 20.7s 2:08 81% 0+40k 1164+0io 1181pf+0w

	observation 2:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 22.5s 4:57 7% 0+288k 2+1162io 1180pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		81.8u 20.3s 2:00 84% 0+40k 1164+0io 1186pf+0w

	observation 3:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 22.4s 4:53 7% 0+280k 2+1162io 1187pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		81.3u 20.1s 2:20 72% 0+40k 1164+0io 1181pf+0w

	average cp: 4:50.3 (290.3s) +210.2% over defaults ave
	average wc: 2:09.3 (129.3s) + 22.4% over defaults ave
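
	(the percentages are just ratios of the elapsed-time averages, e.g.
	290.3s / 93.6s = 3.10, i.e. +210.2% over the defaults average.)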

trial 3: read/write sizes of 1024, timeo=100

	fstab entry for /u1 partition:
		delphi:/u1 /u1 nfs rw,rsize=1024,wsize=1024,timeo=100 0 0

	observation 1:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 23.5s 1:45 22% 0+280k 2+1162io 1181pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.0u 17.0s 1:45 90% 0+40k 1164+0io 1185pf+0w

	observation 2:

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 25.3s 2:02 20% 0+280k 2+1162io 1188pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.6u 16.6s 1:45 90% 0+40k 1164+0io 1185pf+0w

	observation 3: 

		nestor# time cp /u1/bsd/loadzone/sls.198904.raw garbage
		0.0u 25.2s 1:35 26% 0+288k 2+1162io 1189pf+0w

		nestor# time wc garbage
		  102505  281141 9517094 garbage
		79.5u 16.5s 1:45 90% 0+40k 1164+0io 1185pf+0w

	average cp time: 1:47.3 (107.3s) +14.6% over defaults
	average wc time: 1:45.0 (105.0s) -0.6% over defaults

>---------------------------------------------------------------<

i was a bit surprised by these numbers.  elapsed times on these show the
defaults doing rather better than the two suggested sizes that i've seen.
obviously, since this test was run at a time when network activity was
so low that it approximated a two-node net, these results won't
necessarily reflect what would happen under normal weekday conditions,
but i'm not convinced that a normal load on the network, server, and
client would substantially alter the relative performances of the sizes
tried.  also, i used the default soft|hard mount option (i haven't been
able to find which is the default in the manuals yet), and i suppose that
using the other option *could* change the results.  i also used a
rather primitive timing mechanism, but since i was interested in elapsed
time it seemed sufficient to me.  finally, i should note that the swap
partition for the 3/50 is on the same controller/drive on the server as
the /u1 partition on which the tests were performed, and that it [swap]
was mounted with the default rsize and wsize for all of the tests (which
might also make the results questionable, but, again, i don't know).
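
(one way to take the soft|hard ambiguity out of a future run would be to
name the choice explicitly in the options field -- e.g. the line below,
which is just the trial 2 entry with "hard" added; i haven't rerun
anything with it.)

	delphi:/u1 /u1 nfs rw,hard,rsize=2048,wsize=2048,timeo=100 0 0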

note: i was unable to determine the default rsize or wsize from my review
of the manuals.  if anyone has that information, it would be most
valuable.

a few weeks ago, weltyc at cs.rpi.edu wrote:

>> You should also never mount nfs partitions at the root level.

why so?

comments on this test should be emailed directly to me
(monty at delphi.bsd.uchicago.edu) and i'll post a summary if there is enough
response.  i'll be glad to take suggestions for improving the benchmark
methods if they might lead to noticeable differences in the relative
performances of the trial sizes.  these benchmarks take a bit of time to
run, so i have to do them on the weekends, when i can make the time
without feeling too pressured by other tasks (so, suggestions like "run
them during normal hours" i won't be able to follow up on).

--monty
  univ of chicago
  biological sciences division


