Machine *load* calculation

Richard Caloggero rich at eddie.MIT.EDU
Wed Mar 2 04:34:58 AEST 1988


     A while ago (.. a long while ago ..) there was some discussion
floating around this newsgroup (comp.sys.apollo) about dsee.  One of
the articles talked about dsee's ability to construct the object files
in parallel by compiling the various source files simultaneously on a
number of nodes.  It went on to say that the program selected those
nodes whose *load* was low in order to maximize throughput.

     Ok, all very well.  But how, then, is the *load* calculated?  I
have seen various methods used, but they all seem rather vaguely
defined.

	----- 1). The load is defined as the average number of
	    *runnable* jobs in the system over some interval of
	    time.  How might one calculate this (on an Apollo and
	    on a vanilla unix box)?  See the first sketch after
	    this list.

	----- 2). One could calculate the ratio of the connect
	    (clock) time vs. the cpu time used by some *_small_*
	    compute-bound routine.  I assume this would always show
	    something greater than 1 on a multiprocessing system,
	    unless it was the only thing running and was never
	    interrupted (in which case the timing routines wouldn't
	    work anyway, since no clock interrupts would happen).
	    This probe also takes longer to run as the load goes
	    up, which doesn't seem desirable.  See the second
	    sketch after this list.

	-----
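
     For method 1), I don't know what the Apollo equivalent would be.
On unix boxes that happen to provide getloadavg(3), reading the
kernel's own run-queue averages is about a one-liner; a minimal
sketch in C (assuming getloadavg(3) exists on your system -- if not,
you'd have to dig the averages out of /dev/kmem the way uptime(1)
does):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
	    double loads[3];

	    /* getloadavg(3) reports the 1-, 5-, and 15-minute
	       averages of the number of runnable jobs, as kept
	       by the kernel. */
	    if (getloadavg(loads, 3) == -1) {
	        perror("getloadavg");
	        return 1;
	    }
	    printf("load averages: %.2f %.2f %.2f\n",
	           loads[0], loads[1], loads[2]);
	    return 0;
	}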
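
     For method 2), here is roughly what I have in mind (a sketch
only; the loop bound and the choice of gettimeofday(2) for wall time
and clock(3) for cpu time are arbitrary):

	#include <stdio.h>
	#include <time.h>
	#include <sys/time.h>

	int main(void)
	{
	    struct timeval t0, t1;
	    clock_t c0, c1;
	    volatile double x = 0.0;   /* volatile: keep the loop honest */
	    long i;
	    double wall, cpu;

	    gettimeofday(&t0, NULL);          /* wall-clock start */
	    c0 = clock();                     /* cpu-time start   */

	    for (i = 0; i < 10000000L; i++)   /* small compute-bound routine */
	        x += 1.0 / (double)(i + 1);

	    c1 = clock();
	    gettimeofday(&t1, NULL);

	    wall = (t1.tv_sec - t0.tv_sec)
	         + (t1.tv_usec - t0.tv_usec) / 1e6;
	    cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;

	    /* ratio is near 1 on an idle machine and grows with load */
	    printf("wall %.3fs  cpu %.3fs  ratio %.2f\n",
	           wall, cpu, cpu > 0 ? wall / cpu : 0.0);
	    return 0;
	}

     Note that this shares the objection raised above: the probe
itself slows down exactly when the machine is busiest.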


     Does anyone care to comment on this?  Any other ideas?





-- 
						-- Rich (rich at eddie.mit.edu).
	The circle is open, but unbroken.
	Merry meet, merry part,
	and merry meet again.


