4.2BSD kernel auto-nicing, scheduling

Jeff Straathof jeff at umcp-cs.UUCP
Fri Feb 28 06:48:01 AEST 1986


In article <3375 at umcp-cs.UUCP> chris at umcp-cs.UUCP (Chris Torek) writes:
>...  I would rather let Jeff speak for himself.

Here I speak.  Let me give a brief description of how Maryland's new UNIX
scheduler works and then tell what I've done with it since the USENIX
proceedings paper.

The new scheduler employs a multilevel feedback run queue with round-robin
scheduling in each level.  Quantum sizes vary by level, and a process's
quantum begins when it gets control of the CPU.  A process is preempted
when a higher-priority process leaves a blocked state.  The priority of a
process directly determines the level of the run queue in which it resides;
the highest-priority level is serviced first.
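
To give the flavor of the structure, here is a rough sketch in C of the
run-queue handling.  It is not the code in the distribution; the structure,
the function names, and the quantum table are all invented for illustration.

/*
 * Illustrative sketch only, not the distributed code; struct rq_proc,
 * NQUEUE, runq, quantum, setrq, and select_next are invented names.
 */
#include <stddef.h>

#define NQUEUE 8                        /* number of run-queue levels */

struct rq_proc {
        int             rq_pri;         /* current level, 0 = highest */
        struct rq_proc *rq_link;        /* next process in this level */
};

static struct rq_proc *runq[NQUEUE];    /* head of each level's queue */
static int quantum[NQUEUE] =            /* per-level quantum, in ticks */
        { 1, 2, 4, 8, 16, 32, 64, 128 };

/* Put a process at the tail of its level (round robin within a level). */
static void
setrq(struct rq_proc *p)
{
        struct rq_proc **pp;

        p->rq_link = NULL;
        for (pp = &runq[p->rq_pri]; *pp != NULL; pp = &(*pp)->rq_link)
                ;
        *pp = p;
}

/*
 * Dispatch: take the first process from the highest-priority nonempty
 * level; its quantum starts now, when it gets control of the CPU.
 */
static struct rq_proc *
select_next(int *ticks)
{
        int lev;

        for (lev = 0; lev < NQUEUE; lev++)
                if (runq[lev] != NULL) {
                        struct rq_proc *p = runq[lev];

                        runq[lev] = p->rq_link;
                        *ticks = quantum[lev];
                        return p;
                }
        return NULL;                    /* nothing runnable */
}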

The priority of a process is determined by what it does.  Priority boosts
are given for socket input, terminal input (depending on the terminal mode),
and disk input, among other minor things.  Priority bumps are given for
quantum expirations.  Each process has a priority maximum and a priority
minimum limiting the range of its priority; these values are inherited from
its parent.
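
The boost/bump mechanism with per-process limits looks roughly like the
following.  Again, this is just an illustrative sketch with invented names
and no particular amounts, not the actual code.

/*
 * Sketch of boost/bump with per-process limits; the structure and the
 * function names are invented for illustration.
 */
struct sched_prio {
        int p_pri;              /* current priority, smaller = better */
        int p_primin;           /* best priority this process may have */
        int p_primax;           /* worst priority this process may have */
};

/* Keep a priority inside the process's own range. */
static int
clamppri(struct sched_prio *p, int pri)
{
        if (pri < p->p_primin)
                pri = p->p_primin;
        if (pri > p->p_primax)
                pri = p->p_primax;
        return pri;
}

/* Boost on wakeup from socket, terminal, or disk input. */
static void
pri_boost(struct sched_prio *p, int amount)
{
        p->p_pri = clamppri(p, p->p_pri - amount);
}

/* Bump when the quantum expires. */
static void
pri_bump(struct sched_prio *p, int amount)
{
        p->p_pri = clamppri(p, p->p_pri + amount);
}

/* At fork time the child simply copies the parent's range. */
static void
pri_inherit(struct sched_prio *child, struct sched_prio *parent)
{
        child->p_primin = parent->p_primin;
        child->p_primax = parent->p_primax;
        child->p_pri    = parent->p_pri;
}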

The quantum sizes, boost amounts, and bump amount can be tuned per
configuration.  The priority maximum and priority minimum of any running
process can be changed, thus providing external resource control.
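
Conceptually the tunables sit in one table set at configuration time, and
the external control amounts to resetting a process's limits and re-clamping
its current priority.  Another invented sketch, not the distributed code:

/* Per-configuration tunables, gathered in one place; names are invented. */
#define NQUEUE 8

struct sched_tune {
        int quantum[NQUEUE];    /* per-level quantum, in clock ticks */
        int boost_sock;         /* boost for socket input */
        int boost_tty;          /* boost for terminal input */
        int boost_disk;         /* boost for disk input */
        int bump_expire;        /* bump at quantum expiration */
};

struct prilim {
        int p_pri;              /* current priority */
        int p_primin;           /* best priority allowed */
        int p_primax;           /* worst priority allowed */
};

/*
 * External resource control: reset a running process's range and
 * re-clamp its current priority into the new range.
 */
static void
setprilim(struct prilim *p, int newmin, int newmax)
{
        p->p_primin = newmin;
        p->p_primax = newmax;
        if (p->p_pri < newmin)
                p->p_pri = newmin;
        if (p->p_pri > newmax)
                p->p_pri = newmax;
}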

The new scheduler has not been subjected to any rigorous performance
testing.  Its code is much cleaner than that of the previous scheduler, so
its throughput should be better.  It is obvious to users of a heavily
loaded machine that the scheduler distinguishes between interactive and
CPU-bound processes extremely well.  As a matter of fact, I am writing this
on a heavily loaded machine not running my scheduler and am getting pretty
annoyed.  The people running their troffs of course don't have the same
feelings.

Since the last USENIX conference, I've fixed up some of the code and prepared
it for distribution.  I even dug up our old 4.2 code and reinstalled the
scheduler in that.  To tune my scheduler and to find out the real performance
differences between it and the standard scheduler, I have developed a pretty
snazzy remote terminal emulator to run some benchmarks.  A description of
the emulator and the results of the testing will hopefully make it to the next
USENIX conference.

The scheduler code will be available very soon for both 4.2 and beta 4.3 BSD.
It's great for those who don't think the standard scheduler is what they
need.  It's a must for those doing their own scheduling work who want to
avoid starting from scratch.  Send me mail if you want more information.

-- 
Spoken: Jeff Straathof 	ARPA:	jeff at mimsy.umd.edu	Phone: +1-301-454-7690
CSNet:	jeff at umcp-cs 	UUCP:	seismo!umcp-cs!jeff
USPS: Comp Sci Dept, Lab for Parallel Comp, Univ of Md, College Park, MD 20742


