Autotasking considered harmful

David B. Serafini serafini at nas.nasa.gov
Thu Oct 18 05:44:55 AEST 1990


In article <69 at garth.UUCP> fouts at bozeman.bozeman.ingr.UUCP (Martin Fouts) writes:
>
>Alan Klietz replies to an earlier groan about the difficulty of
>autotasking on an interactive system:
>
>Granted, autotasking complicates things somewhat because it makes a context
>switch much more painful where stragglers can slow down striped loops. 
>I don't know how UNICOS 6 does things, but as you suggested it ought to
>dedicate processors to autotasking or greatly penalize switches of active 
>autotasked jobs.  Swapping can be avoided by setting your sched parameters
>appropriately.
>
>The problem with {multi,micro,auto}tasking in a multiuser (even batch)
>workload is the assumption that all those processors are MINE, ALL
>MINE... 

That's only part of the problem.  The other part is, as Marty mentions
below, that any kind of multiple-processor usage assumes either that
there are null cycles available on the machine, or that some jobs are
more important than others and deserve faster wall-clock turnaround.
In the "old" days of small-memory Cray 1s and X-MPs, the first was
frequently true, because the machine spent a lot of time swapping or
doing other I/O, leaving those processes idle.  If enough processes on
the machine were "I/O blocked", the CPUs would sit idle for lack of
runnable jobs.  With the latest large-memory Cray-2s and Y-MPs this
happens much less often, because more jobs fit in memory and the SSDs
are used as disk cache, which helps I/O performance a lot.  The key
isn't how many users you have, or whether they are interactive or
batch; it's whether you have a workload that demands more I/O than the
machine has bandwidth for.
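
To put a rough number on "null cycles", here is a back-of-the-envelope
model in plain C (nothing Cray-specific; the 8 CPUs, the 50% blocked
fraction, and the independence assumption are all invented purely for
illustration) of how many CPUs sit idle when the resident jobs spend
much of their time blocked on I/O:

    /* Crude model: if each of N resident jobs is independently
     * I/O-blocked a fraction B of the time, the expected number of
     * idle CPUs out of P is
     *   sum over k < P of (P - k) * Prob(exactly k jobs runnable),
     * with the runnable count taken as Binomial(N, 1 - B).
     * All figures below are assumptions for illustration only.
     */
    #include <stdio.h>
    #include <math.h>

    static double binom(int n, int k, double p)
    {
        /* Prob(exactly k successes in n independent trials) */
        double c = 1.0;
        int i;
        for (i = 0; i < k; i++)
            c = c * (n - i) / (i + 1);
        return c * pow(p, k) * pow(1.0 - p, n - k);
    }

    int main(void)
    {
        int    P = 8;     /* CPUs (e.g. a Y-MP/8), assumed          */
        double B = 0.5;   /* fraction of time a job is I/O-blocked  */
        int    N, k;

        for (N = 4; N <= 32; N *= 2) {
            double idle = 0.0;
            for (k = 0; k < P; k++)
                idle += (P - k) * binom(N, k, 1.0 - B);
            printf("%2d resident jobs: ~%.1f of %d CPUs idle\n",
                   N, idle, P);
        }
        return 0;
    }

With only a handful of mostly-blocked jobs resident there are plenty of
idle CPUs and *tasking is essentially free; once enough jobs fit in
memory, the idle cycles disappear and so does the benefit.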

>It takes overhead to implement autotasking and it costs to run more
>threads than there are CPUS.  If you autotask and I autotask, and we
>both run at once, we both take the same time in our code as if we
>didn't, plus the overhead we've introduced to manage the threads, so
>we slow the system down, get worse turnaround time and degrade
>throughput.

One exception to this is running on a dedicated machine, where it can be
extremely useful to have two jobs running with half the processors each,
which usually gives better aggregate throughput than running one job at
a time with all the CPUs.
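
Plain Amdahl's-law arithmetic backs this up.  A minimal sketch, assuming
a 10% serial (non-*tasked) fraction -- an invented number, real codes
vary widely:

    /* Why two half-machine jobs can beat one whole-machine job:
     * Amdahl's law, S(p) = 1 / (s + (1 - s)/p), with an assumed
     * serial fraction s = 0.10 on an assumed 8-CPU machine.
     */
    #include <stdio.h>

    static double amdahl(double s, int p)
    {
        return 1.0 / (s + (1.0 - s) / p);
    }

    int main(void)
    {
        double s = 0.10;                /* assumed serial fraction */
        double whole = amdahl(s, 8);    /* one job on all 8 CPUs   */
        double half  = amdahl(s, 4);    /* one job on 4 CPUs       */

        printf("one job on 8 CPUs:  speedup %.2f\n", whole);
        printf("one job on 4 CPUs:  speedup %.2f\n", half);
        printf("two 4-CPU jobs:     aggregate %.2f vs %.2f\n",
               2.0 * half, whole);
        return 0;
    }

With these numbers the two 4-CPU jobs deliver about 6.2 units of work
per unit time against 4.7 for the 8-CPU job, although each individual
job finishes later -- exactly the turnaround-versus-throughput trade-off
Marty is pointing at.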

>(In economics, this would be a microeconomic gain leading to a
>macroeconomic loss, since it is good for one, but not good for more
>than one.)
>
>If you've got enough users to keep all of the processors on the
>machine busy without *tasking, then you are better off not doing it.

I would modify this to say that if you've got a workload, regardless of
the number of users, that keeps the CPUs busy, you don't need *tasking;
but if you don't, it can help use up otherwise idle CPU cycles.
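
Said as code, the policy is roughly the sketch below.  This is only an
illustration: getloadavg() is a BSD call that very likely doesn't exist
on UNICOS, and the 8-CPU figure and cutoff are arbitrary -- the point is
the decision, not the API.

    /* Only fan out into extra threads when the machine is
     * otherwise underloaded; stay serial if the CPUs are busy.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int pick_thread_count(int ncpu)
    {
        double load[1];

        if (getloadavg(load, 1) != 1)
            return 1;                /* can't tell: stay serial    */
        if (load[0] >= ncpu - 1)
            return 1;                /* machine is busy: extra
                                        threads would just contend */
        return ncpu - (int)load[0];  /* soak up the idle CPUs      */
    }

    int main(void)
    {
        int ncpu = 8;                /* assumed machine size */
        printf("would run with %d threads\n", pick_thread_count(ncpu));
        return 0;
    }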

>Marty
>
>--
>Martin Fouts
>
> UUCP:  ...!pyramid!garth!fouts (or) uunet!ingr!apd!fouts
> ARPA:  apd!fouts at ingr.com
>PHONE:  (415) 852-2310            FAX:  (415) 856-9224
> MAIL:  2400 Geng Road, Palo Alto, CA, 94303
>
>Moving to Montana;  Goin' to be a Dental Floss Tycoon.
>  -  Frank Zappa


--
David B. Serafini			serafini at ralph.arc.nasa.gov
Rose Engineering and Research			or
NASA/Ames Research Center		...!ames!amelia!serafini
MS 227-6					or


