Context Switch time in UNIX

Chris Torek torek at elf.ee.lbl.gov
Fri Mar 22 10:37:48 AEST 1991


In article <1991Mar21.202637.29340 at cs.umn.edu> patiath at umn-cs.cs.umn.edu
(Pradip Patiath) writes:
>We would like to know the time a context switch in UNIX takes.

You will have to define it before you can measure it:

>Is there any way to measure this? Is it documented somewhere, 
>say for SunOS 4.0.3 on a Sparcstation 1+? How does this figure
>vary as a function of # of processes? 

In particular, on SparcStations (and other machines with Sun MMUs)
the word `context' has several meanings, and kernel process scheduling
timings depend on whether an MMU context already exists, among other
things.

Once you sit down and decide what it is you want to measure, the best
way to do it is to use external hardware to monitor some sort of
signals (preferably ones that do not involve inserting extra code into
the bits you want timed, although this requires much fancier external
timing devices).  Otherwise the timing code you insert winds up
changing the time taken.  You must also watch out for cache effects:
simply moving one instruction can change the time taken to run that
instruction, because it moves into or out of the cache, or shares a
cache line with some other important item, or something.
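
One way to see the problem, as a rough sketch only: whatever timing
call you insert has a cost of its own, which you can estimate by
calling it back to back and averaging.  (This is just an illustration
using gettimeofday(); it says nothing about the cache effects.)

    #include <stdio.h>
    #include <sys/time.h>

    /*
     * Estimate the overhead of the timing call itself by calling
     * gettimeofday() back to back many times.  Any code you insert
     * into the path being measured costs at least this much, and
     * it perturbs the cache as well.
     */
    int
    main()
    {
            struct timeval t0, t1;
            int i, n = 100000;
            long usec;

            gettimeofday(&t0, (struct timezone *)0);
            for (i = 0; i < n; i++)
                    gettimeofday(&t1, (struct timezone *)0);
            usec = (t1.tv_sec - t0.tv_sec) * 1000000
                 + (t1.tv_usec - t0.tv_usec);
            printf("about %ld usec per gettimeofday call\n", usec / n);
            return 0;
    }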

If you want good results, you are pretty much stuck with doing
everything yourself.  If you just want `user level approximations', the
gettimeofday() system call is designed to return values accurate to the
nearest microsecond.  It even comes fairly close to doing this on
the SparcStations, which have microsecond counter/timer chips on board.
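
One common user-level trick (a sketch only, and an upper bound rather
than the switch time itself) is to bounce a byte between two processes
through a pair of pipes and divide the elapsed time by the number of
round trips; each round trip forces at least two context switches,
plus the pipe read/write overhead.

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /*
     * Bounce one byte between parent and child through two pipes.
     * Each round trip costs at least two context switches, so the
     * per-trip time is an upper bound on the switch cost.  Error
     * checking is omitted for brevity.
     */
    int
    main()
    {
            int p1[2], p2[2], i, n = 10000;
            char c = 'x';
            struct timeval t0, t1;
            long usec;

            pipe(p1);
            pipe(p2);
            if (fork() == 0) {              /* child: echo the byte back */
                    for (i = 0; i < n; i++) {
                            read(p1[0], &c, 1);
                            write(p2[1], &c, 1);
                    }
                    _exit(0);
            }
            gettimeofday(&t0, (struct timezone *)0);
            for (i = 0; i < n; i++) {       /* parent: send and wait */
                    write(p1[1], &c, 1);
                    read(p2[0], &c, 1);
            }
            gettimeofday(&t1, (struct timezone *)0);
            usec = (t1.tv_sec - t0.tv_sec) * 1000000
                 + (t1.tv_usec - t0.tv_usec);
            printf("about %ld usec per round trip (>= 2 switches)\n",
                usec / n);
            return 0;
    }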
-- 
In-Real-Life: Chris Torek, Lawrence Berkeley Lab CSE/EE (+1 415 486 5427)
Berkeley, CA		Domain:	torek at ee.lbl.gov
