scheduling/timing weirdo

lfm at ukc.UUCP
Sat May 5 05:25:56 AEST 1984


Can anybody enlighten me as to why the following things happen:

1) When testing some code that interacts with a server on a remote machine,
I replaced a programmed C copy loop with a call on a fast move subroutine.
As expected, the user time reported by "time(1)" dropped significantly.
However, the system time increased considerably, and the elapsed time stayed
about the same. This was repeatable and consistent, and the program was
unmodified in any other way. (PDP11/45 and V7, by the way.)
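
To make the kind of change concrete, a minimal sketch follows; the names
and buffer size are invented for illustration, and memcpy() merely stands
in for the site's fast (assembly) move subroutine, which isn't shown in
the post.

	#include <string.h>

	#define BUFSIZE 512

	/* Original style: programmed C copy loop, one byte per iteration. */
	void slowcopy(char *dst, char *src, int n)
	{
	    while (n-- > 0)
	        *dst++ = *src++;
	}

	/* Replacement style: a single call on a block-move routine.
	 * memcpy() stands in here for the fast move subroutine. */
	void fastcopy(char *dst, char *src, int n)
	{
	    memcpy(dst, src, n);
	}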

2) (This time on a PERQ running PNX - a V7 port.) Here the optimisation
was put into the file server end, so the program running locally was
completely unchanged. When the test was run again, the system time
increased significantly and consistently. The time measured was only
local and could not include any of the time spent on the remote machine.
No extra messages were generated by the communication between the processes
(a first thought was that with the speed increase things were running too
fast and retries were happening), and the local process cannot have done
any more work than it would have before the change to the server.
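
For reference, the figures in question are the per-process user and system
CPU times that "time(1)" reports; the same accounting can be read from
within a program via times(2), as in the hedged sketch below (the modern
interface is shown - V7 used a fixed HZ constant rather than sysconf).
The point is that this accounting is purely local to the measuring machine.

	#include <stdio.h>
	#include <unistd.h>
	#include <sys/times.h>

	int main(void)
	{
	    struct tms before, after;
	    long hz = sysconf(_SC_CLK_TCK);   /* clock ticks per second */

	    times(&before);

	    /* ... exchange messages with the remote server here ... */

	    times(&after);

	    /* Only this process's user and system time appear here;
	     * CPU consumed by the remote file server cannot show up. */
	    printf("user   %.2fs\n",
	           (double)(after.tms_utime - before.tms_utime) / hz);
	    printf("system %.2fs\n",
	           (double)(after.tms_stime - before.tms_stime) / hz);
	    return 0;
	}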

So, any suggestions?

  Lindsay F. Marshall
    uucp : ...!{mcvax,vax135}!ukc!lfm
    ARPA : Lindsay_Marshall%NEWCASTLE at MIT-MULTICS
    post : Computing Laboratory, U of Newcastle upon Tyne, U.K.
           +44 - 632 - 329233 xtn 212


