uucp to hundreds of sites

Carl S. Gutekunst csg at pyramid.pyramid.com
Mon Feb 22 04:04:39 AEST 1988


>| The amount of time needed to find work using UUCP increases exponentially
>| with the number of jobs queued, and that's mostly time searching and
>| sorting the UUCP work directory.
>
>(a) only the user time goes up, not the process initiation time,

I fail to see your point. Time is time; who cares whether the CPU spends it in
user code or process launch? A master uucico starting on a busy system has a
lot of work to do before it can make a call, and you cannot begin to
extrapolate the time for running a queue with 100 jobs based on the time for 1.

Granted, if you're running HoneyDanBer UUCP (which is in Xenix/386, at least
it was in the copy I used), this phase will be much less time consuming than
with any other UUCP version. But startup is still exponential in nature.
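
For the curious, here's roughly what that startup phase has to do. This is a
sketch I knocked together for illustration, not anything out of the UUCP
sources; scan_spool, MAXWORK, and cmp are names I made up, and I'm glossing
over grades, sequence numbers, and per-site subdirectories. But the shape of
the work is the same: scan the spool, keep the C.* work files, sort them.

/*
 * Rough sketch (NOT uucico's actual code) of the spool scan.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>

#define MAXWORK 4096                    /* arbitrary limit for the sketch */

static int
cmp(const void *a, const void *b)
{
        return strcmp(*(char *const *)a, *(char *const *)b);
}

/* Scan a spool directory, keep the C.* work files, sort them. */
static int
scan_spool(const char *spool, char *work[], int max)
{
        DIR *dp;
        struct dirent *de;
        int n = 0;

        if ((dp = opendir(spool)) == NULL)
                return -1;
        while ((de = readdir(dp)) != NULL && n < max)
                if (strncmp(de->d_name, "C.", 2) == 0)  /* work file */
                        work[n++] = strdup(de->d_name);
        closedir(dp);
        qsort(work, n, sizeof(char *), cmp);            /* job order */
        return n;
}

int
main(int argc, char *argv[])
{
        static char *work[MAXWORK];
        int i, n;

        n = scan_spool(argc > 1 ? argv[1] : ".", work, MAXWORK);
        for (i = 0; i < n; i++)
                printf("%s\n", work[i]);
        return 0;
}

A master uucico repeats something like this every time it goes looking for
more work, so with 100 jobs queued you're paying for on the order of 100 scans
of 100 work files, plus the sorts. That's why the time for one job tells you
almost nothing about the time for 100.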

>| It will also be highly dependent on the disk used.
>
>(b) sorry, the CPU will not be affected much by the disk speed....

Have you ever benchmarked I/O subsystems? The type of controller, the type of
disk, and a large number of other factors directly affect the amount of work
the CPU is able to do in parallel. With everything else identical, and disks
that differ only in seek time, your statement is true. But all other bets are
off.

Anyway, it still takes *time*. With a slow disk, you can time out the other
uucico while your box sorts the work directory. This is really common on
overworked 68000 boxes, and happens occasionally to 68020 systems as well,
particularly those at university sites, who tend to put more users on them
than God ever intended. :-) 

>I don't know about "SysV" derivitives, but the times on my 386 running
>Xenix/386 (I had a chance to look there since my last posting), look like
>[equation deleted] for all jobs, at all baud rates, sending or receiving....

Well, yeah, funny how it works that way. You're using process time, which does
not include interrupt time. And interrupt time is where most of the difference
between baud rates, and between incoming and outgoing, shows up. This is
especially true with
HoneyDanBer, which does an excellent job of sleeping until all 70 characters
of the 'g' protocol packet have been received. If you include interrupt time,
a drastically different situation unfolds. 
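
If you want to see what process time does and doesn't capture, wrap the
command in a crude time(1)-alike built on times(2), something like the sketch
below. This is my own illustration, not part of any UUCP or Xenix
distribution; it just reports what the kernel charges to the command and its
children. Whatever isn't charged to the process, like interrupt service on
behalf of the serial line, shows up in none of the user and sys figures.

/*
 * Sketch of a crude time(1)-alike using times(2).  Illustration only.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>
#include <sys/wait.h>

int
main(int argc, char *argv[])
{
        struct tms t0, t1;
        clock_t w0, w1;
        long hz = sysconf(_SC_CLK_TCK);

        if (argc < 2) {
                fprintf(stderr, "usage: %s command [args]\n", argv[0]);
                return 1;
        }
        w0 = times(&t0);
        if (fork() == 0) {              /* child runs the command */
                execvp(argv[1], &argv[1]);
                _exit(127);
        }
        wait(NULL);                     /* child's times get folded in here */
        w1 = times(&t1);

        printf("real %.2fs\n", (double)(w1 - w0) / hz);
        printf("user %.2fs\n", (double)(t1.tms_cutime - t0.tms_cutime) / hz);
        printf("sys  %.2fs\n", (double)(t1.tms_cstime - t0.tms_cstime) / hz);
        return 0;
}

Run the same transfer at different speeds and the user and sys figures will
look much the same; the difference is being spent in interrupt service, which
a tool like this cannot see.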

Now, I have been told that a good intelligent serial card can bring the CPU
load down to only 25%. But the garden variety of multi-port RS-232 boards
don't do that.

Also, as many people have opined, comparing UNIX process times to wall
clock time is an exercise in futility.

And BTW, Xenix/386 *is* a System V derivative.

>I'm sure you measured the 100% under controlled conditions or you wouldn't
>have mentioned it, therefore I must be missing something.

If one user on one Xenix 386 with a Telebit TrailBlazer counts as controlled,
then yes. But nothing elaborate. Try vmstat, and just watch the idle time sink
to 0%. Or if you don't have that (or the version you have isn't accurate,
which several people have told me is the case on Xenix/386), try a totally
CPU-bound program whose wall-clock time you have measured precisely. Then run
it while uucico is running. If it takes twice as long in real time, you have a
100% saturated CPU. Another test: run two incoming 9600 bps uucico transfers
simultaneously, and measure the individual and aggregate throughput.
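
If you want something canned for the CPU-bound test, a do-nothing loop like
the one below will serve. This is just an illustration; ITERS and the loop
body are arbitrary, so pick a count that runs for half a minute or so on your
idle machine.

/*
 * Purely CPU-bound timing probe for the saturation test above.
 */
#include <stdio.h>
#include <time.h>

#define ITERS 200000000L        /* tune so the idle run takes ~30 seconds */

int
main(void)
{
        volatile long sum = 0;  /* volatile keeps the loop from optimizing away */
        long i;
        time_t start = time(NULL);

        for (i = 0; i < ITERS; i++)
                sum += i & 7;

        printf("elapsed %ld seconds (sum %ld)\n",
            (long)(time(NULL) - start), (long)sum);
        return 0;
}

Time it on an idle machine, then again while uucico is transferring. If the
elapsed time roughly doubles, uucico is getting about half the machine, i.e.
the CPU is saturated.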

>sixhub is the distribution machine for starix, and handles about 200
>files/day.

200 a day? I think we have a problem of scale. I thought we were talking about
an order of magnitude more than that, on the scale of uunet (which transfers in
excess of 300 files per hour, many of them news batches), or pyramid (150
files per hour). I would certainly trust a 386 PC for 200 news-sized files a
day. 200 an hour, no way. 

<csg>


