Abstract on Ethernet and diskless Workstations

JF600%ALBNY1VX.BITNET at cornellc.ccs.cornell.edu
Wed Apr 6 22:23:48 AEST 1988


                      ***********WARNING: THIS MESSAGE IS LONG*******

A while ago I posted a query about how people felt about diskless
workstations. The question was prompted because we were considering
such a setup for our image processing work. I got a good number of
answers and a good number of "me too" messages, so I thought the best
thing would be to post the contributions I received. I am most thankful to
the people who took the time to share all that information.

Hope this will be useful...........................Jose-Maria Carazo

In-Real-Life: Jose-Maria Carazo, New York State Health Department

JF600 at ALBNY1VX.BITNET                  JCARAZO at BIONET-20.ARPA

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

We have an Ethernet with one Sun 3/180 and 13 diskless 3/50 clients,
two Sun 3/280s with 2-3 diskless clients each, a Vax, a Gould, an ISI,
and a CCI (the big machines are gateways).  We found that when we tried
to put another Sun 3/280 with 10 clients on it, that Ethernet collapsed
(90% utilization).  We have another Ethernet with two Sun 3/280s and
about 18 diskless clients split between them; it's just about maxed
out.

Lately our rule of thumb has been one Ethernet per Sun server and
15 or so diskless clients, and then throw in a couple of random
machines as extra gateways.  We have two interfaces in all our Sun
servers so that we can gateway between subnets.  Note that we are
using a Class B network number and presently have about 20+ subnets,
mostly because of Suns.
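
For the curious, the subnet arithmetic is simple to sketch in C.  This
is illustration only: the 255.255.255.0 subnet mask (an 8-bit subnet
field) is a common choice but an assumption here, and the Class B
address 128.99.23.7 is made up.

    /* Subnet arithmetic on a Class B address.  The 255.255.255.0
     * subnet mask and the address 128.99.23.7 are assumptions for
     * illustration only.
     */
    #include <stdio.h>

    int main(void)
    {
        unsigned long addr = (128UL << 24) | (99UL << 16) | (23UL << 8) | 7UL;
        unsigned long netmask  = 0xFFFF0000UL;  /* Class B network part   */
        unsigned long subnmask = 0xFFFFFF00UL;  /* network + subnet field */

        printf("network: %lu.%lu.0.0\n", (addr >> 24) & 0xFF, (addr >> 16) & 0xFF);
        printf("subnet:  %lu\n", (addr & subnmask & ~netmask) >> 8);
        printf("host:    %lu\n", addr & ~subnmask);
        return 0;
    }

An 8-bit subnet field gives up to 254 usable subnets of 254 hosts each,
which is how one Class B number can absorb 20-odd Ethernets full of Suns.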

It remains to be seen what the limits are with Sun 4s; we will be
doing that this summer.  We have determined that a Sun 4 does I/O so
fast that mounting a Sun 3 filesystem on the Sun 4 with NFS beats the
Sun 3 into the ground.

Basically, you don't want your Ethernet to go over around 40%
utilization most of the time, with maybe 60% utilization when clients
are booting, etc.  We have found that 70% of theoretical maximum is
where the thing really starts to collapse (so much for theoretical
maximums).
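
For scale, here is what those percentages mean in bytes on a standard
10 Mbit/s Ethernet.  The raw rate is a property of the medium; the
thresholds are Dave's observations, not spec numbers.

    /* Usable bandwidth at various utilization levels of a
     * 10 Mbit/s Ethernet.
     */
    #include <stdio.h>

    int main(void)
    {
        double raw = 10.0e6 / 8.0 / 1024.0;   /* ~1221 KB/s theoretical max */
        double levels[] = { 0.40, 0.60, 0.70, 0.90 };
        int i;

        for (i = 0; i < 4; i++)
            printf("%2.0f%% utilization = ~%4.0f KB/s on the wire\n",
                   levels[i] * 100.0, levels[i] * raw);
        return 0;
    }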

--Dave Curry
Purdue University


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


We don't do image processing, but we do a lot of Lisp
work, which tends to be disk-I/O intensive.

We're beginning to trend towards lots of "little" disks,
now that Sun has started to build SCSI controllers
into their 3/50's, 3/60's and 4/110's.

Of course, we don't buy from Sun; we build the disk
shoeboxes ourselves (or buy them from 3rd parties, preassembled).

You can easily buy 300MB in a small shoebox for $4500, less
if you're willing to buy the parts and put things together
yourself.  If you go the latter route, be careful; there are
lots of things you can do that won't work right.

-- Jim Guyton

++++++++++++++++++++++++++++++++++++++++++++++++++++++


The Unix community is slowly realizing that a network of totally diskless
machines, while neat, isn't feasible. Things bottleneck on the network way
too much.

Many sites run a small disk on each machine, for booting and as a page device.
If you page to the network itself, you slow everything down; if the applications
are largely CPU-bound and don't talk to files or other machines once running, then
the only real concern IS paging activity.
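
To see why, here is a back-of-the-envelope sketch of page-fetch time,
using the rough throughput figures Adam Zilinskas quotes further down
in this digest.  The 8 KB page size is an assumption (it matches the
Sun-3, but check your own machine).

    /* Time to fetch one page at the rough device rates quoted
     * later in this digest.  Page size of 8 KB is assumed.
     */
    #include <stdio.h>

    int main(void)
    {
        double page_kb = 8.0;
        struct rate { char *dev; double kbs; } d[] = {
            { "local SMD drive ", 100.0 },
            { "local SCSI drive",  20.0 },
            { "loaded Ethernet ",  10.0 },
        };
        int i;

        for (i = 0; i < 3; i++)
            printf("%s: %4.0f ms per page\n",
                   d[i].dev, page_kb / d[i].kbs * 1000.0);
        return 0;
    }

An order of magnitude between a fast local disk and a loaded net is
exactly the slowdown being warned about here.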

    Jeff Bowles


+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Before working for my present company, Silc Technologies, I was in
Harris Corp's Advanced Technology Group, which was trying to do similar
multiple processing under a budget.

We had one Sun 3/160 as the disk server with 24 Mbytes of memory and 2
Fujitsu Eagle disks (tolerably fast SMD drives) and 13 diskless Sun
3/50's and Sun 3/60's. We found that the diskless Suns' performance
was about the same as using a locally connected slower SCSI drive (I
cannot remember the exact throughput, but the SMD drives could put out
100 Kb/s, SCSI ~20 Kb/s, and the net gave ~10-20 Kb/s depending upon the
total combined load). We did find out that some of our antiquated
systems were not agile enough to get packets squeezed in between the
Sun packets (Sun uses something called back-to-back packets that
occasionally overflow some older systems' packet buffers).
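
The timing pressure on a receiving interface is easy to see from the
Ethernet spec itself: at 10 Mbit/s the minimum interframe gap is only
96 bit times, so between back-to-back packets a receiver has 9.6 us to
drain one frame and re-arm its buffers before the next one arrives.

    /* Interframe timing on 10 Mbit/s Ethernet.  1518 bytes is the
     * maximum frame including headers and CRC; 96 bit times is the
     * spec's minimum interframe gap.
     */
    #include <stdio.h>

    int main(void)
    {
        double bit_us = 0.1;               /* one bit at 10 Mbit/s */
        double frame  = 1518.0 * 8.0;      /* bits in a maximum frame */

        printf("max frame on the wire: %6.1f us\n", frame * bit_us);
        printf("gap before next frame: %6.1f us\n", 96.0 * bit_us);
        return 0;
    }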

The solution that seemed pleasing to everyone was to put all the
diskless Suns on their own Ethernet wire and use the disk server as a
bridge to the outside Ethernet (the disked Sun 3/160 had its standard
Ethernet I/F and one extra Ethernet board). Sun recommended that the
diskless units hook into the standard Ethernet connector, as it
bypassed the backplane bus and was supposedly faster. This allowed the
13 diskless Suns to flood their net all they wanted without collapsing
the main line.

My recommendation is to buy something similar to a Sun 3/60, which can
have from 4-24 Mbytes of RAM. Buy them with about 8 Mbytes and
you can probably win some budget battles by buying the add-on memory
strips (chicklets) a little at a time (accounting chokes less when the
bill is broken into several chunks). Also get the disk server with
multiple disk drives, the fastest drives you can afford; the Suns
seem to know how to interleave drive accesses optimally,
especially when the network swapping space is evenly split between
the drives.
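
A minimal sketch of what interleaving across drives means: with
page-interleaved placement, consecutive swap transfers alternate
drives and can overlap.  The round-robin mapping below is an
illustration only, not SunOS's actual allocator.

    /* Round-robin placement of swap pages across two drives, so
     * consecutive transfers can overlap.  Illustration only.
     */
    #include <stdio.h>

    int main(void)
    {
        int ndrives = 2, page;

        for (page = 0; page < 8; page++)
            printf("swap page %d -> drive %d\n", page, page % ndrives);
        return 0;
    }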

Hopefully this overly verbose info is of use to you.
                Adam Zilinskas
                (617)-221-5931
                Silc Technologies Inc.
                (a startup Design Synthesis company)


