Hello

Barry Shein bzs at world.std.com
Tue Sep 11 12:41:47 AEST 1990


The difference is that when things get big you eventually hit a
paradigm shift in the management of those resources. You can't manage
a big system by simply doing what you do for small systems, only more
of it.

Consider when you get to the point that you can't perform a daily
backup in less than 24 hours. Obviously something would have to
change; more of the same won't cut it.

If you think that's ludicrous, there are terabyte Unix systems out
there. That's 10^12 bytes. If 10% needed to be backed up every day,
that would be 10^11 bytes. At about 2x10^8 bytes/tape we get 500 full
tapes per day. With one tape drive that would leave about 3 minutes
per tape to get it done in exactly 24 hours (and then start again).
With 5 tape drives running full blast simultaneously, at a more
realistic 15 minutes per tape, it still takes 25 hours to do a lousy
daily incremental. And you have 500 tapes per day to manage!

Clearly things do not scale linearly as systems get large; completely
different management and technology strategies must be employed.

It is those different strategies that I would hope this group is
interested in. Where are the discontinuities? How does one manage
them?
-- 
        -Barry Shein

Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs at world.std.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD



More information about the Comp.unix.large mailing list