Project Athena ( was Re: Non Destructive Version of rm)

The Grand Master asg at sage.cc.purdue.edu
Fri May 10 03:29:31 AEST 1991


In article <1991May9.001907.13024 at athena.mit.edu> jik at athena.mit.edu (Jonathan I. Kamens) writes:
}  I have asserted that our "attach" is secure, and that root access to our
}public workstations does not compromise our security.  I have offered to mail

Just answer one quick question. I assume that each workstation has a 
disk of its own mounted on /, right? If so, can I not log into one of
your workstations and rm -rf /, thus making it useless? Can I not do
this for EACH AND EVERY WORKSTATION YOU HAVE?
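(For what it's worth, the "non-destructive rm" the subject line refers to can be approximated in a few lines: instead of unlinking, move the file into a tomb directory and let a reaper purge it later. This is only a sketch of the general idea; the tomb path and naming scheme are my own invention, not Purdue's actual entomb code.)

```python
import os
import shutil
import time

def entomb(path, tomb="/var/tomb"):
    """Move a file into the tomb directory instead of unlinking it.

    A reaper process (not shown) would purge tombed files after a
    grace period, giving users a window in which to undelete.
    """
    os.makedirs(tomb, exist_ok=True)
    # Timestamp the tombed copy so repeated deletions don't collide.
    dest = os.path.join(tomb, "%s.%d" % (os.path.basename(path), int(time.time())))
    shutil.move(path, dest)
    return dest
```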
}
}  Therefore, we have two choices: Either we can restrict who gets network
}access so only machines that are directly and cleanly controlled can be on the
}network, or we can develop an authentication system that allows a high level
}of service and is not compromised by root access to machines on the network. 
}We have chosen the latter.
}

You have another choice: to trust only those computers to which the user does
not have physical access.

}If one machine on the network is broken into, the entire network is
}vulnerable.
I understand this. But if you trust only computers that won't be broken into,
then you are safer. What would happen if someone broke into one of your 
servers? Could they not delete all the files on all the filesystems for which
that machine is the server?
}trusting every machine on the Internet to be secure is therefore somewhat
}suspect.

I NEVER said anything about trusting every machine on the Internet. Is there
no way of telling a system to "trust" only a select few others?
}
}|> Now, if you are saying that the people in the computer dept do not
}|> know if they can trust the SYSADMINs in the MATH dept, well then 
}|> you should do something about that.
}
}  Security that depends on the good graces of every admin on your network is
}no security at all.  Furthermore, it DOES NOT SCALE.  The more machines you
}have on a network, the more people you need to run them, and the more people
}there are running them, the more possibility there is that someone will start
}playing around with root access.  Kerberos avoids this problem.

There still must be SOME people that have a top level of access. What is
to stop them from doing whatever they want?
}
}  I wonder -- if I snarfed the entomb software from Purdue and figured out
}from it how the entombd stuff works and what RPC port requests are sent on,
}could I su to root on my workstation and send requests to an entombd at Purdue
}to remove somebody's files?

I doubt it, since the entombd probably knows to trust only a few other
systems.
}
}|> A workstation can be made such that you cannot boot it from floppy
}|> without a passwd (in fact my PC even does this), so physical 
}|> access is not really an excuse.
}
}  A workstation, perhaps.  But perhaps not a PC, or a Mac, or the portable PC
}that your opponent carries into a lab and plugs into your ethernet when no
}one's looking.  One of my coworkers has an ethernet card and software in a PC
}that weighs ten pounds.

Are you telling me that you cannot tell your systems to trust just
a selected list of other systems? Are you saying that your system
must trust everyone or no one, and that there is no middle ground?
}
}  Furthermore, the machines on our network are on the Internet.  We cannot
}control the entire Internet.  And restricting access to the Internet handicaps
}our users unnecessarily.  I say "unnecessarily" because, as I've pointed out,
}the security of Kerberos makes it unnecessary for us to worry about root
}machines elsewhere on the Internet.
}
Again, are you telling me that you cannot tell your system to 
trust prep.ai.mit.edu and not trust ypig.stanford.edu?
Why not?
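Mechanically, per-host trust of this kind is easy enough to express: a hosts.equiv-style allow list, one hostname per line. Here is a minimal sketch (the file format and function names are illustrative, not any particular vendor's). The catch, as the quoted text argues, is that any machine on the list can itself be broken into.

```python
def load_trusted(path):
    """Read an allow list: one hostname per line, '#' starts a comment."""
    trusted = set()
    with open(path) as fh:
        for line in fh:
            host = line.split("#", 1)[0].strip()
            if host:
                trusted.add(host.lower())
    return trusted

def is_trusted(hostname, trusted):
    # Hostname comparison is case-insensitive.
    return hostname.lower() in trusted
```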
}
}  It is more likely that GE is using an automounter of some sort. 
}Automounting is a good idea, and has some advantages over the way Athena does
}things (although there is as much of a delay when an automounter mounts a
}filesystem as there is when our "attach" does it).  Unfortunately, many
}vendors do not provide enough kernel support to make automounters work (since
}they usually work by attaching a process to a directory as an NFS server, and
}many variants of Unix don't support that), so Project Athena (which requires
}multiple-platform support as one of its highest priorities) decided to go
}another route.
}
I don't think so, then. If I do a listing of // and, say, the directory
for the system a333 is not listed, and I then do a cd //a333, there is
no waiting period. I have never experienced a waiting period.
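(For reference, a lack of a visible delay is still consistent with an automounter: only the first access to a given path has to wait for the mount, and later accesses hit a cache of live mounts. A toy model of that lookup pattern, with names of my own choosing:)

```python
class AutomountMap:
    """Toy model of an automounter: mount on first access, cache thereafter."""

    def __init__(self, mount_fn):
        self._mount_fn = mount_fn   # does the slow work (e.g. an NFS mount)
        self._mounted = {}          # path -> mount handle

    def lookup(self, path):
        if path not in self._mounted:
            # Only the first reference to a path pays the mount cost.
            self._mounted[path] = self._mount_fn(path)
        return self._mounted[path]
```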
}
}|> It might be a little easier for you to have to include NO NO NO commands
}|> because you are in the source group which has write access to the source
}|> files.
}
}  Um, excuse me, but in message <11941 at mentor.cc.purdue.edu>, you wrote:
}
}|> Is this system source code? If so, I really don't think you should be 
}|> deleting it with your own account.
}
}In other words, your original assertion which started this whole thread is
}that I should not be able to write to the source code with my own account.  So
}which is it, that I should be able to write to the source code or that I
}shouldn't?

Let's try again. Maybe you should get a dictionary first and look
up the words "write" and "delete". Hmm, they are different, aren't they?
I see no problem with altering the system source code to a program 
from your own account. I just feel that it should be a little 
harder to delete stuff (so you don't do it accidentally, etc.).
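The distinction is real at the filesystem level, too: on Unix, writing a file is governed by the file's own permission bits, but deleting it only needs write permission on the containing directory. A quick demonstration (run as a non-root user; a write to the read-only file would ordinarily fail, yet the unlink goes through):

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "source.c")
with open(path, "w") as fh:
    fh.write("int main(void) { return 0; }\n")

# Make the file itself read-only for everyone.
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

# Unlinking still succeeds, because deletion is checked against
# the directory's permissions, not the file's.
os.unlink(path)
assert not os.path.exists(path)
```

This is why "should be able to write it" and "should find it hard to delete it" are not contradictory positions.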
}
}-- 
}Jonathan Kamens			              USnail:


Some final thoughts. 
  I can see that Athena is a worthwhile setup. I know that it is one
of the possible worthwhile solutions. It has a lot of merit, and I can
see why some people would prefer it.
 I understand now that letting users mount filesystems is safe, though
I still disagree with giving them root (see my comment on this near
the beginning of the article). 
  I see why some people like distributed computing.
  But Jon conveniently failed to respond to my major gripe with
distributed computing. Resources go unused more often. 
  So I will state it again in case you all forgot:

If I am doing something CPU intensive on a workstation, I gain no
added benefit from the fact that only 1/10 of the workstations 
are in use. I will also see a significant reduction in the speed of
my window operations, since the same CPU is handling both them and the
intensive task.
If I have some centralized computers, though, I now DO benefit when
only 1/10 of the maximum number of people are logged in. And my
windows will not be nearly as affected, since they are controlled
by a CPU that is not encumbered with a heavy job.
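The arithmetic behind this gripe is simple enough to write down. Assume ideal timesharing of CPU capacity (a deliberately crude model of my own, and generous to the central machine, since a single-threaded job cannot actually soak up more than one processor):

```python
def cpu_share_workstation(my_jobs):
    """On a private workstation, my jobs split ONE cpu among
    themselves, no matter how many other workstations sit idle."""
    return 1.0 / my_jobs

def cpu_share_central(cpus, total_jobs):
    """On a central machine, all jobs split all CPUs; light overall
    load means more than one CPU's worth of capacity per job."""
    return cpus / total_jobs

# A heavy job plus my window system on a workstation: half a CPU each.
# The same two jobs among eight total on a lightly loaded 40-CPU
# server: five CPUs' worth of capacity available per job.
```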

Just so you know.
I do not consider Jon stupid in any way. He is very intelligent, and
I respect his opinions, which is exactly why I have continued this
discussion; if I did not care about his opinion, I would never
have responded to him in the first place. However, it is not
right to call ME foolish or ignorant simply because I choose to
disagree with him. 
Maybe Jon has not had some of the problems I have had with workstations. 
I am often running CPU- and disk-intensive jobs. Unfortunately, this 
sometimes limits me to one thing at a time, since the single 
processor is unable to handle a huge load. Working on a system that
had 40 parallel processors, however, would alleviate this problem in the
majority of cases. 
Jon's opinion is valid. 
But that does not mean that my opinion is not.
---------
                                   ###             ##
Courtesy of Bruce Varney           ###               #
aka -> The Grand Master                               #
asg at sage.cc.purdue.edu             ###    #####       #
PUCC                               ###                #
;-)                                 #                #
;'>                                #               ##


