Non Destructive Version of rm

Jonathan I. Kamens jik at athena.mit.edu
Wed May 8 08:46:44 AEST 1991


In article <12021 at mentor.cc.purdue.edu>, asg at sage.cc.purdue.edu (The Grand Master) writes:
|> Are you telling me that when you log in, you have to wait for your home
|> directory to be mounted on the workstation you log in on?

  Yes.

|> - This is 
|> absolutely Horid!!

  I would suggest, Bruce, that you refrain from commenting about things about
which you know very little.  Your entire posting is filled with jibes about
the way Project Athena does things, when you appear to know very little about
*how* we do things or about the history of Project Athena.

  I doubt that DEC and IBM would have given Athena millions of dollars over
more than seven years if they thought it was a "kludge".  I doubt that
Universities and companies all over the world would be adopting portions of
the Project Athena environment if they thought it was a "kludge".  I doubt DEC
would be selling a bundled "Project Athena workstation" product if they
thought it was a "kludge".  I doubt the OSF would have accepted major portions
of the Project Athena environment in their DCE if they thought it was a
"kludge".

  You have the right to express your opinion about Project Athena.  However,
when your opinion is based on almost zero actual knowledge, you just end up
making yourself look like a fool.  Before allowing that to happen any more, I
suggest you try to find out more about Athena.  There have been several
articles about it published over the years, in journals such as the CACM.

  You also seem to be quite in the dark about the future of distributed
computing.  The computer industry has recognized for years that personal
workstations in distributed environments are becoming more popular.  I have
more computing power under my desk right now than an entire machine room could
hold ten years ago.  With the entire computing industry moving towards
distributed environments, you assert that Project Athena, the first successful
large-scale distributed computing environment in the world, would be better
off "to have a few
powerful centralized systems with Xwindow terminals instead of separate
workstations."  Whatever you say, Bruce; perhaps you should try to convince
DEC, IBM, Sun, HP, etc. to stop selling workstations, since the people buying
them would obviously be better off with a few powerful centralized systems.

|> Next, at the top level of each filesystem you but a directory named
|> tomb - in other words, instead of the jik directory being the only 
|> directory on your partition, there are two - jik and tomb.

  "Filesystems" are arbitrary in our environment.  I can mount any AFS
directory as a "filesystem" (although AFS mounts are achieved using symbolic
links, the filesystem abstraction is how we keep NFS and AFS filesystems
parallel to each other).  Furthermore, I can mount any *subdirectory* of any
NFS filesystem as a filesystem on a workstation, and the workstation has no
way of knowing whether that directory really is the top of a filesystem on the
remote host, or of getting to the "tomb" directory you propose.

  As I think I've already pointed out now twice, we considered what you're
proposing when we designed Athena's "delete".  But we also realized that in a
generalized environment that allows arbitrary mounting of filesystems,
top-level "tomb" or ".delete" or whatever directories just don't work, and
they degenerate into storing deleted files in each directory.

  If your site uses "a few powerful centralized systems" and does not allow
mounting as we do, then your site can use the entomb stuff.  But it just
doesn't cut it in a large-scale distributed environment, which is the point
I've tried to make in my previous two postings (and in this one).

  In any case, mounting user home directories on login takes almost no time at
all; I just mounted a random user directory via NFS and it took 4.2 seconds. 
That 4.2 seconds is well worth it, considering that they can access their home
directory on any of over 1000 workstations, any of which is probably as
powerful as one of your "powerful centralized systems."

|> What these functtions will actually do is
|> call on a process entombd (which runs suid to root - ohmigod) to move
|> your files to the tomb directory.

  One more time -- setuid does not work with authenticated filesystems, even
when moving files on the same filesystem.  Your solution will not work in our
environment.  I do not know how many times I am going to have to repeat it
before you understand it.
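
The setuid point can be illustrated.  Under NFS root squashing, requests
arriving from client UID 0 are remapped to "nobody" (UID -2), so a
setuid-root daemon gains no special privilege over mounted files.  A toy
Python sketch of that mapping (the UIDs follow the text above; none of this
is Athena or entomb code):

```python
# Sketch of NFS "root squash": requests from client UID 0 reach the
# server as UID "nobody" (-2, i.e. 65534 as an unsigned 16-bit value),
# so a setuid-root entomb daemon cannot move another user's NFS files.

NOBODY = 65534  # UID -2 in 16-bit two's complement

def effective_server_uid(client_uid: int, root_squash: bool = True) -> int:
    """UID the NFS server uses when checking permissions."""
    if root_squash and client_uid == 0:
        return NOBODY
    return client_uid

def may_move(file_owner: int, client_uid: int) -> bool:
    """Very rough permission check: only the owner may move the file."""
    return effective_server_uid(client_uid) == file_owner

# A setuid-root daemon (client UID 0) tries to move a file owned by
# UID 1234 on an NFS mount with root squashing -- and fails:
assert not may_move(file_owner=1234, client_uid=0)
# The user's own process succeeds:
assert may_move(file_owner=1234, client_uid=1234)
```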

|> ____________________________________________________________________________
|> |variable setting      action                                              |
|> ____________________________________________________________________________
|> |"no"                  no files are entombed                               |
|> |"yes" (the default)   all files are entombed                              |
|> |"yes:pattern"         only files matching pattern are entombed            |
|>  "no:pattern"          all files except those matching pattern are entombed
|> +__________________________________________________________________________+

  Very nice.  I could implement this in delete if I wanted to; this does not
seem specific to the issues we are discussing (although it's a neat feature,
and I'll have to consider it when I have time to spend on developing delete).
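
For the curious, the quoted variable scheme is simple enough to sketch.
Assuming shell-style glob patterns (an assumption -- the table does not say
what pattern syntax entomb actually uses), the decision logic might look
like this:

```python
from fnmatch import fnmatch

def should_entomb(setting: str, filename: str) -> bool:
    """Interpret an entomb-style setting: "no" entombs nothing,
    "yes" (the default) entombs everything, "yes:pattern" entombs
    only matches, and "no:pattern" entombs everything except matches.
    Glob syntax for the pattern is an assumption, not something the
    quoted table specifies."""
    if setting == "no":
        return False
    if setting == "yes":
        return True
    mode, _, pattern = setting.partition(":")
    matched = fnmatch(filename, pattern)
    return matched if mode == "yes" else not matched

assert should_entomb("yes", "thesis.tex")
assert not should_entomb("no", "thesis.tex")
assert should_entomb("yes:*.tex", "thesis.tex")
assert not should_entomb("yes:*.tex", "core")
assert not should_entomb("no:*.o", "main.o")
assert should_entomb("no:*.o", "main.c")
```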

|>   If the file to be entombed is NFS mounted from a remote
|>      host, the entomb program would be unable to move it to the
|>      tomb because of the mapping of root (UID 0) to nobody (UID
|>      -2).  Instead, it uses the RPC mechanism to call the entombd
|>      server on the remote host, which does the work of entombing.

  We considered this too, and it was rejected because of the complexity
argument I mentioned in my last posting.  Your daemon has to be able to figure
out what filesystem to call via RPC, using gross stuff to figure out mount
points.  Even if you get it to work for NFS, you've got to be able to do the
same thing for AFS, or for RVD, which is the other file protocol we use.  And
when you add new file protocols, your daemon has to be able to understand them
to know where to direct the remote RPC.  Not generalized.  Not scalable.
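
To make the complexity argument concrete, here is a rough sketch of the
mount-point grovelling such a daemon would need: a longest-prefix match of
the file's path against the mount table, plus per-protocol knowledge of
which server to call.  The mount table below is invented, and real code
would need separate cases for NFS, AFS, RVD, and every protocol added
later:

```python
import posixpath

# Simulated mount table: mount point -> (protocol, server, remote path).
# A real daemon would parse the system mount table instead, and would
# need protocol-specific logic for NFS, AFS, RVD, etc.
MOUNTS = {
    "/": ("local", None, "/"),
    "/mnt/jik": ("nfs", "fileserver.mit.edu", "/export/u1/jik"),
    "/afs": ("afs", None, "/afs"),
}

def resolve(path):
    """Find the mount entry governing `path` by longest-prefix match,
    and translate the local path into the server-side path."""
    best = max(
        (mp for mp in MOUNTS
         if path == mp or path.startswith(mp.rstrip("/") + "/")),
        key=len,
    )
    proto, server, remote = MOUNTS[best]
    rel = posixpath.relpath(path, best)
    return proto, server, (remote if rel == "." else posixpath.join(remote, rel))

assert resolve("/mnt/jik/paper.tex") == (
    "nfs", "fileserver.mit.edu", "/export/u1/jik/paper.tex")
assert resolve("/etc/passwd")[0] == "local"
```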

  Furthermore, you have to come up with a protocol for the RPC requests.  Not
difficult, but not easy either.

  Furthermore, the local entombd has to have some way of authenticating to the
remote entombd.  In an environment where root is secure and entombd can just
use a reserved port to transmit the requests, this isn't a problem.  But in an
environment such as Athena's where anyone can hook up a PC or Mac or
workstation to the network and pretend to be root, or even log in as root on
one of our workstations (our public workstation root password is "mroot";
enjoy it), that kind of authentication is useless.

  No, I'm not going to debate with you why people have root access on our
workstations.  I've done that flame once, in alt.security shortly after it was
created.  I'd be glad to send via E-mail to anyone who asks, every posting I
made during that discussion.  But I will not debate it again here; in any
case, it is tangential to the subject currently being discussed.

  By the way, the more I read about your entomb system, the more I think that
it is a clever solution to the problem it was designed to solve.  It has lots
of nice features, too.  But it is not appropriate for our environment.

|> }  Um, the whole idea of Unix is that the user knows what's in the file
|> }hierarchy.  *All* Unix file utilities expect the user to remember where files
|> 
|> Not exactly true. Note this is the reason for the PATH variable, so that
|> you do not have to remember where every God-blessed command resides.

  Running commands is different from manipulating files.  There are very few
programs which manipulate files that allow the user to specify a filename and
know where to find it automatically.  And those programs that do have this
functionality do so by either (a) always looking in the same place, or (b)
looking in a limited path of places (TEXINPUTS comes to mind).  I don't know
of any Unix program which, by default, takes the filename specified by the
user and searches the entire filesystem looking for it.  And no, find doesn't
count, since that's the one utility that was specifically designed to do this,
since nothing else does (although even find requires that you give it a
directory to start in).
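
The PATH/TEXINPUTS style of lookup is a search over a short, fixed list of
directories -- nothing like a filesystem-wide hunt.  A sketch of that lookup
in Python (the directory names in the demonstration are invented):

```python
import os
import tempfile

def search_path(filename, path_dirs):
    """Return the full path of the first directory in `path_dirs`
    containing `filename`, the way the shell resolves commands against
    $PATH or TeX resolves inputs against TEXINPUTS; None if the file
    is nowhere on the path."""
    for d in path_dirs:
        candidate = os.path.join(d, filename)
        if os.path.exists(candidate):
            return candidate
    return None

# Demonstration with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "ls"), "w").close()
    assert search_path("ls", ["/nonexistent", d]) == os.path.join(d, "ls")
    assert search_path("no-such-file", [d]) is None
```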

|> Well, I could copy 300meg of GIFs to /tmp and keep touching them
|> every few hours or so (say with a daemon I run from my crontab) and
|> the effect would be the same.

  You could, but I might not keep 300meg of space in my /tmp partition,
whereas I would probably want to keep as much space as possible free in my
entomb partitions, so that deleted files would not be lost prematurely.

|> Well, then this is an absolute kludge. How ridiculous to have to mount and
|> unmount everyones directory when they log in/out. ABSURD!.

  See above.  What you are calling "ABSURD" is pretty much accepted as the
wave of the future by almost every major workstation manufacturer and OS
developer in the world.  Even the Mac supports remote filesystem access at
this point.

  How else do you propose a network of 1000 workstations deal with all the
users' home directories?  Oh, I forgot, you don't think anyone should need to
have a network of 1000 workstations.  Right, Bruce.

|>   In fact, what you have appearantly makes it impossible for me to access
|> any other users files that he might have purposefully left accessable
|> unless he is logged into the same workstation.

  No, we have not.  As I said above, you don't know what you're talking about,
and making accusations at Project Athena when you haven't even bothered to try
to find out if there is any truth behind the accusations is unwise at best,
and foolish at worst.  Project Athena provides "attach", an interface to
mount(2) which allows users to mount any filesystem they want, anywhere they
want (at least, anywhere that is not disallowed by the configuration file for
"attach").  All someone else has to do to get to my home directory is type
"attach jik".

  Do not assume that Project Athena is like Purdue and then infer what we do
on that basis.  Project Athena is unlike almost any other environment in the
world (although there are a few that parallel it, such as CMU's Andrew system).

|> And if I am working on a workstation and 10 people happen to rlogin
|> to it at the same time, boy are my processes gonna keep smokin. 

  Workstations on Project Athena are private.  One person, one machine (there
are exceptions, but they are just that, exceptions).

|>   No the idea of an Xterminal with a small processor to handle the 
|> Xwindows, and a large system to handle the rest is MUCH MUCH more reasonable
|> and functional.

  You don't know what you're talking about.  Project Athena *used to be*
composed of several large systems connected to many terminals.  Users could
only log in on the cluster nearest the machine they had an account on, and
near the end of the term, every machine on campus was unusable because the
loads were so high.  Now, we can end up with 1000 people logged in at a time
on workstations all over campus, and the performance is still significantly
better than it was before we switched to workstations.

|> It is my perogative to announce my opinion on whatever the hell I choose,
|> and it is not yours to tell me I cannot. Again this seems like a worthless
|> stupid kludge. What is next - a password so that you can execute ls?

  You asserted that we should not be writing to system source code with our
own account.  I responded by pointing out that, in effect, we are not.  We
simply require separate Kerberos authentication, rather than a completely
separate login, to get source write access.  Now you respond by saying that
that authentication is wrong, when it is in fact what you implied we should be
doing in the first place.

-- 
Jonathan Kamens			              USnail:
MIT Project Athena				11 Ashford Terrace
jik at Athena.MIT.EDU				Allston, MA  02134
Office: 617-253-8085			      Home: 617-782-0710



More information about the Comp.unix.admin mailing list