Non-Destructive Version of rm

The Grand Master asg at sage.cc.purdue.edu
Wed May 8 06:58:41 AEST 1991


In article <1991May7.095912.17509 at athena.mit.edu> jik at athena.mit.edu (Jonathan I. Kamens) writes:
}
}  (I have addressed some of Bruce's points in my last posting, so I will not
}repeat here any point I have made there.)
}
}In article <11941 at mentor.cc.purdue.edu>, asg at sage.cc.purdue.edu (The Grand Master) writes:
}|> environment on all workstations. If so, /var/preserve SHOULD 
}|> exist on all workstations if it exists on any. Maybe you should make
}  The idea of mounting one filesystem from one fileserver (which is what
}/var/preserve would have to be, if it were to look the same from any
Are you telling me that when you log in, you have to wait for your home
directory to be mounted on the workstation you log in on? This is
absolutely horrid!! However, my suggestion of PUCC's entomb (with a
couple of mods) is very useful. Here is how it goes.
First, you have a user named charon (yeah, the boatkeeper) who will
be in control of deleted files.
Next, at the top level of each filesystem you put a directory named
tomb - in other words, instead of the jik directory being the only
directory on your partition, there are two - jik and tomb.
Next, you use PUCC's entomb library. You will need to make a few mods,
but there should be little problem with that. The entomb library is full
of functions (unlink, link, etc.) which will be called from rm
instead of the "real" unlink, and which will, if necessary,
call the real unlink. What these functions actually do is
call on a process, entombd (which runs suid to root - ohmigod), to move
your files to the tomb directory. The current library does not
retain directory structure, but that is a small problem to fix. The
important thing is that files are moved to the tomb directory that
is on the same filesystem as the directory from which they are deleted.
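The same-filesystem constraint is what makes this cheap: the "move" can be
a rename(2) rather than a copy. A minimal sketch of building the tomb
destination while retaining directory structure (the function name and
layout are my illustration, not PUCC's actual code):

```c
#include <stdio.h>
#include <string.h>

/* Sketch only: build the tomb destination for a deleted file,
 * preserving its path under <fsroot>/tomb.  Because source and
 * destination live on the same filesystem, entombd can then use
 * rename(2) instead of copying the data.  Illustrative, not
 * PUCC's actual code. */
int tomb_path(char *dst, size_t n, const char *fsroot, const char *path)
{
    size_t r = strlen(fsroot);

    if (strncmp(path, fsroot, r) != 0 || path[r] != '/')
        return -1;                  /* file is not on this filesystem */
    snprintf(dst, n, "%s/tomb%s", fsroot, path + r);
    return 0;
}
```

So deleting /u1/jik/src/zsh.1 would park it at /u1/tomb/jik/src/zsh.1,
and unrm's job is just the reverse rename.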
tomb is owned by charon, and is mode 700. The companion program unrm can
restore the files to their original location (note that in this case you do
not necessarily have to be in the same directory from which you deleted
them - though the files WILL be returned to the directory from which you
deleted them). Unrm will only let you restore a file if you have read
permission on the file and write permission on the directory to which
it will be restored. Just as important, since the ownership, permissions,
and directory structure of the files will be kept, you still will not be
able to look at files you are not authorized to look at. You no longer
have to worry about moving files to a new filesystem. You no longer
have to worry about looking at stupid .# files. And since preend(1) also
takes care of cleaning out the tomb directories, you no longer need
to search for them. Another nice thing is that preend is capable of
specifying different times for different files. A few quotes from the
PUCC man page on entomb:

    You can control whether or not your files get entombed with
     the ENTOMB environment variable:

____________________________________________________________________________
|variable setting      action                                              |
|__________________________________________________________________________|
|"no"                  no files are entombed                               |
|"yes" (the default)   all files are entombed                              |
|"yes:pattern"         only files matching pattern are entombed            |
|"no:pattern"          all files except those matching pattern are entombed|
+__________________________________________________________________________+

.......
     If the file to be entombed is NFS mounted from a remote
     host, the entomb program would be unable to move it to the
     tomb because of the mapping of root (UID 0) to nobody (UID
     -2).  Instead, it uses the RPC mechanism to call the entombd
     server on the remote host, which does the work of entombing.

..........
 Files destroyed by the library calls in the entombing
     library, libtomb.a, are placed in subdirectories on each
     filesystem.  The preening daemon, preend, removes old files
     from these tombs.  If the filesystem in question is less
     than 90% full, files are left in the tomb for 24 hours,
     minus one second for each two bytes of the file.  If the
     filesystem is between 90 and 95% full, files last 6 hours,
     again adjusted for file size.  If the filesystem is between
     95 and 100% full, files last 15 minutes.  If the filesystem
     is more than 100% full, all files are removed at about 5
     minute intervals.  An exception is made for files named
     "a.out" or "core" and filenames beginning with a "#" or end-
     ing in ".o" or "~", which are left in the tomb for at most
     15 minutes.
........

 The entombing library, libtomb.a, contains routines named
     creat, open, rename, truncate, and unlink that are call-
     compatible with the system calls of the same names, but
     which as a side effect may execute /usr/local/lib/entomb to
     arrange for the file in question to be entombed.

     The user can control whether or not his files get entombed
     with the ENTOMB environment variable.  If there is no ENTOMB
     environment variable or if it is set to "yes", all files
     destroyed by rm, cp, and mv are saved.  If the ENTOMB
     environment variable is set to "no", no files are ever
     entombed.

     In addition, a colon-separated list of glob patterns can be
     given in the ENTOMB environment variable after the initial
     "yes" or "no".  A glob pattern uses the special characters
     `*', `?', and `[' to generate lists of files.  See the
     manual page for sh(1) under the heading "Filename
     Generation" for an explanation of glob patterns.

     variable setting      action
     ---------------------------------------------------------------
     "no"                  no files are entombed
     "yes" (the default)   all files are entombed
     "yes:pattern"         only files matching pattern are entombed
     "no:pattern"          all files except those matching pattern
                           are entombed

     If the ENTOMB environment variable indicates that the file
     should not be entombed, or if there is no tomb directory on
     the filesystem that contains the given file, the routines in
     this library simply invoke the corresponding system call.
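The yes/no-plus-patterns decision is easy to picture in code. A hedged
sketch (fnmatch(3) stands in for whatever glob matcher libtomb.a really
uses; the function name and structure are mine):

```c
#include <fnmatch.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the ENTOMB decision logic described above; illustrative,
 * not the real libtomb.a code.  Returns 1 if the file should be
 * entombed, 0 if it should really be destroyed. */
int should_entomb(const char *name)
{
    const char *e = getenv("ENTOMB");
    char buf[1024];
    char *pat;
    int base;                   /* answer when a pattern matches */

    if (e == NULL)
        return 1;               /* no variable: entomb everything */
    if (strncmp(e, "yes", 3) == 0)
        base = 1;
    else if (strncmp(e, "no", 2) == 0)
        base = 0;
    else
        return 1;               /* unrecognized: play it safe */

    e = strchr(e, ':');
    if (e == NULL)
        return base;            /* plain "yes" or "no" */

    /* colon-separated glob patterns after the initial yes/no */
    strncpy(buf, e + 1, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (pat = strtok(buf, ":"); pat != NULL; pat = strtok(NULL, ":"))
        if (fnmatch(pat, name, 0) == 0)
            return base;        /* matched: follow the yes/no */
    return !base;               /* no match: the opposite */
}
```

For example, ENTOMB="no:*.o:core" would entomb everything except object
files and core dumps, which would be destroyed outright.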

---------------------------------
If this is not a full enough explanation, please contact me via
email and I will try to be more thorough.

}
}|> 	However, what Jon fails to point out is that one must remember
}|> where they deleted a file from with his method too. Say for example I do
}|> the following.
}|> $ cd $HOME/src/zsh2.00/man
}|> $ delete zsh.1
}|>  Now later, when I want to retrieve zsh.1 - I MUST CHANGE DIRECTORIES
}|> to $HOME/src/zsh2.00/man. I STILL HAVE TO REMEMBER WHAT DIRECTORY I 
}|> DELETED THE FILE FROM!!!! So you gain NOTHING by keeping the file in 
}|> the directory it was deleted from. Or does your undelete program also
}|> search the entire damn directory structure of the system?
}
}  Um, the whole idea of Unix is that the user knows what's in the file
}hierarchy.  *All* Unix file utilities expect the user to remember where files

Not exactly true. Note this is the reason for the PATH variable, so that
you do not have to remember where every God-blessed command resides.
}
}  How many deleted files do you normally have in a directory in any three-day
}period, or seven-day period, or whatever?
Often many - it depends on the day
}
}|> Say I do this
}|> $ ls -las
}|> 14055 -rw-------   1 wines    14334432 May  6 11:31 file12.dat
}|> 21433 -rw-------   1 wines    21860172 May  6 09:09 file14.dat
}|> $ rm file*.dat
}|> $ cp ~/new_data/file*.dat .
}|> [ note at this point, my directory will probably grow to a bigger
}|> size since therre is now a fill 70 Meg in one directory as opposed
}|> to the 35 meg that should be there using John Navarra's method]
}
}  First of all, the size of a directory has nothing to do with the size of the
}files in it.  Only with the number of files in it.  Two extra file entries in
OK, you are right - I wasn't thinking here
}
}1. Copy 300meg of GIF files into /tmp.
}
}2. "rm" them all.
}
}3. Every day or so, "undelete" them into /tmp, touch them to update the
}   modification time, and then delete them.
}
}Now I'm getting away with using the preservation area as my own personal file
}space, quite possibly preventing other people from deleting files.

Well, I could copy 300meg of GIFs to /tmp and keep touching them
every few hours or so (say with a daemon I run from my crontab) and
the effect would be the same.
}
}  Using $HOME/tmp avoids this problem, but (as I pointed out in my first
Yes it does, as does using filesystemroot:/tomb
}
}  You could put quotas on the preserve directory.  But the user's home
}directory already has a quota on it (if you're using quotas), so why not just
}leave the file in whatever filesystem it was in originally?  Better yet, in
That is what entomb does!
}  You can't get it back in the other system suggested either.
Some kind of revision control (though I am not sure how it works) is also
present with entomb.

}|> Well most of us try not to go mounting filesystems all over the place.
}|> Who would be mounting your home dir on /mnt?? AND WHY???
}
}  In a distributed environment of over 1000 workstations, where the vast
}majority of file space is on remote filesystems, virtually all file access
}happens on mounted filesystems.  A generalized solution to this problem must
}therefore be able to cope with filesystems mounted in arbitrary locations.

Well, then this is an absolute kludge. How ridiculous to have to mount and
unmount everyone's directory when they log in and out. ABSURD! You would be
better off having a few powerful centralized systems with X Window terminals
instead of separate workstations.

  In fact, what you have apparently makes it impossible for me to access
any other user's files that he might have purposefully left accessible
unless he is logged into the same workstation. Even if he puts some files
in /tmp for me, I HAVE TO LOG INTO THE SAME WORKSTATION HE WAS ON TO GET
THEM!! And if I am working on a workstation and 10 people happen to rlogin
to it at the same time, boy are my processes gonna keep smokin'.
  No, the idea of an X terminal with a small processor to handle the
X traffic, and a large system to handle the rest, is MUCH MUCH more
reasonable and functional.
}
}  For example, let's say I have a NFS home directory that usually mounts on
}/mit/jik.  But then I log into one of my development machines in which I have
}a local directory in /mit/jik, with my NFS home directory mounted on
}/mit/jik/nfs.  This *happens* in our environment.  A solution that does not
}deal with this situation is not acceptable in our environment (and will
}probably run into problems in other environments as well).
Well, in most environments (as far as I know) the average user is not allowed
to mount file systems. 
}
}|> Is this system source code? If so, I really don't think you should be 
}|> deleting it with your own account.
}  First of all, it is not your prerogative to question the source-code access
}policies at this site.  For your information, however, everyone who has write
}access to the "system source code" must authenticate that access using a
}separate Kerberos principal with a separate password.  I hope that meets with
}your approval.

It is my prerogative to announce my opinion on whatever the hell I choose,
and it is not yours to tell me I cannot. Again, this seems like a worthless,
stupid kludge. What is next - a password so that you can execute ls?
}
}-- 
}Jonathan Kamens			              USnail:
}MIT Project Athena				11 Ashford Terrace
}jik at Athena.MIT.EDU				Allston, MA  02134
}Office: 617-253-8085			      Home: 617-782-0710

While I understand the merits of your system, I still argue that it is
NOT a particularly good one. I remove things so that I do not have
to look at them anymore. And despite your ravings at John, ls -a is
not at all uncommon. In fact, I believe it is the default if you are
root, is it not? Most people I know DO use -a most of the time; in
fact most have
alias ls 'ls -laF'
or something of the like. And I do not like being restricted from
ever naming a file .#jikisdumb or whatever I wanna name it.

			As Always,
			   The Grand Master

---------
                                   ###             ##
Courtesy of Bruce Varney           ###               #
aka -> The Grand Master                               #
asg at sage.cc.purdue.edu             ###    #####       #
PUCC                               ###                #
;-)                                 #                #
;'>                                #               ##
