Hard links to directories: why not?

Dan Bernstein brnstnd at kramden.acf.nyu.edu
Sun Jul 29 06:19:45 AEST 1990


In article <PCG.90Jul27192942 at odin.cs.aber.ac.uk> pcg at cs.aber.ac.uk (Piercarlo Grandi) writes:
  [ about Multics and various filesystem ideas ]
> There are instead filesystem organizations where a path name is just a
> string, and you use something like a btree to map that string into a
> file pointer, and instead of a current directory you have a current
> prefix. "Links" are then just names (or prefixes) that are declared to
> be synonyms, and you make a name (or prefix) resolve to another name
> (symlinks) or two names resolve to the same file block pointer (hard
> links).

This is a perfect description of the UNIX filesystem.
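The correspondence is easy to see on any UNIX machine: a hard link is a second directory entry resolving to the same inode (the "file block pointer"), while a symlink is a name that resolves to another name. A minimal sketch (my illustration, not from the original post; `stat -c %i` is the GNU coreutils spelling for printing an inode number):

```shell
set -e
dir=$(mktemp -d)          # scratch directory so we don't litter
cd "$dir"

printf 'data\n' > original
ln original hard          # hard link: a second name for the same inode
ln -s original soft       # symlink: a name that resolves to another name

# Two hard-linked names share one inode, i.e. one file:
same=$( [ "$(stat -c %i original)" = "$(stat -c %i hard)" ] && echo yes )

# The symlink is its own object whose content is the target's name:
target=$(readlink soft)

echo "hard link same inode: $same"
echo "symlink resolves to: $target"
```

Note that after `rm original`, `hard` still opens the data (the inode's link count just drops to one), while `soft` becomes a dangling name, which is exactly the name-vs-pointer distinction being described.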

> My own idea is actually very different: use a file name resolution
> service that is totally not hierarchical. File names are really sets of
> keywords, as in say "user,pcg,sources,IPC,header,macros", and a file is
> identified by any uniquely resolvable subset of the keywords with which
> it is registered with the name service, given in any order.

Ugh, ugh, ugh, ugh, ugh. I find it just as important to know that a file
doesn't exist as to find it if it does. By removing any hint of absolute
structure, you're guaranteeing that people will continually access the
wrong files without even noticing what went wrong. Do you never see the
words ``file not found''? Would you rather that the filesystem silently
read or touch or trash a file somewhere else?

---Dan



More information about the Comp.unix.wizards mailing list