File locking under Unix

Jeff Lee jeff at gatech.UUCP
Fri Jun 8 06:16:11 AEST 1984


I'm almost finished with a general-purpose tree structured accounting package
for (what I hope is) a relatively generic version of Unix. I implemented a
simple locking procedure for the master accounting file using links. I keep
a lock file around to which each process will attempt to link. Whenever a
process sees that the file is busy, it waits 1 second (the smallest interval)
and then tries again. Currently it retries 9 times before announcing failure
to the calling routine, but I have tested it with 15 retries. This means
that a process may have to wait up to 15 seconds before it can access the
master file, which is a pretty long time considering that each operation only
takes .2 to .25 seconds to complete. The problem is this: when I cranked up
5 processes that were beating on the master file pretty heavily, processes
were starving (often). I knew the scheme didn't prevent this, but I had no
idea it would be quite so bad. I have considered setting an alarm for 10
seconds and attempting the link as often as possible during that time, to try
to eliminate the synchronizing that seems to be happening. All a process needs
is to get the file once every 10 seconds to prevent starvation.
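In case it helps the discussion, the link-based scheme above boils down to
something like this (the names acquire_lock/release_lock and the idea that
tmpname is a scratch file this process already created are just illustration,
not my actual code):

```c
#include <unistd.h>   /* link(), unlink(), sleep() */
#include <errno.h>

/* tmpname must be an existing file belonging to this process. */
static int acquire_lock(const char *tmpname, const char *lockname, int retries)
{
    int i;
    for (i = 0; i < retries; i++) {
        /* link() is atomic: it fails with EEXIST if the lock is held. */
        if (link(tmpname, lockname) == 0)
            return 0;                 /* got the lock */
        if (errno != EEXIST)
            return -1;                /* unexpected error */
        if (i + 1 < retries)
            sleep(1);                 /* smallest interval, then retry */
    }
    return -1;                        /* announce failure to the caller */
}

static void release_lock(const char *lockname)
{
    unlink(lockname);                 /* drop the lock */
}
```

The atomicity of link() is what makes this work at all; the fixed 1-second
sleep is what makes everybody wake up in lockstep and starve.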
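The alarm idea would look roughly like the following: instead of sleeping a
full second between attempts, spin on link() until it succeeds or SIGALRM
fires (again, lock_within and the variable names are invented for the sketch):

```c
#include <unistd.h>   /* link(), alarm() */
#include <signal.h>
#include <errno.h>

static volatile sig_atomic_t gave_up = 0;

static void on_alarm(int sig) { (void)sig; gave_up = 1; }

/* Retry the link continuously until it succeeds or secs seconds pass. */
static int lock_within(const char *tmpname, const char *lockname,
                       unsigned int secs)
{
    gave_up = 0;
    signal(SIGALRM, on_alarm);
    alarm(secs);
    while (!gave_up) {
        if (link(tmpname, lockname) == 0) {
            alarm(0);
            return 0;   /* got the lock */
        }
        if (errno != EEXIST) {
            alarm(0);
            return -1;  /* unexpected error */
        }
    }
    return -1;          /* timed out: report failure */
}
```

This burns CPU while waiting, but it breaks up the 1-second lockstep, and a
process only has to win the race once in the alarm window.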

Does anyone else have a file locking scheme that will ensure mutual exclusion
and prevent starvation? Semaphores would be nice, but we are assuming no such
beasties exist in most Unix implementations. No frills needed: no record
locking, no N-readers/1-writer function. This can be Canada Dry, not Perrier.


