process interlocking problems in unix

utzoo!decvax!ucbvax!unix-wizards
Fri Oct 2 00:51:53 AEST 1981


From ucsfcgl!sdcarl!dgl at Berkeley Fri Oct  2 00:46:53 1981
The lack of a proper process interlock under unix is very lamentable.
I have dealt with this problem while trying to implement a bullet-proof
interlock for a multi-process database (a file system for sound
waveform data), and have come to the realization that there is no
way to eliminate race conditions completely short of adding an explicit
locking mechanism of some sort to the kernel.

Two solutions have proved to be good enough for most applications:
1) It is possible to use multiplexed files: a daemon process watches
references to a particular file and decides which processes are allowed
to complete the open, thereby selectively controlling the sequence of
access requests among asynchronous processes.
2) For systems where multiplexed files are not available, the following
strategy works fairly well: open a lock file and write your own process
id into it (checking first that no lock file already exists).  Close the
file, then reopen it and try to read the contents.  If any step fails,
go back to the first step.  If the read succeeds and returns the same
process id you wrote, close the file and READ IT AGAIN.  If it STILL
agrees, you've locked the system.  (If the read fails, or another
process id comes back, you've been locked out.  Enter a loop that
alternately sleeps and does an access() on the lock file until it
disappears, then repeat the entire sequence.)  As a final check, just
before deleting the lock file to free the system, read it once more to
see whether it still holds your pid.  If not, report a synchronization
error.  Even with this scheme, I still occasionally get synchronization
errors.
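
For concreteness, here is a rough C rendering of that lock-file recipe.
It is only a sketch: the lock path, the one-second poll interval, and
the function names acquire_lock()/release_lock() are my own choices,
not part of the recipe above, and it uses a modern three-argument
open().  The window between the access() check and the create is still
there, which is exactly why the double reread and the final pid check
are needed.

/* A rough C sketch of the lock-file recipe above.  The lock path,
 * poll interval, and function names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define LOCKFILE "/tmp/dbase.lock"

/* Read back the pid stored in the lock file; -1 on any failure. */
static long lockpid(void)
{
	char buf[32];
	long n;
	int fd = open(LOCKFILE, O_RDONLY);
	if (fd < 0)
		return -1;
	n = read(fd, buf, sizeof buf - 1);
	close(fd);
	if (n <= 0)
		return -1;
	buf[n] = '\0';
	return atol(buf);
}

void acquire_lock(void)
{
	long mypid = (long)getpid();
	char buf[32];
	int fd, len;

	for (;;) {
		/* Locked out: sleep and poll with access() until the
		 * lock file disappears, then try the whole sequence. */
		while (access(LOCKFILE, F_OK) == 0)
			sleep(1);
		/* Create the lock file and write our own pid into it. */
		fd = open(LOCKFILE, O_WRONLY | O_CREAT, 0644);
		if (fd < 0)
			continue;
		len = sprintf(buf, "%ld\n", mypid);
		write(fd, buf, len);
		close(fd);
		/* Reread twice; only if both reads return our pid do
		 * we believe we won the race. */
		if (lockpid() == mypid && lockpid() == mypid)
			return;
	}
}

void release_lock(void)
{
	/* Final check just before freeing the system. */
	if (lockpid() != (long)getpid())
		fprintf(stderr, "synchronization error\n");
	unlink(LOCKFILE);
}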


