SCO Xenix System Hang

Steve Dyer dyer at spdcc.COM
Fri Dec 16 05:05:22 AEST 1988


In article <418 at marob.MASA.COM> daveh at marob.masa.com (Dave Hammond) writes:
>>[description of a hung system]
>
>You have exceeded the system-imposed open files limit (_NFILES in stdio.h)
>for a single user-id.  The only solution is to assign individual user
>accounts to each of your users.  The shell prompt will continue to be issued,
>as no files are opened by simply pressing Return.  However, as soon as a
>valid (non-builtin) command is issued, the shell must fork/exec the child
>program, which involves opening more files (the program binary itself,
>at the very least) and fails due to the afore-mentioned file table overflow.

I'm afraid that you have things a little mixed up.

There is a global (per-system) limit on the maximum number of files which
may be open at once.  In most versions of UNIX, objects of type "struct file"
are allocated from a global array, "file[NFILE]", where NFILE may be a
compile-time manifest constant or a tunable parameter used at boot time
for memory allocation.  Running over this limit generates a kernel printf,
"file table overflow", on the console of most versions of UNIX, and causes
the offending system call to return -1 with errno set to ENFILE.

There is a per-process limit on the maximum number of open files, usually
referred to as NOFILES, and this ranges from 20 to 60 or more, depending on
your variant of UNIX.  Hopefully, _NFILE (from stdio.h) will agree with
NOFILES, though I have come across flavors where the kernel folks weren't
talking to the library folks.  An open/dup/creat system call which would
exceed this limit returns -1 with errno set to EMFILE.
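You can probe the per-process limit empirically with a sketch like the
following (again modern C, untested on Xenix), which dup()s a descriptor
until the call fails with EMFILE:

    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>

    int main()
    {
        int n = 3;    /* stdin, stdout, stderr are already open */

        /* duplicate stdin until the descriptor table fills up */
        while (dup(0) >= 0)
            n++;

        if (errno == EMFILE)
            printf("per-process open file limit: %d\n", n);
        else
            perror("dup");
        return 0;
    }

The number it prints is your NOFILES; compare it against _NFILE in your
stdio.h to see whether the kernel folks and the library folks were talking.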

There is a limit on the maximum number of simultaneously-running processes
for a non-root user id, known as MAXUPRC, which is usually something like 25.
fork() returns -1 with errno set to EAGAIN if MAXUPRC is reached.
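A last sketch in the same vein, strictly hypothetical and not something to
run on a machine anyone cares about, since it deliberately fills your
process slots: fork() children until EAGAIN comes back, count how far you
got, then clean up.

    #include <stdio.h>
    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main()
    {
        int n = 1;    /* count this process itself */

        for (;;) {
            int pid = fork();

            if (pid < 0) {
                if (errno == EAGAIN)
                    printf("per-user process limit hit at %d\n", n);
                else
                    perror("fork");
                break;
            }
            if (pid == 0) {
                pause();        /* child: hold the slot open */
                _exit(0);
            }
            n++;
        }

        signal(SIGTERM, SIG_IGN);  /* parent ignores the signal... */
        kill(0, SIGTERM);          /* ...then terminates every child */
        return 0;
    }

The count it reports should land right around MAXUPRC.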


