vaxingres bugs which can cause database corruption

stevesu at copper.UUCP
Fri Oct 24 12:37:11 AEST 1986


In article <356 at adiron.UUCP>, tish at adiron.UUCP (Tish Todd) quotes
code which does stuff like:

	for (i = 0; i < NOFILE; i++)
		{ do stuff to file descriptor i}

and mentions that the code had problems when ported to Ultrix.

Unless that NOFILE is a real variable, and not the old

	#define NOFILE 20

from <sys/param.h>, code like this has more problems than the one
Tish mentioned.  In 4.3bsd, and in Ultrix, and probably in any
number of other recent Unices, the old static limit of 20 file
descriptors is no more.  In fact, I believe that the maximum
number of open file descriptors per process is now a tuneable
parameter, so a program can't know it at compile time.  Any
program that does fancy file descriptor manipulation must call
getdtablesize() at run time, and be prepared for the return
value to be arbitrarily large.  (I believe that it will never
be bigger than
64 for 4.3bsd, but it's clearly stupid for a program to assume
that; if you're going to handle arbitrary numbers of file
descriptors, why not make your code truly general and maximize
portability possibilities in the future?)
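
To make that concrete, here is one way the original loop might
be recast in terms of getdtablesize().  This is a sketch, not
code from Tish's program; close_extra_fds is my name for it:

```c
#include <unistd.h>	/* getdtablesize(), close() */

/* Close every file descriptor above stderr.  The size of the
 * descriptor table is asked for at run time, instead of being
 * frozen in at compile time via NOFILE. */
void
close_extra_fds(void)
{
	int i;
	int nfds = getdtablesize();

	for (i = 3; i < nfds; i++)
		(void) close(i);
}
```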

This change particularly affects callers of select().  The mask
arguments are now pointers to arrays of integers, not pointers to
single 32-bit integers.  (Berkeley really lucked out on this one;
they were able to maintain backwards-compatibility with 4.1bsd,
thanks in part to the close relationship between pointers and
arrays in C which has been so exhaustively discussed in
net.lang.c.)  Anyway, programs must in general dynamically
allocate the arrays they'll hand to select, using getdtablesize()
to determine how big the array needs to be.  I have used code
like (abbreviated for posting):

	#include "select.h"

	int *mask = (int *)malloc(NFDwords(getdtablesize()) * sizeof(int));

	bzero((char *)mask, NFDwords(getdtablesize()) * sizeof(int));

	Setbit(0, mask);

	select(getdtablesize(), mask, (int *)NULL, (int *)NULL,
						(struct timeval *)NULL);

	if(Getbit(0, mask))
		...

I use a header file, select.h, which contains some #definitions
to make this sort of thing easier.  It contains:

	#define BITSPERINT (sizeof(int) * 8)

	#define NFDwords(nfds) (((nfds) + BITSPERINT - 1) / BITSPERINT)

	#define Wordpos(bit) ((bit) / BITSPERINT)
	#define Bitpos(bit) ((bit) % BITSPERINT)
	#define Bitmask(bit) (1 << Bitpos(bit))
	#define Getbit(bit, mask) ((mask)[Wordpos(bit)] & Bitmask(bit))
	#define Setbit(bit, mask) ((mask)[Wordpos(bit)] |= Bitmask(bit))
	#define Clearbit(bit, mask) ((mask)[Wordpos(bit)] &= ~Bitmask(bit))

NFDwords(nfds) tells you how many ints it takes to build a bit
mask for nfds file descriptors.  The rest should be self-
explanatory.
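
As a quick sanity check of the arithmetic (the macros are
repeated here so the fragment stands on its own, and check() is
my name, not part of select.h):

```c
#include <string.h>	/* memset: ANSI's equivalent of bzero */

#define BITSPERINT (sizeof(int) * 8)
#define NFDwords(nfds) (((nfds) + BITSPERINT - 1) / BITSPERINT)
#define Wordpos(bit) ((bit) / BITSPERINT)
#define Bitpos(bit) ((bit) % BITSPERINT)
#define Bitmask(bit) (1 << Bitpos(bit))
#define Getbit(bit, mask) ((mask)[Wordpos(bit)] & Bitmask(bit))
#define Setbit(bit, mask) ((mask)[Wordpos(bit)] |= Bitmask(bit))
#define Clearbit(bit, mask) ((mask)[Wordpos(bit)] &= ~Bitmask(bit))

int
check(void)
{
	int mask[NFDwords(64)];	/* room for 64 descriptors */

	memset((char *)mask, 0, sizeof(mask));
	Setbit(33, mask);	/* lands in the second word */
	if (!Getbit(33, mask) || Getbit(32, mask))
		return 0;
	Clearbit(33, mask);
	return !Getbit(33, mask);
}
```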

I'd like to see a header file like this become standard.  The
names of the macros could certainly be changed; I'm not
particularly fond of the ones above.  I'm also worried about
assuming that there are 8 bits in the "bytes" that sizeof deals
with, but any discrepancies for unusual architectures could
certainly be encapsulated in select.h, so that #including code
wouldn't have to worry about them.
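
For instance, the draft ANSI C <limits.h> defines CHAR_BIT, the
number of bits in the "bytes" that sizeof counts in.  Assuming
your compiler provides that header, select.h could say:

```c
#include <limits.h>	/* CHAR_BIT: bits per byte on this machine */

/* Replaces the hardwired "* 8" in the definition above. */
#define BITSPERINT (sizeof(int) * CHAR_BIT)
```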

I believe that all programs should be written in terms of
getdtablesize() (unless P1003 has a better name for it), even if
they are being written for a system for which the static NOFILE
would "work."  For such systems, just write the following
six-liner, and call it getdtab.c or something:

	#include <sys/param.h>

	getdtablesize()
	{
		return(NOFILE);
	}

                                         Steve Summit
                                         tektronix!copper!stevesu



More information about the Comp.unix.wizards mailing list