NULL vs 0

mjs at rabbit.UUCP
Thu Jan 19 23:46:08 AEST 1984


The point is that there are an increasing number of machines whose
compilers choose 16 bits to represent an int and 32 bits for a pointer.  On
such a machine, the INCORRECT branch of the following code is simply wrong:

	#define	NULL ((char *) 0)
	int f(p,x,y)
	char * p; int x, y;
	{
		/* stuff */
		return (x);
	}

	int g()
	{
	#ifdef	INCORRECT
		(void) f(0, 17, 42);	/* 3 16-bit quantities */
	#else	/* !INCORRECT */
		(void) f(NULL, 17, 42);	/* 32 bits of 0 & 2 16-bit ints */
	#endif	/* INCORRECT */
	}
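
For the same reason, the call can be kept correct even on a system where
NULL is defined as a bare 0: cast the constant to the pointer type at the
call site.  A minimal sketch, reusing f() from the example above:

	(void) f((char *) 0, 17, 42);	/* the cast forces 32 bits of 0 onto the stack */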

All that's been asked of anyone is to write code that stays portable to
machines where it is advantageous (due to things like bus bandwidth, CPU
power, etc.) to use 16 bits to represent type int, and 32 for pointer types.
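
A concrete place where the cast matters no matter how NULL is defined is
any varargs-style call terminated by a null pointer; the line below assumes
the usual Unix execl() interface and is an illustration, not code from the
original example:

	execl("/bin/echo", "echo", "hello", (char *) 0);	/* terminator must be pointer-sized, not a 16-bit int */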
-- 
	Marty Shannon
UUCP:	{alice,rabbit,research}!mjs
Phone:	201-582-3199


