Why NULL is 0

Stephen J. Friedl friedl at vsi.UUCP
Sun Mar 13 05:07:25 AEST 1988


In article <124 at polygen.UUCP>, pablo at polygen.uucp (Pablo Halpern) writes:
> However, if I were writing a C compiler, I would choose a size for all
> pointers equal to the size of the largest possible pointer.  This would
> allow code that passed an uncast NULL to work correctly, provided NULL
> is of a type as large as a pointer.  This is not because dpANS says it
> should be so, but because so much code would break if it were not.
> Perhaps ANSI should add the restriction that all pointer types must be
> the same size in an effort to "codify common existing practice."

I think this suggestion is naive.  Presumably, the large "common"
pointer format would pass around the machine OK, but it still must
be converted to the machine's native pointer types before it can
actually be used -- how efficiently will this be done?  This might
be like the 80x86 huge-pointer calculations or the old
promote-float-to-double rules -- they make life a little easier for
the (lazy?) programmer at a larger expense in time and compiler
complexity.

Perhaps more reasonable would be to promote all pointers to the same
large width while passing them on the stack and to convert them back
on the other end: this would fix the alignment issues but would
still slow things down.

I don't favor this approach, but I bring it up in the spirit of
this discussion.  The obvious answer is to be very rigorous about
casting your pointers and knowing when to do so.  This is a
bummer for beginners, but we all had to go through it unless we
happened to develop on a VAX :-).
-- 
Life : Stephen J. Friedl @ V-Systems, Inc./Santa Ana, CA   *Hi Mom*
CSNet: friedl%vsi.uucp at kent.edu  ARPA: friedl%vsi.uucp at uunet.uu.net
uucp : {kentvax, uunet, attmail, ihnp4!amdcad!uport}!vsi!friedl


