Time to standardize "true" and "false"

Tom Karzes karzes at mfci.UUCP
Tue Sep 26 05:02:28 AEST 1989


In article <9464 at attctc.Dallas.TX.US> wjf at attctc.Dallas.TX.US (Jesse Furqueron) writes:
>I would suggest rather than FALSE = 0 and TRUE = 1, that the "real" definition
>of TRUE is not FALSE (TRUE = not 0), i.e. TRUE = !0.  Therefore the following
>
>#define FALSE	0
>#define TRUE	!0 

That's silly.  You should either assume the values that C guarantees:

    #define FALSE 0
    #define TRUE  1

Or else assume nothing and let the compiler figure it out each time:

    #define FALSE (0 != 0)
    #define TRUE  (0 == 0)
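(For what it's worth, "!0" buys you nothing anyway: the "!" operator is
defined to yield 0 or 1, so "!0" is just an obfuscated way of writing 1.
A quick check, not from the original posts:

    #include <stdio.h>

    int main()
    {
        printf("%d\n", !0);        /* '!' yields 0 or 1, so this prints 1 */
        printf("%d\n", !0 == 1);   /* prints 1: !0 and 1 are identical */
        return 0;
    }
)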

Your mistake is that you're confusing C's truth test (!= 0) with its
canonical true and false values (F=0, T=1).  In general, the canonical
true and false values in a language must behave appropriately under its
truth test, but there may be non-canonical values which do the same.
In C, there is only one integral false value.  However, there are also
false values of pointer and floating-point type (the null pointer and
0.0).  In that sense, the "!= 0" test in itself says nothing about
canonical false being the integer zero.
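To make the distinction concrete, here is a small example (mine, not
from the original article) of values of several types that pass the
truth test even though none of them is the canonical 1:

    #include <stdio.h>

    int main()
    {
        int i = 5;            /* non-canonical true integer */
        double d = 2.5;       /* true floating point value */
        char *p = "x";        /* true (non-null) pointer value */

        if (i) printf("i tests true\n");   /* any nonzero value passes */
        if (d) printf("d tests true\n");
        if (p) printf("p tests true\n");

        /* The comparison operators, by contrast, always produce the
           canonical values 0 and 1: */
        printf("%d %d\n", i != 0, d != 0.0);   /* prints "1 1" */
        return 0;
    }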

Since these macros need only be defined once, I think it's a bit
extreme to use the latter definitions shown above, but they do provide
the proper justification for 0 and 1, which are best thought of as
constant-folded versions of those expressions (or of similar constant
expressions which generate the canonical true and false values without
depending on what those values are).
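As a sketch of what I mean by constant folding (my example, not part of
the discussion above): (0 == 0) and (0 != 0) are integer constant
expressions, so the compiler reduces them to 1 and 0 at compile time,
and they remain legal even in contexts that demand a constant:

    #define FALSE (0 != 0)
    #define TRUE  (0 == 0)

    char must_be_one[TRUE];        /* folds to char must_be_one[1] */
    char must_be_zero[FALSE + 1];  /* FALSE folds to 0, so char [1] */

    int flags[] = { TRUE, FALSE }; /* file-scope initializers must be
                                      constant expressions too */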


