TRUE and FALSE

David T. Sandberg dts at quad.sialis.mn.org
Tue Sep 4 20:56:57 AEST 1990


In article <898 at mwtech.UUCP> martin at mwtech.UUCP (Martin Weitzel) writes:
>In article <585 at quad.sialis.mn.org> I wrote:
>>My two cents: I define FALSE as 0 and TRUE as 1 on a regular basis,
>>but only use [... followed by about 10 more lines of rules when and
> when not to use TRUE and FALSE in statements]

[...which was actually ten lines of explanation for just one rule,
partly because I do tend to be wordy, and partly in order to be certain
that I was not misunderstood and thereby avoid responses which border
on flamage criticizing understated elements of my rationale.  Silly me.]

>................................. how have you made sure that the
>readers have grasped *your* strict rules when and when not to use
>TRUE and FALSE and will apply them correctly?

Speaking for myself, I realize that this is a weak point in my own
TRUE-FALSE philosophy.  However, "my strict rules" can be quite
adequately summarized by a single comment next to the definition:

#define	TRUE	1		/* for assignments to flags ONLY! */
#define	FALSE	0		/* likewise */
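
To spell out the distinction, here is a fragment invented for this
posting ("done" and cleanup() are made-up names):

	done = TRUE;			/* allowed: assigning to a flag */

	if (done)			/* fine: the flag is tested bare */
		cleanup();

	if (done == TRUE)		/* what the comment forbids:
					   comparing against the macro */
		cleanup();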

Is this really that difficult to fathom?  Sure, the next programmer
could ignore it and use TRUE and FALSE in too general a context
anyway, but he could also similarly abuse 90% of anyone's code - for
example, using local variables for multiple unrelated tasks in a
function.  So, how often do you bother to write comments like this?

	int c;		/* for the return value of getch() ONLY! */

Not often, I'd guess.  We have to depend on other programmers to
have some modicum of sense, after all, don't we?  If someone is
going to test the results of isalpha() or whatever against a macro
without even knowing what the macro is and is for, I am tempted to
say that they need to spend the extra ten minutes looking for the
problem so as not to repeat it in the future.  But saying it would
probably bring on more responses bordering on flamage, so I won't.  (-:
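
For the record, the failure mode is simple: isalpha() and its
relatives are guaranteed only to return zero or nonzero, not 0 or 1.
A hypothetical fragment (handle_letter() is invented for the example):

	#include <ctype.h>

	if (isalpha(c) == TRUE)		/* broken: the nonzero result
					   need not be 1 */
		handle_letter(c);

	if (isalpha(c))			/* correct: test for nonzero */
		handle_letter(c);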

IMVHO, the whole point of having TRUE and FALSE available is to help
differentiate setting flags from arithmetic operations without
having to create a boolean data type.  Maybe my problem is that I
write structured(TM) code to a fault, commonly using flags to exit
loops or do error handling, rather than using goto or break.  When
the fabled next programmer comes along to read/support my code, I
expect something like "done = FALSE;" to be a lot more intuitive than
"done = 0;", since the latter could easily be arithmetic in nature,
whereas TRUE or FALSE should immediately imply that "done" is a flag.
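
A minimal sketch of the sort of loop I mean, invented for this posting
rather than lifted from real code:

	#include <stdio.h>

	#define	TRUE	1		/* for assignments to flags ONLY! */
	#define	FALSE	0		/* likewise */

	/* copy stdin to stdout, stopping at EOF or on a write error */
	void copy_input(void)
	{
		int done;		/* loop-exit flag */
		int c;			/* return value of getchar() */

		done = FALSE;
		while (!done) {
			c = getchar();
			if (c == EOF)
				done = TRUE;	/* normal exit */
			else if (putchar(c) == EOF)
				done = TRUE;	/* error exit */
		}
	}

No goto, no break, and the reader can tell at a glance that "done"
is a flag rather than a quantity.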

So, either the next programmer can take a moment to learn what TRUE
and FALSE actually are, or he can go wading through code every time he
sees 1 or 0 assigned to an int, to determine whether the result is being
used as a flag variable as opposed to an arithmetic one.  You can weigh
those tradeoffs as you see fit: the defines seem reasonable to me,
given my coding style.  (This will now undoubtedly degenerate into
a style war, in which I will not take part.)

>............................................... why don't you call
>the formal constants something like SET, YES, ON, SUCCESS, ... (and
>their counterparts CLEAR, NO, OFF, FAILURE, ...)?  Is it because
>most of you (still) grew up with PASCAL?

Not in my case at least.  And I have actually used descriptive
definitions like those you suggest on occasion.  But typically
my flag variables are named in such a way that TRUE and FALSE
"read" better than anything else. (Oh good; now maybe I'll be
accused of growing up with COBOL.)

-- 
 \\                                 \   David Sandberg, consultant   \\
 //         "Small hats!"           /   Richfield MN                 //
 \\                                 \   dts at quad.sialis.mn.org       \\


