optimization (was Re: volatile)

SuperUser root at mfci.UUCP
Sun May 1 17:36:09 AEST 1988

In article <500 at wsccs.UUCP> terry at wsccs.UUCP (Every system needs one) writes:
}Your example was apparently correct, but a wee bit long.  Let me back up
}a little and get to the main point I was trying to convey: good code that
}works will be broken by the new standard.  This code is good from both the
}standpoint of K&R and the standpoint of 'standard common practice'.  I would
}not expect
}
}	#define EOF (-1)
}	unsigned char getachar();
}	main()
}	{
}		while( getachar() != EOF);
}		...
}		...
}		...
}	}
}
}to not be optimized to the equivalent of
}
}	unsigned char getachar();
}	main()
}	{
}		for( ;;)
}			getachar();
}	}
}
}In fact, a good optimizer would do just that, as an unsigned char can never
}be negative, by definition.

No, these two programs are not equivalent.  When an unsigned integer is
compared to a signed integer, the signed integer is first converted to
unsigned (which results in no change in the bit pattern), and then the
comparison is performed.  In fact, since octal and hex constants are signed
in C, on a machine with 4-byte two's complement integers 0xffffffff is
equivalent to -1, and people compare these signed constants to unsigned
values all the time.  Most people probably think they're unsigned to begin
with.  People are often surprised to find that an expression like (u > -1)
is always false when u is unsigned, since the -1 is first converted to
unsigned, whereupon it becomes the largest possible unsigned value.
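As a small illustration (a sketch added here, not part of the original
posting, with made-up names), the following fragment shows that surprise
directly:

	#include <stdio.h>

	int main()
	{
		unsigned u = 0;

		/* The -1 is converted to unsigned for the comparison and
		 * becomes the largest possible unsigned value, so the test
		 * is always false no matter what u holds. */
		if (u > -1)
			printf("u > -1 is true\n");
		else
			printf("u > -1 is false\n");	/* always taken */

		return 0;
	}

Many compilers will flag the signed/unsigned comparison if warnings are
enabled, which is exactly the surprise being described.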
