optimization (was Re: volatile)

Every system needs one terry@wsccs.UUCP
Thu Apr 28 13:46:25 AEST 1988


In article <20065@think.UUCP>, barmar@think.COM (Barry Margolin) writes:
> In article <488@wsccs.UUCP> I write:
> >Basically, if it works without -O, it should work with -O, regardless of what
> >the compiler writer's optimization does to achieve its goal.
> 
> This is unreasonable, and probably rules out almost all optimizations
> but the simplest peephole optimizations, and even some of them.

I don't believe this.  An optimizer built on the language structure as
currently practiced should be able to handle bogus constructs just as well
as one built on the structure the proposed standard assumes.  It's just
harder to implement.  I say "so what?" to that.

> But my main objection to your blanket statement above is that it
> requires code that ACCIDENTALLY works when unoptimized to continue
> working when optimized.
[Example of an incorrect algorithm deleted]

Your example was apparently correct, but a wee bit long.  Let me back up
a little and get to the main point I was trying to convey: good code that
works will be broken by the new standard.  This code is good from both the
standpoint of K&R and the standpoint of 'standard common practice'.  I would
fully expect

	#define EOF (-1)
	unsigned char getachar();
	main()
	{
		while( getachar() != EOF);
		...
		...
		...
	}

to be optimized to the equivalent of

	unsigned char getachar();
	main()
	{
		for( ;;)
			getachar();
	}

In fact, a good optimizer would do just that, as an unsigned char can never
be negative, by definition.


> Optimizable programs have many qualities in common with portable
> programs.  Both require that the programmer be more careful

This I most certainly agree with.  In general, I write in a subset of the
"allowable" C syntax to avoid common stupidities in C implementations.

> and pay attention to the standard.

Again I agree; but K&R, not ANSI.  Most K&R compliant compilers I know are
either all derived from the same source (you know who that is... the guys
with the "deathstar" logo), or written to be compatible with those from
that source or their derivatives.  Why should I have to work harder on all
my programs, making them cryptic in the process, to allow some C "stud" to
work less on his (the compiler)?

> Also, just because a program works doesn't mean that it is correct.

No, but it's one hell of a plus :-)

The main thrust of my argument was to point out that all of the "nifty"
speed-ups you can get via optimization should be applicable to ANY assembly
code.  If you want to optimize higher, use quads and optimize constants in
expressions via binary tree insertion at associative and commutative property
boundaries in the expressions.  We all know (except, of course, that joker
Trebmal Yrret) that it is easier to write a top-down parser than a bottom-up
parser without program generators.  Human generated code is much better than
machine generated code (yacc, lex).  I know it's a bitch to optimize and it
would be a lot easier with ANSI C.  Tough.  Don't break everybody's code to
make your job easier, and don't stick me with it.  I have my own work to do.


PS: The 'you' is not meant to be indicative of Barry.  It refers to optimizer
writers.


| Terry Lambert           UUCP: ...{ decvax, ihnp4 } ...utah-cs!century!terry |
| @ Century Software        OR: ...utah-cs!uplherc!sp7040!obie!wsccs!terry    |
| SLC, Utah                                                                   |
|                   These opinions are not my companies, but if you find them |
|                   useful, send a $20.00 donation to Brisbane Australia...   |
| 'There are monkey boys in the facility.  Do not be alarmed; you are secure' |


