Optimization in a better world

David Chase chase at Ozona.orc.olivetti.com
Wed Jun 1 07:04:05 AEST 1988


In article <15422 at brl-adm.ARPA> dsill at nswc-oas.arpa (Dave Sill) writes:
>From: Steven Ryan <smryan at garth.uucp>
>>The point being, do not expect magic from a compiler. It can provide at best
>>linear improvement.
>
>Since when?  [examples of better than linear improvement, some silly]

Agreed, but a much more plausible optimization is the introduction of
memo-functions.  This is not a completely trivial transformation
(figuring out when to apply it is the hard part, really), but use of
memo-functions will convert exponential-time algorithms (for example,
f(x) = if x < 2 then x else f(x-1) + f(x-2)) into polynomial-time
algorithms.  For many structured problems you can do this by hand and
call it "dynamic programming", but that is only a special case.
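
A minimal sketch in C of what the transformation produces -- the table,
the names, and the bound are mine, not from any particular compiler:

	#include <stdio.h>

	/* naive version: the call tree is exponential in x */
	long fib(int x)
	{
	    return x < 2 ? x : fib(x - 1) + fib(x - 2);
	}

	#define MAXARG 47         /* fib(46) still fits in a 32-bit long */
	static long memo[MAXARG]; /* 0 means "not yet computed" */

	/* memoized version: each argument computed once, so linear time */
	long mfib(int x)
	{
	    if (x < 2)
	        return x;
	    if (memo[x] == 0)
	        memo[x] = mfib(x - 1) + mfib(x - 2);
	    return memo[x];
	}

	int main(void)
	{
	    /* instant; the naive fib(40) makes roughly 3e8 calls */
	    printf("%ld\n", mfib(40));
	    return 0;
	}

A compiler could in principle introduce the table itself whenever it
can prove the function is pure; deciding when the transformation
actually pays is, as noted above, the hard part.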

Other forms of this optimization have been discussed for functional
languages--see John Hughes, "Lazy Memo-functions" in Springer-Verlag
LNCS #201, Kieburtz and Shultis, "Transformations of FP program
schemes" in the 1981 Conference on Functional Programming Languages
and Computer Architecture, and (I think) a paper by Norman Cohen that
appeared in the last couple of years in TOPLAS.

(Proposed) APL compilers used pattern matching to discover common
"idioms" (Perlis's term) that could be compiled efficiently.  The
expected speedup was "only" linear, but the constant factor was
typically 10 or 100.  There was a technical report on this out of Iowa
State University in 1982 by Cheng, Omdahl, and Strawn--some problems
remained open back then, but several of them have since been solved.
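
To make the flavor concrete, here is a hand-done illustration (mine,
not taken from the report) of compiling the APL idiom "+/iota N", sum
of the first N integers.  The literal evaluation builds a temporary
vector and then reduces it; the recognized idiom fuses the two into a
single allocation-free pass, a constant-factor (but large) win:

	#include <stdlib.h>

	/* literal evaluation: materialize iota n, then reduce with + */
	long sum_iota_literal(long n)
	{
	    long *v = malloc(n * sizeof *v);
	    long i, s = 0;
	    if (v == NULL)
	        return -1;        /* allocation failure */
	    for (i = 0; i < n; i++)
	        v[i] = i + 1;
	    for (i = 0; i < n; i++)
	        s += v[i];
	    free(v);
	    return s;
	}

	/* recognized idiom: same O(n) work, but no temporary vector */
	long sum_iota_fused(long n)
	{
	    long i, s = 0;
	    for (i = 1; i <= n; i++)
	        s += i;
	    return s;
	}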

David Chase
Olivetti Research Center, Menlo Park


