Tail recursion optimization

Alan T. Bowler [SDG] atbowler at watmath.UUCP
Tue Apr 15 11:24:49 AEST 1986


In article <708 at bentley.UUCP> kwh at bentley.UUCP (KW Heuer) writes:
>A good optimizer should compile tail-recursion into a jump anyway.  (But
>there are few good optimizers by that standard.)
>
There is a problem if you want to support a "nargs" function with this.
If an implementation stores the nargs value at some fixed position
in the call sequence, then it cannot optimize tail recursion into
a branch to this or some other function.  (And if your compiler
is going to optimize tail recursion, why not end calls as well?)
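To make the hazard concrete, here is a minimal sketch.  The nargs()
routine is hypothetical (an implementation-supplied facility of the
sort discussed here, not standard or portable C), so take it as an
illustration of the layout problem rather than as code that will
compile everywhere:

    extern int nargs();     /* hypothetical: returns the count word the
                               caller stored in its call sequence */

    int maxn();

    int
    max3(a, b, c)           /* always called with exactly three arguments */
    int a, b, c;
    {
        return maxn(a, b, c);       /* a tail call */
    }

    int
    maxn(a)                 /* learns its argument count from nargs() */
    int a;
    {
        int n;

        n = nargs();
        /* If the compiler turns max3's tail call into a plain jump,
           no new call sequence is laid down for maxn, so nargs()
           reports the count stored by max3's own caller, which
           need not be 3. */
        return n;           /* real code would walk the n arguments */
    }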
   On a lot of hardware models it is quite feasible to put an nargs
constant as part of the call sequence (as was done in pre-version 6 Unix).
If this is possible it seems the most useful place to keep it, since
it involves essentially no overhead unless the called program chooses
to use it.  But tail recursion and end call optimization then result in
the "wrong" nargs value being found (i.e. the one associated with
the original call to the function, not with the call that was
optimized away).  There are two ways around this:
  1) always pass the nargs value as an explicit hidden argument,
     which raises the overhead for every function call
     (i.e. an expensive choice to make; see the sketch below), or
  2) drop the nargs functionality, which denies the programmer a
     very useful tool for reducing the overhead and name clutter
     in his program.
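To see what option 1 amounts to in source terms, here is a small
sketch.  The compiler would pass the count invisibly; it is written
out here as an ordinary first argument (in draft-ANSI <stdarg.h>
style) just to show where the cost goes.  The names are made up for
the example:

    #include <stdarg.h>

    /* The argument count travels as an ordinary argument, so the
       callee needs nothing from the call sequence and tail calls
       can safely become jumps.  The price is one extra argument
       pushed on every single call. */
    int
    sum(int n, ...)
    {
        va_list ap;
        int total = 0;

        va_start(ap, n);
        while (n-- > 0)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    /* Every caller now pays for the count explicitly:
       sum(3, a, b, c)  rather than  sum(a, b, c).  */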
I know that the latter option is the one I must assume when I am writing
very portable programs, but there are many times when I am writing
system-specific code, and it is annoying to have to do things like
"node1(a), node2(a, b), node3(a, b, c)".  In environments where
I have NARGS guaranteed, I make more use of it than of tail recursion.
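For illustration, this is roughly what that system-specific style
buys: one entry point instead of three.  nargs() is the same
hypothetical facility as above, and mknode() is an invented helper,
so again this is a sketch, not portable C:

    extern int nargs();     /* hypothetical, as above */
    extern int mknode();    /* invented helper that builds the node */

    /* One node() replaces node1(), node2() and node3(): callers
       write node(a), node(a, b) or node(a, b, c), and the callee
       dispatches on the real argument count. */
    int
    node(a, b, c)
    int a, b, c;
    {
        switch (nargs()) {
        case 1:
            return mknode(a, 0, 0);
        case 2:
            return mknode(a, b, 0);
        default:
            return mknode(a, b, c);
        }
    }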


