ANSI prototypes, the right choice...

Steve Summit scs at adam.mit.edu
Tue Feb 12 11:57:26 AEST 1991


In article <1991Feb11.030811.25074 at sugar.hackercorp.com> peter at sugar.hackercorp.com (Peter da Silva) writes:
>In article <1991Feb9.075215.26939 at athena.mit.edu> scs at adam.mit.edu writes:
>> Since compilers may issue any warning
>> messages they want to, I suspect that Lattice is just trying to
>> prod people towards the Party line.
>
>You have it backwards. Lattice accepts mixtures. No other Ansi-compatible
>compiler I've used does... including Manx. Most of them take advantage of
>the prototypes to generate faster, more efficient function calls. If you
>mix declarations you *will* break code.

I have what backwards?

You'll have to provide an example of this code that *will* break.
If I have

	f(i, f, d)
	int i;
	float f;
	double d;

and somewhere else I invoke

	f(1, 2., 3.);

it's going to work, regardless of whether I have a prototype for
f in scope or not, and regardless of any optimizations the
compiler is trying to do.

The standard (section 3.3.2.2) makes it clear that function calls
without prototypes in scope are to be treated as they have always
been:

	If the expression that denotes the called function has a
	type that does not include a prototype, the integral
	promotions are performed on each argument[,] and
	arguments that have type float are converted to double.

A simpleminded, incorrect prototype for f such as

	extern f(int, float, double);

could certainly cause trouble, as it would allow the compiler
to pass the second argument unwidened as a float, but this is not
a consistent prototype for f.  The correct prototype for the
above definition of f, if you want to mix things, is

	extern f(int, double, double);

This particular issue (the confusing appearance of prototypes for
old-style function definitions containing float parameters) is
discussed in the comp.lang.c Frequently Asked Questions list.

A compiler's "efficient" calling sequences do not enter into this
question.  Calls to functions without prototypes in scope are
made as if a prototype were spontaneously invented, based only on
information visible in the call, and taking into account the
float => double promotion rule quoted above.  (I have heard
people speak of a clause explicitly mentioning this on-the-fly
prototype construction, but it must have been in an earlier
draft.)

Typically, the case in which the "efficient" calling sequences
cannot be used is functions with a variable number of arguments.
However, ANSI C explicitly requires that these functions be
defined using new-style syntax and called with prototypes in
scope.  Old-style functions may be assumed to be fixed-argument.
(This is actually consistent with the original definition of C;
printf has always been an anomaly.)  X3.159 section 3.3.2.2
(quoted above) continues (still talking about called functions
without prototypes):

	If the number of arguments does not agree with the number
	of parameters, the result is undefined.

This makes it clear that a compiler may (and in fact, should) use
its efficient calling sequences when prototypes are not in scope.
The variable upon which to base the choice of function-call
linkage is not the presence or absence of prototypes, but the
presence or absence of variable-length argument lists.  (I
suspect that it is a misunderstanding of this point that is
causing the confusion.)

None of this is accidental; the requirement that functions with
variable numbers of arguments be called with a prototype in scope
was made precisely so that a compiler could use one function
calling mechanism for them, and another one for all others.

To be sure, mixing prototypes with old-style definitions is
tricky, and error-prone if the old-style definitions include
"narrow" parameters (float, most treacherously; char and short
are narrow too, but their promoted type is just int).  This
probably explains K&R2's
suggestion that "mixtures are to be avoided if possible."
However, correct mixtures are supposed to work.

If a compiler sees

	extern f(int, double, double);
	f(1, 2., 3.);

and emits code for an "efficient" function call, and then turns
around and compiles a file containing

	f(i, f, d)
	int i;
	float f;
	double d;

into code which expects a different, incompatible, "inefficient"
stack frame, that compiler is nonconforming.  (Says he, sticking
his neck out.  "I'm not a language lawyer, but I play one on
comp.lang.c.")

As it stands, the real reason to avoid old-style function
definitions is that they are officially "obsolescent."  There are
a number of situations in which mixing old-style definitions and
prototype declarations is fairly silly -- if you've got a
compiler which accepts the prototypes, why not use prototype-
style definitions as well?  (On the other hand, there are
situations in which it makes sense.  #ifdefs within function
definitions are ugly, so a viable interim strategy is to use
old-style definitions, augmented by prototypes inside
#ifdef __STDC__ so that old compilers won't choke on them but new
compilers won't scream about "call of function with no prototype
in scope.")

>Please, folks. It gets tiring fixing broken code... either go all the way
>with ANSI or stick with K&R.

Good advice, for those whose code will never see a pre-ANSI
compiler.  The rest of us have to straddle the fence a bit.
(In my own code, I use mostly K&R style, except for functions
with variable numbers of arguments.)

Followups to comp.std.c; the Amiga folks are probably getting
sick of this.

                                            Steve Summit
                                            scs at adam.mit.edu


