NOT Educating FORTRAN programmers to use C

Jim Giles jlg at lambda.UUCP
Fri Jan 12 12:15:45 AEST 1990


From article <649 at chem.ucsd.EDU>, by tps at chem.ucsd.edu (Tom Stockfisch):
> In the absence of vectorization, I don't think pointer aliasing is a
> significant hindrance to optimization.

In a previous article I made the incorrect statement that the number
of machines which can be well optimized in the presence of pointers
is growing small.  The fact is that the set of such machines is empty
(at least, to the best of my knowledge).  Consider the following code:

   z=1;
   *a=2;
   *b=1;

If you could tell that a and b were not aliased (to each other
or to z), the generated code would be to set a register to
one, store z, store b, increment, and store a.  However, if you
can't detect aliasing, the sequence must be executed in the order given.
On all machines I know, this would require at least one more
instruction - so even on a RISC machine with no pipelining or
vectorization to worry about, this code would be 20% slower because
of the possibility of aliasing.  If these were arrays and this
sequence of code were in an inner loop, the slowdown could
strongly affect the whole code's performance.

The truth is, any machine which has registers to schedule,
program control over cache management, pipelining, or vectors
can be affected by aliasing.  The severity of the effect depends
on the program.

> [...]                                With vectorization, you
> need some sort of "noalias" pragma.  There are, after all,
> vectorizing compilers for the Cray and other vector machines.

Pragmas are non-portable and require explicit user intervention
(which merely presents another opportunity for user error).
Beyond that, 'noalias' would be useful on ALL machines, yet as a
pragma it will probably only _ever_ be available on a small percentage
of them.

> You write character manipulation programs in Fortran?

Why not?  It's as fast as the fastest mechanism C has for characters
(or should be), it has built-in concatenate and substring syntax
(instead of C's clumsy looking function calls for these features),
and it's widely regarded as something that Fortran did (nearly)
right - even by people who otherwise don't like Fortran.

> The standard allows str*() to be built-in.

Most ANSI standards allow additional features to be implemented in
a conforming processor.  I don't consider these extensions to be
an inherent part of the language.  C also allows the user to define
additional data types - something which really _is_ lacking in Fortran.

However, the only built-in feature of C which pertains to character
strings is character string constants - which are implemented as
null-terminated sequences denoted by character pointers.  This
also happens to be the usual implementation of strings by most C
programmers.  In fact, this is such a common choice that most
people _assume_ a pointer to a null-terminated string is what
is meant by the phrase "character string".  This implementation
is less efficient than that used by most Fortrans (or Pascals,
Modulas, etc. for that matter).

However, as I said in my original posting, this is a secondary
issue since a careful C programmer _can_ handle strings efficiently
(even though most don't).

> 
>>Aside from the above two issues Fortran and C are identical with
>>respect to optimization.
> 
> I have worked on a machine that has a double-word fetch instruction.  It
> works only on double-word boundaries.  Fortran cannot make use of it
> because the 77 standard allows any two consecutive members of
> a real array to be equivalenced as a double precsion array.  Double
> precision memory fetches with fortran then require two fetch instructions,
> [...]

I too have worked on such machines.  What you're saying is hogwash.
Unless the implementor was an idiot, he would _always_ allocate double
precision variables on double-word boundaries.  The only time he would
fail to do so would be if the double _really_was_ equivalenced to a
single-word object.  Even THEN he would insert a "dead" word in the
memory allocation so that the double-word objects were properly aligned.
The only way for optimization to be inhibited would be if he equivalenced
two different double-word objects to an array of singles - AND the two
equivalences were an odd number of words apart!  In this case, most
implementations I've seen will still do the fast loads/stores on the
properly aligned data and will issue a warning about the other.

Common blocks are handled slightly differently but, even here, the
compiler can always generate efficient code for properly aligned
doubles and inefficient code (plus a warning message) for improperly
aligned doubles.  The techniques for doing this sort of thing have
been public domain for over 25 YEARS!  If your compiler can't do
this stuff, that's how far out of date it is.

> A reasonable C implementation on this machine would simply require
> type "double" to be aligned on 8 byte boundary.

Which just means that C CAN'T do something that Fortran CAN.  And,
if C DID allow doubles on odd word boundaries, it would face the same
optimization problems - worse, in fact, because the compiler can't
necessarily tell whether a pointer-to-double will be properly aligned
until run time.

So, what I said still applies: except for pointers (and character
constants), both C and Fortran are identically optimizable.

J. Giles
