FORTRAN to C converter wanted. Also tweaking code.

Moshe Braner braner at batcomputer.tn.cornell.edu
Tue Oct 4 01:04:42 AEST 1988


[Disclaimer: I have no affection for FORTRAN, but I have stubborn
customers to satisfy :-)]

Does anybody know of software to automatically convert FORTRAN code
to C?  If it turns out good C, all the better; but even if it makes ugly
C code, as long as the conversion is fast one could think of it as a
preprocessor, part of the compiler.  The code is (of course) scientific
and numerical.

I have done a fair amount of manual FORTRAN-->C translation of scientific
subroutines.  It is generally easy, with one caveat:

FORTRAN uses array indexing starting at 1; C normally starts at 0.  There
are various ways to get around that, but all easily lead to confusion and
bugs, especially when one deals with arrays of more than one dimension.
The best way is probably to shift to 0-based arrays everywhere, but in
numerical algorithms there are always hidden statements such as "a[2] = ..."
or even "for (i=n; i>3; i--) a[i] = ...".  This difficulty, I believe,
has prompted the authors of "Numerical Recipes in C" to stick to the
FORTRAN indexing, through various, somewhat awkward tricks.  They also went
the route of malloc()ing local arrays, a practice that has a severe
performance penalty if used in library procedures that are called often
and don't do very much.
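
For illustration, here is a minimal sketch of the offset-pointer way of
keeping 1-based indexing in C (vector1()/free_vector1() are names I made
up for this sketch, not the actual Numerical Recipes routines):

#include <stdlib.h>

/* Allocate n doubles and return a pointer shifted back by one, so the
 * caller can use v[1]..v[n] just like the FORTRAN original.  (The
 * shifted pointer points outside the allocated block, which is not
 * strictly portable, but works on the usual machines.) */
double *vector1(int n)
{
    double *v = (double *) malloc(n * sizeof(double));
    return (v == NULL) ? NULL : v - 1;
}

void free_vector1(double *v)
{
    free(v + 1);            /* undo the offset before freeing */
}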

I have frequently tweaked code written by scientists, resulting
in speed-up factors of 5 or better.  Besides pointer arithmetic tricks
(not always ugly: e.g., using pointers to rows, as in "p = &a[i][0];
... p[j] = ...") the standard things to do include (a small example
follows the list):

- Taking things out of loops if they don't need to be there.
- Using int ops instead of FP whenever possible.
- Using FP types in FP expressions rather than forcing int-->float
  conversions each time around the loop.
- Using shifts for integer division by powers of 2.
- Using masking instead of modulo ops (n & 0x07 is the same as n % 8
  for non-negative n, but faster).
- In C, using double instead of float (grrrrrr!).
- Using temp variables to avoid recalculations, especially to avoid
  function calls.

Saving a lot of info in memory, even large arrays, frequently allows
significant speedups if the algorithm is well thought through.
Unfortunately, many programmers still think in terms of reducing memory
use even when they have a lot more RAM than they need.
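
For concreteness, a small made-up sketch pulling a few of these tweaks
together (the function and array names are hypothetical, not from any
real program):

#define N 512

/* Hoist loop invariants, keep a pointer to the current row instead of
 * re-indexing a[i][j] every time, and use shift/mask in place of
 * division/modulo by a power of 2. */
void scale_rows(double a[N][N], double scale[N], int n)
{
    int i, j;

    for (i = 0; i < n; i++) {
        double *p = &a[i][0];   /* row pointer: p[j] instead of a[i][j] */
        double s = scale[i];    /* loop-invariant factor, fetched once  */
        for (j = 0; j < n; j++)
            p[j] *= s;
    }
}

int bucket(int k)  { return k >> 3;  }   /* same as k / 8 for k >= 0 */
int offset(int k)  { return k & 0x07; }  /* same as k % 8 for k >= 0 */
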
When it comes to speeding up I/O, using unformatted (binary) files,
through the lowest-level OS calls available, to read and write large
chunks of data at a time, can do wonders.
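
A minimal sketch of what I mean, using the low-level Unix open()/read()
calls (the path and element count are placeholders for this example):

#include <fcntl.h>
#include <unistd.h>

/* Read n doubles in one unformatted gulp; a single read() replaces
 * thousands of formatted fscanf() conversions.  Returns the number of
 * bytes actually read, or -1 on error. */
long read_block(const char *path, double *buf, long n)
{
    int fd = open(path, O_RDONLY);
    long nbytes;

    if (fd < 0)
        return -1;
    nbytes = read(fd, buf, n * sizeof(double));
    close(fd);
    return nbytes;
}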

Some of these things are redundant or ineffectual on some compilers
or machines, but they never hurt...

And yes, the 90/10 rule DOES frequently hold!

- Moshe Braner

Cornell Theory Center, 265 Olin Hall,
Cornell University, Ithaca, NY 14853
(607) 255-9401	(Work)
<braner at tcgould.tn.cornell.edu>		(INTERNET)
<braner at crnlthry> or <braner at crnlcam>	(BITNET)

--------------------------------
Why use AL if you can do it through a shell script?  It's 2000 times faster?
Well, just wait 10 years and the CPU will be that much faster... :-)
