Efficient coding considered harmful?

der Mouse mouse at mcgill-vision.UUCP
Mon Nov 14 19:18:44 AEST 1988


[ > from <137 at twwells.uucp>, by bill at twwells.uucp (T. William Wells) ]
[ >> from <7700 at bloom-beacon.MIT.EDU>, by scs at adam.pika.mit.edu (Steve Summit) ]
[ >>> from <119 at twwells.uucp>, by bill at twwells.uucp (T. William Wells) ]

>>> Avoid excess function calls.  A lot of people have been brainwashed
>>> by some modularity gurus into believing that every identifiable
>>> "whole" should have its own function.
>> Here's another religious debate: the structured programming one.
> I think you missed the point.  Structured programming is essential to
> good procedural programming.

The truth of this depends on what you mean by "structured programming".
An unfortunately large segment seems to believe that structured
programming means sticking strictly to certain rules, such as "never
use a goto, ever" or "no function body larger than a page, ever"
(though they never say whether they mean a terminal page or a paper
page, curiously).
And, as I am sure you are aware, blind adherence to rules cannot, in
itself, produce good programs, regardless of the rules.  Good
programmers write good programs without needing rules; bad programmers
can't write good programs even with rules.  I suspect the rules owe
their existence to the observation that good programmers' output tends
to follow them (not always, though!), so in a wonderful inversion of
cause and effect, some people deduce that following the rules will make
for good programs.
(Using such rules as guidelines may also help nudge in-between
programmers the right way.)

>> There are a few applications (and, some believe, a few
>> architectures) for which function call overhead is significant,

On a VAX, almost all compilers (certainly all of which I am aware,
except possibly for BLISS) use the CALLS/CALLG function call mechanism.
This mechanism is so horribly inefficient it arguably shouldn't have
existed in the first place.  The VAX is certainly not more than one
architecture, but it's a rather common one.  (As Steve implies, this
doesn't matter for the vast majority of programs.  I am particularly
aware of it because I have been involved in an effort to build a robot
control system, which involves guaranteeing a 28-millisecond cycle time
for certain parts of the code, and a CALLS - to pick just one example -
is a nontrivial fraction of that.)
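To make that concrete, here is a sketch (hypothetical - the names dist2
and DIST2 are mine, not from the robot code) of the usual workaround:
turning a small function called in an inner loop into a macro, trading
one call (on a VAX, one CALLS) per iteration for in-line code.

#include <stdio.h>

/* Function version: one call - on a VAX, one CALLS - per use. */
static long dist2(long dx, long dy)
{
	return dx * dx + dy * dy;
}

/* Macro version: same computation, expanded in place, no call.
 * Usual macro caveat: each argument is evaluated twice.
 */
#define DIST2(dx, dy) ((dx) * (dx) + (dy) * (dy))

int main(void)
{
	long i, sum1 = 0, sum2 = 0;

	for (i = 0; i < 100000L; i++) {
		sum1 += dist2(i % 100, i % 37);	/* pays call overhead */
		sum2 += DIST2(i % 100, i % 37);	/* no call at all */
	}
	printf("%ld %ld\n", sum1, sum2);
	return 0;
}

With side-effect-free arguments this is safe; with an i++ in the
argument list it is not, which is the price of doing the compiler's job
by hand.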

>> Please don't avoid malloc; the alternative is generally fixed-size
>> arrays and "lines longer than 512 characters are silently
>> truncated."

I have been guilty of going the fixed-size route on occasion, and
doubtless will be in the future.  Why?  Because allocating requires
that I know the size ahead of time.  If there were some way to inquire
of "the system" how much data remains to be read before the next
newline... but there isn't, and probably never will be.  For most
programs, the user-interface flaw implied by fgets() is less painful
than the time it would eat up for me to go the general route.
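(For the record, the general route looks roughly like the following
sketch; the function name readline and the initial buffer size are my
inventions.  It grows the buffer with realloc() until fgets() delivers
the newline, so no line length is ever silently truncated.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Read a line of arbitrary length; the caller free()s the result.
 * Returns NULL at EOF with nothing read, or if allocation fails.
 */
char *readline(FILE *fp)
{
	size_t size = 64;	/* initial guess; doubled as needed */
	size_t len = 0;
	char *buf = malloc(size);
	char *tmp;

	if (buf == NULL)
		return NULL;
	while (fgets(buf + len, (int)(size - len), fp) != NULL) {
		len += strlen(buf + len);
		if (len > 0 && buf[len - 1] == '\n')
			return buf;	/* whole line, newline and all */
		if (len + 1 < size)
			return buf;	/* EOF before any newline */
		size *= 2;		/* buffer filled: double it */
		tmp = realloc(buf, size);
		if (tmp == NULL) {
			free(buf);
			return NULL;
		}
		buf = tmp;
	}
	if (len == 0) {		/* immediate EOF or read error */
		free(buf);
		return NULL;
	}
	return buf;
}

A caller does buf = readline(stdin), uses it, and free()s it; the cost,
of course, is exactly the bookkeeping I was complaining about.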

>>> Unless, of course, the programmer remembered to COMMENT.
>> If the code reads
>>       a &= 3;         /* really a %= 4 */
>> or
>>       a &= 3;         /* really a %= HASHSIZE */
>> and I do a global replace of 4, or re#define HASHSIZE, the comment
>> may not help.
> Yes, but writing an explicit constant is bad to start off with.  It
> should be:
> #define HASHSIZE 4              /* A power of two, or else! */
> 	a &= HASHSIZE - 1;

How about

#if HASHSIZE & (HASHSIZE-1)
	a %= HASHSIZE;
#else
	a &= HASHSIZE - 1;
#endif

Of course, I daresay there aren't that many hashing algorithms that work
well for both power-of-two table sizes *and* non-power-of-two sizes.
But the choice of algorithm is not the point here.
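Folded into an actual lookup it might go like this (a hypothetical
sketch; the character-sum hash is a placeholder, not a recommendation):

#include <stdio.h>

#define HASHSIZE 64	/* happens to be a power of two */

#if HASHSIZE & (HASHSIZE - 1)
#define HASHFOLD(h) ((h) % HASHSIZE)		/* general table size */
#else
#define HASHFOLD(h) ((h) & (HASHSIZE - 1))	/* power of two: mask */
#endif

/* Placeholder hash: character sum, folded to a table index. */
unsigned hash(const char *s)
{
	unsigned h = 0;

	while (*s)
		h += (unsigned char)*s++;
	return HASHFOLD(h);
}

int main(void)
{
	printf("%u\n", hash("der Mouse"));	/* always < HASHSIZE */
	return 0;
}

Change HASHSIZE to, say, 67 and the preprocessor silently switches to
the modulo form, which answers the global-replace objection above.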

					der Mouse

			old: mcgill-vision!mouse
			new: mouse at larry.mcrcim.mcgill.edu


