Undelivered mail

MAILER%ALASKA.BITNET at CUNYVM.CUNY.EDU
Sun Mar 13 07:17:58 AEST 1988


Subject:  Re: I forget what it was originally called.

[Non-Deliverable:  User does not exist or has never logged on]

Reply-To: Info-C at BRL.ARPA

Received: From UWAVM(MAILER) by ALASKA with Jnet id 8466
          for SXJVK at ALASKA; Sat, 12 Mar 88 11:43 AST
Received: by UWAVM (Mailer X1.25) id 5770; Sat, 12 Mar 88 12:43:11 PST
Date:         Thu, 10 Mar 88 19:37:44 GMT
Reply-To:     Info-C at BRL.ARPA
Sender:       Info-C List <INFO-C at NDSUVM1>
Comments:     Warning -- original Sender: tag was news at unmvax.unm.EDU
From:         "Michael I. Bushnell" <mike at turing.UNM.EDU>
Subject:      Re: I forget what it was originally called.
Comments: To: info-c at brl-smoke.arpa
To:           Vic Kapella <SXJVK at ALASKA>

In article <3623 at bloom-beacon.MIT.EDU> tada at athena.mit.edu (Michael Zehr)
 writes:
>Which brings me to another question:  How good are compilers these
>days?  Can they optimize just as well as a programmer (without
>resorting to assembly, that is) or not?  For example:

Actually, optimizing C and FORTRAN compilers nowadays do better than
programmers writing in assembly language.  There goes another old
misconception...

>A)
>temp = {expression}
>a[temp] = 1;
>b[temp] = 2;
>c[temp] = 3;

>B)
>a[temp={expression}]=1;
>b[temp]=2;
>c[temp]=3;

>C)
>a[{expression}]=1;
>b[{expression}]=2;
>c[{expression}]=3;

>A) is very readable.  B) may or may not be slightly faster.  Does
>anyone know whether B) is typically faster than A)?  Would it probably
>generate the same code?  C) is slower since {expression} is evaluated
>three times, *unless* the compiler knows to evaluate it once and put
>it in a register.  Which may be about the same thing as what A) and B)
>are doing, *except* in A) and B) it might well write the value into a
>variable, which is slower than storing it in a register.  Of course,
>there's always:

Actually, I find (B) to be more readable, but that isn't what you were
concerned about.  (B) is just as fast as (A) on any optimizing
compiler, and (C) is equal as well.  However, if the expression has a
potential side effect (like a function call), but you know there
aren't any real side effects, then (C) can't be optimized and will
prove a little slower.  The GNU C compiler will take (C) (assuming no
potential side effects), calculate the expression once, and store it
in a register.  Pretty neat, huh?
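
To make that concrete, here is a rough sketch (the arrays and the
lookup() function are just made up for illustration):

int a[100], b[100], c[100];

/* The index expression is pure, so an optimizing compiler is free to
 * evaluate it once, keep it in a register, and reuse it -- style (C)
 * effectively becomes style (A).
 */
void pure_index(int x, int y)
{
    a[x * 10 + y] = 1;
    b[x * 10 + y] = 2;
    c[x * 10 + y] = 3;
}

extern int lookup(int x);   /* compiler must assume side effects */

/* Here the index comes from a function call, so the compiler has to
 * evaluate it all three times, even if you happen to know lookup()
 * has no real side effects.
 */
void impure_index(int x)
{
    a[lookup(x)] = 1;
    b[lookup(x)] = 2;
    c[lookup(x)] = 3;
}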

>D)
>{
>register temp;
>temp = {expression};
>.
>.
>.
>}
>But I find this harder to read, and furthermore, since C compilers can
>ignore register declarations anyway, it might put it in as a normal
>variable (just like A) ), but it will slow down because it's allocated
>each time through the block of code.

A compiler is allowed to ignore the declaration, yes.  But if you
have a compiler that does, you are entitled to your money back.  It is
a little like the air IP link:  no packets transmitted, but they did
say the protocol was unreliable, right?
Style (D) is a little harder to read, and usually won't get you much.
If the compiler ignores the register spec, however, you won't run into
the problem of wasting time pushing and popping stack frames:  simple
peephole optimization will generate code equivalent to declaring the
variable at the top of the function as a whole.  Declaring it static
is usually, I think, a little slower, but that is heavily architecture
dependent.
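
For what it's worth, here is a sketch of what I mean by (D); the
arrays and the index expression are arbitrary:

extern int a[], b[], c[];

void fill(int x, int y)
{
    /* Block-scoped register temporary, as in (D).  Even if the
     * compiler ignores the register keyword, simple peephole
     * optimization makes this no worse than declaring temp at the
     * top of the function -- there is no extra frame work for
     * entering the block each time through.
     */
    {
        register int temp;

        temp = x * 10 + y;
        a[temp] = 1;
        b[temp] = 2;
        c[temp] = 3;
    }
}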


                Michael I. Bushnell
                mike at turing.unm.edu
                {ucbvax,gatech}!unmvax!turing!mike

                HASA -- "A" division


