Datalight faster than 4.2, why?

Gregory Smith greg at utcsri.UUCP
Mon May 19 02:22:37 AEST 1986


In article <131 at stracs.cs.strath.ac.uk> jim at cs.strath.ac.uk (Jim Reid) writes:
>In article <989 at dataioDataio.UUCP> bjorn at dataio.UUCP writes:
>>	The state-of-the-art in compilers has progressed on PCs,
>>	so why hasn't anyone come up with a better compiler for
>>	UNIX, or have I just not heard of it?
>>
>Comparisons like that are *totally meaningless* - What about the quality of
			    ^^^^^ depending on how many compiles
				you have to wait through.
>the generated code? What "optimisations" do the compilers perform? Do both
>produce much the same symbolic information for debugging? What's involved in
>linking object modules in the two programs? How many passes over the source
>code/expanded code/"parse trees" does each compiler do? The 4BSD compiler has
>at least 5 - 6 if you count linking. First there's the preprocessor, then
>the compiler proper has probably two passes, the assembler has another two
>for good measure (depending on how you look at the assembler). Then there's
>your configuration - how much memory does each system have? How much core
>does each compiler use/need? How much paging or swapping goes on during
>compilation? How much disk traffic - intermediate files etc - is done?
>
Give me a break. Sure, having a separate pre-processor will slow the
compiler down considerably, but is it an advantage? It only buys you
a certain amount of convenience in implementing the compiler.
Consider that the cpp has to do lexical analysis as sophisticated as
that done by the compiler in order to handle `#if's. It makes a *lot* of
sense to have the cpp/lexer/parser in a single pass - much code can be
shared. When you find an identifier, for example, you look it up in
the #define table before declaring it an identifier or keyword - as
opposed to going through everything twice. Consider the single-
character I/O that is saved - even if it is done through a pipe.
The only disadvantage is that the cpp and compiler symbol tables must
live together in the same process. If compiler A has more passes than
compiler B, it doesn't mean 'A' is better or more sophisticated - it
could just mean that the implementors of B did a better job.

Your argument that the 4.2 compiler is slower because it generates better
code makes sense, but I haven't the slightest idea which one is better
in this area.

I know of one *big* reason why the UNIX compiler would be easy to beat
- it produces human-readable assembler. If it produced a binary-coded
assembler, the costs of (1) writing out all that text (2) reading in
all that text [twice] and (3) *lexing* and *parsing* all that &*!@#@
TEXT and looking up all those mnemonics [twice!] would be saved, and no
functionality would be lost. Of course, you would want a
binary-to-human assembler translator as a utility...
This makes even more sense for any compiler that may have to run off
floppies - the full assembler text can be considerably larger than
the C program, so you would be rather limited in what you could compile
if full assembler were used.

-- 
"We demand rigidly defined areas of doubt and uncertainty!" - Vroomfondel
----------------------------------------------------------------------
Greg Smith     University of Toronto      UUCP: ..utzoo!utcsri!greg