uutraffic report (in perl)

Tom Christiansen tchrist@convex.COM
Thu Nov 23 07:18:43 AEST 1989


In article <14947@bfmny0.UU.NET> tneff@bfmny0.UU.NET (Tom Neff) writes:
|On System V/386, for instance, with all debugging #undef'd and with -O
|turned on, the latest perl executable *after* both 'strip' and 'mcs -d'
|subtends 229K, and takes ~5 seconds to load, compile and interpret an
|in-line script consisting of 'exit 0;'.  

|So while I like and admire Perl and feel it's the best batched report
|writer invented, I think its limitations preclude basic tool status.

Which limitations, load time?  Somewhere it does say that perl probably
isn't good for tiny machines.  My idea of a tiny machine is my diskless
Sun 3/50 with 4 megabytes.  Here are three successive timings:

sun% /bin/time perl -e 'exit 0;'
        4.5 real         0.0 user         0.4 sys
        1.2 real         0.0 user         0.2 sys
        1.3 real         0.0 user         0.1 sys
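
Those are three back-to-back runs of the one command shown.  Under csh
you can get such a series with the repeat builtin; one way to do it
(not necessarily how I did):

sun% repeat 3 /bin/time perl -e 'exit 0;'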


So yes, that first load (across the net) took nearly 5 seconds, after
which a lot went resident.  But then, I don't try to get real work done
on a system of that size anymore.  Here are some other timings.

First, a vax750 with 8 megabytes of memory.  It sure helps to actually
have a local disk.  This is my idea of a low-end machine:

vax750% /bin/time perl -e 'exit 0;'
        1.4 real         0.0 user         0.4 sys
        1.4 real         0.0 user         0.4 sys
        0.9 real         0.0 user         0.3 sys

Here is a low-end Convex, a C120 with just 32 megabytes;
my idea of a mid-range machine.

The -e flag to /bin/time gets me extended (microsecond) precision
on real time on a C1, and on user and sys as well on a C2.

C120% /bin/time -e perl -e 'exit 0;'
        0.292628 real        0.010000 user        0.190000 sys
        0.274396 real        0.010000 user        0.170000 sys
        0.255572 real        0.000000 user        0.150000 sys

And here are timings for a C220 with 256 meg, which is definitely
getting on the high end.  Halving or doubling the number of processors
or memory doesn't make for significantly different timings:

C220%  /bin/time -e perl -e 'exit 0;'
        0.125862 real        0.004936 user        0.054461 sys
        0.114496 real        0.005027 user        0.050926 sys
        0.107741 real        0.004883 user        0.041306 sys


So I would say that for the initial load on very tiny machines,
perl is somewhat large.  But on your mid-range and larger machines,
it's quick enough for use as a command line filter.  I don't think
that the statement that "it's too slow for general work" is valid 
except on the very tiniest of machines.  Actually, the exact quote
is that "its limitations preclude basic tool status."  I wonder 
which limitations Tom Neff was thinking of.
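
By "command line filter" I mean throwaway one-liners of the following
sort (the file name here is invented; the idiom is the point).  The -n
flag wraps the -e script in a while (<>) loop over each line of input:

% perl -ne 'print if /^uucp/;' /usr/adm/messages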

|Perhaps one solution would be to stop writing '?2p' translators for a
|bit and write 'p2c' so that cherished perl tools can be C-compiled
|for lasting freshness.  Then perl becomes a superb prototyper capable
|of dashing off a fast tool for intensive use in other environments.

That's an idea, although the task of translating for as many different
machines as perl runs on would not be pleasant.  Furthermore, because
of the 'eval' capability, it would be like writing a lisp compiler;
you'd have to embed the whole interpreter in each program.  If you 
really want to do that, you can use 'dump LABEL', if you have undump,
to checkpoint after initializations.
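
To make that concrete, here is a skeleton of the dump/undump trick.
The names slow_init and QUICKSTART are my own inventions, and the exact
undump invocation varies from machine to machine (perl doesn't ship one):

    #!/usr/bin/perl
    # Do the expensive one-time setup first.
    &slow_init();

    # First run: set DUMPING in the environment to take a core dump
    # right here, then turn the core image into a pre-initialized
    # executable with something like "undump newprog /usr/bin/perl core".
    # The undumped binary begins execution with an implicit goto QUICKSTART.
    dump QUICKSTART if $ENV{'DUMPING'};

    QUICKSTART:
    # Per-run work starts here, with the initializations already done.
    print "table entry 42 is $table{42}\n";

    sub slow_init {
        local($i);
        for ($i = 1; $i <= 50000; $i++) {
            $table{$i} = $i * $i;
        }
    }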

There are plenty of things perl is NOT good for.  These include
(to my mind) debuggers and windowing applications.  But for many
other things it fits better than the previously existing tools,
sometimes even better than C itself, because of its built-ins,
optimized regex handling, and the fact that it's an interpreter.
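
As a for-instance on the theme of the subject line: given a traffic log
whose lines are just "site bytes" (a format I'm inventing here), a
per-site summary takes only a few lines, because split, associative
arrays, and printf are all built in:

    #!/usr/bin/perl
    # Tally bytes per site from input lines of the form "site bytes".
    while (<>) {
        ($site, $bytes) = split;
        $total{$site} += $bytes;
    }
    foreach $site (sort keys(%total)) {
        printf "%-12s %10d bytes\n", $site, $total{$site};
    }

The equivalent C program would be busy with hash tables and malloc
before it ever got near the actual problem.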


--tom

    Tom Christiansen                       {uunet,uiucdcs,sun}!convex!tchrist 
    Convex Computer Corporation                            tchrist@convex.COM
		 "EMACS belongs in <sys/errno.h>: Editor too big!"


