Safe coding practices (was Re: Bug in users command)

Martin Weitzel martin at mwtech.UUCP
Thu Jan 31 03:33:17 AEST 1991


In article <87681 at tut.cis.ohio-state.edu> Bob Manson <manson at cis.ohio-state.edu> writes:
>In article <60 at garth.UUCP> smryan at garth.UUCP (Steven Ryan) writes:
>>Recompile what? Is the source always available? [...]
>
>I know what I thought of the "person" that hard-coded a limit on
>the # of /etc/magic entries in AT&Ts file program...and it wasn't
>kind. No, I didn't have source. No, I couldn't recompile. The
>solution was to write a replacement that didn't have any such
>stupid limit coded in it.

Agreed!

>>Why is it difficult for so-called programmers to avoid arbitrary limits?
>
>Because they don't care. I've met several people who call themselves
>"programmers" that think writing portable, reasonably limit-free code
>is a joke. They've just got a job to get done, a hacky piece of code
>to be written, and they don't care what it looks like or if it'll work
>a year from now.

Agreed!

[...]
>So get a grip, take the time to create
>data structures that don't involve fixed-sized arrays, and a lot of
>people will be much happier with you.
[...]

Again agreed ... but now comes the big BUT:

There are some reasons to use fixed limits:

	a) While building a prototype.
	b) For the simplicity of the algorithms, which may
	   make the program faster and smaller and may reduce
	   the testing/debugging effort%.
	c) Because sometimes there's no easy strategy to back
	   out of an error situation, especially in places
	   where you would typically use malloc (if you have
	   set the critical space aside in a static buffer,
	   you can always count on it; see the sketch after
	   this list).
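
To illustrate c), here is a minimal sketch (the names are invented for
the example, not taken from any particular program): the space the
error path needs is set aside statically at compile time, so reporting
`out of memory' can never itself fail for lack of memory:

	#include <stdio.h>
	#include <stdlib.h>

	/* Reserved at compile time -- still there when malloc() fails. */
	static char emergency[1024];

	static void fatal(const char *msg)
	{
		/* Build the message in the static buffer: nothing is
		 * allocated on this path (assumes msg fits the buffer). */
		sprintf(emergency, "fatal: %s", msg);
		fprintf(stderr, "%s\n", emergency);
		exit(1);
	}

	void *xmalloc(size_t n)		/* a malloc that never returns NULL */
	{
		void *p = malloc(n);

		if (p == NULL)
			fatal("out of memory");
		return p;
	}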

IMHO prototyping is a sound principle of software design. Unfortunately
you are sometimes forced (by your boss or other circumstances) to turn
away from a project prematurely, and the prototype becomes the
production version.

Judging b) is not at all easy - you can fight battles with examples and
counter-examples for hours, but to give you one: suppose your program
MUST work on a machine which has only 64KB of NON-separated I/D-space,
and you need to keep large amounts of data in core memory. If you save
the space taken up by the malloc/free routines, you may be able to
increase the limits of some of your data structures. (IMHO it is for
that reason that there are some fixed limits in yacc, which I really
*hate* when I have megabytes of memory available. Obviously nobody at
AT&T has cared to change yacc in this respect since Johnson wrote it.)
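
To make b) concrete, here is a minimal sketch (the names and the limit
are my invention, merely in the spirit of yacc's internal tables): a
compile-time table costs nothing but the array itself - no malloc/free
pulled into the binary, no per-node bookkeeping - and the overflow
check is a single comparison:

	#include <stdio.h>
	#include <string.h>

	#define MAXSYM 200	/* the arbitrary, compiled-in limit */

	struct sym { char name[16]; long val; };
	static struct sym symtab[MAXSYM];
	static int nsym;

	int addsym(const char *name, long val)
	{
		if (nsym >= MAXSYM) {
			fprintf(stderr, "symbol table full (MAXSYM = %d)\n",
				MAXSYM);
			return -1;	/* the dreaded `table overflow' */
		}
		strncpy(symtab[nsym].name, name,
			sizeof symtab[nsym].name - 1);
		symtab[nsym].name[sizeof symtab[nsym].name - 1] = '\0';
		symtab[nsym].val = val;
		return nsym++;
	}

Of course, when MAXSYM turns out to be too small a decade later, you
are exactly in the situation described above.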

Furthermore, remember that we all actually *live* with some fixed limits
in our programs: the sizes of the machine's data types. (Of course I'm
the only one using plain int-s or long-s - all the rest of you use those
fine packages capable of doing arbitrary-size integer arithmetic, all
the time, in all your programs :-)).
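
Should you care where those limits lie, <limits.h> spells them out; a
trivial sketch (the check is the important part, since signed overflow
is undefined):

	#include <stdio.h>
	#include <limits.h>

	int main(void)
	{
		int i = INT_MAX;

		printf("largest int on this machine: %d\n", i);
		/* Check *before* adding -- afterwards it's too late. */
		if (i > INT_MAX - 1)
			printf("i + 1 would overflow\n");
		return 0;
	}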

Also for c) you can find examples and counter-examples. But there are
cases where I would rather have some arbitrary limits (which are
unlikely to be hit) if that helps to get a critical operation done
reliably.

(Mind you: we could discuss b) and c) for hours and hours. We could use
excerpts of source code to prove a point or to refute it, and some of
the examples could surely be improved with better algorithms. But THERE
IS a general problem; the question is where to draw the line.)

NOTE: I don't want to justify fixed limits where they could easily
      be avoided.
      Nor do I want to justify companies that make lots of money from
      their software, haven't changed the `compiled-in' fixed limits
      for more than a decade, but of course keep the sources away from
      their poor customers, finally forcing programmers to use ugly
      kluges to get their work done. (If you've ever tried to use yacc
      for really large grammars, you know whom I'm accusing :-()

================
%: Somewhere above I wrote that programs become easier to test and
debug with simpler algorithms. Be honest: if you write a program that
uses malloc, I assume you have programmed an `emergency exit' for when
no more memory is available, but do you also always take care to create
some test data (and run your program with it at least once) so that you
can confirm it aborts the way you intended? More than once I've found
"fprintf"-s for error messages in MY programs where I had forgotten to
supply stderr as the FILE pointer, or where the arguments didn't match
the format string. I dare say: error paths are the least tested parts
of all existing software, especially the paths for errors which are
very unlikely to happen.
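
One cheap way to get that error path executed at least once - a sketch
only; the LOWMEM variable and the wrapper are my invention, not any
standard facility - is to route allocations through a wrapper that a
test run can tell to fail:

	#include <stdlib.h>

	/* Test hook: with LOWMEM=<n> in the environment, the n-th
	 * and all later allocations fail, so the `emergency exit'
	 * really gets exercised. */
	void *test_malloc(size_t n)
	{
		static long fail_at = -1;	/* -1: never fail */
		static long count;
		static int init;

		if (!init) {
			char *env = getenv("LOWMEM");

			if (env != NULL)
				fail_at = atol(env);
			init = 1;
		}
		if (fail_at > 0 && ++count >= fail_at)
			return NULL;	/* simulated out-of-memory */
		return malloc(n);
	}

Run the program once with LOWMEM=1 and once with a larger value, and
watch whether the abort message really appears on stderr, with
arguments matching the format string.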
-- 
Martin Weitzel, email: martin at mwtech.UUCP, voice: 49-(0)6151-6 56 83


