Uses of "short"?

preece at ccvaxa.UUCP
Fri Sep 20 02:40:00 AEST 1985


> The use of "long" instead of "int" shows more attention to machine
> specificity?  /* Written  8:49 pm  Sep 13, 1985 by guy at sun.uucp in
> ccvaxa:net.lang.c */
----------
Well, yes, to my mind. The programmer, thinking abstractly about a
particular process, is likely to think of a variable quantity as
an integer or as a real.  Precision is a secondary consideration
that generally becomes primary only when the simpler assumption
fails.  The range of an integer is not likely to
be a concern until the programmer is faced with a case in which the
default assumption ("it's an integer") fails.  So I submit that
as a default, "int" is more abstract than "short" or "long." That's
why the white book says an int is "the natural size suggested by
the host machine architecture."  In practice, the programmer usually
will have a pretty good idea when a quantity is likely to violate
that default assumption on a particular machine and work around it
accordingly (whether by changing "int" to "long" or by providing
a specialized data type if "long" isn't long enough).
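
Concretely (a sketch; the names here are invented for illustration):

        int count;              /* default assumption: "it's an integer" */

        long offset;            /* known to exceed the 16-bit int range
                                   on some target machines */

        typedef long filepos_t; /* specialized type: the machine-dependent
                                   width decision lives in one declaration */
        filepos_t where;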

Now we're going to have a C standard pretty soon, in all probability,
and that may well change the default assumptions.  If "int" is
truly defined as a 16-bit quantity, I will probably change my
default working habits to use long -- otherwise my default
abstraction would be violated too often.  Up until the present,
however, we have been working with a language in which the
definitions of short, int, and long were specifically machine
dependent, and anyone porting software simply had to be aware
of the obvious places where machine dependencies showed up.

There are other languages where the purer abstraction is quite
acceptable.  In Common Lisp an integer can get to be any size
it needs to be; the system will take care of keeping it in an
appropriate kind of object for the size it currently has.  On
the other hand, there is a certain amount of overhead in that
approach that you might not want to swallow.  But it IS machine
independent.
----------
> Code that uses "int" to implement objects known to have values outside
> the range -32767 to 32767 is incorrect C.  The ANSI standard explicitly
> indicates this.  Even in the absence of such an explicit indication in
> an official language specification document, this information should be
> imparted to all people being taught C.
----------
Now that there is a reasonably firm draft standard, this is a reasonable
statement.  Not very long ago it was religious dogma.
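
To make the quoted rule concrete (a sketch, with invented names):

        int  nrecords;  /* all right only if the count stays within
                           -32767..32767 on every target machine */
        long filesize;  /* values known to pass 32767; declaring this
                           as plain int would be incorrect C */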
----------
> In a lot of cases, I damn well hope that the provable range is
> significant to the programmer's view of the process.  [gives
> example of variable used as index into fixed size array]
----------
Well, yes and no.  It is significant to the programmer that the
value be a legal index into the array, but that may not be a fixed
range in the programmer's mental model of the process.  That is, it
may be temporarily fixed, simply because it is necessary to provide
a value for the declaration, but that size may be incidental.  In
such cases the well-bred programmer will have provided a #define,
with a suggestive name, for the range of the array and anything
that checks against it, but may not be able to provide a range for
the index other than "fitting within the array," which may not be
well modeled by architecturally convenient number sizes.  An array
with a dynamic size is another example.  The point is that "fitting
within the array" is a good and sufficient abstraction for the
programmer's view of the process.
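
Something like this, say, with the #define name invented for the
example:

        #define TABSIZE 512     /* incidental size; one place to change */

        int table[TABSIZE];

        clear()                 /* old-style definition, as in the day */
        {
                int i;          /* index: its only inherent range is
                                   "fitting within the array" */

                for (i = 0; i < TABSIZE; i++)
                        table[i] = 0;
        }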

-- 
scott preece
gould/csd - urbana
ihnp4!uiucdcs!ccvaxa!preece


