Style guides and portability

Steve Summit scs at adam.mit.edu
Mon Jan 14 05:26:55 AEST 1991


Doug Gwyn (I think) wrote:
> No, any C compiler worth using (and certainly any that conforms to the
> standard) will provide at least 16 bits for an int, at least 32 bits
> for a long, and at least 8 bits for a char.  While there are uses for
> user-defined primitive data types... I don't think that int16, int32, etc.
> are justifiable.

In article <BEVAN.91Jan12120920 at orca.cs.man.ac.uk>, Stephen Bevan writes:
>What about the cases where it is a requirement that a particular int
>MUST be able to hold 32 bit numbers.  If you transfer this to a 16 bit
>int system, your software is going to die horribly.
>The only way I know around this is to define types like int32 and a
>lot of macros/functions that go along with them.

Perhaps I am missing something ridiculously subtle, but where I
come from, a "requirement that a particular int MUST be able to
hold 32 bit numbers" is (assuming "int" means "int only") an
oxymoron at best.  Standard C provides the type "long int" which
fulfills the requirement precisely.  Why make your life miserable
by cluttering the code with "a lot of macros/functions" to
implement this int32 pseudo-type?

(It's true that "Classic" C made no guarantees about type sizes,
but as Doug pointed out, ANSI X3.159 does specify that ints and
short ints are at least 16 bits, while long ints are at least 32
bits.  I thought there was language somewhere in the Standard
referring explicitly to bit counts, but I can't find it just now.
In any case, the "minimum maxima" for <limits.h> in section
2.2.4.2.1, combined with the requirement of a "pure binary
numeration system" and other language in section 3.1.2.5,
effectively imply the 16 and 32 bit sizes.)

>...define types like int32 and a
>lot of macros/functions that go along with them.  For example,
>int32plus, int32divide, ... etc.

What does this mean?  C isn't C++, but it has always defined
binary operators such as "+" as working correctly for any
"arithmetic type" (i.e. integers and floating-point numbers of
all sizes), with implicit conversions inserted as necessary.  The only
problem I have with user-defined types in C is printing them.
If you have

	int32 bigint;

do you print it with %d or %ld?  (Come to think of it, this is
another strong argument in favor of "long int" over "int32".)

                                            Steve Summit
                                            scs at adam.mit.edu

P.S. The answer to "How do you print something declared with
`int32 bigint;' ?" is that you have to abandon printf in favor of
something you define, like "print32".  I find this awkward, and
far less convenient than printf.  C++ has another syntax, which
isn't perfect, either.  User-defined output is tricky, and I'm
still waiting for an ideal solution. CLU's was, as I recall,
fairly clever.  I heard that 8th edition research Unix has a way
to "register" new printf %-formats, which I'd love to learn the
details of.



More information about the Comp.lang.c mailing list