representation of integers
Doug Gwyn
gwyn at smoke.brl.mil
Tue Nov 6 01:05:36 AEST 1990
In article <5242 at ima.ima.isc.com> karl at ima.isc.com (Karl Heuer) writes:
>Actually, the Standard guarantees that bitstrings with the sign bit clear
>will have the obvious interpretation as a nonnegative integer. When the sign
>bit is set things get a bit murkier: apparently the Committee intended that
>two's complement, one's complement, and sign-magnitude representations are all
>legal.
Certainly those three representations are intended to be conformant.
>The relevant rule says something about "strict binary except for the sign
>bit". One interpretation would be that this says nothing at all when the sign
>bit is set, and so you could have something silly like normal binary for
>positives and Gray code for negatives.
A "pure binary numeration system", as defined in the American National
Dictionary for Information Processing Systems, is required. This is
elaborated in a footnote. (This dictionary is in effect incorporated
into the C standard near the end of section 1.6.) The bit with the
highest position need not represent a power of two, but the other bits
must represent successive powers of two, starting with 1.
This means that bitwise arithmetic on non-negative numbers is well defined
and portable (so long as no representation limit is exceeded). Indeed,
that was the main reason for this requirement.
>A related issue is the question of whether it's required for U{type}_MAX+1 to
>be a power of two, and if so, whether it must be 1<<sizeof(type)*CHAR_BIT
>(false on some Cray machines, I'm told).
While there is no such explicit constraint, it is a logical consequence
of the integral representation requirements that the largest representable
value of any unsigned integer type must be one less than a power of
two, even on a non-binary (e.g. decimal) machine (which consequently
might use less than its "natural" range of integers).
There is no requirement that all bit patterns in a representation have a
meaning. Thus, Cray is correct to have UINT_MAX+1 not be a power of
UCHAR_MAX+1.