What breaks? (was Re: 64 bit longs?)

gerst at ecs.umass.edu gerst at ecs.umass.edu
Sat Jan 19 01:29:13 AEST 1991


Reply-To: lloyd at ucs.umass.edu

>Subject: Re: What breaks? (was Re: 64 bit longs?)
>From: adeboer at gjetor.geac.COM (Anthony DeBoer)
>
>In article <1991Jan15.053356.2631 at zoo.toronto.edu> henry at zoo.toronto.edu (Henry Spencer) writes:
>>In article <54379 at eerie.acsu.Buffalo.EDU> chu at acsu.buffalo.edu (john c chu) writes:
>>>>It is intuitively appealing, but I would be surprised to see anyone
>>>>implementing it:  it would break far too much badly-written software.
>>>
>>>Can someone please tell me what would break under that model and why?
>>
>>There is an awful lot of crufty, amateurish code -- notably the Berkeley
>>kernel networking stuff, but it's not alone -- which has truly pervasive
>>assumptions that int, long, and pointers are all the same size:  32 bits.
>>
>>At least one manufacturer of 64-bit machines has 32-bit longs and 64-bit
>>long longs for exactly this reason.
>>
>>The problem can largely be avoided if you define symbolic names for your
>>important types (say, for example, net32_t for a 32-bit number in a TCP/IP
>>header) and consistently use those types, with care taken when converting
>>between them, moving them in and out from external storage, and passing
>>them as parameters.  This is a nuisance.  It's a lot easier to just treat
>>all your major types as interchangeable, but God will get you for it.
>
>It seems to me that there really isn't any _portable_ way to declare a 32-bit
>long, for example.  Not that I would want to advocate changing the syntax of C
>[again], but for most software the key thing is that the integer has at least
>enough bits, rather than a precise number of them, so perhaps if there was
>some declaration sequence like "int[n] variable", where n was the minimum
>number of bits needed, and the compiler substituted the appropriate integer
>size that met the requirement (so an int[10] declaration would get 16-bit
>integers, for example), then the language might take a step toward
>portability.  A bitsizeof() operator that told you how many bits you actually
>had to play with might help too, but even then you'd have to allow for
>machines that didn't use two's complement representation.

gack-o-matic! PL/1! run away! run away! :)
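
For what it's worth, you can get most of the way there with <limits.h> and a
project-wide header of typedefs, along the lines Henry describes.  This is
only a rough sketch -- the names net32_t and int_least32 and the selection
logic are just one way to spell it:

    /* portable_types.h -- pin down integer widths in one place (sketch) */
    #include <limits.h>

    /* An exactly-32-bit unsigned type for on-the-wire quantities. */
    #if UINT_MAX == 0xFFFFFFFF
    typedef unsigned int  net32_t;
    #elif ULONG_MAX == 0xFFFFFFFF
    typedef unsigned long net32_t;
    #else
    #error "no 32-bit unsigned type on this machine"
    #endif

    /* "At least 32 bits", in the spirit of the int[n] idea above. */
    #if INT_MAX >= 2147483647
    typedef int  int_least32;
    #else
    typedef long int_least32;
    #endif

Code that uses net32_t for protocol headers and int_least32 for counters
survives a move to 64-bit longs without edits; only this header changes.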

IMHO, the ideal language would have two forms of scalar values:

     1) ranges (min..max)
     2) bitfields on a non-struct basis.

This would solve sooooooooooooooo many headaches.  Of course C has neither
of these, thus giving me serious migraines :)  
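
The closest you can get in C today is faking both: a typedef plus a checking
macro for the range, and a one-member struct for the bitfield.  A rough
sketch only (minute_t, SET_MINUTE, and uint10 are names I just made up):

    #include <assert.h>

    /* A "range" 0..59: the typedef documents intent, the macro does
       the checking the compiler won't do for you. */
    typedef int minute_t;             /* conceptually 0..59 */
    #define SET_MINUTE(var, val) \
            (assert((val) >= 0 && (val) <= 59), (var) = (val))

    /* A standalone 10-bit quantity: C only allows bit-fields inside
       a struct, so wrap one. */
    struct uint10 { unsigned value : 10; };

    int main(void)
    {
        minute_t m;
        struct uint10 tenbits;

        SET_MINUTE(m, 42);        /* fine */
        tenbits.value = 1023;     /* largest value that fits in 10 bits */
        return 0;
    }

Of course the range check happens at run time and costs a test, which is
exactly why it belongs in the language instead.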

[ stuff deleted ]

>-- 
>Anthony DeBoer - NAUI #Z8800                           adeboer at gjetor.geac.com
>Programmer, Geac J&E Systems Ltd.             uunet!jtsv16!geac!gjetor!adeboer
>Toronto, Ontario, Canada             #include <std.random.opinions.disclaimer>

Chris Lloyd - lloyd at ucs.umass.edu
"The more languages I learn, the more I dislike them all" - me


