binary to ascii

Paul John Falstad pfalstad at phoenix.Princeton.EDU
Sat Sep 15 04:56:09 AEST 1990


In article <13680 at hydra.gatech.EDU> cc100aa at prism.gatech.EDU (Ray Spalding) writes:
>In article <574 at demott.COM> kdq at demott.COM (Kevin D. Quitt) writes:
>>In article <371 at bally.Bally.COM> siva at bally.Bally.COM (Siva Chelliah) writes:
>>>    i=(int ) c;
>>>    i=i &  0x00FF;   /* this is necessary because when you read, sign is 
>>>                        extended in c   */
>>    Try "i = (unsigned int) c;" and you'll see it isn't necessary.  
>This is incorrect (where c is a signed char).  When converting from a
>signed integral type to a wider, unsigned one, sign extension IS
>performed (in two's complement representations).  See K&R II section

True.  But the best way to avoid the sign extension is not a bitwise
AND.  Use two casts:

		i = (unsigned int) (unsigned char) c;
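
Here's a quick test to convince yourself (just an illustration, not code
from the original thread; it assumes 8-bit chars on a machine where plain
char is signed):

		#include <stdio.h>

		int main(void)
		{
		    char c = '\377';   /* bit pattern 0xFF; negative when char is signed */

		    /* One cast: c converts as the value -1, so every high bit
		       in the result ends up set. */
		    unsigned int bad = (unsigned int) c;

		    /* Two casts: the unsigned char cast keeps just the low 8 bits. */
		    unsigned int good = (unsigned int) (unsigned char) c;

		    printf("bad = %x, good = %x\n", bad, good);
		    return 0;
		}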

I, for one, loathe the concept of signed chars.  I've wasted countless
hours of programming time searching for bugs caused by forgetting that
chars are signed by default.  I think chars (in fact, all integer types)
should be unsigned by default.  Comments?
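
Here's the sort of bug I mean (again just a sketch, assuming plain char
is signed):

		#include <stdio.h>

		int main(void)
		{
		    char c = '\377';

		    /* On a signed-char machine this test fails: c promotes to
		       the int -1, while 0xFF is the int 255. */
		    if (c == 0xFF)
		        printf("matched\n");
		    else
		        printf("did not match\n");

		    return 0;
		}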

Paul Falstad, pfalstad at phoenix.princeton.edu PLink:HYPNOS GEnie:P.FALSTAD
For viewers at home, the answer is coming up on your screen.  For those of
you who wish to play it the hard way, stand upside down with your head in a
bucket of piranha fish.


