Memory Models

Bill Rust wjr at ftp.COM
Wed Aug 23 05:11:46 AEST 1989


In article <1989Aug18.210404.13183 at ziebmef.uucp> mdfreed at ziebmef.UUCP (Mark Freedman) writes:
>>In article <562 at dcscg1.UUCP> drezac at dcscg1.UUCP (Duane L. Rezac) writes:
>>>I am just getting into C and have a question on Memory Models.
>   huge pointers are normalized (all arithmetic is done via function calls
>which perform normalization), but pointers must be explicitly declared as
>huge. Even the huge memory model uses far pointers as the default (because of
>the overhead, I would imagine).
>   I haven't used Microsoft or other MS-DOS implementations, but I suspect
>that they have similar design compromises.

Note that MSC huge pointers are normalized in a strange way. While TC
renormalizes the segment part of the pointer after every increment
(ptr++), MSC does not. That is, if you have a huge pointer to a record
that is 3000h bytes long and a starting value of 400:0, incrementing it
six times gives 400:3000, 400:6000, 400:9000, 400:c000, 400:f000, and
1400:2000. Referencing the record at 400:f000 will wrap around the top
of the segment, since f000 + 3000 overflows the 16-bit offset. I haven't
checked what operations other than increment do, but I was distressed to
find this; I consider it an error, though MS apparently does not. They
seem to feel that the performance hit of normalizing after every
operation is too great. I feel that if you ask for huge pointers, you
should get them with as much memory addressable from the pointer as
possible. (For grins' sake, think about huge pointers under OS/2, a
protected-mode environment.)
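
If you want to see the difference for yourself, something along these
lines should show it (off the top of my head and untested, so check the
details for your compiler: halloc is the MSC huge-block allocator from
malloc.h, TC users would use farmalloc from alloc.h, and both compilers
define FP_SEG/FP_OFF in dos.h):

#include <stdio.h>
#include <dos.h>        /* FP_SEG, FP_OFF */
#include <malloc.h>     /* halloc (MSC) */

struct rec { char pad[0x3000]; };       /* a 3000h-byte record */

int main(void)
{
    /* halloc hands back a huge block starting at offset 0 */
    struct rec huge *p = (struct rec huge *) halloc(8L, sizeof(struct rec));
    int i;

    for (i = 0; i <= 6; i++) {
        printf("%04x:%04x\n", FP_SEG(p), FP_OFF(p));
        p++;    /* MSC lets the offset grow until it would overflow;
                   TC renormalizes the segment part every time */
    }
    return 0;
}

Under MSC you should see the offsets pile up as in the list above; under
TC the offset stays small and the segment advances on each increment.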

Bill Rust (wjr at ftp.com)
