Memory Models

Mark Freedman mdfreed at ziebmef.uucp
Sat Aug 19 11:03:59 AEST 1989


In article <10703 at smoke.BRL.MIL> gwyn at brl.arpa (Doug Gwyn) writes:
>In article <562 at dcscg1.UUCP> drezac at dcscg1.UUCP (Duane L. Rezac) writes:
>>I am just getting into C and have a question on Memory Models.
>
>That is not a C language issue.  It's kludgery introduced specifically
>in the IBM PC environment.  Unless you have a strong reason not to,
>just always use the large memory model.  (A strong reason would be
>compatibility with an existing object library, for example.)
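
   (For the curious: the model is picked at compile time -- with Turbo C's
command-line compiler I believe the switch is -ml for the large model, though
check your manual -- and the main thing it changes is the default pointer
size. A quick, standard-C way to see what you got:

#include <stdio.h>

int main(void)
{
    /* Data pointers: near (2 bytes) in the tiny/small/medium models,
       far (4 bytes) in compact/large/huge. */
    printf("data pointer: %u bytes\n", (unsigned) sizeof(char *));
    /* Code pointers: near in tiny/small/compact, far in medium/large/huge. */
    printf("code pointer: %u bytes\n", (unsigned) sizeof(int (*)(void)));
    return 0;
}

On a modern flat-model compiler both lines just print the native pointer
size, so the program is harmless to try anywhere.)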


   And remember that objects larger than 64K can run into problems because of
the segmented architecture. In Turbo C 2.0, malloc() and calloc() don't work
for objects larger than 64K (use farmalloc() and farcalloc() instead), and far
pointers wrap (the segment register is unchanged ... only the offset has been
changed to protect the innocent :-)).
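   A sketch of what that means in practice (this assumes Turbo C's <alloc.h>;
farmalloc()/farfree() and the far keyword are Borland extensions, not
standard C):

#include <stdio.h>
#include <alloc.h>      /* Turbo C: farmalloc(), farfree() */

int main(void)
{
    char far *buf;

    /* malloc(100000L) fails here -- the argument gets squeezed through a
       16-bit size_t -- so a block bigger than 64K has to come from
       farmalloc(), which takes an unsigned long. */
    buf = (char far *) farmalloc(100000L);
    if (buf == NULL) {
        fputs("farmalloc failed\n", stderr);
        return 1;
    }

    /* But beware: arithmetic on a plain far pointer only touches the
       16-bit offset, so walking buf past offset 0xFFFF wraps back to the
       start of the segment instead of reaching the rest of the block. */
    farfree(buf);
    return 0;
}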
   Huge pointers are normalized (all arithmetic on them is done via function
calls which perform the normalization), but pointers must be explicitly
declared huge. Even the huge memory model uses far pointers as the default
(because of the overhead, I would imagine).
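   A sketch of the huge-pointer case, with the same Turbo C-specific caveat
(the huge keyword and farmalloc()/farfree() are Borland extensions, and the
array size below is just an illustrative figure):

#include <alloc.h>

#define NELEM 40000UL     /* 40000 longs = 160000 bytes, well past 64K */

int main(void)
{
    long huge *p;
    unsigned long i;

    /* The storage itself still comes from farmalloc(); "huge" describes
       the pointer, not the allocation. */
    p = (long huge *) farmalloc(NELEM * sizeof(long));
    if (p == NULL)
        return 1;

    /* Because p is declared huge, each p[i] goes through the normalizing
       arithmetic helpers, so crossing the 64K boundary is safe -- just
       slower than plain far-pointer indexing. */
    for (i = 0; i < NELEM; i++)
        p[i] = (long) i;

    farfree((void far *) p);
    return 0;
}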
   I haven't used Microsoft C or other MS-DOS compilers, but I suspect that
they have similar design compromises.

   (apologies for the Intel-specific followup, but it might save someone some
aggravation).
