Memory Models

Raymond Dunn ray at philmtl.philips.ca
Sat Aug 26 07:50:06 AEST 1989


In article <2694 at cbnewsc.ATT.COM> gregg at cbnewsc.ATT.COM (gregg.g.wonderly) writes:
>From article <664 at philmtl.philips.ca>, by ray at philmtl.philips.ca (Raymond Dunn):
>> Hey, it's easy.  If you don't want to bother yourself with memory models,
>> then always use the large or huge models and forget about it.
>
>Funny, I have yet to see a compiler for the Intel 80x (x < 386) family that
>can increment a pointer through more than 64K.  Anyone else seen one?

MSC 5.1 Huge memory model:

"The huge-model option is similar to the large model option, except that the
restriction on the size of individual data items [to 64K] is removed for
arrays."

There are some restrictions and problems of course, as there are on *most*
architectures: specifically, no array *element* can be larger than 64K.  There
are obvious difficulties with sizeof and pointer subtraction unless the
appropriate cast to long is used.  This is of course a consequence of an int
being 16 bits, not of the segmentation.

Let's not get into an architecture war again.  It's fairly generally accepted
that the more orthogonal architectures are intrinsically "better" than the more
ad-hoc, and that segmentation does have *some* advantages hidden amongst the
anguish.  That is *not* what's being discussed here.
-- 
Ray Dunn.                    | UUCP: ..!uunet!philmtl!ray
Philips Electronics Ltd.     | TEL : (514) 744-8200  Ext: 2347
600 Dr Frederik Philips Blvd | FAX : (514) 744-6455
St Laurent. Quebec.  H4M 2S9 | TLX : 05-824090



More information about the Comp.lang.c mailing list