Contiguous Arrays

Charles Marslett chasm at killer.DALLAS.TX.US
Fri Mar 3 00:46:30 AEST 1989


In article <9718 at smoke.BRL.MIL>, gwyn at smoke.BRL.MIL (Doug Gwyn ) writes:
> In article <7309 at killer.DALLAS.TX.US> chasm at killer.DALLAS.TX.US (Charles Marslett) writes:
> >This is not necessarily a condemnation of any computer that shows such lack
> >of concern for basic mathematics, but I would expect a computer that does
> >anything but return equality in the above expression to be broken as a
> >mathematical engine.  If it is acceptable for addition and subtraction to
> >be other than inverse operations (taking into account the possibility of
> >an error trap or whatever) the thing you are using cannot support any
> >programming language I know of without simulating integer arithmetic with
> >some well protected external unit.
> 
> First of all, you're wrong.  The problem with out-of-range pointer
> arithmetic arises from the way that addresses have to be represented
> in a segmented architecture: a (segment identifier, offset) pair.
> Any arithmetic that would result in an out-of-segment address is simply
> illegal; anything could happen, including hardware trap of the illegal
> operation.  However, in-range address computation presents no problem.

To repeat my original statement:  ANYTHING COULD HAPPEN, but in practice
anything does not happen, because allowing it would make the construction of
the computer more difficult.  My point was that making funny things happen
with some arithmetic and not with other arithmetic (since integer arithmetic
is constrained by expectations, if not by the standards) is hard to do, so
designers don't.

> I know of no Algol-like programming language that guarantees the ability
> to compute out-of-range pointers even as an intermediate step in a
> longer computation, probably because such a guarantee would severely
> constrain implementations on some architectures, requiring considerably
> less efficient data representations and generated code.

I disagree that it is left unguaranteed because some positive good comes from
not guaranteeing it.  It is not guaranteed because it is not very useful in a
language with 0-based arrays.  A computer that made such references "hard" to
do would, as you said, almost certainly force less efficient data
representations and generated code -- if the language were Fortran or Pascal
(where arrays do not have to start at 0, or even at a representable address),
one might be forced to do an additional subtraction for every subscript
reference.
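
For concreteness, here is the sort of thing a Fortran or Pascal compiler on a
flat-addressed machine would like to emit, written out as a C sketch (the
names are made up, and in C the biased pointer is formally undefined -- it is
exactly the out-of-range intermediate value under discussion):

#include <stdio.h>

/*
 * Simulating a Pascal-style array indexed from LO to HI.  Biasing the
 * base pointer once avoids an extra subtraction on every subscript
 * reference, but the biased pointer (storage - LO) may point outside
 * the array -- the out-of-range intermediate the standard declines to
 * guarantee.
 */
#define LO 1
#define HI 10

int main(void)
{
    int storage[HI - LO + 1];
    int *a = storage - LO;      /* out-of-range intermediate pointer */
    int i;

    for (i = LO; i <= HI; i++)
        a[i] = i * i;           /* no per-subscript subtraction here */

    printf("a[%d] = %d\n", HI, a[HI]);
    return 0;
}

Guaranteeing that trick everywhere would push the subtraction back into every
subscript reference on the machines that cannot form the biased pointer.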

> Secondly, no computer fully supports the kind of real-number arithmetic
> (including integer arithmetic) that you were taught in school.  Instead
> of saying that therefore all computers are broken, which is not helpful,
> most of us learn how to remain within the range of validity of actual
> hardware operations.  . . .

But do any fail to support this subset?  (I have guessed that Burroughs
machines may.  Intel's architectures, except perhaps for the 432, certainly
do allow such pointers, and the pointers work as expected in the large and
small models -- in the huge model, compiler bugs keep much portable code from
working, but I think that if all portable code could be made to work in that
model, this would as well.)
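
The sort of probe I have in mind is nothing more elaborate than this (a
throwaway sketch, not a conformance test):

#include <stdio.h>

/*
 * Compute a pointer one element below an array (not sanctioned by the
 * standard), step back up, and check that addition and subtraction are
 * still inverse operations.  Where such pointers "work as expected"
 * this prints "ok"; on a machine that traps or mangles the intermediate
 * value, it will not.
 */
int main(void)
{
    int a[4];
    int *below = a - 1;     /* out-of-range intermediate */
    int *back  = below + 1; /* should land back on a */

    printf("%s\n", back == a ? "ok" : "broken");
    return 0;
}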

> >To make it even more serious, I do not know of any twos-
> >complement computer that even has a signed vs. unsigned add instruction.
> 
> They are one and the same.

You fell into the same "oops" I did: the IBM 360 had (has ...) two different
add instructions that differ only in the way the condition codes are set
(overflow and the like).  So there is a difference -- but not for the
purposes of this discussion, since the condition code can be ignored (and
usually is).
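
In C terms (a sketch; the language never sees the condition codes at all),
the "one and the same" claim is just that the two sums share a bit pattern
on a twos-complement machine:

#include <stdio.h>

int main(void)
{
    int          a = -3, b = 7;
    unsigned int ua = (unsigned int)a, ub = (unsigned int)b;

    /* Same ADD, same result bits; only the carry vs. overflow flags
     * would differ, and C ignores both. */
    printf("signed sum:        %d\n", a + b);
    printf("same bit pattern?  %s\n",
           (unsigned int)(a + b) == ua + ub ? "yes" : "no");
    return 0;
}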


So, . . . any real machine/compiler combinations out there?

Please, . . .

Charles


