ABIs and the futurrrr of UNIX(tm)

Stuart Lynne sl at van-bc.UUCP
Mon Mar 28 06:23:44 AEST 1988


In article <431 at micropen> dave at micropen (David F. Carlson) writes:
>There has been much UNIX news recently on the subject of the ABI (Application
>Binary Interface) standard which AT&T, along with Motorola, Sun and Intel, is
>setting.  If I understand the problem, as it stands now each UNIX vendor, for
>any machine they sell UNIX for, is responsible for defining a binary protocol
>for things such as alignment (and/or packing), traps to the kernel with
>associated arguments, etc.

>I believe what we all seek is a means of portability across machine lines
>without having to support N machines to sell a product.  Parts of this are in
>place:  COFF has conversion routines for correctly ordering big-endian vs.
>little-endian data sections.  Why can't a machine-independent intermediate
>form be developed for UNIX solely to be translated into native binary on the
>target machine by a similar utility?  This form would have to be opaque
>enough to discourage un-compiling but adaptable enough to allow for tight
>native translation on any System V (and eventually POSIX) machine.  Perhaps
>a meta-assembler language such as the DoD CORE set could serve as a portable
>target code for PCC.  Or perhaps even some intermediate PCC form that a code
>generator fixes up on the target.  The form should not preclude typical
>machine-dependent optimizations and data packing.
>

This is quite possible to do.  The ill-fated p-System from SofTech
Microsystems did it!  (Still available from Pecan Software, I believe.)

The p-System was a complete operating system, with development tools etc.,
which was ported to a large number of CPUs (808*, 68000, LSI-11, VAX,
6502, ...).

The system consisted of a p-code interpreter and BIOS developed for each
architecture, plus the rest of the system, which was distributed as p-code
binary programs.  Problems such as big/little-endian byte order and
floating-point constants were all solved.  You could develop your program on
any machine and run the binary on any machine.
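
To make the mechanism concrete, here is a minimal sketch in C of the kind of
thing a p-code interpreter does.  This is *not* the real UCSD p-code
instruction set; the opcodes, the 16-bit operands stored low byte first, and
the tiny stack machine are all invented for illustration.  The point is only
that the binary format, not the host CPU, defines the byte order, so the same
program bytes run unchanged on either kind of machine.

    #include <stdio.h>

    enum { OP_HALT = 0, OP_PUSH = 1, OP_ADD = 2, OP_PRINT = 3 };

    /* Operands always live in the p-code file low byte first, so the
       interpreter, not the host CPU, decides the byte order. */
    static int fetch16(const unsigned char *p)
    {
        return p[0] | (p[1] << 8);
    }

    static void interpret(const unsigned char *pc)
    {
        int stack[32];
        int sp = 0;

        for (;;) {
            switch (*pc++) {
            case OP_PUSH:               /* push a 16-bit constant */
                stack[sp++] = fetch16(pc);
                pc += 2;
                break;
            case OP_ADD:                /* pop two values, push their sum */
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case OP_PRINT:              /* pop one value and print it */
                printf("%d\n", stack[--sp]);
                break;
            case OP_HALT:
                return;
            }
        }
    }

    int main(void)
    {
        /* The same "binary" computes 2 + 3 on any host, big or little endian. */
        static const unsigned char prog[] = {
            OP_PUSH, 2, 0,  OP_PUSH, 3, 0,  OP_ADD, OP_PRINT, OP_HALT
        };

        interpret(prog);
        return 0;
    }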

Also available was a Native Code Generator.  This could be used by the *end
user* to *selectively* convert parts of a binary p-code file, on a
per-procedure basis, to the native code of his system.  This had two results:
the binary file got bigger (and slower to swap) but faster to execute.
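
To show what "per-procedure" might look like, here is a hedged sketch of just
the dispatch side, again in invented C rather than anything the real NCG did
(the real tool patched the code file itself): each procedure slot holds either
p-code to feed the interpreter or a native routine that has replaced it, and
the caller never needs to know which it got.

    #include <stdio.h>

    /* Stand-ins for this sketch only: a p-code interpreter entry point and
       one procedure the Native Code Generator has already translated. */
    static void interpret(const unsigned char *pcode)
    {
        (void) pcode;
        printf("interpreting p-code\n");
    }

    static void native_version(void)
    {
        printf("running native code\n");
    }

    /* Each procedure slot holds either its portable p-code or the native
       routine that replaced it. */
    struct proc {
        const unsigned char *pcode;   /* NULL once the NCG has translated it */
        void (*native)(void);         /* NULL while it is still p-code */
    };

    static void call_proc(const struct proc *p)
    {
        if (p->native != NULL)
            p->native();              /* bigger on disk, faster to execute */
        else
            interpret(p->pcode);      /* smaller and portable, but slower */
    }

    int main(void)
    {
        static const unsigned char some_pcode[] = { 0 };
        struct proc interpreted = { some_pcode, NULL };
        struct proc translated  = { NULL, native_version };

        call_proc(&interpreted);      /* goes through the interpreter */
        call_proc(&translated);       /* calls the generated native code */
        return 0;
    }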

In theory at least, the p-System allowed for complete portability while
retaining the ability to convert to very fast native code on the user's
machine if required.  Unfortunately, while this did work in practice as well,
many other problems factored into the equation.

One of the big problems was that for machines like the IBM PC the *only* way
to get decent performance was to avoid using the operating system wherever
you could:  write directly to screen memory, etc.

The joke at the time (circa 1984) was that even though the p-System could
run on a dozen microprocessors, there were more IBM PC systems than all of
the other processors put together.  This meant that anyone trying to make a
buck optimized his product heavily for the MS-DOS/IBM PC market.  He just
didn't care if it didn't run on the other 2 or 3 percent of the potential
market.  Losing any sort of competitive edge in the IBM PC market couldn't be
regained elsewhere.

The moral is that this type of idea has been tried, and technically it worked
well.  But unless there are several potential markets that can be addressed,
it just isn't worth the additional effort.  In the case of UNIX with Intel /
Motorola / SPARC it probably would be worth it, but I doubt that anyone is
giving much thought to it.

PS.  I'm not a COBOL person, but I have dim memories of a product blurb for
one of the more popular COBOL compilers doing something along these lines as
well:  COBOL compiled to object code which could be interpreted on any of the
supported systems.

-- 
{ihnp4!alberta!ubc-vision,uunet}!van-bc!Stuart.Lynne Vancouver,BC,604-937-7532


