Standards Update, USENIX Standards Watchdog Committee

Jeffrey S. Haemer jsh at usenix.org
Mon Jan 8 13:57:08 AEST 1990


From: Jeffrey S. Haemer <jsh at usenix.org>


            An Update on UNIX* and C Standards Activities

                            December 1989

                 USENIX Standards Watchdog Committee

                   Jeffrey S. Haemer, Report Editor

USENIX Standards Watchdog Committee Update

The reports that accompany this summary are for the Fall meeting of
IEEE 1003 and IEEE 1201, conducted the week of October 16-20, 1989, in
Brussels, Belgium.  (This isn't really true of the 1003.4 and 1003.8/1
reports, but let's overlook that.)

The reports are done quarterly, for the USENIX Association, by
volunteers from the individual standards committees.  The volunteers
are familiarly known as ``snitches'' and the reports as ``snitch
reports.'' The band of snitches and I make up the working committee of
the USENIX Standards Watchdog Committee.  The group also has a policy
committee: John S. Quarterman (chair), Alan G. Nemeth, and Shane P.
McCarron.  Our job is to let you know about things going on in the
standards arena that might affect your professional life - either now
or down the road a ways.

More formally:

     The basic USENIX policy regarding standards is:

          to attempt to prevent standards from prohibiting innovation.

     To do that, we

        o Collect and publish contextual and technical information
          such as the snitch reports that otherwise would be lost in
          committee minutes or rationale appendices or would not be
          written down at all.

        o Encourage appropriate people to get involved in the
          standards process.

        o Hold forums such as Birds of a Feather (BOF) meetings at
          conferences.  We sponsored one workshop on standards.

__________

  * UNIX is a registered trademark of AT&T in the U.S. and other
    countries.

        o Write and present proposals to standards bodies in specific
          areas.

        o Occasionally sponsor White Papers in particularly
          problematical areas, such as IEEE 1003.7 (in 1989) and
          possibly IEEE 1201 (in 1990).

        o Very occasionally lobby organizations that oversee standards
          bodies regarding new committees, documents, or balloting
          procedures.

        o Sponsor, jointly with EUUG (the European UNIX Users Group)
          and beginning in mid-1989, a representative to the ISO/IEC
          JTC1 SC22 WG15 (ISO POSIX) standards committee.

     There are some things we do _not_ do:

        o We do not form standards committees.  It's the USENIX
          Standards Watchdog Committee, not the POSIX Watchdog
          Committee, not part of POSIX, and not limited to POSIX.

        o We do not promote standards.

        o We do not endorse standards.

     Occasionally we may ask snitches to present proposals or argue
     positions on behalf of USENIX.  They are not required to do so
     and cannot do so unless asked by the USENIX Standards Watchdog
     Policy Committee.  Snitches mostly report.  We also encourage
     them to recommend actions for USENIX to take.

          John S. Quarterman, Chair, USENIX Standards Watchdog Committee

We don't yet have active snitches for all the committees and sometimes
have to beat the bushes for new snitches when old ones retire or can't
make a meeting, but the number of groups with active snitches is
growing steadily.  This quarter, you've seen reports from .1, .4, .5,
.6, .8/2, and a belated report of last quarter's .8/1 meeting, as well
as a report from 1201.  Reports from .2 and .7 are in the pipeline,
and may get posted before this summary does.  We have no reports from
.3, .8/[3-6], .9, .10, or .11, even though we asked Santa for these
reports for Christmas.

If you have comments or suggestions, or are interested in snitching
for any group, please contact me (jsh at usenix.org) or John
(jsq at usenix.org).  If you want to make suggestions in person, both of
us go to the POSIX meetings.  The next set will be January 8-12, at
the Hotel Intercontinental in New Orleans, Louisiana.  Meetings after
that will be April 23-27, 1990 in Salt Lake City, Utah, and July 16-
20, 1990 in Danvers (Boston), Massachusetts.

I've appended some editorial commentary on problems I see facing each
group.  I've emphasized non-technical problems, which are unlikely to
appear in the official minutes and mailings of the committees.  If the
comments for a particular group move you to read a snitch report that
you wouldn't have read otherwise, they've served their purpose.  Be
warned, however, that when you read the snitch report, you may
discover that the snitch's opinion differs completely from mine.

1003.0

Outside of dot zero, this group is referred to as ``the group that
lets marketing participate in POSIX.'' Meetings seem to be dominated
by representatives from upper management of large and influential
organizations; there are plenty of tailor-made suits, and few of the
jeans and T-shirts that abound in a dot one or dot two meeting.
There's a good chance that reading this is making you nervous; that
you're thinking, ``Uh, oh.  I'll bet the meetings have a lot of
politics, positioning, and discussion about `potential direction.'''
Correct.  This group carries all the baggage, good and bad, that you'd
expect by looking at it.

For example, their official job is to produce the ``POSIX Guide:'' a
document to help those seeking a path through the open-systems
standards maze.  Realistically, if the IEEE had just hired a standards
expert who wrote well to produce the guide, it would be done, and both
cleaner and shorter than the current draft.

Moreover, because dot zero sees the whole range of open-systems
standards activity, they have a lot of influence over which new areas
POSIX addresses.  Unfortunately, politics sometimes has a heavy hand.
The last two groups whose creation dot zero recommended were 1201 and
the internationalization study group.  There's widespread sentiment,
outside of each group (and, in the case of internationalization,
inside of the group) that these groups were created at the wrong time,
for the wrong reason, and should be dissolved, but won't be.  And
sometimes, you can find the group discussing areas about which they
appear to have little technical expertise.  Meeting before last, dot
zero spent an uncomfortable amount of time arguing about graphics
primitives.

That's the predictable bad side.  The good side?  Frankly, these folks
provide immense credibility and widespread support for POSIX.  If dot
zero didn't exist, the only way for some of the most important people
and organizations in the POSIX effort to participate would be in a
more technical group, where the narrow focus would block the broad
overview that these folks need, and which doing the guide provides.

In fact, from here it looks as though it would be beneficial to POSIX
to have dot zero actually do more, not less, than it's doing.  For
example, if dot five is ever going to have much impact in the Ada
community, someone's going to have to explain to that community why
POSIX is important, and why they should pay more attention to it.
That's not a job for the folks you find in dot five meetings (mostly
language experts); it's a job for people who wear tailor-made suits;
who understand the history, the direction, and the importance of the
open systems effort; and who know industry politics.  And there are
members of dot zero who fit that description to a tee.

1003.1

Is dot one still doing anything, now that the ugly green book is in
print?  Absolutely.

First, it's moved into maintenance and bug-fix mode.  It's working on
a pair of extensions to dot 1 (A and B), on re-formatting the ugly
green book to make the ISO happy, and on figuring out how to make the
existing standard language-independent.  (The developer, he works from
sun to sun, but the maintainer's work is never done.) Second, it's
advising other groups and helping arbitrate their disputes.  An
example is the recent flap over transparent file access, in which the
group defining the standard (1003.8/1) was told, in no uncertain
terms, that NFS wouldn't do, because it wasn't consistent with dot one
semantics.  One wonders if things like the dot six chmod dispute will
finally be resolved here as well.
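
For readers who missed that flap, one frequently cited example of the
mismatch (my illustration, not language from the 1003.8/1 documents)
is the dot one promise that a file unlinked while open stays readable
and writable until its last descriptor is closed - a promise a
stateless file server has trouble keeping.  A minimal C sketch of the
idiom at issue:

     #include <fcntl.h>
     #include <stdio.h>
     #include <unistd.h>

     int main(void)
     {
         int fd = open("scratch", O_RDWR | O_CREAT | O_TRUNC, 0600);

         if (fd == -1)
             return 1;

         /* Under dot one semantics the name can disappear now; the
          * open descriptor keeps the file itself alive until the
          * last close.  A stateless server has no record of the
          * open, which is the sort of inconsistency dot one
          * objected to. */
         unlink("scratch");

         if (write(fd, "still here\n", 11) != 11)
             perror("write");
         close(fd);
         return 0;
     }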

A key to success will be keeping enough of the original dot one
participants available and active to insure consistency.

1003.2

Dot one standardized the UNIX section two and three commands.  (Okay,
okay.  All together now: ``It's not UNIX, it's POSIX.  All resemblance
to any real operating system, living or dead, explicit or implied, is
purely coincidental.'') Dot two is making a standard for UNIX section
one commands.  Sort of.

The dot two draft currently in ballot, ``dot-two classic,'' is
intended to standardize commands that you'd find in shell scripts.
Unfortunately, if you look at dot-two classic you'll see things
missing.  In fact, you could have a strictly conforming system that
would be awfully hard to develop software on or port software to.
To solve this, NIST pressured dot two into drawing up a standard for a
user portability extension (UPE).  The distinction is supposed to be
that dot-two classic standardizes commands necessary for shell script
portability, while the UPE standardizes things that are primarily
interactive, but aid user portability.

The two documents have some strategic problems.

   o Many folks who developed dot-two classic say the UPE is outside
     of dot two's charter, and won't participate in the effort.  This
     sort of behavior unquestionably harms the UPE.  Since I predict
     that the outside world will make no distinction between the UPE
     and the rest of the standard, it will actually harm the entire
     dot-two effort.

   o The classification criteria are unconvincing: nm(1) is in the
     UPE.  Is it really primarily used interactively?

   o Cc has been renamed c89, and lint may become lint89.  This is
     silly and annoying, but look on the bright side: at least we can
     see why c89 wasn't put in the UPE.  Had it been, it would have
     had to have a name users expected.

   o Who died and left NIST in charge?  POSIX seems constantly to be
     doing things that it didn't really want to do because it was
     afraid that if it didn't, NIST would strike out on its own.
     Other instances are the accelerated timetables of .1 and .2, and
     the creation of 1003.7 and 1201.

   o Crucial pieces of software are missing from dot two.  The largest
     crevasse is the lack of any form of source-code control.  People
     on the committee don't want to suffer through an SCCS-RCS debate.
     But POSIX dealt with the cpio-tar debate.  (It decided not to
     decide.) POSIX dealt with the vi-emacs debate.  (The UPE provides
     a standard for ex/vi.) POSIX is working on the NFS-RFS debate,
     and a host of others.  Such resolutions are a part of its
     responsibility and authority.  POSIX is even working on the
     Motif-Open/Look debate (whether it should or not).

     At the very least, the standard could require some sort of source
     code control, with an option specifying which flavor is
     available.  Perhaps we could ask NIST to threaten to provide a
     specification.

As a final note, because dot two (collective) standardizes user-level
commands, it really can provide practical portability across operating
systems.  Shell scripts written on a dot-two-conforming UNIX system
should run just fine on an MS-DOS system under the MKS toolkit.

1003.3

Dot three is writing test assertions for standards.  This means dot
three is doing the most boring work in the POSIX arena.  Oh, shoot,
that just slipped out.  But what's amazing is that the committee
members don't see it as boring.  In fact, Roger Martin, who, as senior
representative of the NIST, is surely one of the most influential
people in the POSIX effort, actually chairs this
committee.  Maybe they know something I don't.

Dot three is balloting dot one assertions and working on dot two.  The
process is moving at standards-committee speed, but has the advantage
of having prior testing art as a touchstone (existing MindCraft, IBM,
and NIST test work).  The dilemma confronting the group is what to do
about test work for other committees, which are proliferating like
lagomorphs.  Dot three is clearly outnumbered, and needs some
administrative cavalry to come to its rescue.  Unless it expands
drastically (probably in the form of little subcommittees and a
steering committee) or is allowed to delegate some of the
responsibility of generating test assertions to the committees
generating the standards, it will never finish.  (``Whew, okay, dot
five's done.  Does anyone want to volunteer to be a liaison with dot
thirty-seven?'')

1003.4

Dot four is caught in a trap fashioned by evolution.  It began as a
real-time committee.  Somehow, it's metamorphosed into a catch-all,
``operating-system extensions'' committee.  Several problems have
sprung from this.

   o Some of the early proposed extensions were probably right for
     real-time, but aren't general enough to be the right approach at
     the OS level.

   o Pieces of the dot-four document probably belong in the dot one
     document instead of a separate document.  Presumably, ISO will
     perform this merge down the road.  Should the IEEE follow suit?

   o Because the dot-four extensions aren't as firmly based in
     established UNIX practice as the functionality specified in dot
     one and two, debate over how to do things is more heated, and the
     likelihood that the eventual, official, standard solution will be
     an overly complex and messy compromise is far higher.  For
     example, there is a currently active dispute about something as
     fundamental as how threads and signals should interact.

Unfortunately, all this change has diverted attention from a problem
that has to be dealt with soon - how to guarantee consistency between
dot four and dot five, the Ada-language-binding group.  Task
semantics are specified by the Ada language definition.  In order to
get an Ada binding to dot four's standard (which someone will have to
do), dot four's threads will have to be consistent with the way dot
five uses tasks in their current working document.  With dot five's
low numbers, the only practical way to insure this seems to be to have
dot four aggressively track the work of dot five.

1003.5

Dot five is creating an Ada-language binding for POSIX.  What's
``Ada-language binding'' mean?  Just that an Ada programmer should be
able to get any functionality provided by 1003.1 from within an Ada
program.  (Right now, they're working on an Ada-language binding for
the dot one standard, but eventually, they'll also address other
interfaces, including those from dot four, dot six, and dot eight.)
They face at least two kinds of technical problems and one social one.

The first technical problem is finding some way to express everything
in 1003.1 in Ada.  That's not always easy, since the section two and
three commands standardized by dot one evolved in a C universe, and
the semantics of C are sometimes hard to express in Ada, and vice-
versa.  Examples are Ada's insistence on strong typing, which makes
things like ioctl() look pretty odd, and Ada's tasking semantics,
which require careful thinking about fork(), exec(), and kill().
Luckily, dot five is populated by people who are Ada-language wizards,
and seem to be able to solve these problems.  One interesting
difference between dot five and dot nine is that the FORTRAN group has
chosen to retain the organization of the original dot one document so
that their document can simply point into the ugly green book in many
cases, whereas dot five chose to re-organize wherever it seemed to
help the flow of their document.  It will be interesting to see which
decision ends up producing the more useful document.
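
To make the strong-typing point concrete: ioctl()'s third argument
has no fixed type - it depends on the request - which a C compiler
shrugs at but a strongly typed Ada interface has to enumerate case by
case.  A small C sketch (the particular requests, and the headers
they live in, vary from system to system):

     #include <stdio.h>
     #include <sys/ioctl.h>
     #include <unistd.h>

     int main(void)
     {
         struct winsize ws;   /* third argument for TIOCGWINSZ */
         int pending;         /* third argument for FIONREAD   */

         /* One entry point, two unrelated argument types. */
         if (ioctl(STDIN_FILENO, TIOCGWINSZ, &ws) == 0)
             printf("%d rows, %d columns\n", ws.ws_row, ws.ws_col);
         if (ioctl(STDIN_FILENO, FIONREAD, &pending) == 0)
             printf("%d bytes waiting\n", pending);
         return 0;
     }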

The second technical problem is making the solutions look like Ada.
For more discussion of this, see the dot-nine (FORTRAN bindings)
summary.  Again, this is a problem for Ada wizards, and dot five can
handle it.

The social problem?  Interest in dot five's work, outside of their
committee, is low.  Ada is out-of-favor with most UNIX programmers.
(``Geez, 1201 is a mess.  Their stuff's gonna look as ugly as Ada.'')
Conversely, most of the Ada community's not interested in UNIX.
(``Huh?  Another `standard' operating environment?  How's it compare
to, say, PCTE?  No, never mind.  Just let me know every few years how
it's coming along.'') The group that has the hardest problem - welding
together two well-developed, standardized, disparate universes - has
the least help.

Despite all of this, the standard looks like it's coming close to
ballot, which means people ought to start paying attention to it
before they have no choice.

1003.6

Most of the UNIX community would still feel more at home at a Rainbow
gathering than reading the DOD rainbow books.  The unfamiliar-buzzword
frequency at dot six (security) meetings is quite high.  If you can
get someone patient to explain some of the issues, though, they're
pretty interesting.  The technical problems they're solving each boil
down to thinking through how to graft very foreign ideas onto UNIX
without damaging it beyond recognition.  (The recent posting about
chmod and access control lists, in comp.std.unix by Ana Maria de
Alvare and Mike Ressler, is a wonderful, detailed example.)

Dot six's prominent, non-technical problem is just as interesting.
The government has made it clear that vendors who can supply a
``secure UNIX'' will make a lot of money.  No fools, major vendors
have been furiously working on implementations.  The push to
provide a POSIX security standard comes at a time when these vendors
are already quite far along in their efforts, but still some way from
releasing the products.  Dot six attendees from such corporations
can't say too much, because it will give away what they're doing
(remember, too, that this is security), but must somehow ensure that
the standard that emerges is compatible with their company's existing,
secret implementation.

1003.7

There is no single, standard body of practice for UNIX system
administration, the area dot seven is standardizing.  Rather than seek
a compromise, dot seven has decided to re-invent system administration
from scratch.  This was probably necessary simply because there isn't
enough existing practice to compromise on.  Currently, their intent is
to provide an object-oriented standard, with objects specified in
ASN.1 and administration of a multi-machine, networked system as a
target.  (This, incidentally, was the recommendation of a USENIX White
Paper on system administration by Susanne Smith and John Quarterman.)
The committee doesn't have a high proportion of full-time system
administrators, or a large body of experience in object-oriented
programming.  It's essentially doing research by committee.  Despite
this, general sentiment outside the committee seems to be that it has
chosen a reasonable approach, but that progress may be slow.

A big danger is that they'll end up with a fatally flawed solution:
lacking good, available implementations; distinct enough from existing
practices, where they exist, to hamper adoption; and with no clear-cut
advantage to be gained by replacing any existing, ad-hoc solutions
except adherence to the standard itself.  The standard could be
widely ignored.

What might prevent that from happening?  Lots of implementations.
Object-oriented programming and C++ are fashionable (at the 1988
Winter USENIX C++ conference, Andrew Koenig referred to C++ as a
``strongly hyped language''); networked UNIX systems are ubiquitous
in the research community; and system administration has the feeling
of a user-level, solvable problem.  If dot seven (perhaps with the
help of dot zero) can publicize their work in the object-oriented
programming community, we can expect OOPSLA conferences and
comp.sources.unix to overflow with high-quality, practical, field-
tested, object-oriented, system administration packages that conform
to dot seven.

1003.8

There are two administrative problems facing dot eight, the network
services group.  Both stem directly from the nature of the subject.
There is not yet agreement on how to solve either one.

The first is its continued growth.  There is now serious talk of
making each subgroup a full-fledged POSIX committee.  Since there are
currently six groups (transparent file access, network IPC, remote
procedure call, OSI/MAP services, X.400 mail gateway, and directory
services), this would increase the number of POSIX committees by
nearly 50%, and make networking the single largest aspect of the
standards work.  This, of course, is because standards are beneficial
in operating systems and single-machine applications, but
indispensable in networking.

The second is intergroup coordination.  Each of the subgroups is
specialized enough that most dot eight members only know what's going
on in their own subgroup.  But because the issues are networking
issues, it's important that someone knows enough about what each group
is doing to prevent duplication of effort or glaring omissions.  And
that's only a piece of the problem.  Topics like system administration
and security are hard enough on a single, stand-alone machine.  In a
networked world, they're even harder.  Someone needs to be doing the
system engineering required to insure that all these areas of overlap
are addressed, addressed exactly once, and completed in time frames
that don't leave any group hanging, awaiting another group's work.

The SEC will have to sort out how to solve these problems.  In the
meantime, it would certainly help if we had snitches for each subgroup
in dot eight.  Any volunteers for .8/[3-6]?

1003.9

Dot nine, which is providing FORTRAN bindings, is really fun to watch.
They're fairly unstructured, and consequently get things done at an
incredible clip.  They're also friendly; when someone new arrives,
they actually stop, smile, and provide introductions all around.  It
helps that there are only half-a-dozen attendees or so, as opposed to
the half-a-hundred you might see in some of the other groups.
Meetings have sort of a ``we're all in this together/defenders of the
Alamo'' atmosphere.

The group was formed after two separate companies independently
implemented FORTRAN bindings for dot one and presented them to the
UniForum technical committee on supercomputing.  None of this, ``Let's
consider forming a study group to generate a PAR to form a committee
to think about how we might do it,'' stuff.  This was rapid
prototyping at the standards level.

Except for the advantage of being able to build on prior art (the two
implementations), dot nine has the same basic problems that dot five
has.  What did the prior art get them?  The most interesting thing is
that a correct FORTRAN binding isn't the same as a good FORTRAN
binding.  Both groups began by creating a binding that paralleled the
original dot one standard fairly closely.  Complaints about the
occasional non-FORTRANness of the result have motivated the group to
try to re-design the bindings to seem ``normal'' to typical FORTRAN
programmers.  As a simple example, FORTRAN-77 would certainly allow
the declaration of a variable in common called ERRNO, to hold the
error return code.  Users, however, would find such a name misleading;
integer variables, by default and by convention, begin with ``I''
through ``N.''

It is worth noting that dot nine is actually providing FORTRAN-77
bindings, and simply ignoring FORTRAN-8x.  (Who was it that said of
8x, ``Looks like a nice language.  Too bad it's not FORTRAN''?)
Currently, 1003 intends to move to a language-independent
specification by the time 8x is done, which, it is claimed, will ease
the task of creating 8x bindings.

On the surface, it seems logical and appealing that documents like
1003.1 be re-written as a language-independent standard, with a
separate C-language binding, analogous to those of dot five and dot
nine.  But is it really?

First, it fosters the illusion that POSIX is divorced from, and
unconstrained by, its primary implementation language.  Should the
prohibition against nul characters in filenames be a base-standard
restriction or a C-binding restriction?

I've seen a dot five committee member argue that it's a C-binding
matter, not a base-standard one.
Looked at in isolation, this is almost sensible.  If Ada becomes the
only language anyone wants to run, yet the government still mandates
POSIX compliance, why should a POSIX implementation prohibit its
filenames from containing characters that aren't special to Ada?  At
the same time, every POSIX attendee outside of dot five seems repelled
by the idea of filenames that contain nuls.  (Quiz: Can you specify a
C-language program or shell script that will create a filename
containing a nul?)
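
For what it's worth, my own answer to the quiz is no: every dot one
function that takes a pathname takes it as a nul-terminated C string,
so an embedded nul just ends the name early.  A short C demonstration:

     #include <fcntl.h>
     #include <stdio.h>
     #include <unistd.h>

     int main(void)
     {
         /* The string literal contains an embedded nul, but open()
          * stops reading the pathname there, so this creates a file
          * named "foo", not one with a nul in its name. */
         int fd = open("foo\0bar", O_WRONLY | O_CREAT, 0600);

         if (fd == -1) {
             perror("open");
             return 1;
         }
         close(fd);
         return 0;
     }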

Second, C provides an existing, precise, widely-known language in
which POSIX can be specified.  If peculiarities of C make implementing
some portions of a standard, specified in C, difficult in another
language, then there are four clear solutions:

  1.  change the specification so that it's equally easy in C and in
      other languages,

  2.  change the specification so that it's difficult in every
      language,

  3.  change the specification so that it's easy in some other
      language but difficult in C, or

  4.  make the specification vague enough so that it can be done in
      incompatible (though equally easy) ways in each language.

Only the first option makes sense.  Making the specification
language-independent means either using an imprecise language, which
risks four, or picking some little-known specification language (like
VDL), which risks two and three.  Declaring C the specification
language does limit the useful lifetime of POSIX to the useful
lifetime of C, but if we don't think we'll come up with good
replacements for both in a few decades, we're facing worse problems
than language-dependent specifications.
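
The precision at stake is easy to see in the C-language form dot one
already uses.  A representative fragment (a paraphrase from memory,
not a quotation - the standard's own text spells out the types and
the errno values at much greater length, and the exact typedefs have
shifted between drafts):

     /* A specification written in C pins down argument types, return
      * types, and error reporting in a form any implementor can hold
      * up against a compiler. */
     #include <sys/types.h>

     ssize_t read(int fildes, void *buf, size_t nbyte);
     int     close(int fildes);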

Last, if you think the standards process is slow now, wait until the
IEEE tries to find committee volunteers who are fluent in both UNIX
and some language-independent specification language.  Not only will
the useful lifetime of POSIX remain wedded to the useful lifetime of
C, but both will expire before the language-independent version of dot
one is complete.

It would be nice if this push for language-independent POSIX would go
away quietly, but it won't.

1003.10

In July, at the San Jose meeting, John Caywood of Unisys caught me in
the halls and said, accusingly, ``I understand you think
supercomputers don't need a batch facility.'' I didn't have the
foggiest idea what he was talking about, but it seemed like as good a
chance as any to get a tutorial on dot ten, the supercomputing group,
so I grabbed it. (Pretty aggressively helpful folks in this
supercomputing group.  If only someone in it could be persuaded to
file a snitch report.)

Here's the story:

     Articles about software engineering like to point out that
     approaches and tools have changed from those used twenty years
     ago; computers and computing resources are now much cheaper than
     programmers and their time, while twenty years ago the reverse
     was true.  These articles are written by people who've never used
     a Cray.  A typical supercomputer application might run on a $25M,
     non-byte-addressable, non-virtual-memory machine, require 100 to
     1000 Mbytes of memory, and run for 10 Ksecs.  Expected running
     time for jobs can be greater than the machine's mean-time-to-
     failure.  The same techniques that were common twenty years ago
     are still important on these machines, for the same reasons -
     we're working close to the limits of hardware art.

The card punches are gone, but users often still can't log in to the
machines directly, and must submit jobs through workstation or
mainframe front ends.  Resources are severely limited, and access to
those resources needs to be carefully controlled.  The two needs that
surface most often are checkpointing and a batch facility.

Checkpointing lets you re-start a job in the middle.  If you've used
five hours of Cray time, and need to continue your run for another
hour but have temporarily run out of grant money, you don't want to
start again from scratch when the money appears.  If you've used six
months of real time running a virus-cracking program and the machine
goes down, you might be willing to lose a few hours, even days, of
work, but can't afford to lose everything.  Checkpointing is a hard
problem, without a generally agreed-upon solution.
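
To put a rough number on the mean-time-to-failure point above (a
back-of-the-envelope sketch, assuming failures arrive independently
at a constant rate): the chance of getting through a run of length t
without an interruption is about exp(-t/MTTF).  A job whose running
time equals the machine's MTTF therefore finishes cleanly only about
37% of the time, and one that runs for twice the MTTF only about 14%
of the time.  Restarting from scratch isn't a strategy; some form of
checkpointing is.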

A batch facility is easier to provide.  Both Convex and Cray currently
support NQS, a public-domain, network queueing system.  The product
has enough known problems that the group is re-working the facility,
but the basic model is well-understood, and the committee members,
both users and vendors, seem to want to adopt it.  The goal is
command-level and library-level support for batch queues that will
provide effective resource management for really big jobs.  Users will
be able to do things like submit a job to a large machine through a
wide-area network, specify the resources - memory, disk space, time,
tape drives, etc. - that the job will need to run to completion, and
then check back a week or two later to find out how far their job's
progressed in the queue.

The group is determined to make rapid progress, and to that end is
holding 6-7 meetings a year.  One other thing: the group is actually
doing an application profile, not a standards document.  For a
clarification of the distinction, see the discussion of dot eleven.

1003.11

Dot eleven has begun work on an application profile (AP) on
transaction processing (TP).  An AP is a set of pointers into the
POSIX Open System Environment (OSE).  For example, the TP AP might
say, ``For dot eleven conformance, you need to conform to dot one, dot
four, sections 2.3.7 and 2.3.8 of dot 6, 1003.8 except for /2, and
provide a batch facility as specified in the dot 10 AP.'' A group
doing an AP will also look for holes or vague areas in the existing
standards, as they relate to the application area, point them out
to the appropriate committee, and possibly chip in to help the
committee solve them.  If they find a gap that really doesn't fall
into anyone else's area, they can write a PAR, requesting that the SEC
(1003's oversight committee) charter them to write a standard to cover
it.

Dot eleven is still in the early, crucial stage of trying to figure
out what it wants to do.  Because of fundamental differences in
philosophy among its members, the group seems to be thrashing a lot.
There is a clear division between folks who want to pick a specific
model of TP and write an AP to cover it, and folks who think a model
is a far-too-detailed place to start.  The latter group is small, but
not easily dismissed.

It will be interesting to see how dot eleven breaks this log jam, and
what the resolution is.  As an aside, many of the modelers are from
the X/OPEN and ISO TP groups, which are already pushing specific
models of their own; this suggests what kinds of models we're likely
to get if the modeling majority wins.

X3J11

A single individual, Russell Hansberry, is blocking the official
approval of the ANSI standard for C on procedural grounds.  At some
point, someone failed to comply with the letter of the rules for
ballot resolution, and Hansberry is using the irregularity to delay
adoption of the standard.

This has had an odd effect in the 1003 committees.  No one wants to
see something like this inflicted on his or her group, so folks are
being particularly careful to dot all i's and cross all t's.  I say
odd because it doesn't look as though Hansberry's objections will have
any effect whatsoever on either the standard, or its effectiveness.
Whether ANSI puts its stamp on it or not, every C compiler vendor is
implementing the standard, and every book (even K&R) is written to it.
X3J11 has replaced one de-facto standard with another, even stronger
one.

1201.1

What's that you say, bunky?  You say you're Jewish or Moslem, and you
can look at Xwindows as long as you don't eat it?  Well then, you
won't care much for 1201.1, which is supposed to be ``User Interface:
Application Programming Interface,'' but is really ``How much will the

December 1989 Standards Update     USENIX Standards Watchdog Committee


                                - 14 -

Motif majority have to compromise with the Open/Look minority before
force-feeding us a thick standard full of `Xm[A-Z]...' functions with
long names and longer argument lists?''

Were this group to change its name to ``Xwindows application
programming interface,'' you might not hear nearly as much grousing
from folks outside the working group.  As it is, the most positive
comments you hear are, ``Well, X is pretty awful, but I guess we're
stuck with it,'' and ``What could they do?  If POSIX hadn't undertaken
it, NIST would have.''

If 1201 is to continue to be called ``User Interface,'' these aren't
valid arguments for standardizing on X or toolkits derived from it.
In what sense are we stuck with X?  The number of X applications is
still small, and if X and its toolkits aren't right for the job, it
will stay small.  Graphics hardware will continue to race ahead,
someone smart will show us a better way to do graphics, and X will
become a backwater.  If they are right, some toolkit will become a
de-facto standard, the toolkit will mature, and the IEEE can write a
realistic standard based on it.

Moreover, if NIST wants to write a standard based on X, what's wrong
with that?  If they come up with something that's important in the
POSIX world, good for them.  ANSI or the IEEE can adopt it, the way
ANSI's finally getting around to adopting C.  If NIST fails, it's not
the IEEE's problem.

If 1201.1 ignores X and NIST, can it do anything?  Certainly.  The
real problem with the occasionally asked question, ``are standards
bad?'' is that it omits the first word: ``When.'' Asked properly, the
answer is, ``When they're at the wrong level.'' API's XVT is example
of a toolkit that sits above libraries like Motif or the Mac toolbox,
and provides programmers with much of the standard functionality
necessary to write useful applications on a wide variety of window
systems.  Even if XVT isn't the answer, it provides proof by example
that we can have a window-system-independent, application programming
interface for windowing systems.  1201.1 could provide a useful
standard at that level.  Will it?  Watch and see.

Volume-Number: Volume 18, Number 10


