Standards Update, USENIX Standards Watchdog Committee

Doug Gwyn gwyn at smoke.brl.mil
Tue Jan 9 08:40:19 AEST 1990


In article <500 at longway.TIC.COM> std-unix at uunet.uu.net writes:
>From: Jeffrey S. Haemer <jsh at usenix.org>
>            An Update on UNIX* and C Standards Activities

I have several comments on these issues (and will try to refrain
from commenting on the ones I don't track closely).

>1003.1
>An example is the recent flap over transparent file access, in which the
>group defining the standard (1003.8/1) was told, in no uncertain terms,
>that NFS wouldn't do, because it wasn't consistent with dot one semantics.

This is an important point; 1003.1 very much had network file
systems in mind, and we decided that the full semantics specified in
1003.1 are really required for the benefit of portable applications
on UNIX systems (or workalikes), which is what 1003 was originally
all about.

Having run into problem after problem with the lack of full 1003.1
semantics in our NFS-supporting environment, I fully agree with the
decision that applications should be able to rely on "UNIX semantics"
and that NFS simply does not meet this criterion.  (There are other
network-transparent file system implementations that do; the design
of NFS was constrained by the desire to support MS-DOS and to be
"stateless", both of which run contrary to UNIX filesystem semantics.)

>One wonders if things like the dot six chmod dispute will finally be
>resolved here as well.

Fairly late in the drafting of Std 1003.1, consultation with NCSC and
other parties concerned with "UNIX security" led to a fundamental
change in the way that privileges were specified.  That's when the
notion of "appropriate privilege" and the acknowledgement of optional
"additional mechanisms" were added, deliberately left generally vague
so as to encompass any other specification that would be acceptable
to the 1003.1 people as not interfering unduly with the traditional
UNIX approach to file access permissions.

Upon reviewing the chmod spec in IEEE Std 1003.1-1988, I see no reason
to think that it would interfere with addition of ACL or other similar
additional mechanisms, the rules for which would be included in the
implementation-defined "appropriate privileges".  Remember, the UNIX-
like access rules of 1003.1 apply only when there is no additional
mechanism (or the additional mechanism is satisfied).
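
For instance, the interaction can be sketched in C along these lines
(a rough rendering of my reading of the spec, not normative text;
acl_present() and acl_grants() are hypothetical stand-ins for an
implementation-defined additional mechanism such as an ACL):

	#include <sys/types.h>
	#include <sys/stat.h>

	/* Hypothetical hooks for the additional mechanism. */
	extern int acl_present(const struct stat *st);
	extern int acl_grants(const struct stat *st, uid_t uid,
		gid_t gid, int want);

	/* want: 4 = read, 2 = write, 1 = execute/search */
	int
	may_access(const struct stat *st, uid_t uid, gid_t gid, int want)
	{
		mode_t bits;

		if (acl_present(st) && !acl_grants(st, uid, gid, want))
			return 0;	/* additional mechanism not satisfied */

		/* traditional UNIX check: owner, then group, then other */
		if (uid == st->st_uid)
			bits = (st->st_mode >> 6) & 07;
		else if (gid == st->st_gid)
			bits = (st->st_mode >> 3) & 07;
		else
			bits = st->st_mode & 07;

		return (bits & want) == want;
	}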

>A key to success will be keeping enough of the original dot one
>participants available and active to insure consistency.

Good luck with this.  Personally, I couldn't afford to pay the dues
and limited my membership to 1003.2 once Std 1003.1-1988 was published.

>1003.2
>The dot two draft currently in ballot, ``dot-two classic,'' is
>intended to standardize commands that you'd find in shell scripts.
>Unfortunately, if you look at dot-two classic you'll see things
>missing.  In fact, you could have a strictly conforming system that
>would be awfully hard to develop software on or port software to.

From my point of view, 1003.2 unfortunately included TOO MUCH, not
too little, for portable application support.  (My views on the
proper set of commands and options were spelled out in a very early
1003.2 document.)

>To solve this, NIST pressured dot two into drawing up a standard for a
>user portability extension (UPE).  The distinction is supposed to be
>that dot-two classic standardizes commands necessary for shell script
>portability, while the UPE standardizes things that are primarily
>interactive, but aid user portability.

NIST apparently thinks that all the horrible existing tools they're
familiar with should be forced upon all environments.  I think this
does interactive users a DISservice.  For one thing, many interesting
architectures require different tools from the traditional ones, and
requiring the traditional ones merely makes it difficult or impossible
for better environments to be provided under contracts that require
conformance to the UPE.  (This probably includes most future U.S.
government procurements, which means most major vendor OSes.)

>The two documents have some strategic problems.
>   o+ Many folks who developed dot-two classic say the UPE is outside
>     of dot two's charter, and won't participate in the effort.  This
>     sort of behavior unquestionably harms the UPE.  Since I predict
>     that the outside world will make no distinction between the UPE
>     and the rest of the standard, it will actually harm the entire
>     dot-two effort.

But they're right.  The UPE effort should be STOPPED, immediately.
There IS no "right" way to standardize this area.

>   o+ The classification criteria are unconvincing.  Nm(1) is in the
>     UPE.  Is it really primarily used interactively?

"nm" is precisely the sort of thing that should NOT be standardized
at all, due to widely varying environmental needs in software
generation systems.  There have been numerous attempts to standardize
object module formats (which is similar to standardizing "nm" behavior),
and none of them have been successful over anywhere near the range of
systems that a 1003 standard should properly encompass.

>   o+ Cc has been renamed c89, and lint may become lint89.  This is
>     silly and annoying, but look on the bright side: at least we can
>     see why c89 wasn't put in the UPE.  Had it been, it would have
>     had to have a name users expected.

"cc" (and "nm") is not sufficiently useful to APPLICATIONS to merit
being in 1003.2 at all.  Certainly its options cannot be fully specified
due to the wide range of system-specific support needed in many
environments.  Thus an invocation of the form "cc options files",
where the options include only -c, -Iwherever, -Dname=value, and
-o file, and where the files may include -lwhatever, is all that has
fully portable meaning.  Is there really any UNIX implementation that
doesn't provide these, so that a standard is needed?  I think not.

>   o+ Who died and left NIST in charge?  POSIX seems constantly to be
>     doing things that it didn't really want to do because it was
>     afraid that if it didn't, NIST would strike out on its own.
>     (Other instances are the accelerated timetables of .1 and .2,
>     and the creation of 1003.7 and 1201.)

The problem is, NIST prepares FIPS and there is essentially no stopping
them.  Because FIPS are binding on government procurements (unless
specific waivers are obtained), they have heavy economic impact on
vendors.  In the "good old days", NBS allowed the computing industry
to develop suitable standards and later blessed them with FIPS.  With
the change in political climate that occurred with the Reagan
administration, which was responsible for the name change from NBS to
NIST, NIST was given a more "proactive" role in the development of
technology.  Unfortunately they seem to think that forcing standards
advances the technology, whereas that would be true only under
favorable circumstances (which unsuitable standards do not promote).
(Actually I think that the whole idea of a government attempting to
promote technology is seriously in error, but that's another topic.)

I don't know how you can tone down NIST.  Perhaps if enough
congressmen receive enough complaints, some pressure may be applied.

>   o+ Crucial pieces of software are missing from dot two.  The largest
>     crevasse is the lack of any form of source-code control.  People
>     on the committee don't want to suffer through an SCCS-RCS debate.
>     POSIX dealt with the cpio-tar debate.  (It decided not to
>     decide.) POSIX dealt with the vi-emacs debate.  (The UPE provides
>     a standard for ex/vi.) POSIX is working on the NFS-RFS debate,
>     and a host of others.  Such resolutions are a part of its
>     responsibility and authority.  POSIX is even working on the
>     Motif-Open/Look debate (whether it should or not).

The problem with all these is that there is not a "good enough"
solution in widespread existing practice.  This should tell the
parties involved that standardization in these areas is therefore
premature, since it would in effect "lock in" inferior technology.
However, marketing folks have jumped on the standardization
bandwagon and want standards even where they're inappropriate.
(This is especially apparent in the field of computer graphics.)

>     At the very least, the standard could require some sort of source
>     code control, with an option specifying which flavor is
>     available.  Perhaps we could ask NIST to threaten to provide a
>     specification.

Oh, ugh.  Such options are evil in a standard, because they force
developers to always allow for multiple ways of doing things, which is
more work than necessary.

You shouldn't even joke about using NIST to force premature decisions,
as that's been a real problem already and we don't need it to get worse.

>As a final note, because dot two (collective) standardizes user-level
>commands, it really can provide practical portability across operating
>systems.  Shell scripts written on a dot-two-conforming UNIX system
>should run just fine on an MS-DOS system under the MKS toolkit.

I hope that is not literally true.  1003 decided quite early that it
would not bend over backward to accommodate layered implementations.
For MS-DOS to be supported even at the 1003.2 level would seem to
require that the standard not permit shared file descriptors,
concurrent process scheduling, etc. in portable scripts.  That would
rule out exploitation of some of UNIX's strongest features!
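
For instance (my sketch of the underlying dot-one semantics, not
anything from dot two): after fork() the parent and child share a
single open file description, so concurrently scheduled processes
writing through it advance one shared file offset -- which is just
what a script that runs two commands into one redirected output
relies on.

	#include <sys/types.h>
	#include <sys/wait.h>
	#include <fcntl.h>
	#include <unistd.h>

	int
	main(void)
	{
		pid_t pid;
		int fd = open("log", O_WRONLY | O_CREAT | O_TRUNC, 0644);

		if (fd == -1)
			return 1;
		pid = fork();
		if (pid == 0) {
			/* child's write advances the shared offset */
			(void) write(fd, "child\n", 6);
			_exit(0);
		}
		(void) wait((int *) 0);
		/* parent's write lands after the child's data */
		(void) write(fd, "parent\n", 7);
		return close(fd) == -1;
	}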

>On the surface, it seems logical and appealing that documents like
>1003.1 be re-written as a language-independent standard, with a
>separate C-language binding, analogous to those of dot five and dot
>nine.  But is it really?

I don't think it is.  UNIX and C were developed together, and C was
certainly intended to be THE systems implementation language for UNIX.

>First, it fosters the illusion that POSIX is divorced from, and
>unconstrained by its primary implementation language.  Should the
>prohibition against nul characters in filenames be a base-standard
>restriction or a C-binding restriction?

The prohibition is required by kernel implementation constraints:
UNIX is implemented in C and relies on C conventions, such as
NUL-terminated strings, for handling pathnames.  Thus the prohibition
is required no matter what the application implementation language.
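
A two-line C illustration (mine): because the kernel interface takes
pathnames as NUL-terminated strings, an embedded NUL character
silently truncates the name before the system ever sees it.

	#include <fcntl.h>

	int
	main(void)
	{
		/* The literal contains 'a', '\0', 'b', but open() stops
		 * at the first NUL: the kernel is asked to create "a",
		 * never a three-character name. */
		int fd = open("a\0b", O_WRONLY | O_CREAT, 0644);

		return fd == -1;
	}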

>It would be nice if this push for language-independent POSIX would go
>away quietly, but it won't.

As I understand it, it is mainly ISO that is forcing this, probably
originally due to Pascal folks feeling left out of the action.
Because many large U.S. vendors have a significant part of their
market in Europe, where conformance with ISO standards is an
important consideration, there is a lot of pressure to make the
U.S.-developed standards meet ISO requirements, to avoid having to
provide multiple versions of products.  I think this is unfortunate
but don't have any solution to offer.

>X3J11
>A single individual, Russell Hansberry, is blocking the official
>approval of the ANSI standard for C on procedural grounds.  At some
>point, someone failed to comply with the letter of IEEE rules for
>ballot resolution, and Hansberry is using the irregularity to delay
>adoption of the standard.

This is misstated.  IEEE has nothing to do with X3J11 (other than
through the 1003.1/X3J11 acting liaison, at the moment yours truly).

Mr. Hansberry did appeal to X3 on both technical and procedural
grounds.  X3 reaffirmed the technical content of the proposed
standard and the procedural appeal was eventually voluntarily
withdrawn.  The ANSI Board of Standards Review recently approved
the standard prepared by X3J11.

The delay in ratification consisted of two parts:  First, a delay
caused by having to address an additional public-review letter
(Mr. Hansberry's) that had somehow been mislaid by X3; fortunately
the points in the letter that X3J11 agreed with had already been
addressed during previous public review resolution.  (Note that
X3J11 and X3 do NOT follow anything like the IEEE 1003.n ballot
resolution/consensus process.  I much prefer X3J11's approach.)
Through expeditious work by the editor (me again) and the reviewers
of the formal X3J11 document responding to the issues raised by the
late letter, this part of the delay was held to merely a few weeks.
The second part of the delay was caused by the appeal process that
Mr. Hansberry initiated (quite within his rights, although nobody I
know of in X3J11 or X3 thought his appeal to be justified).  The
net effect was to delay ratification of the ANSI standard by
several months.

>This has had an odd effect in the 1003 committees.  No one wants to
>see something like this inflicted on his or her group, so folks are
>being particularly careful to dot all i's and cross all t's.  I say
>odd because it doesn't look as though Hansberry's objections will have
>any effect whatsoever on either the standard, or its effectiveness.
>Whether ANSI puts its stamp on it or not, every C compiler vendor is
>implementing the standard, and every book (even K&R) is writing to it.
>X3J11 has replaced one de-facto standard with another, even stronger
>one.

That's because all the technical work had been completed and the
appeal introduced merely procedural delays.  Thus there was a clear
specification that was practically certain to become ratified as the
official standard eventually, so there was little risk and considerable
gain in proceeding to implement conformance to it.

You should note that no amount of dotting i's and crossing t's
would have prevented the Hansberry appeal.  I'm not convinced that
even handling his letter during the second public review batch would
have forestalled the appeal, which so far as I can tell was motivated
primarily by his disappointment that X3J11 had not attempted to specify
facilities aimed specifically at real-time embedded system applications.
(Note that this sort of thing was not part of X3J11's charter.)

>1201
>someone smart will show us a better way to do graphics, and X will
>become a backwater.

Someone smart has already shown us better ways to do graphics.
(If you've been reading ACM TOG and the USENIX TJ, you should have
already seen some of these.)

There is no doubt a need for X standardization, but it makes no
sense to bundle it in with POSIX.

>If 1201.1 ignores X and NIST, can it do anything?  Certainly.  The
>real problem with the occasionally asked question, ``are standards
>bad?'' is that it omits the first word: ``When.'' Asked properly, the
>answer is, ``When they're at the wrong level.'' API's XVT is an example
>of a toolkit that sits above libraries like Motif or the Mac toolbox,
>and provides programmers with much of the standard functionality
>necessary to write useful applications on a wide variety of window
>systems.  Even if XVT isn't the answer, it provides proof by example
>that we can have a window-system-independent, application programming
>interface for windowing systems.  1201.1 could provide a useful
>standard at that level.  Will it?  Watch and see.

This makes a good point.  Standards can be bad not only because they
are drawn up at the wrong conceptual level, but also because they do
not readily accommodate a variety of environments.  1003.1 was fairly
careful to at least consider pipes-as-streams, network file systems,
ACLs, and other potential enhancements to the POSIX-specified
environment as just that: enhancements to an environment that was
deliberately selected to support portability of applications.  A
standard that includes too specific a methodology will actually
constrain application portability adversely.

By the way, I could use more information about API's XVT.  How can
I obtain it?

Volume-Number: Volume 18, Number 13


