Mail not delivered yet, still trying

SMTP MAILER postmaster at ddnvx2.afwl.af.mil
Fri Jan 19 03:22:53 AEST 1990


 ----Mail status follows----
Have been unable to send your mail to <declerck at sun4b.afwl.af.mil>,
will keep trying for a total of three days.
At that time your mail will be returned.

 ----Transcript of message follows----
Date: 18 Jan 90 03:59:00 MST
From: info-unix at BRL.MIL
Subject: INFO-UNIX Digest  V9#049
To: "declerck" <declerck at sun4b.afwl.af.mil>

Return-Path: <info-unix-request at sem.brl.mil>
Received: from SEM.BRL.MIL by ddnvx2.afwl.af.mil with SMTP ; 
          Thu, 18 Jan 90 03:56:59 MST
Received: from SEM.BRL.MIL by SEM.brl.MIL id ab08556; 18 Jan 90 3:05 EST
Received: from sem.brl.mil by SEM.BRL.MIL id aa08539; 18 Jan 90 2:46 EST
Date:       Thu, 18 Jan 90 02:46:15 EST
From:       The Moderator (Mike Muuss) <Info-Unix-Request at BRL.MIL>
To:         INFO-UNIX at BRL.MIL
Reply-To:   INFO-UNIX at BRL.MIL
Subject:    INFO-UNIX Digest  V9#049
Message-ID:  <9001180246.aa08539 at SEM.BRL.MIL>

INFO-UNIX Digest          Thu, 18 Jan 1990              V9#049

Today's Topics:
                             sys/resource.h
                        Vi Reference - version 6
                       help on MICROPOLIS 1568 HD
                           Re: YACC question
                              Re: /bin/sh
                     Re: ps -c num bug or feature ?
              Re: Is there an EDT editor for Unix systems?
                    Re: use of set in a shell script
                            Re: RPC numbers
                             X25 summarize
                        Re: Problem with 'find'
                   Reply to question on 'find p* ...'
                          using RCS with make
                        Re: using RCS with make
 Re: async I/O (was: Is there a select()-like call for message queues?)
                             Re: async I/O
 Re: async I/O (was: Is there a select()-like call for message queues?)
                             Re: async I/O
            The trouble with fork() (Re: IBM PC prehistory)
          Re: The trouble with fork() (Re: IBM PC prehistory)
                           Unix System V link
                         Re: Unix System V link
                                Re: help
                     Re: making compressed backups
                           Re: Robust Mounts
                             timed question
       Re: How to do a non-blocking write of more than one char?
                      h files that include h files
                    Re: h files that include h files
                              xDBM sources
                            Re: xDBM sources
            getting the system's domain name in a C program
                       csh variable manipulation
                            Re: #! troubles
                       csh and signal handling...
                         problem with find(1)?
                          Re: Shared libraries
                          Re: Compress problem
        Printcap & filters for Genicom 3180-3404 Series Printers
                            Hex input to awk
                          Wanted: Info. on zic
                       Passing variables to gawk
                        panic: sys pt too small
                             size of a file
                     SUMMARY: problem with find(1)
                   Unix Operating System on an PC XT.
                    How does a process use memory???
            Shadow passwds breaks programs with pw_stayopen?
-----------------------------------------------------------------

From: Michael Richardson <michael at fts1.uucp>
Subject: sys/resource.h
Keywords: Porting to System V3.2
Date: 10 Jan 90 01:28:42 GMT
To:       info-unix at sem.brl.mil


  On occasion, I have attempted to port BSD programs to ISC 386/ix. 
	Occasionally, I manage to do it :-)

	Very often, however, I run into sys/resource.h as a major problem.
This usually has to do with wait3() and the like in BSD. I've read
the man pages on a Sun I have access to, and understand them; however,
that seldom helps me decide what to do about the code... I know
SunOS/4.3 _far_ better than I know SysV. (I will be _very_ happy
to see V.4)
	Can anyone suggest any general recommendations for porting
things that involve sys/resource.h?
  Thanks.

-- 
  :!mcr!:
  Michael C. Richardson
HOME: mcr at julie.UUCP SCHOOL: mcr at doe.carleton.ca WORK: michael at fts1.UUCP
I never liked staying in one place too long, but this is getting silly...

-----------------------------

From: Maarten Litmaath <maart at cs.vu.nl>
Subject: Vi Reference - version 6
Date: 15 Jan 90 06:49:50 GMT
To:       info-unix at sem.brl.mil

It'll never end.  Many Minor Modifications, some additions.  A patch would
be twice as big.  Still `old' format (what happened to you, Kevin?).
Enjoy!

-----------------------------

From: Lin Chen   <lin at cdin-1.uucp>
Subject: help on MICROPOLIS 1568 HD
Keywords: 1568 MICROPOLIS SCO ADAPTEC HD
Date: 15 Jan 90 23:08:19 GMT
To:       info-unix at sem.brl.mil


Hi Netters:

We have a problem that we need some help with : 

First our system configuration : 
 
AMI 386/25 running SCO XENIX 2.3.1 (we have also tried SCO XENIX 2.3.2 GT and AT)
with a MICROPOLIS Model 1568, which is a 660 MB HD. Some drive specifics:
Cylinders: 1630; Heads: 15; Sectors: 54. The hard drive controller is
an (ADAPTEC) ACB-2322B-8.
	
The drive gets through Debug (using G=C800:5) with no problems. It is only
when we go to install SCO XENIX that we cannot get through the bad track
routine, either destructive or non-destructive, using the thorough or quick
options.

We have talked to MICROPOLIS, ADAPTEC and SCO and have been unsuccessful with
their suggestions. We would appreciate any ideas and past experiences with
this configuration.

We will summarize if there is enough interest.
-- 
	Lin Chen				{uunet,bpa}!cdin-1!lin
						lin at cdin-1.uu.net
  CompuData Inc.,  Philadelphia  PA

-----------------------------

From: Evan Bigall <evan at plx.uucp>
Subject: Re: YACC question
Date: 16 Jan 90 00:31:28 GMT
To:       info-unix at sem.brl.mil

>
>    expr:       mulexpr PLUS mulexpr
>        | mulexpr MINUS mulexpr
>
>It's very straightforward; the yylex() routine must be written to return
>the constant PLUS when it encounters a '+' in the input, and the
>constant MINUS when it encounters a '-' in the input.  However, Yacc
>allows you to rewrite the above fragment as
>
>    expr:       mulexpr '+' mulexpr
>        | mulexpr '-' mulexpr
>
>My question is, where does Yacc find the '+' and the '-' characters? 
>Apparently they're not gotten via a call to yylex().  Does Yacc simply
>do a getchar()?

Quoting from the yacc section of my sys5.2 "Support Tools Guide":

}	The rules section is made up of one or more grammar rules.  A grammar
}rule has the form 
}
}A : BODY ;
}
}where "A" represents a nonterminal name, and "BODY" represents a sequence of
}zero or more names and LITERALS {my emphasis}.  The colon and the semicolon
}are yacc punctuation. 

{later it says:}

}A literal consists of a character enclosed in single quotes (').  As in C
}language, the backslash (\) is an escape character within literals....

Really all that is going on here is that yacc is using the value of the
character literal as the token number.  This is why the yacc-generated token
numbers start at 257 (on machines with "normal" char sets).

The standard way to represent this as a lex rule is:

 .                      	return(*yytext);

to return a literal for all characters not recognized by another rule.

Evan


-- 
Evan Bigall, Plexus Software, Santa Clara CA (408)982-4840  ...!sun!plx!evan
"I barely have the authority to speak for myself, certainly not anybody else"

-----------------------------

From: Maarten Litmaath <maart at cs.vu.nl>
Subject: Re: /bin/sh
Date: 16 Jan 90 08:00:39 GMT
To:       info-unix at sem.brl.mil

In article <21838 at mimsy.umd.edu>,
	chris at mimsy.umd.edu (Chris Torek) writes:
\...
\	$ eval echo $"$#"
\...
\but for a small problem with sh argument syntax:
\
\	$ set a b c d e f g h i j k l
\	$ echo $11
\	a1

A ridiculous feature which should have been fixed long ago...
Similar:
	exec 10< foo 11> bar
-- 
  Q: "How do I convert UNIX files to IXUN format?"  A: "rev | dd conv=swab." |
  Maarten Litmaath @ VU Amsterdam:  maart at cs.vu.nl,  uunet!mcsun!botter!maart

-----------------------------

From: Hans Buurman <hans at duttnph.tudelft.nl>
Subject: Re: ps -c num bug or feature ?
Date: 16 Jan 90 10:15:12 GMT
Sender: tnphnws at dutrun.uucp
To:       info-unix at sem.brl.mil

In article <1990Jan15.094339.4254 at athena.mit.edu> jik at athena.mit.edu (Jonathan I. Kamens) writes:
>
>In article <1072 at dutrun.UUCP>, hans at duttnph.tudelft.nl (Hans Buurman) writes:
>> 
>> I don't understand the following behaviour of ps (SunOs 4.0.1):
>> 
>> hans55> ps -c 23706
>> ps: cannot open 23706: No such file or directory

>  The manual also says (at the top):
>
>     SYNOPSIS
>          ps [ acegklnstuvwxU# ]
>
>This means that the pid number should be part of the first argument
>passed to ps, not in a second argument.  In other words, you should have typed:
>
>     ps -c23706

Er, yes and no.

You are quite right, ps -c23706 works perfectly.
However, my SunOs 4.0.1 manual says:

SYNOPSIS
     ps [ -acCegklnrStuvwxU ] [ num ]
          [ kernel_name ] [ c_dump_file ] [ swap_file ]

Sun Release 4.0   Last change: 14 January 1988                  1

So obviously, the manual is incorrect.

You go on to explain why exactly we get this error message.
Your explanation is no doubt correct, but again, you quote from
a different manual. Which manual are you using ?

Thanks for the reply,

	Hans

>Jonathan Kamens			              USnail:
>MIT Project Athena				11 Ashford Terrace
>jik at Athena.MIT.EDU				Allston, MA  02134
>Office: 617-253-8495			      Home: 617-782-0710


========================================================================
Hans Buurman               | hans at duttnph.tudelft.nl | hans at duttnph.UUCP
Pattern Recognition Group  | 31-(0)15-78 46 94       |
Faculty of Applied Physics | Delft University of Technology

-----------------------------

From: "Jonathan I. Kamens" <jik at athena.mit.edu>
Subject: Re: ps -c num bug or feature ?
Date: 16 Jan 90 17:03:04 GMT
Sender: News system <news at athena.mit.edu>
To:       info-unix at sem.brl.mil

In article <1073 at dutrun.UUCP>, hans at duttnph.tudelft.nl (Hans Buurman) writes:
> You go on to explain why exactly we get this error message.
> Your explanation is no doubt correct, but again, you quote from
> a different manual. Which manual are you using ?

  BSD 4.3.  I should have mentioned that....

Jonathan Kamens			              USnail:
MIT Project Athena				11 Ashford Terrace
jik at Athena.MIT.EDU				Allston, MA  02134
Office: 617-253-8495			      Home: 617-782-0710

-----------------------------

From: ekuns at zodiac.rutgers.edu
Subject: Re: Is there an EDT editor for Unix systems?
Date: 16 Jan 90 13:51:52 GMT
To:       info-unix at sem.brl.mil

In article <1990Jan12.035136.2081 at world.std.com>, madd at world.std.com (jim frost) writes:
> lss at babcock.cerc.wvu.wvnet.edu (Linda S. Saus) writes:
>>From article <680 at dftsrv.gsfc.nasa.gov>, by packer at chrpserv.gsfc.nasa.gov (Charles Packer):
>>> Does there exist a screen-oriented editor that runs
>>> under Unix on a Sun, for example, but looks to the user
>>> like the VAX-VMS editor EDT?

One product I haven't seen mentioned is EDT8, by a company called acceler8. 
I've been using it for a while, and it seems quite close to EDT.  (I haven't
run into a difference yet, except that on some systems, you can't use ^Z to get
out of screen mode.  But that's probably my ineptness at figuring out stty!) 
This company also sells a product DCL8, a VAX/VMS DCL-like shell for UNIX, and
LIB8, to emulate the VAX/VMS run-time libraries.  I haven't used either of the
two myself.
-- 
 
/--------------+----------------------------------+--------------------------\
|              | bitnet: EKuns at zodiac             | 2005 Tall Oaks Drive #2A |
|  Eddie Kuns  | domain: EKuns at zodiac.rutgers.edu | Aurora, IL  60505        |
|              | Delphi: EddieKuns                | (708) 820-3943           |
+--------------+----------------------------------+--------------------------+
| Note: You can substitute Cancer or Pisces for Zodiac if you have problems. |
\----------------------------------------------------------------------------/

-----------------------------

From: Geoff Clare <gwc at root.co.uk>
Subject: Re: use of set in a shell script
Date: 16 Jan 90 14:06:32 GMT
To:       info-unix at sem.brl.mil

In article <5060 at solo9.cs.vu.nl> maart at cs.vu.nl (Maarten Litmaath) writes:
>optc=0
>optv=
>
>for i
>do
>	case $i in
>	-*)
>		optc=`expr $optc + 1`
>		eval optv$optc='"$i"'
>		optv="$optv \"\$optv$optc\""
>		;;
>	*)
>		# you get the idea
>	esac
>done
>
>eval set $optv		# restore the options EXACTLY

A good attempt, Maarten, but there are a couple of big problems here.  

Firstly, the use of "expr" will be extremely slow for shells which don't
have "expr" built in (virtually all Bourne shells, I think).  There's no
need to use a separate variable for each argument, anyway.

Secondly, the final "set" command will not work correctly.  Suppose at
the start of the script $1 contains "-x".  This will end up as a
"set -x" command, which will turn on tracing mode in the shell, not
place "-x" in $1.  With some shells you can use "--" in a "set" command
to mark the end of the options, but a dummy first argument is more
portable.

Try this modified version:

optv=

for i
do
	case $i in
	-*)
		optv="$optv '$i'"
		;;
	*)
		# you get the idea
	esac
done

eval set X "$optv"; shift		# restore the options EXACTLY


-- 
Geoff Clare, UniSoft Limited, Saunderson House, Hayne Street, London EC1A 9HH
gwc at root.co.uk  (Dumb mailers: ...!uunet!root.co.uk!gwc)  Tel: +44-1-315-6600

-----------------------------

From: Stephen Vinoski <vinoski at apollo.hp.com>
Subject: Re: RPC numbers
Keywords: RPC SUN Problems
Date: 16 Jan 90 15:33:00 GMT
Sender: root at apollo.hp.com
To:       info-unix at sem.brl.mil

In article <148 at ingreur.UUCP> pve at ingreur.UUCP (new user) writes:
>I am trying to get an RPC number registered. I have tried to
>email, send letters and contacted local SUN offices, but either
>they did not respond or they refer to the RPC Administrator as mentioned
>in the SUN manuals (but she/he is not responding !!!!!!)
>
>I would really appreciate any suggestions or tips on how to get it
>done. (Maybe someone from SUN is reading this ?? )

Too bad you're not using Apollo's NCS.  With it, there is no need for this
ridiculous exercise, since NCS includes a program that generates Universal
Unique Identifiers (UUIDs), which are used to identify RPCs and remote objects.

From "Network Computing Architecture", Lisa Zahn, et al., Prentice-Hall, 1990,
ISBN 0-13-611674-4, page 11:

 "In addition, UUIDs can be generated anywhere without the need for prior
contact with some other agent, for example, contact with a special server on the
network, or a human representative of a company that hands out identifiers."

Pretty much hits the nail on the head, huh?

NCS is available for Suns, among other machines.


-steve

| Steve Vinoski       | Hewlett-Packard Apollo Div. | ARPA: vinoski at apollo.com |
| (508)256-6600 x5904 | Chelmsford, MA    01824     | UUCP: ...!apollo!vinoski |

-----------------------------

From: Alfredo Villalobos <avq at goya.dit.upm.es>
Subject: X25 summarize
Date: 16 Jan 90 16:15:09 GMT
Sender: avq at goya.dit.upm.es
Followup-To: comp.unix.i386
To:       info-unix at sem.brl.mil


X.25 on UNIX Sys V Rel. 3.2
All products are available for PC/AT bus based 386 machines with 
UNIX/XENIX Operating System.

 ------------
netCS Informationstechnik GmbH
Ahornstrasse 1-2
D-1000 Berlin 30
West Germany
Phone: +49 30 244237; Fax: +49 30 243800
pengo at tmpmbx.UUCP <Hans Huebner>
pengo at garp.mit.edu
 
- Intelligent Adapter Card (currently three different cards) that provides 
  everything up to ISO level 3.
- X.3,X.28, X.29 internal PAD support.
- programmer's library to support custom applications.

 ---------------
Systems Strategies Inc.
USA, Phone 212 279 8400

X.25 intelligent board and Comlink Communications Software.

 ---------------------
Symicron Computer Communications
Charles House
35 Widmore Road
Bromley
Kent BR1 1RW, England
Phone: 01-460-2238, Fax: 01-290-1669

- Product DTSX PC/AT card (up to 64 kbps) 
- soft STS (Symicron Telematics Software)
- UNIX/XENIX device drivers
- X.3, X.28, X.29 internal PAD support (release not yet available)
- programmer's library


 ---------------------
The Software Group Limited 
2 Director Court, Suite 201
Woodbridge, Canada L4L 3Z5
Phone: (416) 8560238
Fax: (416) 8560242
uunet!tsgfred!derek <Derek Vair>

- Product NETCOM-II includes both Intelligent Board (up to 64 kbps) and
  software.
- X.3, X.28, X.29 internal PAD support (T-PAD,H-PAD and U-PAD)
- programmer's library

 ----------------------
RETIX
2644 30th Street
Santa Monica, CA 90405-3009
Phone: (213) 3992200
rutgers!retix!mark <Mark Hoy>

- X.25 card (up to 19.2 kbps) for UNIX on 386 systems. 

-----------------------------

From: Mark Runyan <runyan at hpirs.hp.com>
Subject: Re: Problem with 'find'
Date: 16 Jan 90 17:11:24 GMT
To:       info-unix at sem.brl.mil

>/ mcd at mcdd1.UUCP (Martin Dew) /  6:35 am  Jan 11, 1990 /
>When I execute the command line :
>
>	find . -name t.c -print
>
>I get the response :
>
>	find  cannot execute 'pwd'
>
>'Pwd' exists and is available on typing 'pwd' on its own.
>
>Any ideas why I am having problems ?????????????

May depend on your shell.  A long time ago, I noticed that csh
does its own pwd.  If /bin/pwd exists, try executing it as "/bin/pwd"
and see if you get results.  If not, then it may be that the
directory you are in has permissions that keep pwd from finding
out where you are.

Another thing to examine: Are you on an NFS mount?  If so, you may find
that some programs know what to do to get pwd while others fail.  
Programs that use libPW curdir() function may have some difficulty if
that function wasn't updated to understand NFS or other remote mounting
systems.

If none of the above helps, then please provide more information.

Mark Runyan

-----------------------------

From: "Jan B. Andersen" <jba at harald.ruc.dk>
Subject: Reply to question on 'find p* ...'
Date: 16 Jan 90 17:32:54 GMT
To:       info-unix at sem.brl.mil

My reply (by mail) bounced with

554 usdtsg.UUCP!musson... Host usdtsg not known within the UUCP domain

>when I do a 'find p* -mtime +1 -print'   in a directory with a large number of
>files starting with 'p', I get find: too many arguments.

The first argument to find(1) must the starting directory. Assuming that
you want to find *only* those files starting with 'p' in the current
directory use the command

  % find . -name 'p*' -mtime.....

-----------------------------

From: dw block x-4621 <dwb at hare.udev.cdc.com>
Subject: using RCS with make
Date: 16 Jan 90 17:39:54 GMT
Sender: news at shamash.cdc.com
Followup-To: comp.unix.questions
To:       info-unix at sem.brl.mil

Is there some way to get the make utility to understand RCS files?  I would
like to have make know how to create a .f file from a .f,v RCS file.

I have heard that the latest version of make from AT&T has this capability.
If this is true, any idea when it will be available on MIPS?


 ------------------------------------------------------------
Dave Block                    E-mail:  gwk at hare.udev.cdc.com
Control Data Corp.            AT&T:    (612) 482-4621

-----------------------------

From: ilan343 at violet.berkeley.edu
Subject: Re: using RCS with make
Date: 16 Jan 90 21:22:09 GMT
Sender: "USENET Administrator;;;;ZU44" <usenet at agate.berkeley.edu>
To:       info-unix at sem.brl.mil

In article <15093 at shamash.cdc.com> dwb at hare.udev.cdc.com (dw block x-4621) writes:
>Is there some way to get the make utility to understand RCS files?  I would
>like to have make know how to create a .f file from a .f,v RCS file.
>
I had the same problem a while ago. The easiest way to fix it was to
install GNUmake.

-----------------------------

From: Peter da Silva <peter at ficc.uu.net>
Subject: Re: async I/O (was: Is there a select()-like call for message queues?)
Date: 16 Jan 90 17:44:53 GMT
To:       info-unix at sem.brl.mil

In article <11956 at smoke.BRL.MIL> gwyn at brl.arpa (Doug Gwyn) writes:
> One thing that was appreciated in the computer science research community
> during the 1970s was that forcing applications to explicitly deal with
> asynchronism had been causing numerous reliability problems.

First, a nitpick. By 1974 UNIX was basically in its current form, and its
design goals must have been established earlier than that. But that's just
a nitpick.

Secondly, I'm not suggesting that applications be forced to explicitly
deal with asynchronism. I just believe that since the real world is
asynchronous you should be able to deal with it.

Also, the event-loop construct has considerable success in the real world
for dealing with asynchronous events. I've worked in the process control
industry for the past 10 years, and UNIX has effectively zero penetration
simply because it doesn't allow for processes to handle asynchronous events.

> UNIX loosely followed the CSP notion, wherein individual processes are
> strictly sequential but can communicate with concurrent processes to
> achieve controlled asynchronity.  The UNIX kernel manages the actual
> asynchronous operations and converts them into the per-process sequential
> I/O model.

Unfortunately, UNIX doesn't support a sufficiently fine-grained process
structure to allow this to be generally used. Systems like Mach do, but
they do it by pretty much abandoning the UNIX model.

Or you can implement a finer-grained process structure within a UNIX
process, but to do that effectively you need asynchronous I/O.

> Rob Pike has shown in an article in a recent issue of Computing Systems
> how the CSP model can be applied to graphical windowing environments,
> with the result of dramatically simplifying the design of applications
> in such environments.

I'm sure it can. But not under UNIX as it exists, and not under any
extension of UNIX that I've seen that still remains close to the source.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter at ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

-----------------------------

From: brnstnd at stealth.acf.nyu.edu
Subject: Re: async I/O
Date: 17 Jan 90 08:09:10 GMT
X-Original-Subject: Is there a select()-like call for message queues?
To:       info-unix at sem.brl.mil

I very much agree with Peter. The basic I/O calls should be asynchronous:
aread(), awrite(), and astatus(). aschedwait() and asyncwait() should wait
for scheduling and synchronization respectively; both should only be
special cases of a single await() call, with different semantics for
different devices and file types. Then my multitee program would be easy
to deal with, along with a host of related problems.

In article <CU318Y5xds13 at ficc.uu.net> peter at ficc.uu.net (Peter da Silva) writes:
> Secondly, I'm not suggesting that applications be forced to explicitly
> deal with asynchronism.

Exactly. read() and write() would be short library routines.

> I just believe that since the real world is
> asynchronous you should be able to deal with it.

Yup, and select() is only half a solution. (select() and poll() would be
forms of the more logically named await().)

---Dan

-----------------------------

From: Doug Gwyn <gwyn at smoke.brl.mil>
Subject: Re: async I/O (was: Is there a select()-like call for message queues?)
Date: 17 Jan 90 14:25:57 GMT
To:       info-unix at sem.brl.mil

In article <CU318Y5xds13 at ficc.uu.net> peter at ficc.uu.net (Peter da Silva) writes:
>First, a nitpick. By 1974 UNIX was basically in its current form, and its
>design goals must have been established earlier than that. But that's just
>a nitpick.

Yes, Ken Thompson was thinking about these issues too, as far back as
1969 for sure and probably well before that.

>Secondly, I'm not suggesting that applications be forced to explicitly
>deal with asynchronism. I just believe that since the real world is
>asynchronous you should be able to deal with it.

I would rather have it under control than have to deal with it ad lib.

>Also, the event-loop construct has considerable success in the real world
>for dealing with asynchronous events.

Ha!  Practically everybody I know who has had to program event loops
thinks "there has to be a better way".  The fundamental problem with
event loops is that they force the application to maintain state
information merely to schedule its own actions properly.
This (tedious and error-prone) bookkeeping is unnecessary when using
better methods for handling asynchronism.

>Unfortunately, UNIX doesn't support a sufficiently fine-grained process
>structure to allow this [CSP] to be generally used.

Actually, it does pretty well, but in most implementations its IPC needs
improvement.  Also, there is no reasonable programming language for
exploiting this approach other than the shell language, which is too
limited and difficult to use in this area.

>I'm sure it can. But not under UNIX as it exists, and not under any
>extension of UNIX that I've seen that still remains close to the source.

The issue was the best way to extend UNIX to give applications better
control over asynchronism.  I made suggestions for better methods than
forcing processes to deal with awrite() etc.

-----------------------------

From: Peter da Silva <peter at ficc.uu.net>
Subject: Re: async I/O (was: Is there a select()-like call for message queues?)
Date: 17 Jan 90 23:05:30 GMT
To:       info-unix at sem.brl.mil

In article <11968 at smoke.BRL.MIL> gwyn at brl.arpa (Doug Gwyn) writes:
> In article <CU318Y5xds13 at ficc.uu.net> peter at ficc.uu.net (Peter da Silva) writes:
> >I just believe that since the real world is
> >asynchronous you should be able to deal with it.

> I would rather have it under control than have to deal with it ad lib.

> >Also, the event-loop construct has considerable success in the real world
> >for dealing with asynchronous events.

> Ha!  Practically everybody I know who has had to program event loops
> thinks "there has to be a better way".

You sound like me (see my occasional diatribes against X in comp.windows.*).
However with a conventional programming language it's the only way to do it,
unless you go all the way to UNIX processes... and that's too slow.

> >Unfortunately, UNIX doesn't support a sufficiently fine-grained process
> >structure to allow this [CSP] to be generally used.

> Actually, it does pretty well, but in most implementations its IPC needs
> improvement.

Yes, that's an understatement. Replacing all of System V's shm_* calls with
something like map_fd() (from Mach) would help.

But context switch overhead is still too high for realtime work.

> Also, there is no reasonable programming language for
> exploiting this approach other than the shell language, which is too
> limited and difficult to use in this area.

Multithreaded applications are difficult in many languages, even when the
operating system is up to snuff. This is a language problem...

> The issue was the best way to extend UNIX to give applications better
> control over asynchronism.  I made suggestions for better methods than
> forcing processes to deal with awrite() etc.

First, you keep telling me I'm *forcing* processes to deal with awrite().
I'm not. I'm saying it should be an option.

Secondly, you can implement threads on top of asynchronous I/O calls. I've
done this for Forth under RSX-11. You have to have an explicit context
switch routine, but that simplifies the programming immensely anyway. You
just include checks for completed I/O in the swtch() routine. I laid out
the outline for such a routine in comp.lang.c some months ago, and at least
one person has turned it into a real concurrent "library" for C.

*If* UNIX supported await() and friends, then you could efficiently
implement a concurrent programming language. In fact, you could use C
plus a set of small routines to switch to a new context.

But it doesn't. Pity. Your serve.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter at ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

-----------------------------

From: Barry Margolin <barmar at think.com>
Subject: Re: async I/O
Date: 18 Jan 90 06:31:57 GMT
Sender: news at think.com
To:       info-unix at sem.brl.mil

In article <20718 at stealth.acf.nyu.edu> brnstnd at stealth.acf.nyu.edu (Dan Bernstein) writes:
>I very much agree with Peter. The basic I/O calls should be asynchronous:
 ...
>Exactly. read() and write() would be short library routines.

Watch out, this is how Multics does it.  Remember, Unix is supposed to be a
castrated Multics :-)

On Multics, the only system call that causes the process to block is
hcs_$block, which is similar to select().  All I/O system calls are
asynchronous (file access is done using memory mapping and paging, so it
isn't included).  These are hidden away in library routines (called I/O
modules) which implement device-independent, I/O (similar to Unix read(),
write(), etc.).  Since the underlying mechanism is asynchronous, I/O
modules can provide synchronous and asynchronous modes.

When doing asynchronous writes, the I/O module returns the number of
characters actually written.  The caller can then advance his buffer
pointer that many characters into his output buffer, wait for the device to
be ready to accept more data, and then try to write the rest of the buffer;
this is iterated until the entire buffer is taken.  The terminal driver
also provides an all-or-nothing interface, for use by applications that
write escape sequences (to guarantee that process interrupts don't cause
partial escape sequences to be written); this is just like the normal
interface, but acts as if the kernel's buffer is full unless it has enough
room for the entire string being written (even a normal write call can
return "0 characters written", if other processes fill up the kernel
buffers before this process gets around to making the write call).

--
Barry Margolin, Thinking Machines Corp.

barmar at think.com
{uunet,harvard}!think!barmar

-----------------------------

From: Peter da Silva <peter at ficc.uu.net>
Subject: The trouble with fork() (Re: IBM PC prehistory)
Date: 16 Jan 90 19:10:15 GMT
Followup-To: comp.misc
To:       info-unix at sem.brl.mil

Fork() is an elegant concept, but as has been seen it leads to problems
implementing UNIX on a system without an MMU, or implementing a UNIX
lookalike on top of a non-UNIX O/S. It's possible, but expensive.

Wouldn't it be nice if there was a sanctioned P1003 subset that replaced
fork() with a combined fork()/exec() call (spawn?). Or just an addition
of spawn to the standard as an alternative process creation mechanism:
This would radically improve the performance of non-UNIX POSIX systems,
without compromising the capability of the standard...
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter at ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

-----------------------------

From: Doug Gwyn <gwyn at smoke.brl.mil>
Subject: Re: The trouble with fork() (Re: IBM PC prehistory)
Date: 17 Jan 90 14:31:07 GMT
To:       info-unix at sem.brl.mil

In article <DW31DR7xds13 at ficc.uu.net> peter at ficc.uu.net (Peter da Silva) writes:
>Wouldn't it be nice if there was a sanctioned P1003 subset that replaced
>fork() with a combined fork()/exec() call (spawn?). Or just an addition
>of spawn to the standard as an alternative process creation mechanism:
>This would radically improve the performance of non-UNIX POSIX systems,
>without compromising the capability of the standard...

Wrong; fork() is more flexible than spawn(), and this is thoroughly
exploited by UNIX applications.  For example, a job control shell
has many things to do between the fork() and the exec() in the child
branch.

IEEE P1003 decided early on not to compromise UNIX semantics merely
to allow POSIX to be more readily implemented on non-UNIX platforms.

-----------------------------

From: Peter da Silva <peter at ficc.uu.net>
Subject: Re: The trouble with fork() (Re: IBM PC prehistory)
Date: 18 Jan 90 02:47:27 GMT
To:       info-unix at sem.brl.mil

In article <11969 at smoke.BRL.MIL> gwyn at brl.arpa (Doug Gwyn) writes:
> In article <DW31DR7xds13 at ficc.uu.net> peter at ficc.uu.net (Peter da Silva) writes:
> >Wouldn't it be nice if there was a sanctioned P1003 subset that replaced
> >fork() with a combined fork()/exec() call (spawn?). Or just an addition
                                                       ^^^^^^^^^^^^^^^^^^^
> >of spawn to the standard as an alternative process creation mechanism:
   ^^^^^^^^                 ^^^^^^^^^^^^^^^^^
> >This would radically improve the performance of non-UNIX POSIX systems,
> >without compromising the capability of the standard...

> Wrong; fork() is more flexible than spawn(), and this is thoroughly
> exploited by UNIX applications.

Read my lips: no new taxes.

Oops. Wrong program.

Anyway, look at the second sentence above. Most cases of fork/exec could
be replaced by a simple spawn call, making the majority of programs, the
ones that don't need fork/exec's extra flexibility, more efficient.

How do you feel about adding coroutines to C?
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter at ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

-----------------------------

From: Francois Bronsard <bronsard at m.cs.uiuc.edu>
Subject: Unix System V link
Date: 16 Jan 90 20:43:55 GMT
Sender: Paul Pomes <paul at ux1.cso.uiuc.edu>
To:       info-unix at sem.brl.mil

I have a question about the dangers of having two hard links to the
same directory in Unix System V.  Basically, I cannot do a symbolic
link (because my version of the system is too old), so I was told that
I could use the system call link() to create a hard link to a directory.
However, I was warned that such a thing is dangerous since it might
confuse the file system (specifically, the programs find and
fsck/icheck/ncheck).  Now my question is : "How dangerous is it really?"
In particular, all I want to do is create the following structure:
                    $home
                    / .. \ ...
                   /      \
                FILES      \   
                  \         \
                   \_______files
                            ...

 (The link between FILES and files is the new link that I want to have).
So what problems can such a link cause to the file system?

Francois

-----------------------------

From: Doug Gwyn <gwyn at smoke.brl.mil>
Subject: Re: Unix System V link
Date: 17 Jan 90 14:14:10 GMT
To:       info-unix at sem.brl.mil

In article <1990Jan16.204355.13792 at ux1.cso.uiuc.edu> bronsard at m.cs.uiuc.edu.UUCP (Francois Bronsard) writes:
>Now my question is : "How dangerous is it really?"

It's very dangerous, because when you rmdir one of the links
the other will no longer find . and .. entries in the directory.

Don't do it.

-----------------------------

From: Kartik Subbarao <subbarao at phoenix.princeton.edu>
Subject: Re: help
Date: 17 Jan 90 14:18:07 GMT
To:       info-unix at sem.brl.mil

In article <22108 at adm.BRL.MIL> KOEHLER%DMRHRZ11.BITNET at cunyvm.cuny.edu (Klaus Koehler) writes:
>help


Maybe some UNIX Wizards can read minds, but not this one :-)

Could you be more specific?

-- 
subbarao@{phoenix,bogey or gauguin}.princeton.edu

-----------------------------

From: "Martin H. Brooks" <mb33 at prism.gatech.edu>
Subject: Re: making compressed backups
Date: 16 Jan 90 21:30:06 GMT
To:       info-unix at sem.brl.mil

In article <1990Jan13.020205.3660 at virtech.uucp> cpcahil at virtech.uucp (Conor P. Cahill) writes:
>In article <1990Jan12.092215.1567 at aai.uu.net>, leo at aai.uu.net (Leo Pinard) writes:
>> find /usr -depth -print  | cpio -oBmc | compress | dd of=/dev/rmt8
>
>	1. You should do a cd /usr; find . -depth...
>	2. In order to make the tape stream you probably should specify
>	   an output block size of something like 1024k.

Unless your "cpio" is different from mine, use the "a" option instead
of the "m".  The "a" option resets the access time after a file is
backed up so that it appears not to have been accessed by the backup.
On my "cpio", the "m" option is not available with the "o" output
mode.

-- 
Martin H. Brooks  -  Georgia Institute of Technology
U.S. Mail: GTRI/RAIL/MAD/EMB 0800, Georgia Tech, Atlanta, GA 30332
uucp:	   ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!mb33
Internet:  mb33 at prism.gatech.edu

-----------------------------

From: urlichs at smurf.ira.uka.de
Subject: Re: Robust Mounts
Date: 16 Jan 90 21:35:18 GMT
To:       info-unix at sem.brl.mil

In comp.unix.questions deke at ee.rochester.edu (Dikran Kassabian) writes:
< In article <1376 at smurf.ira.uka.de> urlichs at smurf.ira.uka.de writes:
< > In article <10284 at zodiac.ADS.COM> mliverig at spark.uucp writes:
< > >
< > >1) Soft mounts of read-write file systems would increase the risk of
< > >corrupting the file systems.
< > 
< >How?
< 
< In the same way a system crash can result in corruption of local disks.
< Summary information can be inaccurate depending on exactly when the crash
< takes place, as it relates to pending disk writes.
< 
< An NFS rw,hard mount is a win in this case...  the process on the NFS client
< hangs until the NFS mount becomes available again, and so gets to continue.
< Not that this guarantees you a clean file system, but I believe that your
< chances are lots better.
< 
Well, I fail to see why, given the following sequence of events
- client sends NFS request
- server (partially or completely) processes the request
- server crashes

either one of the following events
- client times out, user program gets error
or
- client hangs until server is back, user program continues
or
- client gets disconnected by automount until server is back, user program
  gets error

could possibly have any impact on the probability that
- server disk needs to be fsck'd, probably dropping some files
or that
- buffer was not written on server, causing inconsistent database although the
  client got an OK return from NFS.

A hard NFS mount obviously improves your chances if
- server crashes but managed to write its buffers, but
- client was doing things which left the database inconsistent.

In this case a hard mount is obviously helpful, and either a soft mount or
a client disconnected by an automount daemon would cause problems, because
the client has no way to get the database back into a consistent state.

I hope I'm not missing anything here.
< 
< BUT:
< 
< My preferred solution would be to use SunOS automount(8) or Jan-Simon Pendry's
< 'amd'.  I'm still hoping someone will comment on my question, which
< asked about automounter, and why it might be considered 'not yet safe'.
< 
My understanding of an automount daemon is:
- It periodically tests if the server is still there.
- If not for N seconds, the server is unmounted. This has the same effect as a
  soft mount in that the client, trying to read or write a file, gets an error.
- Any request to the server returns an error immediately until the server is
  back online, in which case
- the automounter reconnects the client to the server.

Anyone more knowledgeable enlighten me in case I'm wrong, please.
-- 
Matthias Urlichs

-----------------------------

From: Andy Wai <accwai at maytag.waterloo.edu>
Subject: timed question
Date: 16 Jan 90 22:36:59 GMT
To:       info-unix at sem.brl.mil

Could somebody tell me what the "-i" and "-n" flags on timed really do?
The man page doesn't tell me anything concrete, and the "Timed Installation
and Operation Guide" isn't much better.

Better yet, could somebody tell me the proper command line to start a
submaster daemon that will propagate the time to a small subnet but won't
try to take control of the backbone network should the master on the
backbone die.

Thanks in advance.

Andy Wai

-----------------------------

From: brnstnd at stealth.acf.nyu.edu
Subject: Re: How to do a non-blocking write of more than one char?
Date: 16 Jan 90 23:08:35 GMT
To:       info-unix at sem.brl.mil

I'm trying to write a ``multitee'' program so that, e.g.,

  multitee 0:6,7 6:1

would send all input from fd 0 to fd 6 and fd 7, while sending all input
from fd 6 to fd 1. It's a trivial problem, except that multitee should
do its best to never block on writes. (Otherwise it could enter a deadlock
with another process.) Buffering is easy, but how to avoid blocking?

One correct answer is to always write just one character per write()
call. Unfortunately, this usually forces a hellish overhead.

Another answer is to use fcntl() and set FNDELAY on the descriptor.
Unfortunately, this doesn't just affect the descriptor; it affects the
entire open file, including possibly other processes. (This is the real
problem.)

Another answer is that multitee should fork into separate processes,
one for each input descriptor. This works and solves the flow control
problems, but it's not very polite to other processes unless the system
supports threads.

In article <2816 at auspex.auspex.com> guy at auspex.auspex.com (Guy Harris) writes:
> >C'mon, guys, this is a simple question!
> And it may have a simple answer like "sorry, you can't do it".

I guess it's a good question, then...

---Dan

-----------------------------

From: Scot Mcintosh <psm at manta.nosc.mil>
Subject: h files that include h files
Date: 16 Jan 90 23:26:44 GMT
To:       info-unix at sem.brl.mil

In a makefile, how should one handle the case where some
of the .h files include other .h files?  I see two 
possibilities: make dependencies for them, and 'touch'
the dependent ones, thus causing the .c files that depend on
them to be made.  Or, examine all of the .h files, figure out
the nesting and then put all of the pertinent ones in
the .c file's dependency.

Neither of these is particularly appealing.  How does
the Unix world handle this kind of thing? (If you haven't
guessed, I'm not that experienced with Unix yet).  Thanks.

-- 
----
Scot McIntosh
Internet: psm at helios.nosc.mil
UUCP:     {ihnp4,akgua,decvax,decwest,ucbvax}!sdscvax!nosc!psm

-----------------------------

From: "Jonathan I. Kamens" <jik at athena.mit.edu>
Subject: Re: h files that include h files
Date: 17 Jan 90 00:11:11 GMT
Sender: News system <news at athena.mit.edu>
To:       info-unix at sem.brl.mil

In article <997 at manta.NOSC.MIL>, psm at manta.NOSC.MIL (Scot Mcintosh) writes:
> In a makefile, how should one handle the case where some
> of the .h files include other .h files?  I see two 
> possibilities: make dependencies for them, and 'touch'
> the dependent ones, thus causing the .c files that depend on
> them to be made.  Or, examine all of the .h files, figure out
> the nesting and then put all of the pertinent ones in
> the .c file's dependency.

  Various shell scripts and binary programs have been written to do
automatically what you described in your second possibility (but the
dependency would be associated with the .o file, not the .c file).

  Typically, they use either the -E or -M flag of the compiler.  The -E
flag, on compilers that have it (most of them), outputs the
pre-processed text that would be compiled, without actually compiling
it.  Interspersed in this text are various lines like

  # 1 "main.c"
  # 1 "./xsaver.h"
  # 1 "/usr/include/X11/Intrinsic.h"

which the compiler uses later to attribute line numbers and file names
(e.g. in diagnostics and debugging symbol tables).  These lines can be
massaged with sed et al to generate a dependency list suitable for
inclusion in a makefile.

  The -M flag, on compilers that have it (fewer than have the -E flag,
I think), actually outputs a dependency list.

  The shell script versions tend to be slow -- a binary dedicated to
figuring out dependencies by reading the preprocessor directives tends
to do it faster.  Such a binary (called "makedepend") is released as
part of the standard X window system distribution -- if you have X at
your site, then you have the binary (or its sources) somewhere, or you
can ftp it from an X archive site.

Jonathan Kamens			              USnail:
MIT Project Athena				11 Ashford Terrace
jik at Athena.MIT.EDU				Allston, MA  02134
Office: 617-253-8495			      Home: 617-782-0710

-----------------------------

From: Kevin O'Gorman <kevin at kosman.uucp>
Subject: xDBM sources
Date: 16 Jan 90 23:48:09 GMT
Followup-To: comp.unix.questions
To:       info-unix at sem.brl.mil

I see references to dbm, ndbm and mdbm from time to time.  I even see
that some packages that I use (like PERL, netnews) say they work more
or better with these things.

I can guess what they are.  I don't know what the differences are among
them.

Can anyone mail, or send a pointer to, a set of sources?  Or let me know
the (happiware, guiltware, commercial, GNU, etc) status of them?
-- 
Kevin O'Gorman ( kevin at kosman.UUCP, kevin%kosman.uucp at nrc.com )
voice: 805-984-8042 Vital Computer Systems, 5115 Beachcomber, Oxnard, CA  93035
Non-Disclaimer: my boss is me, and he stands behind everything I say.

-----------------------------

From: "Ronald S. Woan/2113674" <ron at woan.austin.ibm.com>
Subject: Re: xDBM sources
Date: 18 Jan 90 00:03:22 GMT
Sender: news at awdprime.uucp
To:       info-unix at sem.brl.mil

In article <1068 at kosman.UUCP>, kevin at kosman.UUCP (Kevin O'Gorman) writes:
|>I see references to dbm, ndbm and mdbm from time to time.  
|>I can guess what they are.  I don't know what the differences are among
|>them.

In general dbm belongs to the BSD world with ndbm introduced in
4.3BSD. ndbm differs from dbm in that multiple database files can be
simultaneously open. The other ?dbm packages provide compatible
functions for those without ndbm. I believe the GNU version is sdbm.
Another version was just posted to one of the source groups
(alt.sources).

					Ron

+-----All Views Expressed Are My Own And Are Not Necessarily Shared By------+
+------------------------------My Employer----------------------------------+
+ Ronald S. Woan  (IBM VNET)WOAN AT AUSTIN, (AUSTIN)ron at woan.austin.ibm.com +
+ outside of IBM       @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron +

-----------------------------

From: Jim O'Connor <jim at tiamat.fsc.com>
Subject: getting the system's domain name in a C program
Date: 17 Jan 90 01:03:20 GMT
To:       info-unix at sem.brl.mil

Is there a system call or library function that is supposed to return the
name of the current host, complete with the domain attached?

I'm a little confused, since when I set up TCP/IP on a Xenix machine, the
installation scripts created the command "hostname tiamat.fsc.com" in the
TCP/IP startup file, so using "hostname" or "gethostname()" returns the
whole thing, and I can always use "uname" to get just the host name
without the domain part.

However, when I installed TCP/IP on our HP 9000 Model 815 machine, the
installation created the command "hostname guinan" in the TCP/IP startup
file.  So, on guinan, I don't get the domain name with "gethostname".

Which way is "right"?  Is there a "right" way?
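For what it's worth, the two commands can be compared directly from the
shell; the output depends entirely on what string the startup script
handed to hostname at boot (the outputs shown are the examples from this
posting, not guaranteed):

```shell
# Compare what each command reports:
hostname    # e.g. "tiamat.fsc.com" on the Xenix box, just "guinan" on the HP
uname -n    # the nodename field from uname(2), often the same string
```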

Thanks for any help.
 ------------- 
James B. O'Connor			jim at tiamat.fsc.com
Ahlstrom Filtration, Inc.		615/821-4022 x. 651

*** Altos users unite! mail to "info-altos-request at tiamat.fsc.com" ***

-----------------------------

From: brunger at venus.ycc.yale.edu
Subject: csh variable manipulation
Keywords: csh variables
Date: 17 Jan 90 03:36:21 GMT
To:       info-unix at sem.brl.mil

i would like to strip part of a path from a file name in a csh script.
the portion of the path i would like to strip is stored in a variable.

ex. WPATH = /user/tmp/ctree/orig
    FILEN = /user/tmp/ctree/orig/src/dix/newcode.c

i would like to strip the path in WPATH from FILEN leaving

src/dix/newcode.c

please respond to m at jacobi.biology.yale.edu
thanks in advance.
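One approach, sketched below, is to let sed do the stripping; it assumes
the paths contain no '|' characters, since that is used as the sed
delimiter.  In csh you would capture the pipeline with backquotes, e.g.
set SHORT = `echo $FILEN | sed "s|^$WPATH/||"`; the sh equivalent shows
the same substitution:

```shell
# Strip the $WPATH prefix from $FILEN with sed; '|' is the delimiter
# because the patterns themselves contain slashes.
WPATH=/user/tmp/ctree/orig
FILEN=/user/tmp/ctree/orig/src/dix/newcode.c
echo "$FILEN" | sed "s|^$WPATH/||"    # prints: src/dix/newcode.c
```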

-----------------------------

From: Spencer Garrett <srg at quick.com>
Subject: Re: #! troubles
Keywords: exec, kernel, environment
Date: 17 Jan 90 04:43:45 GMT
To:       info-unix at sem.brl.mil

In article <1990Jan15.215617.9659 at i88.isc.com>, daveb at i88.isc.com (Dave Burton) writes:
> In article <2047 at uvaarpa.virginia.edu> worley at compass.com writes:
> >> Assuming the kernel gets your $PATH (which it doesn't, but pretend) -
> >
> The kernel does not get _any_ environment variables. Sure, it passes an
> environment pointer to the exec'd process, but this does not imply the
> environment is scanned. That would be _very_ expensive.

Au contraire.  The environment strings are *copied* to the child process
in the same manner as the argv strings.  The kernel could easily scan
for a PATH variable.  The main argument against this is that it's the
sort of feeping creaturism for which Berkeley has been long and loudly
chastized, though given that #! interpretation got moved in I don't
see this as an inappropriate adjunct.  I think it got left out mostly
because coding this sort of thing at the kernel level is a mess.

-----------------------------

From: Dave Burton <daveb at i88.isc.com>
Subject: Re: #! troubles
Keywords: exec, kernel, environment
Date: 17 Jan 90 17:22:18 GMT
Sender: Steve Alexander <stevea at i88.isc.com>
To:       info-unix at sem.brl.mil

In article <7661 at quick.COM> srg at quick.COM (Spencer Garrett) writes:
|In article <1990Jan15.215617.9659 at i88.isc.com>, daveb at i88.isc.com (Dave Burton) writes:
|> >> Assuming the kernel gets your $PATH (which it doesn't, but pretend) -
|> The kernel does not get _any_ environment variables. Sure, it passes an
|> environment pointer to the exec'd process, but this does not imply the
|> environment is scanned. That would be _very_ expensive.
|
|Au contraire.  The environment strings are *copied* to the child process
|in the same manner as the argv strings.  The kernel could easily scan
|for a PATH variable.  The main argument against this is that it's the
|sort of feeping creaturism for which Berkeley has been long and loudly
|chastized, though given that #! interpretation got moved in I don't
|see this as an inappropriate adjunct.  I think it got left out mostly
|because coding this sort of thing at the kernel level is a mess.

Unfortunately, I caught my gaffe after I posted. Yes, of course the
environment is copied (where would the pointer point? Oops.)

I suppose I don't see this as feeping creaturism so much as simply wrong:

 . The kernel does not examine (better than the word get :-) any environment
  variables, for any case. IMHO, it shouldn't. The kernel should not change
  its behavior due to user environment changes.

 . exec[lv]p(3) were written to do $PATH scanning on exec's. At this time
  there was the opportunity to add $PATH scanning to the kernel, but it
  wasn't done, which suggests that the proper place to do this is user space.
  Making exec[lv]p(3) equivalent to exec[lv](3) and placing $PATH scanning
  in the kernel today would break other programs (e.g. execl("prog","prog",0)
  with $PATH not containing the current directory). Therefore, this would
  have to be a special case for #! execution.

 . This creature is even more special case: given an suid/sgid script, $PATH
  scanning cannot be trusted - trivially easy to spoof. Thus, $PATH scanning
  wouldn't always work. The Principle of Least Astonishment?

 . The kernel contortions required to support this would be messy, at best
  (good argument for not changing the kernel, huh?).

 . Aside from kernel issues: as stated later in the referenced article (in
  comp.lang.perl), there's no good reason to do this anyway. It helps nobody.
  I believe this is the strongest argument against it.
--
Dave Burton
uunet!ism780c!laidbak!daveb

-----------------------------

From: Gregory Gulik <greg at gagme.uucp>
Subject: csh and signal handling...
Keywords: csh trap onintr
Date: 17 Jan 90 06:01:10 GMT
To:       info-unix at sem.brl.mil


I know that in the standard sh, I can tell it to perform certain
functions upon the receipt of certain signals using the trap command.

I was working on a script in csh on a Harris running some BSD-like
OS, and the closest I came to trap was the onintr command,
but according to TFMP it cancels ALL interrupts.  I don't want
that.  I just want to catch the intr and hup signals.

Has anyone gotten around this?  Why is it implemented in this
way?

I finally found something fairly major that sh can do that csh cannot!!!

-greg


-- 
Gregory A. Gulik
	greg at gagme.UUCP  ||  ...!jolnet!gagme!greg
	||  gulik at depaul.edu

-----------------------------

From: Brian Litzinger <brian at apt.uucp>
Subject: problem with find(1)?
Date: 17 Jan 90 07:09:41 GMT
Followup-To: poster
To:       info-unix at sem.brl.mil

I'm running ISC 386/ix UNIX V.3.2 and have run into the following
problem with find(1):

Find(1) reports the following errors while traversing part of one
of my filesystems: (I'm logged in as root)

# find . -print
[ lotsa of normal find(1) output followed by ... ]
find: stat() failed: ./brian/core: No such file or directory
find: stat() failed: ./leone: No such file or directory
find: stat() failed: ./chet: No such file or directory
find: stat() failed: ./zyrel: No such file or directory
find: stat() failed: ./marcos: No such file or directory
find: stat() failed: ./btest: No such file or directory
find: stat() failed: ./dmyers: No such file or directory

These directories seem alright.  And a command like
# find chet -print
works as expected, however a command like
# find brian -print
works mostly through the directory then reports similar errors
near the end.  The directory tree under brian is by far the
biggest of the user directories.

Is there some limit to the number of entries find(1) can process?

Or has the file system become corrupt in a way that bothers nothing
except find(1)?

<>  Brian Litzinger @ APT Technology Inc., San Jose, CA
<>  UUCP:  {apple,sun,pyramid}!daver!apt!brian    brian at apt.UUCP
<>  VOICE: 408 370 9077      FAX: 408 370 9291

-----------------------------

From: david newall <CCDN at lv.sait.edu.au>
Subject: Re: Shared libraries
Date: 17 Jan 90 16:49:13 GMT
To:       info-unix at sem.brl.mil

In article <32873 at news.Think.COM>, barmar at think.com (Barry Margolin) writes:
> In article <6256 at levels.sait.edu.au> CCDN at levels.sait.edu.au (david newall) writes:
>>UCSD p-System had the first implementation of shared libraries I know of.
>
> Did the p-System predate Multics, which was developed in the mid-60's?

I don't recall when UCSD first did the p-System -- I think it was early
'70s.  I can't even say for certain that p-System version-I had shared
libraries (I know version II did), although I do seem to recall that it
did.  Anyway, it sounds like Multics's shared libraries pre-date UCSD's.


David Newall                     Phone:  +61 8 343 3160
Unix Systems Programmer          Fax:    +61 8 349 6939
Academic Computing Service       E-mail: ccdn at levels.sait.oz.au
SA Institute of Technology       Post:   The Levels, South Australia, 5095

-----------------------------

From: Tin Le <tinle at aimt.uu.net>
Subject: Re: Compress problem
Keywords: compress error file too large
Date: 17 Jan 90 17:41:13 GMT
To:       info-unix at sem.brl.mil

In article <493 at dhump.lakesys.COM>, mort at dhump.lakesys.COM (Mort d`Hump) writes:
] I am having trouble using compress to de-compress large .Z files.
] 
] The operation terminates with a "File too large" error.
] 
] I have a PLEXUS P-20 running vanilla Sys VR2 w/ 2Meg of RAM and over
] 25 Meg of free space on the file system I am working in.
] 
] The results of compress -V:
] Based on compress.c,v 4.0 85/07/30 12:50:00 joe Release
] Options: BITS = 16
] 
] This is the first file I have trouble with:
] mort     users    1059441 Dec 21 21:01 SML3.1.cpio.Z
] 
] Does anyone have any pointers?

The problem is your ulimit is too low.  If you are in sh, check by simply
typing ulimit.  That will give you a number telling you the current ulimit.
My guess is that it's set to the default of 2048 blocks (2MB).  Increase
that and you will be fine.  If you are using csh, you will need to change
to sh (temporarily) while you are uncompressing the file.

One more note: you will need root privilege in order to change your ulimit.
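From sh, that check-and-raise looks roughly like this (the numbers are
illustrative; on System V the unit is 512-byte blocks):

```shell
# Print the current per-process file-size limit:
ulimit
# Request a larger cap (roughly 1 GB here); on vanilla SysV only the
# superuser may raise the limit, though any user may lower it:
ulimit 2000000
```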

-- Tin Le

-- 
Tin Le                    |  UUCP: {wyse, claris, uunet}!aimt!tinle
Sr. Software Engineer     |  Internet: tinle at aimt.uu.net
AIM Technology            | XBBS (408)-739-1520  19.2K Telebit+
Santa Clara, CA 95054     | "'tis an ill wind that blows no mind..."

-----------------------------

From: "Adam W. Feigin" <adam at ncifcrf.gov>
Subject: Printcap & filters for Genicom 3180-3404 Series Printers
Keywords: Genicom 3404, /etc/printcap, filters
Date: 17 Jan 90 18:04:29 GMT
Followup-To: adam at ncifcrf.gov
To:       info-unix at sem.brl.mil

We've got an old Genicom 3404 line printer that I'd like to hook to
one of our Suns. Anybody got a printcap entry for this beastie, and
any associated filters I might need ???

The Genicom 3404 is in the 3180 series of printers, but has multicolor
printing. I dont really need anything fancy, just something that'll at
least let me use the thing as a dumb line printer for the short
term...I'll hack something up later for the good stuff....

Thanks in advance.

							AWF

-- 
Internet: adam at ncifcrf.gov			Adam W. Feigin
UUCP: {backbonz}!ncifcrf!adam			Senior Systems Manager
Mail: P.O. Box B, Bldg 430	National Cancer Institute-Supercomputer Center
      Frederick, MD 21701		Frederick Cancer Research Facility

-----------------------------

From: Dan Jacobson <danj1 at cbnewse.att.com>
Subject: Hex input to awk
Date: 17 Jan 90 19:59:57 GMT
To:       info-unix at sem.brl.mil

What's the easy way to read in hexadecimal numbers in [n]awk?  There's no
scanf function.  Outputting them is no problem (printf, OFMT).  But what
if you have an input file like:
Burgers	04ff
Wax	2d3f
I hope I don't have to write a function.
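Sadly a small function seems to be the portable answer, since classic awk
and nawk have no scanf or strtonum.  A sketch (tolower() is available in
nawk and gawk):

```shell
# Convert a hex field to decimal one digit at a time.
printf 'Burgers\t04ff\nWax\t2d3f\n' |
awk 'function hex2dec(s,   i, n, d) {
         n = 0
         s = tolower(s)
         for (i = 1; i <= length(s); i++) {
             d = index("0123456789abcdef", substr(s, i, 1)) - 1
             n = n * 16 + d
         }
         return n
     }
     { print $1, hex2dec($2) }'
# prints:
# Burgers 1279
# Wax 11583
```

Modern gawk also offers strtonum() for exactly this, where available.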
-- 
Dan Jacobson +1-708-979-6364 danj1 at ihlpa.ATT.COM

-----------------------------

From: 4277_5067 at uwovax.uwo.ca
Subject: Wanted: Info. on zic
Date: 17 Jan 90 21:02:12 GMT
To:       info-unix at sem.brl.mil


 I have been trying to prepare a brief report on the 'zic' (Time Zone
Compiler) tool, for my class (I am a student too!). Unfortunately, I
couldn't find sufficient technical information, such as how it works,
what kind of results it generates, etc., on zic.

 If anybody has something to say about zic, I would greatly appreciate
it.


							Thanks in advance.



		Miky_The_Dragon ruling....

-----------------------------

From: Mike McManus <mikemc at mustang.ncr-fc.ftcollins.ncr.com>
Subject: Passing variables to gawk
Date: 17 Jan 90 21:12:46 GMT
Sender: news at ncr-fc.ftcollins.ncr.com
To:       info-unix at sem.brl.mil


I want to pass variables to a gawk program via the command line.  According to
the man page, all you need do is call gawk with <var>=<value> instead of file
names:

gawk -f foo.awk a=1 b=2 c=3 infile > outfile

and the variables a,b,c will be free to use in foo.awk (the gawk script), and
will be assigned the values 1,2,3.  I can't seem to get this to work!  When I
print the value of the variables inside a BEGIN{} block in foo.awk, they are all
set to 0.  Anybody have any experience with this that they'd like to share?
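If I'm reading the behavior right, this is the classic assignment-timing
pitfall rather than a bug: var=value operands are processed only when
[g]awk reaches them in the file list, i.e. after the BEGIN block has
already run, so inside BEGIN they are still unset.  The -v option
(accepted by gawk and POSIX awk) assigns before BEGIN:

```shell
# -v assigns before BEGIN runs:
awk -v a=1 'BEGIN { print "a is", a }' </dev/null      # prints: a is 1
# a file-list assignment has not happened yet when BEGIN runs:
awk 'BEGIN { print "a is", a+0 }' a=1 </dev/null       # prints: a is 0
```

Outside BEGIN, in the main pattern-action rules, the a=1 form does work,
once input processing has passed the assignment.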

Thanks!
--
Disclaimer: All spelling and/or grammer in this document are guaranteed to be
            correct; any exseptions is the is wurk uv intter-net deemuns.

Mike McManus (mikemc at ncr-fc.FtCollins.ncr.com)  
NCR Microelectronics                
2001 Danfield Ct.                   ncr-fc!mikemc at ncr-sd.sandiego.ncr.com, or
Ft. Collins,  Colorado              ncr-fc!mikemc at ccncsu.colostate.edu, or
(303) 223-5100   Ext. 360           uunet!ncrlnk!ncr-sd!ncr-fc!garage!mikemc
                                    

-----------------------------

From: Mike Macgirvin <mike at relgyro.stanford.edu>
Subject: panic: sys pt too small
Date: 17 Jan 90 21:57:37 GMT
Sender: news at helens.stanford.edu
To:       info-unix at sem.brl.mil


	I am adding kernel support for a Xylogics 753 controller. Booting
the new kernel results in the message "sys pt too small", and a panic reboot.
I have object license only, no OS source.

	Question: What causes a "sys pt too small" panic? i.e. What is a "pt"?
Can I enlarge it through a config change (without OS source license) ? Or, do
I have to remove some devices? Any particular kind of devices?

	The displayed string was tracked down to "/usr/sys/OBJ/machdep.o".

	Please respond via e-mail if possible. I will summarize if desired.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+  Mike Macgirvin              Relativity Gyroscope Experiment (GP-B)    +
+  mike at relgyro.stanford.edu   (36.64.0.50)                              +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

-----------------------------

From: Rick Peralta <peralta at pinocchio.encore.com>
Subject: size of a file
Date: 18 Jan 90 00:46:32 GMT
Sender: news at encore.com
To:       info-unix at sem.brl.mil

In most implementations a pointer to the offset in the file is maintained.
The file (and filesystem) cannot exceed the limit of the int.  So, how
can a very large file be accommodated?  On a 16-bit machine this must have
been a critical problem.  Does anyone know how it was overcome?


 - Rick "Maybe compiler voodoo, floating point hacks, secret math libraries..."

-----------------------------

From: Brian Litzinger <brian at apt.uucp>
Subject: SUMMARY: problem with find(1)
Date: 18 Jan 90 02:50:42 GMT
Followup-To: poster
To:       info-unix at sem.brl.mil

In an earlier posting I asked for help with a problem I
was having with find(1).

The problem was that find(1) would stop partway through the
file system and report the same error from that point on.
The file system worked correctly in all respects except for
the find(1) problem.

Several people responded that there must be something wrong
with the file system.

In the course of another, unrelated discussion, the cause of my
problem was identified.

>In article <1990Jan16.204355.13792 at ux1.cso.uiuc.edu> bronsard at m.cs.uiuc.edu.UUCP (Francois Bronsard) writes:
>>Now my question is : "How dangerous is it really?"
>
>It's very dangerous, because when you rmdir one of the links
>the other will no longer find . and .. entries in the directory.
>
>Don't do it.

[ 'it' refers to using /etc/link ]

I had used /etc/link to link a directory from one account to another.
The destination link has long since been erased; however, it broke the
'.' and '..' entries in that directory, which broke find(1).

I have removed the other half of the /etc/link'ed directory and the
file system is all better.

Thanks for the help.

<>  Brian Litzinger @ APT Technology Inc., San Jose, CA
<>  UUCP:  {apple,sun,pyramid}!daver!apt!brian    brian at apt.UUCP
<>  VOICE: 408 370 9077      FAX: 408 370 9291

-----------------------------

From: Richard Todd Wall <wall-rt at cscosl.ncsu.edu>
Subject: Unix Operating System on a PC XT.
Date: 18 Jan 90 03:57:07 GMT
To:       info-unix at sem.brl.mil


     I am currently interested in running a unix operating system on
my IBM PC XT clone, and I would like some input from any fellow unix
IBMers as to the best one to get.  
     Here are some things I would like to have access to...

       1)  uucp with the ability to get news and send mail to other
           unix machines.
       2)  Have multi-user, multi-processing capabilities.
       3)  Run a terminal off my serial port.
       4)  I need one with a C compiler that can handle large programs
           like NETHACK.
       5)  And one that will boot off of floppies and then only access
           my second hard drive.  i.e.  I want to still be able to run
           MSDOS and not lose my data.

Any help with this will be greatly appreciated.  Thanks again.

                            -> Todd Wall <-

wall-rt at cscosl.ncsu.edu

-----------------------------

From: brunger at venus.ycc.yale.edu
Subject: How does a process use memory???
Date: 18 Jan 90 04:31:19 GMT
To:       info-unix at sem.brl.mil


I am looking for a description of how memory is used by a process in a Unix
system. Is there anything I can count on across processors?

Background:

We have a very large program written in FORTRAN and C. The program performs
very long simulations (up to 1 week of Cray time). During a run it may perform
thousands of mallocs and frees. Most of the time this allocated space is tied
to a FORTRAN array by calculating the array index for the given address.
I am trying to provide a mechanism that will produce a map of the process's
memory at a given time, showing our allocated space in relation to the code,
data (the FORTRAN array in particular), stack, etc.  Some systems seem
to do a good job of reclaiming freed space while others do not. I have assumed
that the systems programmers used the optimum algorithm for their architecture,
but I would like to verify this while learning something myself.

please respond to:

m at jacobi.biology.yale.edu

Mark McCallum

Thanks in advance.

Also thank you for the useful replies to a previous question on csh variable
manipulation.

-----------------------------

From: Marc Lesure <system at asuvax.asu.edu>
Subject: Shadow passwds break programs with pw_stayopen?
Keywords: shadow passwds pwd
Date: 18 Jan 90 06:32:34 GMT
To:       info-unix at sem.brl.mil

I just installed the shadow password code from Berkeley on one of our
VAX 780s running 4.3-tahoe.  When I did, the programs quot, finger,
find, and others broke.  The programs would core dump, and re-compiling
them showed that the common fault was an unresolved reference
to pw_stayopen.  Looking at the sources on a 4.3-tahoe system both with and
without the shadow passwords, I've been unable to find pw_stayopen on
either.  My questions are: where is pw_stayopen defined, and why
did shadow passwords eliminate it?

 -----------------------------------------------------------------------
Marc Lesure / Arizona State University / Tempe, AZ
"Between the world of men and make-believe, I can be found..."
"False faces and meaningless chases, I travel alone..."
"And where do you go when you come to the end of your dream?"

UUCP:       ...!ncar!noao!asuvax!lesure  
Internet:   lesure at asuvax.eas.asu.edu

-----------------------------


End of INFO-UNIX Digest
***********************



More information about the Comp.unix.questions mailing list