Shared Memory -- Parallel filters and piping -- Examples Needed

Root Boy Jim rbj at icst-cmr.arpa
Thu Feb 18 06:26:42 AEST 1988


   From: "John S. Robinson" <jsrobin at eneevax.uucp>

   If a programmer has filters filt1, filt2, ... filtn which he wishes to
   apply serially to a stream of data, this is accomplished trivially with
   a sequence of pipes:
	   filt1 < <stream> | filt2 | filt3 | ... | filtn > <sink>

   How does one handle the case where some of the above filters are to be
   applied in parallel and then be recombined:

It depends on what you mean by `recombined'. Do you want the output of the
parallel filters in order, or do you want them to run asynchronously,
mixing their output? BTW, you have two `filt5's in your diagram.

			      filt3
			     /	    \
			    /	     \
   filt1 < <stream> |filt2 /__filt4___+ filt5 | filt6 | ... filtn > <sink> .
			   \	     /
			    \	    /
			     \filt5/

My diagram would look something like this:

   f1 < <stream> | f2 | parallel 'f3' 'f4 -opts' 'f5a' | f5b ...

The program `parallel' forks once for each of its arguments and execs them
(one filter, f4, is shown with options/arguments) exactly as shown. However,
before it can do this, the parent must establish a pipe to each of its
children. The parent thus performs the function of `tee', except that
it is writing to processes rather than to files.
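
Here's a rough sketch of such a `parallel' (untested, and assuming each
argument is one complete command that can be handed to /bin/sh -c; the
children inherit parallel's stdout, so their output comes out intermingled):

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        int main(int argc, char **argv)
        {
            int i, j, n = argc - 1;
            int *wfd;                   /* write end of each child's pipe */
            char buf[BUFSIZ];
            ssize_t len;

            if (n < 1) {
                fprintf(stderr, "usage: parallel cmd ...\n");
                exit(1);
            }
            wfd = malloc(n * sizeof(int));
            for (i = 0; i < n; i++) {
                int fd[2];

                if (pipe(fd) < 0) { perror("pipe"); exit(1); }
                switch (fork()) {
                case -1:
                    perror("fork"); exit(1);
                case 0:                 /* child: read end becomes stdin */
                    dup2(fd[0], 0);
                    close(fd[0]);
                    close(fd[1]);
                    for (j = 0; j < i; j++)  /* drop earlier write ends */
                        close(wfd[j]);       /* so siblings can see EOF */
                    execl("/bin/sh", "sh", "-c", argv[i + 1], (char *)0);
                    perror("execl");
                    _exit(1);
                default:                /* parent keeps the write end */
                    close(fd[0]);
                    wfd[i] = fd[1];
                }
            }
            /* now act as a tee: copy stdin to every child
               (partial writes ignored for brevity) */
            while ((len = read(0, buf, sizeof buf)) > 0)
                for (i = 0; i < n; i++)
                    write(wfd[i], buf, len);
            for (i = 0; i < n; i++)
                close(wfd[i]);          /* children now see EOF */
            while (wait((int *)0) > 0)  /* reap them all */
                ;
            return 0;
        }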

   Note that in general the data stream will be too long to be stored and
   then have the various filters applied.

Some versions of unix will buffer pipe data to actual files, some won't.
I am not clear on which ones do or don't, altho to hazard a guess, I think
System V does and BSD doesn't. Someone please enlighten me.

I don't see why you would have to buffer the data unless you care about
collecting the filters' output in order rather than intermingled.
To collect the output in order, you need an extra process at the end
to order the output correctly. Note that if the OS will not buffer the
pipes on disk, you don't get much parallelism, and may get deadlock.
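
If instead each parallel filter is pointed at its own named pipe or
temporary file, the extra process at the end can be as dumb as this
(untested; the file names are whatever the plumbing set up):

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <fcntl.h>

        int main(int argc, char **argv)
        {
            char buf[BUFSIZ];
            ssize_t len;
            int i, fd;

            for (i = 1; i < argc; i++) {  /* argument order = output order */
                if ((fd = open(argv[i], O_RDONLY)) < 0) {
                    perror(argv[i]);
                    return 1;
                }
                while ((len = read(fd, buf, sizeof buf)) > 0)
                    write(1, buf, len);
                close(fd);
            }
            return 0;
        }

Note that while this drains the first pipe, the other filters can fill
theirs and block, which is exactly the deadlock mentioned above; with
temporary files there is no such problem.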

   Secondly, each of the 'filtk' programs will be programmed to accept data
   from stdin and will send data to stdout (with the exception of 'filt5',
   which would probably be getting its data from shared memory).

No problem. Just set up the plumbing correctly.

   The type of solution I am looking for would be a 'control program' which
   would be given a sequence of programs to be executed as child processes. The
   control program would set up the shared memory segments and the appropriate
   unnamed or named pipes that would feed the various parallel filters. The
   algorithms must operate online and asynchronously, and hence access to
   shared memory must have some sort of handshaking so that after filt[345]
   are through using a buffer, filt2 can be allowed to refill it.
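
On a System V flavored box, that handshake might look something like the
following rough, untested sketch, showing only filt2's (the producer's)
side; the IPC key, buffer size, and one-line-per-buffer protocol are all
invented for illustration:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <sys/sem.h>

        #define KEY      ((key_t)0x4a53) /* made-up IPC key */
        #define BUFSZ    4096
        #define NREADERS 3               /* filt3, filt4, filt5 */
        #define FULL     0               /* sem 0: buffer holds fresh data  */
        #define DONE     1               /* sem 1: readers finished with it */

        union semun { int val; struct semid_ds *buf; unsigned short *array; };

        static void handshake(int semid, int which, int by)
        {
            struct sembuf op;

            op.sem_num = which;
            op.sem_op = by;              /* negative waits, positive signals */
            op.sem_flg = 0;
            if (semop(semid, &op, 1) < 0) { perror("semop"); exit(1); }
        }

        int main(void)
        {
            int shmid = shmget(KEY, BUFSZ, IPC_CREAT | 0600);
            int semid = semget(KEY, 2, IPC_CREAT | 0600);
            char *buf, line[BUFSZ];
            union semun arg;

            if (shmid < 0 || semid < 0) { perror("ipcget"); exit(1); }
            buf = (char *)shmat(shmid, (char *)0, 0);

            arg.val = NREADERS;          /* let the first fill go through */
            semctl(semid, DONE, SETVAL, arg);
            arg.val = 0;
            semctl(semid, FULL, SETVAL, arg);

            while (fgets(line, sizeof line, stdin) != NULL) {
                handshake(semid, DONE, -NREADERS); /* all readers through? */
                strcpy(buf, line);                 /* refill the buffer */
                handshake(semid, FULL, NREADERS);  /* wake every reader */
            }
            /* each reader's loop is the mirror image:
               handshake(semid, FULL, -1); use buf; handshake(semid, DONE, 1); */
            shmdt(buf);
            return 0;
        }

(Remember that the segment and semaphores outlive the processes; clean up
with ipcrm.)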

The other thing to do is just bite the bullet and embed the thing in a shell
script, using temporary files to distribute the input and collect and/or
order the output. It's not such a big deal; programs like sort use them
all the time, even tho they look like (and behave as) filters.
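
For concreteness, here is that approach sketched in C rather than sh
(untested; the filter names and /tmp paths are made up): stage stdin in
a file, run all the filters in parallel under one shell, and cat the
results back in order.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static const char *filters[] = { "f3", "f4 -opts", "f5a" };
        #define NFILT (sizeof filters / sizeof filters[0])

        int main(void)
        {
            char script[1024], cmd[256];
            size_t i;
            int c;
            FILE *in = fopen("/tmp/stage.in", "w");

            if (in == NULL) { perror("/tmp/stage.in"); return 1; }
            while ((c = getchar()) != EOF)  /* stage the whole stream */
                putc(c, in);
            fclose(in);

            /* one shell runs every filter in the background and then
               waits, so the filters really do run in parallel */
            script[0] = '\0';
            for (i = 0; i < NFILT; i++) {
                sprintf(cmd, "%s < /tmp/stage.in > /tmp/stage.%d & ",
                        filters[i], (int)i);
                strcat(script, cmd);
            }
            strcat(script, "wait");
            system(script);

            for (i = 0; i < NFILT; i++) {   /* collect output in order */
                sprintf(cmd, "cat /tmp/stage.%d", (int)i);
                system(cmd);
            }
            return 0;
        }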

   If anyone has examples of their use of shared memory and/or pipes in a way
   similar to the above please send me a copy. I really need help on this.

   My e-mail address is:

	   jsrobin at eneevax.umd.edu

   The machine internet address is 128.8.133.1

   Thank you for your consideration of this problem.

	(Root Boy) Jim Cottrell	<rbj at icst-cmr.arpa>
	National Bureau of Standards
	Flamer's Hotline: (301) 975-5688
	FEELINGS are cascading over me!!!


