searching for data

William R. Pringle wrp at PRC.Unisys.COM
Thu Apr 18 13:17:55 AEST 1991


In article <18dQ01aK5bwe00 at amdahl.uts.amdahl.com> krs at amdahl.uts.amdahl.com (Kris Stephens [Hail Eris!]) writes:
>In article <cs342a37.671481248 at zaphod> cs342a37 at cs.iastate.edu (Class login) writes:
 >>I am a newcomer to writing shell scripts. I have the following problem:
 >>
 >>I have a data file that I use as a key for searching my Master file. Both files are text files. Each line in the Master file is a record. Both files are sorted by the key. I would like to read a line in the data file for the key, then scan the Master file for the line that contains the key, and append that line to a file.
 >>
 >>I have the following script written:
 >>
 >>cat datafile | ( while read line; do fgrep "$line" masterfile >> outputfile ; done )
 >>
 >>This, however, is very slow: I have about 2,000 keys in my data file and about 10,000 records in my master file, so each key requires scanning all 10,000 lines.
 >>
 >>Can I write a shell script to do the following:
 >>read a line from masterfile
 >>while more keys to read do
 >>  read a line from data file
 >>  while (key from masterfile < line from data file)
 >>    read line from masterfile
 >>  (end while)
 >>  if line from masterfile contains key
 >>    append to output file
 >>  else
 >>    append empty line to output file
 >>  (endif)
 >>(end while)
>
>Here's an awk script that handles it, assuming that your awk has enough
>room to store all the keys (if not, send some mail to me including this
>article and I'll offer an alternative).
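The same key-lookup idea can be sketched in a single awk pass (this is only an illustration, not necessarily the script Kris posted; it assumes each datafile line is a whole key and that the key is the first field of each master record):

```shell
# Load every key from datafile into an array, then print each
# masterfile line whose first field is one of those keys.
# NR == FNR is true only while reading the first file.
awk 'NR == FNR { keys[$0] = 1; next }
     ($1 in keys)' datafile masterfile > outputfile
```

This reads each file exactly once, so it is roughly 2,000 times faster than re-scanning the master file per key.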

You might also want to look at the join command.  If both files are sorted
on the key and the keys don't repeat, you can use join to match each key in
the data file against the master file in a single pass.
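For instance, a minimal sketch using the file names from the original post (it assumes the key is the first whitespace-separated field; the -a option, which also emits unpaired keys much like the "append empty line" branch in the pseudocode, may vary between join implementations):

```shell
# Both inputs must be sorted on the join field (field 1).
sort -o datafile datafile
sort -o masterfile masterfile

# Print each master record whose key appears in datafile;
# -a 1 also emits keys from datafile that have no match.
join -a 1 datafile masterfile > outputfile
```

Since both files are already sorted by key, the sort steps are just a safety net and could be dropped.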

Bill Pringle
wrp at prc.unisys.com



More information about the Comp.unix.shell mailing list