LZ compression algorithm

Michael M. How OA.HOW at MIT-XX.ARPA
Tue Feb 12 08:35:11 AEST 1985


Judging from recent messages, I see there is some interest in the LZ
compression algorithm. I would like to sum up some of the work I have
done with the algorithm and exchange some ideas about implementation.

Back in Oct. 1983, Miller and Wegman from IBM Yorktown gave a talk
about their extensions to the LZ algorithm. They proposed an LRU
replacement scheme to handle dictionary overflow. They also devised a
string extension version of the algorithm: to form the new string to
insert in the dictionary, concatenate the previous dictionary match
with the match just found. This allows much faster buildup of
dictionary entries. The most interesting part of their work was
applying this to terminal sessions. The algorithm sat between an IBM PC
and a CMS system. The results they claim are 4 to 8:1 compression, so
your 1200 baud sessions look like 9600. I investigated applying this to
a character-oriented system, UNIX in particular. The results, of
course, are good.
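To make the string extension idea concrete, here is a rough sketch of the
compressor loop as I understand it. This is my own toy code, not theirs:
the fixed-size table and linear search are just placeholders for a real
hashed dictionary with the LRU replacement they described, and the output
codes are simply printed rather than packed into bits.

	/*
	 * Toy sketch of string-extension LZ: the dictionary starts with
	 * the 256 single characters; after each match, the previous match
	 * concatenated with the current match is inserted as a new entry.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define MAXDICT 4096            /* hard cap instead of LRU replacement */

	static char *dict[MAXDICT];     /* dictionary strings */
	static int dictlen[MAXDICT];    /* their lengths */
	static int ndict;

	static void
	insert(const char *s, int len)
	{
		if (ndict >= MAXDICT || len <= 0)
			return;         /* a real version would recycle an LRU slot */
		dict[ndict] = malloc(len);
		memcpy(dict[ndict], s, len);
		dictlen[ndict] = len;
		ndict++;
	}

	/* index of the longest dictionary entry that is a prefix of in[0..inlen) */
	static int
	match(const char *in, int inlen)
	{
		int i, best = -1, bestlen = 0;

		for (i = 0; i < ndict; i++)
			if (dictlen[i] <= inlen && dictlen[i] > bestlen &&
			    memcmp(dict[i], in, dictlen[i]) == 0) {
				best = i;
				bestlen = dictlen[i];
			}
		return best;
	}

	int
	main(void)
	{
		char buf[8192], c;
		int n, pos, i, prev = -1;

		/* prime the dictionary with all single characters */
		for (i = 0; i < 256; i++) {
			c = i;
			insert(&c, 1);
		}

		n = fread(buf, 1, sizeof buf, stdin);
		for (pos = 0; pos < n; ) {
			int m = match(buf + pos, n - pos);

			printf("%d\n", m);      /* emit the code for this match */

			/* string extension: insert previous match + this match */
			if (prev >= 0) {
				char tmp[512];
				int tlen = dictlen[prev] + dictlen[m];

				if (tlen <= (int)sizeof tmp) {
					memcpy(tmp, dict[prev], dictlen[prev]);
					memcpy(tmp + dictlen[prev], dict[m], dictlen[m]);
					insert(tmp, tlen);
				}
			}
			prev = m;
			pos += dictlen[m];
		}
		return 0;
	}

The point to notice is that each new entry is the concatenation of two
previous matches, so entry lengths can roughly double each time, which is
why the dictionary builds up so much faster than with one-character
extension.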

What I am interested in is finding other implementations of LZ compression
and seeing how people handle dictionary overflow and matching. I would also
be interested in performance figures. The IBM implementation running on a
3081 compressed 1 million characters of English text in less than 20
seconds. My version has compressed at about 1000 chars/sec on a 90K file on
a VAX/750. In using this for terminal sessions, I would like to improve
the speed on the VAX (the decompression side on the PC just rips).
					Thank you,
					Michael How
-------
