On Thu Feb 20, 2003 at 12:39:14PM -0600, Drew Scott Daniels wrote:
> It's interesting to note that in this case -8 and -9 yield the same
> results, but -9 requires more memory at both compression and
> decompression time. I suspect the memory-usage figures will remain
> constant for any file (they are inherent in the block-sorting
> algorithm), not just the corpus. The compression results will of
> course vary with the input file.
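(A minimal sketch of the point above, using Python's bz2 module rather than the bzip2 command line; the sample data is made up. bzip2's -N flag selects an N * 100 kB block size, so for an input smaller than 800 kB both -8 and -9 fit the whole file in a single block and the block-sorting work is identical.)

```python
import bz2

# Input well under one 800 kB block, so levels 8 and 9 sort the
# same single block and should compress it equally well.
data = b"example data " * 10_000  # roughly 130 kB

out8 = bz2.compress(data, compresslevel=8)
out9 = bz2.compress(data, compresslevel=9)

# The compressed sizes match; only the block-size byte in the
# header differs. Memory use is tied to the block size chosen,
# not to the contents of the input.
print(len(out8), len(out9))
```

The same reasoning is why the memory statistics should hold for any file: the decompressor allocates buffers based on the block size recorded in the header, before it has seen any data.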
I think we should use lzip compression: http://lzip.sourceforge.net/

It uses a constant-time lossy compression algorithm that is guaranteed
to reduce files to 0% of their original size. ;-)

 -Erik
--
Erik B. Andersen             http://codepoet-consulting.com/
--This message was written using 73% post-consumer electrons--