On Mon, Jul 10, 2023 at 10:55:06AM +0200, Adrien Nader wrote:
> There is a little-known but very interesting property of LZMA: its
> decompression speed does not depend on the uncompressed size but only
> on the compressed size. What this means is that if you compress a
> 100MB file down to 20MB, it will take roughly twice as long to
> decompress as if you compress it down to 10MB. In other words, higher
> compression means faster decompression.

This makes a certain amount of sense -- so much of a computer's
operational time is spent waiting for data to arrive from memory into
the processor, refilling cache lines, etc.

You nerd-sniped me into testing a bunch of algorithms on the
firefox_115.0+build2.orig.tar from our archive.

I only ran each test once, and many of them overlapped with another
run, but this system (dual Xeon E5-2630v3) has enough processors and
memory that it probably didn't matter much.
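
For anyone who wants to rerun this, a minimal Python harness along
these lines captures the idea -- the invocations below are my
assumption of a reasonable setup for xz alone, not the exact commands
I timed:

    import subprocess
    import time

    TARBALL = "firefox_115.0+build2.orig.tar"

    def timed(cmd):
        # wall-clock time for one external command
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        return time.monotonic() - start

    for level in (1, 3, 5, 9):
        # -k keeps the input file, -f overwrites stale output
        c = timed(["xz", f"-{level}", "-k", "-f", TARBALL])
        d = timed(["xz", "-d", "-k", "-f", TARBALL + ".xz"])
        print(f"xz -{level}: compress {c:.0f}s, decompress {d:.0f}s")

(gzip, bzip2 and zstd slot in the same way, modulo their flag
differences.)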

Times in seconds, with lower level on the left, higher on the right:

         1   3   5   9
compression:
gzip    39  46  73 211
zstd     8  12  23  54
bzip2  228 237 249 265
lzma   154 294 643 945
xz     159 298 644 945

decompression:
gzip    16  15  15  15
zstd     3   3   3   3
bzip2   68  73  74  75
lzma    41  37  35  33
xz      36  32  31  30

xz of course absolutely dominates the end file sizes:

2989486080  original

 515273416  xz -9
 625958113  zstd -9
 647365812  xz -1
 666820870  zstd -5  (seemed like a sweet spot in the timings)
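
Ratio-wise, that works out like this (quick arithmetic against the
2989486080-byte original):

    orig = 2989486080
    for name, size in [("xz -9",   515273416),
                       ("zstd -9", 625958113),
                       ("xz -1",   647365812),
                       ("zstd -5", 666820870)]:
        print(f"{name}: {size / orig:.1%} of original")

So roughly 17% of the original for xz -9, and 21-22% for the rest.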

Anyway, it's fun to see that gzip and zstd decompress at essentially
the same speed no matter the level, lzma and xz get faster as the
compressed output gets smaller, and bzip2 just gets slower the more it
has to think.

Thanks
