Hi! I did a similar experiment to yours, but using the GCC tarball (I was too lazy to wait for ungoogled-chromium’s tarball), like so:
--8<---------------cut here---------------start------------->8---
$ xz -d < /gnu/store/x043r7crzd0p0p5cfky8r6hwsxknhkk0-gcc-11.2.0.tar.xz | zstd -19 > /tmp/gcc.zst
$ xz -d < /gnu/store/x043r7crzd0p0p5cfky8r6hwsxknhkk0-gcc-11.2.0.tar.xz | gzip -9 > /tmp/gcc.gz
$ du -h /tmp/gcc.{zst,gz}
81M	/tmp/gcc.zst
128M	/tmp/gcc.gz
--8<---------------cut here---------------end--------------->8---

The code (the inner loop is pure decompression, no allocation, no I/O):

--8<---------------cut here---------------start------------->8---
(use-modules (zstd)
             (zlib)
             (rnrs bytevectors)
             (ice-9 binary-ports)
             (ice-9 match)
             (ice-9 time))

;; 4 MiB buffer reused across reads, so the loop allocates nothing.
(define bv (make-bytevector (* 4 (expt 2 20))))

(define (dump port)
  (let loop ()
    (match (get-bytevector-n! port bv 0 (bytevector-length bv))
      ((? eof-object?) #t)
      (n (loop)))))

(pk 'zlib)
(call-with-gzip-input-port (open-input-file "/tmp/gcc.gz")
  (lambda (port)
    (time (dump port))))

(pk 'zst)
(call-with-zstd-input-port (open-input-file "/tmp/gcc.zst")
  (lambda (port)
    (time (dump port))))
--8<---------------cut here---------------end--------------->8---

The result shows that zstd decompression is ~2.7× as fast as gzip decompression (0.80 s vs. 2.15 s of wall-clock time, a ~63% reduction):

--8<---------------cut here---------------start------------->8---
$ guile ~/src/guix-debugging/decompress.scm

;;; (zlib)
clock utime stime cutime cstime gctime
 2.15  2.11  0.03   0.00   0.00   0.00

;;; (zst)
clock utime stime cutime cstime gctime
 0.80  0.77  0.03   0.00   0.00   0.00
--8<---------------cut here---------------end--------------->8---

Are you observing something similar?

Ludo’.
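P.S. For anyone who wants to reproduce the shape of the benchmark without Guile, here is a sketch of the same streaming-dump loop in Python. It is only an analogue, not the script above: it uses the stdlib gzip module on synthetic in-memory data (the zstd side would need the third-party `zstandard` package, so it is omitted here), and the file names and sizes are made up for the example.

```python
import gzip
import io
import time

CHUNK = 4 * 2**20  # 4 MiB read buffer, mirroring the Scheme bytevector


def dump(stream):
    """Read a stream to EOF in fixed-size chunks: pure decompression,
    no I/O to disk. Returns the number of decompressed bytes."""
    total = 0
    while True:
        chunk = stream.read(CHUNK)
        if not chunk:          # b'' signals EOF, like the eof-object
            return total
        total += len(chunk)


# Synthetic, highly compressible payload standing in for a tarball.
data = b"The quick brown fox jumps over the lazy dog.\n" * 100_000
compressed = gzip.compress(data, compresslevel=9)

start = time.perf_counter()
n = dump(gzip.GzipFile(fileobj=io.BytesIO(compressed)))
elapsed = time.perf_counter() - start

print(f"decompressed {n} bytes in {elapsed:.3f}s")
```

Timing real tarballs this way (one `dump` per codec, same input data) should show the same relative gap as the Scheme version, since the loop itself does nothing but decompress.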