30.11.2010 04:40, Julien Laffaye wrote:
>> You can specify limits during compression, so the question is should we do that
>> so that hosts with N MB of RAM can decompress packages?  Do we retain the
>> compression ratio over bzip2 if we limit compression memory to 512 MB so that
>> decompression would be possible with, say, 128 MB?

> According to xz(1), in its default mode (-6), xz uses ~100MiB for
> compression and ~10MiB for decompression.
> That seems to be acceptable.

You may be missing something about how compression and decompression work.

The required memory is not determined by the compression mode alone. When decompressing you need memory for:

1. Data history.
2. Dictionary.
3. Some indexes.

All of those start out empty. So if you compress something really huge while trying to use 4 GB of memory, you only reach that usage somewhere between 2 GB and 3 GB of source data, and roughly 512 MB would then be needed to decompress that chunk of data.

Are the packages _that_ large?

I think that in the worst case we need about double the package size for comfortable decompression. If the lower bound is 64 MB, then a 32 MB package compressed with any compression strategy could be decompressed without hitting swap.
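Peak memory on the decompression side also need not scale with package size if the archive is unpacked as a stream: the decoder holds the dictionary plus one chunk, never the whole package. A minimal sketch with Python's incremental decompressor (the file names and chunk size are made-up examples):

```python
import lzma

def stream_decompress(src_path, dst_path, chunk_size=64 * 1024):
    """Decompress src_path into dst_path while holding only one
    chunk (plus the LZMA dictionary) in memory at a time."""
    dec = lzma.LZMADecompressor()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            dst.write(dec.decompress(chunk))
```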

We'll need someone to do actual testing, but look at this one:

# ls -la ImageMagick-6.6.5-10.tar.xz
-rw-r--r--  1 root  wheel  6316324 Nov 21 23:52 ImageMagick-6.6.5-10.tar.xz

# time xz -dt ImageMagick-6.6.5-10.tar.xz
0.860u 0.036s 0:01.02 87.2%     68+1492k 56+0io 1pf+0w
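The relationship between preset and decompression memory can also be checked without profiling, via the memlimit feature of liblzma (exposed here through Python's lzma module; exact memory figures depend on the liblzma version):

```python
import lzma

data = b"example payload " * 4096

# Preset 9 uses a 64 MiB dictionary, so the decoder needs roughly
# 65 MiB; a 1 MiB memory limit is therefore not enough.
blob = lzma.compress(data, format=lzma.FORMAT_XZ, preset=9)

dec = lzma.LZMADecompressor(memlimit=1 << 20)   # 1 MiB: too small
try:
    dec.decompress(blob)
    print("decompressed within 1 MiB (unexpected)")
except lzma.LZMAError:
    print("1 MiB memlimit exceeded, as expected")

# With a 100 MiB limit the same stream decompresses fine.
dec = lzma.LZMADecompressor(memlimit=100 << 20)
assert dec.decompress(blob) == data
```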

--
Sphinx of black quartz judge my vow.

_______________________________________________
freebsd-ports@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-ports
To unsubscribe, send any mail to "freebsd-ports-unsubscr...@freebsd.org"
