Hi,

On 1/1/21 8:59 PM, Chris Murphy wrote:
On Fri, Jan 1, 2021 at 11:31 AM Artem Tim <ego.corda...@gmail.com> wrote:

It's faster. Here is a benchmark comparing different zstd compression levels: 
https://lkml.org/lkml/2019/1/28/1930. It could be a little outdated, though.

But for HDDs it probably makes sense to increase it, and IIRC Chris wrote about 
such plans.

There are ideas but it's difficult because the kernel doesn't expose
the information we really need to make an automatic determination.
sysfs commonly misreports rotational devices as being non-rotational
and vice versa. Since this is based on the device self-reporting, it's
not great.
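For reference, the flag in question can be inspected directly. A minimal sketch (the SYSFS override is only there to make the snippet easy to exercise against a fake tree; real use would just read /sys):

```shell
# Print the rotational flag each block device self-reports via sysfs.
# As noted above, this value is often wrong (e.g. SSDs behind USB
# bridges reporting 1), so treat it as a hint, not ground truth.
list_rotational() {
    for f in "${SYSFS:-/sys}"/block/*/queue/rotational; do
        [ -e "$f" ] || continue
        dev=${f%/queue/rotational}
        printf '%s: %s\n' "${dev##*/}" "$(cat "$f")"
    done
}
# Example: list_rotational
#   prints one "device: 0|1" line per block device (1 = rotational)
```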

I use zstd:1 for SSD/NVMe. And zstd:3 (which is the same as not
specifying a level) for HDD/USB sticks/eMMC/SD Card. For the more
archive style of backup, I use zstd:7. But these can all be mixed and
matched, Btrfs doesn't care. You can even mix and match algorithms.
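For anyone wanting to try the mixing described above, it comes down to per-mount and per-path settings. A sketch with placeholder device names and paths (the fstab UUID is a placeholder, not a real value):

```shell
# Mount-time default for the whole filesystem (light compression
# for fast media):
#   mount -o compress=zstd:1 /dev/nvme0n1p2 /mnt
#
# /etc/fstab equivalent for an archive filesystem at a higher level:
#   UUID=xxxx  /srv/archive  btrfs  compress=zstd:7  0 0
#
# Per-file/per-directory algorithm override; note the property only
# takes an algorithm name (zlib, lzo, zstd, none), not a level:
#   btrfs property set /mnt/archive compression zstd
```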

Anyway, compress=zstd:1 is a good default. Everyone benefits, and I'm
not even sure someone with a very fast NVMe drive will notice a
slowdown, because the compression/decompression is threaded.

I disagree that everyone benefits. Any read-latency-sensitive workload will be slower, because the application-visible latency becomes the drive latency plus the decompression latency. And as the kernel benchmarks indicate, very few systems are going to get anywhere near the throughput of even baseline NVMe drives. With PCIe Gen4 controllers the burst speeds are even higher (>3 GB/sec read and write). Worse, if the workload is highly parallel and already at max CPU, the compression overhead will only make that situation worse. (I suspect you could test this just by building some packages that have good parallelism during the build.)

So you're penalizing a large majority of machines built in the past couple of years.

Plus, the write-amplification argument isn't universal either, as there continue to be controllers whose flash translation layer compresses the data.

OTOH, it makes a lot more sense on many of these ARM/SBC boards using eMMC, because the storage is so slow. If something like this were made the default, maybe the machine should run a quick CPU compress/decompress vs. IO speed test and enable compression only if the compress/decompress speed is at least the IO rate.
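That heuristic could be sketched as something like the following. This is hypothetical, not an existing Fedora or Btrfs tool; the two rates are passed in as integers in MB/s, and how you measure them (e.g. `zstd -b1` for compression throughput, a `dd` read of the device for IO throughput) is left illustrative:

```shell
# pick_compression COMPRESS_MBS IO_MBS
# Decide whether to enable compress=zstd:1 based on whether zstd:1
# can keep up with the device's raw read rate.
pick_compression() {
    if [ "$1" -ge "$2" ]; then
        echo "compress=zstd:1"
    else
        echo "no compression"
    fi
}
# Example: CPU compresses at 500 MB/s, eMMC reads at 150 MB/s
# pick_compression 500 150   # -> compress=zstd:1
```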



I expect that if the "fast" levels (the negative levels) recently added
to zstd make it into the kernel, Btrfs will likely remap its level 1 to
one of the negative levels and keep level 3 set to zstd 3 (the default).
So we might actually see it get even faster at the cost of some
compression ratio. Given this possibility, I think level 1 is the best
choice as a default for Fedora.



_______________________________________________
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
Do not reply to spam on the list, report it: 
https://pagure.io/fedora-infrastructure
