On Thu, Aug 11, 2016 at 1:07 PM, Duncan <1i5t5.dun...@cox.net> wrote:
> The compression-related problem is this:  Btrfs is considerably less
> tolerant of checksum-related errors on btrfs-compressed data,


Why? The data is the data. And why would it matter whether it's
application-compressed data vs Btrfs-compressed data? If there's a
checksum error, Btrfs is intolerant either way. I don't see how
there's a checksum error that Btrfs tolerates.

But also I don't know whether the checksum is computed over the
compressed data or the uncompressed data. Does the scrub blindly read
the compressed data, checksum it, and compare the result to the
previously recorded csum? Or does the scrub read the compressed data,
decompress it, checksum that, then compare? And does compression apply
to metadata? I don't think it does, judging from some squashfs testing
of the same set of binary files on ext4 vs btrfs uncompressed vs btrfs
compressed. The size difference there is explained by inline file data
being compressed (which it is), so I don't think the fs metadata
itself gets compressed.
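To make the two possible scrub orderings concrete, here's a toy sketch
(Python, purely illustrative, not actual btrfs code) using zlib and
crc32 as stand-ins. Option A records a csum of the on-disk
(compressed) bytes, so scrub can verify without decompressing; option
B records a csum of the logical (uncompressed) data, so scrub would
have to decompress first:

```python
import zlib

def write_extent(data: bytes):
    """Compress an extent and record checksums both ways
    (illustrative only -- btrfs records just one of these)."""
    compressed = zlib.compress(data)
    csum_compressed = zlib.crc32(compressed)  # option A: csum the on-disk bytes
    csum_plain = zlib.crc32(data)             # option B: csum the logical data
    return compressed, csum_compressed, csum_plain

def scrub_option_a(compressed: bytes, recorded: int) -> bool:
    # Scrub verifies the raw on-disk bytes; no decompression needed.
    return zlib.crc32(compressed) == recorded

def scrub_option_b(compressed: bytes, recorded: int) -> bool:
    # Scrub must decompress first, then checksum the result.
    return zlib.crc32(zlib.decompress(compressed)) == recorded

if __name__ == "__main__":
    data = b"hello " * 100
    comp, csum_a, csum_b = write_extent(data)
    print(scrub_option_a(comp, csum_a))  # True
    print(scrub_option_b(comp, csum_b))  # True
    # Corrupt one on-disk byte: option A detects it without decompressing.
    bad = bytes([comp[0] ^ 0xFF]) + comp[1:]
    print(scrub_option_a(bad, csum_a))   # False
```

The practical difference is cost and failure behavior: under option A
a scrub never has to decompress, while under option B a corrupted
compressed stream might fail to decompress at all before the checksum
can even be compared.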


Chris Murphy

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
