On Sun, Sep 11, 2016 at 2:06 PM, Adam Borowski <kilob...@angband.pl> wrote:
> On Sun, Sep 11, 2016 at 09:48:35PM +0200, Martin Steigerwald wrote:
>> Hmm… I found this thread after being referred to it by the Debian wiki page
>> on BTRFS¹.
>>
>> I have used compress=lzo on BTRFS RAID 1 since April 2014 and have never
>> found an issue. Steven, your filesystem wasn't RAID 1 but RAID 5 or 6?
>>
>> I just want to assess whether using compress=lzo might be dangerous in my
>> setup. Actually, right now I'd like to keep using it, since I think at least
>> one of the SSDs does not compress. And… well… /home and /, where I use it,
>> are both quite full already.
>>
>> [1] https://wiki.debian.org/Btrfs#WARNINGS
>
> I have used compress=lzo for years, kernels 3.8, 3.13 and 3.14 (a bunch of
> machines), without a single glitch; heavy snapshotting, single dev only, no
> quota.  Until recently I had never balanced.
>
> I did have a case of ENOSPC with <80% full on 4.7 which might or might not
> be related to compress=lzo.
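
On the ENOSPC point: hitting "no space" on btrfs well below 100% usage is
usually the chunk allocator running out of unallocated space rather than
anything compression-specific, so it may be unrelated. Just as a sketch
(the mount point here is only an example, and the 60% threshold is an
arbitrary starting value), something like this shows whether that is what
happened and reclaims mostly-empty data chunks:

    # show allocated vs. actually used space per chunk type
    btrfs filesystem usage /mnt

    # rewrite data chunks that are less than 60% full so their space
    # returns to the unallocated pool
    btrfs balance start -dusage=60 /mnt

But that's just the usual first guess; I can't say whether it applies here.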

I'm not finding it offhand, but Duncan has some experience with this
issue. He'd occasionally run into some sort of problem (hand wave); I
don't know how serious it was, maybe just scary warnings like a call
trace, but no actual damage? My recollection is that compression might
make certain edge-case problems more difficult to recover from. I don't
know why that would be, since metadata itself isn't compressed (though
inline data stored in metadata nodes can be). But there you go: if
things start going wonky, compression might make recovery harder.
That's speculative, and I also don't know whether lzo and zlib differ
in this regard.
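
For what it's worth, the compression algorithm is only a mount option, so
trying the two side by side doesn't require recreating the filesystem. A
rough sketch (the device UUID and mount point are placeholders; only newly
written data picks up a changed setting unless you defragment):

    # /etc/fstab entry using lzo
    UUID=0000-0000  /home  btrfs  defaults,compress=lzo  0  0

    # switch an already-mounted filesystem to zlib
    mount -o remount,compress=zlib /home

    # optionally recompress existing files in place
    # (note: defragment unshares extents that are shared with snapshots)
    btrfs filesystem defragment -r -czlib /home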


-- 
Chris Murphy