On 2014-10-10 18:05, Eric Sandeen wrote:
On 10/10/14 2:35 PM, Austin S Hemmelgarn wrote:
On 2014-10-10 13:43, Bob Marley wrote:
On 10/10/2014 16:37, Chris Murphy wrote:
The fail-safe behavior is to treat the known good tree root as
the default tree root, and bypass the bad tree root if it cannot
be repaired, so that the volume can be mounted with default mount
options (i.e. the ones in fstab). Otherwise it's a filesystem
that isn't well suited for general purpose use as rootfs, let
alone for boot.
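
To make the proposed fail-safe concrete, here is a rough sketch in C (not
actual btrfs code; the structs and helper logic are made up purely for
illustration) of the fallback order being described: use the default tree
root when it validates, otherwise fall back to the newest backup root that
does.

    /*
     * Illustrative sketch only, NOT real btrfs code.  The structs and
     * field names are invented; the point is just the fallback order:
     * prefer the default tree root, and only fall back to an older
     * known-good backup root when the default one fails validation.
     */
    #include <stdio.h>

    struct tree_root {
            unsigned long long generation;  /* transaction generation */
            int valid;                      /* 1 if it passes validation */
    };

    /* Hypothetical layout: one default root plus a few backup roots. */
    struct superblock {
            struct tree_root def;
            struct tree_root backup[4];
            int nr_backups;
    };

    static struct tree_root *pick_usable_root(struct superblock *sb)
    {
            if (sb->def.valid)
                    return &sb->def;        /* normal case: default root is fine */

            /* fail-safe case: walk the backup roots, newest first */
            for (int i = 0; i < sb->nr_backups; i++)
                    if (sb->backup[i].valid)
                            return &sb->backup[i];

            return NULL;                    /* nothing usable: mount fails as today */
    }

    int main(void)
    {
            struct superblock sb = {
                    .def = { .generation = 1042, .valid = 0 },   /* corrupted */
                    .backup = { { 1041, 1 }, { 1040, 1 } },
                    .nr_backups = 2,
            };
            struct tree_root *r = pick_usable_root(&sb);

            if (r)
                    printf("mounting with tree root generation %llu\n", r->generation);
            else
                    printf("no usable tree root, refusing to mount\n");
            return 0;
    }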


A filesystem which is suited for "general purpose" use is a
filesystem which honors fsync, and doesn't *ever* auto-roll-back
without user intervention.

Anything else is not suited for database transactions at all.
Any paid service which keeps its users' database on btrfs is going
to be at risk of losing payments, probably without the company
even knowing. If btrfs goes this way, I hope a big warning is
written on the wiki and in the manpages stating that this
filesystem is totally unsuitable for hosting databases that
perform transactions.
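
For what it's worth, the durability contract being argued over here is the
standard POSIX one. Below is a minimal sketch in C (generic POSIX only; the
journal file name and transaction record are made up) of the kind of commit
sequence a transactional database relies on: once fsync() has returned 0,
the write must survive a crash and must not be silently rolled back later.

    /*
     * Minimal sketch of a durability-critical commit using only
     * standard POSIX calls.  The file name and record are hypothetical;
     * the point is the contract under discussion: after fsync() returns
     * 0, the data must still be there after a crash, and the filesystem
     * must not silently roll it back later.
     */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char *path = "journal.log";     /* hypothetical journal file */
            const char buf[] = "COMMIT txn 42\n";

            int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0600);
            if (fd < 0) { perror("open"); return 1; }

            if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
                    perror("write");
                    return 1;
            }

            /*
             * The transaction only counts as committed once fsync()
             * succeeds.  If the filesystem acknowledges this and later
             * "forgets" the write (e.g. by reverting to an older tree),
             * the application has no way to know a payment was lost.
             */
            if (fsync(fd) != 0) {
                    perror("fsync");
                    return 1;
            }

            close(fd);
            puts("transaction durable, as far as the filesystem promises");
            return 0;
    }

The disagreement in this thread is entirely about what the filesystem is
allowed to do after that fsync() has returned successfully.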
If they need reliability, they should have some form of redundancy
in place and/or run the database directly on the block device,
because even ext4, XFS, and pretty much every other filesystem can
lose data sometimes,

Not if, e.g., fsync returns.  If the data is gone later, it's a hardware
problem, or occasionally a bug - bugs that are usually found & fixed
pretty quickly.
Yes, barring bugs and hardware problems they won't lose data.

the difference being that those tend to give
worse results when hardware is misbehaving than BTRFS does, because
BTRFS usually has an old copy of whatever data structure gets
corrupted to fall back on.

I'm curious, is that based on conjecture or real-world testing?

I wouldn't really call it testing, but based on personal experience I know that ext4 can lose whole directory sub-trees if it gets a single corrupt sector in the wrong place. I've also had that happen on FAT32 and (somewhat interestingly) HFS+ with failing or misbehaving hardware, and I've actually had individual files disappear on HFS+ without any discernible hardware issues. I don't have as much experience with XFS, but based on what I do know of it, I'd assume it could have similar issues.

As for BTRFS, I've only ever had issues with it three times: one was due to the kernel panicking during resume from S1, and the other two were due to hardware problems that would have caused issues on most other filesystems as well. In both hardware cases, while the filesystem was initially unmountable, it was relatively simple to fix once I knew how. I once tried to fix an ext4 filesystem that had become unmountable due to dropped writes, and that was anything but simple, even with the much greater amount of documentation available.
