Lately, I've been running into a sharp increase in btrfsck inode 400
corruptions after an unclean shutdown.

The shutdowns have resulted from multiple causes (power outage, Xorg
keyboard misconfiguration, etc.).  I have not made any systematic
study of btrfs' robustness to corruption after an unclean shutdown,
but I've had at least 4 btrfs partitions report btrfsck inode 400
corruptions after unclean shutdowns.  That seems frequent enough to
make me suspect a regression has slipped in somewhere.  Then again,
maybe I'm just unlucky (I know an unclean shutdown can never be
guaranteed not to cause some corruption).

So far, the impact of these corruptions has been minor.  I've been
able to pull the data off the partition without error before
reformatting.  However, once btrfsck reports inode 400 corruptions,
running a balance produces numerous errors.

Even though the impact of each corruption has been minor, this
frequency of issues after an unclean shutdown seems much higher than
what I encounter with other file systems.

Has anybody else been encountering an increase in btrfsck inode 400
errors recently?