On 2018-10-17 00:14, Dmitry Katsubo wrote:
As a workaround I can monitor the dmesg output, but:

1. It would be nice if I could tell btrfs to remount the volume read-only
once a certain error rate per minute is reached (see the sketch after this list).
2. It would be nice if btrfs could detect that both drives are unavailable and
unmount the filesystem (as remounting read-only won't help much in that case).
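
For point 1, something along the lines of the following Python sketch is
what I have in mind as a userspace approximation (the mount point,
threshold and poll interval are just placeholders I made up):

#!/usr/bin/env python3
# Minimal sketch, not production code: poll the per-device write error
# counters of a btrfs mount and remount it read-only once the error
# rate per interval exceeds a threshold.
import subprocess
import time

MOUNT = "/mnt/backups"    # assumed mount point from this mail
MAX_ERRORS = 10           # hypothetical errors-per-interval threshold
INTERVAL = 60             # seconds between polls

def write_errors():
    """Return {device: write_io_errs} parsed from 'btrfs device stats'."""
    out = subprocess.check_output(
        ["btrfs", "device", "stats", MOUNT], text=True)
    errs = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].endswith(".write_io_errs"):
            dev = parts[0].split("].")[0].lstrip("[")
            errs[dev] = int(parts[1])
    return errs

previous = write_errors()
while True:
    time.sleep(INTERVAL)
    current = write_errors()
    # Delta per interval, summed over all devices in the volume.
    delta = sum(current[d] - previous.get(d, 0) for d in current)
    if delta > MAX_ERRORS:
        subprocess.check_call(["mount", "-o", "remount,ro", MOUNT])
        break
    previous = current

Doing this from userspace is of course racy compared to btrfs doing it
internally, but it would at least stop the error counters from growing
without bound.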

Kernel log for Linux v4.14.2 is attached.

I wonder if somebody could suggest a better workaround. I understand that running
a btrfs volume over USB devices is not ideal, but I think btrfs could play some role
here as well.

In particular I wonder if btrfs could detect that all devices in a RAID1 volume have become inaccessible and, instead of reporting an ever-increasing "write error" counter to the kernel log, simply remount the volume read-only. "Inaccessible" could mean that the same block cannot be written back to the minimum number of devices in the RAID volume, at which point btrfs gives up.

Maybe someone can suggest a more sophisticated way of quickly checking that the filesystem is healthy? Right now the only way I see is to make a tiny write (like creating a file and instantly removing it) to make it fail faster... Checking for write I/O errors in "btrfs dev stats /mnt/backups" output could be an option, provided that the delta is computed over some period of time and the write error counter increases for both devices in the volume (as I am apparently not interested in a single failing block which btrfs tries to write again and again, increasing the write error counter).
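
To illustrate, a minimal probe sketch, assuming the same /mnt/backups
mount (the probe file name is made up): it snapshots the write error
counters, forces a tiny synced write, and reports the volume unhealthy
only if the counter grew on every device, so a single bad block on one
drive does not trigger it:

#!/usr/bin/env python3
import os
import subprocess
import sys

MOUNT = "/mnt/backups"                       # assumed mount point
PROBE = os.path.join(MOUNT, ".healthprobe")  # hypothetical probe file

def write_errors():
    """Return {device: write_io_errs} parsed from 'btrfs device stats'."""
    out = subprocess.check_output(
        ["btrfs", "device", "stats", MOUNT], text=True)
    errs = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].endswith(".write_io_errs"):
            errs[parts[0].split("].")[0].lstrip("[")] = int(parts[1])
    return errs

before = write_errors()
try:
    # Tiny write + fsync so an error surfaces now instead of at some
    # later writeback, then remove the probe file again.
    fd = os.open(PROBE, os.O_CREAT | os.O_WRONLY)
    os.write(fd, b"ping")
    os.fsync(fd)
    os.close(fd)
    os.remove(PROBE)
except OSError as e:
    print(f"probe write failed: {e}", file=sys.stderr)
    sys.exit(1)
after = write_errors()
if after and all(after[d] > before.get(d, 0) for d in after):
    print("write errors increased on all devices", file=sys.stderr)
    sys.exit(1)
print("filesystem looks healthy")

The non-zero exit status makes it easy to hang an alert or the
remount-read-only action off this check from cron.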

Thanks for any feedback.

--
With best regards,
Dmitry
