On Sun, 2015-12-27 at 17:58 -0700, Chris Murphy wrote:
> I don't see a good use case for scrubbing a degraded array. First
> make
> the array healthy, then scrub.
As I've said, I basically agree... but *if* scrubbing a degraded fs
leads to even more errors (apart from the fact that you may lose
another device to hardware failure while scrubbing), then either this
needs to be fixed *or* scrub must not be allowed to start on a degraded
array (in the sense that the software prevents it).

Consider a weekly cron job that scrubs, which happens to run while the
array is degraded.
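As a rough sketch of how one might guard such a cron job today (the mount point and the grep-for-"missing" heuristic on `btrfs filesystem show` output are my assumptions, not a recommendation):

```shell
#!/bin/sh
# Illustrative weekly scrub wrapper; /mnt/data is an assumed mount point.
# Skip the scrub if any device is reported missing, i.e. the fs looks degraded.
MNT=/mnt/data

if btrfs filesystem show "$MNT" | grep -q 'missing'; then
    echo "skipping scrub: $MNT appears degraded" >&2
    exit 1
fi

# -B runs the scrub in the foreground so cron captures the exit status.
btrfs scrub start -B "$MNT"
```

Of course this only papers over the problem from userland; the point stands that the kernel/tools should refuse or handle it gracefully themselves.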


>  But I've not tested this with mdadm or
> lvm raid. I don't know how they behave.
Don't know, but for them it makes less sense, as their scrub is
different from ours... they have no checksums and can only tell whether
the RAID itself is consistent, which it of course is not when degraded.
It may still make sense for mirrored RAIDs, though (e.g. with a 4-disk
RAID1 that is degraded because 1 disk is missing, it may still be worth
scrubbing the remaining 3...).


>  But even if either of them
> tolerate it, it's a legitimate design decision for Btrfs developers
> to
> refuse supporting the scrub of a degraded array. Same for balancing
> for that matter.
Sure... but it still needs to be handled gracefully... i.e. no further
damage, and reasonable userland output that people will understand.


Cheers,
Chris.
