On Tue, Dec 19, 2017 at 1:28 AM, Chris Murphy <li...@colorremedies.com> wrote:
> On Mon, Dec 18, 2017 at 1:49 AM, Anand Jain <anand.j...@oracle.com> wrote:
>
>>  Agreed. IMO degraded-raid1-single-chunk is an accidental feature
>>  caused by [1], which we should revert back, since..
>>    - balance (to raid1 chunk) may fail if FS is near full
>>    - recovery (to raid1 chunk) will take more writes as compared
>>      to recovery under degraded raid1 chunks
>
>
> The advantage of writing single chunks when degraded, is in the case
> where a missing device returns (is readded, intact). Catching up that
> device with the first drive, is a manual but simple invocation of
> 'btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft'   The
> alternative is a full balance or full scrub. It's pretty tedious for
> big arrays.
>
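
For reference, that manual catch-up goes roughly like this (just a
sketch; /dev/sda and /mnt stand in for the surviving device and the
mount point):

  # filesystem was mounted degraded while the other device was away
  mount -o degraded /dev/sda /mnt

  # once the missing device is back and visible again, catch it up;
  # 'soft' only rewrites chunks that are not yet raid1, i.e. the
  # single chunks created while degraded
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt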

The alternative would be to introduce a new "resilver" operation that
allocates a second copy for every degraded chunk. It could even be
started automatically once enough redundancy is present again.
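
Until something like that exists, a rough userspace approximation
(only a sketch, assuming the filesystem is mounted at /mnt and that
'btrfs filesystem show' prints "missing" while a device is absent)
would be to poll and kick off the soft convert once everything is
back:

  #!/bin/sh
  # run from cron or a systemd timer; do nothing while still degraded
  if ! btrfs filesystem show /mnt | grep -qi missing; then
      # 'soft' keeps repeat runs cheap: chunks already raid1 are skipped
      btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
  fi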

> mdadm uses bitmap=internal for any array larger than 100GB for this
> reason, avoiding full resync.
>
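
On md that corresponds to something like (device name is just an
example):

  # add a write-intent bitmap so a returning member only resyncs
  # the regions dirtied while it was gone
  mdadm --grow /dev/md0 --bitmap=internal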

ZFS manages to avoid a full sync in this case quite efficiently:
resilvering only rewrites the blocks that changed while the device
was absent.
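
E.g. bringing the device back online starts a resilver that copies
only the data written while it was gone (pool/device names are just
examples):

  zpool online tank sdb   # resilver starts automatically
  zpool status tank       # shows resilver progress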