On Thu, Jun 28, 2018 at 9:37 AM, Remi Gauvin <r...@georgianit.com> wrote:
> On 2018-06-28 10:17 AM, Chris Murphy wrote:
>
>> 2. The new data goes in a single chunk; even if the user does a manual
>> balance (resync) their data isn't replicated. They must know to do a
>> -dconvert balance to replicate the new data. Again this is a net worse
>> behavior than mdadm out of the box, putting user data at risk.
>
> I'm not sure this is the case.  Even though writes failed to the
> disconnected device, btrfs seemed to keep on going as though it *were*
> still connected.

Yeah, in your case the failure happens during normal operation, and in
that case Btrfs never enters a degraded state. So it keeps writing to
the raid1 chunks on the working drive, with writes to the failed device
going nowhere (racking up lots of write errors). When you stop using
the volume, fix the problem with the missing drive, and then remount
it, reads really should still use only the new copy on the
never-missing drive, even though Btrfs won't necessarily notice that
those extents are absent from the formerly missing drive. You have to
run a balance manually to fix it.
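For reference, a minimal sketch of that manual fix, assuming a mount
point of /mnt (hypothetical) and that the unreplicated data landed in
single-profile chunks:

  # Rewrite data and metadata chunks as raid1; the "soft" modifier
  # skips chunks that already carry the target profile.
  btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt

Without the convert filters, a plain balance just rewrites chunks in
their existing profile, which re-replicates their extents but won't
convert a single-profile chunk back to raid1.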


> When the array was re-mounted with both devices (never mounted as
> degraded) and scrub was run, scrub took a *long* time fixing errors,
> at a whopping 3MB/s, and reported having fixed millions of them.

That's slow, but it's expected to fix a lot of errors. Even over a
short window of degraded operation, thousands of data and metadata
extents end up with a missing copy on the formerly disconnected drive,
and each one has to be re-replicated.
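If it helps, the usual sequence for kicking that off and watching the
counters, again assuming the filesystem is mounted at /mnt:

  btrfs scrub start /mnt
  btrfs scrub status /mnt    # progress and corrected-error counts
  btrfs device stats /mnt    # per-device write/read/corruption counters

Note that the device stats counters are cumulative; they persist until
explicitly reset with 'btrfs device stats -z'.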




-- 
Chris Murphy