On 2019-01-29 2:02 p.m., Chris Murphy wrote:
>
> There's no dirty bit set on mount, and thus no dirty bit to unset on a
> clean unmount, from which to infer a dirty unmount if it's present at
> the next mount.
Some time back, I was toying with the idea of a startup script that
creates a /need_scrub
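A minimal sketch of how such a flag-file scheme could look, assuming the
marker is created on every boot, removed again by a separate clean-shutdown
script, and that finding it at the next boot therefore implies an unclean
shutdown worth scrubbing; the mount point and the Python wrapper around
btrfs-progs are illustrative assumptions, not something stated in this thread:

#!/usr/bin/env python3
# Hypothetical boot-time helper for the /need_scrub idea described above.
# Assumption: a clean shutdown removes /need_scrub, so its presence at boot
# means the previous shutdown was unclean. MOUNTPOINT is a placeholder.
import os
import subprocess

FLAG = "/need_scrub"
MOUNTPOINT = "/mnt/raid56"   # placeholder for the RAID5/6 filesystem

def main() -> None:
    if os.path.exists(FLAG):
        # Flag survived the last boot cycle: resynchronise parity with a
        # foreground scrub (-B waits for the scrub to finish).
        subprocess.run(["btrfs", "scrub", "start", "-B", MOUNTPOINT], check=True)
    # (Re)create the flag for this boot; a clean shutdown would delete it.
    with open(FLAG, "w"):
        pass

if __name__ == "__main__":
    main()

A matching shutdown hook that removes /need_scrub would complete the scheme;
without it the scrub would simply run on every boot.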
Going back to my original email, would the BTRFS wiki admins consider
updating the RAID56 status page so that it better reflects the current
state of things?
It still states "multiple serious data-loss bugs", which, as Qu Wenruo
has already clarified, is not the case. The only remaining "bug" is the
write-hole edge case.
On 29/01/2019 20.02, Chris Murphy wrote:
> On Mon, Jan 28, 2019 at 3:52 PM Remi Gauvin wrote:
>>
>> On 2019-01-28 5:07 p.m., DanglingPointer wrote:
>>
>>> From Qu's statement and perspective, there's no difference to other
>>> non-BTRFS software RAID56's out there that are marked as stable (except
>>> ZFS).
On Mon, Jan 28, 2019 at 3:52 PM Remi Gauvin wrote:
>
> On 2019-01-28 5:07 p.m., DanglingPointer wrote:
>
> > From Qu's statement and perspective, there's no difference to other
> > non-BTRFS software RAID56's out there that are marked as stable (except
> > ZFS).
> > Also there are no "multiple serious data-loss bugs".
On 2019/1/29 6:07 AM, DanglingPointer wrote:
> Thanks Qu!
> I thought as much from following the mailing list and your great work
> over the years!
>
> Would it be possible to get the wiki updated to reflect the current
> "real" status?
>
> From Qu's statement and perspective, there's no difference to other
> non-BTRFS software RAID56's out there that are marked as stable (except
> ZFS).
On 2019-01-28 5:07 p.m., DanglingPointer wrote:
> From Qu's statement and perspective, there's no difference to other
> non-BTRFS software RAID56's out there that are marked as stable (except
> ZFS).
> Also there are no "multiple serious data-loss bugs".
> Please do consider my proposal as it will
Thanks Qu!
I thought as much from following the mailing list and your great work
over the years!
Would it be possible to get the wiki updated to reflect the current
"real" status?
From Qu's statement and perspective, there's no difference to other
non-BTRFS software RAID56's out there that are marked as stable (except
ZFS).
On Mon, Jan 28, 2019 at 03:23:28PM, Supercilious Dude wrote:
> On Mon, 28 Jan 2019 at 01:18, Qu Wenruo wrote:
> >
> > So for current upstream kernel, there should be no major problem despite
> > write hole.
>
>
> Can you please elaborate on the implications of the write-hole? Does
> it mea
On Mon, 28 Jan 2019 at 01:18, Qu Wenruo wrote:
>
> So for current upstream kernel, there should be no major problem despite
> write hole.
Can you please elaborate on the implications of the write-hole? Does
it mean that the transaction currently in-flight might be lost but the
filesystem is othe
On 2019/1/26 7:45 PM, DanglingPointer wrote:
>
>
> Hi All,
>
> For clarity for the masses, what are the "multiple serious data-loss
> bugs" as mentioned in the btrfs wiki?
> The bullet points on this page:
> https://btrfs.wiki.kernel.org/index.php/RAID56
> don't enumerate the bugs. Not even in a high level.
On 2019-01-26 7:07 a.m., waxhead wrote:
>
> What effect exactly the write hole might have on *data* is not pointed
> out in detail, but I would imagine that for some it might be desirable
> to run a btrfs filesystem with metadata in "RAID" 1/10 mode and data in
> "RAID" 5/6.
>
One big problem
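For reference, the split waxhead describes above (metadata in RAID1, data in
RAID5) maps onto the standard profile options of mkfs.btrfs and btrfs balance.
A minimal sketch follows; the device names, mount point and the Python wrapper
are illustrative assumptions only:

#!/usr/bin/env python3
# Sketch of creating, or converting to, a metadata-raid1 / data-raid5 layout.
# DEVICES and MOUNTPOINT are placeholders, not values from this thread.
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
MOUNTPOINT = "/mnt/raid56"

def make_new_filesystem() -> None:
    # Fresh filesystem: mirrored metadata, parity-striped data.
    subprocess.run(["mkfs.btrfs", "-m", "raid1", "-d", "raid5", *DEVICES],
                   check=True)

def convert_existing_filesystem() -> None:
    # Convert an already-mounted filesystem in place via a balance.
    subprocess.run(["btrfs", "balance", "start",
                    "-mconvert=raid1", "-dconvert=raid5", MOUNTPOINT],
                   check=True)

Either path keeps the metadata out of the parity-based profiles while leaving
the data on RAID5.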
DanglingPointer wrote:
Hi All,
For clarity for the masses, what are the "multiple serious data-loss
bugs" as mentioned in the btrfs wiki?
The bullet points on this page:
https://btrfs.wiki.kernel.org/index.php/RAID56
don't enumerate the bugs. Not even in a high level. If anything what
can