On 2017-06-23 13:25, Michał Sokołowski wrote:
> Hello group.
> I am confused: can somebody please confirm or deny which RAID subsystem
> is affected: btrfs's RAID5/6, or mdadm (Linux kernel RAID) RAID5/6?
> Are there some gotchas (in terms of broken reliability) when using the
> kernel one?
> The web is full of legends; it seems that this confusion is quite [...]
All of the issues mentioned here are specific to BTRFS raid5/raid6
profiles, with the exception [...]
On 2017-06-22 04:12, Qu Wenruo wrote:
> And in that case, even if the device of data stripe 2 is missing, btrfs
> doesn't really need to use parity to rebuild it, as btrfs knows there is no
> extent in that stripe, and the data csum matches for data stripe 1.
You are assuming that there is no data in [...]
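Qu's point above can be sketched with XOR parity and a plain CRC standing in for btrfs's crc32c: when the surviving data verifies against its stored csum and the missing stripe is known to hold no extent, no parity math is needed at all; only when the lost stripe actually holds data is XOR reconstruction required. A minimal sketch, with all names illustrative rather than btrfs internals:

```python
# RAID5-style recovery as discussed above: a missing data stripe can be
# rebuilt by XOR-ing the surviving data stripes with parity, and a
# checksum (btrfs uses crc32c; zlib.crc32 as a stand-in here) tells us
# whether the surviving data is trustworthy without touching parity.
import zlib

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rebuild_missing(stripes, parity: bytes, missing: int) -> bytes:
    """Rebuild stripes[missing] from the other stripes plus parity."""
    out = parity
    for i, s in enumerate(stripes):
        if i != missing:
            out = xor_blocks(out, s)
    return out

d1 = b"hello world 0123"   # data stripe 1 (has a stored csum)
d2 = b"\x00" * 16          # data stripe 2 (no extent allocated)
parity = xor_blocks(d1, d2)
csum_d1 = zlib.crc32(d1)   # stored checksum for stripe 1

# The device holding d2 is lost. If the filesystem knows stripe 2 holds
# no extent, it only needs to verify d1 against its csum -- no parity math:
assert zlib.crc32(d1) == csum_d1

# If stripe 2 *did* hold data, XOR reconstruction would recover it:
assert rebuild_missing([d1, d2], parity, missing=1) == d2
```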
On Wed, Jun 21, 2017 at 8:12 PM, Qu Wenruo wrote:
> Well, in fact, thanks to data csum and btrfs metadata CoW, there is quite a
> high chance that we won't cause any data damage.
But we have examples where data does not CoW; we see a partial stripe
overwrite. And if [...]
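A partial stripe overwrite is exactly the case where parity must be patched by read-modify-write rather than recomputed from a full new stripe: P_new = P_old XOR D_old XOR D_new. A small illustrative sketch (not btrfs code):

```python
# Read-modify-write of RAID5 parity for a partial stripe overwrite.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1_old, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p_old = xor_blocks(xor_blocks(d1_old, d2), d3)   # full-stripe parity

# Overwrite only d1 in place (no CoW): parity must be patched, which
# requires reading the old data (or the other strips) first.
d1_new = b"XXXX"
p_new = xor_blocks(xor_blocks(p_old, d1_old), d1_new)

# The patched parity equals what a full recompute would give:
assert p_new == xor_blocks(xor_blocks(d1_new, d2), d3)
```

If power is lost between writing d1_new and writing p_new, the stripe is left internally inconsistent, which is the opening for the write hole discussed in this thread.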
On Wed, Jun 21, 2017 at 2:12 PM, Goffredo Baroncelli wrote:
> Generally speaking, when you write "two failures" this means two failures at
> the same time. But the write hole happens even if these two failures are not
> at the same time:
>
> Event #1: power failure between [...]
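Goffredo's two-event sequence can be simulated directly with XOR parity: event #1 leaves parity stale without any device failing, and a later, unrelated device loss (event #2) then turns the stale parity into silent data corruption. Illustrative sketch only; stripe contents are made up:

```python
# The two-event write hole: the failures need not be simultaneous.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"old1", b"old2"
parity = xor_blocks(d1, d2)          # consistent stripe: p = d1 ^ d2

# Event #1: power failure between writing d1 and updating parity.
d1 = b"new1"                         # d1 reached the disk...
# ...but `parity` still matches the *old* d1. No device has failed,
# yet the stripe is already silently inconsistent.

# Event #2 (arbitrarily later): the device holding d2 dies.
# Reconstruction uses the stale parity and returns garbage, not d2:
d2_rebuilt = xor_blocks(parity, d1)
assert d2_rebuilt != b"old2"         # the write hole: d2 is lost
```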
On Wed, Jun 21, 2017 at 2:45 AM, Qu Wenruo wrote:
> Unlike the pure stripe method, a fully functional RAID5/6 should be written
> with full-stripe behavior, which is made up of N data stripes and correct P/Q.
>
> Given one example to show how the write sequence affects the [...]
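For reference on "N data stripes and correct P/Q": P is plain XOR across the data strips, while Q is a Reed-Solomon syndrome over GF(2^8) with generator g = 2 and the 0x11d polynomial, as in the Linux RAID6 design. A minimal sketch of computing both for one full stripe (illustrative, not btrfs code):

```python
# Full-stripe P/Q computation: P = d0 ^ d1 ^ ... ; Q = sum g^i * d_i
# over GF(2^8), evaluated here with Horner's scheme.
def gf_mul2(x: int) -> int:
    """Multiply by 2 in GF(2^8) with the 0x11d polynomial (RAID6)."""
    x <<= 1
    return (x ^ 0x11d) & 0xff if x & 0x100 else x

def pq_for_stripe(stripes):
    n = len(stripes[0])
    p = bytearray(n)
    q = bytearray(n)
    for s in reversed(stripes):          # Horner: q = q*2 ^ s
        for i in range(n):
            p[i] ^= s[i]
            q[i] = gf_mul2(q[i]) ^ s[i]
    return bytes(p), bytes(q)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # 3 data stripes
p, q = pq_for_stripe(data)
assert p == bytes(a ^ b ^ c for a, b, c in zip(*data))
```

A partial write that updates only some strips must keep both P and Q in step, which is why the write sequence Qu describes matters.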
On 21.06.2017 16:41, Austin S. Hemmelgarn wrote:
> On 2017-06-21 08:43, Christoph Anton Mitterer wrote:
>> On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
>>> Btrfs is always using the device ID to build up its device mapping.
>>> And for any multi-device implementation (LVM, mdadm) it's never a
>>> [...]
On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
> Btrfs is always using the device ID to build up its device mapping.
> And for any multi-device implementation (LVM, mdadm) it's never a
> good idea to use the device path.
Isn't it rather the other way round? Using the ID is bad? Don't you
remember [...]
> [ ... ] This will make some filesystems mostly RAID1, negating
> all space savings of RAID5, won't it? [ ... ]
RAID5/RAID6/... don't merely save space; more precisely, they
trade lower resilience and a more anisotropic, smaller
performance envelope for lower redundancy (= space savings).
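The redundancy trade-off above is easy to put in numbers. Back-of-envelope usable capacity for N equal devices (simplified: btrfs allocates per chunk, so real figures vary with mixed profiles):

```python
# Usable capacity for N devices of size `dev` under each profile.
def usable(profile: str, n: int, dev: float) -> float:
    return {"raid1": n * dev / 2,       # every block stored twice
            "raid5": (n - 1) * dev,     # one device's worth of parity
            "raid6": (n - 2) * dev}[profile]

# 4 x 4 TB devices:
assert usable("raid1", 4, 4.0) == 8.0   # half the raw capacity
assert usable("raid5", 4, 4.0) == 12.0  # 3/4 of the raw capacity
assert usable("raid6", 4, 4.0) == 8.0   # same as raid1 at N = 4
```

At four devices, raid6 saves nothing over raid1 while keeping raid5/6's narrower performance envelope; the space advantage only grows with larger N.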
On 21/06/17 06:48, Chris Murphy wrote:
> Another possibility is to ensure a new write is written to a new, *not*
> full, stripe, i.e. dynamic stripe size. So if the modification is a 50K
> file on a 4-disk raid5, instead of writing 3 64K data strips + 1 64K
> parity strip (a full stripe write), write out 1 [...]
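The arithmetic behind Chris's dynamic-stripe idea, assuming the truncated sentence continues with writing only as many strips as the data needs plus one parity strip (btrfs does not implement dynamic stripe size; this is purely illustrative):

```python
# 50K modification on a 4-disk raid5 with 64K strips.
import math

STRIP = 64 * 1024
write = 50 * 1024

# Full-stripe write: always 3 data strips + 1 parity strip.
full_stripe_io = 3 * STRIP + STRIP       # 256 KiB of I/O

# Dynamic stripe: only the strips the data needs, plus parity.
data_strips = math.ceil(write / STRIP)   # 50K fits in one 64K strip
dynamic_io = data_strips * STRIP + STRIP

assert full_stripe_io == 262144
assert dynamic_io == 131072              # half the I/O, and no
                                         # read-modify-write of the
                                         # untouched strips
```

The gain is not only less I/O: because the new data lands in a fresh stripe, no existing stripe is left half-updated, which sidesteps the write hole for that write.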
I am trying to piece together the actual status of the RAID5/6 bit of BTRFS.
The wiki refers to kernel 3.19, which was released in February 2015, so I
assume that the information there is a tad outdated (the last update on
the wiki page was July 2016).