Dear Qu Wenruo/all
Did this patch ever get accepted?
This is the one that allowed a degraded filesystem to be mounted RW when
it was safe to do so.
For my use case (2-disk NAS RAID1) this is the one patch I'm waiting for
before upgrading to BTRFS; I suspect I'm not alone.
It would help avoid being locked…

On 2019/2/13 2:42 a.m., Andrei Borzenkov wrote:
> On 12.02.2019 10:47, Qu Wenruo wrote:
>>
>>
>> On 2019/2/12 3:43 p.m., Remi Gauvin wrote:
>>> On 2019-02-12 2:22 a.m., Qu Wenruo wrote:
>>>
>>>>> Does this mean you would rely on scrub/CSUM to repair the missing data
>>>>> if the device is restored?

On 2019-02-12 1:42 p.m., Andrei Borzenkov wrote:
>>
>
> But if I understand what happens after your patch correctly, the
> replacement device still does not contain valid data until someone runs
> a scrub. So in either case a manual step is required to restore full
> redundancy.
>
> Or does "btrfs replace"…
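
For reference, a minimal sketch of the manual step being discussed,
assuming the degraded filesystem is mounted at /mnt/btrfs, the missing
disk was devid 2, and /dev/sdc is a placeholder for the replacement disk:

# Rebuild onto the new disk in place of missing devid 2, then scrub
# so every block is verified against its checksum and repaired from
# the surviving copy if needed.
btrfs replace start 2 /dev/sdc /mnt/btrfs
btrfs replace status /mnt/btrfs
btrfs scrub start -B /mnt/btrfs
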
On 12.02.2019 10:47, Qu Wenruo wrote:
>
>
> On 2019/2/12 3:43 p.m., Remi Gauvin wrote:
>> On 2019-02-12 2:22 a.m., Qu Wenruo wrote:
>>
>>>> Does this mean you would rely on scrub/CSUM to repair the missing data
>>>> if the device is restored?
>>>
>>> Yes, just as btrfs usually does.
>>>
>>
>> I don't really understand the implications of the problems with mounting
>> the fs when single/dup data chunks are allocated…

On 2019/2/12 3:55 p.m., Remi Gauvin wrote:
> On 2019-02-12 2:47 a.m., Qu Wenruo wrote:
>>
>>
>> Consider this use case:
>>
>> One btrfs with 2 devices, RAID1 for data and metadata.
>>
>> One day devid 2 fails, and before the replacement arrives, the user can
>> only use devid 1 alone. (Maybe that's the root fs)…

On 2019-02-12 2:47 a.m., Qu Wenruo wrote:
>
>
> Consider this use case:
>
> One btrfs with 2 devices, RAID1 for data and metadata.
>
> One day devid 2 fails, and before the replacement arrives, the user can
> only use devid 1 alone. (Maybe that's the root fs).
>
> Then the new disk arrives, and the user replaces…
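
A sketch of this use case as shell commands, assuming the failure is
simulated by wiping devid 2 (device paths are placeholders):

dev1="/dev/test/scratch1"
dev2="/dev/test/scratch2"
mnt="/mnt/btrfs"

mkfs.btrfs -f -d raid1 -m raid1 $dev1 $dev2
# Simulate devid 2 dying before the replacement arrives.
wipefs -a $dev2
# The surviving device can then only be mounted degraded.
mount -o degraded $dev1 $mnt
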
On 2019/2/12 3:43 p.m., Remi Gauvin wrote:
> On 2019-02-12 2:22 a.m., Qu Wenruo wrote:
>
>>> Does this mean you would rely on scrub/CSUM to repair the missing data
>>> if the device is restored?
>>
>> Yes, just as btrfs usually does.
>>
>
> I don't really understand the implications of the problems with mounting
> the fs when single/dup data chunks are allocated…

On 2019-02-12 2:22 a.m., Qu Wenruo wrote:
>> Does this mean you would rely on scrub/CSUM to repair the missing data
>> if the device is restored?
>
> Yes, just as btrfs usually does.
>
I don't really understand the implications of the problems with mounting
the fs when single/dup data chunks are allocated…
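
For context, a sketch of how such single/dup chunks can be observed, and
converted back once both devices are present, assuming the filesystem is
mounted at /mnt/btrfs:

# One line per allocated profile; writes made while degraded may
# show up here as extra "single" or "DUP" entries.
btrfs filesystem df /mnt/btrfs
# Convert any stray chunks back to raid1; the "soft" filter skips
# chunks that already match the target profile.
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt/btrfs
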
On 2019/2/12 3:20 p.m., Remi Gauvin wrote:
> On 2019-02-12 2:03 a.m., Qu Wenruo wrote:
>
>> So we only need to consider missing devices as writable, and calculate
>> our chunk allocation profile with missing devices too.
>>
>> Then everything should work as expected, without annoying SINGLE/DUP
>> chunks blocking a later degraded mount.

On 2019-02-12 2:03 a.m., Qu Wenruo wrote:
> So we only need to consider missing devices as writable, and calculate
> our chunk allocation profile with missing devices too.
>
> Then everything should work as expected, without annoying SINGLE/DUP
> chunks blocking a later degraded mount.
>
>
Does this mean you would rely on scrub/CSUM to repair the missing data
if the device is restored?
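
A sketch of that repair path, assuming the old device comes back and the
filesystem is remounted with both devices visible:

# Remount normally once both devices are present again...
mount /dev/test/scratch1 /mnt/btrfs
# ...then scrub: blocks whose checksums fail on the stale device are
# rewritten from the good mirror.
btrfs scrub start -B /mnt/btrfs
btrfs scrub status /mnt/btrfs
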
[PROBLEM]
The following script can easily create unnecessary SINGLE or DUP chunks:
#!/bin/bash
dev1="/dev/test/scratch1"
dev2="/dev/test/scratch2"
dev3="/dev/test/scratch3"
mnt="/mnt/btrfs"
umount $dev1 $dev2 $dev3 $mnt &> /dev/null
mkfs.btrfs -f $dev1 $dev2 -d raid1 -m raid1
mount …
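
A hedged guess at how the truncated script might continue, following the
degraded-mount scenario discussed above (the wipefs, degraded mount, and
test write are assumptions, not the original script):

umount $mnt
wipefs -a $dev2
mount -o degraded $dev1 $mnt
# Any chunk allocated while degraded falls back to SINGLE (data)
# or DUP (metadata) instead of RAID1.
fallocate -l 128M $mnt/padding
btrfs filesystem df $mnt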