On 2018/12/26 1:37 PM, Kangjie Lu wrote:
> In case sysfs_create_group fails, let's check its return value and
> issue an error message.
>
> Signed-off-by: Kangjie Lu
> ---
> fs/btrfs/sysfs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
> index
On 2018/12/26 11:46 AM, Kangjie Lu wrote:
> In case sysfs_create_group fails, let's check its return value and
> issue an error message.
>
> Signed-off-by: Kangjie Lu
> ---
> fs/btrfs/sysfs.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
> inde
Now with a somewhat larger fs (257G used, backed by HDD), the result is
much more obvious:
$ sudo perf ftrace -t function_graph \
-T open_ctree \
-T btrfs_read_block_groups \
-T check_chunk_block_group_mappings \
-T btrfs_re
Please discard this patch; such a generation check can't handle
concurrency in snapshot creation.
And since the original report of a generation mismatch was entirely
caused by btrfs-progs, the check has no real-world need.
Thanks,
Qu
On 2018/12/25 12:50 PM, Qu Wenruo wrote:
> Even we did super bloc
On Fri, Dec 14, 2018 at 9:51 AM Dmitriy Gorokh wrote:
>
> A RAID5 or RAID6 filesystem might get corrupted in the following scenario:
>
> 1. Create a 4-disk RAID6 filesystem
> 2. Preallocate 16 10GB files
> 3. Run fio: 'fio --name=testload --directory=./ --size=10G
> --numjobs=16 --bs=64k --iodepth=64
There was discussion about this just a few days ago. CC'ing 4-5 lists is
more than enough.
On 23/12/2018, Julia Lawall wrote:
>
>
> On Sun, 23 Dec 2018, Tom Psyborg wrote:
>
>> Why do you CC this to so many lists?
>
> Because the different files are in different subsystems. The cover letter
> goes to