Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Filipe Manana
On Tue, Mar 6, 2018 at 12:07 PM, Qu Wenruo  wrote:
>
>
On 2018-03-06 18:12, Filipe Manana wrote:
>> On Tue, Mar 6, 2018 at 8:15 AM, Qu Wenruo  wrote:
>>> There have been some btrfs corruption reports on the mailing list for a while,
>>
>> There have been for years (well, since ever) many reports of different
>> types of corruptions.
>> Which kind of corruption are you referring to?
>>
>>> although such corruption is pretty rare and almost impossible to
>>> reproduce, with dm-log-writes we found it is highly related to the
>>> v1 space cache.
>>>
>>> Unlike journal-based filesystems, btrfs relies completely on metadata
>>> CoW to protect itself from power loss, which requires the extent
>>> allocator to avoid allocating extents over ranges that are still in
>>> use. Btrfs also uses the free space cache to speed up such
>>> allocations.
>>>
>>> However there is a problem: the v1 space cache is not protected by
>>> data CoW and can be corrupted during power loss. So btrfs does extra
>>> checks on the free space cache, verifying its in-file csum, its
>>> generation, and the free space recorded in the cache against the
>>> extent tree.
>>>
>>> The problem is that, under heavy concurrency, the v1 space cache can
>>> be corrupted even during normal operation without power loss.
>>
>> How?
>>
>>> And we believe a corrupted space cache can break btrfs metadata CoW
>>> and lead to the rare corruption on the next power loss.
>>
>> Which kind of corruption?
>>
>>>
>>> The most obvious symptom is a difference in free space:
>>>
>>> This will be caught by the kernel, but the check is quite weak, and
>>> if the net free space change is 0 in one transaction, the corrupted
>>> cache can still be loaded by the kernel.
>>
>> How can that happen? The only case I'm aware of, explained below,
>> always leads to a difference (the space cache has less free space than
>> what we actually have if we check the extent tree).
>>
>>>
>>> In this case, btrfs check would report things like:
>>> --
>>> block group 298844160 has wrong amount of free space
>>> failed to load free space cache for block group 298844160
>>> --
>>
>> This is normal, but not very common, due to tiny races that exist
>> between committing a transaction (writing the free space caches) and
>> running delalloc, for example (since reserving an extent while running
>> delalloc doesn't join/start a transaction).
>
> Well, at least I didn't find any place that releases space unprotected
> by a transaction handle, so the cache's free space can only be less
> than or equal to the real free space.

Well, that's what I said before. For that particular race I mentioned,
which is the only one I can remember always existing, the only
inconsistency that can happen is that the cache has fewer free extents
than what you can find by scanning the extent tree. If the
inconsistency was not detected at cache loading time, we would only
leak extents, but never double allocate them.
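That subset argument can be sketched with a toy shell model (the extent names and sets are purely illustrative, not real btrfs state): if the cached free extents are a subset of the really-free extents, every allocation served from the cache is also really free, so the worst case is hidden (leaked) space, never a double allocation.

```shell
real_free="A B C D"      # extents really free per the extent tree
cached_free="A C"        # stale v1 cache: missing B and D, but a subset
verdicts=""
for ext in $cached_free; do
	# allocating from the cache is safe iff the extent is really free
	case " $real_free " in
	*" $ext "*) v="safe" ;;
	*) v="DOUBLE-ALLOC" ;;
	esac
	echo "alloc $ext: $v"
	verdicts="$verdicts$v "
done
```

With a subset cache every verdict is "safe"; only a cache claiming free space that is actually in use could produce a double allocation.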

>
> So in that case, a corrupted cache will never pass the free space
> check, so it will never be loaded.
>
> Another dead end unfortunately.
>
> Thanks,
> Qu
>>
>>>
>>> But considering the test case uses notreelog, btrfs won't do
>>> sub-transaction commits (which don't increase the generation), so
>>> each transaction should be consistent and nothing should be reported
>>> at all.
>>>
>>> Furthermore, we can even find corrupted file extents like:
>>> --
>>> root 5 inode 261 errors 100, file extent discount
>>> Found file extent holes:
>>> start: 962560, len: 32768
>>> ERROR: errors found in fs roots
>>
>> Why do you think that's a corruption? Does it cause data loss or any
>> user visible issue?
>>
>> Having file extent holes not inserted happens when mixing buffered and
>> direct IO writes to a file (and fsstress does that), for example:
>>
>> create file
>> buffered write at offset 0, length 64K
>> direct IO write at offset at offset 64K, length 4K
>> transaction commit
>> power loss
>> after this we get a missing 64K hole extent at offset 0 (at
>> btrfs_file_write_iter we only add hole extents if the start offset is
>> greater than the current i_size)
>>
>> But this does not cause any problem for the user or the fs itself, and
>> it's supposed to be like that in the NO_HOLES mode which one day
>> (probably) will be the default mode.
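That write pattern can be reproduced standalone (hedged sketch: the /tmp path is illustrative, and dd stands in for xfs_io; add oflag=direct to the second dd on a filesystem that supports O_DIRECT to match the direct IO step — on btrfs without NO_HOLES it is the power loss right after the commit that leaves the 64K hole extent missing):

```shell
f=/tmp/hole_demo.$$
# buffered write at offset 0, length 64K
dd if=/dev/zero of="$f" bs=64K count=1 conv=notrunc status=none
# write at offset 64K, length 4K (direct IO in the original scenario)
dd if=/dev/zero of="$f" bs=4K count=1 seek=16 conv=notrunc status=none
size=$(stat -c '%s' "$f")
rm -f "$f"
echo "$size"    # 69632 bytes = 64K + 4K
```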
>>
>>> --
>>>
>>> Signed-off-by: Qu Wenruo 
>>> ---
>>>  common/dmlogwrites  |  72 +++
>>>  tests/btrfs/159 | 141 
>>> 
>>>  tests/btrfs/159.out |   2 +
>>>  tests/btrfs/group   |   1 +
>>>  4 files changed, 216 insertions(+)
>>>  create mode 100755 tests/btrfs/159
>>>  create mode 100644 tests/btrfs/159.out
>>>
>>> diff --git a/common/dmlogwrites b/common/dmlogwrites
>>> index 467b872e..54e7e242 100644
>>> --- a/common/dmlogwrites
>>> +++ b/common/dmlogwrites
>>> @@ -126,3 +126,75 @@ _log_writes_cleanup()
>>> $UDEV_SETTLE_PROG >/dev/null 2>&1
>>> _log_writes_remove
>>>  }
>>> +
>>> +# Convert log writes mark to entry number
>>> +# Result entry number is output to stdout, could be empty if not found

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Filipe Manana
On Tue, Mar 6, 2018 at 10:53 AM, Qu Wenruo  wrote:
>
>
On 2018-03-06 18:12, Filipe Manana wrote:
>> On Tue, Mar 6, 2018 at 8:15 AM, Qu Wenruo  wrote:
>>> There have been some btrfs corruption reports on the mailing list for a while,
>>
>> There have been for years (well, since ever) many reports of different
>> types of corruptions.
>> Which kind of corruption are you referring to?
>
> Transid error.

You mean parent transid mismatches in tree blocks?
Please always be explicit about which problems you are mentioning. There
can be many "transid" problems.


>
>>
>>> although such corruption is pretty rare and almost impossible to
>>> reproduce, with dm-log-writes we found it is highly related to the
>>> v1 space cache.
>>>
>>> Unlike journal-based filesystems, btrfs relies completely on metadata
>>> CoW to protect itself from power loss, which requires the extent
>>> allocator to avoid allocating extents over ranges that are still in
>>> use. Btrfs also uses the free space cache to speed up such
>>> allocations.
>>>
>>> However there is a problem: the v1 space cache is not protected by
>>> data CoW and can be corrupted during power loss. So btrfs does extra
>>> checks on the free space cache, verifying its in-file csum, its
>>> generation, and the free space recorded in the cache against the
>>> extent tree.
>>>
>>> The problem is that, under heavy concurrency, the v1 space cache can
>>> be corrupted even during normal operation without power loss.
>>
>> How?
>
> At FUA writes, we can get a v1 space cache that passes the checksum
> and generation checks but differs in free space.
>
>>
>>> And we believe a corrupted space cache can break btrfs metadata CoW
>>> and lead to the rare corruption on the next power loss.
>>
>> Which kind of corruption?
>
> Transid related error.
>
>>
>>>
>>> The most obvious symptom is a difference in free space:
>>>
>>> This will be caught by the kernel, but the check is quite weak, and
>>> if the net free space change is 0 in one transaction, the corrupted
>>> cache can still be loaded by the kernel.
>>
>> How can that happen? The only case I'm aware of, explained below,
>> always leads to a difference (the space cache has less free space than
>> what we actually have if we check the extent tree).
>>
>>>
>>> In this case, btrfs check would report things like:
>>> --
>>> block group 298844160 has wrong amount of free space
>>> failed to load free space cache for block group 298844160
>>> --
>>
>> This is normal, but not very common, due to tiny races that exist
>> between committing a transaction (writing the free space caches) and
>> running delalloc, for example (since reserving an extent while running
>> delalloc doesn't join/start a transaction).
>
> This race explains a lot.
>
> But could that cause a corrupted cache to be loaded after power loss,
> and break metadata CoW?

No, unless there's a bug in the procedure of detecting inconsistent caches.

>
> At least at the point when the FUA happens, the free space cache can
> pass both the csum and generation checks; only the free space
> difference prevents it from being loaded.
>
>>
>>>
>>> But considering the test case uses notreelog, btrfs won't do
>>> sub-transaction commits (which don't increase the generation), so
>>> each transaction should be consistent and nothing should be reported
>>> at all.
>>>
>>> Furthermore, we can even find corrupted file extents like:
>>> --
>>> root 5 inode 261 errors 100, file extent discount
>>> Found file extent holes:
>>> start: 962560, len: 32768
>>> ERROR: errors found in fs roots
>>
>> Why do you think that's a corruption? Does it cause data loss or any
>> user visible issue?
>
> It breaks the rule that we shouldn't have holes in file extents.

Right. My question/remark was that, besides the warning emitted by
fsck, this does not cause any harm to users/applications nor the
filesystem. That is, no data loss, no metadata corruption, nothing that
prevents the user from reading all previously written data, and
nothing preventing future IO to any range of the file.
So this is just a minor annoyance and far from a serious bug.

>
> IIRC Nikolay is trying to use inode_lock_shared() to solve this race.

I don't see the relation, especially since this is not caused by
race conditions.

>
>>
>> Having file extent holes not inserted happens when mixing buffered and
>> direct IO writes to a file (and fsstress does that), for example:
>>
>> create file
>> buffered write at offset 0, length 64K
>> direct IO write at offset at offset 64K, length 4K
>> transaction commit
>> power loss
>> after this we get a missing 64K hole extent at offset 0 (at
>> btrfs_file_write_iter we only add hole extents if the start offset is
>> greater than the current i_size)
>>
>> But this does not cause any problem for the user or the fs itself, and
>> it's supposed to be like that in the NO_HOLES mode which one day
>> (probably) will be the default mode.
>
> At least before that happens, we should follow the current schema of
> file extents.
>
> If we just ignore problems that won't cause data loss and keep them,
> there will never be a good on-disk format schema.

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Qu Wenruo


On 2018-03-06 18:12, Filipe Manana wrote:
> On Tue, Mar 6, 2018 at 8:15 AM, Qu Wenruo  wrote:
>> There have been some btrfs corruption reports on the mailing list for a while,
> 
> There have been for years (well, since ever) many reports of different
> types of corruptions.
> Which kind of corruption are you referring to?
> 
>> although such corruption is pretty rare and almost impossible to
>> reproduce, with dm-log-writes we found it is highly related to the
>> v1 space cache.
>>
>> Unlike journal-based filesystems, btrfs relies completely on metadata
>> CoW to protect itself from power loss, which requires the extent
>> allocator to avoid allocating extents over ranges that are still in
>> use. Btrfs also uses the free space cache to speed up such
>> allocations.
>>
>> However there is a problem: the v1 space cache is not protected by
>> data CoW and can be corrupted during power loss. So btrfs does extra
>> checks on the free space cache, verifying its in-file csum, its
>> generation, and the free space recorded in the cache against the
>> extent tree.
>>
>> The problem is that, under heavy concurrency, the v1 space cache can
>> be corrupted even during normal operation without power loss.
>
> How?
>
>> And we believe a corrupted space cache can break btrfs metadata CoW
>> and lead to the rare corruption on the next power loss.
> 
> Which kind of corruption?
> 
>>
>> The most obvious symptom is a difference in free space:
>>
>> This will be caught by the kernel, but the check is quite weak, and
>> if the net free space change is 0 in one transaction, the corrupted
>> cache can still be loaded by the kernel.
>
> How can that happen? The only case I'm aware of, explained below,
> always leads to a difference (the space cache has less free space than
> what we actually have if we check the extent tree).
>
>>
>> In this case, btrfs check would report things like:
>> --
>> block group 298844160 has wrong amount of free space
>> failed to load free space cache for block group 298844160
>> --
> 
> This is normal, but not very common, due to tiny races that exist
> between committing a transaction (writing the free space caches) and
> running delalloc, for example (since reserving an extent while running
> delalloc doesn't join/start a transaction).

Well, at least I didn't find any place that releases space unprotected
by a transaction handle, so the cache's free space can only be less
than or equal to the real free space.

So in that case, a corrupted cache will never pass the free space
check, so it will never be loaded.

Another dead end unfortunately.

Thanks,
Qu
> 
>>
>> But considering the test case uses notreelog, btrfs won't do
>> sub-transaction commits (which don't increase the generation), so
>> each transaction should be consistent and nothing should be reported
>> at all.
>>
>> Furthermore, we can even find corrupted file extents like:
>> --
>> root 5 inode 261 errors 100, file extent discount
>> Found file extent holes:
>> start: 962560, len: 32768
>> ERROR: errors found in fs roots
> 
> Why do you think that's a corruption? Does it cause data loss or any
> user visible issue?
> 
> Having file extent holes not inserted happens when mixing buffered and
> direct IO writes to a file (and fsstress does that), for example:
> 
> create file
> buffered write at offset 0, length 64K
> direct IO write at offset at offset 64K, length 4K
> transaction commit
> power loss
> after this we get a missing 64K hole extent at offset 0 (at
> btrfs_file_write_iter we only add hole extents if the start offset is
> greater than the current i_size)
> 
> But this does not cause any problem for the user or the fs itself, and
> it's supposed to be like that in the NO_HOLES mode which one day
> (probably) will be the default mode.
> 
>> --
>>
>> Signed-off-by: Qu Wenruo 
>> ---
>>  common/dmlogwrites  |  72 +++
>>  tests/btrfs/159 | 141 
>> 
>>  tests/btrfs/159.out |   2 +
>>  tests/btrfs/group   |   1 +
>>  4 files changed, 216 insertions(+)
>>  create mode 100755 tests/btrfs/159
>>  create mode 100644 tests/btrfs/159.out
>>
>> diff --git a/common/dmlogwrites b/common/dmlogwrites
>> index 467b872e..54e7e242 100644
>> --- a/common/dmlogwrites
>> +++ b/common/dmlogwrites
>> @@ -126,3 +126,75 @@ _log_writes_cleanup()
>> $UDEV_SETTLE_PROG >/dev/null 2>&1
>> _log_writes_remove
>>  }
>> +
>> +# Convert log writes mark to entry number
>> +# Result entry number is output to stdout, could be empty if not found
>> +_log_writes_mark_to_entry_number()
>> +{
>> +   local _mark=$1
>> +   local ret
>> +
>> +   [ -z "$_mark" ] && _fail \
>> +   "mark must be given for _log_writes_mark_to_entry_number"
>> +
>> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
>> +   --end-mark $_mark 2> /dev/null)
>> +   [ -z "$ret" ] && return
>> +   ret=$(echo "$ret" | cut -f1 -d\@)
>> +   echo "mark $_mark has entry number $ret" >> $seqres.full
>> +   echo "$ret"
>> +}
>> +
>
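The helper above assumes replay-log's --find output has the form "entry@sector" and keeps only the entry number with cut. Since a real run needs a populated $LOGWRITES_DEV, the parsing step can be checked standalone with a hard-coded sample line:

```shell
# simulated replay-log --find output: "<entry>@<sector>"
sample="1024@204800"
# same parsing as _log_writes_mark_to_entry_number
entry=$(echo "$sample" | cut -f1 -d@)
echo "$entry"    # 1024
```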

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Nikolay Borisov


On 6.03.2018 12:53, Qu Wenruo wrote:
> 
> 
[snip]

> It breaks the rule that we shouldn't have holes in file extents.
> 
> IIRC Nikolay is trying to use inode_lock_shared() to solve this race.
> 

Unfortunately the inode_lock_shared approach is a no-go, since Filipe
objected to it quite adamantly. After the discussion that happened in
that thread I can say I'm almost convinced that the pair of
READDIO_LOCK and setsize *do* provide the necessary consistency for the
DIO case. However, at the moment there is a memory barrier missing.

But I think the DIO case is unrelated to the issue you are discussing
here, no?


[snip]


Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Qu Wenruo


On 2018-03-06 18:12, Filipe Manana wrote:
> On Tue, Mar 6, 2018 at 8:15 AM, Qu Wenruo  wrote:
>> There have been some btrfs corruption reports on the mailing list for a while,
> 
> There have been for years (well, since ever) many reports of different
> types of corruptions.
> Which kind of corruption are you referring to?

Transid error.

> 
>> although such corruption is pretty rare and almost impossible to
>> reproduce, with dm-log-writes we found it is highly related to the
>> v1 space cache.
>>
>> Unlike journal-based filesystems, btrfs relies completely on metadata
>> CoW to protect itself from power loss, which requires the extent
>> allocator to avoid allocating extents over ranges that are still in
>> use. Btrfs also uses the free space cache to speed up such
>> allocations.
>>
>> However there is a problem: the v1 space cache is not protected by
>> data CoW and can be corrupted during power loss. So btrfs does extra
>> checks on the free space cache, verifying its in-file csum, its
>> generation, and the free space recorded in the cache against the
>> extent tree.
>>
>> The problem is that, under heavy concurrency, the v1 space cache can
>> be corrupted even during normal operation without power loss.
>
> How?

At FUA writes, we can get a v1 space cache that passes the checksum
and generation checks but differs in free space.

>
>> And we believe a corrupted space cache can break btrfs metadata CoW
>> and lead to the rare corruption on the next power loss.
> 
> Which kind of corruption?

Transid related error.

> 
>>
>> The most obvious symptom is a difference in free space:
>>
>> This will be caught by the kernel, but the check is quite weak, and
>> if the net free space change is 0 in one transaction, the corrupted
>> cache can still be loaded by the kernel.
>
> How can that happen? The only case I'm aware of, explained below,
> always leads to a difference (the space cache has less free space than
> what we actually have if we check the extent tree).
>
>>
>> In this case, btrfs check would report things like:
>> --
>> block group 298844160 has wrong amount of free space
>> failed to load free space cache for block group 298844160
>> --
> 
> This is normal, but not very common, due to tiny races that exist
> between committing a transaction (writing the free space caches) and
> running delalloc, for example (since reserving an extent while running
> delalloc doesn't join/start a transaction).

This race explains a lot.

But could that cause a corrupted cache to be loaded after power loss,
and break metadata CoW?

At least at the point when the FUA happens, the free space cache can
pass both the csum and generation checks; only the free space
difference prevents it from being loaded.

> 
>>
>> But considering the test case uses notreelog, btrfs won't do
>> sub-transaction commits (which don't increase the generation), so
>> each transaction should be consistent and nothing should be reported
>> at all.
>>
>> Furthermore, we can even find corrupted file extents like:
>> --
>> root 5 inode 261 errors 100, file extent discount
>> Found file extent holes:
>> start: 962560, len: 32768
>> ERROR: errors found in fs roots
> 
> Why do you think that's a corruption? Does it cause data loss or any
> user visible issue?

It breaks the rule that we shouldn't have holes in file extents.

IIRC Nikolay is trying to use inode_lock_shared() to solve this race.

> 
> Having file extent holes not inserted happens when mixing buffered and
> direct IO writes to a file (and fsstress does that), for example:
> 
> create file
> buffered write at offset 0, length 64K
> direct IO write at offset at offset 64K, length 4K
> transaction commit
> power loss
> after this we get a missing 64K hole extent at offset 0 (at
> btrfs_file_write_iter we only add hole extents if the start offset is
> greater than the current i_size)
> 
> But this does not cause any problem for the user or the fs itself, and
> it's supposed to be like that in the NO_HOLES mode which one day
> (probably) will be the default mode.

At least before that happens, we should follow the current schema of
file extents.

If we just ignore problems that won't cause data loss and keep them,
there will never be a good on-disk format schema.

Thanks,
Qu

> 
>> --
>>
>> Signed-off-by: Qu Wenruo 
>> ---
>>  common/dmlogwrites  |  72 +++
>>  tests/btrfs/159 | 141 
>> 
>>  tests/btrfs/159.out |   2 +
>>  tests/btrfs/group   |   1 +
>>  4 files changed, 216 insertions(+)
>>  create mode 100755 tests/btrfs/159
>>  create mode 100644 tests/btrfs/159.out
>>
>> diff --git a/common/dmlogwrites b/common/dmlogwrites
>> index 467b872e..54e7e242 100644
>> --- a/common/dmlogwrites
>> +++ b/common/dmlogwrites
>> @@ -126,3 +126,75 @@ _log_writes_cleanup()
>> $UDEV_SETTLE_PROG >/dev/null 2>&1
>> _log_writes_remove
>>  }
>> +
>> +# Convert log writes mark to entry number
>> +# Result entry number is output to stdout, could be empty if not found
>> +_log_writes_mark_to_entry_number()

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Filipe Manana
On Tue, Mar 6, 2018 at 8:15 AM, Qu Wenruo  wrote:
> There have been some btrfs corruption reports on the mailing list for a while,

There have been for years (well, since ever) many reports of different
types of corruptions.
Which kind of corruption are you referring to?

> although such corruption is pretty rare and almost impossible to
> reproduce, with dm-log-writes we found it is highly related to the
> v1 space cache.
>
> Unlike journal-based filesystems, btrfs relies completely on metadata
> CoW to protect itself from power loss, which requires the extent
> allocator to avoid allocating extents over ranges that are still in
> use. Btrfs also uses the free space cache to speed up such
> allocations.
>
> However there is a problem: the v1 space cache is not protected by
> data CoW and can be corrupted during power loss. So btrfs does extra
> checks on the free space cache, verifying its in-file csum, its
> generation, and the free space recorded in the cache against the
> extent tree.
>
> The problem is that, under heavy concurrency, the v1 space cache can
> be corrupted even during normal operation without power loss.

How?

> And we believe a corrupted space cache can break btrfs metadata CoW
> and lead to the rare corruption on the next power loss.

Which kind of corruption?

>
> The most obvious symptom is a difference in free space:
>
> This will be caught by the kernel, but the check is quite weak, and
> if the net free space change is 0 in one transaction, the corrupted
> cache can still be loaded by the kernel.

How can that happen? The only case I'm aware of, explained below,
always leads to a difference (the space cache has less free space than
what we actually have if we check the extent tree).

>
> In this case, btrfs check would report things like:
> --
> block group 298844160 has wrong amount of free space
> failed to load free space cache for block group 298844160
> --

This is normal, but not very common, due to tiny races that exist
between committing a transaction (writing the free space caches) and
running delalloc, for example (since reserving an extent while running
delalloc doesn't join/start a transaction).

>
> But considering the test case uses notreelog, btrfs won't do
> sub-transaction commits (which don't increase the generation), so
> each transaction should be consistent and nothing should be reported
> at all.
>
> Furthermore, we can even find corrupted file extents like:
> --
> root 5 inode 261 errors 100, file extent discount
> Found file extent holes:
> start: 962560, len: 32768
> ERROR: errors found in fs roots

Why do you think that's a corruption? Does it cause data loss or any
user visible issue?

Having file extent holes not inserted happens when mixing buffered and
direct IO writes to a file (and fsstress does that), for example:

create file
buffered write at offset 0, length 64K
direct IO write at offset at offset 64K, length 4K
transaction commit
power loss
after this we get a missing 64K hole extent at offset 0 (at
btrfs_file_write_iter we only add hole extents if the start offset is
greater than the current i_size)

But this does not cause any problem for the user or the fs itself, and
it's supposed to be like that in the NO_HOLES mode which one day
(probably) will be the default mode.

> --
>
> Signed-off-by: Qu Wenruo 
> ---
>  common/dmlogwrites  |  72 +++
>  tests/btrfs/159 | 141 
> 
>  tests/btrfs/159.out |   2 +
>  tests/btrfs/group   |   1 +
>  4 files changed, 216 insertions(+)
>  create mode 100755 tests/btrfs/159
>  create mode 100644 tests/btrfs/159.out
>
> diff --git a/common/dmlogwrites b/common/dmlogwrites
> index 467b872e..54e7e242 100644
> --- a/common/dmlogwrites
> +++ b/common/dmlogwrites
> @@ -126,3 +126,75 @@ _log_writes_cleanup()
> $UDEV_SETTLE_PROG >/dev/null 2>&1
> _log_writes_remove
>  }
> +
> +# Convert log writes mark to entry number
> +# Result entry number is output to stdout, could be empty if not found
> +_log_writes_mark_to_entry_number()
> +{
> +   local _mark=$1
> +   local ret
> +
> +   [ -z "$_mark" ] && _fail \
> +   "mark must be given for _log_writes_mark_to_entry_number"
> +
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +   --end-mark $_mark 2> /dev/null)
> +   [ -z "$ret" ] && return
> +   ret=$(echo "$ret" | cut -f1 -d\@)
> +   echo "mark $_mark has entry number $ret" >> $seqres.full
> +   echo "$ret"
> +}
> +
> +# Find next fua write entry number
> +# Result entry number is output to stdout, could be empty if not found
> +_log_writes_find_next_fua()
> +{
> +   local _start_entry=$1
> +   local ret
> +
> +   if [ -z "$_start_entry" ]; then
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +   --next-fua 2> /dev/null)
> +   else
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +  

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Qu Wenruo


On 2018-03-06 17:03, Amir Goldstein wrote:
> On Tue, Mar 6, 2018 at 10:15 AM, Qu Wenruo  wrote:
>> There have been some btrfs corruption reports on the mailing list for
>> a while; although such corruption is pretty rare and almost impossible
>> to reproduce, with dm-log-writes we found it is highly related to the
>> v1 space cache.
>>
>> Unlike journal-based filesystems, btrfs relies completely on metadata
>> CoW to protect itself from power loss, which requires the extent
>> allocator to avoid allocating extents over ranges that are still in
>> use. Btrfs also uses the free space cache to speed up such
>> allocations.
>>
>> However there is a problem: the v1 space cache is not protected by
>> data CoW and can be corrupted during power loss. So btrfs does extra
>> checks on the free space cache, verifying its in-file csum, its
>> generation, and the free space recorded in the cache against the
>> extent tree.
>>
>> The problem is that, under heavy concurrency, the v1 space cache can
>> be corrupted even during normal operation without power loss.
>> And we believe a corrupted space cache can break btrfs metadata CoW
>> and lead to the rare corruption on the next power loss.
>>
>> The most obvious symptom is a difference in free space:
>>
>> This will be caught by the kernel, but the check is quite weak, and
>> if the net free space change is 0 in one transaction, the corrupted
>> cache can still be loaded by the kernel.
>>
>> In this case, btrfs check would report things like:
>> --
>> block group 298844160 has wrong amount of free space
>> failed to load free space cache for block group 298844160
>> --
>>
>> But considering the test case uses notreelog, btrfs won't do
>> sub-transaction commits (which don't increase the generation), so
>> each transaction should be consistent and nothing should be reported
>> at all.
>>
>> Furthermore, we can even find corrupted file extents like:
>> --
>> root 5 inode 261 errors 100, file extent discount
>> Found file extent holes:
>> start: 962560, len: 32768
>> ERROR: errors found in fs roots
>> --
>>
> 
> So what is the expectation from this test on upstream btrfs?
> Probable failure? Reliable failure?

Reliable failure, as the root cause is not fully exposed yet.

> Are there random seeds for fsstress that can make the test fail reliably?

Since concurrency is involved, I don't think a seed would help much.

> Or does failure also depend on IO timing and other uncontrolled parameters?

Currently, concurrency is the main factor.

>> +#
>> +#---
>> +# Copyright (c) SUSE.  All Rights Reserved.
> 
> 2018>
>> +#
[snip]
>> +
>> +_log_writes_replay_log_entry_range $prev >> $seqres.full 2>&1
>> +while [ ! -z "$cur" ]; do
>> +   _log_writes_replay_log_entry_range $cur $prev
>> +   # Capture the btrfs check output in a temp file, as we need to
>> +   # grep the output to find the cache corruption
>> +   $BTRFS_UTIL_PROG check --check-data-csum $SCRATCH_DEV &> $tmp.fsck
> 
> So by making this a btrfs specific test you avoid the need to mount/umount and
> revert to $prev. Right?

Yes. Especially since the notreelog mount option disables the journal-like behavior.

> 
> Please spell out the missing pieces for making a generic variant
> of this test, so if anyone wants to pick it up they have a good
> starting point.
> Or maybe you still intend to post a generic test as well?

I'm still working on the generic test, but the priority is fixing the
btrfs corruption.

For the missing pieces: we need dm-snapshot to let journal-based
filesystems replay their log without polluting the original device.

Despite that, current code should illustrate the framework.

> 
>> +
>> +   # A cache that passed the generation, csum and free space checks
>> +   # but is still corrupted will be reported as an error
>> +   if [ $? -ne 0 ]; then
>> +   cat $tmp.fsck >> $seqres.full
>> +   _fail "btrfs check found corruption"
>> +   fi
>> +
>> +   # The mount options have ruled out any possible factors affecting
>> +   # the space cache. And since we are at a FUA write, no generation
>> +   # related problem should happen anyway
>> +   if grep -q -e 'failed to load free space cache' $tmp.fsck; then
>> +   cat $tmp.fsck >> $seqres.full
>> +   _fail "btrfs check found invalid space cache"
>> +   fi
>> +
>> +   prev=$cur
>> +   cur=$(_log_writes_find_next_fua $prev)
>> +   [ -z "$cur" ] && break
>> +
>> +   # Same as above
>> +   cur=$(($cur + 1))
>> +done
>> +
>> +echo "Silence is golden"
>> +
>> +# success, all done
>> +status=0
>> +exit
>> diff --git a/tests/btrfs/159.out b/tests/btrfs/159.out
>> new file mode 100644
>> index ..e569e60c
>> --- /dev/null
>> +++ b/tests/btrfs/159.out
>> @@ -0,0 +1,2 @@
>> +QA output created by 159
>> +Silence is golden
>> diff --git a/tests/btrfs/group b/tests/btrfs/group
>> index 8007e07e..bc83db94 100644
>> --- a/tests/btrfs/group
>> +++ b/tests/btrfs/group
>> @@ -161,3 +161,4 @@
>>  156 auto quick
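The grep-based classification in the test above (tell "cache failed the loading checks" apart from an ordinary clean run) can be exercised standalone with canned btrfs check output; the temp file here just simulates $tmp.fsck:

```shell
tmpf=$(mktemp)
# canned btrfs check output, matching the messages quoted earlier
cat > "$tmpf" <<EOF
block group 298844160 has wrong amount of free space
failed to load free space cache for block group 298844160
EOF
# same test the loop performs on $tmp.fsck
if grep -q 'failed to load free space cache' "$tmpf"; then
	verdict="invalid space cache"
else
	verdict="cache ok"
fi
echo "$verdict"
rm -f "$tmpf"
```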

Re: [PATCH 3/3] fstests: btrfs: Add test case to check v1 space cache corruption

2018-03-06 Thread Amir Goldstein
On Tue, Mar 6, 2018 at 10:15 AM, Qu Wenruo  wrote:
> There have been some btrfs corruption reports on the mailing list for a while,
> although such corruption is pretty rare and almost impossible to
> reproduce, with dm-log-writes we found it is highly related to the
> v1 space cache.
>
> Unlike journal-based filesystems, btrfs relies completely on metadata
> CoW to protect itself from power loss, which requires the extent
> allocator to avoid allocating extents over ranges that are still in
> use. Btrfs also uses the free space cache to speed up such
> allocations.
>
> However there is a problem: the v1 space cache is not protected by
> data CoW and can be corrupted during power loss. So btrfs does extra
> checks on the free space cache, verifying its in-file csum, its
> generation, and the free space recorded in the cache against the
> extent tree.
>
> The problem is that, under heavy concurrency, the v1 space cache can
> be corrupted even during normal operation without power loss.
> And we believe a corrupted space cache can break btrfs metadata CoW
> and lead to the rare corruption on the next power loss.
>
> The most obvious symptom is a difference in free space:
>
> This will be caught by the kernel, but the check is quite weak, and
> if the net free space change is 0 in one transaction, the corrupted
> cache can still be loaded by the kernel.
>
> In this case, btrfs check would report things like:
> --
> block group 298844160 has wrong amount of free space
> failed to load free space cache for block group 298844160
> --
>
> But considering the test case uses notreelog, btrfs won't do
> sub-transaction commits (which don't increase the generation), so
> each transaction should be consistent and nothing should be reported
> at all.
>
> Furthermore, we can even find corrupted file extents like:
> --
> root 5 inode 261 errors 100, file extent discount
> Found file extent holes:
> start: 962560, len: 32768
> ERROR: errors found in fs roots
> --
>

So what is the expectation from this test on upstream btrfs?
Probable failure? Reliable failure?
Are there random seeds for fsstress that can make the test fail reliably?
Or does failure also depend on IO timing and other uncontrolled parameters?

> Signed-off-by: Qu Wenruo 
> ---
>  common/dmlogwrites  |  72 +++
>  tests/btrfs/159 | 141 
> 
>  tests/btrfs/159.out |   2 +
>  tests/btrfs/group   |   1 +
>  4 files changed, 216 insertions(+)
>  create mode 100755 tests/btrfs/159
>  create mode 100644 tests/btrfs/159.out
>
> diff --git a/common/dmlogwrites b/common/dmlogwrites
> index 467b872e..54e7e242 100644
> --- a/common/dmlogwrites
> +++ b/common/dmlogwrites
> @@ -126,3 +126,75 @@ _log_writes_cleanup()
> $UDEV_SETTLE_PROG >/dev/null 2>&1
> _log_writes_remove
>  }
> +
> +# Convert log writes mark to entry number
> +# Result entry number is output to stdout, could be empty if not found
> +_log_writes_mark_to_entry_number()
> +{
> +   local _mark=$1
> +   local ret
> +
> +   [ -z "$_mark" ] && _fail \
> +   "mark must be given for _log_writes_mark_to_entry_number"
> +
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +   --end-mark $_mark 2> /dev/null)
> +   [ -z "$ret" ] && return
> +   ret=$(echo "$ret" | cut -f1 -d\@)
> +   echo "mark $_mark has entry number $ret" >> $seqres.full
> +   echo "$ret"
> +}
> +
> +# Find next fua write entry number
> +# Result entry number is output to stdout, could be empty if not found
> +_log_writes_find_next_fua()
> +{
> +   local _start_entry=$1
> +   local ret
> +
> +   if [ -z "$_start_entry" ]; then
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +   --next-fua 2> /dev/null)
> +   else
> +   ret=$($here/src/log-writes/replay-log --find --log $LOGWRITES_DEV \
> +   --next-fua --start-entry $_start_entry 2> /dev/null)
> +   fi
> +   [ -z "$ret" ] && return
> +
> +   ret=$(echo "$ret" | cut -f1 -d\@)
> +   echo "next fua is entry number $ret" >> $seqres.full
> +   echo "$ret"
> +}
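A runnable sketch of how a test would walk FUA points with this helper. The helper is stubbed to return entries from a fixed list instead of querying replay-log (the entry numbers are invented), so only the loop structure is real:

```shell
fua_list="10 25 40"    # fake FUA entry numbers
find_next_fua() {      # stub: first fake FUA entry >= $1, empty if none
	for e in $fua_list; do
		[ "$e" -ge "$1" ] && { echo "$e"; return; }
	done
}
visited=""
cur=$(find_next_fua 0)
while [ -n "$cur" ]; do
	visited="$visited$cur "
	# a real test would replay up to $cur and run btrfs check here
	cur=$(find_next_fua $((cur + 1)))
done
echo "checked FUA entries: $visited"
```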
> +
> +# Replay log range to specified entry
> +# $1:  End entry. The last entry will *NOT* be replayed
> +# $2:  Start entry. If not specified, start from the first entry.
> +_log_writes_replay_log_entry_range()
> +{
> +   local _end=$1
> +   local _start=$2
> +
> +   [ -z "$_end" ] && _fail \
> +   "end entry must be specified for _log_writes_replay_log_entry_range"
> +
> +   if [[ "$_start" && "$_start" -gt "$_end" ]]; then
> +   _fail \
> +   "wrong parameter order for _log_writes_replay_log_entry_range:start=$_start end=$_end"
> +   fi
> +
> +   # The original replay-log won't replay the last entry, so increase
> +   # the entry number here to ensure the end entry gets replayed
> +   if [ -z "$_start" ]; then
> +   echo "=== replay to $_end ===