Currently the free space cache inode size is determined by two factors:
- block group size
- PAGE_SIZE
This means that, for the same sized block group, a different PAGE_SIZE
will result in a different inode size.
This is not a good thing for subpage support, so change the
requirement for PAGE_SIZE
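The PAGE_SIZE dependency can be illustrated with a rough userspace sketch, approximating the kernel's cache_save_setup() sizing (16 cache pages per 256 MiB of block group; the helper name and exact constants here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Rough model of v1 cache inode sizing: 16 "pages" of cache space per
 * 256 MiB chunk of the block group, so the result scales with PAGE_SIZE. */
static uint64_t cache_inode_size(uint64_t bg_len, uint64_t page_size)
{
    uint64_t num_pages = bg_len / (256ULL << 20);   /* per-256MiB chunks */

    if (!num_pages)
        num_pages = 1;          /* small block groups still get one chunk */
    return num_pages * 16 * page_size;
}
```

For a 1 GiB block group this gives 256 KiB with 4 KiB pages but 4 MiB with 64 KiB pages, which is exactly the inconsistency the patch above wants to remove.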
test failed for case 037-freespacetree-repair
The test tries to corrupt the FST, run btrfs check read-only, then repair
the FST using btrfs check. The above case failed at the second read-only
check step.
The test log said "cache and super generation don't match, space cache
will be invalidated", which is printed by validate_free_space_cache().
If cache_generation of the superblock is not -1ULL, validate_f
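The generation rule being described can be sketched as follows (a hypothetical helper, not the actual btrfs-progs code): cache_generation == -1ULL means "no v1 cache", and an existing cache is only trusted when it matches the super block's generation.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the v1 cache validity rule: 0 = no cache, 1 = cache trusted,
 * -1 = "cache and super generation don't match" (cache invalidated). */
static int v1_cache_state(uint64_t cache_gen, uint64_t super_gen)
{
    if (cache_gen == (uint64_t)-1)
        return 0;               /* no v1 space cache recorded */
    if (cache_gen != super_gen)
        return -1;              /* stale: will be invalidated */
    return 1;                   /* present and trusted */
}
```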
On Wed, Feb 3, 2021 at 10:15 PM Martin Raiber wrote:
>
> Hi,
>
> I've been looking a bit into the btrfs space cache and came to the following
> conclusions. Please correct me if I'm wrong:
>
> 1. The space cache mount option only modifies how the space cache is persisted
> and not the in-memory structures (hence why I have 2.3 GiB of
> btrfs_free_space_bitmap
On Fri, Jan 22, 2021 at 07:07:45PM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> After a sudden power failure we may end up with a space cache on disk that
> is not valid and needs to be rebuilt from scratch.
>
> If that happens, during log replay when we attempt to pin an extent buffer
> from a log tree, at btrfs_pin_extent_for_log_replay(), we do not wait for

From: Filipe Manana
During log replay we first start by walking the log trees and pinning the
ranges of their extent buffers, through calls to the function
btrfs_pin_extent_for_log_replay().
However, if the space cache for a block group is invalid and needs to be
rebuilt, we can fail the log replay
On Tue, Dec 29, 2020 at 09:34:51AM +, Stéphane Lesimple wrote:
> December 29, 2020 1:32 AM, "Qu Wenruo" wrote:
>
> > There are cases where v1 free space cache is still left while user has
> > already enabled v2 cache.
> >
> > In that case, we still
December 30, 2020 6:52 AM, "Qu Wenruo" wrote:
> BTW, would you mind dumping the root tree?
>
> # btrfs ins dump-tree -t root
>
> I'm more interested in the half dropped v1 space cache. Currently
> although they are sane (with proper inode items), I'm a
On 2020/12/29 8:38 AM, Qu Wenruo wrote:
In delete_v1_space_cache(), if we find a leaf whose owner is the tree root,
and we can't grab the free space cache inode, then we return -ENOENT.
However this would make the caller, add_data_references(), consider
this as a critical error, and
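The fix direction being discussed can be sketched like this (a hypothetical reduction, not the actual kernel code): a missing v1 cache inode means there is simply nothing to delete, so -ENOENT should not be propagated as a fatal error that aborts the balance.

```c
#include <assert.h>
#include <errno.h>

/* Sketch: map "cache inode not found" to success, since the v1 cache is
 * already gone; real errors still propagate to add_data_references(). */
static int delete_v1_space_cache_result(int lookup_ret)
{
    if (lookup_ret == -ENOENT)
        return 0;           /* nothing to delete: not a critical error */
    return lookup_ret;
}
```

This also explains the "balance: ended with status: -2" messages seen later in this thread: -2 is -ENOENT leaking out.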
>> mode 0600 links 1 uid 0 gid 0 rdev 0
>> sequence 116552 flags 0x1b(NODATASUM|NODATACOW|NOCOMPRESS|PREALLOC)
>> atime 0.0 (1970-01-01 01:00:00)
>> ctime 1567904822.739884119 (2019-09-08 03:07:02)
>> mtime 0.0 (1970-01-01 01:00:00)
>> otime 0.0 (1970-01-01 01:00:00)
>> item 27 key (51933 EXTENT_DATA 0) itemoff 9854 itemsize 53
>> generation 1517381 type 2 (prealloc)
>> prealloc data disk byte 34626327621632 nr 262144
Got the point now.
The type is preallocated, which means we haven't yet written the space cache
into it.
But the code only checks the regular file extent (written, not
preallocated).
So the proper fix would look like this:
diff --git a/fs/btrfs/relocation.c b/fs
root data bytenr 42678619897856 refs 1
item 33 key (52268 144 5) itemoff 8471 itemsize 21
item 34 key (52453 132 0) itemoff 8032 itemsize 439
root data bytenr 42705323196416 refs 1
item 35 key (52453 144 5) itemoff 8011 itemsize 21
item 36 key (58841 132 0) itemoff 7572 itemsize 439
root data bytenr 42677453111296 refs 1
item 37 key (58841 144 5) itemoff 7545 itemsize 27
BTRFS warning (device dm-10): leftover v1 space cache found, please use
btrfs-check --clear-space-cache v1 to clean it up
BTRFS info (device dm-10): balance: ended with status: -2
Regards,
Stéphane.
oup_cache(leaf->fs_info, block_group, NULL,
space_cache_ino);
return ret;
Thanks,
Qu
[ 451.026353] BTRFS warning (device dm-10): leftover v1 space cache found,
please use btrfs-check --clear-space-cache v1 to clean it up
[ 463.501781] BTRFS info (device dm-10): balance: ended with status: -2
Regards,
Stéphane.
In delete_v1_space_cache(), if we find a leaf whose owner is tree root,
and we can't grab the free space cache inode, then we return -ENOENT.
However this would make the caller, add_data_references(), consider
this as a critical error, and abort the current data balance.
This happens for fs
There are cases where v1 free space cache is still left while user has
already enabled v2 cache.
In that case, we still want to force v1 space cache cleanup in
btrfs-check.
This patch will remove the v2 check if we're cleaning up the v1 cache,
allowing us to clean up the leftovers.
Signed-off-by: Qu Wenruo
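The relaxed check can be sketched as a small predicate (hypothetical names, not the actual btrfs-progs code): clearing leftover v1 cache is allowed even with v2 enabled, while writing v1 cache under v2 is still refused.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the relaxed rule: v1 cleanup no longer bails out just
 * because the v2 (free space tree) cache is enabled. */
static bool may_touch_v1_cache(bool v2_enabled, bool clearing)
{
    if (clearing)
        return true;        /* cleanup of v1 leftovers is always fine */
    return !v2_enabled;     /* but don't write a v1 cache under v2 */
}
```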
On Mon, Dec 07, 2020 at 09:56:12PM +0800, Zhihao Cheng wrote:
> Fix to return the error code (instead of always 0) when memory allocation
> fails in __load_free_space_cache().
>
> This lacks the analysis of consequences, so there's only one caller and
By "This lacks the analysis of consequences" I w
The page we were trying to drop had a page->private, but had no
page->mapping, and so called drop_buffers, assuming that we had a
buffer_head on the page, and then panicked trying to deref 1, which is
our page->private for data pages.
This is happening because we're truncating the free space cache while
we're trying to load the free space cache. This isn't supposed to
happen, and I'll fix tha
From: Filipe Manana
In order to check that the filesystem generation does not change after a
failure to set a property, the test expects a specific generation number
of 7 in its golden output. That currently works except when using the
v2 space cache mount option (MOUNT_OPTIONS="-o space_cache=v2"), since
the filesystem generation is 8, because creating a v2 space cache adds
an additional transaction commit. So update the test to not hardcode
specific generation numbers in its golden output.
> Is the lack of COW repairs with btrfs check solved? Can this file system
> be fixed? Maybe --init-extent-tree is worth a shot?

The latest btrfs check has solved the problem of bad CoW; in fact it was
solved in the btrfs-progs v5.1 release.
So, if you run btrfs check --clear-space-cache v1 again, it shouldn't
cause the transid mismatch problem any more.
Although IIRC that fs already got transid errors, thus the enhanced
On Sun, Mar 10, 2019 at 5:20 PM Chris Murphy wrote:
>
> On Sat, Mar 2, 2019 at 11:18 AM Chris Murphy wrote:
> >
> > Sending URL for dump-tree output offlist. Conversation should still be
> > on-list.
>
>
> Any more information required from me at this point?
This file system has been on a shelf
The CRC checksum in the free space cache is not dependent on the super
block's csum_type field but is always CRC32C.
So use btrfs_crc32c() and btrfs_crc32c_final() instead of btrfs_csum_data()
and btrfs_csum_final() for computing these checksums.
Signed-off-by: Johannes Thumshirn
---
fs/
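For reference, CRC32C (Castagnoli) differs from plain CRC32 only in its polynomial. A plain bitwise implementation looks like this (the kernel's btrfs_crc32c() wraps an optimized version of the same math; this sketch is for illustration only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C: reflected Castagnoli polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const void *data, size_t len)
{
    const uint8_t *p = data;

    crc = ~crc;                         /* pre-invert, as CRC32C requires */
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
    }
    return ~crc;                        /* post-invert (the "final" step) */
}
```

The standard check value is crc32c(0, "123456789", 9) == 0xE3069283, which is a quick way to tell CRC32C apart from the ISO CRC32 used elsewhere.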
Just like lowmem mode, also check and repair free space cache inode
item.
And since we don't really have a good timing/function to check free
space cache inodes, we use the same common mode
check_repair_free_space_inode() when iterating root tree.
Signed-off-by: Qu Wenruo
---
check/main.c
00:00)
ctime 1553491158.189771625 (2019-03-25 13:19:18)
mtime 0.0 (1970-01-01 08:00:00)
otime 0.0 (1970-01-01 08:00:00)
There is a report of such a problem on the mailing list.
This patch will check and repair inode items of free space cache inodes in
lowmem
The image has one free space cache inode with invalid mode (0).
item 9 key (256 INODE_ITEM 0) itemoff 13702 itemsize 160
generation 30 transid 30 size 65536 nbytes 1507328
block group 0 mode 0 links 1 uid 0 gid 0 rdev 0
sequence 23 flags 0x1b
On 2019/3/29 8:05 PM, Nikolay Borisov wrote:
>
>
> On 25.03.19 г. 10:22 ч., Qu Wenruo wrote:
>> The image has one free space cache inode with invalid mode (0).
>> item 9 key (256 INODE_ITEM 0) itemoff 13702 itemsize 160
>> generation 30 t
On 29.03.19, 13:02, Qu Wenruo wrote:
> [snip]
>>> +/*
>>> + * For free space inodes, we can't call check_inode_item() as free space
>>> + * cache inode doesn't have INODE_REF.
>>> + * We just check its inode mode.
>>> + */
>>> +int check_repair_free_space_inode(struct btrfs_fs_info *fs_
This patch will check and repair inode items of free space cache inodes in
lowmem mode.
Since free space cache inodes don't have INODE_REF but still have 1
link, we can't use check_inode_item() directly.
Instead we only check the inode mode, as that's the important part.
The check and repair function: check_repair_f
Two more strange things:
# btrfs check -r 930086912 /dev/mapper/sdd
Opening filesystem to check...
parent transid verify failed on 930086912 wanted 4514 found 4076
parent transid verify failed on 930086912 wanted 4514 found 4076
parent transid verify failed on 930086912 wanted 4514 found 4076
pare
OK this is screwy. The super has a totally different generation for
the root tree than any of the backup roots.
$ sudo btrfs rescue super -v /dev/mapper/sdd
All Devices:
Device: id = 1, name = /dev/mapper/sdc
Device: id = 2, name = /dev/mapper/sdd
Before Recovering:
[All good supers]:
Problem happens with btrfsprogs 4.19.1
https://bugzilla.kernel.org/show_bug.cgi?id=202717
That bug report includes the trace in user space; but I've also
processed the resulting coredump file with gdb and attached that to
the bug report.
Before using --clear-space-cache v1, btrfs scru
If we're allocating a new space cache inode it's likely going to be
under a transaction handle, so we need to use memalloc_nofs_save() in
order to avoid deadlocks, and more importantly lockdep messages that
make xfstests fail.
Reviewed-by: Omar Sandoval
Signed-off-by: Josef Bacik
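The memalloc_nofs_save()/memalloc_nofs_restore() pattern can be modeled in userspace like this (the real helpers live in linux/sched/mm.h and act on the task's flags; the flag value and globals here are purely illustrative):

```c
#include <assert.h>

/* Userspace model of scoped NOFS: while the flag is set, every allocation
 * in the region behaves as GFP_NOFS, so a transaction-holding path can't
 * recurse into filesystem reclaim and deadlock. */
static unsigned int task_flags;
#define PF_MEMALLOC_NOFS 0x40000u   /* illustrative value only */

static unsigned int memalloc_nofs_save(void)
{
    unsigned int old = task_flags & PF_MEMALLOC_NOFS;

    task_flags |= PF_MEMALLOC_NOFS;     /* enter NOFS scope */
    return old;                          /* remember prior state for nesting */
}

static void memalloc_nofs_restore(unsigned int old)
{
    task_flags = (task_flags & ~PF_MEMALLOC_NOFS) | old;
}
```

Returning and restoring the old flag is what makes the scopes nest safely: an inner restore does not drop NOFS if an outer scope is still active.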
No need to abort checking: especially for a RO check, free space cache
errors are meaningless, while the errors in the fs/extent trees are more
interesting for developers.
So continue checking even if something in the free space cache is wrong.
Reported-by: Etienne Champetier
Signed-off-by: Qu Wenruo
---
check/main.c | 1
If we're allocating a new space cache inode it's likely going to be
under a transaction handle, so we need to use GFP_NOFS to keep from
deadlocking. Otherwise GFP_KERNEL is fine.
Signed-off-by: Josef Bacik
---
fs/btrfs/free-space-cache.c | 5 +
fs/btrfs/inode.c| 3 ++-
On Sun, Jul 01, 2018 at 10:45:26AM +0800, Qu Wenruo wrote:
> In btrfs_add_free_space(), if the free space to be added is already
> here, we trigger ASSERT() which is just another BUG_ON().
>
> Let's remove such BUG_ON() at all.
>
> Reported-by: Lewis Diamond
> Signed-off-by: Qu Wenruo
Applied,
On 1.07.2018 05:45, Qu Wenruo wrote:
> In btrfs_add_free_space(), if the free space to be added is already
> here, we trigger ASSERT() which is just another BUG_ON().
>
> Let's remove such BUG_ON() at all.
>
> Reported-by: Lewis Diamond
> Signed-off-by: Qu Wenruo
Reviewed-by: Nikolay Borisov
In btrfs_add_free_space(), if the free space to be added is already
present, we trigger an ASSERT(), which is just another BUG_ON().
Let's remove such BUG_ON() altogether.
Reported-by: Lewis Diamond
Signed-off-by: Qu Wenruo
---
free-space-cache.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
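The shape of the change can be sketched as follows (a hypothetical reduction of the patch's intent, not its exact code): report the duplicate range as an error instead of crashing the kernel.

```c
#include <assert.h>
#include <errno.h>

/* Sketch: when the range being added already exists in the free space
 * cache, return an error to the caller rather than ASSERT()/BUG_ON(). */
static int add_free_space(int already_present)
{
    if (already_present)
        return -EEXIST;     /* report, don't crash */
    return 0;
}
```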
From: Omar Sandoval
Deleted free space cache inodes also get an orphan item in the root
tree, but we shouldn't report those as deleted subvolumes. Deleted
subvolumes will still have the root item, so we can just do an extra
tree search.
Reported-by: Tomohiro Misono
Signed-off-by: Omar Sandoval
On Thu, Mar 08, 2018 at 03:02:31PM +0800, Qu Wenruo wrote:
> When we found free space difference between free space cache and block
> group item, we just discard this free space cache.
>
> Normally such difference is caused by btrfs_reserve_extent() called by
> delalloc wh
On Fri, Mar 09, 2018 at 10:06:02AM +0200, Nikolay Borisov wrote:
> >>> + * space cache has less free space, and both kernel just discard
> >>> + * such cache. But if we find some case where free space cache
> >>> + * has more free
When we find a free space difference between the free space cache and the
block group item, we just discard this free space cache.
Normally such a difference is caused by btrfs_reserve_extent() called by
delalloc, which is outside of a transaction.
And since all btrfs_release_extent() is called with a
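The consistency check being discussed amounts to a simple comparison (a hypothetical helper, not the actual kernel code): the free space the v1 cache records must equal the block group's length minus the used bytes from the block group item, otherwise the cache is discarded and rebuilt.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch: the v1 cache is only trusted when its recorded free space
 * agrees with the block group item's accounting. */
static bool v1_cache_consistent(uint64_t cached_free,
                                uint64_t bg_length, uint64_t bg_used)
{
    return cached_free == bg_length - bg_used;
}
```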
There have been (since ever) many reports of different types of
corruptions.
Which kind of corruption are you referring to?

> although such corruption is pretty rare and almost impossible to
> reproduce, with dm-log-writes we found it's highly related to the v1 space
> cache.
>
> Unlike journal based filesystems, btrfs completely relies on