Thanks for the reply, it's appreciated!
> Mounting raid1 btrfs writable in degraded mode creates chunks with
> the single profile. This is a long-standing issue. What is rather
> surprising is that you apparently have a chunk size of 819GiB, which is
> suspiciously close to 10% of 8TiB. btrfs indeed limits chunk
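The 10% observation can be checked with quick arithmetic (a sketch of the cap the quoted text alludes to, not the kernel allocator itself):

```python
# Simplified model: btrfs caps a data chunk at roughly 10% of the
# total filesystem size; on an 8 TiB filesystem that works out to
# the suspicious 819 GiB figure quoted above.
TIB = 2**40
GIB = 2**30

total_bytes = 8 * TIB
max_data_chunk = total_bytes / 10  # 10% cap (simplified model)

print(round(max_data_chunk / GIB, 1))  # -> 819.2
```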
Hi Khaled,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on kdave/for-next]
[also build test WARNING on v5.12-rc8 next-20210420]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--bas
On 2021/4/20 10:19 PM, Gervais, Francois wrote:
On 2021/4/19 10:56 PM, Gervais, Francois wrote:
My bad, wrong number.
The correct number command is:
# btrfs ins dump-tree -b 790151168 /dev/loop0p3
root@debug:~# btrfs ins dump-tree -b 790151168 /dev/loop0p3
btrfs-progs v5.7
[...]
On 2021/04/21 7:33, Neal Gompa wrote:
> On Thu, Sep 3, 2020 at 4:50 AM Damien Le Moal wrote:
>>
>> On 2020/09/03 16:34, Johannes Thumshirn wrote:
>>> On 02/09/2020 19:37, Josef Bacik wrote:
>>>> I know Johannes is working on something magical,
>>>> but IDK what it is or how far out it is.
On Thu, Sep 3, 2020 at 4:50 AM Damien Le Moal wrote:
>
> On 2020/09/03 16:34, Johannes Thumshirn wrote:
> > On 02/09/2020 19:37, Josef Bacik wrote:
> >> I know Johannes is working on something magical,
> >> but IDK what it is or how far out it is.
> >
> > That something is still slide-ware. I hope
In preparation for another user, rename unused_bgs_mutex to
reclaim_bgs_lock.
Signed-off-by: Johannes Thumshirn
Reviewed-by: Filipe Manana
Reviewed-by: Josef Bacik
---
fs/btrfs/block-group.c | 6 +++---
fs/btrfs/ctree.h | 3 ++-
fs/btrfs/disk-io.c | 6 +++---
fs/btrfs/volu
When relocating a block group, the freed-up space is not discarded in one
big block; each extent is discarded on its own with -o discard=sync.
For a zoned filesystem we need to discard the whole block group at once,
so btrfs_discard_extent() will translate the discard into a
REQ_OP_ZONE_RESET op
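The difference in I/O pattern can be modeled in a few lines (a hedged sketch of the behavior described; the function name is invented for illustration, not kernel code):

```python
def discard_ops_for_relocation(extent_count: int, zoned: bool) -> int:
    """Model of the described behavior: with -o discard=sync each
    freed extent gets its own discard request, while a zoned
    filesystem resets the whole block group (its zone) with a
    single REQ_OP_ZONE_RESET."""
    if zoned:
        return 1
    return extent_count

print(discard_ops_for_relocation(1000, zoned=False))  # 1000 discards
print(discard_ops_for_relocation(1000, zoned=True))   # 1 zone reset
```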
When a file gets deleted on a zoned filesystem, the space freed is not
returned to the block group's free space, but is migrated to
zone_unusable.
As this zone_unusable space is behind the current write pointer, it is not
possible to use it for new allocations. In the current implementation
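A minimal model of the accounting described above (class and member names are invented for illustration; the real accounting lives in the kernel's free-space handling):

```python
class ZonedBlockGroupModel:
    """Toy model: on a zoned filesystem, freed space behind the
    write pointer becomes zone_unusable instead of returning to
    the block group's free space."""
    def __init__(self, length):
        self.length = length
        self.used = 0
        self.zone_unusable = 0

    def allocate(self, n):
        self.used += n

    def delete(self, n):
        # Freed bytes sit behind the write pointer; they cannot be
        # reallocated until the zone is reset.
        self.used -= n
        self.zone_unusable += n

    @property
    def free(self):
        return self.length - self.used - self.zone_unusable

bg = ZonedBlockGroupModel(256 * 2**20)
bg.allocate(100 * 2**20)
bg.delete(40 * 2**20)
print(bg.zone_unusable // 2**20)  # 40 MiB freed but not reusable
```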
When a file gets deleted on a zoned file system, the space freed is not
returned back into the block group's free space, but is migrated to
zone_unusable.
As this zone_unusable space is behind the current write pointer it is not
possible to use it for new allocations. In the current implementation
On 19.04.2021 18:22, Jonah Sabean wrote:
> I'm running Ubuntu 21.04 (technically not a stable "release" yet, but
> it will be in a few days, so if this is an Ubuntu-specific issue I'd
> like to report it before it is!).
>
> The btrfs volume in question is two 8TB hard disks that were in RAID1
> at
On Tue, Apr 20, 2021 at 07:40:37AM +, Johannes Thumshirn wrote:
> On 19/04/2021 19:13, David Sterba wrote:
> > On Mon, Apr 19, 2021 at 04:41:00PM +0900, Johannes Thumshirn wrote:
> >> When relocating a block group the freed up space is not discarded in one
> >> big block, but each extent is dis
On Fri, Apr 09, 2021 at 01:24:20PM +0800, Qu Wenruo wrote:
> Due to the pagecache limit of 32bit systems, btrfs can't access metadata
> at or beyond (ULONG_MAX + 1) << PAGE_SHIFT.
> This is 16T for 4K page size while 256T for 64K page size.
>
> And unlike other fses, btrfs uses internally mapped u
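The quoted limits can be verified with quick arithmetic (a sketch; PAGE_SHIFT values assumed: 12 for 4K pages, 16 for 64K pages):

```python
# On 32-bit, page cache indexes are unsigned long (32 bits), so the
# highest addressable metadata byte is (ULONG_MAX + 1) << PAGE_SHIFT.
ULONG_MAX_32 = 2**32 - 1

def metadata_limit(page_shift):
    return (ULONG_MAX_32 + 1) << page_shift

TIB = 2**40
print(metadata_limit(12) // TIB)  # 4K pages  -> 16 (TiB)
print(metadata_limit(16) // TIB)  # 64K pages -> 256 (TiB)
```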
On Wed, Apr 14, 2021 at 02:05:26PM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> When doing a device replace on a zoned filesystem, if we find a block
> group with ->to_copy == 0, we jump to the label 'done', which will result
> in later calling btrfs_unfreeze_block_group(), even th
On Tue, Apr 20, 2021 at 10:55:44AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> Commit dbcc7d57bffc0c ("btrfs: fix race when cloning extent buffer during
> rewind of an old root"), fixed a race when we need to rewind the extent
> buffer of an old root. It was caused by picking a ne
On Tue, Apr 20, 2021 at 10:55:12AM +0100, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> When creating a subvolume we allocate an extent buffer for its root node
> after starting a transaction. We setup a root item for the subvolume that
> points to that extent buffer and then attempt to in
I have been experimenting with both BTRFS and EXT4 on a hacked-up
external RAID5 array made of USB disks. Yes, I know, crazy -- but
they're lying around, so I figured why not use them for long-term storage?
Anyway, in my latest transfer of multiple terabytes of data, to an EXT4
filesystem (it
As reported by the Coverity static analysis,
the variable zone is not initialized, which
may cause a failed assertion.
Addresses-Coverity: ("Uninitialized variables")
Signed-off-by: Khaled ROMDHANI
---
v3: catch default as an assertion failure
as proposed by David Sterba.
---
fs/btrfs/zoned.c |
> On 2021/4/19 10:56 PM, Gervais, Francois wrote:
>>> My bad, wrong number.
>>>
>>> The correct number command is:
>>> # btrfs ins dump-tree -b 790151168 /dev/loop0p3
>>
>>
>> root@debug:~# btrfs ins dump-tree -b 790151168 /dev/loop0p3
>> btrfs-progs v5.7
> [...]
>> item 4 key (5007 INODE_ITE
On Tue, Apr 20, 2021 at 01:22:14PM +0300, Dan Carpenter wrote:
> On Sat, Apr 17, 2021 at 04:36:16PM +0100, Khaled ROMDHANI wrote:
> > As reported by the Coverity static analysis.
> > The variable zone is not initialized which
> > may cause a failed assertion.
> >
> > Addresses-Coverity: ("Uniniti
On Mon, Apr 19, 2021 at 07:32:25PM +0200, David Sterba wrote:
> On Sat, Apr 17, 2021 at 04:36:16PM +0100, Khaled ROMDHANI wrote:
> > As reported by the Coverity static analysis.
> > The variable zone is not initialized which
> > may cause a failed assertion.
> >
> > Addresses-Coverity: ("Uninitia
On Sat, Apr 17, 2021 at 04:36:16PM +0100, Khaled ROMDHANI wrote:
> As reported by the Coverity static analysis.
> The variable zone is not initialized which
> may cause a failed assertion.
>
> Addresses-Coverity: ("Uninitialized variables")
> Signed-off-by: Khaled ROMDHANI
> ---
> v2: add a defa
From: Filipe Manana
Commit dbcc7d57bffc0c ("btrfs: fix race when cloning extent buffer during
rewind of an old root"), fixed a race when we need to rewind the extent
buffer of an old root. It was caused by picking a new mod log operation
for the extent buffer while getting a cloned extent buffer
From: Filipe Manana
When creating a subvolume we allocate an extent buffer for its root node
after starting a transaction. We setup a root item for the subvolume that
points to that extent buffer and then attempt to insert the root item into
the root tree - however if that fails, due to -ENOMEM f
On 19/04/2021 19:29, David Sterba wrote:
> On Mon, Apr 19, 2021 at 04:41:02PM +0900, Johannes Thumshirn wrote:
>> --- a/fs/btrfs/zoned.h
>> +++ b/fs/btrfs/zoned.h
>> @@ -9,6 +9,8 @@
>> #include "disk-io.h"
>> #include "block-group.h"
>>
>> +#define DEFAULT_RECLAIM_THRESH 75
>
> This is not a .
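For context, the constant gates automatic reclaim of zoned block groups; a simplified model of how such a percentage threshold is typically applied (names and the exact condition are illustrative assumptions, not the kernel's code):

```python
DEFAULT_RECLAIM_THRESH = 75  # percent, from the hunk quoted above

def should_reclaim(zone_unusable, length, thresh=DEFAULT_RECLAIM_THRESH):
    """Hypothetical check: mark a block group for reclaim once the
    share of zone_unusable space reaches the threshold. The real
    kernel condition may differ; this only illustrates the role
    of a percentage constant like the one under review."""
    return zone_unusable * 100 >= length * thresh

print(should_reclaim(80, 100))  # True: 80% unusable >= 75%
print(should_reclaim(10, 100))  # False
```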
On 19/04/2021 19:20, David Sterba wrote:
> On Mon, Apr 19, 2021 at 04:41:02PM +0900, Johannes Thumshirn wrote:
>> +void btrfs_reclaim_bgs_work(struct work_struct *work)
>> +{
>> +	struct btrfs_fs_info *fs_info =
>> +			container_of(work, struct btrfs_fs_info, reclaim_bgs_work);
>> +
On 19/04/2021 19:13, David Sterba wrote:
> On Mon, Apr 19, 2021 at 04:41:00PM +0900, Johannes Thumshirn wrote:
>> When relocating a block group the freed up space is not discarded in one
>> big block, but each extent is discarded on its own with -o discard=sync.
>>
>> For a zoned filesystem we need
Currently mkfs.btrfs will output a warning message if the sectorsize is
not the same as page size:
WARNING: the filesystem may not be mountable, sectorsize 4096 doesn't match
page size 65536
But since btrfs subpage support for 64K page size is coming, this
output is populating the golden outpu
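The mismatch mkfs.btrfs warns about can be checked from userspace (a sketch; the 4096 here stands in for the filesystem's sectorsize):

```python
import os

# Page size of the running kernel; e.g. 4096 on x86_64, 65536 on
# some arm64/ppc64 configurations.
page_size = os.sysconf("SC_PAGE_SIZE")
sectorsize = 4096  # assumed sectorsize for illustration

if sectorsize != page_size:
    print(f"WARNING: sectorsize {sectorsize} doesn't match page size {page_size}")
else:
    print("sectorsize matches page size")
```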