inode_init_always() started setting the i_state field to 0.
Signed-off-by: Mateusz Guzik
---
fs/bcachefs/fs.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/bcachefs/fs.c b/fs/bcachefs/fs.c
index 514bf83ebe29..f9044da417ac 100644
--- a/fs/bcachefs/fs.c
+++ b/fs/bcachefs/fs.c
@@ -230,7 +230,6 @@
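The hunk itself is cut off above; a hedged reconstruction of the one-line
removal (assuming the bcachefs alloc path cleared the field by hand, as the
message implies; the exact placement is a guess):

-	inode->v.i_state = 0;	/* redundant once inode_init_always() zeroes it */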
inode_init_always() started setting the i_state field to 0.
Signed-off-by: Mateusz Guzik
---
fs/xfs/xfs_icache.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index d31a2c1ac00a..088ac200b026 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
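The hunk is truncated; a hedged reconstruction consistent with the
1-insertion/2-deletion stats (assuming the change lands in xfs_inode_alloc(),
which historically zeroed i_state by hand):

-	/* VFS doesn't initialise i_mode or i_state! */
+	/* VFS doesn't initialise i_mode! */
 	VFS_I(ip)->i_mode = 0;
-	VFS_I(ip)->i_state = 0;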
...an invariant that i_state == 0 to begin with, but
achieving that would require dealing with all filesystem alloc handlers
one by one.
In the meantime drop the misleading locking and move i_state zeroing to
inode_init_always so that others don't need to deal with it by hand.
Signed-off-by: Mateusz Guzik
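A hedged sketch of the fs/inode.c side of this (reconstructed from the
message above, not quoted from the actual patch): inode_init_always() gains
the store, and callers such as new_inode_pseudo() lose both the assignment
and the spinlock around it:

 int inode_init_always(struct super_block *sb, struct inode *inode)
 {
 	...
+	inode->i_state = 0;
 	...
 }

 struct inode *new_inode_pseudo(struct super_block *sb)
 {
 	struct inode *inode = alloc_inode(sb);

-	if (inode) {
-		spin_lock(&inode->i_lock);
-		inode->i_state = 0;
-		spin_unlock(&inode->i_lock);
-	}
+	/* i_state is now zeroed by inode_init_always() */
 	return inode;
 }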
This is in preparation for the routine starting to zero the field.
De facto coded by Dave Chinner, see:
https://lore.kernel.org/linux-fsdevel/zmgtagglol33w...@dread.disaster.area/
Signed-off-by: Mateusz Guzik
---
fs/xfs/xfs_icache.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
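The diff is cut off; a hedged sketch of the two added lines (following the
approach in the linked mail: snapshot i_state before inode_init_always() and
restore it afterwards, alongside the other fields already preserved there):

 static int
 xfs_reinit_inode(
 	struct xfs_mount	*mp,
 	struct inode		*inode)
 {
 	int			error;
 	uint32_t		nlink = inode->i_nlink;
 	...
+	unsigned long		state = inode->i_state;

 	error = inode_init_always(mp->m_super, inode);

 	set_nlink(inode, nlink);
 	...
+	inode->i_state = state;
 	return error;
 }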
As requested by Jan, this is a 4-part series.
I diffed this against fs-next + my inode hash patch v3, as it adds one
i_state = 0 case. Should that hash patch not be accepted, this bit is
trivially droppable from the patch.
Mateusz Guzik (4):
xfs: preserve i_state around inode_init_always in xfs_reinit_inode
On Mon, Jun 10, 2024 at 8:50 PM Kent Overstreet
wrote:
>
> On Mon, Jun 10, 2024 at 08:44:37PM +0200, Mateusz Guzik wrote:
> > On Mon, Jun 10, 2024 at 8:13 PM Kent Overstreet
> > wrote:
> > >
> > > On Sat, Jun 08, 2024 at 11:24:37AM +0200, Mateusz Guzik wrote:
On Mon, Jun 10, 2024 at 8:13 PM Kent Overstreet
wrote:
>
> On Sat, Jun 08, 2024 at 11:24:37AM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 07, 2024 at 02:10:05PM -0400, Kent Overstreet wrote:
> > > Does the following patch help? I think the hammering on the key cache
On Fri, Jun 07, 2024 at 02:10:05PM -0400, Kent Overstreet wrote:
> Does the following patch help? I think the hammering on the key cache
> lock may be correlated with the key cache being mostly empty (and it
> looks like the shrinker code is behaving badly and trying very hard to
> free from a mostly empty cache).
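For readers following along: a generic sketch of the two shrinker callbacks
being talked about here (this is the stock Linux shrinker API, not the
bcachefs code; the demo_ names are made up):

	static unsigned long demo_count_objects(struct shrinker *shrink,
						struct shrink_control *sc)
	{
		/* report how many objects are freeable; 0 tells the core to skip us */
		return atomic_long_read(&demo_nr_freeable);
	}

	static unsigned long demo_scan_objects(struct shrinker *shrink,
					       struct shrink_control *sc)
	{
		/* free up to sc->nr_to_scan objects and return the number freed */
		return demo_evict(sc->nr_to_scan);
	}

A shrinker that keeps getting invoked against a mostly empty cache burns
cycles in scan_objects for little actual reclaim, which is the misbehaviour
suspected above.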
On Fri, Jun 7, 2024 at 9:37 PM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 09:15:31PM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 7, 2024 at 8:43 PM Kent Overstreet
> > wrote:
> > >
> > > On Fri, Jun 07, 2024 at 08:28:07PM +0200, Mateusz Guzik wrote:
On Fri, Jun 7, 2024 at 8:43 PM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 08:28:07PM +0200, Mateusz Guzik wrote:
> > Well I did a quick skim through, it's probably a no-go:
> > 1. uses bit locks which I'm told would be problematic vs lockdep and
> > rt
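For context on point 1: a generic sketch of hash-bucket bit locks via the
kernel's hlist_bl API (illustrative only, not the patch in question). Bit 0
of the bucket head pointer doubles as the spinlock, so there is no lock
structure to hand to lockdep, and the lock cannot be made sleepable on
PREEMPT_RT:

	#include <linux/list_bl.h>

	struct hlist_bl_head bucket;	/* INIT_HLIST_BL_HEAD(); bit 0 of ->first is the lock */
	struct hlist_bl_node some_node;

	hlist_bl_lock(&bucket);		/* bit_spin_lock(0, ...) underneath */
	hlist_bl_add_head(&some_node, &bucket);
	hlist_bl_unlock(&bucket);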
On Fri, Jun 7, 2024 at 8:21 PM Mateusz Guzik wrote:
>
> On Fri, Jun 7, 2024 at 8:10 PM Kent Overstreet
> wrote:
> >
> > On Fri, Jun 07, 2024 at 07:59:10PM +0200, Mateusz Guzik wrote:
> > > On Fri, Jun 7, 2024 at 7:53 PM Kent Overstreet
> > > wrote:
On Fri, Jun 7, 2024 at 8:10 PM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 07:59:10PM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 7, 2024 at 7:53 PM Kent Overstreet
> > wrote:
> > >
> > > On Fri, Jun 07, 2024 at 06:51:05PM +0200, Mateusz Guzik wrote:
On Fri, Jun 7, 2024 at 7:53 PM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 06:51:05PM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 7, 2024 at 6:28 PM Mateusz Guzik wrote:
> > >
> > > On Fri, Jun 7, 2024 at 6:07 PM Kent Overstreet
> > > wrote:
On Fri, Jun 7, 2024 at 6:28 PM Mateusz Guzik wrote:
>
> On Fri, Jun 7, 2024 at 6:07 PM Kent Overstreet
> wrote:
> >
> > On Fri, Jun 07, 2024 at 12:10:34PM +0200, Mateusz Guzik wrote:
> > > On Fri, Jun 7, 2024 at 11:13 AM Kent Overstreet
> > > wrote:
On Fri, Jun 7, 2024 at 6:07 PM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 12:10:34PM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 7, 2024 at 11:13 AM Kent Overstreet
> > wrote:
> > >
> > > On Fri, Jun 07, 2024 at 08:50:40AM +0200, Mateusz Guzik wrote:
On Fri, Jun 7, 2024 at 11:13 AM Kent Overstreet
wrote:
>
> On Fri, Jun 07, 2024 at 08:50:40AM +0200, Mateusz Guzik wrote:
> > On Fri, Jun 7, 2024 at 2:31 AM Kent Overstreet
> > wrote:
> > >
> > > On Thu, Jun 06, 2024 at 08:40:50PM +0200, Mateusz Guzik wrote:
On Fri, Jun 7, 2024 at 2:31 AM Kent Overstreet
wrote:
>
> On Thu, Jun 06, 2024 at 08:40:50PM +0200, Mateusz Guzik wrote:
> > So I tried out bcachefs again and it once more fails to complete
> > parallel creation of 20 mln files -- processes doing the work use cpu
f7f01 __GI___libc_open+0x51
(/usr/lib/x86_64-linux-gnu/libc.so.6)
342f3439332f3032 [unknown] ([unknown])
the bcachefs thing is fs/bcachefs/btree_key_cache.c:871
ext4 and btrfs did not have any trouble. Read the old e-mail below for
repro instructions.
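The actual repro is in the quoted e-mail below; as a rough stand-in, the
workload boils down to something like this (hypothetical sketch, not the
author's script; assumes per-worker directories d0..d23 already exist on the
filesystem under test):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#define NPROC	24
	#define NFILES	(20 * 1000 * 1000 / NPROC)

	int main(void)
	{
		for (int p = 0; p < NPROC; p++) {
			if (fork() == 0) {
				char path[64];
				for (long i = 0; i < NFILES; i++) {
					snprintf(path, sizeof(path), "d%d/f%ld", p, i);
					close(open(path, O_CREAT | O_EXCL | O_WRONLY, 0644));
				}
				_exit(0);
			}
		}
		while (wait(NULL) > 0)	/* reap all workers */
			;
		return 0;
	}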
On Sun, Nov 5, 2023 at 1:38 PM Mateusz Guzik wrote:
On 11/7/23, Kent Overstreet wrote:
> On Sun, Nov 05, 2023 at 01:38:49PM +0100, Mateusz Guzik wrote:
>> Setup is a 24 core vm + 24G of ram, running debian 12 with top of
>> master as of writing this e-mail, commit being:
>> commit 1c41041124bd14dd6610da256a3da4e5b74ce6b1 (HEAD
On 11/6/23, Kent Overstreet wrote:
> On Sun, Nov 05, 2023 at 11:16:15PM +0100, Mateusz Guzik wrote:
>> On 11/5/23, Mateusz Guzik wrote:
>> > On 11/5/23, Kent Overstreet wrote:
>> >> On Sun, Nov 05, 2023 at 09:27:15PM +0100, Mateusz Guzik wrote:
On 11/5/23, Mateusz Guzik wrote:
> On 11/5/23, Kent Overstreet wrote:
>> On Sun, Nov 05, 2023 at 09:27:15PM +0100, Mateusz Guzik wrote:
>>> On 11/5/23, Kent Overstreet wrote:
>>> > On Sun, Nov 05, 2023 at 08:57:53PM +0100, Mateusz Guzik wrote:
On 11/5/23, Kent Overstreet wrote:
> On Sun, Nov 05, 2023 at 09:27:15PM +0100, Mateusz Guzik wrote:
>> On 11/5/23, Kent Overstreet wrote:
>> > On Sun, Nov 05, 2023 at 08:57:53PM +0100, Mateusz Guzik wrote:
>> >> On 11/5/23, Kent Overstreet wrote:
On 11/5/23, Kent Overstreet wrote:
> On Sun, Nov 05, 2023 at 08:57:53PM +0100, Mateusz Guzik wrote:
>> On 11/5/23, Kent Overstreet wrote:
>> > Can you check btree_transactions in debugfs?
>>
>> 3807 __bch2_create
>> path 0 b l=0 subvolumes:0:1:0
On 11/5/23, Kent Overstreet wrote:
> On Sun, Nov 05, 2023 at 01:38:49PM +0100, Mateusz Guzik wrote:
>> Setup is a 24 core vm + 24G of ram, running debian 12 with top of
>> master as of writing this e-mail, commit being:
>> commit 1c41041124bd14dd6610da256a3da4e5b74ce6b1 (HEAD
[truncated register dump: R09-R15]
--
Mateusz Guzik
>> Whether the patch is ready for reviews and whatnot is your call to
>> make as the author.
>
> This is basically why that patch never stayed in -next. Dave said this
> patch is meaningless without his other patches and I had no reason to
> doubt that claim, nor do I currently have the cycles to benchmark and
> disprove it.
>
That makes sense, thanks.
--
Mateusz Guzik
On 10/23/23, Dave Chinner wrote:
> On Fri, Oct 20, 2023 at 07:49:18PM +0200, Mateusz Guzik wrote:
>> On 10/20/23, Dave Chinner wrote:
>> > On Thu, Oct 19, 2023 at 05:59:58PM +0200, Mateusz Guzik wrote:
>> >> > To be clear there is no urgency as far as I
...distributing this better.
So tl;dr hash distribution leaves some room for improvement and
*maybe* I'll prod it some time next month. One can also grow the table
of course, but that's for later.
--
Mateusz Guzik
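For reference, the function whose distribution is under discussion (quoted
from fs/inode.c as best I recall it for that period; double-check against
the tree):

	static unsigned long hash(struct super_block *sb, unsigned long hashval)
	{
		unsigned long tmp;

		tmp = (hashval * (unsigned long)sb) ^ (GOLDEN_RATIO_PRIME + hashval) /
				L1_CACHE_BYTES;
		tmp = tmp ^ ((tmp ^ GOLDEN_RATIO_PRIME) >> i_hash_shift);
		return tmp & i_hash_mask;
	}

If the result clusters, either a stronger mixing step or a larger table (the
option mentioned above) spreads the chains out.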
On 10/20/23, Dave Chinner wrote:
> On Thu, Oct 19, 2023 at 05:59:58PM +0200, Mateusz Guzik wrote:
>> On Thu, Oct 19, 2023 at 05:30:40PM +0200, Mateusz Guzik wrote:
>> > On Tue, May 23, 2023 at 11:28:38AM +0200, Christian Brauner wrote:
>> > > On Tue, 09 May 2023 12:56:47 -0400, Kent Overstreet wrote:
On Thu, Oct 19, 2023 at 05:30:40PM +0200, Mateusz Guzik wrote:
> On Tue, May 23, 2023 at 11:28:38AM +0200, Christian Brauner wrote:
> > On Tue, 09 May 2023 12:56:47 -0400, Kent Overstreet wrote:
> > > Because scalability of the global inode_hash_lock really, really
On Tue, May 23, 2023 at 11:28:38AM +0200, Christian Brauner wrote:
> On Tue, 09 May 2023 12:56:47 -0400, Kent Overstreet wrote:
> > Because scalability of the global inode_hash_lock really, really
> > sucks.
> >
> > 32-way concurrent create on a couple of different filesystems
> > before: