On Mon, Mar 17, 2025 at 03:33:04PM +0100, Vlastimil Babka wrote:
> Extend the sheaf infrastructure for more efficient kfree_rcu() handling.
> For caches with sheaves, on each cpu maintain a rcu_free sheaf in
> addition to main and spare sheaves.
> 
> kfree_rcu() operations will try to put objects on this sheaf. Once full,
> the sheaf is detached and submitted to call_rcu() with a handler that
> will try to put it in the barn, or flush to slab pages using bulk free,
> when the barn is full. Then a new empty sheaf must be obtained to put
> more objects there.
> 
> It's possible that no free sheaves are available to use for a new
> rcu_free sheaf, and the allocation in kfree_rcu() context can only use
> GFP_NOWAIT and thus may fail. In that case, fall back to the existing
> kfree_rcu() machinery.
> 
> Expected advantages:
> - batching the kfree_rcu() operations, that could eventually replace the
>   existing batching
> - sheaves can be reused for allocations via barn instead of being
>   flushed to slabs, which is more efficient
>   - this includes cases where only some cpus are allowed to process rcu
>     callbacks (Android)
> 
> Possible disadvantage:
> - objects might be waiting for more than their grace period (it is
>   determined by the last object freed into the sheaf), increasing memory
>   usage - but the existing batching does that too?
> 
> Only implement this for CONFIG_KVFREE_RCU_BATCHED as the tiny
> implementation favors smaller memory footprint over performance.
> 
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Reviewed-by: Suren Baghdasaryan <sur...@google.com>
> ---
>  mm/slab.h        |   2 +
>  mm/slab_common.c |  24 ++++++++
>  mm/slub.c        | 165 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 189 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 8daaec53b6ecfc44171191d421adb12e5cba2c58..94e9959e1aefa350d3d74e3f5309fde7a5cf2ec8 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -459,6 +459,8 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
>  	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
>  }
>  
> +bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
> +
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
>  			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index ceeefb287899a82f30ad79b403556001c1860311..9496176770ed47491e01ed78e060a74771d5541e 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -1957,6 +1978,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
>  	if (!head)
>  		might_sleep();
>  
> +	if (kfree_rcu_sheaf(ptr))
> +		return;
> +
>  	// Queue the object but don't yet schedule the batch.
>  	if (debug_rcu_head_queue(ptr)) {
>  		// Probable double kfree_rcu(), just leak.
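
The mechanism described above makes sense to me. To check my understanding
(the actual __kfree_rcu_sheaf() implementation is snipped from the quote
below), the fast path should be roughly the following sketch. This is my
paraphrase, not the patch code: alloc_empty_sheaf(), the sheaf_capacity
limit and the rcu_free_sheaf() callback stand in for whatever the series
actually names those pieces, and the locking/stat bookkeeping is
simplified.

static bool kfree_rcu_sheaf_sketch(struct kmem_cache *s, void *obj)
{
	struct slub_percpu_sheaves *pcs;
	struct slab_sheaf *rcu_sheaf;

	localtry_lock(&s->cpu_sheaves->lock);
	pcs = this_cpu_ptr(s->cpu_sheaves);

	if (!pcs->rcu_free) {
		/*
		 * No rcu_free sheaf installed yet. kfree_rcu() context
		 * only allows GFP_NOWAIT, so this allocation may fail,
		 * in which case we fall back to the existing batching.
		 */
		pcs->rcu_free = alloc_empty_sheaf(s, GFP_NOWAIT);
		if (!pcs->rcu_free) {
			localtry_unlock(&s->cpu_sheaves->lock);
			return false;
		}
	}

	rcu_sheaf = pcs->rcu_free;
	rcu_sheaf->objects[rcu_sheaf->size++] = obj;

	if (rcu_sheaf->size == s->sheaf_capacity) {
		/*
		 * Sheaf is full: detach it and hand it to RCU. The new
		 * cache back-pointer (quoted below) lets the callback
		 * find the kmem_cache again and try to put the sheaf in
		 * the barn, or bulk-free it when the barn is full.
		 */
		pcs->rcu_free = NULL;
		rcu_sheaf->cache = s;
		call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);
	}

	localtry_unlock(&s->cpu_sheaves->lock);
	return true;
}

If that matches your intent, the fallback path in kvfree_call_rcu() above
also reads correctly to me.
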
> diff --git a/mm/slub.c b/mm/slub.c
> index fa3a6329713a9f45b189f27d4b1b334b54589c38..83f4395267dccfbc144920baa7d0a85a27fbb1b4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -350,6 +350,8 @@ enum stat_item {
>  	ALLOC_FASTPATH,		/* Allocation from cpu slab */
>  	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
>  	FREE_PCS,		/* Free to percpu sheaf */
> +	FREE_RCU_SHEAF,		/* Free to rcu_free sheaf */
> +	FREE_RCU_SHEAF_FAIL,	/* Failed to free to a rcu_free sheaf */
>  	FREE_FASTPATH,		/* Free to cpu slab */
>  	FREE_SLOWPATH,		/* Freeing not to cpu slab */
>  	FREE_FROZEN,		/* Freeing to frozen slab */
> @@ -442,6 +444,7 @@ struct slab_sheaf {
>  		struct rcu_head rcu_head;
>  		struct list_head barn_list;
>  	};
> +	struct kmem_cache *cache;
>  	unsigned int size;
>  	void *objects[];
>  };
> @@ -450,6 +453,7 @@ struct slub_percpu_sheaves {
>  	localtry_lock_t lock;
>  	struct slab_sheaf *main; /* never NULL when unlocked */
>  	struct slab_sheaf *spare; /* empty or full, may be NULL */
> +	struct slab_sheaf *rcu_free; /* for batching kfree_rcu() */
>  	struct node_barn *barn;
>  };
>  
> @@ -2597,7 +2621,7 @@ static void sheaf_flush_unused(struct kmem_cache *s, struct slab_sheaf *sheaf)
>  static void pcs_flush_all(struct kmem_cache *s)
>  {
>  	struct slub_percpu_sheaves *pcs;
> -	struct slab_sheaf *spare;
> +	struct slab_sheaf *spare, *rcu_free;
>  
>  	localtry_lock(&s->cpu_sheaves->lock);
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
> @@ -2605,6 +2629,9 @@ static void pcs_flush_all(struct kmem_cache *s)
>  	spare = pcs->spare;
>  	pcs->spare = NULL;
>  
> +	rcu_free = pcs->rcu_free;
> +	pcs->rcu_free = NULL;
> +
>  	localtry_unlock(&s->cpu_sheaves->lock);
Hmm, this hunk is fine in v3, but on your slub-percpu-sheaves-v4r0 branch
it's calling local_unlock() twice. Probably a rebase error? To make the
concern concrete, the sketch below shows the shape I'd expect the
function to keep.
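
(A sketch only: it assumes the v4 branch renamed localtry_lock()/
localtry_unlock() to local_lock()/local_unlock(), and it elides the
flushing of the detached sheaves after the unlock, which should stay as
in v3.)

static void pcs_flush_all(struct kmem_cache *s)
{
	struct slub_percpu_sheaves *pcs;
	struct slab_sheaf *spare, *rcu_free;

	/* take the per-cpu sheaves lock exactly once */
	local_lock(&s->cpu_sheaves->lock);
	pcs = this_cpu_ptr(s->cpu_sheaves);

	/* detach both the spare and the rcu_free sheaf under the lock */
	spare = pcs->spare;
	pcs->spare = NULL;

	rcu_free = pcs->rcu_free;
	pcs->rcu_free = NULL;

	/* release exactly once, not twice as on the v4r0 branch */
	local_unlock(&s->cpu_sheaves->lock);

	/* flush/free the detached sheaves outside the lock, as in v3 */
}

Otherwise looks good to me. When you address this, please feel free to add:

Reviewed-by: Harry Yoo <harry....@oracle.com>

Thanks!

-- 
Cheers,
Harry / Hyeonggon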