[Resending after fixing broken email headers]

On Fri, Apr 10, 2026 at 02:19:37PM +0200, Thomas Gleixner wrote:
> The decision whether to scan remote nodes is based on a 'random' number
> retrieved via get_cycles(). get_cycles() is about to be removed.
>
> There is already prandom state in the code, so use that instead.
>
> Signed-off-by: Thomas Gleixner <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: [email protected]
> ---
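For anyone reading along who has not seen the pattern before:
prandom_u32_state() advances a seeded struct rnd_state and returns a raw
u32, and the new helper just bounds that with a modulo. A self-contained
userspace sketch of the flow (the toy_* names are mine, not in the patch;
a xorshift32 generator stands in for prandom_u32_state(), and the per-CPU
/ preemption side of get_cpu_var()/put_cpu_var() is left out for brevity):

#include <stdio.h>

/* Toy stand-in for struct rnd_state. */
struct toy_rnd_state {
	unsigned int s;
};

/* Toy stand-in for prandom_u32_state(): xorshift32, advances the state. */
static unsigned int toy_prandom_u32_state(struct toy_rnd_state *state)
{
	unsigned int x = state->s;

	x ^= x << 13;
	x ^= x >> 17;
	x ^= x << 5;
	state->s = x;
	return x;
}

/* Same shape as slab_get_prandom_state(): result in [0, limit). */
static unsigned int toy_get_prandom(struct toy_rnd_state *state,
				    unsigned int limit)
{
	return toy_prandom_u32_state(state) % limit;
}

int main(void)
{
	/* Non-zero seed; xorshift32 must never be seeded with 0. */
	struct toy_rnd_state state = { .s = 2463534242u };
	int i;

	/* The defrag-ratio call sites in the patch draw from [0, 1024). */
	for (i = 0; i < 4; i++)
		printf("%u\n", toy_get_prandom(&state, 1024));
	return 0;
}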
Acked-by: Harry Yoo (Oracle) <[email protected]>

Is this for this merge window? This may conflict with upcoming changes
on freelist shuffling [1] (not queued for slab/for-next yet though),
but it should be easy to resolve.

[Cc'ing Shengming and SLAB ALLOCATOR folks]

[1] https://lore.kernel.org/linux-mm/[email protected]

--
Cheers,
Harry / Hyeonggon

>  mm/slub.c | 37 +++++++++++++++++++++++--------------
>  1 file changed, 23 insertions(+), 14 deletions(-)
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3302,6 +3302,25 @@ static inline struct slab *alloc_slab_pa
>  	return slab;
>  }
>
> +#if defined(CONFIG_SLAB_FREELIST_RANDOM) || defined(CONFIG_NUMA)
> +static DEFINE_PER_CPU(struct rnd_state, slab_rnd_state);
> +
> +static unsigned int slab_get_prandom_state(unsigned int limit)
> +{
> +	struct rnd_state *state;
> +	unsigned int res;
> +
> +	/*
> +	 * An interrupt or NMI handler might interrupt and change
> +	 * the state in the middle, but that's safe.
> +	 */
> +	state = &get_cpu_var(slab_rnd_state);
> +	res = prandom_u32_state(state) % limit;
> +	put_cpu_var(slab_rnd_state);
> +	return res;
> +}
> +#endif
> +
>  #ifdef CONFIG_SLAB_FREELIST_RANDOM
>  /* Pre-initialize the random sequence cache */
>  static int init_cache_random_seq(struct kmem_cache *s)
> @@ -3365,8 +3384,6 @@ static void *next_freelist_entry(struct
>  	return (char *)start + idx;
>  }
>
> -static DEFINE_PER_CPU(struct rnd_state, slab_rnd_state);
> -
>  /* Shuffle the single linked freelist based on a random pre-computed sequence */
>  static bool shuffle_freelist(struct kmem_cache *s, struct slab *slab,
>  			     bool allow_spin)
> @@ -3383,15 +3400,7 @@ static bool shuffle_freelist(struct kmem
>  	if (allow_spin) {
>  		pos = get_random_u32_below(freelist_count);
>  	} else {
> -		struct rnd_state *state;
> -
> -		/*
> -		 * An interrupt or NMI handler might interrupt and change
> -		 * the state in the middle, but that's safe.
> -		 */
> -		state = &get_cpu_var(slab_rnd_state);
> -		pos = prandom_u32_state(state) % freelist_count;
> -		put_cpu_var(slab_rnd_state);
> +		pos = slab_get_prandom_state(freelist_count);
>  	}
>
>  	page_limit = slab->objects * s->size;
> @@ -3882,7 +3891,7 @@ static void *get_from_any_partial(struct
>  	 * with available objects.
>  	 */
>  	if (!s->remote_node_defrag_ratio ||
> -	    get_cycles() % 1024 > s->remote_node_defrag_ratio)
> +	    slab_get_prandom_state(1024) > s->remote_node_defrag_ratio)
>  		return NULL;
>
>  	do {
> @@ -7102,7 +7111,7 @@ static unsigned int
>
>  	/* see get_from_any_partial() for the defrag ratio description */
>  	if (!s->remote_node_defrag_ratio ||
> -	    get_cycles() % 1024 > s->remote_node_defrag_ratio)
> +	    slab_get_prandom_state(1024) > s->remote_node_defrag_ratio)
>  		return 0;
>
>  	do {
> @@ -8421,7 +8430,7 @@ void __init kmem_cache_init_late(void)
>  	flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM | WQ_PERCPU,
>  				  0);
>  	WARN_ON(!flushwq);
> -#ifdef CONFIG_SLAB_FREELIST_RANDOM
> +#if defined(CONFIG_SLAB_FREELIST_RANDOM) || defined(CONFIG_NUMA)
>  	prandom_init_once(&slab_rnd_state);
>  #endif
>  }
>
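One more note below the get_from_any_partial() hunk, mostly for the
archives: slab_get_prandom_state(1024) yields a draw in [0, 1023], the
same range as the old get_cycles() % 1024, so the remote scan still
proceeds only when the draw is <= remote_node_defrag_ratio, i.e. with
probability (remote_node_defrag_ratio + 1) / 1024, and never when the
ratio is 0 thanks to the short-circuit. Restating the gate standalone
(the toy_ name is mine, not in the patch):

/* Draw is expected in [0, 1023], as from slab_get_prandom_state(1024). */
static int toy_should_scan_remote(unsigned int defrag_ratio,
				  unsigned int draw)
{
	if (!defrag_ratio || draw > defrag_ratio)
		return 0;	/* stay on the local node */
	return 1;		/* worth scanning remote partial lists */
}

For example, a ratio of 100 lets 101 of 1024 draws through, so just under
10% of the allocations that reach this point will look at remote nodes.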

