On Mon, 15 Mar 2021, Vlastimil Babka wrote:
> Commit ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
> introduced a static key to optimize the case where no debugging is enabled for
> any cache. The static key is enabled when the slub_debug boot parameter is
> passed, or when CONFIG_SLUB_DEBUG_ON is enabled.
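
For context, the fast path that this key guards looks roughly like the sketch
below (modelled on the kmem_cache_debug_flags() helper in mm/slub.c of that
era; the exact code may differ between versions):

	/*
	 * Debug processing for a cache is skipped entirely unless the
	 * slub_debug_enabled static key has been switched on.
	 */
	static inline bool kmem_cache_debug_flags(struct kmem_cache *s,
						  slab_flags_t flags)
	{
	#ifdef CONFIG_SLUB_DEBUG
		if (static_branch_unlikely(&slub_debug_enabled))
			return s->flags & flags;
	#endif
		return false;
	}

so a cache that has debug flags set in s->flags still gets no debug processing
while the key is off, which is exactly the problem described below.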
>
> However, some caches might be created with one or more debugging flags
> explicitly passed to kmem_cache_create(), and the commit missed this. Thus the
> debugging functionality would not actually be performed for these caches
> unless the static key gets enabled by the boot param or config.
>
> This patch fixes it by checking for debugging flags passed to
> kmem_cache_create() and enabling the static key accordingly.
>
> Note that such explicit debugging flags should not be used outside of debugging
> and testing, as they will now enable the static key globally. btrfs_init_cachep()
> creates a cache with SLAB_RED_ZONE but that's a mistake that's being corrected
> [1]. rcu_torture_stats() creates a cache with SLAB_STORE_USER, but that is a
> testing module so it's OK and will start working as intended after this patch.
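
For illustration, the kind of call site this patch affects looks like the
sketch below (hypothetical cache name and struct, simplified from what
rcutorture does with SLAB_STORE_USER):

	#include <linux/init.h>
	#include <linux/slab.h>

	struct foo { int x; };
	static struct kmem_cache *foo_cache;

	static int __init foo_init(void)
	{
		/*
		 * SLAB_STORE_USER is recorded in the cache's flags, but
		 * before this patch the slub_debug_enabled key stayed off,
		 * so the alloc/free tracking was never actually performed.
		 */
		foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
					      0, SLAB_STORE_USER, NULL);
		return foo_cache ? 0 : -ENOMEM;
	}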
>
> Also note that in case of backports to kernels before v5.12 that don't have
> 59450bbc12be ("mm, slab, slub: stop taking cpu hotplug lock"),
> static_branch_enable_cpuslocked() should be used.
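
For reference, such a backport would presumably carry the same hunk with the
_cpuslocked variant, i.e. something like this (untested sketch):

	#ifdef CONFIG_SLUB_DEBUG
		/*
		 * On kernels before 59450bbc12be this path runs under the
		 * cpu hotplug lock, so the _cpuslocked variant of the
		 * static branch helper has to be used.
		 */
		if (flags & SLAB_DEBUG_FLAGS)
			static_branch_enable_cpuslocked(&slub_debug_enabled);
	#endif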
>
Since this affects 5.9+, is the plan to propose backports to stable with
static_branch_enable_cpuslocked() once this is merged? (I notice the
absence of the stable tag here, which I believe is intended.)
> [1] https://lore.kernel.org/linux-btrfs/[email protected]/
>
> Reported-by: Oliver Glitta <[email protected]>
> Fixes: ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
> Signed-off-by: Vlastimil Babka <[email protected]>
Acked-by: David Rientjes <[email protected]>
> ---
> mm/slub.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 350a37f30e60..cd6694ad1a0a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3827,6 +3827,15 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>
> static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> {
> +#ifdef CONFIG_SLUB_DEBUG
> + /*
> + * If no slub_debug was enabled globally, the static key is not yet
> + * enabled by setup_slub_debug(). Enable it if the cache is being
> + * created with any of the debugging flags passed explicitly.
> + */
> + if (flags & SLAB_DEBUG_FLAGS)
> + static_branch_enable(&slub_debug_enabled);
> +#endif
> s->flags = kmem_cache_flags(s->size, flags, s->name);
> #ifdef CONFIG_SLAB_FREELIST_HARDENED
> s->random = get_random_long();