On 2020/8/11 9:29, Abel Wu wrote:
> 
> 
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun...@huawei.com wrote:
>>
>>> From: Abel Wu <wuyun...@huawei.com>
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu <wuyun...@huawei.com>
>>> ---
>>>  mm/slub.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>             }
>>>     } else {
>>>             m = M_FULL;
>>> -           if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> +           if (!lock) {
>>>                     lock = 1;
>>>                     /*
>>>                      * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>                      */
>>>                     spin_lock(&n->list_lock);
>>>             }
>>> +#endif
>>>     }
>>>
>>>     if (l != m) {
>>
>> This should be functionally safe. I wonder, however, if it would make
>> sense to only check for SLAB_STORE_USER here instead of
>> kmem_cache_debug(), since that should be the only context in which we
>> need the list_lock for add_full()?  It seems more explicit.
>>
> Yes, checking for SLAB_STORE_USER here would also get rid of the noisy
> macros. I will resend the patch later.
> 
> Thanks,
>       Abel
> 
Wait... It still needs the CONFIG_SLUB_DEBUG guard around it, but checking
SLAB_STORE_USER avoids the locking overhead when that flag is not set (as
you said). I will keep CONFIG_SLUB_DEBUG in my new patch.
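
Something like this is what I have in mind (just a sketch based on the
discussion above, not the final patch):

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		/*
		 * add_full() only tracks the page when SLAB_STORE_USER
		 * is set, so the list_lock is needed here only in that
		 * case.
		 */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}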
