Re: [PATCH] mm/slub: remove useless kmem_cache_debug

2020-08-10 Thread Abel Wu



On 2020/8/11 9:29, Abel Wu wrote:
> 
> 
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun...@huawei.com wrote:
>>
>>> From: Abel Wu 
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu 
>>> ---
>>>  mm/slub.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  		}
>>>  	} else {
>>>  		m = M_FULL;
>>> -		if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> +		if (!lock) {
>>>  			lock = 1;
>>>  			/*
>>>  			 * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>>  			 */
>>>  			spin_lock(&n->list_lock);
>>>  		}
>>> +#endif
>>>  	}
>>>
>>>  	if (l != m) {
>>
>> This should be functionally safe. I wonder, however, if it would make sense
>> to only check for SLAB_STORE_USER here instead of kmem_cache_debug(), since
>> that should be the only context in which we need the list_lock for
>> add_full()? It seems more explicit.
>>
> Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macros.
> I will resend the patch later.
> 
> Thanks,
>   Abel
> 
Wait... It still needs the CONFIG_SLUB_DEBUG wrapper, but checking
SLAB_STORE_USER avoids the locking overhead when that flag is not set
(as you said). I will keep CONFIG_SLUB_DEBUG in my new patch.
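
A rough sketch of one possible shape for the reworked hunk, still wrapped
in CONFIG_SLUB_DEBUG but keyed on SLAB_STORE_USER as suggested above (the
exact form is subject to the resend):

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}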


Re: [PATCH] mm/slub: remove useless kmem_cache_debug

2020-08-10 Thread Abel Wu



On 2020/8/11 3:44, David Rientjes wrote:
> On Mon, 10 Aug 2020, wuyun...@huawei.com wrote:
> 
>> From: Abel Wu 
>>
>> The commit below is incomplete, as it didn't handle the add_full() part.
>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>
>> Signed-off-by: Abel Wu 
>> ---
>>  mm/slub.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index fe81773..0b021b7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  		}
>>  	} else {
>>  		m = M_FULL;
>> -		if (kmem_cache_debug(s) && !lock) {
>> +#ifdef CONFIG_SLUB_DEBUG
>> +		if (!lock) {
>>  			lock = 1;
>>  			/*
>>  			 * This also ensures that the scanning of full
>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>  			 */
>>  			spin_lock(&n->list_lock);
>>  		}
>> +#endif
>>  	}
>>
>>  	if (l != m) {
> 
> This should be functionally safe. I wonder, however, if it would make sense
> to only check for SLAB_STORE_USER here instead of kmem_cache_debug(), since
> that should be the only context in which we need the list_lock for
> add_full()? It seems more explicit.
> 
Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macros.
I will resend the patch later.

Thanks,
Abel


Re: [PATCH] mm/slub: remove useless kmem_cache_debug

2020-08-10 Thread David Rientjes
On Mon, 10 Aug 2020, wuyun...@huawei.com wrote:

> From: Abel Wu 
> 
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
> 
> Signed-off-by: Abel Wu 
> ---
>  mm/slub.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index fe81773..0b021b7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  		}
>  	} else {
>  		m = M_FULL;
> -		if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> +		if (!lock) {
>  			lock = 1;
>  			/*
>  			 * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>  			 */
>  			spin_lock(&n->list_lock);
>  		}
> +#endif
>  	}
>
>  	if (l != m) {

This should be functionally safe. I wonder, however, if it would make sense
to only check for SLAB_STORE_USER here instead of kmem_cache_debug(), since
that should be the only context in which we need the list_lock for
add_full()? It seems more explicit.
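
For reference, a minimal sketch of the two helpers in question, roughly as
they appeared in mm/slub.c around v5.8: kmem_cache_debug() tests the whole
set of debug flags, while add_full() is a no-op unless SLAB_STORE_USER is
set, so that single flag is the one that actually decides whether the
list_lock is needed here.

static inline int kmem_cache_debug(struct kmem_cache *s)
{
#ifdef CONFIG_SLUB_DEBUG
	/* True if any debug option (poisoning, red zoning, ...) is active. */
	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
#else
	return 0;
#endif
}

static void add_full(struct kmem_cache *s,
	struct kmem_cache_node *n, struct page *page)
{
	/* Full-slab tracking only exists for SLAB_STORE_USER caches. */
	if (!(s->flags & SLAB_STORE_USER))
		return;

	lockdep_assert_held(&n->list_lock);
	list_add(&page->slab_list, &n->full);
}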


[PATCH] mm/slub: remove useless kmem_cache_debug

2020-08-10 Thread wuyun.wu
From: Abel Wu 

The commit below is incomplete, as it didn't handle the add_full() part.
commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")

Signed-off-by: Abel Wu 
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index fe81773..0b021b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}

 	if (l != m) {
--
1.8.3.1
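
For reference, the list transitions that this lock protects come just after
the hunk above in deactivate_slab(), roughly as in mm/slub.c around v5.8:

	if (l != m) {
		/* Unlink the page from the list that matches its old state. */
		if (l == M_PARTIAL)
			remove_partial(n, page);
		else if (l == M_FULL)
			remove_full(s, n, page);

		/* Link it onto the list that matches its new state. */
		if (m == M_PARTIAL)
			add_partial(n, page, tail);
		else if (m == M_FULL)
			add_full(s, n, page);
	}

add_full() and remove_full() link pages on and off n->full, which is why
the M_FULL transition needs n->list_lock when SLAB_STORE_USER tracking is
active.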