On 03.02.2021 20:20, Yang Shi wrote:
> Since memcg_shrinker_map_size can only be changed while shrinker_rwsem is
> held exclusively, the read side can be protected by holding a read lock, so
> a dedicated mutex is superfluous.
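> 
> For reference, a minimal sketch of that read side as it looks in
> shrink_slab_memcg() around this kernel (paraphrased from the surrounding
> code, not part of this diff):
> 
> 	if (!down_read_trylock(&shrinker_rwsem))
> 		return 0;
> 
> 	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
> 					true);
> 	if (unlikely(!map))
> 		goto unlock;
> 	/* ... walk the bitmap and run the set shrinkers ... */
> unlock:
> 	up_read(&shrinker_rwsem);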
> 
> Kirill Tkhai suggested using the write lock since:
> 
>   * We want the assignment to shrinker_maps to be visible to
>     shrink_slab_memcg().
>   * shrink_slab_memcg() dereferences the map with rcu_dereference_protected(),
>     but if we used a READ lock in alloc_shrinker_maps(), the dereferencing
>     would not actually be protected.
>   * A READ lock makes alloc_shrinker_info() racy against memory allocation
>     failure: alloc_shrinker_info()->free_shrinker_info() may free memory
>     right after shrink_slab_memcg() dereferenced it. You may say
>     shrink_slab_memcg()->mem_cgroup_online() protects us from it? Yes, sure,
>     but this is not the thing we want to have to remember in the future,
>     since it hurts modularity (see the race sketch below this list).
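> 
> A sketch of the race in the last point, assuming the hypothetical READ-lock
> variant of alloc_shrinker_maps() (an interleaving diagram, not code from
> this patch):
> 
> 	CPU0: alloc_shrinker_maps()       CPU1: shrink_slab_memcg()
> 	down_read(&shrinker_rwsem)        down_read_trylock(&shrinker_rwsem)
> 	rcu_assign_pointer(map, nid 0)
> 	                                  map = rcu_dereference_protected(...)
> 	kvzalloc_node(nid 1) fails
> 	free_shrinker_info()
> 	    kvfree()s the nid 0 map       /* still using the freed map */
> 
> With the WRITE lock, the reader is excluded for the whole
> assign-then-maybe-free sequence, so this window is closed.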
> 
> And a test with a heavy paging workload didn't show the write lock making
> things worse.
> 
> Acked-by: Vlastimil Babka <vba...@suse.cz>
> Signed-off-by: Yang Shi <shy828...@gmail.com>

Acked-by: Kirill Tkhai <ktk...@virtuozzo.com>

> ---
>  mm/vmscan.c | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 96b08c79f18d..e4ddaaaeffe2 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -187,7 +187,6 @@ static DECLARE_RWSEM(shrinker_rwsem);
>  #ifdef CONFIG_MEMCG
>  
>  static int memcg_shrinker_map_size;
> -static DEFINE_MUTEX(memcg_shrinker_map_mutex);
>  
>  static void free_shrinker_map_rcu(struct rcu_head *head)
>  {
> @@ -200,8 +199,6 @@ static int expand_one_shrinker_map(struct mem_cgroup *memcg,
>       struct memcg_shrinker_map *new, *old;
>       int nid;
>  
> -     lockdep_assert_held(&memcg_shrinker_map_mutex);
> -
>       for_each_node(nid) {
>               old = rcu_dereference_protected(
>                       mem_cgroup_nodeinfo(memcg, nid)->shrinker_map, true);
> @@ -249,7 +246,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
>       if (mem_cgroup_is_root(memcg))
>               return 0;
>  
> -     mutex_lock(&memcg_shrinker_map_mutex);
> +     down_write(&shrinker_rwsem);
>       size = memcg_shrinker_map_size;
>       for_each_node(nid) {
>               map = kvzalloc_node(sizeof(*map) + size, GFP_KERNEL, nid);
> @@ -260,7 +257,7 @@ int alloc_shrinker_maps(struct mem_cgroup *memcg)
>               }
>               rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_map, map);
>       }
> -     mutex_unlock(&memcg_shrinker_map_mutex);
> +     up_write(&shrinker_rwsem);
>  
>       return ret;
>  }
> @@ -275,9 +272,8 @@ static int expand_shrinker_maps(int new_id)
>       if (size <= old_size)
>               return 0;
>  
> -     mutex_lock(&memcg_shrinker_map_mutex);
>       if (!root_mem_cgroup)
> -             goto unlock;
> +             goto out;
>  
>       memcg = mem_cgroup_iter(NULL, NULL, NULL);
>       do {
> @@ -286,13 +282,13 @@ static int expand_shrinker_maps(int new_id)
>               ret = expand_one_shrinker_map(memcg, size, old_size);
>               if (ret) {
>                       mem_cgroup_iter_break(NULL, memcg);
> -                     goto unlock;
> +                     goto out;
>               }
>       } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
> -unlock:
> +out:
>       if (!ret)
>               memcg_shrinker_map_size = size;
> -     mutex_unlock(&memcg_shrinker_map_mutex);
> +
>       return ret;
>  }
>  
> 
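For context, expand_shrinker_maps() itself takes no lock after this change
because its only caller already holds shrinker_rwsem for write. A paraphrased
sketch of that caller, prealloc_memcg_shrinker(), as I read it in this kernel
(not part of the diff):

	down_write(&shrinker_rwsem);
	/* This may call shrinker, so it must use down_read_trylock() */
	id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);
	if (id < 0)
		goto unlock;

	if (id >= shrinker_nr_max) {
		if (expand_shrinker_maps(id)) {
			idr_remove(&shrinker_idr, id);
			goto unlock;
		}
		shrinker_nr_max = id + 1;
	}
	...
unlock:
	up_write(&shrinker_rwsem);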
