From: Kirill Tkhai <ktk...@virtuozzo.com>

To avoid further unneeded calls of do_shrink_slab() for shrinkers that no
longer have any charged objects in a memcg, their bits have to be cleared.

This patch introduces a lockless mechanism to do that without racing with a
parallel list_lru_add().  After do_shrink_slab() returns SHRINK_EMPTY the
first time, we clear the bit and call it once again.  Then we restore the bit
if the new return value is different.

Note that the single smp_mb__after_atomic() in shrink_slab_memcg() covers
two situations:

1)list_lru_add()     shrink_slab_memcg()
    list_add_tail()    for_each_set_bit() <--- read bit
                         do_shrink_slab() <--- missed list update (no barrier)
    <MB>                 <MB>
    set_bit()            do_shrink_slab() <--- seen list update

This situation, when the first do_shrink_slab() sees the bit set but does not
see the list update (i.e., it races with the queueing of the first element),
is rare.  So, instead of adding a <MB> before the first call of
do_shrink_slab(), which would slow down the generic case, we rely on the
second call, which is needed anyway as shown in (2) below.

2)list_lru_add()      shrink_slab_memcg()
    list_add_tail()     ...
    set_bit()           ...
  ...                   for_each_set_bit()
  do_shrink_slab()        do_shrink_slab()
    clear_bit()           ...
  ...                     ...
  list_lru_add()          ...
    list_add_tail()       clear_bit()
    <MB>                  <MB>
    set_bit()             do_shrink_slab()

The barriers guarantee that the second do_shrink_slab() in the right-hand
task sees the list update if it really cleared the bit.  This case is drawn
in the code comment.
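
For illustration only, here is a minimal userspace sketch of the same
clear-recheck-restore pattern using C11 atomics.  atomic_thread_fence()
stands in for the kernel's smp_mb__before_atomic()/smp_mb__after_atomic()
pairing, and all names (shrinker_bit, lru_len, lru_add_one, shrink_one) are
hypothetical, not kernel APIs:

  /* Minimal userspace analogue of the pattern (C11 atomics); illustrative only. */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_bool shrinker_bit;   /* one bit of the per-memcg shrinker map  */
  static atomic_long lru_len;        /* number of charged objects on the list  */

  /* Producer side: analogue of list_lru_add() + memcg_set_shrinker_bit(). */
  static void lru_add_one(void)
  {
          atomic_fetch_add_explicit(&lru_len, 1, memory_order_relaxed);
          /* Pairs with the fence in shrink_one(); kernel: smp_mb__before_atomic(). */
          atomic_thread_fence(memory_order_seq_cst);
          atomic_store_explicit(&shrinker_bit, true, memory_order_relaxed);
  }

  /* Consumer side: analogue of the SHRINK_EMPTY handling in shrink_slab_memcg(). */
  static long shrink_one(void)
  {
          long nr = atomic_exchange_explicit(&lru_len, 0, memory_order_relaxed);

          if (nr == 0) {                  /* first call returned "empty" */
                  atomic_store_explicit(&shrinker_bit, false, memory_order_relaxed);
                  /* Pairs with the fence in lru_add_one(); kernel: smp_mb__after_atomic(). */
                  atomic_thread_fence(memory_order_seq_cst);
                  nr = atomic_exchange_explicit(&lru_len, 0, memory_order_relaxed);
                  if (nr != 0)            /* raced with lru_add_one(): restore the bit */
                          atomic_store_explicit(&shrinker_bit, true, memory_order_relaxed);
          }
          return nr;
  }

  int main(void)
  {
          lru_add_one();
          printf("reclaimed %ld, bit=%d\n", shrink_one(),
                 atomic_load_explicit(&shrinker_bit, memory_order_relaxed));
          printf("reclaimed %ld, bit=%d\n", shrink_one(),
                 atomic_load_explicit(&shrinker_bit, memory_order_relaxed));
          return 0;
  }

In the patch below, the second do_shrink_slab() call plays the role of the
recheck, and memcg_set_shrinker_bit() restores the bit when the shrinker
raced with a new object.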

[Results/performance of the patchset]

With the whole patchset applied, the test below shows a significant increase
in performance:

  $echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
  $mkdir /sys/fs/cgroup/memory/ct
  $echo 4000M > /sys/fs/cgroup/memory/ct/memory.kmem.limit_in_bytes
      $for i in `seq 0 4000`; do mkdir /sys/fs/cgroup/memory/ct/$i;
                            echo $$ > /sys/fs/cgroup/memory/ct/$i/cgroup.procs;
                            mkdir -p s/$i; mount -t tmpfs $i s/$i;
                            touch s/$i/file; done

Then, 5 sequential calls of drop_caches:

  $time echo 3 > /proc/sys/vm/drop_caches

1)Before:
  0.00user 13.78system 0:13.78elapsed 99%CPU
  0.00user 5.59system 0:05.60elapsed 99%CPU
  0.00user 5.48system 0:05.48elapsed 99%CPU
  0.00user 8.35system 0:08.35elapsed 99%CPU
  0.00user 8.34system 0:08.35elapsed 99%CPU

2)After:
  0.00user 1.10system 0:01.10elapsed 99%CPU
  0.00user 0.00system 0:00.01elapsed 64%CPU
  0.00user 0.01system 0:00.01elapsed 82%CPU
  0.00user 0.00system 0:00.01elapsed 64%CPU
  0.00user 0.01system 0:00.01elapsed 82%CPU

The results show a performance increase of at least 548 times.

Shakeel Butt tested this patchset with a fork-bomb on his configuration:

 > I created 255 memcgs, 255 ext4 mounts and made each memcg create a
 > file containing few KiBs on corresponding mount. Then in a separate
 > memcg of 200 MiB limit ran a fork-bomb.
 >
 > I ran the "perf record -ag -- sleep 60" and below are the results:
 >
 > Without the patch series:
 > Samples: 4M of event 'cycles', Event count (approx.): 3279403076005
 > +  36.40%            fb.sh  [kernel.kallsyms]    [k] shrink_slab
 > +  18.97%            fb.sh  [kernel.kallsyms]    [k] list_lru_count_one
 > +   6.75%            fb.sh  [kernel.kallsyms]    [k] super_cache_count
 > +   0.49%            fb.sh  [kernel.kallsyms]    [k] down_read_trylock
 > +   0.44%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_iter
 > +   0.27%            fb.sh  [kernel.kallsyms]    [k] up_read
 > +   0.21%            fb.sh  [kernel.kallsyms]    [k] osq_lock
 > +   0.13%            fb.sh  [kernel.kallsyms]    [k] shmem_unused_huge_count
 > +   0.08%            fb.sh  [kernel.kallsyms]    [k] shrink_node_memcg
 > +   0.08%            fb.sh  [kernel.kallsyms]    [k] shrink_node
 >
 > With the patch series:
 > Samples: 4M of event 'cycles', Event count (approx.): 2756866824946
 > +  47.49%            fb.sh  [kernel.kallsyms]    [k] down_read_trylock
 > +  30.72%            fb.sh  [kernel.kallsyms]    [k] up_read
 > +   9.51%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_iter
 > +   1.69%            fb.sh  [kernel.kallsyms]    [k] shrink_node_memcg
 > +   1.35%            fb.sh  [kernel.kallsyms]    [k] mem_cgroup_protected
 > +   1.05%            fb.sh  [kernel.kallsyms]    [k] queued_spin_lock_slowpath
 > +   0.85%            fb.sh  [kernel.kallsyms]    [k] _raw_spin_lock
 > +   0.78%            fb.sh  [kernel.kallsyms]    [k] lruvec_lru_size
 > +   0.57%            fb.sh  [kernel.kallsyms]    [k] shrink_node
 > +   0.54%            fb.sh  [kernel.kallsyms]    [k] queue_work_on
 > +   0.46%            fb.sh  [kernel.kallsyms]    [k] shrink_slab_memcg

[ktk...@virtuozzo.com: v9]
  Link: http://lkml.kernel.org/r/153112561772.4097.11011071937553113003.stgit@localhost.localdomain
Link: http://lkml.kernel.org/r/153063070859.1818.11870882950920963480.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai <ktk...@virtuozzo.com>
Acked-by: Vladimir Davydov <vdavydov....@gmail.com>
Tested-by: Shakeel Butt <shake...@google.com>
Cc: Al Viro <v...@zeniv.linux.org.uk>
Cc: Andrey Ryabinin <aryabi...@virtuozzo.com>
Cc: Chris Wilson <ch...@chris-wilson.co.uk>
Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
Cc: Guenter Roeck <li...@roeck-us.net>
Cc: "Huang, Ying" <ying.hu...@intel.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Josef Bacik <jba...@fb.com>
Cc: Li RongQing <lirongq...@baidu.com>
Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Matthias Kaehlcke <m...@chromium.org>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Michal Hocko <mho...@kernel.org>
Cc: Minchan Kim <minc...@kernel.org>
Cc: Philippe Ombredanne <pombreda...@nexb.com>
Cc: Roman Gushchin <g...@fb.com>
Cc: Sahitya Tummala <stumm...@codeaurora.org>
Cc: Stephen Rothwell <s...@canb.auug.org.au>
Cc: Tetsuo Handa <penguin-ker...@i-love.sakura.ne.jp>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Waiman Long <long...@redhat.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
(cherry picked from commit f90280d6b7963fa8925258ed66b4f567fe73dfea)
Signed-off-by: Andrey Ryabinin <aryabi...@virtuozzo.com>
---
 mm/memcontrol.c |  2 ++
 mm/vmscan.c     | 26 ++++++++++++++++++++++++--
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3edfc5176a07..b4e2537f624b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -434,6 +434,8 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
 
                rcu_read_lock();
                map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
+               /* Pairs with smp mb in shrink_slab() */
+               smp_mb__before_atomic();
                set_bit(shrinker_id, map->map);
                rcu_read_unlock();
        }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 45d116cc62cd..cdf75a69f6f9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -564,8 +564,30 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
                        continue;
 
                ret = do_shrink_slab(&sc, shrinker, priority);
-               if (ret == SHRINK_EMPTY)
-                       ret = 0;
+               if (ret == SHRINK_EMPTY) {
+                       clear_bit(i, map->map);
+                       /*
+                        * After the shrinker reported that it had no objects to
+                        * free, but before we cleared the corresponding bit in
+                        * the memcg shrinker map, a new object might have been
+                        * added. To make sure, we have the bit set in this
+                        * case, we invoke the shrinker one more time and reset
+                        * the bit if it reports that it is not empty anymore.
+                        * The memory barrier here pairs with the barrier in
+                        * memcg_set_shrinker_bit():
+                        *
+                        * list_lru_add()     shrink_slab_memcg()
+                        *   list_add_tail()    clear_bit()
+                        *   <MB>               <MB>
+                        *   set_bit()          do_shrink_slab()
+                        */
+                       smp_mb__after_atomic();
+                       ret = do_shrink_slab(&sc, shrinker, priority);
+                       if (ret == SHRINK_EMPTY)
+                               ret = 0;
+                       else
+                               memcg_set_shrinker_bit(memcg, nid, i);
+               }
                freed += ret;
 
                if (rwsem_is_contended(&shrinker_rwsem)) {
-- 
2.24.1
