[Devel] [PATCH vz8 15/19] mm/vmscan.c: generalize shrink_slab() calls in shrink_node()

2020-03-27 Thread Andrey Ryabinin
From: Vladimir Davydov The patch makes shrink_slab() be called for root_mem_cgroup in the same way as it is called for the rest of the cgroups. This simplifies the logic and improves readability. [ktk...@virtuozzo.com: wrote changelog] Link: http://lkml.kernel.org/r/153063068338.1818.1149608475
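A minimal sketch of the resulting call pattern in shrink_node(), based on the mainline counterpart of this patch (a fragment; the local variables are those of the surrounding kernel function, and mem_cgroup_iter() visits the root memcg too, so the separate root_mem_cgroup call goes away):

  memcg = mem_cgroup_iter(root, NULL, NULL);
  do {
      /* one shrink_slab() per memcg, root included */
      shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
      memcg = mem_cgroup_iter(root, memcg, NULL);
  } while (memcg);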

[Devel] [PATCH vz8 06/19] mm/workingset.c: refactor workingset_init()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Use prealloc_shrinker()/register_shrinker_prepared() instead of register_shrinker(). This will be used in the next patch. [ktk...@virtuozzo.com: v9] Link: http://lkml.kernel.org/r/153112550112.4097.16606173020912323761.stgit@localhost.localdomain Link: http://lkml.kernel.org/
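The two-phase registration pattern, sketched (minor details such as error labels differ in the real patch; the point is that the shrinker exists, but is not yet visible to reclaim, between the two calls):

  static int __init workingset_init(void)
  {
      int ret;

      /* phase 1: allocate the shrinker state (and, later in the
       * series, its id) without exposing it to reclaim */
      ret = prealloc_shrinker(&workingset_shadow_shrinker);
      if (ret)
          return ret;
      /* the next patch initializes the shadow-node list_lru here */
      /* phase 2: make the shrinker visible to reclaim */
      register_shrinker_prepared(&workingset_shadow_shrinker);
      return 0;
  }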

[Devel] [PATCH vz8 05/19] mm, memcg: assign memcg-aware shrinkers bitmap to memcg

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Imagine a big node with many cpus, memory cgroups and containers. Suppose we have 200 containers, each with 10 mounts and 10 cgroups. Container tasks don't touch other containers' mounts. If there is intensive page writing and global reclaim happens, a writing tas
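The per-memcg state this adds, as in the mainline version of the patch: one bitmap per memcg per node, indexed by shrinker::id and reallocated under RCU as ids grow:

  struct memcg_shrinker_map {
      struct rcu_head rcu;
      unsigned long map[0];  /* shrinker_nr_max bits */
  };

Reclaim then tests a shrinker's bit before calling it, so the 200-container setup above stops paying for shrinkers that cannot have anything to free.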

[Devel] [PATCH vz8 07/19] fs/super.c: refactor alloc_super()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Do the two list_lru_init_memcg() calls after prealloc_super(). destroy_unused_super() in the failure path is OK with this. The next patch needs this order. Link: http://lkml.kernel.org/r/153063058712.1818.3382490999719078571.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acke
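The resulting tail of alloc_super(), sketched (a fragment; the exact surrounding code differs, the only point here is the ordering — the shrinker is preallocated before the lrus are initialized, so the lrus can later record the shrinker's id):

  if (prealloc_shrinker(&s->s_shrink))
      goto fail;
  if (list_lru_init_memcg(&s->s_dentry_lru))
      goto fail;
  if (list_lru_init_memcg(&s->s_inode_lru))
      goto fail;
  return s;
fail:
  destroy_unused_super(s);
  return NULL;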

[Devel] [PATCH vz8 16/19] mm: add SHRINK_EMPTY shrinker methods return value

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai We need to distinguish the situation when a shrinker has a very small number of objects (see vfs_pressure_ratio() called from super_cache_count()) from the situation when it has no objects at all. Currently, in both of these cases, shrinker::count_objects() returns 0. The patch introduces
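How a ->count_objects() implementation uses the new value, sketched (example_count and example_lru are placeholder names; vfs_pressure_ratio() may legitimately round a small nonzero count down to 0, which is exactly the case SHRINK_EMPTY separates out):

  static struct list_lru example_lru;  /* hypothetical lru */

  static unsigned long example_count(struct shrinker *shrink,
                                     struct shrink_control *sc)
  {
      unsigned long total = list_lru_shrink_count(&example_lru, sc);

      if (!total)
          return SHRINK_EMPTY;           /* no objects at all */
      return vfs_pressure_ratio(total);  /* may still round to 0 */
  }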

[Devel] [PATCH vz8 11/19] mm/list_lru.c: pass lru argument to memcg_drain_list_lru_node()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai This is just refactoring to allow the next patches to have an lru pointer in memcg_drain_list_lru_node(). Link: http://lkml.kernel.org/r/153063063164.1818.55009531386089350.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acked-by: Vladimir Davydov Tested-by: Shakeel Butt

[Devel] [PATCH vz8 18/19] mm/vmscan.c: move check for SHRINKER_NUMA_AWARE to do_shrink_slab()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai In the case of shrink_slab_memcg() we do not zero nid when the shrinker is not NUMA-aware. This is not a real problem, since currently all memcg-aware shrinkers are NUMA-aware too (we have two: the super_block shrinker and the workingset shrinker), but something may change in the future. Li
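The moved check, sketched (a two-line fragment at the top of do_shrink_slab(), before ->count_objects() runs):

  /* !NUMA-aware shrinkers keep all of their state on "node 0" */
  if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
      shrinkctl->nid = 0;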

[Devel] [PATCH vz8 13/19] mm/list_lru.c: set bit in memcg shrinker bitmap on first list_lru item appearance

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Introduce the set_shrinker_bit() function to set a shrinker-related bit in the memcg shrinker bitmap; set the bit after the first item is added and when a destroyed memcg's items are reparented. This will allow the next patch to make shrinkers be called only in case they have charg
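The hook, sketched from the mainline version of the patch (a fragment of list_lru_add(), running under nlru->lock; the 0 -> 1 transition of l->nr_items is what publishes the shrinker to memcg reclaim):

  if (list_empty(item)) {
      l = list_lru_from_kmem(nlru, item, &memcg);
      list_add_tail(item, &l->list);
      /* set the shrinker bit when the first element is added */
      if (!l->nr_items++)
          memcg_set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
      nlru->nr_items++;
      spin_unlock(&nlru->lock);
      return true;
  }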

[Devel] [PATCH vz8 12/19] mm/memcontrol.c: export mem_cgroup_is_root()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai This will be used in the next patch. Link: http://lkml.kernel.org/r/153063064347.1818.1987011484100392706.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acked-by: Vladimir Davydov Tested-by: Shakeel Butt Cc: Al Viro Cc: Andrey Ryabinin Cc: Chris Wilson Cc: Greg Kro

[Devel] [PATCH vz8 14/19] mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Using the preparations made in the previous patches, in the case of memcg shrink we may skip shrinkers that are not set in the memcg's shrinkers bitmap. To do that, we separate the iterations over memcg-aware and !memcg-aware shrinkers, and memcg-aware shrinkers are chosen via for_each_se
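The core of the new shrink_slab_memcg(), sketched from the mainline counterpart (error handling and the later SHRINK_EMPTY logic omitted; bits not set in the map are never visited, which is what turns the walk from O(n^2) into O(n)):

  struct memcg_shrinker_map *map;
  unsigned long freed = 0;
  int i;

  map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map, true);
  for_each_set_bit(i, map->map, shrinker_nr_max) {
      struct shrink_control sc = {
          .gfp_mask = gfp_mask,
          .nid = nid,
          .memcg = memcg,
      };
      struct shrinker *shrinker = idr_find(&shrinker_idr, i);

      if (!shrinker)
          continue;
      freed += do_shrink_slab(&sc, shrinker, priority);
  }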

[Devel] [PATCH vz8 08/19] fs: propagate shrinker::id to list_lru

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Add a list_lru::shrinker_id field and populate it with the registered shrinker's id. This will be used by the lru code in the next patches to set the correct bit in the memcg shrinkers map once the first memcg-related element appears in the list_lru. Link: http://lkml.kernel.org/r/15306305975
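The propagation, sketched: the memcg-aware lru initializers grow a shrinker argument and the lru records its id (fragments; -1 means "no shrinker"):

  /* in __list_lru_init() */
  lru->shrinker_id = shrinker ? shrinker->id : -1;

  /* callers such as alloc_super() now pass their shrinker in */
  if (list_lru_init_memcg(&s->s_dentry_lru, &s->s_shrink))
      goto fail;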

[Devel] [PATCH vz8 03/19] mm: assign id to every memcg-aware shrinker

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Introduce a shrinker::id number, which is used to enumerate memcg-aware shrinkers. The numbers start from 0, and the code tries to keep them as small as possible. This will be used to represent memcg-aware shrinkers in the memcg shrinkers map. Since all memcg-aware shrinkers ar
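Id allocation, sketched from the mainline version of the patch (shrinker_rwsem and the bitmap-growing step are assumed from context; an IDR keeps ids dense, so the per-memcg bitmaps introduced later in the series stay small):

  static DEFINE_IDR(shrinker_idr);

  static int prealloc_memcg_shrinker(struct shrinker *shrinker)
  {
      int id, ret = -ENOMEM;

      down_write(&shrinker_rwsem);
      id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
      if (id < 0)
          goto unlock;
      shrinker->id = id;
      ret = 0;
  unlock:
      up_write(&shrinker_rwsem);
      return ret;
  }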

[Devel] [PATCH vz8 04/19] mm/memcontrol.c: move up for_each_mem_cgroup{, _tree} defines

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai The next patch requires these defines to be above their current position, so here they are moved up to the declarations. Link: http://lkml.kernel.org/r/153063055665.1818.5200425793649695598.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acked-by: Vladimir Davydov Tested-by: Sha

[Devel] [PATCH vz8 19/19] mm: use special value SHRINKER_REGISTERING instead of list_empty() check

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai The patch introduces a special value, SHRINKER_REGISTERING, to use instead of a list_empty() check to distinguish a registering shrinker from an unregistered one. Why do we need that at all? Shrinker registration is split in two parts. The first one is prealloc_shrinker(), which allocates s
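The sentinel, sketched from the mainline fix (fragments): the id slot is claimed with a poison pointer until registration completes, and memcg reclaim skips such slots.

  #define SHRINKER_REGISTERING ((struct shrinker *)~0UL)

  /* prealloc_shrinker(): reserve the id, but mark it not ready yet */
  id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);

  /* shrink_slab_memcg(): skip ids whose registration isn't finished */
  shrinker = idr_find(&shrinker_idr, i);
  if (unlikely(!shrinker || shrinker == SHRINKER_REGISTERING))
      continue;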

[Devel] [PATCH vz8 01/19] mm/list_lru.c: combine code under the same define

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Patch series "Improve shrink_slab() scalability (old complexity was O(n^2), new is O(n))", v8. This patchset solves the problem of slow shrink_slab() occurring on machines with many shrinkers and memory cgroups (i.e., with many containers). The problem is complexity

[Devel] [PATCH vz8 10/19] mm/list_lru: pass dst_memcg argument to memcg_drain_list_lru_node()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai This is just refactoring to allow the next patches to have a dst_memcg pointer in memcg_drain_list_lru_node(). Link: http://lkml.kernel.org/r/153063062118.1818.2761273817739499749.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acked-by: Vladimir Davydov Tested-by: Sh

[Devel] [PATCH vz8 17/19] mm/vmscan.c: clear shrinker bit if there are no objects related to memcg

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai To avoid further unneeded calls of do_shrink_slab() for shrinkers that no longer have any charged objects in a memcg, their bits have to be cleared. This patch introduces a lockless mechanism to do that without races with parallel list_lru adds. After do_shrink_slab()

[Devel] [PATCH vz8 09/19] mm/list_lru.c: add memcg argument to list_lru_from_kmem()

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai This is just refactoring to allow the next patches to have a memcg pointer in list_lru_from_kmem(). Link: http://lkml.kernel.org/r/153063060664.1818.9541345386733498582.stgit@localhost.localdomain Signed-off-by: Kirill Tkhai Acked-by: Vladimir Davydov Tested-by: Shakeel Butt

[Devel] [PATCH vz8 02/19] mm: introduce CONFIG_MEMCG_KMEM as combination of CONFIG_MEMCG && !CONFIG_SLOB

2020-03-27 Thread Andrey Ryabinin
From: Kirill Tkhai Introduce a new config option to replace the repeating CONFIG_MEMCG && !CONFIG_SLOB pattern. The next patches add a little more memcg+kmem related code, so let's keep the defines clear. Link: http://lkml.kernel.org/r/153063053670.1818.15013136946600481138.stgit@l
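The substitution, sketched (the Kconfig entry follows the mainline version of the patch; the C fragments show the before/after pattern):

  config MEMCG_KMEM
      bool
      depends on MEMCG && !SLOB
      default y

  /* before */
  #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
  /* memcg kmem accounting code */
  #endif

  /* after */
  #ifdef CONFIG_MEMCG_KMEM
  /* memcg kmem accounting code */
  #endif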

[Devel] [PATCH vz8] mm: remove the tswap

2020-03-27 Thread Andrey Ryabinin
The idea of keeping a container's swap in memory, as tswap does, doesn't make much sense, since it's better to give that memory to the container directly. Hence, remove tswap. Signed-off-by: Andrey Ryabinin --- fs/proc/meminfo.c | 8 - mm/Kconfig | 10 -- mm/Makefile | 1 - mm/tsw

[Devel] [PATCH vz8] mm/vmscan: shrink tcache upfront everything else

2020-03-27 Thread Andrey Ryabinin
We don't want to evict page cache or anon pages to swap while there are a lot of reclaimable pages in tcache. Reclaim tcache first, and only after that reclaim the rest, if still required. Notes: 1) Keep tcache's generic shrinkers so that if new tcache pages are generated heavily, the background kswapd thread does not forge