From: Vladimir Davydov
This patch makes shrink_slab() be called for root_mem_cgroup in the
same way as it is called for all other cgroups. This simplifies the
logic and improves readability.
[ktk...@virtuozzo.com: wrote changelog]
Link:
http://lkml.kernel.org/r/153063068338.1818.1149608475
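For illustration, the unified flow looks roughly like this (a sketch
only; the real loop lives in shrink_node() in mm/vmscan.c, and the exact
signatures vary by kernel version):

/* Sketch: global reclaim no longer special-cases root_mem_cgroup;
 * it is simply the first memcg returned by the iterator. */
unsigned long freed = 0;
struct mem_cgroup *memcg = mem_cgroup_iter(root, NULL, NULL);

do {
        freed += shrink_slab(gfp_mask, nid, memcg, priority);
        memcg = mem_cgroup_iter(root, memcg, NULL);
} while (memcg);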
From: Kirill Tkhai
Use prealloc_shrinker()/register_shrinker_prepared() instead of
register_shrinker(). This will be used in the next patch.
[ktk...@virtuozzo.com: v9]
Link:
http://lkml.kernel.org/r/153112550112.4097.16606173020912323761.stgit@localhost.localdomain
Link:
http://lkml.kernel.org/
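The two-step registration referred to here, sketched (error handling
trimmed; my_count/my_scan are hypothetical callbacks):

static struct shrinker my_shrinker = {
        .count_objects = my_count,      /* hypothetical */
        .scan_objects  = my_scan,       /* hypothetical */
        .seeks         = DEFAULT_SEEKS,
};

int err = prealloc_shrinker(&my_shrinker);   /* allocations that may fail */
if (err)
        return err;
/* set up the structures the shrinker will scan (e.g. list_lrus) here */
register_shrinker_prepared(&my_shrinker);    /* publish; cannot fail */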
From: Kirill Tkhai
Imagine a big node with many CPUs, memory cgroups and containers. Say
we have 200 containers, and every container has 10 mounts and 10
cgroups. Container tasks don't touch foreign containers' mounts. If
there is intensive page writing and global reclaim happens, a writing tas
From: Kirill Tkhai
Do the two list_lru_init_memcg() calls after prealloc_super().
destroy_unused_super() in the failure path is OK with this. The next
patch needs this order.
Link:
http://lkml.kernel.org/r/153063058712.1818.3382490999719078571.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acke
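The ordering this asks for, sketched against fs/super.c (prealloc_super()
is the helper named above; arguments and exact signatures differ between
versions of the series):

s = prealloc_super(/* fs type, flags, ... */);  /* sb + preallocated shrinker */
if (!s)
        return ERR_PTR(-ENOMEM);
if (list_lru_init_memcg(&s->s_dentry_lru))
        goto fail;
if (list_lru_init_memcg(&s->s_inode_lru))
        goto fail;
/* on error, destroy_unused_super(s) tears down both lrus and the shrinker */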
From: Kirill Tkhai
We need to distinguish the situation when a shrinker has a very small
number of objects (see vfs_pressure_ratio() called from
super_cache_count()) from the one when it has no objects at all.
Currently, in both of these cases, shrinker::count_objects() returns 0.
The patch introduces
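For illustration only, the distinction can be surfaced from
->count_objects() like this (SHRINK_EMPTY is the sentinel the merged
series ends up using; my_lru_count() is a hypothetical helper):

static unsigned long my_count_objects(struct shrinker *shrink,
                                      struct shrink_control *sc)
{
        unsigned long count = my_lru_count(sc->nid, sc->memcg);  /* hypothetical */

        if (!count)
                return SHRINK_EMPTY;  /* "truly empty", distinct from "skip this time" */
        return vfs_pressure_ratio(count);
}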
From: Kirill Tkhai
This is just refactoring to allow the next patches to have an lru
pointer in memcg_drain_list_lru_node().
Link:
http://lkml.kernel.org/r/153063063164.1818.55009531386089350.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acked-by: Vladimir Davydov
Tested-by: Shakeel Butt
From: Kirill Tkhai
In shrink_slab_memcg() we do not zero nid when the shrinker is not
NUMA-aware. This is not a real problem, since currently all memcg-aware
shrinkers are NUMA-aware too (we have two: the super_block shrinker and
the workingset shrinker), but something may change in the future.
Li
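The fix being described is essentially one line in the shrink_control
setup (a sketch, field names as in mm/vmscan.c of that era):

struct shrink_control sc = {
        .gfp_mask = gfp_mask,
        /* !SHRINKER_NUMA_AWARE shrinkers keep all objects on node 0 */
        .nid      = (shrinker->flags & SHRINKER_NUMA_AWARE) ? nid : 0,
        .memcg    = memcg,
};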
From: Kirill Tkhai
Introduce the set_shrinker_bit() function to set a shrinker-related bit
in the memcg shrinker bitmap; the bit is set after the first item is
added and when a destroyed memcg's items are reparented.
This will allow the next patch to call shrinkers only in case they have
charg
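Roughly, a call site looks like this (a sketch of the list_lru add path;
the merged helper is spelled memcg_set_shrinker_bit() in some kernel
versions, and lru_shrinker_id() is its companion accessor):

/* first object charged to this memcg on this node: mark the owning
 * shrinker as having work, so memcg reclaim will not skip it */
if (!l->nr_items++)
        set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));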
From: Kirill Tkhai
This will be used in the next patch.
Link:
http://lkml.kernel.org/r/153063064347.1818.1987011484100392706.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acked-by: Vladimir Davydov
Tested-by: Shakeel Butt
Cc: Al Viro
Cc: Andrey Ryabinin
Cc: Chris Wilson
Cc: Greg Kro
From: Kirill Tkhai
Using the preparations made in the previous patches, during memcg
shrink we may skip the shrinkers which are not set in the memcg's
shrinker bitmap. To do that, we separate the iterations over memcg-aware
and !memcg-aware shrinkers; memcg-aware shrinkers are chosen via
for_each_se
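The heart of the new memcg-aware walk, sketched (structure follows
shrink_slab_memcg() from the series; locking, refcounting and the
bitmap layout are simplified here):

for_each_set_bit(i, map->map, shrinker_nr_max) {
        struct shrinker *shrinker = idr_find(&shrinker_idr, i);

        if (!shrinker || shrinker == SHRINKER_REGISTERING)
                continue;       /* reserved id, not published yet */
        freed += do_shrink_slab(&sc, shrinker, priority);
}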
From: Kirill Tkhai
Add a list_lru::shrinker_id field and populate it with the registered
shrinker's id.
This will be used by the lru code in the next patches to set the
correct bit in the memcg shrinkers map once the first memcg-related
element appears in a list_lru.
Link:
http://lkml.kernel.org/r/15306305975
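In effect the change is one new field plus plumbing from the init paths,
roughly (a trimmed sketch, not the full struct definition):

struct list_lru {
        struct list_lru_node    *node;
#ifdef CONFIG_MEMCG_KMEM
        int                     shrinker_id;  /* -1 when not owned by a shrinker */
#endif
        /* other fields omitted */
};

/* callers pass the owning shrinker so the id can be recorded, e.g.: */
err = list_lru_init_memcg(&s->s_dentry_lru, &s->s_shrink);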
From: Kirill Tkhai
Introduce a shrinker::id number, which is used to enumerate memcg-aware
shrinkers. The numbers start from 0, and the code tries to keep them as
small as possible.
This will be used to represent memcg-aware shrinkers in the memcg
shrinkers map.
Since all memcg-aware shrinkers ar
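A sketch of how the ids are handed out (the merged code uses an IDR
under shrinker_rwsem; bitmap resizing and error paths are omitted here):

static DEFINE_IDR(shrinker_idr);
static int shrinker_nr_max;

static int prealloc_memcg_shrinker(struct shrinker *shrinker)
{
        int id;

        down_write(&shrinker_rwsem);
        /* idr_alloc() returns the lowest free id, which keeps the ids,
         * and therefore the per-memcg bitmaps, as small as possible */
        id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
        if (id >= 0) {
                shrinker->id = id;
                shrinker_nr_max = max(shrinker_nr_max, id + 1);
        }
        up_write(&shrinker_rwsem);
        return id < 0 ? id : 0;
}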
From: Kirill Tkhai
The next patch requires these defines to be above their current
position, so they are moved up to the declarations.
Link:
http://lkml.kernel.org/r/153063055665.1818.5200425793649695598.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acked-by: Vladimir Davydov
Tested-by: Sha
From: Kirill Tkhai
The patch introduces a special value, SHRINKER_REGISTERING, to be used
instead of list_empty() to distinguish a shrinker that is still being
registered from an unregistered one. Why do we need that at all?
Shrinker registration is split into two parts. The first one is
prealloc_shrinker(), which allocates s
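The sentinel is used roughly as follows (a sketch; the value itself only
needs to be a non-NULL pointer that can never alias a real shrinker):

#define SHRINKER_REGISTERING ((struct shrinker *)~0UL)

/* prealloc_shrinker(): reserve the id but do not publish the shrinker */
id = idr_alloc(&shrinker_idr, SHRINKER_REGISTERING, 0, 0, GFP_KERNEL);

/* register_shrinker_prepared(): only now expose the real pointer */
idr_replace(&shrinker_idr, shrinker, shrinker->id);

/* reclaim side: skip ids that are reserved but not yet registered */
shrinker = idr_find(&shrinker_idr, i);
if (!shrinker || shrinker == SHRINKER_REGISTERING)
        continue;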
From: Kirill Tkhai
Patch series "Improve shrink_slab() scalability (old complexity was O(n^2), new
is O(n))", v8.
This patcheset solves the problem with slow shrink_slab() occuring on
the machines having many shrinkers and memory cgroups (i.e., with many
containers). The problem is complexity
From: Kirill Tkhai
This is just refactoring to allow the next patches to have a dst_memcg
pointer in memcg_drain_list_lru_node().
Link:
http://lkml.kernel.org/r/153063062118.1818.2761273817739499749.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acked-by: Vladimir Davydov
Tested-by: Sh
From: Kirill Tkhai
To avoid further unneeded calls of do_shrink_slab() for shrinkers which
no longer have any charged objects in a memcg, their bits have to be
cleared.
This patch introduces a lockless mechanism to do that without races
with parallel list_lru adds. After do_shrink_slab()
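The pattern on the reclaim side, sketched (taken in spirit from the
series' shrink_slab_memcg(); the barrier pairs with one on the list_lru
add side):

ret = do_shrink_slab(&sc, shrinker, priority);
if (ret == SHRINK_EMPTY) {
        clear_bit(i, map->map);
        /* pairs with a barrier on the add side: any object added before
         * the clear must be visible to the re-check below */
        smp_mb__after_atomic();
        ret = do_shrink_slab(&sc, shrinker, priority);
        if (ret == SHRINK_EMPTY)
                ret = 0;
        else
                set_shrinker_bit(memcg, nid, i);  /* raced with an add: re-set */
}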
From: Kirill Tkhai
This is just refactoring to allow the next patches to have a memcg
pointer in list_lru_from_kmem().
Link:
http://lkml.kernel.org/r/153063060664.1818.9541345386733498582.stgit@localhost.localdomain
Signed-off-by: Kirill Tkhai
Acked-by: Vladimir Davydov
Tested-by: Shakeel Butt
From: Kirill Tkhai
Introduce a new config option to replace the repeated CONFIG_MEMCG &&
!CONFIG_SLOB pattern. The next patches add a little more memcg+kmem
related code, so let's keep the defines clear.
Link:
http://lkml.kernel.org/r/153063053670.1818.15013136946600481138.stgit@l
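The option this introduces is CONFIG_MEMCG_KMEM (enabled whenever MEMCG
is on and SLOB is off), so the repeated test collapses as shown below;
treat the exact Kconfig wiring as approximate:

/* before */
#if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
/* memcg + kmem accounting declarations */
#endif

/* after */
#ifdef CONFIG_MEMCG_KMEM
/* memcg + kmem accounting declarations */
#endif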
The idea of keeping a container's swap in memory, like tswap does,
doesn't make much sense, since it's better to give more memory to the
container. Hence, remove tswap.
Signed-off-by: Andrey Ryabinin
---
fs/proc/meminfo.c |  8 -
mm/Kconfig        | 10 --
mm/Makefile       |  1 -
mm/tsw
We don't want to evict page cache or anon pages to swap while there are
a lot of reclaimable pages in tcache. Reclaim tcache first, and only
after that reclaim the rest, if still required.
Notes:
1) Keep the tcache generic shrinkers so that, if new tcache pages are
generated heavily, the background kswapd thread does not forge