to iterate over slab_caches and filter out
non-root kmem_caches.
It allows removing a lot of config-dependent code and two pointers
from the kmem_cache structure.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
mm/slab.c | 1 -
mm/slab.h | 17
in the memcg_slabinfo attribute.
Following patches in the series will simplify the kmem_cache creation.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
include/linux/memcontrol.h | 5 +-
include/linux/slab.h | 5 +-
mm/memcontrol.c | 163 +++---
mm/slab.c
are not
preventing the memory cgroup from being released after being deleted
by a user.
Signed-off-by: Roman Gushchin
---
tools/testing/selftests/cgroup/.gitignore | 1 +
tools/testing/selftests/cgroup/Makefile | 2 +
tools/testing/selftests/cgroup/test_kmem.c | 382 +
3 files
r must ensure the lifetime of the cgroup, e.g. grab
rcu_read_lock or css_set_lock.
Suggested-by: Johannes Weiner
Signed-off-by: Roman Gushchin
---
include/linux/memcontrol.h | 51 +++
mm/memcontrol.c | 288 -
2 files changed, 338 insertions
d from pre_alloc_hook to
post_alloc_hook. Then in case of successful allocation(s) it's
getting stored in the page->obj_cgroups vector.
The objcg-obtaining part looks a bit bulky now, but it will be simplified
by next commits in the series.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlasti
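The flow this excerpt describes — obtain the objcg once, then record it per object after a successful allocation — can be modeled roughly as follows. All types, field names, and the OBJS_PER_SLAB bound are simplified userspace stand-ins, not the kernel's real definitions:

```c
#include <assert.h>
#include <stddef.h>

#define OBJS_PER_SLAB 64	/* arbitrary bound for this sketch */

struct obj_cgroup { int id; };	/* stand-in for the real obj_cgroup */

/* Stand-in for a slab page carrying a page->obj_cgroups vector. */
struct slab_page {
	char *base;		/* what page_address() would return */
	unsigned int obj_size;
	struct obj_cgroup *obj_cgroups[OBJS_PER_SLAB];
};

/* On successful allocation, store the objcg at the object's index. */
static void memcg_slab_post_alloc_hook(struct slab_page *page, void *obj,
				       struct obj_cgroup *objcg)
{
	unsigned int idx = (unsigned int)(((char *)obj - page->base) /
					  page->obj_size);

	page->obj_cgroups[idx] = objcg;
}
```

The point of the sketch is only the indexing scheme: one pointer slot per object, located by the object's offset within the page.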
memcg_accumulate_slabinfo() is never called with a non-root
kmem_cache as its first argument, so the is_root_cache(s) check
is redundant and can be removed without any functional change.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
mm/slab_common.c | 3 ---
1 file changed, 3
To make the memcg_kmem_bypass() function available outside of
memcontrol.c, let's move it to memcontrol.h. The function
is small and fits naturally among the static inline helpers there.
It will be used from the slab code.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
include
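The shape of such a bypass helper can be sketched like this. The interrupt flag and the task struct below are userspace stand-ins for `in_interrupt()` and `current`; the real kernel definition may differ:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PF_KTHREAD 0x00200000	/* flag value borrowed for illustration */

/* Stand-in for the struct task_struct fields we care about. */
struct task_stub {
	unsigned int flags;
	void *mm;		/* NULL for kernel threads */
};

/*
 * Skip kmem accounting in interrupt context and for kernel threads,
 * where there is no meaningful memory cgroup to charge.
 */
static inline bool memcg_kmem_bypass(bool in_interrupt,
				     const struct task_stub *task)
{
	if (in_interrupt)
		return true;
	if (!task->mm || (task->flags & PF_KTHREAD))
		return true;
	return false;
}
```

Being a short predicate with no locking, it is the kind of function that can live in a header as a static inline, which is what the move to memcontrol.h enables.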
nd it also takes a page address instead of a page pointer.
Let's remove slab_index() and replace it with the new helper
__obj_to_index(), which takes a page address. obj_to_index()
will be a simple wrapper taking a page pointer and passing
page_address(page) into __obj_to_index().
Signed-off-by: Roma
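The replacement described in this entry can be sketched as a tiny userspace model. The struct below is a simplified stand-in for the kernel's real `struct kmem_cache`, and the page base address stands in for `page_address(page)`:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct kmem_cache: only the object size matters. */
struct kmem_cache {
	unsigned int size;	/* object stride within a slab page */
};

/* Index of an object, given the slab page's base address. */
static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
					  const char *addr, const void *obj)
{
	return (unsigned int)(((const char *)obj - addr) / cache->size);
}

/*
 * obj_to_index() would then be a thin wrapper passing page_address(page)
 * as addr; modeled here by the explicit base pointer.
 */
```

The split lets callers that already hold the page address avoid a second page-to-address lookup.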
for a task, a bpf-based tracing
tool can be used, which can easily keep track of all slab allocations
belonging to a memory cgroup.
Signed-off-by: Roman Gushchin
Acked-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
---
mm/memcontrol.c | 3 ---
mm/slab_common.c | 31
twice
(not all kmem_caches have a memcg clone), some additional memory
savings are expected. On my devvm it additionally saves about 3.5%
of slab memory.
Suggested-by: Johannes Weiner
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
include/linux/slab.h | 2 -
include/linux
f the current mm tree: added css_get() in
mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
2) I've reformatted commit references in the commit log to make
checkpatch.pl happy.
Signed-off-by: Johannes Weiner
Signed-off-by: Roman Gushchin
Acked-by: Roman
)charge_slab_page()
functions. The idea is to keep all slab pages accounted as slab pages
on system level.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
mm/slab.h | 173 --
1 file changed, 77 insertions(+), 96 deletions(-)
diff
-sized internal storage.
Signed-off-by: Roman Gushchin
Acked-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
---
drivers/base/node.c | 2 +-
include/linux/mmzone.h | 10 ++
include/linux/vmstat.h | 14 +-
mm/memcontrol.c | 14 ++
mm/vmstat.c
To convert memcg and lruvec slab counters to bytes there must be
a way to change these counters without touching node counters.
Factor out __mod_memcg_lruvec_state() out of __mod_lruvec_state().
Signed-off-by: Roman Gushchin
Acked-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
---
include
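The factoring can be illustrated with a toy model. The counter layout and names below are simplified stand-ins for the kernel's per-node, memcg, and lruvec statistics:

```c
#include <assert.h>

/* Stand-in bundling the three counter levels touched by the real code. */
struct lruvec_stub {
	long node_stat;		/* per-node (pgdat) counter */
	long memcg_stat;	/* memcg-level counter */
	long lruvec_stat;	/* lruvec-level counter */
};

/* The factored-out part: update memcg and lruvec counters only. */
static void __mod_memcg_lruvec_state(struct lruvec_stub *lruvec, long delta)
{
	lruvec->memcg_stat += delta;
	lruvec->lruvec_stat += delta;
}

/* The full update still touches the node counter, then delegates. */
static void __mod_lruvec_state(struct lruvec_stub *lruvec, long delta)
{
	lruvec->node_stat += delta;
	__mod_memcg_lruvec_state(lruvec, delta);
}
```

With this split, byte-granular slab updates can call the memcg/lruvec part directly without perturbing the page-based node counters.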
t; of NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE
>
> Thus make it read-only.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
> ---
> mm/slub.c | 11 +--
> 1 file changed, 1 insertion(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.
g boot with same granularity.
>
> Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
Thanks!
> ---
> Documentation/vm/slub.rst | 7 ++---
> mm/slub.c | 62 ++-
> 2 files changed, 5 insertions(+), 64 deletions(-)
>
> diff
o that.
>
> [1]
> https://lore.kernel.org/r/cag48ez31pp--h6_fzvyfj4h86qyczafpdxtjhueean+7vje...@mail.gmail.com
>
> Reported-by: Jann Horn
> Signed-off-by: Vlastimil Babka
Acked-by: Roman Gushchin
Thanks!
> ---
> mm/slub.c | 19 +--
> 1 file chan
l-vji...@codeaurora.org
> [2]
> https://lore.kernel.org/r/cag48ez31pp--h6_fzvyfj4h86qyczafpdxtjhueean+7vje...@mail.gmail.com
> [3]
> https://lore.kernel.org/r/1383cd32-1ddc-4dac-b5f8-9c42282fa...@codeaurora.org
>
> Reported-by: Vijayanand Jitta
> Reported-by: Jann Horn
> Signed-o
On Fri, Jun 05, 2020 at 08:07:51PM +, Dennis Zhou wrote:
> On Thu, May 28, 2020 at 04:25:08PM -0700, Roman Gushchin wrote:
> > Add a simple test to check the percpu memory accounting.
> > The test creates a cgroup tree with 1000 child cgroups
> > and checks val
On Fri, Jun 05, 2020 at 07:49:53PM +, Dennis Zhou wrote:
> On Thu, May 28, 2020 at 04:25:05PM -0700, Roman Gushchin wrote:
> > Percpu memory is becoming more and more widely used by various
> > subsystems, and the total amount of memory controlled by the percpu
> > allo
On Fri, Jun 05, 2020 at 06:24:33PM +0200, Vlastimil Babka wrote:
> On 5/28/20 12:34 AM, Roman Gushchin wrote:
> > diff --git a/mm/slab.h b/mm/slab.h
> > index c49a863adb63..57b425d623e5 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> ...
> > @@ -526,8 +430,
r different CMA areas.
>
> Cc: Roman Gushchin
> Signed-off-by: Barry Song
Acked-by: Roman Gushchin
> ---
> mm/hugetlb.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index bcabbe02192b..4ebc4edc3b40 100644
will return
> -ENOMEM if users set name parameter as NULL.
>
> Cc: Roman Gushchin
> Signed-off-by: Barry Song
Acked-by: Roman Gushchin
Thanks!
> ---
> mm/cma.c | 13 ++---
> mm/cma.h | 4 +++-
> 2 files changed, 9 insertions(+), 8 deletions(-)
>
>
On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> hugetlb_cma_reserve() is called at the wrong place. numa_init has not been
> done yet. so all reserved memory will be located at node0.
>
> Cc: Roman Gushchin
> Signed-off-by: Barry Song
Acked-by: Roman Gu
fix
3) separated memory.kmem.slabinfo deprecation into a separate patch,
provided a drgn-based replacement
4) rebased on top of the current mm tree
RFC:
https://lwn.net/Articles/798605/
Johannes Weiner (1):
mm: memcontrol: decouple reference counting from page accounting
Roman Gushc
-off-by: Roman Gushchin
Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
---
drivers/base/node.c | 4 ++--
fs/proc/meminfo.c | 4 ++--
include/linux/mmzone.h | 16 +---
kernel/power/snapshot.c | 2 +-
mm/memcontrol.c | 11 ---
mm/oom_kill.c
.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
include/linux/memcontrol.h | 1 -
mm/memcontrol.c | 48 +-
mm/slab.h | 2 ++
mm/slab_common.c | 22 +
4 files changed, 15 insertions(+), 58
t's always set the lowest bit in the obj_cgroup case.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
---
include/linux/mm_types.h | 5 +++-
include/linux/slab_def.h | 6 +
include/linux/slub_def.h | 5
mm/memcontrol.c | 17 +++---
mm/slab.h |
The memcg_kmem_get_cache() function became really trivial,
so let's just inline it into the single call point:
memcg_slab_pre_alloc_hook().
It will make the code less bulky and can also help the compiler
to generate better code.
Signed-off-by: Roman Gushchin
Reviewed-by: Vlastimil Babka
On Wed, May 27, 2020 at 07:00:30PM +0200, Vlastimil Babka wrote:
> On 5/26/20 5:45 PM, Roman Gushchin wrote:
> > On Tue, May 26, 2020 at 05:24:46PM +0200, Vlastimil Babka wrote:
> >> 1 << 20 ?
> >>
> >> Anyway I was getting this:
> >> not ok 1
On Wed, May 27, 2020 at 03:56:14PM -0400, Johannes Weiner wrote:
> On Tue, May 26, 2020 at 02:42:14PM -0700, Roman Gushchin wrote:
> > @@ -257,6 +257,98 @@ struct cgroup_subsys_state *vmpressure_to_css(struct
> > vmpressure *vmpr)
> > }
> >
> > #ifdef CONFIG_
On Wed, May 27, 2020 at 06:01:20PM +0200, Vlastimil Babka wrote:
> On 5/26/20 11:42 PM, Roman Gushchin wrote:
>
> > @@ -549,17 +503,14 @@ static __always_inline int charge_slab_page(struct
> > page *page,
> >
On Wed, May 27, 2020 at 05:54:50PM +0200, Vlastimil Babka wrote:
> On 5/26/20 11:42 PM, Roman Gushchin wrote:
> > Deprecate memory.kmem.slabinfo.
> >
> > An empty file will be presented if corresponding config options are
> > enabled.
> >
> > The inter
On Wed, May 27, 2020 at 01:43:16PM +0200, Vlastimil Babka wrote:
> On 5/26/20 11:42 PM, Roman Gushchin wrote:
> > In order to prepare for per-object slab memory accounting, convert
> > NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
> >
> > To m
RFC:
https://lwn.net/Articles/798605/
Johannes Weiner (1):
mm: memcontrol: decouple reference counting from page accounting
Roman Gushchin (18):
mm: memcg: factor out memcg- and lruvec-level changes out of
__mod_lruvec_state()
mm: memcg: prepare for byte-sized vmstat items
mm: memcg
-off-by: Roman Gushchin
---
drivers/base/node.c | 4 ++--
fs/proc/meminfo.c | 4 ++--
include/linux/mmzone.h | 16 +---
kernel/power/snapshot.c | 2 +-
mm/memcontrol.c | 11 ---
mm/oom_kill.c | 2 +-
mm/page_alloc.c | 8
mm/slab.h
for a task, a bpf-based tracing
tool can be used, which can easily keep track of all slab allocations
belonging to a memory cgroup.
Signed-off-by: Roman Gushchin
---
mm/memcontrol.c | 3 ---
mm/slab_common.c | 31 ---
2 files changed, 4 insertions(+), 30 deletions(-)
diff
r must ensure the lifetime of the cgroup, e.g. grab
rcu_read_lock or css_set_lock.
Suggested-by: Johannes Weiner
Signed-off-by: Roman Gushchin
---
include/linux/memcontrol.h | 51 +++
mm/memcontrol.c | 278 -
2 files changed, 328 insertions
On Tue, May 26, 2020 at 12:52:24PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:47 PM, Roman Gushchin wrote:
> > Currently there are two lists of kmem_caches:
> > 1) slab_caches, which contains all kmem_caches,
> > 2) slab_root_caches, which contains
On Mon, May 25, 2020 at 06:10:55PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Switch to per-object accounting of non-root slab objects.
> >
> > Charging is performed using obj_cgroup API in the pre_alloc hook.
> > Obj_cgro
On Mon, May 25, 2020 at 05:07:22PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Store the obj_cgroup pointer in the corresponding place of
> > page->obj_cgroups for each allocated non-root slab object.
> > Make sure that each allocated
On Fri, May 22, 2020 at 08:27:15PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > Allocate and release memory to store obj_cgroup pointers for each
> > non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> >
On Tue, May 26, 2020 at 05:24:46PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:47 PM, Roman Gushchin wrote:
> > Add some tests to cover the kernel memory accounting functionality.
> > These are covering some issues (and changes) we had recently.
> >
> > 1) A
On Thu, May 21, 2020 at 11:57:12AM +0200, Vlastimil Babka wrote:
> On 5/20/20 9:26 PM, Roman Gushchin wrote:
> > On Wed, May 20, 2020 at 02:25:22PM +0200, Vlastimil Babka wrote:
> >>
> >> However __mod_node_page_state() and mode_node_state() will now branch
> >&g
On Thu, May 21, 2020 at 01:01:38PM +0200, Vlastimil Babka wrote:
> On 5/20/20 11:00 PM, Roman Gushchin wrote:
> >
> > From beeaecdac85c3a395dcfb99944dc8c858b541cbf Mon Sep 17 00:00:00 2001
> > From: Roman Gushchin
> > Date: Mon, 29 Jul 2019 18:18:42 -0700
> > Sub
On Mon, May 18, 2020 at 10:20:50AM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is no difference between two migration callback functions,
> alloc_huge_page_node() and alloc_huge_page_nodemask(), except
> __GFP_THISNODE handling. This patch adds one more field on to
> the
On Mon, May 18, 2020 at 10:20:49AM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> Currently, page allocation functions for migration requires some arguments.
> More worse, in the following patch, more argument will be needed to unify
> the similar functions. To simplify them, in this
On Mon, May 18, 2020 at 10:20:47AM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> For locality, it's better to migrate the page to the same node
> rather than the node of the current caller's cpu.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Roman Gushchin
> ---
&
On Wed, May 20, 2020 at 03:51:45PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > This commit implements SLUB version of the obj_to_index() function,
> > which will be required to calculate the offset of obj_cgroup in the
> > obj_cgroups v
On Wed, May 20, 2020 at 11:51:51AM +0200, Vlastimil Babka wrote:
> On 5/13/20 2:57 AM, Roman Gushchin wrote:
> >
> > Btw, I'm trying to build up a prototype with an embedded memcg pointer,
> > but it seems to be way more tricky than I thought. It requires changes to
> &
On Wed, May 20, 2020 at 02:25:22PM +0200, Vlastimil Babka wrote:
> On 4/22/20 10:46 PM, Roman Gushchin wrote:
> > In order to prepare for per-object slab memory accounting, convert
> > NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE vmstat items to bytes.
> >
> > To m
On Fri, May 15, 2020 at 09:45:30PM +, Christoph Lameter wrote:
> On Tue, 12 May 2020, Roman Gushchin wrote:
>
> > > Add it to the metadata at the end of the object. Like the debugging
> > > information or the pointer for RCU freeing.
> >
> > Enabling d
On Tue, May 12, 2020 at 06:56:45PM -0400, Johannes Weiner wrote:
> On Thu, May 07, 2020 at 03:26:31PM -0700, Roman Gushchin wrote:
> > On Thu, May 07, 2020 at 05:03:14PM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 22, 2020 at 01:46:55PM -0700, Roman Gushchin wrote
On Fri, May 08, 2020 at 09:35:54PM +, Christoph Lameter wrote:
> On Mon, 4 May 2020, Roman Gushchin wrote:
>
> > On Sat, May 02, 2020 at 11:54:09PM +, Christoph Lameter wrote:
> > > On Thu, 30 Apr 2020, Roman Gushchin wrote:
> > >
> > > > Sorry, b
On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
> >
> > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > > On Fri, May 15, 2020 at 6:24 AM Johannes Weiner
> > > wrote:
>
On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> On Fri, May 15, 2020 at 6:24 AM Johannes Weiner wrote:
> >
> > On Fri, May 15, 2020 at 10:29:55AM +0200, Michal Hocko wrote:
> > > On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> > > > On Fri, May 8, 2020 at 2:44 PM Johannes Weiner
only way
> to get these stats. So, make these stats consistent.
>
> Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Thanks!
On Thu, May 07, 2020 at 05:03:14PM -0400, Johannes Weiner wrote:
> On Wed, Apr 22, 2020 at 01:46:55PM -0700, Roman Gushchin wrote:
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -257,6 +257,78 @@ struct cgroup_subsys_state *vmpressure_to_css(struct
claim
> A.elow = 0
> B.elow = B.low
> C.elow = C.low
>
> and global reclaim could see the above and then
> B.elow = C.elow = 0 because children_low_usage > A.elow
>
> Which means that protected memcgs would get reclaimed.
>
> In future we would like to make mem_cgroup_p
On Sat, May 02, 2020 at 11:54:09PM +, Christoph Lameter wrote:
> On Thu, 30 Apr 2020, Roman Gushchin wrote:
>
> > Sorry, but what exactly do you mean?
>
> I think the right approach is to add a pointer to each slab object for
> memcg support.
>
As I understand, emb
On Thu, Apr 30, 2020 at 03:30:49PM -0400, Johannes Weiner wrote:
> On Thu, Apr 30, 2020 at 12:06:10PM -0700, Roman Gushchin wrote:
> > On Thu, Apr 30, 2020 at 11:27:12AM -0700, Shakeel Butt wrote:
> > > @@ -6106,7 +6107,7 @@ static ssize_t memory_max_write(struct
>
Hello, Shakeel!
On Thu, Apr 30, 2020 at 11:27:12AM -0700, Shakeel Butt wrote:
> Lowering memory.max can trigger an oom-kill if the reclaim does not
> succeed. However if oom-killer does not find a process for killing, it
> dumps a lot of warnings.
Makes total sense to me.
>
> Deleting a memcg
* Effective values of the reclaim targets are ignored so they
> + * can be stale. Have a look at mem_cgroup_protection for more
> + * details.
> + * TODO: calculation should be more robust so that we do not need
> + * that special casing.
> + */
> if (memcg == root)
> return MEMCG_PROT_NONE;
Acked-by: Roman Gushchin
Thanks!
On Thu, Apr 30, 2020 at 04:29:50PM +, Christoph Lameter wrote:
> On Mon, 27 Apr 2020, Roman Gushchin wrote:
>
> > > Why do you need this? Just slap a pointer to the cgroup as additional
> > > metadata onto the slab object. Is that not much simpler, safer and
On Tue, Apr 28, 2020 at 05:10:33PM +0800, Yang Yingliang wrote:
> Hi,
>
> On 2020/4/28 1:24, Roman Gushchin wrote:
> > On Mon, Apr 27, 2020 at 01:13:04PM -0400, Johannes Weiner wrote:
> > > +cc Roman who has been looking the most at this area
> > >
> > >
On Mon, Apr 27, 2020 at 09:46:38AM -0700, Roman Gushchin wrote:
> On Mon, Apr 27, 2020 at 04:21:01PM +, Christoph Lameter wrote:
> > On Fri, 24 Apr 2020, Roman Gushchin wrote:
> >
> > > > The patch seems to only use it for setup and debugging? It is used f
On Tue, Oct 22, 2019 at 05:42:49PM -0400, Johannes Weiner wrote:
> On Tue, Oct 22, 2019 at 07:56:33PM +0000, Roman Gushchin wrote:
> > On Tue, Oct 22, 2019 at 10:48:00AM -0400, Johannes Weiner wrote:
> > > - /* Record the group's
mem_cgroup
> *memcg,
>* will pick up pages from other mem cgroup's as well. We hack
> * the priority and make it zero.
>*/
> - shrink_node_memcg(pgdat, memcg, );
> + shrink_lruvec(lruvec, );
>
> trace_mm_vmscan_memcg_softlimit_reclaim_end(
> cgroup_ino(memcg->css.cgroup),
> --
> 2.23.0
>
Reviewed-by: Roman Gushchin
action layer for that node-memcg intersection.
>
> Introduce lruvec->flags and LRUVEC_CONGESTED. Then track that at the
> reclaim root level, which is either the NUMA node for global reclaim,
> or the cgroup-node intersection for cgroup reclaim.
Good idea!
Reviewed-by: Roman Gushchin
&