On Sun, Jul 22, 2018 at 11:44 PM Michal Hocko wrote:
>
> On Thu 19-07-18 09:23:10, Shakeel Butt wrote:
> > On Thu, Jul 19, 2018 at 3:43 AM Michal Hocko wrote:
> > >
> > > [CC Andrew]
> > >
> > > On Thu 19-07-18 18:06:47, Jing Xia wrote:
>
On Sun, Jun 17, 2018 at 2:57 PM Alexey Dobriyan wrote:
>
> commit 24074a35c5c975c94cd9691ae962855333aac47f
> ("proc: Make inline name size calculation automatic")
> started to put PDE allocations into kmalloc-256 which is unnecessary as
> ~40 character names are very rare.
>
> Put allocation back
On Thu, Jul 19, 2018 at 3:43 AM Michal Hocko wrote:
>
> [CC Andrew]
>
> On Thu 19-07-18 18:06:47, Jing Xia wrote:
> > It was reported that a kernel crash happened in mem_cgroup_iter(),
> > which can be triggered if the legacy cgroup-v1 non-hierarchical
> > mode is used.
> >
> > Unable to handle
f-by: Kirill Tkhai
Reviewed-by: Shakeel Butt
> ---
> mm/vmscan.c | 11 +++--------
> 1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9918bfc1d2f9..636657213b9b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -445,16
On Wed, Jul 18, 2018 at 10:58 AM Bruce Merry wrote:
>
> On 18 July 2018 at 19:48, Shakeel Butt wrote:
> > On Wed, Jul 18, 2018 at 10:40 AM Bruce Merry wrote:
> >> > Yes, very easy to produce zombies, though I don't think kernel
> >> > provides an
On Wed, Jul 18, 2018 at 10:40 AM Bruce Merry wrote:
>
> On 18 July 2018 at 17:49, Shakeel Butt wrote:
> > On Wed, Jul 18, 2018 at 8:37 AM Bruce Merry wrote:
> >> That sounds promising. Is there any way to tell how many zombies there
> >> are, and is there any way
On Wed, Jul 18, 2018 at 8:37 AM Bruce Merry wrote:
>
> On 18 July 2018 at 17:26, Shakeel Butt wrote:
> > On Wed, Jul 18, 2018 at 7:29 AM Bruce Merry wrote:
> > It seems like you are using cgroup-v1. How many nodes are there in
> > your memcg tree and also how many c
On Wed, Jul 18, 2018 at 8:27 AM Bruce Merry wrote:
>
> On 18 July 2018 at 16:47, Michal Hocko wrote:
> >> Thanks for looking into this. I'm not familiar with ftrace. Can you
> >> give me a specific command line to run? Based on "perf record cat
> >> /sys/fs/cgroup/memory/memory.stat"/"perf
On Wed, Jul 18, 2018 at 7:29 AM Bruce Merry wrote:
>
> On 18 July 2018 at 12:42, Michal Hocko wrote:
> > [CC some more people]
> >
> > On Tue 17-07-18 21:23:07, Andrew Morton wrote:
> >> (cc linux-mm)
> >>
> >> On Tue, 3 Jul 2018 08:43:23 +0200 Bruce Merry wrote:
> >>
> >> > Hi
> >> >
> >> >
On Sun, Jul 15, 2018 at 6:50 PM Yafang Shao wrote:
>
> On Sun, Jul 15, 2018 at 11:04 PM, Shakeel Butt wrote:
> > On Sun, Jul 15, 2018 at 1:02 AM Yafang Shao wrote:
> >>
> >> On Sun, Jul 15, 2018 at 2:34 PM, Shakeel Butt wrote:
> >> > On Sat, Jul 14, 2
On Sun, Jul 15, 2018 at 1:02 AM Yafang Shao wrote:
>
> On Sun, Jul 15, 2018 at 2:34 PM, Shakeel Butt wrote:
> > On Sat, Jul 14, 2018 at 10:26 PM Yafang Shao wrote:
> >>
> >> On Sun, Jul 15, 2018 at 12:25 PM, Shakeel Butt wrote:
> >> > On Sat,
On Sat, Jul 14, 2018 at 10:26 PM Yafang Shao wrote:
>
> On Sun, Jul 15, 2018 at 12:25 PM, Shakeel Butt wrote:
> > On Sat, Jul 14, 2018 at 7:10 PM Yafang Shao wrote:
> >>
> >> On Sat, Jul 14, 2018 at 11:38 PM, Shakeel Butt wrote:
> >> > On Sat,
On Sat, Jul 14, 2018 at 7:10 PM Yafang Shao wrote:
>
> On Sat, Jul 14, 2018 at 11:38 PM, Shakeel Butt wrote:
> > On Sat, Jul 14, 2018 at 1:32 AM Yafang Shao wrote:
> >>
> >> try_charge may be executed in the packet receive path, which is in interrupt
> &
On Sat, Jul 14, 2018 at 1:32 AM Yafang Shao wrote:
>
> try_charge may be executed in the packet receive path, which is in interrupt
> context.
> In this situation, 'current' is the interrupted task, which may have
> no relation to the rx softirq, so it makes no sense to use 'current'.
>
Have you
On Tue, Jul 3, 2018 at 12:25 PM Matthew Wilcox wrote:
>
> On Tue, Jul 03, 2018 at 12:19:35PM -0700, Shakeel Butt wrote:
> > On Tue, Jul 3, 2018 at 12:13 PM Kirill Tkhai wrote:
> > > > Do we really have so very many !memcg-aware shrinkers?
> > > >
> &g
On Tue, Jul 3, 2018 at 12:13 PM Kirill Tkhai wrote:
>
> On 03.07.2018 20:58, Matthew Wilcox wrote:
> > On Tue, Jul 03, 2018 at 06:46:57PM +0300, Kirill Tkhai wrote:
> >> shrinker_idr now contains only memcg-aware shrinkers, so all bits from
> >> memcg map
> >> may be potentially populated. In
The flag GFP_ATOMIC already contains __GFP_HIGH, so there is no need to
explicitly OR in __GFP_HIGH again. Remove the unnecessary __GFP_HIGH.
Signed-off-by: Shakeel Butt
---
block/blk-ioc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
On Tue, Jul 3, 2018 at 9:17 AM Kirill Tkhai wrote:
>
> Hi, Shakeel,
>
> On 03.07.2018 18:46, Shakeel Butt wrote:
> > On Tue, Jul 3, 2018 at 8:27 AM Matthew Wilcox wrote:
> >>
> >> On Tue, Jul 03, 2018 at 06:09:05PM +0300, Kirill Tkhai wrote:
> >&g
On Tue, Jul 3, 2018 at 8:27 AM Matthew Wilcox wrote:
>
> On Tue, Jul 03, 2018 at 06:09:05PM +0300, Kirill Tkhai wrote:
> > +++ b/mm/vmscan.c
> > @@ -169,6 +169,49 @@ unsigned long vm_total_pages;
> > static LIST_HEAD(shrinker_list);
> > static DECLARE_RWSEM(shrinker_rwsem);
> >
> > +#ifdef
The patch "fs, mm: account buffer_head to kmemcg" missed adding the
__GFP_ACCOUNT flag to the gfp mask for directed memcg charging, so add
it here. Andrew, please squash this into the original patch.
Signed-off-by: Shakeel Butt
---
fs/buffer.c | 2 +-
1 file changed, 1 insertion(+),
On Mon, Jul 2, 2018 at 2:54 PM Shakeel Butt wrote:
>
> Hi Andres, this is a small cleanup to the patch "fs: fsnotify: account
*Andrew*
> fsnotify metadata to kmemcg". Please squash.
>
> Signed-off-by: Shakeel Butt
> ---
> fs/notify/fanotify/fanotify.c
Hi Andres, this is a small cleanup to the patch "fs: fsnotify: account
fsnotify metadata to kmemcg". Please squash.
Signed-off-by: Shakeel Butt
---
fs/notify/fanotify/fanotify.c| 2 +-
fs/notify/inotify/inotify_fsnotify.c | 2 +-
2 files changed, 2 insertions(+), 2 deletion
On Fri, Jun 29, 2018 at 7:55 AM Michal Hocko wrote:
>
> On Fri 29-06-18 16:40:23, Paolo Bonzini wrote:
> > On 29/06/2018 16:30, Michal Hocko wrote:
> > > I am not familiar with kvm to judge but if we are going to account this
> > > memory we will probably want to let oom_badness know how much
The size of kvm's shadow page tables corresponds to the size of the
guest virtual machines on the system. Large VMs can spend a significant
amount of memory as shadow page tables which can not be left as system
memory overhead. So, account shadow page tables to the kmemcg.
Signed-off-by: Shakeel
On Thu, Jun 28, 2018 at 12:03 PM Jan Kara wrote:
>
> On Wed 27-06-18 12:12:49, Shakeel Butt wrote:
> > A lot of memory can be consumed by the events generated for the huge or
> > unlimited queues if there is either no or slow listener. This can cause
> > system level memo
On Wed, Jun 27, 2018 at 2:51 PM Eric Dumazet wrote:
>
>
>
> On 06/27/2018 01:41 PM, Shakeel Butt wrote:
> > Currently the kernel accounts the memory for network traffic through
> > mem_cgroup_[un]charge_skmem() interface. However the memory accounted
> > only in
to the listener's memcg. Thus we save the memcg reference in the
fsnotify_group structure of the listener.
This patch also moves the members of fsnotify_group around, filling the
holes so that the structure stays the same size (at least for 64-bit
builds) despite the additional member.
Signed-off-by: Shakeel Butt
Cc: Michal Hocko
be used for charging and for buffer_head, the memcg of
the page can be charged. For directed charging, the caller can use the
scope API memalloc_[un]use_memcg() to specify the memcg to charge for
all the __GFP_ACCOUNT allocations within the scope.
Shakeel Butt (2):
fs: fsnotify: account fsnotify
on the system. So, the right way to charge buffer_head is
to extract the memcg from the page for which buffer_heads are being
allocated and then use targeted memcg charging API.
Signed-off-by: Shakeel Butt
Cc: Michal Hocko
Cc: Jan Kara
Cc: Amir Goldstein
Cc: Greg Thelen
Cc: Johannes Weiner
Cc
The size of kvm's shadow page tables corresponds to the size of the
guest virtual machines on the system. Large VMs can spend a significant
amount of memory as shadow page tables which can not be left as system
memory overhead. So, account shadow page tables to the kmemcg.
Signed-off-by: Shakeel
On Tue, Jun 26, 2018 at 12:03 PM Johannes Weiner wrote:
>
> On Mon, Jun 25, 2018 at 04:06:58PM -0700, Shakeel Butt wrote:
> > @@ -140,8 +141,9 @@ struct fanotify_event_info *fanotify_alloc_event(struct
> > fsnotify_group *group,
> >
On Tue, Jun 26, 2018 at 11:55 AM Johannes Weiner wrote:
>
> On Tue, Jun 26, 2018 at 11:00:53AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 25, 2018 at 10:49 PM Amir Goldstein wrote:
> > >
> > ...
> > >
> > > The verb 'unuse' takes an argument memcg
On Mon, Jun 25, 2018 at 10:49 PM Amir Goldstein wrote:
>
...
>
> The verb 'unuse' takes an argument memcg and 'uses' it - too weird.
> You can use 'override'/'revert' verbs like override_creds or just call
> memalloc_use_memcg(old_memcg) since there is no reference taken
> anyway in use_memcg and
on the system. So, the right way to charge buffer_head is
to extract the memcg from the page for which buffer_heads are being
allocated and then use targeted memcg charging API.
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- simple code cleanups
fs/buffer.c| 15
to the listener's memcg. Thus we save the memcg reference in the
fsnotify_group structure of the listener.
This patch also moves the members of fsnotify_group around, filling the
holes so that the structure stays the same size (at least for 64-bit
builds) despite the additional member.
Signed-off-by: Shakeel Butt
Cc: Michal
be used for charging and for buffer_head, the memcg of
the page can be charged. For directed charging, the caller can use the
scope API memalloc_[un]use_memcg() to specify the memcg to charge for
all the __GFP_ACCOUNT allocations within the scope.
Shakeel Butt (2):
fs: fsnotify: account fsnotify
On Fri, Jun 22, 2018 at 5:06 PM Roman Gushchin wrote:
>
> Introduce the mem_cgroup_put() helper, which helps to eliminate
> guarding memcg css release with "#ifdef CONFIG_MEMCG" in multiple
> places.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel But
On Tue, Jun 19, 2018 at 12:55 PM Roman Gushchin wrote:
>
> On Tue, Jun 19, 2018 at 12:51:15PM -0700, Shakeel Butt wrote:
> > On Tue, Jun 19, 2018 at 10:41 AM Roman Gushchin wrote:
> > >
> > > On Tue, Jun 19, 2018 at 12:27:41PM -0400, Johannes Weiner wrote:
> &g
p://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pgw1ij...@mail.gmail.com
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Cc: Shakeel Butt
> Cc: David Rientjes
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> Cc: Joonsoo K
On Thu, Jun 21, 2018 at 8:01 AM Michal Hocko wrote:
>
> On Thu 21-06-18 01:15:30, Christopher Lameter wrote:
> > On Wed, 20 Jun 2018, Shakeel Butt wrote:
> >
> > > For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> > > allocated per nod
On Wed, Jun 20, 2018 at 6:15 PM Christopher Lameter wrote:
>
> On Wed, 20 Jun 2018, Shakeel Butt wrote:
>
> > For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> > allocated per node for a kmem_cache. Thus, slabs_node() in
> > __kmem_cache_e
ail.gmail.com
Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt
Suggested-by: David Rientjes
Reported-by: Jason A . Donenfeld
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Andrey Ryabinin
Cc:
Cc:
Cc:
---
On Wed, Jun 20, 2018 at 5:08 AM Andrey Ryabinin wrote:
>
>
>
> On 06/20/2018 12:33 AM, Shakeel Butt wrote:
> > For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> > allocated per node for a kmem_cache. Thus, slabs_node() in
> > __kmem_cache_em
On Tue, Jun 19, 2018 at 5:49 PM David Rientjes wrote:
>
> On Tue, 19 Jun 2018, Shakeel Butt wrote:
>
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a3b8467c14af..731c02b371ae 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3673,9 +3673,23 @@ static
On Tue, Jun 19, 2018 at 9:22 AM Johannes Weiner wrote:
>
> On Mon, Jun 18, 2018 at 10:13:25PM -0700, Shakeel Butt wrote:
> > @@ -248,6 +248,30 @@ static inline void memalloc_noreclaim_restore(unsigned
> > int flags)
> > current->flags = (current->
ils. Please fold patch 1 and introduce API along with the
> users.
>
Thanks a lot for the review. Ack, I will do as you suggested in next version.
> On Mon, Jun 18, 2018 at 10:13:24PM -0700, Shakeel Butt wrote:
> > This patchset introduces memcg variant memory allocation functio
On Tue, Jun 19, 2018 at 2:33 PM Shakeel Butt wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such situation, it is
> required to chec
that __kmem_cache_shutdown() and __kmem_cache_shrink() are
not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
per-cpu slabs.
Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt
Reported-by: Jason A . Donenfeld
Cc: Christoph L
On Tue, Jun 19, 2018 at 10:41 AM Roman Gushchin wrote:
>
> On Tue, Jun 19, 2018 at 12:27:41PM -0400, Johannes Weiner wrote:
> > On Mon, Jun 18, 2018 at 10:13:27PM -0700, Shakeel Butt wrote:
> > > The buffer_head can consume a significant amount of system memory and
>
On Tue, Jun 19, 2018 at 8:19 AM Jason A. Donenfeld wrote:
>
> On Tue, Jun 19, 2018 at 5:08 PM Shakeel Butt wrote:
> > > > Are you using SLAB or SLUB? We stress kernel pretty heavily, but with
> > > > SLAB, and I suspect Shakeel may also be using SLAB. So