(in follow_page_pte()).
On larger machines the overhead of lru_add_drain_all() in mlock() can
be significant when mlocking data that is already in memory. We have
observed high latency in mlock() due to lru_add_drain_all() when users
were mlocking in-memory tmpfs files.
Signed-off-by: Shakeel Butt
---
Changelog
On Thu, Oct 19, 2017 at 11:19 PM, Michal Hocko wrote:
> On Thu 19-10-17 15:25:07, Shakeel Butt wrote:
>> lru_add_drain_all() is not required by mlock() and it will drain
>> everything that has been cached at the time mlock is called. And
>> that is not really relate
On Tue, Nov 14, 2017 at 4:56 PM, Minchan Kim wrote:
> On Tue, Nov 14, 2017 at 06:37:42AM +0900, Tetsuo Handa wrote:
>> When shrinker_rwsem was introduced, it was assumed that
>> register_shrinker()/unregister_shrinker() are really unlikely paths
>> which are called during initialization and tear
On Wed, Nov 15, 2017 at 4:46 PM, Minchan Kim wrote:
> On Tue, Nov 14, 2017 at 10:28:10PM -0800, Shakeel Butt wrote:
>> On Tue, Nov 14, 2017 at 4:56 PM, Minchan Kim wrote:
>> > On Tue, Nov 14, 2017 at 06:37:42AM +0900, Tetsuo Handa wrote:
>> >> When shrinker_rwsem
Ping, really appreciate comments on this patch.
On Sat, Nov 4, 2017 at 3:43 PM, Shakeel Butt wrote:
> When a thread mlocks an address space backed by file, a new
> page is allocated (assuming file page is not in memory), added
> to the local pagevec (lru_add_pvec), I/O is
On Thu, Nov 16, 2017 at 7:09 PM, Yafang Shao wrote:
> Currently the default tmpfs size is totalram_pages / 2 if tmpfs is
> mounted without "-o size=XXX".
> When we mount tmpfs in a container (e.g. docker), it is also
> totalram_pages / 2 regardless of the memory limit on this container.
> That may
t
all? The only side effect of over-reclaim I can think of is that the job
might suffer a bit (more swapins & pageins). Shouldn't this be
within the expectation of the user decreasing the limits?
> nack. If we ever see such a problem then reverting this patch should be
> pretty straghtfor
On Wed, Nov 15, 2017 at 1:31 AM, Jan Kara wrote:
> On Wed 15-11-17 01:32:16, Yang Shi wrote:
>>
>>
>> On 11/14/17 1:39 AM, Michal Hocko wrote:
>> >On Tue 14-11-17 03:10:22, Yang Shi wrote:
>> >>
>> >>
>> >>On 11/9/17 5:54 AM, Michal Hocko wrote:
>> >>>[Sorry for the late reply]
>> >>>
>> >>>On
On Fri, Jan 19, 2018 at 7:11 AM, Michal Hocko wrote:
> On Fri 19-01-18 06:49:29, Shakeel Butt wrote:
>> On Fri, Jan 19, 2018 at 5:35 AM, Michal Hocko wrote:
>> > On Fri 19-01-18 16:25:44, Andrey Ryabinin wrote:
>> >> Currently mem_cgroup_resize_limit() retries
On Wed, Jan 24, 2018 at 11:51 PM, Amir Goldstein wrote:
>
> There is a nicer alternative, instead of failing the file access,
> an overflow event can be queued. I sent a patch for that and Jan
> agreed to the concept, but thought we should let user opt-in for this
> change:
>
Cc: a...@linux-foundation.org
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> As a zpool_driver, zsmalloc can allocate movable memory because it
> supports migrating pages.
> But zbud and z3fold cannot allocate movable memory.
>
> This commit adds malloc_support_movable to zpool_driver.
> If a
.org/lkml/2019/5/29/73 and
> Shakeel Butt https://lkml.org/lkml/2019/6/4/973
>
> zswap compresses swap pages into a dynamically allocated RAM-based
> memory pool. The memory pool should be zbud, z3fold or zsmalloc.
> All of them will allocate unmovable pages. It will increase the
>
mm/z3fold.c: add structure for buddy handles")
>
> Reported-by: Henry Burns
> Signed-off-by: Vitaly Wool
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index
_page(oldpage, newpage)
> a_ops->migrate_page(oldpage, newpage)
> z3fold_page_migrate(oldpage, newpage)
> trylock_page(oldpage)
>
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
Please add the Fixes tag as well.
> ---
> mm/z3fold.c | 6 --
y related flags from the call to kmem_cache_alloc()
> for our slots since it is a kernel allocation.
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/z3fo
On Sun, May 19, 2019 at 8:53 PM Minchan Kim wrote:
>
> - Background
>
> The Android terminology used for forking a new process and starting an app
> from scratch is a cold start, while resuming an existing app is a hot start.
> While we continually try to improve the performance of cold starts,
On Mon, May 20, 2019 at 7:55 PM Anshuman Khandual
wrote:
>
>
>
> On 05/20/2019 10:29 PM, Tim Murray wrote:
> > On Sun, May 19, 2019 at 11:37 PM Anshuman Khandual
> > wrote:
> >>
> >> Or Is the objective here is reduce the number of processes which get
> >> killed by
> >> lmkd by triggering
On Thu, May 23, 2019 at 11:37 AM Matthew Wilcox wrote:
>
> On Thu, May 23, 2019 at 01:43:49PM -0400, Johannes Weiner wrote:
> > I noticed that recent upstream kernels don't account the xarray nodes
> > of the page cache to the allocating cgroup, like we used to do for the
> > radix tree nodes.
>
a new putback_zspage_deferred() function which both
> zs_page_migrate() and zs_page_putback() call.
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> mm/zsmalloc.c | 30 ++++++++++++++++++++----------
> 1 file changed, 20 insertions(+), 10 deletions(-)
>
&
The fix is to decouple the cpuset/mempolicy intersection check from
oom_unkillable_task() and make sure the cpuset/mempolicy intersection
check is only done in the global OOM context.
Reported-by: syzbot+d0fc9d3c166bc5e4a...@syzkaller.appspotmail.com
Signed-off-by: Shakeel Butt
---
Ch
the task_in_mem_cgroup() check altogether.
Signed-off-by: Shakeel Butt
Signed-off-by: Tetsuo Handa
---
Changelog since v2:
- Further divided the patch into two patches.
- Incorporated the task_in_mem_cgroup() from Tetsuo.
Changelog since v1:
- Divide the patch into two patches.
fs/proc/base.c | 2
mem_cgroup_scan_tasks to selectively traverse only processes of the
target memcg hierarchy during memcg OOM.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changelog since v2:
- Updated the commit message.
Changelog since v1:
- Divide the patch into two patches.
mm/oom_kill.c | 68
work -> work
>
> And RCU/delayed work callbacks in slab common code:
> kmemcg_deactivate_rcufn -> kmemcg_rcufn
> kmemcg_deactivate_workfn -> kmemcg_workfn
>
> This patch contains no functional changes, only renamings.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
fter_rcu() SLUB-only
>
> For consistency, all allocator-specific functions start with "__".
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
the kmem cache
destruction and allocations.
> so no new memcg kmem_cache
> creation can be scheduled after the flag is set. And if it was
> scheduled before, flush_memcg_workqueue() will wait for it anyway.
>
> So let's drop this check to simplify the code.
>
> Signed-off-by: R
q context,
> which will be required in order to implement asynchronous release
> of kmem_caches.
>
> So let's switch over to the irq-save flavor of the spinlock-based
> synchronization.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
> user    0m0.216s    user    0m0.181s
> sys     0m0.824s    sys     0m0.864s
>
> real    0m1.350s    real    0m1.295s
> user    0m0.200s    user    0m0.190s
> sys     0m0.842s    sys     0m0.811s
>
> So it looks like the difference is not noticeable in this test.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
>
> Signed-off-by: Roman Gushchin
The reparenting of top level memcg and "return true" is fixed in the
later patch.
Reviewed-by: Shakeel Butt
away. Instead rely on kmem_cache
> as an intermediate object.
>
> Make sure that vmstats and shrinker lists are working as previously,
> as well as /proc/kpagecgroup interface.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
On Mon, Aug 5, 2019 at 7:32 AM Michal Hocko wrote:
>
> On Fri 02-08-19 11:56:28, Yang Shi wrote:
> > On Fri, Aug 2, 2019 at 2:35 AM Michal Hocko wrote:
> > >
> > > On Thu 01-08-19 14:00:51, Yang Shi wrote:
> > > > On Mon, Jul 29, 2019 at 11:48 AM Michal Hocko wrote:
> > > > >
> > > > > On Mon
ea2b1 ("mm: memcontrol: flush percpu vmstats before releasing
memcg")
Signed-off-by: Shakeel Butt
Cc: Roman Gushchin
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Vladimir Davydov
Cc: Andrew Morton
Cc:
---
Note: The buggy patches were marked for stable, therefore adding Cc to
stable.
On Wed, Sep 11, 2019 at 8:16 AM Michal Hocko wrote:
>
> On Wed 11-09-19 07:37:40, Andrew Morton wrote:
> > On Wed, 11 Sep 2019 14:00:02 +0200 Michal Hocko wrote:
> >
> > > On Mon 09-09-19 13:22:45, Michal Hocko wrote:
> > > > On Fri 06-09-19 11:24:55, Shak
On Wed, Aug 28, 2019 at 4:23 AM Michal Hocko wrote:
>
> On Mon 26-08-19 16:32:34, Mina Almasry wrote:
> > mm/hugetlb.c | 493 --
> > mm/hugetlb_cgroup.c | 187 +--
>
> This is a lot of changes to an already subtle code
entry,
> this can consume significant memory.
>
> Signed-off-by: Khazhismel Kumykov
Actually we are seeing this in production, where a job creating a lot
of fuse files causes a lot of extra system-level overhead.
Reviewed-by: Shakeel Butt
> ---
> fs/fuse/dir.c | 19
On Wed, Aug 21, 2019 at 5:10 PM Khazhismel Kumykov wrote:
>
> Instead of having a helper per flag
>
> Signed-off-by: Khazhismel Kumykov
I think it would be better to re-order patches 2 and 3 of this
series. There will be less code churn.
> ---
> fs/fuse/dev.c | 22 +++---
On Thu, Aug 22, 2019 at 1:00 PM Khazhismel Kumykov wrote:
>
> Instead of having a helper per flag
>
> Signed-off-by: Khazhismel Kumykov
Reviewed-by: Shakeel Butt
> ---
> fs/fuse/dev.c | 16 +++-
> fs/fuse/file.c | 6 +++---
> fs/fuse/fuse_i.h | 4
usually isn't accounted
>
> Signed-off-by: Khazhismel Kumykov
Reviewed-by: Shakeel Butt
> ---
> fs/fuse/dir.c | 3 ++-
> fs/fuse/file.c | 4 ++--
> fs/fuse/inode.c | 3 ++-
> 3 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/fs/fuse/dir.c b/fs/f
On Wed, Oct 16, 2019 at 3:49 PM Dave Hansen wrote:
>
> We're starting to see systems with more and more kinds of memory such
> as Intel's implementation of persistent memory.
>
> Let's say you have a system with some DRAM and some persistent memory.
> Today, once DRAM fills up, reclaim will start
On Thu, Oct 17, 2019 at 9:32 AM Dave Hansen wrote:
>
> On 10/17/19 9:01 AM, Suleiman Souhlal wrote:
> > One problem that came up is that if you get into direct reclaim,
> > because persistent memory can have pretty low write throughput, you
> > can end up stalling users for a pretty long time
On Thu, Oct 17, 2019 at 7:26 AM Dave Hansen wrote:
>
> On 10/16/19 8:45 PM, Shakeel Butt wrote:
> > On Wed, Oct 16, 2019 at 3:49 PM Dave Hansen
> > wrote:
> >> This set implements a solution to these problems. At the end of the
> >> reclaim process in shr
On Thu, Oct 17, 2019 at 10:20 AM Yang Shi wrote:
>
> On Thu, Oct 17, 2019 at 7:26 AM Dave Hansen wrote:
> >
> > On 10/16/19 8:45 PM, Shakeel Butt wrote:
> > > On Wed, Oct 16, 2019 at 3:49 PM Dave Hansen
> > > wrote:
> > >> This set impl
On Wed, Oct 9, 2019 at 2:30 PM Qian Cai wrote:
>
> The linux-next commit "mm/rmap.c: reuse mergeable anon_vma as parent when
> fork"
> [1] causes a crash on s390 while compiling some C code. Reverting it fixes
> the issue.
>
> [1]
>
On Wed, Oct 9, 2019 at 2:19 PM Minchan Kim wrote:
>
> From: Minchan Kim
>
> If the block device supports the rw_page operation, it doesn't submit a bio,
> so the annotation in submit_bio() for refault stalls doesn't work.
> This happens with zram in Android, especially the swap read path, which
> could consume CPU cycle
nsume CPU cycle for decompress. It is also a problem for
> zswap which uses frontswap.
>
> Annotate swap_readpage() to account the synchronous IO overhead
> to prevent underreporting memory pressure.
>
> Acked-by: Johannes Weiner
> Signed-off-by: Minchan Kim
Reviewed-by: Shakeel Butt
se RCU works
> will complete before the memcg pointer will be zeroed.
>
> Big thanks for Karsten for the perfect report containing all necessary
> information, his help with the analysis of the problem and testing
> of the fix.
>
> Fixes: fb2f2b0adb98 ("mm: memcg/slab:
On Sun, May 3, 2020 at 11:56 PM Michal Hocko wrote:
>
> On Thu 30-04-20 11:27:12, Shakeel Butt wrote:
> > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > succeed. However if oom-killer does not find a process for killing, it
> > dumps a lot of wa
On Sun, May 3, 2020 at 11:57 PM Michal Hocko wrote:
>
> On Thu 30-04-20 13:20:10, Shakeel Butt wrote:
> > On Thu, Apr 30, 2020 at 12:29 PM Johannes Weiner wrote:
> > >
> > > On Thu, Apr 30, 2020 at 11:27:12AM -0700, Shakeel Butt wrote:
> > > > L
On Mon, May 4, 2020 at 7:11 AM Michal Hocko wrote:
>
> On Mon 04-05-20 06:54:40, Shakeel Butt wrote:
> > On Sun, May 3, 2020 at 11:56 PM Michal Hocko wrote:
> > >
> > > On Thu 30-04-20 11:27:12, Shakeel Butt wrote:
> > > > Lowering memory.max c
On Mon, May 4, 2020 at 7:20 AM Tetsuo Handa
wrote:
>
> On 2020/05/04 22:54, Shakeel Butt wrote:
> > It may not be a problem for an individual or small scale deployment
> > but when "sweep before tear down" is part of the workflow for
> > thousands of
On Mon, May 4, 2020 at 8:00 AM Michal Hocko wrote:
>
> On Mon 04-05-20 07:53:01, Shakeel Butt wrote:
> > On Mon, May 4, 2020 at 7:11 AM Michal Hocko wrote:
> > >
> > > On Mon 04-05-20 06:54:40, Shakeel Butt wrote:
> > > > On Sun, M
Signed-off-by: Shakeel Butt
---
include/linux/oom.h | 3 +++
mm/memcontrol.c | 9 +++++----
mm/oom_kill.c | 2 +-
3 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/linux/oom.h b/include/linux/oom.h
index c696c265f019..6345dc55df64 100644
--- a/include/linux/oom.h
+++ b
On Thu, Apr 30, 2020 at 12:06 PM Roman Gushchin wrote:
>
> Hello, Shakeel!
>
> On Thu, Apr 30, 2020 at 11:27:12AM -0700, Shakeel Butt wrote:
> > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > succeed. However if oom-killer does not find a process f
On Thu, Apr 30, 2020 at 12:29 PM Johannes Weiner wrote:
>
> On Thu, Apr 30, 2020 at 11:27:12AM -0700, Shakeel Butt wrote:
> > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > succeed. However if oom-killer does not find a process for killing, it
> >
On Thu, Apr 30, 2020 at 6:39 PM Yafang Shao wrote:
>
> On Fri, May 1, 2020 at 2:27 AM Shakeel Butt wrote:
> >
> > Lowering memory.max can trigger an oom-kill if the reclaim does not
> > succeed. However if oom-killer does not find a process for killing, it
> > dump
claimining on behalf of atomic allocations onto the
> regular allocations that can block.
>
> Cc: sta...@kernel.org # 4.18+
> Fixes: e699e2c6a654 ("net, mm: account sock objects to kmemcg")
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
> ---
> mm/memcon
On Wed, Oct 23, 2019 at 8:46 AM Johannes Weiner wrote:
>
> On Wed, Oct 23, 2019 at 08:40:12AM +0200, Michal Hocko wrote:
> > On Tue 22-10-19 19:37:08, Johannes Weiner wrote:
> > > While upgrading from 4.16 to 5.2, we noticed these allocation errors
> > > in the log of the new kernel:
> > >
> > >
On Fri, Sep 6, 2019 at 5:56 AM Michal Hocko wrote:
>
> From: Michal Hocko
>
> Thomas has noticed the following NULL ptr dereference when using cgroup
> v1 kmem limit:
> BUG: unable to handle kernel NULL pointer dereference at 0008
> PGD 0
> P4D 0
> Oops: [#1] PREEMPT SMP PTI
>
On Sat, Jul 18, 2020 at 6:31 AM SeongJae Park wrote:
>
> On Fri, 17 Jul 2020 19:47:50 -0700 Shakeel Butt wrote:
>
> > On Mon, Jul 13, 2020 at 1:43 AM SeongJae Park wrote:
> > >
> > > From: SeongJae Park
> > >
> > > DAMON is a data
On Tue, Jun 23, 2020 at 11:47 AM Roman Gushchin wrote:
>
> To implement accounting of percpu memory we need the information about the
> size of the freed object. Return it from pcpu_free_area().
>
> Signed-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Reviewed-by: Shakeel Butt
ters directly: instead we use obj_cgroup API,
> introduced for slab accounting.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Reviewed-by: Shakeel Butt
d-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Reviewed-by: Shakeel Butt
sumed percpu memory to the parent cgroup.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Reviewed-by: Shakeel Butt
On Tue, Jun 23, 2020 at 10:40 AM Roman Gushchin wrote:
>
> Deprecate memory.kmem.slabinfo.
>
> An empty file will be presented if corresponding config options are
> enabled.
>
> The interface is implementation dependent, isn't present in cgroup v2, and
> is generally useful only for core mm
r idle flag for write operations.
>
> Fixes: bbddabe2e436 ("mm: filemap: only do access activations on reads")
> Cc: Johannes Weiner
> Cc: Rik van Riel
> Cc: Shakeel Butt
> Reported-by: Gang Deng
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Wed, Jun 24, 2020 at 12:18 PM Yang Shi wrote:
>
>
>
> On 6/24/20 11:53 AM, Andrew Morton wrote:
> > On Thu, 25 Jun 2020 01:43:32 +0800 Yang Shi
> > wrote:
> >
> >> Since commit bbddabe2e436aa7869b3ac5248df5c14ddde0cbf ("mm: filemap:
> >> only do access activations on reads"),
+Minchan Kim
On Mon, Jul 20, 2020 at 12:52 AM Christoph Hellwig wrote:
>
> There is no point in trying to call bdev_read_page if SWP_SYNCHRONOUS_IO
> is not set, as the device won't support it. Also there is no point in
> trying a bio submission if bdev_read_page failed.
This will at least
On Tue, Jul 21, 2020 at 4:20 AM jingrui wrote:
>
> Cc: Johannes Weiner ; Michal Hocko ;
> Vladimir Davydov
>
> Thanks.
>
> ---
> PROBLEM: cgroup costs too much memory when transferring small files to tmpfs.
>
> keywords: cgroup PERCPU/memory cost too much.
>
> description:
>
> We send small files
On Tue, Jul 21, 2020 at 11:51 AM Roman Gushchin wrote:
>
> On Tue, Jul 21, 2020 at 01:41:26PM -0400, Johannes Weiner wrote:
> > On Tue, Jul 21, 2020 at 11:19:52AM +, jingrui wrote:
> > > Cc: Johannes Weiner ; Michal Hocko
> > > ; Vladimir Davydov
> > >
> > > Thanks.
> > >
> > > ---
> > >
> causes the workload inside the cgroup into direct reclaim, that of
> course will continue to count as memory pressure.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
any problems in practice so far.
>
> Fixes: 8c8c383c04f6 ("mm: memcontrol: try harder to set a new memory.high")
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Mon, Jul 27, 2020 at 2:03 AM SeongJae Park wrote:
>
> On Mon, 27 Jul 2020 00:34:54 -0700 Greg Thelen wrote:
>
> > SeongJae Park wrote:
> >
> > > From: SeongJae Park
> > >
> > > This commit introduces a reference implementation of the address space
> > > specific low level primitives for the
On Tue, Jul 14, 2020 at 1:41 AM Michal Hocko wrote:
>
> On Fri 10-07-20 12:19:37, Shakeel Butt wrote:
> > On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
> > >
> > > On Fri, Jul 10, 2020 at 07:12:22AM -0700, Shakeel Butt wrote:
> > > > On Fri, J
On Tue, Jul 14, 2020 at 8:39 AM Johannes Weiner wrote:
>
> On Fri, Jul 10, 2020 at 12:19:37PM -0700, Shakeel Butt wrote:
> > On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
> > >
> > > On Fri, Jul 10, 2020 at 07:12:22AM -0700, Shakeel Butt wrote:
> >
>
> If we repeat steps 4) and 5), this will cause a lot of memory leaks.
> So only when the refcount reaches zero do we mark the root kmem_cache as dying.
>
> Fixes: 92ee383f6daa ("mm: fix race between kmem_cache destroy, create and
> deactivate")
> Signed-off-by: Muchun Song
Hi Yafang,
On Tue, Mar 31, 2020 at 3:05 AM Yafang Shao wrote:
>
> PSI gives us a powerful way to analyze memory pressure issues, but we can
> make it more powerful with the help of tracepoints, kprobes, ebpf, etc.
> Especially with ebpf we can flexibly get more details of the memory
>
On Fri, Jul 17, 2020 at 9:24 AM SeongJae Park wrote:
>
> On Fri, 17 Jul 2020 08:17:09 -0700 Shakeel Butt wrote:
>
> > On Thu, Jul 16, 2020 at 11:54 PM SeongJae Park wrote:
> > >
> > > On Thu, 16 Jul 2020 17:46:54 -0700 Shakeel Butt
> > > wrote:
>
On Mon, Jul 13, 2020 at 1:43 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> DAMON is a data access monitoring framework subsystem for the Linux
> kernel. The core mechanisms of DAMON make it
>
> - accurate (the monitoring output is useful enough for DRAM level
> memory management; It
On Wed, Jul 15, 2020 at 8:19 PM Yafang Shao wrote:
>
> On Thu, Jul 16, 2020 at 12:36 AM Shakeel Butt wrote:
> >
> > Hi Yafang,
> >
> > On Tue, Mar 31, 2020 at 3:05 AM Yafang Shao wrote:
> > >
> > > PSI gives us a powerful way to analyze memo
On Mon, Jul 13, 2020 at 1:44 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> This commit introduces a reference implementation of the address space
> specific low level primitives for the virtual address space, so that
> users of DAMON can easily monitor the data accesses on virtual address
On Thu, Jul 16, 2020 at 11:54 PM SeongJae Park wrote:
>
> On Thu, 16 Jul 2020 17:46:54 -0700 Shakeel Butt wrote:
>
> > On Mon, Jul 13, 2020 at 1:44 AM SeongJae Park wrote:
> > >
> > > From: SeongJae Park
> > >
> > > This commit introdu
On Wed, Jul 22, 2020 at 1:55 AM Arnd Bergmann wrote:
>
> Adding Roman Gushchin to Cc, he touched that code recently.
>
> Naresh, if nobody has any immediate ideas, you could double-check by
> reverting these commits:
>
> e0b8d00b7561 mm: memcg/percpu: per-memcg percpu memory statistics
>
On Tue, Jul 21, 2020 at 11:27 PM Christoph Hellwig wrote:
>
> There is no point in trying to call bdev_read_page if SWP_SYNCHRONOUS_IO
> is not set, as the device won't support it.
>
> Signed-off-by: Christoph Hellwig
> ---
> mm/page_io.c | 18 ++
> 1 file changed, 10
On Fri, May 24, 2019 at 10:06 AM Johannes Weiner wrote:
>
> On Fri, May 24, 2019 at 09:11:46AM -0700, Matthew Wilcox wrote:
> > On Thu, May 23, 2019 at 03:59:33PM -0400, Johannes Weiner wrote:
> > > My point is that we cannot have random drivers' internal data
> > > structures charge to and pin
workingset-a
> + cat workingset-a
> + ./mincore workingset-a
> 153600/153600 workingset-a
> + dd of=workingset-b bs=1M count=0 seek=600
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 124607/153600 workingset-a
> 87876/153600 workingset-b
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 81313/153600 workingset-a
> 133321/153600 workingset-b
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 63036/153600 workingset-a
> 153600/153600 workingset-b
>
> Cc: sta...@vger.kernel.org # 4.20+
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Fri, May 24, 2019 at 12:33 PM wrote:
>
> From: Ira Weiny
>
> RFC I have no idea if this is correct or not. But looking at
> release_pages() I see a call to both __ClearPageActive() and
> __ClearPageWaiters() while in __page_cache_release() I do not.
>
> Is this a bug which needs to be fixed?
there will not be
any process in the internal nodes and thus no chance of local pressure.
Signed-off-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Acked-by: Johannes Weiner
---
Changelog since v2:
- Added documentation.
Changelog since v1:
- refactor memory_events_show to share between events
da>] do_syscall_64+0x76/0x1a0 arch/x86/entry/common.c:301
[<43d74ca0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
This is a simple off-by-one bug on the error path.
Reported-by: syzbot+f90a420dfe2b1b03c...@syzkaller.appspotmail.com
Signed-off-by: Shakeel Butt
---
mm/list_lru.c | 2
On Mon, May 27, 2019 at 9:32 PM Shakeel Butt wrote:
>
> Syzbot reported following memory leak:
>
> da RBX: 0003 RCX: 00441f79
> BUG: memory leak
> unreferenced object 0x888114f26040 (size 32):
> comm "syz-executor626", pid 7056,
On Tue, May 28, 2019 at 1:42 AM Michal Hocko wrote:
>
> On Tue 28-05-19 11:04:46, Konstantin Khlebnikov wrote:
> > On 28.05.2019 10:38, Michal Hocko wrote:
> [...]
> > > Could you define the exact semantic? Ideally something for the manual
> > > page please?
> > >
> >
> > Like kswapd which works
On Tue, May 21, 2019 at 8:16 AM Johannes Weiner wrote:
>
> The kernel test robot noticed a 26% will-it-scale pagefault regression
> from commit 42a300353577 ("mm: memcontrol: fix recursive statistics
> correctness & scalabilty"). This appears to be caused by bouncing the
> additional cachelines
mask+0x49/0x70
> [ 381.346287] softirqs last enabled at (10262): []
> cgroup_idr_replace+0x3a/0x50
> [ 381.346290] softirqs last disabled at (10260): []
> cgroup_idr_replace+0x1d/0x50
> [ 381.346293] ---[ end trace b324ba73eb3659f0 ]---
>
> v2: fixed return value from memcg_ch
parented caches by adding a new slab flag "SLAB_DEACTIVATED" to those
> kmem caches that will be reparent'ed if it cannot be destroyed completely.
>
> For the reparent'ed memcg kmem caches, the tag ":deact" will now be
> shown in /memcg_slabinfo.
>
> S
On Fri, Jun 14, 2019 at 6:08 PM syzbot
wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:3f310e51 Add linux-next specific files for 20190607
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=15ab8771a0
> kernel config:
On Sat, Jun 15, 2019 at 6:50 AM Michal Hocko wrote:
>
> On Fri 14-06-19 20:15:31, Shakeel Butt wrote:
> > On Fri, Jun 14, 2019 at 6:08 PM syzbot
> > wrote:
> > >
> > > Hello,
> > >
> > > syzbot found the following crash on:
> > >
On Sat, Jun 15, 2019 at 9:49 AM Tetsuo Handa
wrote:
>
> On 2019/06/16 1:11, Shakeel Butt wrote:
> > On Sat, Jun 15, 2019 at 6:50 AM Michal Hocko wrote:
> >> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >> index 5a58778c91d4..43eb479a5dc7 100644
> >> --- a
On Sun, Jun 16, 2019 at 8:14 AM Tetsuo Handa
wrote:
>
> On 2019/06/16 16:37, Tetsuo Handa wrote:
> > On 2019/06/16 6:33, Tetsuo Handa wrote:
> >> On 2019/06/16 3:50, Shakeel Butt wrote:
> >>>> While dump_tasks() traverses only each thread group,
> >>
().
Signed-off-by: Shakeel Butt
---
fs/proc/base.c | 3 +-
include/linux/oom.h | 3 +-
mm/oom_kill.c | 100 +---
3 files changed, 60 insertions(+), 46 deletions(-)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index b8d5d100ed4a..69b0d1b6583d
On Mon, Jun 17, 2019 at 9:17 AM Michal Hocko wrote:
>
> On Mon 17-06-19 08:59:54, Shakeel Butt wrote:
> > Currently oom_unkillable_task() checks mems_allowed even for memcg OOMs
> > which does not make sense as memcg OOMs can not be triggered due to
> > numa constraints. F
dump_tasks() currently goes through all the processes present on the
system even for memcg OOMs. Change dump_tasks() to be similar to
select_bad_process() and use mem_cgroup_scan_tasks() to selectively
traverse the processes of the memcgs during a memcg OOM.
Signed-off-by: Shakeel Butt
---
Changelog
will do a bogus
cpuset_mems_allowed_intersects() check. Removing that.
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- Divide the patch into two patches.
fs/proc/base.c | 3 +--
include/linux/oom.h | 1 -
mm/oom_kill.c | 28 +++-
3 files changed, 16