+Tejun Heo
On Tue, Dec 1, 2020 at 11:14 AM Axel Rasmussen wrote:
>
> On Tue, Dec 1, 2020 at 10:42 AM Shakeel Butt wrote:
> >
> > On Tue, Dec 1, 2020 at 9:56 AM Greg Thelen wrote:
> > >
> > > Axel Rasmussen wrote:
> > >
> > > > On
On Tue, Dec 1, 2020 at 4:16 PM Axel Rasmussen wrote:
>
> On Tue, Dec 1, 2020 at 12:53 PM Shakeel Butt wrote:
> >
> > +Tejun Heo
> >
> > On Tue, Dec 1, 2020 at 11:14 AM Axel Rasmussen
> > wrote:
> > >
> > > On Tue, Dec 1, 2020 at 10:42 AM Sh
On Tue, Dec 1, 2020 at 5:07 PM Steven Rostedt wrote:
>
> On Tue, 1 Dec 2020 16:36:32 -0800
> Shakeel Butt wrote:
>
> > SGTM but note that usually Andrew squashes all the patches into one
> > before sending to Linus. If you plan to replace the path buffer with
> > int
;lock, but it seems
> checking src->nr_items in reparenting is the simplest and avoids lock
> contention.
>
> Fixes: fae91d6d8be5 ("mm/list_lru.c: set bit in memcg shrinker bitmap on
> first list_lru item appearance")
> Suggested-by: Roman Gushchin
> Reviewed-by: Roman Gushchin
> Cc: Vladimir Davydov
> Cc: Kirill Tkhai
> Cc: Shakeel Butt
> Cc: v4.19+
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Wed, Dec 2, 2020 at 11:01 AM Tejun Heo wrote:
>
> Hello,
>
> On Tue, Dec 01, 2020 at 12:53:46PM -0800, Shakeel Butt wrote:
> > The writeback tracepoint in include/trace/events/writeback.h is
> > already using the cgroup IDs. Actually it used to use cgroup_path but
>
On Wed, Dec 2, 2020 at 10:31 PM Mike Rapoport wrote:
>
> From: Mike Rapoport
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
>
> Signed-off-by: Mike Rapoport
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
ss_offline was noticed.
> The hstate index is not reinitialized each time through the do-while loop.
> Fix this as well.
>
> Fixes: 1adc4d419aa2 ("hugetlb_cgroup: add interface for charge/uncharge
> hugetlb reservations")
> Cc:
> Reported-by: Adrian Moreno
> Tested-by: Adrian Moreno
> Signed-off-by: Mike Kravetz
Reviewed-by: Shakeel Butt
On Thu, Nov 26, 2020 at 8:14 PM Roman Gushchin wrote:
>
> Commit 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches
> for all allocations") introduced a regression into the handling of the
> obj_cgroup_charge() return value. If a non-zero value is returned
> (indicating of exceeding
On Fri, Nov 27, 2020 at 8:18 AM Roman Gushchin wrote:
>
> On Thu, Nov 26, 2020 at 09:55:24PM -0800, Shakeel Butt wrote:
> > On Thu, Nov 26, 2020 at 8:14 PM Roman Gushchin wrote:
> > >
> > > Commit 10befea91b61 ("mm: memcg/slab: use a single set of km
On Wed, Nov 25, 2020 at 1:51 AM Mike Rapoport wrote:
>
> From: Mike Rapoport
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
>
> Signed-off-by: Mike Rapoport
> Acked-by: Roman Gushchin
> ---
> mm/filemap.c | 3
On Tue, Oct 20, 2020 at 2:01 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> DAMON is a data access monitoring framework for the Linux kernel. The
> core mechanisms of DAMON make it
>
> - accurate (the monitoring output is useful enough for DRAM level
> performance-centric memory
On Tue, Oct 20, 2020 at 2:01 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> Even if the initial monitoring target regions are well constructed
> to fulfill the assumption (pages in same region have similar access
> frequencies), the data access pattern can be dynamically changed. This
On Tue, Oct 20, 2020 at 2:06 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> DAMON is designed to be used by kernel space code such as the memory
> management subsystems, and therefore it provides only kernel space API.
> That said, letting the user space control DAMON could provide some
>
On Tue, Oct 20, 2020 at 2:02 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> The monitoring target address range can be dynamically changed. For
> example, virtual memory could be dynamically mapped and unmapped.
> Physical memory could be hot-plugged.
>
> As the changes could be quite
On Tue, Oct 20, 2020 at 2:06 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> This commit introduces a reference implementation of the address space
> specific low level primitives for the virtual address space, so that
> users of DAMON can easily monitor the data accesses on virtual address
On Tue, Oct 20, 2020 at 2:06 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> Concurrent Idle Page Tracking users can interfere with each other because the
> interface doesn't provide a central rule for synchronization between the
> users. Users could implement their own synchronization rule, but
On Tue, Oct 20, 2020 at 2:04 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> PG_idle and PG_young allow the two PTE Accessed bit users,
> IDLE_PAGE_TRACKING and the reclaim logic, to work concurrently without
> interfering with each other. That is, when they need to clear the Accessed
> bit, they
On Tue, Oct 20, 2020 at 2:01 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> DAMON separates its monitoring target address space independent high
> level logics
Please rewrite the above sentence to be more clear.
> from the target space dependent low level primitives for
> flexible support
the memcontrol.h in that file.
Signed-off-by: Shakeel Butt
---
include/linux/memcontrol.h | 112 -
include/linux/vmstat.h | 104 ++
mm/memcontrol.c | 18 ++
3 files changed, 122 insertions(+), 112 deletions(-)
diff
Many workloads consume a significant amount of memory in pagetables. This
patch series exposes the pagetable consumption of each memory cgroup.
Shakeel Butt (2):
mm: move lruvec stats update functions to vmstat.h
mm: memcontrol: account pagetables per node
Documentation/admin-guide/cgroup
as well.
Signed-off-by: Shakeel Butt
---
This patch was posted at [1] and [2] but more work was needed to make it
build for all archs.
[1] http://lkml.kernel.org/r/20201121022118.3143384-1-shake...@google.com
[2] http://lkml.kernel.org/r/20201123161425.341314-1-shake...@google.com
Documentation
> apologies to everyone else I should be replying to.
>
I really appreciate your insights and historical anecdotes. I always
learn something new.
> On Wed, 4 Nov 2020, Shakeel Butt wrote:
>
> > Since the commit 369ea8242c0f ("mm/rmap: update to new mmu_notifier
> > semantic v2
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
ith mem_cgroup_lruvec().
>
> Signed-off-by: Hui Su
Reviewed-by: Shakeel Butt
significant. So, charge this memory to the memcg of
the VMM. Please note that lifetime of these allocations corresponds to
the lifetime of the VMM.
Signed-off-by: Shakeel Butt
---
This patch has dependency on Roman's patch series "mm: allow mapping
accounted kernel pages to userspace"
+LKML and linux-mm
On Sat, Nov 7, 2020 at 4:38 PM Wei Yang wrote:
>
> Some definitions are left unused, just clean them.
>
> Signed-off-by: Wei Yang
Reviewed-by: Shakeel Butt
On Tue, Aug 20, 2019 at 3:45 AM Michal Hocko wrote:
>
> On Tue 20-08-19 17:48:23, Alex Shi wrote:
> > This patchset move lru_lock into lruvec, give a lru_lock for each of
> > lruvec, thus bring a lru_lock for each of memcg.
> >
> > Per memcg lru_lock would ease the lru_lock contention a lot in
>
t adds charges to the batch. If the current page
> happens to be the last one holding the reference for its memcg then the
> memcg is OK to go and the next page to be freed will trigger batched
> uncharge which needs to access the memcg which is gone already.
>
> Fix the issue by taking a reference for the memcg in the current batch.
>
> Fixes: 1a3e1f40962c ("mm: memcontrol: decouple reference counting from page
> accounting")
> Reported-by: syzbot+b305848212deec86e...@syzkaller.appspotmail.com
> Reported-by: syzbot+b5ea6fb6f139c8b94...@syzkaller.appspotmail.com
> Signed-off-by: Michal Hocko
Seems correct to me.
Reviewed-by: Shakeel Butt
On Fri, Aug 21, 2020 at 8:01 AM Roman Gushchin wrote:
>
> Include memory used by bpf programs into the memcg-based accounting.
> This includes the memory used by programs itself, auxiliary data
> and statistics.
>
> Signed-off-by: Roman Gushchin
> ---
> kernel/bpf/core.c | 8
> 1 file
On Fri, Aug 21, 2020 at 8:01 AM Roman Gushchin wrote:
>
> This patch enables memcg-based memory accounting for memory allocated
> by __bpf_map_area_alloc(), which is used by most map types for
> large allocations.
>
> If a map is updated from an interrupt context, and the update
> results in
On Tue, Aug 18, 2020 at 12:27 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> This commit implements the four callbacks (->init_target_regions,
> ->update_target_regions, ->prepare_access_check, and ->check_accesses)
> for the basic access monitoring of the physical memory address space.
>
On Tue, Aug 18, 2020 at 12:25 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> Changes from Previous Version
> =============================
>
> - Use 42 as the fake target id for paddr instead of -1
> - Fix a typo
>
> Introduction
> ============
>
> DAMON[1] programming interface users can
On Thu, Aug 20, 2020 at 12:17 AM SeongJae Park wrote:
>
> On Wed, 19 Aug 2020 17:26:15 -0700 Shakeel Butt wrote:
>
> > On Tue, Aug 18, 2020 at 12:27 AM SeongJae Park wrote:
> > >
> > > From: SeongJae Park
> > >
> > > This commit i
On Thu, Aug 20, 2020 at 12:11 AM SeongJae Park wrote:
>
> On Wed, 19 Aug 2020 18:21:44 -0700 Shakeel Butt wrote:
>
> > On Tue, Aug 18, 2020 at 12:25 AM SeongJae Park wrote:
> > >
> > > From: SeongJae Park
> >
On Thu, Aug 20, 2020 at 6:04 AM Waiman Long wrote:
>
> The swap page counter is v2 only while memsw is v1 only. As v1 and v2
> controllers cannot be active at the same time, there is no point to keep
> both swap and memsw page counters in mem_cgroup. The previous patch has
> made sure that memsw
On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg wrote:
>
> Much of the discussion about this has died down. There's been a
> concern raised that we could generalize infrastructure across loop,
> md, etc. This may be possible in the future, but it isn't clear to me
> what this would look like. I'm
he enum itself
> was not removed at that time. Remove the obsolete enum charge_type now.
>
> Signed-off-by: Waiman Long
Reviewed-by: Shakeel Butt
On Thu, Aug 20, 2020 at 1:29 PM Waiman Long wrote:
>
> On 8/20/20 1:35 PM, Johannes Weiner wrote:
> > On Thu, Aug 20, 2020 at 09:03:49AM -0400, Waiman Long wrote:
> >> The mem_cgroup_get_max() function used to get memory+swap max from
> >> both the v1 memsw and v2 memory+swap page counters &
On Fri, Aug 21, 2020 at 9:02 AM Roman Gushchin wrote:
>
> On Fri, Aug 21, 2020 at 11:04:05AM -0400, Dan Schatzberg wrote:
> > On Thu, Aug 20, 2020 at 10:06:44AM -0700, Shakeel Butt wrote:
> > > On Thu, May 28, 2020 at 6:55 AM Dan Schatzberg
> > > wrote:
> >
On Mon, Aug 17, 2020 at 7:11 AM Waiman Long wrote:
>
> Memory controller can be used to control and limit the amount of
> physical memory used by a task. When a limit is set in "memory.high"
> in a non-root memory cgroup, the memory controller will try to reclaim
> memory if the limit has been
On Fri, Aug 21, 2020 at 8:23 AM Roman Gushchin wrote:
>
> Include percpu objects and the size of map metadata into the
> accounting.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Song Liu
Reviewed-by: Shakeel Butt
ixes: 4c6355b25e8b ("mm: memcontrol: charge swapin pages on instantiation")
> Signed-off-by: Hugh Dickins
> Cc: sta...@vger.kernel.org # v5.8
Reviewed-by: Shakeel Butt
ages12 triggers the warning in Alex Shi's prospective commit
> "mm/memcg: warning on !memcg after readahead page charged".
>
> Signed-off-by: Hugh Dickins
Reviewed-by: Shakeel Butt
is off).
>
> LRU page reclaim always splits the shmem huge page first: I'd prefer not
> to demand that of i915, so check and split compound in shmem_writepage().
>
> Fixes: 2d6692e642e7 ("drm/i915: Start writeback from the shrinker")
> Signed-off-by: Hugh Dickins
> Cc: sta...@vger.kernel.org # v5.3+
Reviewed-by: Shakeel Butt
page: so follow that precedent here.
>
> Do this in such a way that if mem_cgroup_page_lruvec() is made stricter
> (to check page->mem_cgroup is always set), no problem: skip the tails
> before calling it, and add thp_nr_pages() to vmstats on the head.
>
> Signed-off-by: Hugh Dickins
Thanks for catching this.
Reviewed-by: Shakeel Butt
5d91f31faf8e ("mm: swap: fix vmstats for huge page")
> Signed-off-by: Hugh Dickins
Reviewed-by: Shakeel Butt
On Tue, Sep 22, 2020 at 1:38 PM Roman Gushchin wrote:
>
> The lowest bit in page->memcg_data is used to distinguish between a
> struct mem_cgroup pointer and a pointer to an objcgs array.
> All checks and modifications of this bit are open-coded.
>
> Let's formalize it using page memcg flags,
> As a bonus, on !CONFIG_MEMCG build the PageMemcgKmem() check will
> be compiled out.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
sing page memcg flags, defined in page_memcg_flags
> enum, and replace all open-coded accesses with test_bit()/__set_bit().
>
> Additional flags might be added later.
>
> Signed-off-by: Roman Gushchin
Thanks.
Reviewed-by: Shakeel Butt
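The low-bit tagging that Roman's patch formalizes can be sketched as follows. This is a toy model, not the kernel's API: because both the memcg and the objcgs array are at least word-aligned, the lowest bit of the stored pointer is free to carry a type flag, and the helpers below are illustrative names for the test_bit()/__set_bit()-style accessors the patch introduces.

```c
#include <assert.h>

/* Toy model of the memcg_data low-bit tagging described above.
 * MEMCG_DATA_OBJCGS set means "pointer to an objcgs array";
 * clear means "pointer to a struct mem_cgroup". */
#define MEMCG_DATA_OBJCGS 0x1UL

/* Store an objcgs-array pointer with the type flag set. */
static unsigned long set_objcgs(void *objcgs)
{
	return (unsigned long)objcgs | MEMCG_DATA_OBJCGS;
}

/* Check which kind of pointer the word holds (test_bit() analogue). */
static int data_is_objcgs(unsigned long data)
{
	return (data & MEMCG_DATA_OBJCGS) != 0;
}

/* Strip the flag to recover the raw pointer. */
static void *data_to_ptr(unsigned long data)
{
	return (void *)(data & ~MEMCG_DATA_OBJCGS);
}
```

The win the patch describes falls out naturally: every open-coded `& 0x1UL` site collapses into one named helper, and a config-dependent helper can compile to a constant 0.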
On Tue, Oct 27, 2020 at 8:50 PM Muchun Song wrote:
>
> Consider the following memcg hierarchy.
>
>            root
>           /    \
>          A      B
>
> If we get the objcg of memcg A failed,
Please fix the above statement.
> the get_obj_cgroup_from_current
> can
ab: obj_cgroup API")
> Signed-off-by: Muchun Song
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
mem.
>
> Signed-off-by: Muchun Song
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Thu, Oct 29, 2020 at 2:08 AM Michal Hocko wrote:
>
> On Wed 28-10-20 11:50:13, Muchun Song wrote:
> [...]
> > -struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct
> > pglist_data *pgdat)
> > +static struct lruvec *
> > +__mem_cgroup_node_lruvec(struct mem_cgroup *memcg, struct
On Thu, Oct 29, 2020 at 9:09 AM Muchun Song wrote:
>
> On Thu, Oct 29, 2020 at 11:48 PM Shakeel Butt wrote:
> >
> > On Tue, Oct 27, 2020 at 8:50 PM Muchun Song
> > wrote:
> > >
> > > Consider the following memcg hiera
On Fri, Sep 25, 2020 at 9:19 AM Ming Lei wrote:
>
> On Fri, Sep 25, 2020 at 03:31:45PM +0800, Ming Lei wrote:
> > On Thu, Sep 24, 2020 at 09:13:11PM -0400, Theodore Y. Ts'o wrote:
> > > On Thu, Sep 24, 2020 at 10:33:45AM -0400, Theodore Y. Ts'o wrote:
> > > > HOWEVER, thanks to a hint from a
On Fri, Sep 25, 2020 at 9:32 AM Shakeel Butt wrote:
>
> On Fri, Sep 25, 2020 at 9:19 AM Ming Lei wrote:
> >
> > On Fri, Sep 25, 2020 at 03:31:45PM +0800, Ming Lei wrote:
> > > On Thu, Sep 24, 2020 at 09:13:11PM -0400, Theodore Y. Ts'o wrote:
> > > > O
On Fri, Sep 25, 2020 at 10:17 AM Linus Torvalds
wrote:
>
> On Fri, Sep 25, 2020 at 9:19 AM Ming Lei wrote:
> >
> > git bisect shows the first bad commit:
> >
> > [10befea91b61c4e2c2d1df06a2e978d182fcf792] mm: memcg/slab: use a
> > single set of
> > kmem_caches for all
On Fri, Sep 25, 2020 at 10:22 AM Shakeel Butt wrote:
>
> On Fri, Sep 25, 2020 at 10:17 AM Linus Torvalds
> wrote:
> >
> > On Fri, Sep 25, 2020 at 9:19 AM Ming Lei wrote:
> > >
> > > git bisect shows the first bad commit:
> > >
> > >
On Fri, Sep 25, 2020 at 10:48 AM Roman Gushchin wrote:
>
> On Fri, Sep 25, 2020 at 10:35:03AM -0700, Shakeel Butt wrote:
> > On Fri, Sep 25, 2020 at 10:22 AM Shakeel Butt wrote:
> > >
> > > On Fri, Sep 25, 2020 at 10:17 AM Linus Torvalds
> > > wrote:
>
On Fri, Sep 25, 2020 at 10:58 AM Shakeel Butt
wrote:
>
[snip]
>
> I don't think you can ignore the flushing. The __free_once() in
> ___cache_free() assumes there is a space available.
>
> BTW do_drain() also have the same issue.
>
> Why not move slabs_destroy() a
On Fri, Sep 25, 2020 at 1:56 PM Roman Gushchin wrote:
>
> On Fri, Sep 25, 2020 at 12:19:02PM -0700, Shakeel Butt wrote:
> > On Fri, Sep 25, 2020 at 10:58 AM Shakeel Butt
> > wrote:
> > >
> > [snip]
> > >
> > > I don't think you can ignore the
local CPU array_cache cache before
calling slabs_destroy().
Fixes: 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all
allocations")
Signed-off-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Tested-by: Ming Lei
Reported-by: kernel test robot
---
mm/slab.c | 8 ++-
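The ordering constraint behind this fix can be modeled with a small sketch. This is a toy, not the kernel's slab allocator: the per-CPU array_cache has a fixed capacity, and code on the free path assumes a slot is available, so the cache must be flushed (the real free_block()) before slabs_destroy() frees more objects into it.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the fix: flush the bounded per-CPU cache before any
 * call that may push more frees into it. Simplified structures,
 * not the kernel's. */
#define AC_LIMIT 4

struct array_cache {
	int avail;
	void *entry[AC_LIMIT];
};

/* Returns 0 when there is no space; the real bug would instead have
 * written past the end of entry[]. */
static int cache_free(struct array_cache *ac, void *obj)
{
	if (ac->avail >= AC_LIMIT)
		return 0;
	ac->entry[ac->avail++] = obj;
	return 1;
}

/* Stand-in for free_block(): returns all cached objects to their slabs,
 * guaranteeing space for whatever slabs_destroy() frees next. */
static void flush_array_cache(struct array_cache *ac)
{
	ac->avail = 0;
}
```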
ry.use_hierarchy interface is preserved
> with a limited functionality: reading always returns "1", writing
> of "1" passes silently, writing of any other value fails with
> -EINVAL and a warning to dmesg (on the first occasion).
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Tue, Nov 3, 2020 at 1:27 PM Roman Gushchin wrote:
>
> Update cgroup v1 docs after the deprecation of the non-hierarchical
> mode of the memory controller.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
creating of broken hierarchies.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
same memcg of the target
page but the unmapping code is considering accesses from all the
processes, thus decreasing the effectiveness of memcg reclaim.
The simplest solution is to always assume TTU_IGNORE_ACCESS in unmapping
code.
Signed-off-by: Shakeel Butt
---
include/linux/rmap.h | 1 -
mm/huge_memory.
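The behavior change argued for here can be sketched as follows. TTU_IGNORE_ACCESS is the real flag name from the patch, but the function body is a simplified model, not the kernel's try_to_unmap(): without the flag, a recently referenced (young) PTE aborts the unmap; with it, the Accessed bit is ignored and the unmap proceeds.

```c
#include <assert.h>

/* Simplified model of the TTU_IGNORE_ACCESS semantics described above;
 * not the kernel's try_to_unmap() implementation. */
enum ttu_flags {
	TTU_IGNORE_ACCESS = 0x1,
};

static int try_to_unmap_one(unsigned int flags, int pte_young)
{
	/* A referenced PTE keeps its mapping unless the caller asked to
	 * ignore the Accessed bit. */
	if (pte_young && !(flags & TTU_IGNORE_ACCESS))
		return 0;	/* unmap aborted */
	return 1;		/* unmapped */
}
```

Making all callers behave as if TTU_IGNORE_ACCESS were always set is then equivalent to deleting the young check, which is why the flag itself can be removed.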
On Tue, Sep 22, 2020 at 4:12 AM Michal Hocko wrote:
>
> On Mon 21-09-20 11:35:35, Shakeel Butt wrote:
> > Hi all,
> >
> > We are seeing machine lockups due to extreme memory pressure where the
> > free pages on all the zones are way below the min watermarks. The stack
&
On Tue, Sep 22, 2020 at 4:49 AM Michal Hocko wrote:
>
> On Mon 21-09-20 10:50:14, Shakeel Butt wrote:
> > On Mon, Sep 21, 2020 at 9:30 AM Michal Hocko wrote:
> > >
> > > On Wed 09-09-20 14:57:52, Shakeel Butt wrote:
> > > > Introduce an memcg inter
On Tue, Sep 22, 2020 at 8:16 AM Michal Hocko wrote:
>
> On Tue 22-09-20 06:37:02, Shakeel Butt wrote:
> [...]
> > > I would recommend to focus on tracking down the who is blocking the
> > > further progress.
> >
> > I was able to find the CPU next in line
On Tue, Sep 22, 2020 at 9:34 AM Michal Hocko wrote:
>
> On Tue 22-09-20 09:29:48, Shakeel Butt wrote:
> > On Tue, Sep 22, 2020 at 8:16 AM Michal Hocko wrote:
> > >
> > > On Tue 22-09-20 06:37:02, Shakeel Butt wrote:
> [...]
> > > > I talked about
On Tue, Sep 22, 2020 at 9:55 AM Michal Hocko wrote:
>
> On Tue 22-09-20 08:54:25, Shakeel Butt wrote:
> > On Tue, Sep 22, 2020 at 4:49 AM Michal Hocko wrote:
> > >
> > > On Mon 21-09-20 10:50:14, Shakeel Butt wrote:
> [...]
> > > > Let me add o
On Tue, Sep 22, 2020 at 11:31 AM Michal Hocko wrote:
>
> On Tue 22-09-20 11:10:17, Shakeel Butt wrote:
> > On Tue, Sep 22, 2020 at 9:55 AM Michal Hocko wrote:
> [...]
> > > So far I have learned that you are primarily working around an
> > > implementation d
On Tue, Sep 22, 2020 at 5:37 AM Chunxin Zang wrote:
>
> On Tue, Sep 22, 2020 at 6:42 PM Chris Down wrote:
> >
> > Chunxin Zang writes:
> > >On Tue, Sep 22, 2020 at 5:51 PM Chris Down wrote:
> > >>
> > >> Chunxin Zang writes:
> > >> >My usecase is that there are two types of services in one
On Tue, Sep 22, 2020 at 12:09 PM Michal Hocko wrote:
>
> On Tue 22-09-20 11:10:17, Shakeel Butt wrote:
> > On Tue, Sep 22, 2020 at 9:55 AM Michal Hocko wrote:
> [...]
> > > Last but not least the memcg
> > > background reclaim is something that should be possi
On Mon, Aug 17, 2020 at 3:52 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> Changes from Previous Version
> =============================
>
> - Place 'CREATE_TRACE_POINTS' after '#include' statements (Steven Rostedt)
> - Support large record file (Alkaid)
> - Place 'put_pid()' of virtual
On Wed, Jan 27, 2021 at 8:57 AM SeongJae Park wrote:
>
> On Thu, 24 Dec 2020 08:11:11 +0100 SeongJae Park wrote:
>
> > On Wed, 23 Dec 2020 14:54:02 -0800 Shakeel Butt wrote:
> >
> > > On Wed, Dec 23, 2020 at 8:47 AM SeongJae Park wrote:
> &
On Tue, Feb 2, 2021 at 2:30 AM SeongJae Park wrote:
>
> > On Mon, 1 Feb 2021 09:37:39 -0800 Shakeel Butt wrote:
> >
> > > On Tue, Dec 15, 2020 at 3:59 AM SeongJae Park wrote:
> > > >
> > > > From: SeongJae Park
> > > &g
On Tue, Feb 2, 2021 at 7:46 AM SeongJae Park wrote:
>
[snip]
> >
> > You can simplify this by restricting it to one pid/target per write
> > syscall.
>
> Right, thanks for the suggestion. However, I already almost finished writing
> the fix. If there is no other concern, I'd like to keep
: bba4c5f96ce4 ("mm/z3fold.c: support page migration")
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 10 ++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 42ef9955117c..9da471bcab93 100644
: bba4c5f96ce4 ("mm/z3fold.c: support page migration")
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> Changelog since v1:
> - Made comments explicitly refer to new_zhdr->buddy.
>
> mm/z3fold.c | 10 ++
> 1 file changed, 10 insertions(+)
&
Adding related people.
The thread starts at:
http://lkml.kernel.org/r/1562795006.8510.19.ca...@lca.pw
On Mon, Jul 15, 2019 at 8:01 PM Yang Shi wrote:
>
>
>
> On 7/15/19 6:36 PM, Qian Cai wrote:
> >
> >> On Jul 15, 2019, at 8:22 PM, Yang Shi wrote:
> >>
> >>
> >>
> >> On 7/15/19 2:23 PM, Qian
On Tue, Jul 16, 2019 at 5:12 PM Yang Shi wrote:
>
>
>
> On 7/16/19 4:36 PM, Shakeel Butt wrote:
> > Adding related people.
> >
> > The thread starts at:
> > http://lkml.kernel.org/r/1562795006.8510.19.ca...@lca.pw
> >
> > On Mon, Jul 15, 2019 at 8:
On Wed, Jul 17, 2019 at 10:45 AM Yang Shi wrote:
>
> Shakeel Butt reported premature oom on kernel with
> "cgroup_disable=memory" since mem_cgroup_is_root() returns false even
> though memcg is actually NULL. The drop_caches is also broken.
>
> It is because commi
ready disabled.
Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
mm/swap.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 3dbef6517cac..4eb179ee0b72 1006
Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages. Fix that. Also make __pagevec_lru_add_fn()
use the irq-unsafe alternative to update the stats, as the irqs are
already disabled.
Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
---
mm
the fixup
from the splitting path.
Signed-off-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
mm/swap.c | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 4eb179ee0b72..b75c0ce90418 100644
--- a/mm/swap.c
+++ b/mm/swap.c
On Wed, May 27, 2020 at 12:42 PM Johannes Weiner wrote:
>
> On Wed, May 27, 2020 at 11:29:58AM -0700, Shakeel Butt wrote:
> > From: Johannes Weiner
> >
> > Currently, THP are counted as single pages until they are split right
> > before being swapped out. However, a
Always account THP by the number of basepages, and remove the fixup
> from the splitting path.
>
> Signed-off-by: Johannes Weiner
> Reviewed-by: Rik van Riel
> Acked-by: Michal Hocko
> Acked-by: Minchan Kim
Reviewed-by: Shakeel Butt
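The accounting rule the patch above establishes (count a THP by its number of base pages everywhere, rather than fixing it up at split time) can be sketched like this. The 512 constant and the one-field struct page are simplified stand-ins, not the kernel's definitions.

```c
#include <assert.h>

/* Sketch of "always account THP by the number of basepages": every
 * counter update goes through thp_nr_pages() instead of adding 1.
 * Simplified types; not kernel code. */
#define HPAGE_NR_PAGES 512

struct page {
	int is_huge;
};

static int thp_nr_pages(const struct page *page)
{
	return page->is_huge ? HPAGE_NR_PAGES : 1;
}

static long lru_size;

static void lru_account_add(const struct page *page)
{
	lru_size += thp_nr_pages(page);	/* the fix: not "+= 1" */
}
```

With every update scaled this way, the counters stay correct whether or not the huge page is ever split, so the split-time fixup can be deleted.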
On Wed, May 27, 2020 at 1:46 PM Andrew Morton wrote:
>
> On Wed, 27 May 2020 15:41:48 -0400 Johannes Weiner wrote:
>
> > On Wed, May 27, 2020 at 11:29:58AM -0700, Shakeel Butt wrote:
> > > From: Johannes Weiner
> > >
> > > Currently, THP are counted
I haven't gone through the whole email-chain, so, I might be asking
some repetitive questions. I will go through the email-chain later.
On Wed, May 20, 2020 at 7:37 AM Chris Down wrote:
>
> In Facebook production, we've seen cases where cgroups have been put
> into allocator throttling even when
On Thu, May 28, 2020 at 1:30 PM Johannes Weiner wrote:
>
> On Thu, May 28, 2020 at 08:48:31PM +0100, Chris Down wrote:
> > Shakeel Butt writes:
> > > What was the initial reason to have different behavior in the first place?
> >
> > This differing behaviour
_RETRIES with MAX_RECLAIM_RETRIES, making the page
> allocator and memcg internals more similar in semantics when reclaim
> fails to produce results, avoiding premature OOMs or throttling.
>
> Signed-off-by: Chris Down
Reviewed-by: Shakeel Butt
On Wed, Jan 24, 2018 at 3:12 AM, Amir Goldstein wrote:
> On Wed, Jan 24, 2018 at 12:34 PM, Jan Kara wrote:
>> On Mon 22-01-18 22:31:20, Amir Goldstein wrote:
>>> On Fri, Jan 19, 2018 at 5:02 PM, Shakeel Butt wrote:
>>> > On Wed, Nov 15, 2017 at 1:31 AM, Jan Kara
On Wed, Jan 24, 2018 at 5:54 PM, Al Viro wrote:
> On Wed, Jan 24, 2018 at 05:08:27PM -0800, Shakeel Butt wrote:
>> First, let me apologize, I think I might have led the discussion in
>> wrong direction by giving one wrong information. The current upstream
>> kernel, from the
On Thu, Oct 19, 2017 at 3:18 AM, Kirill A. Shutemov
wrote:
> On Wed, Oct 18, 2017 at 04:17:30PM -0700, Shakeel Butt wrote:
>> Recently we have observed high latency in mlock() in our generic
>> library and noticed that users have started using tmpfs files even
>> without
On Thu, Oct 19, 2017 at 5:32 AM, Michal Hocko wrote:
> On Wed 18-10-17 16:17:30, Shakeel Butt wrote:
>> Recently we have observed high latency in mlock() in our generic
>> library and noticed that users have started using tmpfs files even
>> without swap and the latency
On Wed, Oct 18, 2017 at 11:24 PM, Anshuman Khandual
wrote:
> On 10/19/2017 04:47 AM, Shakeel Butt wrote:
>> Recently we have observed high latency in mlock() in our generic
>> library and noticed that users have started using tmpfs files even
>> without swap and the latency
> [...]
>>
>> Sorry for the confusion. I wanted to say that if the pages which are
>> being mlocked are on caches of remote cpus then lru_add_drain_all will
>> move them to their corresponding LRUs and then remaining functionality
>> of mlock will move them again from their evictable LRUs to
On Wed, Oct 18, 2017 at 8:18 PM, Balbir Singh wrote:
> On Wed, 18 Oct 2017 16:17:30 -0700
> Shakeel Butt wrote:
>
>> Recently we have observed high latency in mlock() in our generic
>> library and noticed that users have started using tmpfs files even
>> without s
On Thu, Oct 19, 2017 at 1:13 PM, Michal Hocko wrote:
> On Thu 19-10-17 12:46:50, Shakeel Butt wrote:
>> > [...]
>> >>
>> >> Sorry for the confusion. I wanted to say that if the pages which are
>> >> being mlocked are on caches of remote