On Thu, Jun 7, 2018 at 10:30 AM Ralph Campbell wrote:
>
>
>
> On 06/07/2018 07:57 AM, Matthew Wilcox wrote:
> > From: Matthew Wilcox
> >
> > Need to do a bit of rearranging to make this work.
> >
> > Signed-off-by: Matthew Wilcox
> > ---
> > arch/x86/events/intel/uncore.c | 19
Thelen
Signed-off-by: Shakeel Butt
---
fs/quota/dquot.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index d88231e3b2be..241b00f835b9 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -716,7 +716,7 @@ dqcache_shrink_scan(struct
happens because parent_mem_cgroup() returns a NULL
> pointer, which is dereferenced later without a check.
>
> As cgroup v1 has no memory guarantee support, let's make
> mem_cgroup_protected() immediately return MEMCG_PROT_NONE,
> if the given cgroup has no parent (non-hierarchi
On Tue, May 22, 2018 at 3:09 AM Kirill Tkhai wrote:
>
> From: Vladimir Davydov
>
> The patch makes shrink_slab() be called for root_mem_cgroup
> in the same way as it's called for the rest of cgroups.
> This simplifies the logic and improves the readability.
>
> Signed-off-by: Vladimir Davydov
On Sat, Jun 9, 2018 at 3:20 AM Vladimir Davydov wrote:
>
> On Tue, May 29, 2018 at 05:12:04PM -0700, Shakeel Butt wrote:
> > The memcg kmem cache creation and deactivation (SLUB only) is
> > asynchronous. If a root kmem cache is destroyed whose memcg cache is in
> >
On Sun, Jun 10, 2018 at 9:32 AM Paul E. McKenney
wrote:
>
> On Sun, Jun 10, 2018 at 07:52:50AM -0700, Shakeel Butt wrote:
> > On Sat, Jun 9, 2018 at 3:20 AM Vladimir Davydov
> > wrote:
> > >
> > > On Tue, May 29, 2018 at 05:12:04PM -0700, Shakeel Butt
The flag memcg_kmem_skip_account was added during the era of opt-out
kmem accounting. There is no need for such a flag in the opt-in world as
there aren't any __GFP_ACCOUNT allocations within
memcg_create_cache_enqueue().
Signed-off-by: Shakeel Butt
---
include/linux/sched.h | 3 ---
mm
On Thu, Mar 16, 2017 at 12:57 PM, Johannes Weiner wrote:
> On Sat, Mar 11, 2017 at 09:52:15AM -0800, Shakeel Butt wrote:
>> On Sat, Mar 11, 2017 at 5:51 AM, Yisheng Xie wrote:
>> > @@ -2808,7 +2826,7 @@ static unsigned long do_try_to_free_pages(struct
>
On Mon, Mar 6, 2017 at 2:30 AM, Michal Hocko wrote:
> From: Michal Hocko
>
> vhost code uses __GFP_REPEAT when allocating vhost_virtqueue resp.
> vhost_vsock because it would really like to prefer kmalloc to the
> vmalloc fallback - see 23cc5a991c7a ("vhost-net: extend device
> allocation to
-by: Shakeel Butt
---
mm/vmscan.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bae698484e8e..b2d24cc7a161 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2819,6 +2819,12 @@ static bool pfmemalloc_watermark_ok(pg_data_t *pgdat
On Fri, Mar 10, 2017 at 6:19 PM, Yisheng Xie wrote:
> From: Yisheng Xie
>
> When we enter do_try_to_free_pages(), may_thrash is initially clear, and
> we retry shrink_zones() to tap the cgroup's reserve memory by setting
> may_thrash when the first shrink_zones() pass reclaims nothing.
>
> However, if
function
> may_thrash and return true when memcg is disabled or on legacy
> hierarchy.
>
> Signed-off-by: Yisheng Xie
> Suggested-by: Shakeel Butt
> ---
> v2:
> - more restrictive condition for retry of shrink_zones (restricting
>cgroup_disabled=memory boot opt
> A more useful metric for memory pressure at this point is quantifying
> that time you spend thrashing: time the job spends in direct reclaim
> and on the flipside time the job waits for recently evicted pages to
> come back. Combined, that gives you a good measure of overhead from
> memory
On Mon, Mar 6, 2017 at 2:33 AM, Michal Hocko wrote:
> From: Michal Hocko
>
> fq_alloc_node, alloc_netdev_mqs and netif_alloc* open code kmalloc
> with vmalloc fallback. Use the kvmalloc variant instead. Keep the
> __GFP_REPEAT flag based on explanation from Eric:
> "
> At the time, tests on the
On Fri, Mar 31, 2017 at 8:30 AM, Andrey Ryabinin
wrote:
> zswap_frontswap_store() is called during memory reclaim from
> __frontswap_store() from swap_writepage() from shrink_page_list().
> This may happen in NOFS context, thus zswap shouldn't use __GFP_FS,
> otherwise we may re-enter fs code
ub function
> mem_cgroup_thrashed() and return true when memcg is disabled or on
> legacy hierarchy.
>
> Signed-off-by: Yisheng Xie
> Suggested-by: Shakeel Butt
Thanks.
Reviewed-by: Shakeel Butt
> ---
> v3:
> - rename function may_thrash() to mem_cgroup_thrashed() to
On Mon, Mar 13, 2017 at 2:02 AM, Michal Hocko wrote:
> On Fri 10-03-17 11:46:20, Shakeel Butt wrote:
>> Recently kswapd has been modified to give up after MAX_RECLAIM_RETRIES
>> number of unsuccessful iterations. Before going to sleep, kswapd thread
>> will unconditional
On Mon, Mar 13, 2017 at 1:33 AM, Michal Hocko wrote:
> Please do not post new version after a single feedback and try to wait
> for more review to accumulate. This is in the 3rd version and it is not
> clear why it is still an RFC.
>
> On Sun 12-03-17 19:06:10, Yisheng Xie wrote:
>> From: Yisheng
On Mon, Mar 13, 2017 at 8:46 AM, Michal Hocko wrote:
> On Mon 13-03-17 08:07:15, Shakeel Butt wrote:
>> On Mon, Mar 13, 2017 at 2:02 AM, Michal Hocko wrote:
>> > On Fri 10-03-17 11:46:20, Shakeel Butt wrote:
>> >> Recently kswapd has been modified to giv
On Mon, Mar 13, 2017 at 12:58 PM, Johannes Weiner wrote:
> Hi Shakeel,
>
> On Fri, Mar 10, 2017 at 11:46:20AM -0800, Shakeel Butt wrote:
>> Recently kswapd has been modified to give up after MAX_RECLAIM_RETRIES
>> number of unsuccessful iterations. Before going to
-by: Shakeel Butt
Suggested-by: Michal Hocko
Suggested-by: Johannes Weiner
---
v2:
Instead of separate helper function for checking kswapd_failures,
added the check into pfmemalloc_watermark_ok() and renamed that
function.
mm/vmscan.c | 15 +--
1 file changed, 9 insertions(+), 6
dy to make a forward
progress. So, add kswapd_failures check on the throttle_direct_reclaim
condition.
Signed-off-by: Shakeel Butt
Suggested-by: Michal Hocko
Suggested-by: Johannes Weiner
Acked-by: Hillf Danton
Acked-by: Michal Hocko
---
v3:
Commit message updated.
v2:
Instead of separate helpe
On Tue, Feb 28, 2017 at 1:39 PM, Johannes Weiner wrote:
> Jia He reports a problem with kswapd spinning at 100% CPU when
> requesting more hugepages than memory available in the system:
>
> $ echo 4000 >/proc/sys/vm/nr_hugepages
>
> top - 13:42:59 up 3:37, 1 user, load average: 1.09, 1.03,
he mod_objcg_state()
> from int to enum node_stat_item.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
ue.c:2416)
> [9.846034] ? process_one_work (kernel/workqueue.c:2358)
> [9.846162] kthread (kernel/kthread.c:292)
> [9.846271] ? __kthread_bind_mask (kernel/kthread.c:245)
> [9.846420] ret_from_fork (arch/x86/entry/entry_64.S:300)
> [9.846531] ---[ end trace 8b5647c1
CCed: Paolo Bonzini
On Fri, Oct 16, 2020 at 1:53 PM Minchan Kim wrote:
[snip]
> > And there might be others, and adding everything to /proc/meminfo is not
> > feasible. I once proposed adding a counter called "Unaccounted:" which
> > would at least tell the user easily if a significant
On Mon, Dec 7, 2020 at 6:22 AM Hui Su wrote:
>
> Since the commit 60cd4bcd6238 ("memcg: localize memcg_kmem_enabled()
> check"), we have supplied the api which users don't have to explicitly
> check memcg_kmem_enabled().
>
> Signed-off-by: Hui Su
> ---
> mm/page_alloc.c | 12 ++--
> 1
+Michal Hocko
Message starts at https://lkml.kernel.org/r/20201207142204.GA18516@rlk
On Mon, Dec 7, 2020 at 10:08 PM Hui Su wrote:
>
> On Mon, Dec 07, 2020 at 09:28:46AM -0800, Shakeel Butt wrote:
> > On Mon, Dec 7, 2020 at 6:22 AM Hui Su wrote:
> >
> > The reason
* c * n) bytes.
>
> Signed-off-by: Muchun Song
Few nits below:
Reviewed-by: Shakeel Butt
> ---
> Changes in v1 -> v2:
> - Update the commit log to point out how many bytes that we can save.
>
> include/linux/memcontrol.h | 6 +-
> mm/memcontrol.c| 10 +++
On Tue, Jan 12, 2021 at 9:12 AM Johannes Weiner wrote:
>
> When a value is written to a cgroup's memory.high control file, the
> write() context first tries to reclaim the cgroup to size before
> putting the limit in place for the workload. Concurrent charges from
> the workload can keep such a
instances running in a
datacenter with heterogeneous systems (some have swap and some don't)
will keep seeing a consistent view of their usage.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changes since v1:
- Updated commit message
Documentation/admin-guide/cgroup-v2.rst | 4
drivers
_dirty per-memcg numa stat.
Fixes: 5f9a4f4a7096 ("mm: memcontrol: add the missing numa_stat interface for
cgroup v2")
Signed-off-by: Shakeel Butt
Reviewed-by: Muchun Song
Acked-by: Yang Shi
Reviewed-by: Roman Gushchin
Cc:
---
Changes since v1:
- none
mm/migrate.c | 4 ++--
1 file c
but to be more future proof, this patch adds the THP support
for those stats as well.
Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
Signed-off-by: Shakeel Butt
Acked-by: Yang Shi
Reviewed-by: Roman Gushchin
Cc:
---
Changes since v1:
- Fixed a typo
mm/migr
ic.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
st one is being collected, but the rest of the flushing code is the
> same. Merge them into one function and share the common code.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
BTW what about the lruvec stats? Why not convert them to rstat as well?
memcg_exact_page_state(), since memcg_page_state() is now exact.
Only if cgroup_rstat_flush() has been called before memcg_page_state(), right?
>
> Signed-off-by: Johannes Weiner
> Reviewed-by: Roman Gushchin
> Acked-by: Michal Hocko
Overall the patch looks good to me with a co
to the memory.stat side of the
> equation, since they're included in memory.current and could throw
> false positives.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
d-by: Kirill Tkhai
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ported-by: Muchun Song
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
licit reference on the page, and this dance is no longer needed.
>
> Use unlock_page_memcg() and dec_lruvec_page_stat() directly.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
stored in a memcg structure. So
> move the
> shrinker_maps handling code into vmscan.c for tighter integration with
> shrinker code,
> and remove the "memcg_" prefix. There is no functional change.
>
> Acked-by: Vlastimil Babka
> Acked-by: Kirill Tkhai
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
his dance is no longer needed.
>
> Use unlock_page_memcg() and dec_lruvec_page_state() directly.
>
> This removes the last user of the lock_page_memcg() return value,
> change it to void. Touch up the comments in there as well. This also
> removes the last extern user of __unlock_page_memcg(), make it
> static. Further, it removes the last user of dec_lruvec_state(),
> delete it, along with a few other unused helpers.
>
> Signed-off-by: Johannes Weiner
> Acked-by: Hugh Dickins
> Reviewed-by: Shakeel Butt
The patch looks fine. I don't want to spoil the fun but just wanted to
call out that I might bring back __unlock_page_memcg() for the memcg
accounting of zero copy TCP memory work where we are uncharging the
page in page_remove_rmap().
On Thu, Jan 28, 2021 at 6:22 AM Michal Hocko wrote:
>
> On Thu 28-01-21 06:05:11, Shakeel Butt wrote:
> > On Wed, Jan 27, 2021 at 11:59 PM Michal Hocko wrote:
> > >
> > > On Wed 27-01-21 10:42:13, Roman Gushchin wrote:
> > > > On Tue, Jan 26, 20
On Mon, Jan 25, 2021 at 1:35 PM Mike Rapoport wrote:
>
> On Mon, Jan 25, 2021 at 09:18:04AM -0800, Shakeel Butt wrote:
> > On Mon, Jan 25, 2021 at 8:20 AM Matthew Wilcox wrote:
> > >
> > > On Thu, Jan 21, 2021 at 02:27:20PM +0200, Mike Rapoport wrot
On Wed, Jan 27, 2021 at 11:59 PM Michal Hocko wrote:
>
> On Wed 27-01-21 10:42:13, Roman Gushchin wrote:
> > On Tue, Jan 26, 2021 at 04:05:55PM +0100, Michal Hocko wrote:
> > > On Tue 26-01-21 14:48:38, Matthew Wilcox wrote:
> > > > On Mon, Jan 25, 2021 at 11:38:17PM +0200, Mike Rapoport wrote:
>
y in memory.stat reporting")
Reviewed-by: Shakeel Butt
On Tue, Feb 2, 2021 at 12:51 PM Johannes Weiner wrote:
>
> No need to encapsulate a simple struct member access.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Tue, Feb 2, 2021 at 12:45 PM Johannes Weiner wrote:
>
> There are no users outside of the memory controller itself. The rest
> of the kernel cares either about node or lruvec stats.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Tue, Dec 15, 2020 at 3:57 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> Even if the initial monitoring target regions are somehow well constructed
> to fulfill the assumption (pages in same region have similar access
> frequencies), the data access pattern can be dynamically changed. This
On Tue, Dec 15, 2020 at 3:59 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> DAMON is designed to be used by kernel space code such as the memory
> management subsystems, and therefore it provides only kernel space API.
Which kernel space APIs are being referred here?
> That said, letting
On Tue, Dec 15, 2020 at 3:56 AM SeongJae Park wrote:
>
> From: SeongJae Park
>
> To avoid the unbounded increase of the overhead, DAMON groups adjacent
> pages that assumed to have the same access frequencies into a region.
'that are assumed'
> As long as the assumption (pages in a region have
again.
>
> Fixes: 536d3bf261a2 ("mm: memcontrol: avoid workload stalls when lowering
> memory.high")
> Cc: # 5.8+
> Reported-by: Tejun Heo
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Tue, Mar 23, 2021 at 11:42 AM Arjun Roy wrote:
>
[...]
>
> To summarize then, it seems to me that we're on the same page now.
> I'll put together a tentative v3 such that:
> 1. It uses pre-charging, as previously discussed.
> 2. It uses a page flag to delineate pages of a certain networking
On Wed, Mar 24, 2021 at 1:39 PM Arjun Roy wrote:
>
> On Wed, Mar 24, 2021 at 2:12 AM Michal Hocko wrote:
> >
> > On Tue 23-03-21 11:47:54, Arjun Roy wrote:
> > > On Tue, Mar 23, 2021 at 7:34 AM Michal Hocko wrote:
> > > >
> > > > On Wed 17-03-21 18:12:55, Johannes Weiner wrote:
> > > > [...]
>
On Mon, Mar 29, 2021 at 9:13 AM Muchun Song wrote:
>
> On Mon, Mar 29, 2021 at 10:49 PM Dan Schatzberg
> wrote:
[...]
>
> Since the remote memcg must hold a reference, we do not
> need to do something like get_active_memcg() does.
> Just use css_get() to obtain a ref; it is simpler. Just
> like below.
On Tue, Mar 30, 2021 at 3:20 AM Muchun Song wrote:
>
> Since the following patchsets were applied, all the kernel memory is charged
> with the new APIs of obj_cgroup.
>
> [v17,00/19] The new cgroup slab memory controller
> [v5,0/7] Use obj_cgroup APIs to charge kmem pages
>
> But user
On Thu, Apr 1, 2021 at 9:08 AM Muchun Song wrote:
>
[...]
> > The zombie issue is a pretty urgent concern that has caused several
> > production emergencies now. It needs a fix sooner rather than later.
>
> Thank you very much for clarifying the problem for me. I do agree
> with you. This issue
CC: Hugh Dickins
On Wed, Mar 31, 2021 at 9:37 PM Alistair Popple wrote:
>
> On Wednesday, 31 March 2021 10:57:46 PM AEDT Jason Gunthorpe wrote:
> > On Wed, Mar 31, 2021 at 03:15:47PM +1100, Alistair Popple wrote:
> > > On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> > > > On
On Tue, Mar 30, 2021 at 4:44 PM Hugh Dickins wrote:
>
> Lockdep warns mm/vmscan.c: suspicious rcu_dereference_protected() usage!
> when free_shrinker_info() is called from mem_cgroup_css_free(): there it
> is called with no locking, whereas alloc_shrinker_info() calls it with
> down_write of
On Fri, Apr 2, 2021 at 6:04 PM Andrew Morton wrote:
>
> On Wed, 31 Mar 2021 20:35:02 -0700 Roman Gushchin wrote:
>
> > On Thu, Apr 01, 2021 at 11:31:16AM +0800, Miaohe Lin wrote:
> > > On 2021/4/1 11:01, Muchun Song wrote:
> > > > Christian Borntraeger reported a warning about "percpu ref
> > >
rence of memcg.
>
> Reported-by: Christian Borntraeger
> Signed-off-by: Muchun Song
Looks good to me.
Reviewed-by: Shakeel Butt
On Tue, Mar 30, 2021 at 2:10 PM Johannes Weiner wrote:
>
[...]
> > The main concern I have with *just* reparenting LRU pages is that for
> > the long running systems, the root memcg will become a dumping ground.
> > In addition a job running multiple times on a machine will see
> > inconsistent
a object cgroup. This is just
> a code movement without any functional changes.
>
> Signed-off-by: Muchun Song
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Fri, Mar 12, 2021 at 3:07 PM Johannes Weiner wrote:
>
> On Fri, Mar 12, 2021 at 02:42:45PM -0800, Shakeel Butt wrote:
> > Hi Johannes,
> >
> > On Fri, Mar 12, 2021 at 11:23 AM Johannes Weiner wrote:
> > >
> > [...]
> > >
> > > Longe
Hi Johannes,
On Fri, Mar 12, 2021 at 11:23 AM Johannes Weiner wrote:
>
[...]
>
> Longer term we most likely need it there anyway. The issue you are
> describing in the cover letter - allocations pinning memcgs for a long
> time - it exists at a larger scale and is causing recurring problems
> in
On Thu, Mar 11, 2021 at 12:52 AM Huang, Ying wrote:
>
> Hi, Butt,
>
> Shakeel Butt writes:
>
> > On Wed, Mar 10, 2021 at 4:47 PM Huang, Ying wrote:
> >>
> >> From: Huang Ying
> >>
> >> In shrink_node(), to determine whether t
> sleep. Switch to the atomic variant, cgroup_rstat_irqsafe().
>
> To be consistent with other memcg flush calls, but without adding
> another memcg wrapper, inline and drop memcg_flush_vmstats() instead.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Mon, Mar 15, 2021 at 9:20 PM Arjun Roy wrote:
>
[...]
> >
>
> Apologies for the spam - looks like I forgot to rebase the first time
> I sent this out.
>
> Actually, on a related note, it's not 100% clear to me whether this
> patch (which in its current form, applies to net-next) should instead
On Tue, Mar 16, 2021 at 8:37 AM Dan Schatzberg wrote:
>
[...]
>
> /* Support for loadable transfer modules */
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 0c04d39a7967..fd5dd961d01f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
>
case 3) it
mem_cgroup_charge()
> would always charge the root cgroup. Now it looks up the current
> active_memcg first (falling back to charging the root cgroup if not
> set).
>
> Signed-off-by: Dan Schatzberg
> Acked-by: Johannes Weiner
> Acked-by: Tejun Heo
> Ac
On Wed, Mar 17, 2021 at 3:30 PM Jens Axboe wrote:
>
> On 3/16/21 9:36 AM, Dan Schatzberg wrote:
> > No major changes, just rebasing and resubmitting
>
> Applied for 5.13, thanks.
>
I have requested a couple of changes in the patch series. Can this
applied series still be changed or new patches
On Wed, Mar 17, 2021 at 11:17 PM Stephen Rothwell wrote:
>
> Hi all,
>
> Today's linux-next merge of the akpm-current tree got a conflict in:
>
> mm/memcontrol.c
>
> between commit:
>
> 06d69d4c8669 ("mm: Charge active memcg when no mm is set")
>
> from the block tree and commit:
>
>
On Thu, Mar 18, 2021 at 4:08 AM Muchun Song wrote:
>
> Just like assignment to ug->memcg, we only need to update ug->dummy_page
> if the memcg changed. So move it there. This is a very small optimization.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
e anything left in there by the time the page is freed, what
> we care about is whether the value of page->memcg_data is 0. So just
> directly access page->memcg_data here.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Thu, Mar 18, 2021 at 4:08 AM Muchun Song wrote:
>
[...]
>
> +static inline struct mem_cgroup *get_obj_cgroup_memcg(struct obj_cgroup
> *objcg)
I would prefer get_mem_cgroup_from_objcg().
> +{
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> +retry:
> + memcg =
On Thu, Mar 18, 2021 at 4:46 PM Andrew Morton wrote:
>
> On Thu, 18 Mar 2021 10:00:17 -0600 Jens Axboe wrote:
>
> > On 3/18/21 9:53 AM, Shakeel Butt wrote:
> > > On Wed, Mar 17, 2021 at 3:30 PM Jens Axboe wrote:
> > >>
> > >> On 3/16/21 9:36 AM
On Wed, Mar 10, 2021 at 10:54 AM Yang Shi wrote:
>
> On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt wrote:
> >
> > On Wed, Mar 10, 2021 at 9:46 AM Yang Shi wrote:
> > >
> > > The number of deferred objects might get wound up to an absurd number, and
> > &
On Wed, Mar 10, 2021 at 9:46 AM Yang Shi wrote:
>
> The number of deferred objects might get wound up to an absurd number, and it
> results in excessive clamping of slab objects, which is undesirable for
> sustaining the workingset.
>
> So shrink deferred objects proportional to priority and cap nr_deferred to
>
On Wed, Mar 10, 2021 at 4:47 PM Huang, Ying wrote:
>
> From: Huang Ying
>
> In shrink_node(), to determine whether to enable cache trim mode, the
> LRU size is obtained via lruvec_page_state(). That gets the value from
> a per-CPU counter (mem_cgroup_per_node->lruvec_stat[]). The error of
> the
On Wed, Mar 10, 2021 at 1:41 PM Yang Shi wrote:
>
> On Wed, Mar 10, 2021 at 1:08 PM Shakeel Butt wrote:
> >
> > On Wed, Mar 10, 2021 at 10:54 AM Yang Shi wrote:
> > >
> > > On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt wrote:
> > > >
>
> On the other hand, there have been several bugs and confusion around
> the many possible swap controller states (cgroup1 vs cgroup2 behavior,
> memory accounting without swap accounting, memcg runtime disabled).
>
> This puts the maintenance overhead of retaining the toggle abov
On Thu, Mar 18, 2021 at 9:05 PM Muchun Song wrote:
>
> On Fri, Mar 19, 2021 at 11:40 AM Shakeel Butt wrote:
> >
> > On Thu, Mar 18, 2021 at 4:08 AM Muchun Song
> > wrote:
> > >
> > [...]
> > >
> > > +static inline struct mem_cgr
98072de ("mm: memcontrol: make swap tracking an integral part of
> memory control")
> Reported-by: Hugh Dickins
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Fri, Mar 19, 2021 at 8:51 AM Dan Schatzberg wrote:
>
> On Thu, Mar 18, 2021 at 05:56:28PM -0700, Shakeel Butt wrote:
> >
> > We need something similar for mem_cgroup_swapin_charge_page() as well.
> >
> > It is better to take this series in mm tree and Jens i
sched/pipe benchmark...
# Executed 1000000 pipe operations between two processes
Total time: 3.329 [sec]
3.329820 usecs/op
300316 ops/sec
Signed-off-by: Shakeel Butt
---
kernel/sched/psi.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git
On Fri, Mar 19, 2021 at 10:36 AM Johannes Weiner wrote:
>
> On Fri, Mar 19, 2021 at 06:49:55AM -0700, Shakeel Butt wrote:
> > On Thu, Mar 18, 2021 at 10:49 PM Johannes Weiner wrote:
> > >
> > > The swapaccounting= commandline option already does very little
&
"mm: memcg/slab: optimize objcg stock draining")
> Signed-off-by: Muchun Song
Good catch.
Reviewed-by: Shakeel Butt
() to get the object
> cgroup associated with a kmem page. Or we can use page_memcg()
> to get the memory cgroup associated with a kmem page, but caller must
> ensure that the returned memcg won't be released (e.g. acquire the
> rcu_read_lock or css_set_lock).
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
) and
> call obj_cgroup_uncharge_pages() in obj_cgroup_release().
>
> This is just code cleanup without any functionality changes.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Tue, Mar 9, 2021 at 2:09 AM Muchun Song wrote:
>
> We want to reuse the obj_cgroup APIs to charge the kmem pages.
> If we do that, we should store an object cgroup pointer to
> page->memcg_data for the kmem pages.
>
> Finally, page->memcg_data can have 3 different meanings.
replace 'can' with
d-off-by: Muchun Song
Reviewed-by: Shakeel Butt
The function swap_readpage() (and other functions it calls) extracts the swap
entry from page->private. However, for SWP_SYNCHRONOUS_IO, the kernel
skips the swapcache and thus we need to manually set the page->private
with the swap entry before calling swap_readpage().
Signed-off-by: Shakee
On Wed, Mar 3, 2021 at 6:02 AM Michal Hocko wrote:
>
[...]
> > > > + BUG_ON(vm->nr_pages != THREAD_SIZE / PAGE_SIZE);
> > >
> > > I do not think we need this BUG_ON. What kind of purpose does it serve?
> >
> > vm->nr_pages should be always equal to THREAD_SIZE / PAGE_SIZE
> > if the
On Wed, Mar 3, 2021 at 10:58 AM Suren Baghdasaryan wrote:
>
> process_madvise currently requires ptrace attach capability.
> PTRACE_MODE_ATTACH gives one process complete control over another
> process. It effectively removes the security boundary between the
> two processes (in one direction).
On Wed, Mar 3, 2021 at 3:34 PM Suren Baghdasaryan wrote:
>
> On Wed, Mar 3, 2021 at 3:17 PM Shakeel Butt wrote:
> >
> > On Wed, Mar 3, 2021 at 10:58 AM Suren Baghdasaryan
> > wrote:
> > >
> > > process_madvise currently requires ptrace attach capa
On Fri, Feb 26, 2021 at 2:09 PM syzbot
wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:577c2835 Add linux-next specific files for 20210224
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=137cef82d0
> kernel config:
On Fri, Feb 26, 2021 at 3:14 PM Mike Kravetz wrote:
>
> Cc: Michal
>
> On 2/26/21 2:44 PM, Shakeel Butt wrote:
> > On Fri, Feb 26, 2021 at 2:09 PM syzbot
> > wrote:
>
> >> other info that might help us debug this:
> >>
> >> Possible in
On Sun, Feb 28, 2021 at 10:25 PM Muchun Song wrote:
>
> We want to reuse the obj_cgroup APIs to reparent the kmem pages when
> the memcg offlined. If we do this, we should store an object cgroup
> pointer to page->memcg_data for the kmem pages.
>
> Finally, page->memcg_data can have 3 different
On Mon, Mar 1, 2021 at 5:16 PM Roman Gushchin wrote:
>
> On Mon, Mar 01, 2021 at 02:22:26PM +0800, Muchun Song wrote:
> > The remote memcg charging APIs are a mechanism to charge kernel memory
> > to a given memcg. So we can move the infrastructure to the scope of
> > the CONFIG_MEMCG_KMEM.
>
>
On Mon, Mar 1, 2021 at 7:03 PM Muchun Song wrote:
>
> On Tue, Mar 2, 2021 at 2:11 AM Shakeel Butt wrote:
> >
> > On Sun, Feb 28, 2021 at 10:25 PM Muchun Song
> > wrote:
> > >
> > > We want to reuse the obj_cgroup APIs to reparent the kmem pages when
&