g in the child? (i.e. do you need
VM_DONTCOPY). BTW VM_DONTDUMP is added by remap_pfn_range(), so if you
want you can remove it here.
> + return remap_pfn_range(vma, vma->vm_start, pfn, vm_size,
> vma->vm_page_prot);
> +}
> +
> static struct bin_attribute bin_attr_btf_vmlinux __ro_after_init = {
> .attr = { .name = "vmlinux", .mode = 0444, },
> .read_new = sysfs_bin_attr_simple_read,
> + .mmap = btf_sysfs_vmlinux_mmap,
> };
>
> struct kobject *btf_kobj;
>
Overall this looks good to me, so you can add:
Reviewed-by: Shakeel Butt
akes
Reviewed-by: Shakeel Butt
ask, which otherwise break the import
> of the UAPI header.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
nd update tests accordingly.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
On Thu, Jan 30, 2025 at 08:40:27PM +, Lorenzo Stoakes wrote:
> The pidfd_fdinfo_test.c and pidfd_setns_test.c tests appear to be missing
> fundamental system header imports required to execute correctly. Add these.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
implementing this functionality for
> process_madvise(), process_mrelease() (albeit, using it here wouldn't
> really make sense) and pidfd_send_signal().
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
On Mon, Dec 02, 2024 at 10:52:13AM +, Lorenzo Stoakes wrote:
> On Fri, Nov 08, 2024 at 02:28:14PM +, Lorenzo Stoakes wrote:
> > On Wed, Oct 30, 2024 at 04:37:37PM +, Lorenzo Stoakes wrote:
> > > On Mon, Oct 28, 2024 at 04:06:07PM +, Lorenzo Stoakes wrote:
> > > > I guess I'll try to
On Wed, Oct 23, 2024 at 08:18:35AM GMT, Lorenzo Stoakes wrote:
> On Tue, Oct 22, 2024 at 05:53:00PM -0700, Shakeel Butt wrote:
> > On Thu, Oct 17, 2024 at 10:05:50PM GMT, Lorenzo Stoakes wrote:
> > > It is useful to be able to utilise the pidfd mechanism to reference the
>
> +
> + /* The caller expects an elevated reference count. */
> + get_pid(pid);
Do you want this helper to work for scenarios where the pid is used across
contexts? Otherwise, can't we get rid of this get and the later put for self?
> + return pid;
> +}
> +
Overall looks good to me.
Reviewed-by: Shakeel Butt
citly reference the current process (i.e. thread group
> leader) without the need for a pidfd.
>
> Signed-off-by: Lorenzo Stoakes
Reviewed-by: Shakeel Butt
On Tue, Apr 02, 2024 at 09:50:54AM +0800, Ubisectech Sirius wrote:
> > On Mon, Apr 01, 2024 at 03:04:46PM +0800, Ubisectech Sirius wrote:
> > Hello.
> > We are Ubisectech Sirius Team, the vulnerability lab of China ValiantSec.
> > Recently, our team has discovered an issue in Linux kernel 6.7. Atta
On Thu, Jan 4, 2024 at 1:44 PM Jakub Kicinski wrote:
>
[...]
>
> You seem to be trying hard to make struct netmem a thing.
> Perhaps you have a reason I'm not getting?
Mina already went with your suggestion and that is fine. To me, struct
netmem is more aesthetically aligned with the existing str
On Wed, Dec 20, 2023 at 01:45:02PM -0800, Mina Almasry wrote:
> diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> index 65d1f6755f98..3180a54b2c68 100644
> --- a/net/kcm/kcmsock.c
> +++ b/net/kcm/kcmsock.c
> @@ -636,9 +636,15 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
> for
On Wed, Dec 20, 2023 at 01:45:01PM -0800, Mina Almasry wrote:
> Add the netmem_ref type, an abstraction for network memory.
>
> To add support for new memory types to the net stack, we must first
> abstract the current memory type. Currently parts of the net stack
> use struct page directly:
>
>
age.
>
> Signed-off-by: Mina Almasry
Reviewed-by: Shakeel Butt
On Mon, Apr 19, 2021 at 11:46 PM Michal Hocko wrote:
>
> On Mon 19-04-21 18:44:02, Shakeel Butt wrote:
[...]
> > memory.min. However a new allocation from userspace oom-killer can
> > still get stuck in the reclaim and policy rich oom-killer do trigger
> > new allocations
Proposal: Provide memory guarantees to userspace oom-killer.
Background:
Issues with kernel oom-killer:
1. Very conservative and prefers to reclaim. Applications can suffer
for a long time.
2. Borrows the context of the allocator which can be resource limited
(low sched priority or limited CPU quo
On Mon, Apr 19, 2021 at 8:43 AM Ilias Apalodimas
wrote:
>
[...]
> > Pages mapped into the userspace have their refcnt elevated, so the
> > page_ref_count() check by the drivers indicates to not reuse such
> > pages.
> >
>
> When tcp_zerocopy_receive() is invoked it will call
> tcp_zerocopy_vm_ins
ff-by: Waiman Long
Reviewed-by: Shakeel Butt
On Sun, Apr 18, 2021 at 11:07 PM Muchun Song wrote:
>
> On Mon, Apr 19, 2021 at 8:01 AM Waiman Long wrote:
> >
> > There are two issues with the current refill_obj_stock() code. First of
> > all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> > atomically flush out remainin
On Sun, Apr 18, 2021 at 10:12 PM Ilias Apalodimas
wrote:
>
> On Wed, Apr 14, 2021 at 01:09:47PM -0700, Shakeel Butt wrote:
> > On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
> > wrote:
> > >
> > [...]
> > > > >
> > > > &
On Thu, Apr 15, 2021 at 10:16 PM Muchun Song wrote:
>
> lruvec_holds_page_lru_lock() doesn't check anything about locking and is
> used to check whether the page belongs to the lruvec. So rename it to
> page_matches_lruvec().
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
wrote:
>
[...]
> > >
> > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType
> > > can not be used.
> >
> > Yes it can, since it's going to be used as your default allocator for
> > payloads, which might end up on an SKB.
>
On Wed, Apr 14, 2021 at 6:52 AM Rik van Riel wrote:
>
> On Wed, 2021-04-14 at 16:27 +0800, Huang, Ying wrote:
> > Yu Zhao writes:
> >
> > > On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying
> > > wrote:
> > > >
> > > NUMA Optimization
> > > -
> > > Support NUMA policies and per-node R
On Mon, Apr 12, 2021 at 11:58 PM Muchun Song wrote:
>
> The css_set_lock is used to guard the list of inherited objcgs. So there
> is no need to uncharge kernel memory under css_set_lock. Just move it
> out of the lock.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
is also
> impossible for the two to run in parallel. So xchg() is unnecessary
> and it is enough to use WRITE_ONCE().
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
roup_disabled() and
> CONFIG_MEMCG.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
cal variable of @pgdat. So mem_cgroup_page_lruvec() do not
> need the pgdat parameter. Just remove it to simplify the code.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Mon, Apr 12, 2021 at 4:10 PM Shakeel Butt wrote:
>
> On Mon, Apr 12, 2021 at 3:55 PM Waiman Long wrote:
> >
> > Most kmem_cache_alloc() calls are from user context. With instrumentation
> > enabled, the measured amount of kmem_cache_alloc() calls from non-task
> &
On Mon, Apr 12, 2021 at 3:55 PM Waiman Long wrote:
>
> Most kmem_cache_alloc() calls are from user context. With instrumentation
> enabled, the measured amount of kmem_cache_alloc() calls from non-task
> context was about 0.01% of the total.
>
> The irq disable/enable sequence used in this case to
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
en __mod_obj_stock_state() is called leads to an actual call to
> mod_objcg_state() after initial boot. When doing parallel kernel build,
> the figure was about 16% (21894614 out of 139780628). So caching the
> vmstat data reduces the number of calls to mod_objcg_state() by more
> than 80%.
>
> Signed-off-by: Waiman Long
Reviewed-by: Shakeel Butt
allow either of the two parameters to be set to null. This
> makes mod_memcg_lruvec_state() equivalent to mod_memcg_state() if lruvec
> is null.
>
> Signed-off-by: Waiman Long
Similar to Roman's suggestion: rather than describing what this patch is
doing, the 'why' would be better in the changelog.
Reviewed-by: Shakeel Butt
On Fri, Apr 9, 2021 at 4:26 PM Tim Chen wrote:
>
>
> On 4/8/21 4:52 AM, Michal Hocko wrote:
>
> >> The top tier memory used is reported in
> >>
> >> memory.toptier_usage_in_bytes
> >>
> >> The amount of top tier memory usable by each cgroup without
> >> triggering page reclaim is controlled by the
t. Introduce a new function obj_cgroup_uncharge_mod_state()
> that combines them with a single irq_save/irq_restore cycle.
>
> Signed-off-by: Waiman Long
Reviewed-by: Shakeel Butt
On Thu, Apr 8, 2021 at 4:52 AM Michal Hocko wrote:
>
[...]
>
> What I am trying to say (and I have brought that up when demotion has been
> discussed at LSFMM) is that the implementation shouldn't be PMEM aware.
> The specific technology shouldn't be imprinted into the interface.
> Fundamentally y
On Thu, Apr 8, 2021 at 1:50 PM Yang Shi wrote:
>
[...]
> >
> > The low and min limits have semantics similar to the v1's soft limit
> > for this situation i.e. letting the low priority job occupy top tier
> > memory and depending on reclaim to take back the excess top tier
> > memory use of such
> >
> > Since this patch is somewhat independent of the rest of the series,
> > you may want to put it in the very beginning, or even submit it
> > separately, to keep the main series as compact as possible. Reviewers
> > can be more hesitant to get involved with larger series ;)
>
> OK. I will gather all the cleanup patches into a separate series.
> Thanks for your suggestion.
That would be best.
For this patch:
Reviewed-by: Shakeel Butt
d-by: Johannes Weiner
Reviewed-by: Shakeel Butt
; there is a WARN_ON_ONCE in the page_counter_cancel(). Who knows if it
> will trigger? So it is better to fix it.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Sat, Apr 10, 2021 at 9:16 AM Ilias Apalodimas
wrote:
>
> Hi Matthew
>
> On Sat, Apr 10, 2021 at 04:48:24PM +0100, Matthew Wilcox wrote:
> > On Sat, Apr 10, 2021 at 12:37:58AM +0200, Matteo Croce wrote:
> > > This is needed by the page_pool to avoid recycling a page not allocated
> > > via page_
On Wed, Apr 7, 2021 at 2:47 PM Daniel Xu wrote:
>
> There currently does not exist a way to answer the question: "What is in
> the page cache?". There are various heuristics and counters but nothing
> that can tell you anything like:
>
> * 3M from /home/dxu/foo.txt
> * 5K from ...
> * etc.
>
On Thu, Apr 8, 2021 at 11:01 AM Yang Shi wrote:
>
> On Thu, Apr 8, 2021 at 10:19 AM Shakeel Butt wrote:
> >
> > Hi Tim,
> >
> > On Mon, Apr 5, 2021 at 11:08 AM Tim Chen wrote:
> > >
> > > Traditionally, all memory is DRAM. Some DRAM might be
Hi Tim,
On Mon, Apr 5, 2021 at 11:08 AM Tim Chen wrote:
>
> Traditionally, all memory is DRAM. Some DRAM might be closer/faster than
> others NUMA wise, but a byte of media has about the same cost whether it
> is close or far. But, with new memory tiers such as Persistent Memory
> (PMEM). ther
keep limping along.
>
> [ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Wed, Apr 7, 2021 at 4:55 AM Michal Hocko wrote:
>
> On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > Hi,
> >
> > When running 1 (more-or-less-empty-)containers on a bare-metal Power9
> > server(160 CPUs, 2 NUMA nodes, 256G memory), it is seen that memory
> > consumption increases quite a
On Fri, Apr 2, 2021 at 6:04 PM Andrew Morton wrote:
>
> On Wed, 31 Mar 2021 20:35:02 -0700 Roman Gushchin wrote:
>
> > On Thu, Apr 01, 2021 at 11:31:16AM +0800, Miaohe Lin wrote:
> > > On 2021/4/1 11:01, Muchun Song wrote:
> > > > Christian Borntraeger reported a warning about "percpu ref
> > > >
CC: Hugh Dickins
On Wed, Mar 31, 2021 at 9:37 PM Alistair Popple wrote:
>
> On Wednesday, 31 March 2021 10:57:46 PM AEDT Jason Gunthorpe wrote:
> > On Wed, Mar 31, 2021 at 03:15:47PM +1100, Alistair Popple wrote:
> > > On Wednesday, 31 March 2021 2:56:38 PM AEDT John Hubbard wrote:
> > > > On 3/3
On Thu, Apr 1, 2021 at 9:08 AM Muchun Song wrote:
>
[...]
> > The zombie issue is a pretty urgent concern that has caused several
> > production emergencies now. It needs a fix sooner rather than later.
>
> Thank you very much for clarifying the problem for me. I do agree
> with you. This issue sh
ce of memcg.
>
> Reported-by: Christian Borntraeger
> Signed-off-by: Muchun Song
Looks good to me.
Reviewed-by: Shakeel Butt
On Tue, Mar 30, 2021 at 4:44 PM Hugh Dickins wrote:
>
> Lockdep warns mm/vmscan.c: suspicious rcu_dereference_protected() usage!
> when free_shrinker_info() is called from mem_cgroup_css_free(): there it
> is called with no locking, whereas alloc_shrinker_info() calls it with
> down_write of shrin
On Tue, Mar 30, 2021 at 2:10 PM Johannes Weiner wrote:
>
[...]
> > The main concern I have with *just* reparenting LRU pages is that for
> > the long running systems, the root memcg will become a dumping ground.
> > In addition a job running multiple times on a machine will see
> > inconsistent me
On Tue, Mar 30, 2021 at 3:20 AM Muchun Song wrote:
>
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
>
> [v17,00/19] The new cgroup slab memory controller
> [v5,0/7] Use obj_cgroup APIs to charge kmem pages
>
> But user
On Mon, Mar 29, 2021 at 9:13 AM Muchun Song wrote:
>
> On Mon, Mar 29, 2021 at 10:49 PM Dan Schatzberg
> wrote:
[...]
>
> Since the remote memcg must hold a reference, we do not
> need to do something like get_active_memcg() does.
> Just use css_get() to obtain a ref; it is simpler. Just
> like below.
On Wed, Mar 24, 2021 at 1:39 PM Arjun Roy wrote:
>
> On Wed, Mar 24, 2021 at 2:12 AM Michal Hocko wrote:
> >
> > On Tue 23-03-21 11:47:54, Arjun Roy wrote:
> > > On Tue, Mar 23, 2021 at 7:34 AM Michal Hocko wrote:
> > > >
> > > > On Wed 17-03-21 18:12:55, Johannes Weiner wrote:
> > > > [...]
> >
On Tue, Mar 23, 2021 at 11:42 AM Arjun Roy wrote:
>
[...]
>
> To summarize then, it seems to me that we're on the same page now.
> I'll put together a tentative v3 such that:
> 1. It uses pre-charging, as previously discussed.
> 2. It uses a page flag to delineate pages of a certain networking sor
The following commit has been merged into the sched/core branch of tip:
Commit-ID: df77430639c9cf73559bac0f25084518bf9a812d
Gitweb:
https://git.kernel.org/tip/df77430639c9cf73559bac0f25084518bf9a812d
Author: Shakeel Butt
AuthorDate: Sun, 21 Mar 2021 13:51:56 -07:00
sched/pipe benchmark...
# Executed 100 pipe operations between two processes
Total time: 3.329 [sec]
3.329820 usecs/op
300316 ops/sec
Signed-off-by: Shakeel Butt
---
kernel/sched/psi.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a
) and
> call obj_cgroup_uncharge_pages() in obj_cgroup_release().
>
> This is just code cleanup without any functionality changes.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
() to get the object
> cgroup associated with a kmem page. Or we can use page_memcg()
> to get the memory cgroup associated with a kmem page, but caller must
> ensure that the returned memcg won't be released (e.g. acquire the
> rcu_read_lock or css_set_lock).
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
4f25a74 ("mm: memcg/slab: optimize objcg stock draining")
> Signed-off-by: Muchun Song
Good catch.
Reviewed-by: Shakeel Butt
On Fri, Mar 19, 2021 at 10:36 AM Johannes Weiner wrote:
>
> On Fri, Mar 19, 2021 at 06:49:55AM -0700, Shakeel Butt wrote:
> > On Thu, Mar 18, 2021 at 10:49 PM Johannes Weiner wrote:
> > >
> > > The swapaccounting= commandline option already does very little
&
On Fri, Mar 19, 2021 at 8:51 AM Dan Schatzberg wrote:
>
> On Thu, Mar 18, 2021 at 05:56:28PM -0700, Shakeel Butt wrote:
> >
> > We need something similar for mem_cgroup_swapin_charge_page() as well.
> >
> > It is better to take this series in mm tree and Jens i
72de ("mm: memcontrol: make swap tracking an integral part of
> memory control")
> Reported-by: Hugh Dickins
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Thu, Mar 18, 2021 at 9:05 PM Muchun Song wrote:
>
> On Fri, Mar 19, 2021 at 11:40 AM Shakeel Butt wrote:
> >
> > On Thu, Mar 18, 2021 at 4:08 AM Muchun Song
> > wrote:
> > >
> > [...]
> > >
> > > +static inline struct mem_cgr
y
>
> On the other hand, there have been several bugs and confusion around
> the many possible swap controller states (cgroup1 vs cgroup2 behavior,
> memory accounting without swap accounting, memcg runtime disabled).
>
> This puts the maintenance overhead of retaining the toggle
On Thu, Mar 18, 2021 at 4:08 AM Muchun Song wrote:
>
[...]
>
> +static inline struct mem_cgroup *get_obj_cgroup_memcg(struct obj_cgroup
> *objcg)
I would prefer get_mem_cgroup_from_objcg().
> +{
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> +retry:
> + memcg = obj_c
On Thu, Mar 18, 2021 at 4:08 AM Muchun Song wrote:
>
> Just like assignment to ug->memcg, we only need to update ug->dummy_page
> if memcg changed. So move it to there. This is a very small optimization.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
nything left in there by the time the page is freed, what
> we care about is whether the value of page->memcg_data is 0. So just
> directly access page->memcg_data here.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Thu, Mar 18, 2021 at 4:46 PM Andrew Morton wrote:
>
> On Thu, 18 Mar 2021 10:00:17 -0600 Jens Axboe wrote:
>
> > On 3/18/21 9:53 AM, Shakeel Butt wrote:
> > > On Wed, Mar 17, 2021 at 3:30 PM Jens Axboe wrote:
> > >>
> > >> On 3/16/21 9:36 AM
On Wed, Mar 17, 2021 at 11:17 PM Stephen Rothwell wrote:
>
> Hi all,
>
> Today's linux-next merge of the akpm-current tree got a conflict in:
>
> mm/memcontrol.c
>
> between commit:
>
> 06d69d4c8669 ("mm: Charge active memcg when no mm is set")
>
> from the block tree and commit:
>
> 6747882
On Wed, Mar 17, 2021 at 3:30 PM Jens Axboe wrote:
>
> On 3/16/21 9:36 AM, Dan Schatzberg wrote:
> > No major changes, just rebasing and resubmitting
>
> Applied for 5.13, thanks.
>
I have requested a couple of changes in the patch series. Can this
applied series still be changed or new patches ar
The function swap_readpage() (and other functions it calls) extracts the swap
entry from page->private. However, for SWP_SYNCHRONOUS_IO, the kernel
skips the swapcache and thus we need to manually set the page->private
with the swap entry before calling swap_readpage().
Signed-off-by: Shakee
On Tue, Mar 16, 2021 at 8:37 AM Dan Schatzberg wrote:
>
[...]
>
> /* Support for loadable transfer modules */
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 0c04d39a7967..fd5dd961d01f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@
rge (case 3) it
mem_cgroup_charge()
> would always charge the root cgroup. Now it looks up the current
> active_memcg first (falling back to charging the root cgroup if not
> set).
>
> Signed-off-by: Dan Schatzberg
> Acked-by: Johannes Weiner
> Acked-by: Tejun Heo
&g
On Mon, Mar 15, 2021 at 9:20 PM Arjun Roy wrote:
>
[...]
> >
>
> Apologies for the spam - looks like I forgot to rebase the first time
> I sent this out.
>
> Actually, on a related note, it's not 100% clear to me whether this
> patch (which in its current form, applies to net-next) should instead
eep. Switch to the atomic variant, cgroup_rstat_irqsafe().
>
> To be consistent with other memcg flush calls, but without adding
> another memcg wrapper, inline and drop memcg_flush_vmstats() instead.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Thu, Mar 11, 2021 at 12:52 AM Huang, Ying wrote:
>
> Hi, Butt,
>
> Shakeel Butt writes:
>
> > On Wed, Mar 10, 2021 at 4:47 PM Huang, Ying wrote:
> >>
> >> From: Huang Ying
> >>
> >> In shrink_node(), to determine whether t
On Fri, Mar 12, 2021 at 3:07 PM Johannes Weiner wrote:
>
> On Fri, Mar 12, 2021 at 02:42:45PM -0800, Shakeel Butt wrote:
> > Hi Johannes,
> >
> > On Fri, Mar 12, 2021 at 11:23 AM Johannes Weiner wrote:
> > >
> > [...]
> > >
> > > Longer te
Hi Johannes,
On Fri, Mar 12, 2021 at 11:23 AM Johannes Weiner wrote:
>
[...]
>
> Longer term we most likely need it there anyway. The issue you are
> describing in the cover letter - allocations pinning memcgs for a long
> time - it exists at a larger scale and is causing recurring problems
> in
d-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Tue, Mar 9, 2021 at 2:09 AM Muchun Song wrote:
>
> We want to reuse the obj_cgroup APIs to charge the kmem pages.
> If we do that, we should store an object cgroup pointer to
> page->memcg_data for the kmem pages.
>
> Finally, page->memcg_data can have 3 different meanings.
replace 'can' with
a object cgroup. This is just
> a code movement without any functional changes.
>
> Signed-off-by: Muchun Song
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Wed, Mar 10, 2021 at 4:47 PM Huang, Ying wrote:
>
> From: Huang Ying
>
> In shrink_node(), to determine whether to enable cache trim mode, the
> LRU size is gotten via lruvec_page_state(). That gets the value from
> a per-CPU counter (mem_cgroup_per_node->lruvec_stat[]). The error of
> the p
On Wed, Mar 10, 2021 at 1:41 PM Yang Shi wrote:
>
> On Wed, Mar 10, 2021 at 1:08 PM Shakeel Butt wrote:
> >
> > On Wed, Mar 10, 2021 at 10:54 AM Yang Shi wrote:
> > >
> > > On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt wrote:
> > > >
> &g
On Wed, Mar 10, 2021 at 10:54 AM Yang Shi wrote:
>
> On Wed, Mar 10, 2021 at 10:24 AM Shakeel Butt wrote:
> >
> > On Wed, Mar 10, 2021 at 9:46 AM Yang Shi wrote:
> > >
> > > The number of deferred objects might get wound up to an absurd number, and
> > &
On Wed, Mar 10, 2021 at 9:46 AM Yang Shi wrote:
>
> The number of deferred objects might get wound up to an absurd number, and it
> results in clamping of slab objects. It is undesirable for sustaining the
> workingset.
>
> So shrink deferred objects proportional to priority and cap nr_deferred to
> twi
ushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ONFIG_MEMCG or memcg
> is disabled
> by kernel command line, then shrinker's SHRINKER_MEMCG_AWARE flag would be
> cleared.
> This makes the implementation of this patch simpler.
>
> Acked-by: Vlastimil Babka
> Reviewed-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Mon, Mar 8, 2021 at 12:22 PM Yang Shi wrote:
>
> On Mon, Mar 8, 2021 at 8:49 AM Roman Gushchin wrote:
> >
> > On Sun, Mar 07, 2021 at 10:13:04PM -0800, Shakeel Butt wrote:
> > > On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
> > > >
> > > &g
On Mon, Mar 8, 2021 at 12:30 PM Yang Shi wrote:
>
> On Mon, Mar 8, 2021 at 11:12 AM Shakeel Butt wrote:
> >
> > On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
> > >
> > > Currently the number of deferred objects are per shrinker, but some
> > > sla
oot parameter
>
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
>
> Currently the number of deferred objects is per shrinker, but some slabs,
> for example,
> the vfs inode/dentry caches, are per memcg; this results in poor isolation
> among memcgs.
>
> The deferred objects typically are generated by __GFP_NOFS
; Acked-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
;shrinker_info. And the later patch
> will add more dereference places.
>
> So extract the dereference into a helper to make the code more readable. No
> functional change.
>
> Acked-by: Roman Gushchin
> Acked-by: Kirill Tkhai
> Acked-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ding nr_deferred cleaner and readable and
> make
> review easier. Also remove the "memcg_" prefix.
>
> Acked-by: Vlastimil Babka
> Acked-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ax is also used by
> iterating the
> bit map.
>
> Acked-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Acked-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
: Kirill Tkhai
> Acked-by: Roman Gushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
>
> Using kvfree_rcu() to free the old shrinker_maps instead of call_rcu().
> We don't have to define a dedicated callback for call_rcu() anymore.
>
> Signed-off-by: Yang Shi
> ---
> mm/vmscan.c | 7 +--
> 1 file changed, 1 insertion(+), 6 dele
On Fri, Mar 5, 2021 at 1:26 PM Shakeel Butt wrote:
>
> Currently the kernel adds the page, allocated for swapin, to the
> swapcache before charging the page. This is fine but now we want a
> per-memcg swapcache stat which is essential for folks who wants to
> transparently migrate