On 9/8/20 5:31 PM, Marco Elver wrote:
>>
>> How much memory overhead does this end up having? I know it depends on
>> the object size and so forth. But, could you give some real-world
>> examples of memory consumption? Also, what's the worst case? Say I
>> have a ton of worst-case-sized (32b)
On 9/8/20 5:09 PM, Chris Down wrote:
> drop_caches by its very nature can be extremely performance intensive -- if
> someone wants to abort after trying too long, they can just send a
> TASK_KILLABLE signal, no? If exiting the loop and returning to usermode
> doesn't reliably work when doing
ns_huge().
>
> Signed-off-by: Wei Yang
Other than that, seems like it leads to less shifting, so
Acked-by: Vlastimil Babka
> ---
> mm/huge_memory.c | 4 ++--
> mm/mmap.c | 8 ++++----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/huge_memo
On 8/27/20 2:06 PM, Jim Baxter wrote:
> Has anyone any ideas of how to investigate this delay further?
>
> Comparing the perf output for unplugging the USB stick and using umount
> which does not cause these delays in other workqueues the main difference
I don't have that much insight into this,
On 9/7/20 3:40 PM, Marco Elver wrote:
> This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> low-overhead sampling-based memory safety error detector of heap
> use-after-free, invalid-free, and out-of-bounds access errors. This
> series enables KFENCE for the x86 and arm64
() and __zone_pcp_update() wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 40 +---
1 file changed, 17 insertions(+), 23 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b516208afda..f669a251f654 100644
--- a/mm
-by: Vlastimil Babka
---
mm/page_alloc.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f669a251f654..a0cab2c6055e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5902,7 +5902,7 @@ build_all_zonelists_init(void
wrappers was:
build_all_zonelists_init()
setup_pageset()
pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 51
...@soleen.com/
Vlastimil Babka (5):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: calculate pageset high and batch once per zone
mm, page_alloc: remove setup_pageset()
mm, page_alloc: cache pageset high and batch in struct zone
mm, page_alloc: disable pcplists
-by: Vlastimil Babka
---
include/linux/mmzone.h | 2 ++
mm/page_alloc.c | 18 +++++++++++++-----
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8379432f4f2f..15582ca368b9 100644
--- a/include/linux/mmzone.h
+++ b
ing some
cpu's to drain. If others agree, this can be separated and potentially
backported.
[1]
https://lore.kernel.org/linux-mm/20200903140032.380431-1-pasha.tatas...@soleen.com/
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
---
include/linu
On 9/3/20 8:23 PM, Pavel Tatashin wrote:
>>
>> As expressed in reply to v2, I dislike this hack. There is strong
>> synchronization, just PCP is special. Allocating from MIGRATE_ISOLATE is
>> just plain ugly.
>>
>> Can't we temporarily disable PCP (while some pageblock in the zone is
>> isolated,
e
> list_add(&page->lru, &pcp->lists[migratetype]);
> // add new page to already drained pcp list
>
> Thread#2
> Never drains pcp again, and therefore gets stuck in the loop.
>
> The fix is to try to drain per-cpu lists again after
> check_pages_isolated_cb() fails.
>
> Signed-off-by: Pavel Tatashin
> Cc: sta...@vger.kernel.org
Fixes: ?
Acked-by: Vlastimil Babka
Thanks.
On 9/3/20 10:40 AM, Alex Shi wrote:
>
>
> On 2020/9/3 4:32 PM, Alex Shi wrote:
>>>
>> I have run thpscale with the 'always' defrag setting of THP. The Amean
>> stddev is much larger than the very small reduction in average run time.
>>
>> But the left patch 4 could show the cmpxchg retry reduce from
On 9/2/20 7:25 PM, Mike Kravetz wrote:
> On 9/2/20 3:49 AM, Vlastimil Babka wrote:
>> On 9/1/20 3:46 AM, Wei Yang wrote:
>>> The page allocated from buddy is not on any list, so just use list_add()
>>> is enough.
>>>
>>> Signed-off-by: Wei Yang
>&
On 9/2/20 5:13 PM, Michal Hocko wrote:
> On Wed 02-09-20 16:55:05, Vlastimil Babka wrote:
>> On 9/2/20 4:26 PM, Pavel Tatashin wrote:
>> > On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote:
>> >>
>> >> >
>> >> >
On 9/2/20 4:26 PM, Pavel Tatashin wrote:
> On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote:
>>
>> >
>> > Thread#1 - continue
>> > free_unref_page_commit
>> >migratetype = get_pcppage_migratetype(page);
>> > // get old migration type
>> >
On 9/2/20 4:31 PM, Pavel Tatashin wrote:
>> > > The fix is to try to drain per-cpu lists again after
>> > > check_pages_isolated_cb() fails.
>>
>> Still trying to wrap my head around this but I think this is not a
>> proper fix. It should be the page isolation to make sure no races are
>> possible
On 9/1/20 3:46 AM, Wei Yang wrote:
> The page allocated from buddy is not on any list, so just use list_add()
> is enough.
>
> Signed-off-by: Wei Yang
> Reviewed-by: Baoquan He
> Reviewed-by: Mike Kravetz
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff
On 8/28/20 6:47 PM, Pavel Tatashin wrote:
> There appears to be another problem that is related to the
> cgroup_mutex -> mem_hotplug_lock deadlock described above.
>
> In the original deadlock that I described, the workaround is to
> replace crash dump from piping to Linux traditional save to
On 9/1/20 4:50 AM, Alex Shi wrote:
> pageblock_flags is used as a long; since every pageblock's flags are just 4
> bits, a 'long' holds 8 (32-bit machine) or 16 pageblocks' flags, so
> setting one flag has to sync in cmpxchg with 7 or 15 other pageblocks'
> flags. It would cause long waiting for
On 8/19/20 10:09 AM, Alex Shi wrote:
>
>
> On 2020/8/19 3:57 PM, Anshuman Khandual wrote:
>>
>>
>> On 08/19/2020 11:17 AM, Alex Shi wrote:
>>> Current pageblock_flags is only 4 bits, so it has to share a char size
>>> in cmpxchg when it gets set; the false sharing causes a perf drop.
>>>
>>> If we increase
On 8/26/20 7:12 AM, Joonsoo Kim wrote:
> On Tue, Aug 25, 2020 at 6:43 PM, Vlastimil Babka wrote:
>>
>>
>> On 8/25/20 6:59 AM, js1...@gmail.com wrote:
>> > From: Joonsoo Kim
>> >
>> > memalloc_nocma_{save/restore} APIs can be used to skip page allocatio
On 8/25/20 6:59 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> memalloc_nocma_{save/restore} APIs can be used to skip page allocation
> on CMA area, but, there is a missing case and the page on CMA area could
> be allocated even if APIs are used. This patch handles this case to fix
> the
On 7/30/20 11:34 AM, David Hildenbrand wrote:
> Let's clean it up a bit, simplifying error handling and getting rid of
> the label.
Nit: the label was already removed by patch 1/6?
> Reviewed-by: Baoquan He
> Reviewed-by: Pankaj Gupta
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Michael S.
een seen, so it's
> a good trade-off.
>
> Reported-by: Qian Cai
> Suggested-by: Matthew Wilcox
> Cc: Vlastimil Babka
> Cc: Kirill A. Shutemov
> Signed-off-by: John Hubbard
Acked-by: Vlastimil Babka
> ---
> Hi,
>
> I'm assuming that a fix is not required for -st
On 8/6/20 3:48 PM, Matthew Wilcox wrote:
> On Thu, Aug 06, 2020 at 01:45:11PM +0200, Vlastimil Babka wrote:
>> How about this additional patch now that we have head_mapcount()? (I wouldn't
>> go for squashing as the goal and scope is too different).
>
> I like it. It bothers
On 8/6/20 5:39 PM, Matthew Wilcox wrote:
>> >> +++ b/mm/huge_memory.c
>> >> @@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct
>> >> vm_area_struct *vma, pmd_t *pmd,
>> >>* Set PG_double_map before dropping compound_mapcount to avoid
>> >>* false-negative page_mapped().
>>
On 7/2/20 10:32 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() spends a long time iterating
> in case of a large number of partial page lists, which can cause a
> thundering herd effect on the list_lock contention, e.g. it causes
> business response-time jitters when accessing
On 8/4/20 7:12 PM, Matthew Wilcox wrote:
> On Tue, Aug 04, 2020 at 07:02:14PM +0200, Vlastimil Babka wrote:
>> > 2) There was a proposal from Matthew Wilcox:
>> > https://lkml.org/lkml/2020/7/31/1015
>> >
>> >
>> > On non-RT, we could make that lo
No idea how much it helps in practice wrt security, but implementation-wise it
seems fine, so:
Acked-by: Vlastimil Babka
Maybe you don't want to warn just once, though? We had a similar discussion on
cache_from_obj().
> ---
> mm/slab.c | 14 --
> 1 file changed, 12 insertions(+),
rability.pdf
>
> Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist
> hardening")
> Signed-off-by: Kees Cook
Acked-by: Vlastimil Babka
> ---
> init/Kconfig | 9 +
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git
On 8/3/20 6:30 PM, Uladzislau Rezki (Sony) wrote:
> Some background and kfree_rcu()
> ===
> The pointers to be freed are stored in the per-cpu array to improve
> performance, to enable an easier-to-use API, to accommodate vmalloc
> memory and to support a single
timal but it doesn't cause any problem.
>
> Suggested-by: Michal Hocko
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> include/linux/hugetlb.h | 2 ++
> mm/gup.c | 17 ++++++++---------
> 2 files changed, 10 insertions(+), 9 deletions(
On 8/4/20 4:35 AM, Cho KyongHo wrote:
> On Mon, Aug 03, 2020 at 05:45:55PM +0200, Vlastimil Babka wrote:
>> On 8/3/20 9:57 AM, David Hildenbrand wrote:
>> > On 03.08.20 08:10, pullip@samsung.com wrote:
>> >> From: Cho KyongHo
>> >>
>>
On 8/3/20 9:57 AM, David Hildenbrand wrote:
> On 03.08.20 08:10, pullip@samsung.com wrote:
>> From: Cho KyongHo
>>
>> LPDDR5 introduces rank switch delay. If three successive DRAM accesses
>> happen, and the first and second ones access one rank and the last
>> access happens on the
system is not using benefits offered by the pcp lists when there is a
> single onlineable memory block in a zone. Correct this by always
> updating the pcp lists when memory block is onlined.
>
> Signed-off-by: Charan Teja Reddy
Makes sense to me.
Acked-by: Vlastimil Babka
> ---
On 7/21/20 2:05 PM, Matthew Wilcox wrote:
> On Tue, Jul 21, 2020 at 12:28:49PM +0900, js1...@gmail.com wrote:
>> +static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
>> +unsigned int alloc_flags)
>> +{
>> +#ifdef CONFIG_CMA
>> +unsigned int pflags
or exactly this purpose.
> Fixes: d7fefcc8de91 (mm/cma: add PF flag to force non cma alloc)
> Cc:
> Signed-off-by: Joonsoo Kim
Reviewed-by: Vlastimil Babka
Thanks!
On 7/17/20 10:10 AM, Vlastimil Babka wrote:
> On 7/17/20 9:29 AM, Joonsoo Kim wrote:
>> On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote:
>>>
>>> On 7/16/20 9:27 AM, Joonsoo Kim wrote:
>>> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
>>> &
On 7/17/20 9:29 AM, Joonsoo Kim wrote:
> On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote:
>>
>> On 7/16/20 9:27 AM, Joonsoo Kim wrote:
>> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
>> >> > /*
>> >> > * get_page_from
On 7/16/20 6:51 PM, Muchun Song wrote:
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
> It resulted in memory leak when memcg was destroyed. We
On 7/16/20 9:27 AM, Joonsoo Kim wrote:
> On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
>> > /*
>> > * get_page_from_freelist goes through the zonelist trying to allocate
>> > * a page.
>> > @@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t
On 7/15/20 5:13 PM, Muchun Song wrote:
> On Wed, Jul 15, 2020 at 7:32 PM Vlastimil Babka wrote:
>>
>> On 7/7/20 8:27 AM, Muchun Song wrote:
>> > If the kmem_cache refcount is greater than one, we should not
>> > mark the root kmem_cache as dying. If we ma
On 7/7/20 8:27 AM, Muchun Song wrote:
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
> It resulted in memory leak when memcg was destroyed. We
ngs.
>
> Signed-off-by: Roman Gushchin
> Cc: Hugh Dickins
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
cannot be utilized.
>
> This patch tries to fix this situation by making the dequeue function on
> hugetlb CMA aware. In the dequeue function, CMA memory is skipped if
> PF_MEMALLOC_NOCMA flag is found.
>
> Acked-by: Mike Kravetz
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
On 7/15/20 7:05 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> Currently, preventing cma area in page allocation is implemented by using
> current_gfp_context(). However, there are two problems of this
> implementation.
>
> First, this doesn't work for allocation fastpath. In the fastpath,
On 7/14/20 11:57 AM, Wei Yang wrote:
> On Tue, Jul 14, 2020 at 11:22:03AM +0200, Vlastimil Babka wrote:
>>On 7/14/20 11:13 AM, Vlastimil Babka wrote:
>>> On 7/14/20 9:34 AM, Wei Yang wrote:
>>>> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
&
On 7/13/20 3:57 AM, Robbie Ko wrote:
>
> Vlastimil Babka 於 2020/7/10 下午11:31 寫道:
>> On 7/9/20 4:48 AM, robbieko wrote:
>>> From: Robbie Ko
>>>
>>> When a migrate page occurs, we first create a migration entry
>>> to replace the original pte, and
On 7/13/20 6:43 PM, Alexander A. Klimov wrote:
> Rationale:
> Reduces attack surface on kernel devs opening the links for MITM
> as HTTPS traffic is much harder to manipulate.
>
> Deterministic algorithm:
> For each file:
> If not .svg:
> For each line:
> If doesn't contain
On 7/14/20 11:13 AM, Vlastimil Babka wrote:
> On 7/14/20 9:34 AM, Wei Yang wrote:
>> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
>> variant, which is not used outside of loop iteration.
>>
>> Let's hide this.
>>
>> Signed-off-by:
On 7/14/20 9:34 AM, Wei Yang wrote:
> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
> variant, which is not used outside of loop iteration.
>
> Let's hide this.
>
> Signed-off-by: Wei Yang
> ---
> mm/hugetlb.c | 38 --
> 1 file
On 7/13/20 8:41 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
Nit: s/make/introduce/ in the subject; it's a more common verb in this context.
n seen during
> large mmaps initialization. There is no indication that this is a
> problem for migration as well but theoretically the same might happen
> when migrating large mappings to a different node. Make the migration
> callback consistent with regular THP allocations.
>
> Signed-of
On 7/9/20 4:48 AM, robbieko wrote:
> From: Robbie Ko
>
> When a migrate page occurs, we first create a migration entry
> to replace the original pte, and then go to fallback_migrate_page
> to execute a writeout if the migratepage is not supported.
>
> In the writeout, we will clear the dirty
ngrading mmap_lock in __do_munmap() if detached
> VMAs are next to VM_GROWSDOWN or VM_GROWSUP VMA.
>
> Signed-off-by: Kirill A. Shutemov
> Reported-by: Jann Horn
> Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
> Cc: # 4.20
> Cc: Yang Shi
> C
;
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> mm/slab.c | 4 ++--
> mm/slab.h | 8
> mm/slub.c | 4 ++--
> 3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index fafd46877504..300adfb67245
xceeds the
> cost of a jump. However, the conversion makes the code look more
> logical.
>
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> include/linux/memcontrol.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/l
On 7/7/20 7:36 PM, Roman Gushchin wrote:
> charge_slab_page() is not using the gfp argument anymore,
> remove it.
>
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> mm/slab.c | 2 +-
> mm/slab.h | 3 +--
> mm/slub.c | 2 +-
> 3 files changed, 3
On 7/8/20 9:41 AM, Michal Hocko wrote:
> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
>> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
>>
>> Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
>> would not cause any problem.
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined standard migration target callback. Use it
> directly.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> mm/memory-failure.c | 18 ++
> 1
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
Thanks! Nitpick below.
> @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned
> long end_pfn)
> put_page(page);
> }
> if (!list_empty(&source)) {
> - /* Allocate a new p
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There are some similar functions for migration target allocation. Since
> there is no fundamental difference, it's better to keep just one rather
> than keeping all variants. This patch implements base migration target
>
On 7/7/20 1:48 PM, Michal Hocko wrote:
> On Tue 07-07-20 16:44:48, Joonsoo Kim wrote:
>> From: Joonsoo Kim
>>
>> There is a well-defined standard migration target callback. Use it
>> directly.
>>
>> Signed-off-by: Joonsoo Kim
>> ---
>> mm/memory-failure.c | 18 ++
>> 1 file
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> In mm/migrate.c, THP allocation for migration is called with the provided
> gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM and it
> would be conflict with the intention of the GFP_TRANSHUGE.
>
>
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> new_non_cma_page() in gup.c, which tries to allocate a migration target
> page, needs to allocate a new page that is not on the CMA area.
> new_non_cma_page() implements it by removing __GFP_MOVABLE flag. This way
> works well
are changed
> to provide gfp_mask.
>
> Note that it's safe to remove a node id check in alloc_huge_page_node()
> since there is no caller passing NUMA_NO_NODE as a node id.
>
> Reviewed-by: Mike Kravetz
> Signed-off-by: Joonsoo Kim
Yeah, this version looks very good :)
Reviewed-by: Vlastimil Babka
Thanks!
On 6/23/20 8:13 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined standard migration target callback.
> Use it directly.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
But you could move this to patch 5/8 to reduce churn. And do the s
On 6/23/20 8:13 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined migration target allocation callback.
> Use it.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
I like that this removes the wrapper completely.
arget
> allocation callback and use it on gup.c.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
But a suggestion below.
> ---
> mm/gup.c | 57 -
> mm/internal.h | 1 +
> mm/migrate.c | 4 +++-
&
soo Kim
Provided that the "&= ~__GFP_RECLAIM" line is split into a separate patch as you discussed,
Acked-by: Vlastimil Babka
irreversible (always returning true
>> after returning it for the first time), it'll make the general logic
>> more simple and robust. It also will allow to guard some checks which
>> otherwise would stay unguarded.
>>
>> Signed-off-by: Roman Gushchin
Fixes: ? or let Andrew
On 6/26/20 6:02 AM, Joonsoo Kim wrote:
> On Thu, Jun 25, 2020 at 8:26 PM, Michal Hocko wrote:
>>
>> On Tue 23-06-20 15:13:43, Joonsoo Kim wrote:
>> > From: Joonsoo Kim
>> >
>> > There is no difference between two migration callback functions,
>> > alloc_huge_page_node() and alloc_huge_page_nodemask(),
t; Acked-by: Johannes Weiner
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
Thanks!
I still hope Matthew can review updated patch 4/6 (I'm not really familiar with
proper xarray handling), and Johannes patch 5/6.
And then we just need a nice Documentation file describing how
; the shadow entry.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 8395e60..3769ae6 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -353,8 +353,9 @@ void workingset_refault(struct page *page, void
ning at all, at least not in this patch.
> Acked-by: Johannes Weiner
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
ered
> as workingset. So, the file refault formula, which used the number of all
> anon pages, is changed to use only the number of active anon pages.
a "v6" note is more suitable for a diffstat area than commit log, but it's good
to mention this so drop the 'v6:'?
> Acked-by: Johannes W
list. Afterwards this patch is
effectively reverted.
> Acked-by: Johannes Weiner
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> mm/vmscan.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 749d239..9
"mm: workingset: age nonresident information alongside anonymous pages".
Agreed.
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> mm/swap.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 667133d
> in fault code.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> mm/memory.c | 8
> 1 file changed, 8 insertions(+)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index bc6a471..3359057 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
gt;
> Make anon aging drive nonresident age as well to address that.
Fixes: 34e58cac6d8f ("mm: workingset: let cache workingset challenge anon")
> Reported-by: Joonsoo Kim
> Signed-off-by: Johannes Weiner
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
>
On 6/18/20 12:10 PM, Vlastimil Babka wrote:
> 8<
> From b8df607d92b37e5329ce7bda62b2b364cc249893 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka
> Date: Thu, 18 Jun 2020 11:52:03 +0200
> Subject: [PATCH] mm, slab/slub: improve error reporting and overhead of
On 6/23/20 3:58 AM, Roman Gushchin wrote:
> This is v7 of the slab cgroup controller rework.
Hi,
Since you and Jesper did those measurements on v6 and are now sending v7, it
would be great to put a summary in the cover letter.
Thanks,
Vlastimil
> The patchset moves the accounting from the page
On 6/23/20 11:02 AM, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-6):
>
> commit: 7b39adbb1b1d3e73df9066a8d1e93a83c18d7730 ("mm, slab/slub: improve
> error reporting and overhead of cache_from_obj()")
>
On 6/17/20 7:49 PM, Kees Cook wrote:
> On Wed, Jun 10, 2020 at 06:31:35PM +0200, Vlastimil Babka wrote:
>> The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b:
>> always get the cache from its page in kmem_cache_free()") to support kmemcg,
>
On 6/10/20 6:31 PM, Vlastimil Babka wrote:
> There are few places that call kmem_cache_debug(s) (which tests if any of
> debug
> flags are enabled for a cache) immediately followed by a test for a specific
> flag. The compiler can probably eliminate the extra check, but we can mak
On 6/17/20 7:56 PM, Kees Cook wrote:
> On Wed, Jun 10, 2020 at 06:31:33PM +0200, Vlastimil Babka wrote:
>> There are few places that call kmem_cache_debug(s) (which tests if any of
>> debug
>> flags are enabled for a cache) immediately followed by a test for a specific
>
On 6/18/20 2:35 AM, Roman Gushchin wrote:
> On Wed, Jun 17, 2020 at 04:35:28PM -0700, Andrew Morton wrote:
>> On Mon, 8 Jun 2020 16:06:52 -0700 Roman Gushchin wrote:
>>
>> > Instead of having two sets of kmem_caches: one for system-wide and
>> > non-accounted allocations and the second one
On 6/17/20 5:32 AM, Roman Gushchin wrote:
> On Tue, Jun 16, 2020 at 08:05:39PM -0700, Shakeel Butt wrote:
>> On Tue, Jun 16, 2020 at 7:41 PM Roman Gushchin wrote:
>> >
>> > On Tue, Jun 16, 2020 at 06:46:56PM -0700, Shakeel Butt wrote:
>> > > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
On 6/17/20 9:15 AM, Yi Wang wrote:
> From: Liao Pingfang
>
> Use kmem_cache_zalloc instead of manually calling kmem_cache_alloc
> with flag GFP_ZERO.
>
> Signed-off-by: Liao Pingfang
> ---
> include/linux/slab.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git
On 6/16/20 10:29 PM, Hugh Dickins wrote:
> On Tue, 16 Jun 2020, Vlastimil Babka wrote:
>
>> Hugh noted that task_capc() could use unlikely(), as most of the time there
>> is
>> no capture in progress and we are in page freeing hot path. Indeed adding
>> unlikely
gt;-22275 [006] 889.213391: mm_page_alloc: page=f8a51d4f
> pfn=970260 order=0 migratetype=0 nr_free=3650
> gfp_flags=GFP_HIGHUSER|__GFP_ZERO
> <...>-22275 [006] 889.213393: mm_page_alloc: page=6ba8f5ac
> pfn=970261 order=0 migratetype=0 nr_free=
that we don't need to test for cc->direct_compaction as the
only place we set current->task_capture is compact_zone_order() which also
always sets cc->direct_compaction true.
Suggested-by: Hugh Dickins
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 5 ++---
1 file changed, 2 inserti
e leaking with WRITE_ONCE/READ_ONCE
in the proper order.
Fixes: 5e1f0f098b46 ("mm, compaction: capture a page under direct compaction")
Cc: sta...@vger.kernel.org # 5.1+
Reported-by: Hugh Dickins
Suggested-by: Hugh Dickins
Signed-off-by: Vlastimil Babka
---
mm/compaction.c | 17 +
On 6/16/20 9:38 AM, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-9):
>
> commit: f117955a2255721a6a0e9cecf6cad3a6eb43cbc3 ("kernel/watchdog.c: convert
> {soft/hard}lockup boot parameters to sysctl aliases")
>
On 6/15/20 11:03 PM, Hugh Dickins wrote:
> On Fri, 12 Jun 2020, Vlastimil Babka wrote:
>> > This could presumably be fixed by a barrier() before setting
>> > current->capture_control in compact_zone_order(); but would also need
>> > more care on return from com
On 6/14/20 2:39 PM, Muchun Song wrote:
> slabs_node() always returns zero when CONFIG_SLUB_DEBUG is disabled.
> But some code determines whether a slab is empty by checking the return
> value of slabs_node(). As you know, the result is not correct. This
> problem can be reproduced by the follow
On 6/14/20 8:38 AM, Muchun Song wrote:
> When a kmem_cache is initialized with SLAB_ACCOUNT slab flag, we must
> not call kmem_cache_alloc with __GFP_ACCOUNT GFP flag. In this case,
> we can be accounted to kmemcg twice. This is not correct. So we add a
Are you sure? How does that happen?
The
> > > managed:2673676kB mlocked:2444kB kernel_stack:62512kB
>> > > pagetables:105264kB bounce:0kB free_pcp:4140kB local_pcp:40kB
>> > > free_cma:712kB
>
> Checked this mem info, wondering why there's no 'reserved_highatomic'
> printing in all these examples.