Re: [PATCH RFC 00/10] KFENCE: A low-overhead sampling-based memory safety error detector

2020-09-08 Thread Vlastimil Babka
On 9/8/20 5:31 PM, Marco Elver wrote: >> >> How much memory overhead does this end up having? I know it depends on >> the object size and so forth. But, could you give some real-world >> examples of memory consumption? Also, what's the worst case? Say I >> have a ton of worst-case-sized (32b)
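For a rough sense of the numbers being asked about here: KFENCE keeps a fixed pool in which every sampled object occupies its own page, with guard pages interleaved, so the pool size is independent of object size. A minimal sketch of that arithmetic (the `(num_objects + 1) * 2`-page layout matches the posted series; the helper name is made up):

```c
#include <assert.h>

/* Sketch of KFENCE pool sizing: each of the num_objects slots occupies a
 * full page, with guard pages interleaved, giving (num_objects + 1) * 2
 * pages in total -- regardless of how small the objects themselves are. */
static unsigned long kfence_pool_bytes(unsigned long num_objects,
                                       unsigned long page_size)
{
        return (num_objects + 1) * 2 * page_size;
}
```

With the default of 255 objects and 4 KiB pages this works out to 2 MiB, which is also the answer for a pool full of worst-case 32-byte objects.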

Re: [PATCH] mm/vmscan: fix infinite loop in drop_slab_node

2020-09-08 Thread Vlastimil Babka
On 9/8/20 5:09 PM, Chris Down wrote: > drop_caches by its very nature can be extremely performance intensive -- if > someone wants to abort after trying too long, they can just send a > TASK_KILLABLE signal, no? If exiting the loop and returning to usermode > doesn't > reliably work when doing
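The suggestion above, letting a fatal signal abort the reclaim loop rather than capping iterations, can be sketched in userspace like this (`shrink_step()`, the threshold, and the flag are stand-ins, not the kernel's actual drop_slab_node() internals):

```c
#include <assert.h>
#include <stdbool.h>

static bool fatal_signal;  /* stand-in for fatal_signal_pending(current) */

/* Pretend shrinker: frees up to 100 objects per pass. */
static unsigned long shrink_step(unsigned long *remaining)
{
        unsigned long freed = *remaining > 100 ? 100 : *remaining;

        *remaining -= freed;
        return freed;
}

/* Keep reclaiming while real progress is made, but bail out once the
 * task has been killed, so drop_caches cannot loop forever. */
static unsigned long drop_slab_sketch(unsigned long objects)
{
        unsigned long freed;

        do {
                if (fatal_signal)
                        break;
                freed = shrink_step(&objects);
        } while (freed > 10);
        return objects;  /* objects left unreclaimed */
}
```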

Re: [PATCH] mm/mmap: leave adjust_next as virtual address instead of page frame number

2020-09-08 Thread Vlastimil Babka
ns_huge(). > > Signed-off-by: Wei Yang Other than that, seems like it leads to less shifting, so Acked-by: Vlastimil Babka > --- > mm/huge_memory.c | 4 ++-- > mm/mmap.c| 8 > 2 files changed, 6 insertions(+), 6 deletions(-) > > diff --git a/mm/huge_memo

Re: PROBLEM: Long Workqueue delays V2

2020-09-08 Thread Vlastimil Babka
On 8/27/20 2:06 PM, Jim Baxter wrote: > Has anyone any ideas of how to investigate this delay further? > > Comparing the perf output for unplugging the USB stick and using umount > which does not cause these delays in other workqueues the main difference I don't have that much insight into this,

Re: [PATCH RFC 00/10] KFENCE: A low-overhead sampling-based memory safety error detector

2020-09-08 Thread Vlastimil Babka
On 9/7/20 3:40 PM, Marco Elver wrote: > This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a > low-overhead sampling-based memory safety error detector of heap > use-after-free, invalid-free, and out-of-bounds access errors. This > series enables KFENCE for the x86 and arm64

[RFC 2/5] mm, page_alloc: calculate pageset high and batch once per zone

2020-09-07 Thread Vlastimil Babka
() and __zone_pcp_update() wrappers. No functional change. Signed-off-by: Vlastimil Babka --- mm/page_alloc.c | 40 +--- 1 file changed, 17 insertions(+), 23 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0b516208afda..f669a251f654 100644 --- a/mm

[RFC 3/5] mm, page_alloc: remove setup_pageset()

2020-09-07 Thread Vlastimil Babka
-by: Vlastimil Babka --- mm/page_alloc.c | 13 +++-- 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f669a251f654..a0cab2c6055e 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5902,7 +5902,7 @@ build_all_zonelists_init(void

[RFC 1/5] mm, page_alloc: clean up pageset high and batch update

2020-09-07 Thread Vlastimil Babka
wrappers was: build_all_zonelists_init() setup_pageset() pageset_set_batch() which was hardcoding batch as 0, so we can just open-code a call to pageset_update() with constant parameters instead. No functional change. Signed-off-by: Vlastimil Babka --- mm/page_alloc.c | 51

[RFC 0/5] disable pcplists during page isolation

2020-09-07 Thread Vlastimil Babka
...@soleen.com/ Vlastimil Babka (5): mm, page_alloc: clean up pageset high and batch update mm, page_alloc: calculate pageset high and batch once per zone mm, page_alloc: remove setup_pageset() mm, page_alloc: cache pageset high and batch in struct zone mm, page_alloc: disable pcplists

[RFC 4/5] mm, page_alloc: cache pageset high and batch in struct zone

2020-09-07 Thread Vlastimil Babka
-by: Vlastimil Babka --- include/linux/mmzone.h | 2 ++ mm/page_alloc.c| 18 +- 2 files changed, 15 insertions(+), 5 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 8379432f4f2f..15582ca368b9 100644 --- a/include/linux/mmzone.h +++ b

[RFC 5/5] mm, page_alloc: disable pcplists during page isolation

2020-09-07 Thread Vlastimil Babka
ing some cpu's to drain. If others agree, this can be separated and potentially backported. [1] https://lore.kernel.org/linux-mm/20200903140032.380431-1-pasha.tatas...@soleen.com/ Suggested-by: David Hildenbrand Suggested-by: Michal Hocko Signed-off-by: Vlastimil Babka --- include/linu

Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline

2020-09-04 Thread Vlastimil Babka
On 9/3/20 8:23 PM, Pavel Tatashin wrote: >> >> As expressed in reply to v2, I dislike this hack. There is strong >> synchronization, just PCP is special. Allocating from MIGRATE_ISOLATE is >> just plain ugly. >> >> Can't we temporarily disable PCP (while some pageblock in the zone is >> isolated,

Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline

2020-09-03 Thread Vlastimil Babka
e > list_add(&page->lru, &pcp->lists[migratetype]); > // add new page to already drained pcp list > > Thread#2 > Never drains pcp again, and therefore gets stuck in the loop. > > The fix is to try to drain per-cpu lists again after > check_pages_isolated_cb() fails. > > Signed-off-by: Pavel Tatashin > Cc: sta...@vger.kernel.org Fixes: ? Acked-by: Vlastimil Babka Thanks.

Re: [PATCH v4 1/4] mm/pageblock: mitigation cmpxchg false sharing in pageblock flags

2020-09-03 Thread Vlastimil Babka
On 9/3/20 10:40 AM, Alex Shi wrote: > > > On 2020/9/3 at 4:32 PM, Alex Shi wrote: >>> >> I have run thpscale with the 'always' defrag setting of THP. The Amean stddev is >> much >> larger than the very small reduction in average run time. >> >> But the remaining patch 4 could show the cmpxchg retry reduce from

Re: [Patch v4 5/7] mm/hugetlb: a page from buddy is not on any list

2020-09-02 Thread Vlastimil Babka
On 9/2/20 7:25 PM, Mike Kravetz wrote: > On 9/2/20 3:49 AM, Vlastimil Babka wrote: >> On 9/1/20 3:46 AM, Wei Yang wrote: >>> The page allocated from buddy is not on any list, so just use list_add() >>> is enough. >>> >>> Signed-off-by: Wei Yang >&

Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline

2020-09-02 Thread Vlastimil Babka
On 9/2/20 5:13 PM, Michal Hocko wrote: > On Wed 02-09-20 16:55:05, Vlastimil Babka wrote: >> On 9/2/20 4:26 PM, Pavel Tatashin wrote: >> > On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote: >> >> >> >> > >> >> >

Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline

2020-09-02 Thread Vlastimil Babka
On 9/2/20 4:26 PM, Pavel Tatashin wrote: > On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote: >> >> > >> > Thread#1 - continue >> > free_unref_page_commit >> >migratetype = get_pcppage_migratetype(page); >> > // get old migration type >> >

Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline

2020-09-02 Thread Vlastimil Babka
On 9/2/20 4:31 PM, Pavel Tatashin wrote: >> > > The fix is to try to drain per-cpu lists again after >> > > check_pages_isolated_cb() fails. >> >> Still trying to wrap my head around this but I think this is not a >> proper fix. It should be the page isolation to make sure no races are >> possible

Re: [Patch v4 5/7] mm/hugetlb: a page from buddy is not on any list

2020-09-02 Thread Vlastimil Babka
On 9/1/20 3:46 AM, Wei Yang wrote: > The page allocated from buddy is not on any list, so just use list_add() > is enough. > > Signed-off-by: Wei Yang > Reviewed-by: Baoquan He > Reviewed-by: Mike Kravetz > --- > mm/hugetlb.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff

Re: [PATCH v2 00/28] The new cgroup slab memory controller

2020-09-02 Thread Vlastimil Babka
On 8/28/20 6:47 PM, Pavel Tatashin wrote: > There appears to be another problem that is related to the > cgroup_mutex -> mem_hotplug_lock deadlock described above. > > In the original deadlock that I described, the workaround is to > replace crash dump from piping to Linux traditional save to

Re: [PATCH v3 1/3] mm/pageblock: mitigation cmpxchg false sharing in pageblock flags

2020-09-01 Thread Vlastimil Babka
On 9/1/20 4:50 AM, Alex Shi wrote: > pageblock_flags is stored as longs; since each pageblock's flags are just 4 > bits, one 'long' holds the flags of 8 (32-bit machine) or 16 pageblocks, > so setting one pageblock's flags has to sync in cmpxchg with 7 or 15 other pageblocks' > flags. It would cause long waiting for
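To make the sharing concrete: with 4 flag bits per pageblock packed into an array of longs, a cmpxchg on one pageblock's flags contends with every other pageblock whose bits land in the same word. A hedged sketch of the index math (the constants mirror the kernel's naming, but the helpers are illustrative):

```c
#include <assert.h>

#define NR_PAGEBLOCK_BITS 4UL
#define BITS_PER_LONG (8UL * sizeof(unsigned long))

/* Which word of the flags bitmap holds a given pageblock's 4 flag bits.
 * All pageblocks mapping to the same word serialize on cmpxchg there. */
static unsigned long pb_flags_word(unsigned long pageblock)
{
        return pageblock * NR_PAGEBLOCK_BITS / BITS_PER_LONG;
}

/* How many pageblocks share one word: 16 on 64-bit, 8 on 32-bit. */
static unsigned long pb_per_word(void)
{
        return BITS_PER_LONG / NR_PAGEBLOCK_BITS;
}
```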

Re: [PATCH v2 2/2] mm/pageblock: remove false sharing in pageblock_flags

2020-09-01 Thread Vlastimil Babka
On 8/19/20 10:09 AM, Alex Shi wrote: > > > On 2020/8/19 at 3:57 PM, Anshuman Khandual wrote: >> >> >> On 08/19/2020 11:17 AM, Alex Shi wrote: >>> The current pageblock_flags is only 4 bits, so it has to share a char size >>> in cmpxchg when being set; the false sharing causes a perf drop. >>> >>> If we increase

Re: [PATCH for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

2020-08-27 Thread Vlastimil Babka
On 8/26/20 7:12 AM, Joonsoo Kim wrote: > On Tue, Aug 25, 2020 at 6:43 PM, Vlastimil Babka wrote: >> >> >> On 8/25/20 6:59 AM, js1...@gmail.com wrote: >> > From: Joonsoo Kim >> > >> > memalloc_nocma_{save/restore} APIs can be used to skip page allocatio

Re: [PATCH for v5.9] mm/page_alloc: handle a missing case for memalloc_nocma_{save/restore} APIs

2020-08-25 Thread Vlastimil Babka
On 8/25/20 6:59 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > memalloc_nocma_{save/restore} APIs can be used to skip page allocation > on CMA area, but, there is a missing case and the page on CMA area could > be allocated even if APIs are used. This patch handles this case to fix > the

Re: [PATCH v2 4/6] mm/page_isolation: cleanup set_migratetype_isolate()

2020-08-06 Thread Vlastimil Babka
On 7/30/20 11:34 AM, David Hildenbrand wrote: > Let's clean it up a bit, simplifying error handling and getting rid of > the label. Nit: the label was already removed by patch 1/6? > Reviewed-by: Baoquan He > Reviewed-by: Pankaj Gupta > Cc: Andrew Morton > Cc: Michal Hocko > Cc: Michael S.

Re: [PATCH v2] mm, dump_page: do not crash with bad compound_mapcount()

2020-08-06 Thread Vlastimil Babka
een seen, so it's > a good trade-off. > > Reported-by: Qian Cai > Suggested-by: Matthew Wilcox > Cc: Vlastimil Babka > Cc: Kirill A. Shutemov > Signed-off-by: John Hubbard Acked-by: Vlastimil Babka > --- > Hi, > > I'm assuming that a fix is not required for -st

Re: [PATCH v2] mm, dump_page: do not crash with bad compound_mapcount()

2020-08-06 Thread Vlastimil Babka
On 8/6/20 3:48 PM, Matthew Wilcox wrote: > On Thu, Aug 06, 2020 at 01:45:11PM +0200, Vlastimil Babka wrote: >> How about this additional patch now that we have head_mapcoun()? (I wouldn't >> go for squashing as the goal and scope is too different). > > I like it. It bothers

Re: [PATCH v2] mm, dump_page: do not crash with bad compound_mapcount()

2020-08-06 Thread Vlastimil Babka
On 8/6/20 5:39 PM, Matthew Wilcox wrote: >> >> +++ b/mm/huge_memory.c >> >> @@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct >> >> vm_area_struct *vma, pmd_t *pmd, >> >>* Set PG_double_map before dropping compound_mapcount to avoid >> >>* false-negative page_mapped(). >>

Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects

2020-08-06 Thread Vlastimil Babka
On 7/2/20 10:32 AM, Xunlei Pang wrote: > The node list_lock in count_partial() is held a long time while iterating > in case of large partial page lists, which can cause > a thundering herd effect on the list_lock contention, e.g. it causes > business response-time jitters when accessing
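The series' idea of trading the O(n) walk under list_lock for counters maintained at add/remove time can be sketched as follows (the struct and field names are illustrative, not the actual mm/slub.c ones):

```c
#include <assert.h>

/* Illustrative per-node state: instead of iterating every partial page
 * under list_lock to count free objects, maintain running counters that
 * are updated when a page joins or leaves the partial list. */
struct node_partial {
        unsigned long partial_pages;
        unsigned long free_objects;
};

static void partial_add(struct node_partial *n, unsigned long free_objs)
{
        n->partial_pages++;
        n->free_objects += free_objs;
}

static void partial_remove(struct node_partial *n, unsigned long free_objs)
{
        n->partial_pages--;
        n->free_objects -= free_objs;
}

/* count_partial() becomes an O(1) read instead of a list walk. */
static unsigned long count_partial_fast(const struct node_partial *n)
{
        return n->free_objects;
}
```

The trade-off discussed in the thread is the cost of the extra counter updates on every list operation versus the occasional expensive walk.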

Re: [RFC-PROTOTYPE 1/1] mm: Add __GFP_FAST_TRY flag

2020-08-04 Thread Vlastimil Babka
On 8/4/20 7:12 PM, Matthew Wilcox wrote: > On Tue, Aug 04, 2020 at 07:02:14PM +0200, Vlastimil Babka wrote: >> > 2) There was a proposal from Matthew Wilcox: >> > https://lkml.org/lkml/2020/7/31/1015 >> > >> > >> > On non-RT, we could make that lo

Re: [PATCH v2 2/2] slab: Add naive detection of double free

2020-08-04 Thread Vlastimil Babka
No idea how much it helps in practice wrt security, but implementation-wise it seems fine, so: Acked-by: Vlastimil Babka Maybe you don't want to warn just once, though? We had similar discussion on cache_to_obj(). > --- > mm/slab.c | 14 -- > 1 file changed, 12 insertions(+),

Re: [PATCH v2 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB

2020-08-04 Thread Vlastimil Babka
rability.pdf > > Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist > hardening") > Signed-off-by: Kees Cook Acked-by: Vlastimil Babka > --- > init/Kconfig | 9 + > 1 file changed, 5 insertions(+), 4 deletions(-) > > diff --git

Re: [RFC-PROTOTYPE 1/1] mm: Add __GFP_FAST_TRY flag

2020-08-04 Thread Vlastimil Babka
On 8/3/20 6:30 PM, Uladzislau Rezki (Sony) wrote: > Some background and kfree_rcu() > === > The pointers to be freed are stored in the per-cpu array to improve > performance, to enable an easier-to-use API, to accommodate vmalloc > memory and to support a single
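The per-cpu pointer array described above amounts to simple batching: queue pointers locally and hand them off in bulk. A toy model (the names and flush policy are invented for illustration; the real kvfree_rcu() machinery also waits for an RCU grace period before freeing):

```c
#include <assert.h>
#include <stddef.h>

#define BATCH 8
static void *batch[BATCH];
static size_t batch_len;
static size_t flushed;   /* total pointers handed off in bulk */

static void flush_batch(void)
{
        /* Stand-in for bulk freeing after an RCU grace period. */
        flushed += batch_len;
        batch_len = 0;
}

static void queue_free(void *p)
{
        if (batch_len == BATCH)
                flush_batch();   /* array full: fall back to a bulk flush */
        batch[batch_len++] = p;
}

/* Queue n dummy pointers, flush the tail, return how many were handed off. */
static size_t queue_many(size_t n)
{
        static int dummy;
        size_t i;

        flushed = 0;
        batch_len = 0;
        for (i = 0; i < n; i++)
                queue_free(&dummy);
        flush_batch();
        return flushed;
}
```

The __GFP_FAST_TRY discussion is about what to do when growing such an array needs an allocation that must not sleep.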

Re: [PATCH v3 1/3] mm/gup: restrict CMA region by using allocation scope API

2020-08-04 Thread Vlastimil Babka
timal but it doesn't cause any problem. > > Suggested-by: Michal Hocko > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- > include/linux/hugetlb.h | 2 ++ > mm/gup.c| 17 - > 2 files changed, 10 insertions(+), 9 deletions(

Re: [PATCH] mm: sort freelist by rank number

2020-08-04 Thread Vlastimil Babka
On 8/4/20 4:35 AM, Cho KyongHo wrote: > On Mon, Aug 03, 2020 at 05:45:55PM +0200, Vlastimil Babka wrote: >> On 8/3/20 9:57 AM, David Hildenbrand wrote: >> > On 03.08.20 08:10, pullip@samsung.com wrote: >> >> From: Cho KyongHo >> >> >>

Re: [PATCH] mm: sort freelist by rank number

2020-08-03 Thread Vlastimil Babka
On 8/3/20 9:57 AM, David Hildenbrand wrote: > On 03.08.20 08:10, pullip@samsung.com wrote: >> From: Cho KyongHo >> >> LPDDR5 introduces a rank switch delay. If three successive DRAM accesses >> happen and the first and second ones access one rank and the last >> access happens on the

Re: [PATCH] mm, memory_hotplug: update pcp lists everytime onlining a memory block

2020-08-03 Thread Vlastimil Babka
system is not using benefits offered by the pcp lists when there is a > single onlineable memory block in a zone. Correct this by always > updating the pcp lists when memory block is onlined. > > Signed-off-by: Charan Teja Reddy Makes sense to me. Acked-by: Vlastimil Babka > ---

Re: [PATCH] mm/page_alloc: fix memalloc_nocma_{save/restore} APIs

2020-07-21 Thread Vlastimil Babka
On 7/21/20 2:05 PM, Matthew Wilcox wrote: > On Tue, Jul 21, 2020 at 12:28:49PM +0900, js1...@gmail.com wrote: >> +static inline unsigned int current_alloc_flags(gfp_t gfp_mask, >> +unsigned int alloc_flags) >> +{ >> +#ifdef CONFIG_CMA >> +unsigned int pflags
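For context, the helper being quoted gates ALLOC_CMA on a task flag. A userspace sketch of that logic (the flag values are simplified stand-ins and the function name mimics the posted patch; this is not the kernel code):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's flag machinery. */
#define PF_MEMALLOC_NOCMA  0x1u
#define ALLOC_CMA          0x2u
#define MIGRATE_UNMOVABLE  0
#define MIGRATE_MOVABLE    1

static unsigned int task_pflags;  /* stand-in for current->flags */

/* Only allow allocating from CMA pageblocks for movable allocations,
 * and only when the task has not entered a memalloc_nocma_save() scope. */
static unsigned int current_alloc_flags_sketch(int migratetype,
                                               unsigned int alloc_flags)
{
        if (!(task_pflags & PF_MEMALLOC_NOCMA) &&
            migratetype == MIGRATE_MOVABLE)
                alloc_flags |= ALLOC_CMA;
        return alloc_flags;
}
```

The point of the fix is that this check runs on every allocation, including the fastpath, instead of relying on current_gfp_context() being called first.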

Re: [PATCH] mm/page_alloc: fix memalloc_nocma_{save/restore} APIs

2020-07-21 Thread Vlastimil Babka
or exactly this purpose. > Fixes: d7fefcc8de91 (mm/cma: add PF flag to force non cma alloc) > Cc: > Signed-off-by: Joonsoo Kim Reviewed-by: Vlastimil Babka Thanks!

Re: [PATCH 1/4] mm/page_alloc: fix non cma alloc context

2020-07-17 Thread Vlastimil Babka
On 7/17/20 10:10 AM, Vlastimil Babka wrote: > On 7/17/20 9:29 AM, Joonsoo Kim wrote: >> On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote: >>> >>> On 7/16/20 9:27 AM, Joonsoo Kim wrote: >>> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote: >>> &

Re: [PATCH 1/4] mm/page_alloc: fix non cma alloc context

2020-07-17 Thread Vlastimil Babka
On 7/17/20 9:29 AM, Joonsoo Kim wrote: > On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote: >> >> On 7/16/20 9:27 AM, Joonsoo Kim wrote: >> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote: >> >> > /* >> >> > * get_page_from

Re: [PATCH v3] mm: memcg/slab: fix memory leak at non-root kmem_cache destroy

2020-07-16 Thread Vlastimil Babka
On 7/16/20 6:51 PM, Muchun Song wrote: > If the kmem_cache refcount is greater than one, we should not > mark the root kmem_cache as dying. If we mark the root kmem_cache > dying incorrectly, the non-root kmem_cache can never be destroyed. > It resulted in a memory leak when the memcg was destroyed. We

Re: [PATCH 1/4] mm/page_alloc: fix non cma alloc context

2020-07-16 Thread Vlastimil Babka
On 7/16/20 9:27 AM, Joonsoo Kim wrote: > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote: >> > /* >> > * get_page_from_freelist goes through the zonelist trying to allocate >> > * a page. >> > @@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t

Re: [External] Re: [PATCH v5.4.y, v4.19.y] mm: memcg/slab: fix memory leak at non-root kmem_cache destroy

2020-07-15 Thread Vlastimil Babka
On 7/15/20 5:13 PM, Muchun Song wrote: > On Wed, Jul 15, 2020 at 7:32 PM Vlastimil Babka wrote: >> >> On 7/7/20 8:27 AM, Muchun Song wrote: >> > If the kmem_cache refcount is greater than one, we should not >> > mark the root kmem_cache as dying. If we ma

Re: [PATCH v5.4.y, v4.19.y] mm: memcg/slab: fix memory leak at non-root kmem_cache destroy

2020-07-15 Thread Vlastimil Babka
On 7/7/20 8:27 AM, Muchun Song wrote: > If the kmem_cache refcount is greater than one, we should not > mark the root kmem_cache as dying. If we mark the root kmem_cache > dying incorrectly, the non-root kmem_cache can never be destroyed. > It resulted in a memory leak when the memcg was destroyed. We

Re: [PATCH] mm: vmstat: fix /proc/sys/vm/stat_refresh generating false warnings

2020-07-15 Thread Vlastimil Babka
ngs. > > Signed-off-by: Roman Gushchin > Cc: Hugh Dickins > Signed-off-by: Roman Gushchin Acked-by: Vlastimil Babka

Re: [PATCH 3/4] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-15 Thread Vlastimil Babka
cannot be utilized. > > This patch tries to fix this situation by making the deque function on > hugetlb CMA aware. In the deque function, CMA memory is skipped if > PF_MEMALLOC_NOCMA flag is found. > > Acked-by: Mike Kravetz > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka

Re: [PATCH 1/4] mm/page_alloc: fix non cma alloc context

2020-07-15 Thread Vlastimil Babka
On 7/15/20 7:05 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > Currently, preventing the CMA area in page allocation is implemented by using > current_gfp_context(). However, there are two problems with this > implementation. > > First, this doesn't work for the allocation fastpath. In the fastpath,

Re: [PATCH] mm/hugetlb: hide nr_nodes in the internal of for_each_node_mask_to_[alloc|free]

2020-07-14 Thread Vlastimil Babka
On 7/14/20 11:57 AM, Wei Yang wrote: > On Tue, Jul 14, 2020 at 11:22:03AM +0200, Vlastimil Babka wrote: >>On 7/14/20 11:13 AM, Vlastimil Babka wrote: >>> On 7/14/20 9:34 AM, Wei Yang wrote: >>>> The second parameter of for_each_node_mask_to_[alloc|free] is a loop &

Re: [PATCH] mm : fix pte _PAGE_DIRTY bit when fallback migrate page

2020-07-14 Thread Vlastimil Babka
On 7/13/20 3:57 AM, Robbie Ko wrote: > > Vlastimil Babka 於 2020/7/10 下午11:31 寫道: >> On 7/9/20 4:48 AM, robbieko wrote: >>> From: Robbie Ko >>> >>> When a migrate page occurs, we first create a migration entry >>> to replace the original pte, and

Re: [PATCH] mm: thp: Replace HTTP links with HTTPS ones

2020-07-14 Thread Vlastimil Babka
On 7/13/20 6:43 PM, Alexander A. Klimov wrote: > Rationale: > Reduces attack surface on kernel devs opening the links for MITM > as HTTPS traffic is much harder to manipulate. > > Deterministic algorithm: > For each file: > If not .svg: > For each line: > If doesn't contain
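The deterministic algorithm quoted above is essentially a guarded per-line string substitution. A compact C sketch of the core step (the real tooling also honors the .svg and per-line exclusions listed in the commit message, which are omitted here):

```c
#include <string.h>

/* Rewrite the first "http://" in a line to "https://", in place.
 * buf must have room for one extra character. Returns 1 if changed. */
static int https_rewrite(char *buf, size_t bufsize)
{
        char *p = strstr(buf, "http://");
        size_t len = strlen(buf);

        if (!p || len + 2 > bufsize)
                return 0;
        /* Shift the tail right by one to make room for the 's',
         * including the trailing NUL terminator. */
        memmove(p + 5, p + 4, len - (size_t)(p - buf) - 4 + 1);
        p[4] = 's';
        return 1;
}
```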

Re: [PATCH] mm/hugetlb: hide nr_nodes in the internal of for_each_node_mask_to_[alloc|free]

2020-07-14 Thread Vlastimil Babka
On 7/14/20 11:13 AM, Vlastimil Babka wrote: > On 7/14/20 9:34 AM, Wei Yang wrote: >> The second parameter of for_each_node_mask_to_[alloc|free] is a loop >> variant, which is not used outside of loop iteration. >> >> Let's hide this. >> >> Signed-off-by:

Re: [PATCH] mm/hugetlb: hide nr_nodes in the internal of for_each_node_mask_to_[alloc|free]

2020-07-14 Thread Vlastimil Babka
On 7/14/20 9:34 AM, Wei Yang wrote: > The second parameter of for_each_node_mask_to_[alloc|free] is a loop > variant, which is not used outside of loop iteration. > > Let's hide this. > > Signed-off-by: Wei Yang > --- > mm/hugetlb.c | 38 -- > 1 file

Re: [PATCH v5 5/9] mm/migrate: make a standard migration target allocation function

2020-07-13 Thread Vlastimil Babka
On 7/13/20 8:41 AM, js1...@gmail.com wrote: > From: Joonsoo Kim Nit: s/make/introduce/ in the subject, is a more common verb in this context.

Re: [PATCH v5 4/9] mm/migrate: clear __GFP_RECLAIM to make the migration callback consistent with regular THP allocations

2020-07-13 Thread Vlastimil Babka
n seen during > large mmaps initialization. There is no indication that this is a > problem for migration as well but theoretically the same might happen > when migrating large mappings to a different node. Make the migration > callback consistent with regular THP allocations. > > Signed-of

Re: [PATCH] mm : fix pte _PAGE_DIRTY bit when fallback migrate page

2020-07-10 Thread Vlastimil Babka
On 7/9/20 4:48 AM, robbieko wrote: > From: Robbie Ko > > When a migrate page occurs, we first create a migration entry > to replace the original pte, and then go to fallback_migrate_page > to execute a writeout if the migratepage is not supported. > > In the writeout, we will clear the dirty

Re: [PATCH] mm: Close race between munmap() and expand_upwards()/downwards()

2020-07-10 Thread Vlastimil Babka
ngrading mmap_lock in __do_munmap() if detached > VMAs are next to VM_GROWSDOWN or VM_GROWSUP VMA. > > Signed-off-by: Kirill A. Shutemov > Reported-by: Jann Horn > Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap") > Cc: # 4.20 > Cc: Yang Shi > C

Re: [PATCH 2/3] mm: slab: rename (un)charge_slab_page() to (un)account_slab_page()

2020-07-08 Thread Vlastimil Babka
; > Signed-off-by: Roman Gushchin Acked-by: Vlastimil Babka > --- > mm/slab.c | 4 ++-- > mm/slab.h | 8 > mm/slub.c | 4 ++-- > 3 files changed, 8 insertions(+), 8 deletions(-) > > diff --git a/mm/slab.c b/mm/slab.c > index fafd46877504..300adfb67245

Re: [PATCH 3/3] mm: kmem: switch to static_branch_likely() in memcg_kmem_enabled()

2020-07-08 Thread Vlastimil Babka
xceeds the > cost of a jump. However, the conversion makes the code look more > logical. > > Signed-off-by: Roman Gushchin Acked-by: Vlastimil Babka > --- > include/linux/memcontrol.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/include/l

Re: [PATCH 1/3] mm: memcg/slab: remove unused argument by charge_slab_page()

2020-07-08 Thread Vlastimil Babka
On 7/7/20 7:36 PM, Roman Gushchin wrote: > charge_slab_page() is not using the gfp argument anymore, > remove it. > > Signed-off-by: Roman Gushchin Acked-by: Vlastimil Babka > --- > mm/slab.c | 2 +- > mm/slab.h | 3 +-- > mm/slub.c | 2 +- > 3 files changed, 3

Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-08 Thread Vlastimil Babka
On 7/8/20 9:41 AM, Michal Hocko wrote: > On Wed 08-07-20 16:16:02, Joonsoo Kim wrote: >> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote: >> >> Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It >> would not cause any problem.

Re: [PATCH v4 10/11] mm/memory-failure: remove a wrapper for alloc_migration_target()

2020-07-07 Thread Vlastimil Babka
On 7/7/20 9:44 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > There is a well-defined standard migration target callback. Use it > directly. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- > mm/memory-failure.c | 18 ++ > 1

Re: [PATCH v4 11/11] mm/memory_hotplug: remove a wrapper for alloc_migration_target()

2020-07-07 Thread Vlastimil Babka
> Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka Thanks! Nitpick below. > @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned > long end_pfn) > put_page(page); > } > if (!list_empty(&source)) { > - /* Allocate a new p

Re: [PATCH v4 06/11] mm/migrate: make a standard migration target allocation function

2020-07-07 Thread Vlastimil Babka
On 7/7/20 9:44 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > There are some similar functions for migration target allocation. Since > there is no fundamental difference, it's better to keep just one rather > than keeping all variants. This patch implements base migration target >

Re: [PATCH v4 10/11] mm/memory-failure: remove a wrapper for alloc_migration_target()

2020-07-07 Thread Vlastimil Babka
On 7/7/20 1:48 PM, Michal Hocko wrote: > On Tue 07-07-20 16:44:48, Joonsoo Kim wrote: >> From: Joonsoo Kim >> >> There is a well-defined standard migration target callback. Use it >> directly. >> >> Signed-off-by: Joonsoo Kim >> --- >> mm/memory-failure.c | 18 ++ >> 1 file

Re: [PATCH v4 05/11] mm/migrate: clear __GFP_RECLAIM for THP allocation for migration

2020-07-07 Thread Vlastimil Babka
On 7/7/20 9:44 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > In mm/migrate.c, THP allocation for migration is called with the provided > gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM and it > would be conflict with the intention of the GFP_TRANSHUGE. > >

Re: [PATCH v4 04/11] mm/hugetlb: make hugetlb migration callback CMA aware

2020-07-07 Thread Vlastimil Babka
On 7/7/20 9:44 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > new_non_cma_page() in gup.c, which tries to allocate a migration target page, > needs to allocate a new page that is not on the CMA area. > new_non_cma_page() implements this by removing the __GFP_MOVABLE flag. This way > works well

Re: [PATCH v4 03/11] mm/hugetlb: unify migration callbacks

2020-07-07 Thread Vlastimil Babka
are changed > to provide gfp_mask. > > Note that it's safe to remove a node id check in alloc_huge_page_node() > since there is no caller passing NUMA_NO_NODE as a node id. > > Reviewed-by: Mike Kravetz > Signed-off-by: Joonsoo Kim Yeah, this version looks very good :) Reviewed-by: Vlastimil Babka Thanks!

Re: [PATCH v3 8/8] mm/page_alloc: remove a wrapper for alloc_migration_target()

2020-07-03 Thread Vlastimil Babka
On 6/23/20 8:13 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > There is a well-defined standard migration target callback. > Use it directly. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka But you could move this to patch 5/8 to reduce churn. And do the s

Re: [PATCH v3 7/8] mm/mempolicy: use a standard migration target allocation callback

2020-07-03 Thread Vlastimil Babka
On 6/23/20 8:13 AM, js1...@gmail.com wrote: > From: Joonsoo Kim > > There is a well-defined migration target allocation callback. > Use it. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka I like that this removes the wrapper completely.

Re: [PATCH v3 6/8] mm/gup: use a standard migration target allocation callback

2020-07-03 Thread Vlastimil Babka
arget > allocation callback and use it on gup.c. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka But a suggestion below. > --- > mm/gup.c | 57 - > mm/internal.h | 1 + > mm/migrate.c | 4 +++- &

Re: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function

2020-07-03 Thread Vlastimil Babka
soo Kim Provided that the "&= ~__GFP_RECLAIM" line is separated patch as you discussed, Acked-by: Vlastimil Babka

Re: BUG: Bad page state in process - page dumped because: page still charged to cgroup

2020-07-02 Thread Vlastimil Babka
irreversible (always returning true >> after returning it for the first time), it'll make the general logic >> more simple and robust. It also will allow to guard some checks which >> otherwise would stay unguarded. >> >> Signed-off-by: Roman Gushchin Fixes: ? or let Andrew

Re: [PATCH v3 3/8] mm/hugetlb: unify migration callbacks

2020-07-02 Thread Vlastimil Babka
On 6/26/20 6:02 AM, Joonsoo Kim wrote: > On Thu, Jun 25, 2020 at 8:26 PM, Michal Hocko wrote: >> >> On Tue 23-06-20 15:13:43, Joonsoo Kim wrote: >> > From: Joonsoo Kim >> > >> > There is no difference between two migration callback functions, >> > alloc_huge_page_node() and alloc_huge_page_nodemask(),

Re: [PATCH v6 6/6] mm/vmscan: restore active/inactive ratio for anonymous LRU

2020-07-02 Thread Vlastimil Babka
t; Acked-by: Johannes Weiner > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka Thanks! I still hope Matthew can review updated patch 4/6 (I'm not really familiar with proper xarray handling), and Johannes patch 5/6. And then we just need a nice Documentation file describing how

Re: [PATCH v6 5/6] mm/swap: implement workingset detection for anonymous LRU

2020-07-02 Thread Vlastimil Babka
; the shadow entry. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > diff --git a/mm/workingset.c b/mm/workingset.c > index 8395e60..3769ae6 100644 > --- a/mm/workingset.c > +++ b/mm/workingset.c > @@ -353,8 +353,9 @@ void workingset_refault(struct page *page, void

Re: [PATCH v6 3/6] mm/workingset: extend the workingset detection for anon LRU

2020-07-01 Thread Vlastimil Babka
ning at all, at least not in this patch. > Acked-by: Johannes Weiner > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka

Re: [PATCH v6 2/6] mm/vmscan: protect the workingset on anonymous LRU

2020-07-01 Thread Vlastimil Babka
ered > as workingset. So, file refault formula which uses the number of all > anon pages is changed to use only the number of active anon pages. a "v6" note is more suitable for a diffstat area than commit log, but it's good to mention this so drop the 'v6:'? > Acked-by: Johannes W

Re: [PATCH v6 1/6] mm/vmscan: make active/inactive ratio as 1:1 for anon lru

2020-06-30 Thread Vlastimil Babka
list. Afterwards this patch is effectively reverted. > Acked-by: Johannes Weiner > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- > mm/vmscan.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/mm/vmscan.c b/mm/vmscan.c > index 749d239..9

Re: [PATCH for v5.8 2/3] mm/swap: fix for "mm: workingset: age nonresident information alongside anonymous pages"

2020-06-29 Thread Vlastimil Babka
"mm: workingset: age nonresident information alongside anonymous pages". Agreed. > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- > mm/swap.c | 3 +-- > 1 file changed, 1 insertion(+), 2 deletions(-) > > diff --git a/mm/swap.c b/mm/swap.c > index 667133d

Re: [PATCH for v5.8 3/3] mm/memory: fix IO cost for anonymous page

2020-06-29 Thread Vlastimil Babka
> in fault code. > > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- > mm/memory.c | 8 > 1 file changed, 8 insertions(+) > > diff --git a/mm/memory.c b/mm/memory.c > index bc6a471..3359057 100644 > --- a/mm/memory.c > +++ b/mm/memory.c

Re: [PATCH for v5.8 1/3] mm: workingset: age nonresident information alongside anonymous pages

2020-06-29 Thread Vlastimil Babka
> > Make anon aging drive nonresident age as well to address that. Fixes: 34e58cac6d8f ("mm: workingset: let cache workingset challenge anon") > Reported-by: Joonsoo Kim > Signed-off-by: Johannes Weiner > Signed-off-by: Joonsoo Kim Acked-by: Vlastimil Babka > --- >

Re: [PATCH 9/9] mm, slab/slub: move and improve cache_from_obj()

2020-06-24 Thread Vlastimil Babka
On 6/18/20 12:10 PM, Vlastimil Babka wrote: > 8< > From b8df607d92b37e5329ce7bda62b2b364cc249893 Mon Sep 17 00:00:00 2001 > From: Vlastimil Babka > Date: Thu, 18 Jun 2020 11:52:03 +0200 > Subject: [PATCH] mm, slab/slub: improve error reporting and overhead of

Re: [PATCH v7 00/19] The new cgroup slab memory controller

2020-06-23 Thread Vlastimil Babka
On 6/23/20 3:58 AM, Roman Gushchin wrote: > This is v7 of the slab cgroup controller rework. Hi, As you and Jesper did those measurements on v6, and are sending v7, it would be great to put some summary in the cover letter? Thanks, Vlastimil > The patchset moves the accounting from the page

Re: [mm, slab/slub] 7b39adbb1b: WARNING:at_mm/slab.h:#kmem_cache_free

2020-06-23 Thread Vlastimil Babka
On 6/23/20 11:02 AM, kernel test robot wrote: > Greeting, > > FYI, we noticed the following commit (built with gcc-6): > > commit: 7b39adbb1b1d3e73df9066a8d1e93a83c18d7730 ("mm, slab/slub: improve > error reporting and overhead of cache_from_obj()") >

Re: [PATCH 9/9] mm, slab/slub: move and improve cache_from_obj()

2020-06-18 Thread Vlastimil Babka
On 6/17/20 7:49 PM, Kees Cook wrote: > On Wed, Jun 10, 2020 at 06:31:35PM +0200, Vlastimil Babka wrote: >> The function cache_from_obj() was added by commit b9ce5ef49f00 ("sl[au]b: >> always get the cache from its page in kmem_cache_free()") to support kmemcg, >

Re: [PATCH 7/9] mm, slub: introduce kmem_cache_debug_flags()

2020-06-18 Thread Vlastimil Babka
On 6/10/20 6:31 PM, Vlastimil Babka wrote: > There are a few places that call kmem_cache_debug(s) (which tests if any of > debug > flags are enabled for a cache) immediately followed by a test for a specific > flag. The compiler can probably eliminate the extra check, but we can mak

Re: [PATCH 7/9] mm, slub: introduce kmem_cache_debug_flags()

2020-06-18 Thread Vlastimil Babka
On 6/17/20 7:56 PM, Kees Cook wrote: > On Wed, Jun 10, 2020 at 06:31:33PM +0200, Vlastimil Babka wrote: >> There are a few places that call kmem_cache_debug(s) (which tests if any of >> debug >> flags are enabled for a cache) immediately followed by a test for a specific >

Re: [PATCH v6 17/19] mm: memcg/slab: use a single set of kmem_caches for all allocations

2020-06-18 Thread Vlastimil Babka
On 6/18/20 2:35 AM, Roman Gushchin wrote: > On Wed, Jun 17, 2020 at 04:35:28PM -0700, Andrew Morton wrote: >> On Mon, 8 Jun 2020 16:06:52 -0700 Roman Gushchin wrote: >> >> > Instead of having two sets of kmem_caches: one for system-wide and >> > non-accounted allocations and the second one

Re: [PATCH v6 00/19] The new cgroup slab memory controller

2020-06-17 Thread Vlastimil Babka
On 6/17/20 5:32 AM, Roman Gushchin wrote: > On Tue, Jun 16, 2020 at 08:05:39PM -0700, Shakeel Butt wrote: >> On Tue, Jun 16, 2020 at 7:41 PM Roman Gushchin wrote: >> > >> > On Tue, Jun 16, 2020 at 06:46:56PM -0700, Shakeel Butt wrote: >> > > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:

Re: [PATCH] mm, slab: Use kmem_cache_zalloc() instead of kmem_cache_alloc() with flag GFP_ZERO.

2020-06-17 Thread Vlastimil Babka
On 6/17/20 9:15 AM, Yi Wang wrote: > From: Liao Pingfang > > Use kmem_cache_zalloc instead of manually calling kmem_cache_alloc > with flag GFP_ZERO. > > Signed-off-by: Liao Pingfang > --- > include/linux/slab.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git

Re: [PATCH 2/2] mm, page_alloc: use unlikely() in task_capc()

2020-06-17 Thread Vlastimil Babka
On 6/16/20 10:29 PM, Hugh Dickins wrote: > On Tue, 16 Jun 2020, Vlastimil Babka wrote: > >> Hugh noted that task_capc() could use unlikely(), as most of the time there >> is >> no capture in progress and we are in page freeing hot path. Indeed adding >> unlikely

Re: [PATCH v3] page_alloc: consider highatomic reserve in wmartermark fast

2020-06-16 Thread Vlastimil Babka
<...>-22275 [006] 889.213391: mm_page_alloc: page=f8a51d4f > pfn=970260 order=0 migratetype=0 nr_free=3650 > gfp_flags=GFP_HIGHUSER|__GFP_ZERO > <...>-22275 [006] 889.213393: mm_page_alloc: page=6ba8f5ac > pfn=970261 order=0 migratetype=0 nr_free=

[PATCH 2/2] mm, page_alloc: use unlikely() in task_capc()

2020-06-16 Thread Vlastimil Babka
that we don't need to test for cc->direct_compaction as the only place we set current->task_capture is compact_zone_order() which also always sets cc->direct_compaction true. Suggested-by: Hugh Dickins Signed-off-by: Vlastimil Babka --- mm/page_alloc.c | 5 ++--- 1 file changed, 2 inserti

[PATCH 1/2] mm, compaction: make capture control handling safe wrt interrupts

2020-06-16 Thread Vlastimil Babka
e leaking with WRITE_ONCE/READ_ONCE in the proper order. Fixes: 5e1f0f098b46 ("mm, compaction: capture a page under direct compaction") Cc: sta...@vger.kernel.org # 5.1+ Reported-by: Hugh Dickins Suggested-by: Hugh Dickins Signed-off-by: Vlastimil Babka --- mm/compaction.c | 17 +

Re: [kernel/watchdog.c] f117955a22: kmsg.Failed_to_set_sysctl_parameter'kernel.softlockup_panic=#':parameter_not_found

2020-06-16 Thread Vlastimil Babka
On 6/16/20 9:38 AM, kernel test robot wrote: > Greeting, > > FYI, we noticed the following commit (built with gcc-9): > > commit: f117955a2255721a6a0e9cecf6cad3a6eb43cbc3 ("kernel/watchdog.c: convert > {soft/hard}lockup boot parameters to sysctl aliases") >

Re: [PATCH] mm, page_alloc: capture page in task context only

2020-06-16 Thread Vlastimil Babka
On 6/15/20 11:03 PM, Hugh Dickins wrote: > On Fri, 12 Jun 2020, Vlastimil Babka wrote: >> > This could presumably be fixed by a barrier() before setting >> > current->capture_control in compact_zone_order(); but would also need >> > more care on return from com

Re: [PATCH 1/3] mm/slub: Fix slabs_node return value when CONFIG_SLUB_DEBUG disabled

2020-06-15 Thread Vlastimil Babka
On 6/14/20 2:39 PM, Muchun Song wrote: > The slabs_node() always return zero when CONFIG_SLUB_DEBUG is disabled. > But some codes determine whether slab is empty by checking the return > value of slabs_node(). As you know, the result is not correct. This > problem can be reproduce by the follow

Re: [PATCH] mm/slab: Add a __GFP_ACCOUNT GFP flag check for slab allocation

2020-06-15 Thread Vlastimil Babka
On 6/14/20 8:38 AM, Muchun Song wrote: > When a kmem_cache is initialized with SLAB_ACCOUNT slab flag, we must > not call kmem_cache_alloc with __GFP_ACCOUNT GFP flag. In this case, > we can be accounted to kmemcg twice. This is not correct. So we add a Are you sure? How does that happen? The

Re: [PATCH v2] page_alloc: consider highatomic reserve in wmartermark fast

2020-06-15 Thread Vlastimil Babka
> > > managed:2673676kB mlocked:2444kB kernel_stack:62512kB >> > > pagetables:105264kB bounce:0kB free_pcp:4140kB local_pcp:40kB >> > > free_cma:712kB > > Checked this mem info, wondering why there's no 'reserved_highatomic' > printing in all these examples.
