On 11/11/20 6:46 PM, Vlastimil Babka wrote:
Acked-by: Vlastimil Babka
Err, not yet, that was intended for patch 16/17
work, so __pagevec_lru_add() goes its own
way.
Reported-by: Hugh Dickins
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
---
mm/sw
rew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Yang Shi
Cc: Matthew Wilcox
Cc: Konstantin Khlebnikov
Cc: Tejun Heo
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Cc: cgro...@vger.kernel.org
Acked-by: Vlastimil Babka
Cc: Matthew Wilcox
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Acked-by: Vlastimil Babka
A question below:
@@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
goto isolate_abort;
}
- /* Recheck P
On 11/5/20 9:55 AM, Alex Shi wrote:
Currently lru_lock still guards both the lru list and the page's lru bit, which is
fine. But if we want to use a specific lruvec lock for the page, we need to
pin down the page's lruvec/memcg during locking. Just taking the lruvec
lock first may be undermined by the page's memcg
On 11/5/20 9:55 AM, Alex Shi wrote:
The function has only one caller; remove it to clean up and simplify the
code.
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc
on __mod_zone_page_state which need change
to mod_zone_page_state. Thanks!
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
Nit
tly
but not entirely prevented by page_count() check in ksm.c's
write_protect_page(): that risk being shared with page_referenced() and
not helped by lru_lock).
Signed-off-by: Hugh Dickins
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Vladimir Davydov
Cc: Vlastimil Babka
Cc: Minchan Kim
Cc: Alex
on.org: coding style fixes]
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
Nice cleanup!
On 11/11/20 10:06 AM, David Hildenbrand wrote:
On 11.11.20 09:47, Michal Hocko wrote:
On Tue 10-11-20 20:32:40, David Hildenbrand wrote:
commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and
init_on_free=1 boot options") resulted with init_on_alloc=1 in all pages
leaving the buddy
ary read tearing, but mainly to alert anybody
making future changes to the code that special care is needed.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 40 ++--
1 file change
the zone_pageset_init() and __zone_pcp_update()
wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 42 ++
1 file changed, 18 insertions(+), 24 deletions
-by: David Hildenbrand
Suggested-by: Pavel Tatashin
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/memory_hotplug.c | 11 ++-
mm/page_alloc.c | 2 ++
mm/page_isolation.c | 10 +-
3 files changed, 13
.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/page_alloc.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2fa432762908
m/3d3b53db-aeaa-ff24-260b-36427fac9...@suse.cz/
[7] https://lore.kernel.org/linux-mm/20200922143712.12048-1-vba...@suse.cz/
[8] https://lore.kernel.org/linux-mm/20201008114201.18824-1-vba...@suse.cz/
Vlastimil Babka (7):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: ca
users of
zone_pcp_disable()/enable().
Currently the only user of this functionality is offline_pages().
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/internal.h | 2 ++
mm/memory_hotplug.c |
wrappers was:
build_all_zonelists_init()
setup_pageset()
pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David
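As a toy sketch of what "constant parameters" means here (the struct and helper below are illustrative stand-ins, not the kernel's actual definitions): a per-cpu pageset configured with high = 0 and batch = 1 caches nothing, so every free goes straight back to the buddy allocator, which is exactly the boot-time "pcplists disabled" state the wrapper was encoding.

```c
#include <assert.h>

/* Toy model of a per-cpu pageset limit update. With high = 0 and
 * batch = 1 the pcplist is effectively disabled: no pages are cached
 * on the per-cpu list. (Illustrative only; real structures differ.) */
struct pcp_limits {
	unsigned long high;	/* drain the list above this many pages */
	unsigned long batch;	/* pages moved to/from buddy at a time */
};

static void pageset_update(struct pcp_limits *p, unsigned long high,
			   unsigned long batch)
{
	p->high = high;
	p->batch = batch;
}

static int pcplist_disabled(const struct pcp_limits *p)
{
	return p->high == 0;
}
```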
-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
include/linux/mmzone.h | 6 ++
mm/page_alloc.c| 16 ++--
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7385871768d4
On 11/8/20 7:57 AM, Mike Rapoport wrote:
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache
*cachep)
return false;
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int
On 10/28/20 6:50 AM, Bharata B Rao wrote:
slub_max_order
--
The most promising tunable that shows consistent reduction in slab memory
is slub_max_order. Here is a table that shows the number of slabs that
end up with different orders and the total slab consumption at boot
for
On 11/5/20 2:19 PM, Michal Hocko wrote:
On Thu 05-11-20 14:14:25, Vlastimil Babka wrote:
On 11/5/20 1:58 PM, Michal Hocko wrote:
> On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
> > On 11/5/20 1:08 PM, Michal Hocko wrote:
> > > On Thu 05-11-20 09:40:28, Feng Tang wrote:
>
On 11/5/20 1:58 PM, Michal Hocko wrote:
On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
On 11/5/20 1:08 PM, Michal Hocko wrote:
> On Thu 05-11-20 09:40:28, Feng Tang wrote:
> > > > Could you be more specific? This sounds like a bug. Allocations
> > > shouldn't sp
On 11/5/20 1:08 PM, Michal Hocko wrote:
On Thu 05-11-20 09:40:28, Feng Tang wrote:
>
> Could you be more specific? This sounds like a bug. Allocations
> shouldn't spill over to a node which is not in the cpuset. There are few
> exceptions like IRQ context but that shouldn't happen regularly.
On 11/5/20 10:04 AM, Kalle Valo wrote:
(changing the subject, adding more lists and people)
Pavel Procopiuc writes:
Op 04.11.2020 om 10:12 schreef Kalle Valo:
Yeah, it is unfortunately time consuming but it is the best way to get to the
bottom of this.
I have found the commit that breaks things
On 11/3/20 5:20 PM, Mike Rapoport wrote:
From: Mike Rapoport
Subject should have "on DEBUG_PAGEALLOC" ?
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some
,invalid}_noflush().
Still, add a pr_warn() so that future changes in set_memory APIs will not
silently break hibernation.
Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
The bool param is a bit
when page
allocation debug is enabled.
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
But, the "enable" param is hideous. I would rather have map and unmap variants
(and just did the same split for page
be removed now that we have init_on_free
(Patch 4)
- CONFIG_PAGE_POISONING_ZERO can be most likely removed now that we have
init_on_free (Patch 5)
[1] https://lore.kernel.org/r/20201026173358.14704-1-vba...@suse.cz
Vlastimil Babka (5):
mm, page_alloc: do not rely on the order of p
. This results in simpler and more
effective code.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Mike Rapoport
---
include/linux/mm.h | 20 ++-
init/main.c| 2 +-
mm/page_alloc.c| 88 ++
3 files changed
t checking it back on alloc. Thus, remove this option and suggest
init_on_free instead in the main config's help.
Signed-off-by: Vlastimil Babka
---
drivers/virtio/virtio_balloon.c | 4 +---
mm/Kconfig.debug| 15 ---
mm/page_poison.c| 3 ---
3 files
We can use the same mechanism to instead poison free pages with PAGE_POISON
after resume. This covers both zero and 0xAA patterns. Thus we can remove the
Kconfig restriction that disables page poison sanity checking when hibernation
is enabled.
Signed-off-by: Vlastimil Babka
Cc: "Rafael J. Wy
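A minimal userspace sketch of what the poison sanity checking described above does (0xaa is the kernel's PAGE_POISON pattern; the helpers here are illustrative, not the kernel's actual implementation): a freed page is filled with the poison byte, and before reuse every byte is verified to still hold it, so a mismatch indicates use-after-free corruption.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_POISON 0xaa	/* the kernel's poison pattern */
#define PAGE_SIZE   4096

/* Fill a freed page with the poison pattern. */
static void poison_page(unsigned char *page)
{
	memset(page, PAGE_POISON, PAGE_SIZE);
}

/* Verify the pattern is intact; 0 means the page was written to
 * after being freed. */
static int page_poison_intact(const unsigned char *page)
{
	for (size_t i = 0; i < PAGE_SIZE; i++)
		if (page[i] != PAGE_POISON)
			return 0;
	return 1;
}
```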
us, remove the CONFIG_PAGE_POISONING_ZERO option for
being redundant.
Signed-off-by: Vlastimil Babka
---
include/linux/poison.h | 4
mm/Kconfig.debug | 12
mm/page_alloc.c | 8 +---
tools/include/linux/poison.h | 6 +-
4 files changed, 2
oc support. Move the check to
init_mem_debugging_and_hardening() to enable a single static key instead of
having two static branches in page_poisoning_enabled_static().
Signed-off-by: Vlastimil Babka
---
drivers/virtio/virtio_balloon.c | 2 +-
include/linux/mm.h | 23
e precise.
Yes, IMHO comparisons that can never be true only mislead readers.
Compilers can probably figure this out easily, so there may be no code generation
change, but to make it less misleading:
Signed-off-by: Hui Su
Acked-by: Vlastimil Babka
---
include/linux/list_
COMPACT_CLUSTER_MAX, and loop forever in the while loop. Bailing immediately
prevents that.
Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA
allocations")
Suggested-by: Vlastimil Babka
Signed-off-by: Zi Yan
Cc:
Acked-by: Vlastimil Babka
---
mm/compaction.c | 4
ted function) until we kill either task.
With the patch applied, the OOM killer will kill the application with 10GB THPs and
let hugetlb page reservation finish.
Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA
allocations")
Signed-off-by: Zi Yan
Reviewed-by: Yang Shi
Cc:
Acked-b
On 10/30/20 7:55 PM, Yang Shi wrote:
On Fri, Oct 30, 2020 at 11:39 AM Zi Yan wrote:
On 30 Oct 2020, at 14:33, Yang Shi wrote:
> On Fri, Oct 30, 2020 at 6:36 AM Michal Hocko wrote:
>>
>> On Fri 30-10-20 08:20:50, Zi Yan wrote:
>>> On 30 Oct 2020, at 5:43, Michal Hocko wrote:
>>>
[Cc
a performance impact, but
how much depends on exactly what e.g. the BPF program does.
[ rost...@goodmis.org: in-depth examples of tracepoint_enabled() usage,
and per-cpu-per-context buffer design ]
Great, thanks Steven.
Signed-off-by: Axel Rasmussen
Acked-by: Vlastimil Babka
On 10/30/20 5:27 PM, Luis Chamberlain wrote:
On Mon, Oct 26, 2020 at 06:33:57PM +0100, Vlastimil Babka wrote:
Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
changed page_poisoning_enabled() to a static key check. However, the function
is not inline
On 10/30/20 3:49 PM, Michal Hocko wrote:
On Fri 30-10-20 10:35:43, Zi Yan wrote:
On 30 Oct 2020, at 9:36, Michal Hocko wrote:
> On Fri 30-10-20 08:20:50, Zi Yan wrote:
>> On 30 Oct 2020, at 5:43, Michal Hocko wrote:
>>
>>> [Cc Vlastimil]
>>>
>>> On Thu 29-10-20 16:04:35, Zi Yan wrote:
>>>
>>>
On 10/29/20 9:04 PM, Zi Yan wrote:
From: Zi Yan
In isolate_migratepages_block(), when cc->alloc_contig is true, we are
able to isolate compound pages. However, nr_migratepages and nr_isolated did not
count compound pages correctly, causing us to isolate more pages than
intended. Use thp_nr_pages to
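The arithmetic behind the fix, as a hedged sketch (compound_pages() below is a stand-in for illustration, not the kernel's thp_nr_pages()): an order-N compound page spans 1 << N base pages, and that is the amount the isolation counters must be bumped by, not 1.

```c
#include <assert.h>

/* An order-N compound page covers 1 << N base pages; isolating one
 * such page must account for all of them, not just the head page. */
static unsigned long compound_pages(unsigned int order)
{
	return 1UL << order;
}
```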
On 10/27/20 6:04 PM, Hui Su wrote:
In list_lru_walk(), nr_to_walk's type is 'unsigned long',
so nr_to_walk can never be '< 0'.
In list_lru_walk_node(), nr_to_walk's type is also 'unsigned long',
so *nr_to_walk can never be '< 0' either.
We can use '!nr_to_walk' instead of 'nr_to_walk <= 0', which
is more precise.
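The point generalizes: for any unsigned type, `x <= 0` can only ever be true when `x == 0`, so `!x` states the same check directly. A minimal sketch (the walk_done() helper is hypothetical, for illustration only):

```c
#include <assert.h>

/* For an unsigned counter, "x <= 0" can only be true when x == 0,
 * so "!x" expresses the same check without the misleading comparison. */
static int walk_done(unsigned long nr_to_walk)
{
	return !nr_to_walk;	/* equivalent to nr_to_walk <= 0 here */
}
```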
who use
regular krealloc() to reallocate arrays. Let's provide an actual
krealloc_array() implementation.
Signed-off-by: Bartosz Golaszewski
Makes sense.
Acked-by: Vlastimil Babka
---
include/linux/slab.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/linux/slab.h
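For illustration, a userspace analog of such a helper (an assumed shape; the kernel version would build on krealloc() and its overflow-checking helpers): the key point is refusing the reallocation when n * size would overflow, instead of silently allocating a too-small buffer.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch of an array-aware realloc: fail cleanly when the
 * element count times element size would overflow size_t. */
static void *realloc_array(void *p, size_t new_n, size_t new_size)
{
	if (new_size != 0 && new_n > SIZE_MAX / new_size)
		return NULL;	/* new_n * new_size would overflow */
	return realloc(p, new_n * new_size);
}
```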
On 10/27/20 2:32 PM, Vlastimil Babka wrote:
So my conclusion:
- We can remove PAGE_POISONING_NO_SANITY because it only makes sense with
PAGE_POISONING_ZERO, and we can use init_on_free instead
Note for this we first have to make sanity checking compatible with
hibernation, but that should
d when it is called. It happens that this is not true in
that particular case, so check for page before calling node_match() here.
Fixes: 6159d0f5c03e ("mm/slub.c: page is always non-NULL in node_match()")
Signed-off-by: Laurent Dufour
With the expanded changelog,
Acked-by: Vlastimil B
On 10/27/20 12:05 PM, Vlastimil Babka wrote:
On 10/27/20 10:10 AM, David Hildenbrand wrote:
On 26.10.20 18:33, Vlastimil Babka wrote:
prep_new_page() will always zero a new page (regardless of __GFP_ZERO) when
init_on_alloc is enabled, but will also always skip zeroing if the page was
already
On 10/27/20 10:10 AM, David Hildenbrand wrote:
On 26.10.20 18:33, Vlastimil Babka wrote:
prep_new_page() will always zero a new page (regardless of __GFP_ZERO) when
init_on_alloc is enabled, but will also always skip zeroing if the page was
already zeroed on free by init_on_free or page
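The decision described above can be sketched as follows (flag names are illustrative, not the kernel's actual variables): zeroing on allocation is skipped when the page is known to have been zeroed on free, and otherwise happens when either init_on_alloc or __GFP_ZERO asks for it.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the zero-on-alloc decision (illustrative flags only). */
static bool should_zero_on_alloc(bool init_on_alloc, bool gfp_zero,
				 bool zeroed_on_free)
{
	if (zeroed_on_free)
		return false;	/* already zeroed when freed, skip */
	return init_on_alloc || gfp_zero;
}
```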
On 10/27/20 10:03 AM, David Hildenbrand wrote:
On 26.10.20 18:33, Vlastimil Babka wrote:
Enabling page_poison=1 together with init_on_alloc=1 or init_on_free=1 produces
a warning in dmesg that page_poison takes precendence. However, as these
warnings are printed in early_param handlers
keys. As prep_new_page() is really a hot path, let's introduce
a single static key free_pages_not_prezeroed for this purpose and initialize it
in init_mem_debugging().
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 21 ++---
1 file changed, 14 insertions(+), 7 deletions
es without proper debug_pagealloc support. Move the check to
init_mem_debugging() to enable a single static key instead of having two
static branches in page_poisoning_enabled_static().
Signed-off-by: Vlastimil Babka
---
drivers/virtio/virtio_balloon.c | 2 +-
include/linux/mm.h
and more effective code.
Signed-off-by: Vlastimil Babka
---
include/linux/mm.h | 20 ++
init/main.c| 2 +-
mm/page_alloc.c| 94 +++---
3 files changed, 50 insertions(+), 66 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
of parameters is also eliminated (Patch 1). The result
is more efficient and hopefully also more readable code.
Vlastimil Babka (3):
mm, page_alloc: do not rely on the order of page_poison and
init_on_alloc/free parameters
mm, page_poison: use static key more efficiently
mm, page_alloc
On 10/23/20 7:38 PM, Axel Rasmussen wrote:
On Fri, Oct 23, 2020 at 7:00 AM Vlastimil Babka wrote:
On 10/20/20 8:47 PM, Axel Rasmussen wrote:
> The goal of these tracepoints is to be able to debug lock contention
> issues. This lock is acquired on most (all?) mmap / munmap / page
On 10/20/20 8:47 PM, Axel Rasmussen wrote:
The goal of these tracepoints is to be able to debug lock contention
issues. This lock is acquired on most (all?) mmap / munmap / page fault
operations, so a multi-threaded process which does a lot of these can
experience significant contention.
We
s off), but should be good enough for now.
Agreed.
Acked-by: David Hildenbrand
Acked-by: Vlastimil Babka
ik van Riel
Acked-by: Vlastimil Babka
---
v2: move gfp calculation to shmem_getpage_gfp as suggested by Yu Xu
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c603237e006c..0a5b164a26d9 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -614,6 +614,8 @@ bool gfp_pfmemalloc_al
On 10/22/20 4:51 PM, Vlastimil Babka wrote:
On 10/22/20 5:48 AM, Rik van Riel wrote:
The allocation flags of anonymous transparent huge pages can be controlled
through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
help the system from getting bogged down in the page reclaim
On 10/22/20 5:48 AM, Rik van Riel wrote:
The allocation flags of anonymous transparent huge pages can be controlled
through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
help the system from getting bogged down in the page reclaim and compaction
code when many THPs are
Babka
Cc: Andrew Morton
Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Wei Yang
Cc: Oscar Salvador
Cc: Mike Rapoport
Cc: Pankaj Gupta
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 11 ---
1 file changed, 4 insertions
ase this instance and add a
proper comment.
This change results in all pages getting onlined via online_pages() to
be placed to the tail of the freelist.
Reviewed-by: Oscar Salvador
Acked-by: Pankaj Gupta
Reviewed-by: Wei Yang
Reviewed-by: Vlastimil Babka
ed-by: Michal Hocko
Reviewed-by: Vlastimil Babka
On 10/8/20 11:49 AM, Christophe Leroy wrote:
In a 10 years old commit
(https://github.com/linuxppc/linux/commit/d069cb4373fe0d451357c4d3769623a7564dfa9f),
powerpc 8xx has
made the handling of PTE accessed bit conditional to CONFIG_SWAP.
Since then, this has been extended to some other powerpc
On 10/10/20 12:05 AM, Axel Rasmussen wrote:
The goal of these tracepoints is to be able to debug lock contention
issues. This lock is acquired on most (all?) mmap / munmap / page fault
operations, so a multi-threaded process which does a lot of these can
experience significant contention.
We
th minimal header
requirements (avoid "include hell"). Convert the page_ref logic over to the
new helper macro.
Cc: Joonsoo Kim
Cc: Michal Nazarewicz
Cc: Vlastimil Babka
Cc: Minchan Kim
Cc: Mel Gorman
Cc: "Kirill A. Shutemov"
Cc: Sergey Senozhatsky
Cc: Arnd Bergmann
en Rostedt (VMware)
Nice! I'm late here, but you mentioned a v3, so FWIW:
Acked-by: Vlastimil Babka
On 10/19/20 12:29 PM, Xu, Yanfei wrote:
On 10/19/20 5:40 PM, Vlastimil Babka wrote:
On 10/19/20 10:36 AM, yanfei...@windriver.com wrote:
From: Yanfei Xu
There are two 'start_pfn' declared in compact_zone() which have
different meaning. Rename the second one to 'iteration_start_pfn
usion.", because trace_mm_compaction_end() has the
correct value even before the patch - the second start_pfn is out
of scope at that point.
Thanks
BTW, remove a useless semicolon.
Acked-by: David Hildenbrand
Acked-by: Vlastimil Babka
Signed-off-by: Yanfei Xu
---
v1->v2:
Rena
spin_lock(&n->list_lock);
}
+#endif
}
if (l != m) {
Hm I missed this, otherwise I would have suggested the following
-8<-
From 0b43c7e20c81241f4b74cdb366795fc0b94a25c9 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Fri, 16 Oct 2020 18:46:
On 10/13/20 10:09 AM, Mike Rapoport wrote:
We are not complaining about TCP using too much memory, but how do
we know that TCP uses a lot of memory? When I first faced this problem,
I did not know what was using the 25GB of memory, and it was not shown in /proc/meminfo.
If we can know the amount of memory
On 10/14/20 2:28 PM, David Hildenbrand wrote:
On 14.10.20 09:23, yanfei...@windriver.com wrote:
From: Yanfei Xu
start_pfn has been declared at the beginning of compact_zone(); there is
no need to declare it again. Also remove a useless semicolon.
Signed-off-by: Yanfei Xu
---
mm/compaction.c | 3
:
Acked-by: Vlastimil Babka
Nit below:
---
include/linux/page_ext.h | 8
init/main.c | 2 ++
mm/page_ext.c| 8 +++-
3 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index cfce186..aff81ba
On 10/15/20 10:23 AM, Christopher Lameter wrote:
On Wed, 14 Oct 2020, Kees Cook wrote:
Note on patch 2: Christopher NAKed it, but I actually think this is a
reasonable thing to add -- the "too small" check is only made when built
with CONFIG_DEBUG_VM, so it *is* actually possible for someone
.google.com/
Fixes: 89b83f282d8b (slub: avoid redzone when choosing freepointer location)
Tested-by: Marco Elver
Link:
https://lore.kernel.org/lkml/canpmjnowz5vpkqn+sywovtkfb4vst-rpwyenbmak0dlcpqs...@mail.gmail.com
Signed-off-by: Kees Cook
Acked-by: Vlastimil Babka
This struggle to get
port left redzone")
Fixes: ffc79d288000 ("slub: use print_hex_dump")
Fixes: 2492268472e7 ("SLUB: change error reporting format to follow lockdep
loosely")
Not sure about those Fixes: tag as this is mainly an enhancement. I'd only use
those for real bug fixes.
Signed-off-by: Kees Cook
Acked-by: Vlastimil Babka
On 10/8/20 2:31 PM, Michal Hocko wrote:
On Thu 08-10-20 13:41:59, Vlastimil Babka wrote:
All per-cpu pagesets for a zone use the same high and batch values, that are
duplicated there just for performance (locality) reasons. This patch adds the
same variables also to struct zone as a shared copy
On 10/8/20 2:45 PM, Michal Hocko wrote:
On Thu 08-10-20 13:42:01, Vlastimil Babka wrote:
Memory offlining relies on page isolation, which can race with processes freeing pages to
pcplists in a way that a page from an isolated pageblock can end up on a pcplist.
"Memory offlining relies on page isol
On 10/8/20 2:23 PM, Michal Hocko wrote:
On Thu 08-10-20 13:41:57, Vlastimil Babka wrote:
We initialize boot-time pagesets with setup_pageset(), which sets high and
batch values that effectively disable pcplists.
We can remove this wrapper if we just set these values for all pagesets
wrappers was:
build_all_zonelists_init()
setup_pageset()
pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David
the zone_pageset_init() and __zone_pcp_update()
wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 42 ++
1 file changed, 18 insertions(+), 24 deletions
.kernel.org/linux-mm/20200909113647.gg7...@dhcp22.suse.cz/
[5]
https://lore.kernel.org/linux-mm/20200904151448.100489-3-pasha.tatas...@soleen.com/
[6]
https://lore.kernel.org/linux-mm/3d3b53db-aeaa-ff24-260b-36427fac9...@suse.cz/
[7] https://lore.kernel.org/linux-mm/20200922143712.12048-1-vba...@suse.cz/
-by: David Hildenbrand
Suggested-by: Pavel Tatashin
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/memory_hotplug.c | 11 ++-
mm/page_alloc.c | 2 ++
mm/page_isolation.c | 10 +-
3 files changed, 13 insertions(+), 10 deletions
.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
---
mm/page_alloc.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 463f40b12aca..f827b42a2475 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
-by: Vlastimil Babka
---
include/linux/mmzone.h | 6 ++
mm/page_alloc.c| 17 +++--
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fb3bf696c05e..c63863794afc 100644
--- a/include/linux/mmzone.h
+++ b
s().
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
---
mm/internal.h | 2 ++
mm/memory_hotplug.c | 28 --
mm/page_alloc.c | 69 +++--
mm/page_isolation.c | 6 ++--
4 files changed, 71 in
ary read tearing, but mainly to alert anybody
making future changes to the code that special care is needed.
Signed-off-by: Vlastimil Babka
Acked-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 40 ++--
1 file changed, 18 insertions(+), 22 de
On 10/5/20 3:28 PM, Michal Hocko wrote:
On Tue 22-09-20 16:37:09, Vlastimil Babka wrote:
All per-cpu pagesets for a zone use the same high and batch values, that are
duplicated there just for performance (locality) reasons. This patch adds the
same variables also to struct zone as a shared copy
On 9/25/20 12:34 PM, David Hildenbrand wrote:
On 22.09.20 16:37, Vlastimil Babka wrote:
@@ -6300,6 +6310,8 @@ static __meminit void zone_pcp_init(struct zone *zone)
* offset of a (static) per cpu variable into the per cpu area.
*/
zone->pageset = &boot_pageset;
+ z
On 10/5/20 3:24 PM, Michal Hocko wrote:
On Tue 22-09-20 16:37:08, Vlastimil Babka wrote:
setup_zone_pageset() replaces the boot_pageset by allocating and initializing a
proper percpu one. Currently it assigns zone->pageset with the newly allocated
one before initializing it. That's curren
On 10/5/20 2:59 PM, Michal Hocko wrote:
On Tue 22-09-20 16:37:06, Vlastimil Babka wrote:
We initialize boot-time pagesets with setup_pageset(), which sets high and
batch values that effectively disable pcplists.
We can remove this wrapper if we just set these values for all pagesets
On 10/5/20 2:52 PM, Michal Hocko wrote:
On Tue 22-09-20 16:37:05, Vlastimil Babka wrote:
We currently call pageset_set_high_and_batch() for each possible cpu, which
repeats the same calculations of high and batch values.
Instead call the function just once per zone, and make it apply
On 10/5/20 4:05 PM, Michal Hocko wrote:
> On Fri 25-09-20 13:10:05, Vlastimil Babka wrote:
>> On 9/25/20 12:54 PM, David Hildenbrand wrote:
>>
>> Hmm that temporary write lock would still block new callers until previous
>> finish with the downgraded-to-read lock.
>&
; non-SMP version of __mod_node_page_state().
>
> Signed-off-by: Roman Gushchin
> Reported-by: Bastian Bittorf
> Fixes: ea426c2a7de8 ("mm: memcg: prepare for byte-sized vmstat items")
Acked-by: Vlastimil Babka
For consistency we could also duplicate the
"VM_WARN_ON_O
On 9/30/20 12:07 AM, Uladzislau Rezki wrote:
> On Tue, Sep 29, 2020 at 12:15:34PM +0200, Vlastimil Babka wrote:
>> On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
>>
>> After reading all the threads and mulling over this, I am going to deflect
>> from
>> Mel a
On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
> Some background and kfree_rcu()
> ===
> The pointers to be freed are stored in the per-cpu array to improve
> performance, to enable an easier-to-use API, to accommodate vmalloc
> memmory and to support a single
ded completely
with !CONFIG_CMA.
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 13 ++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index fab5e97..104d2e1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_al
On 9/25/20 12:54 PM, David Hildenbrand wrote:
>>> --- a/mm/page_isolation.c
>>> +++ b/mm/page_isolation.c
>>> @@ -15,6 +15,22 @@
>>> #define CREATE_TRACE_POINTS
>>> #include
>>>
>>> +void zone_pcplist_disable(struct zone *zone)
>>> +{
>>> + down_read(&pcp_batch_high_lock);
>>> + if
On 9/25/20 6:59 AM, Joonsoo Kim wrote:
> 2020년 8월 28일 (금) 오전 8:54, Joonsoo Kim 님이 작성:
>
> Hello, Andrew and Vlastimil.
>
> It's better to fix this possible bug introduced in v5.9-rc1 before
> v5.9 is released.
> Which approach do you prefer?
> If it is determined, I will immediately send a patch
On 9/25/20 10:05 AM, David Hildenbrand wrote:
static inline void del_page_from_free_list(struct page *page, struct zone
*zone,
unsigned int order)
{
@@ -2323,7 +2332,7 @@ static inline struct page
On 9/23/20 5:26 PM, David Hildenbrand wrote:
> On 23.09.20 16:31, Vlastimil Babka wrote:
>> On 9/16/20 9:31 PM, David Hildenbrand wrote:
>>
>
> Hi Vlastimil,
>
>> I see the point, but I don't think the head/tail mechanism is great for
>> this. It
&
t that the new behavior is undesirable for
> __free_pages_core() during boot, we can let the caller specify the
> behavior.
>
> Cc: Andrew Morton
> Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
&