tplug callback takes the
slab_mutex.
To sum up, this patch removes get/put_online_cpus() calls from slab as it
should be safe without further adjustments.
Signed-off-by: Vlastimil Babka
---
mm/slab_common.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.
The subject should say BUILD_BUG()
On 12/30/20 4:40 PM, Arnd Bergmann wrote:
> From: Arnd Bergmann
>
> clang cannot evaluate this function argument at compile time
> when the function is not inlined, which leads to a link-time
> failure:
>
> ld.lld: error: undefined symbol:
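A minimal sketch of the mechanism behind this failure (simplified, not the
kernel's actual BUILD_BUG() macro): the assertion relies on a call to a
deliberately undefined function being optimized away, which only works when
the condition folds to a compile-time constant.

        extern void __my_link_error(void);      /* never defined anywhere */

        #define MY_BUILD_BUG_ON(cond) do { if (cond) __my_link_error(); } while (0)

        static inline int check_order(int order)
        {
                MY_BUILD_BUG_ON(order > 8);     /* needs a constant 'order' */
                return order;
        }

If the compiler does not inline check_order(), 'order' is no longer constant,
the call to __my_link_error() survives, and the linker reports an undefined
symbol, as in the ld.lld error above.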
On 12/14/20 10:16 PM, Hugh Dickins wrote:
> On Tue, 24 Nov 2020, Rik van Riel wrote:
>
>> The allocation flags of anonymous transparent huge pages can be controlled
>> through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
>> help keep the system from getting bogged down in the
ons in struct page changed, such changes should
be done consciously and needed changes evaluated - the comment should help with
that.
Signed-off-by: Vlastimil Babka
---
mm/slab.c | 3 ++-
mm/slub.c | 4 ++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index
On 12/10/20 12:04 AM, Paul E. McKenney wrote:
>> > +/**
>> > + * kmem_valid_obj - does the pointer reference a valid slab object?
>> > + * @object: pointer to query.
>> > + *
>> > + * Return: %true if the pointer is to a not-yet-freed object from
>> > + * kmalloc() or kmem_cache_alloc(), either
On 12/10/20 12:23 AM, Paul E. McKenney wrote:
> On Wed, Dec 09, 2020 at 06:51:20PM +0100, Vlastimil Babka wrote:
>> On 12/9/20 2:13 AM, paul...@kernel.org wrote:
>> > From: "Paul E. McKenney"
>> >
>> > This commit adds vmalloc() support to mem_du
On 12/9/20 2:13 AM, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> This commit adds vmalloc() support to mem_dump_obj(). Note that the
> vmalloc_dump_obj() function combines the checking and dumping, in
> contrast with the split between kmem_valid_obj() and kmem_dump_obj().
> The
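A sketch of how the combined and split styles fit together in a caller
(simplified; the exact message and fallback behavior are assumed, not taken
from the patch):

        void mem_dump_obj(void *object)
        {
                if (kmem_valid_obj(object)) {   /* split: check first... */
                        kmem_dump_obj(object);  /* ...then dump */
                        return;
                }
                if (vmalloc_dump_obj(object))   /* combined: check and dump */
                        return;
                pr_info("%px is not a slab or vmalloc object\n", object);
        }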
David Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc:
> Reported-by: Andrii Nakryiko
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
> ---
> mm/util.c | 7 ++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/util
On 12/9/20 2:12 AM, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> There are kernel facilities such as per-CPU reference counts that give
> error messages in generic handlers or callbacks, whose messages are
> unenlightening. In the case of per-CPU reference-count underflow, this
> is
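A hypothetical example of the kind of generic callback described above, using
the helper this series adds (handler name and message invented):

        static void my_percpu_ref_release(struct percpu_ref *ref)
        {
                pr_err("unexpected release of ref %px\n", ref);
                mem_dump_obj(ref);      /* report slab/vmalloc provenance */
        }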
o available memory, let's not complicate
things by making this optional.
> Signed-off-by: Liam Mark
> Signed-off-by: Georgi Djakov
Acked-by: Vlastimil Babka
> ---
>
> v2:
> - Improve the commit message (Andrew and Vlastimil)
> - Update page_owner.rst with more recent object size
ing that at that time the page could not be migrated, but
> that has nothing to do with an EIO error.
>
> Let us return -EBUSY instead, as we do in case we failed to isolate
> the page.
>
> While at it, let us remove the "ret" print as its value does not change.
>
On 12/9/20 8:58 AM, Dan Carpenter wrote:
> On Tue, Dec 08, 2020 at 09:01:49PM -0800, Joe Perches wrote:
>> On Tue, 2020-12-08 at 16:34 -0800, Kees Cook wrote:
>>
>> > If not "Adjusted-by", what about "Tweaked-by", "Helped-by",
>> > "Corrected-by"?
>>
>> Improved-by: / Enhanced-by: /
On 12/1/20 12:35 PM, Oscar Salvador wrote:
> On Wed, Nov 25, 2020 at 07:20:33PM +0100, Vlastimil Babka wrote:
>> On 11/19/20 11:57 AM, Oscar Salvador wrote:
>> > From: Naoya Horiguchi
>> >
>> > The call to get_user_pages_fast is only to get the pointer to a
On 12/2/20 2:11 AM, Shakeel Butt wrote:
> On Tue, Dec 1, 2020 at 5:07 PM Steven Rostedt wrote:
>>
>> On Tue, 1 Dec 2020 16:36:32 -0800
>> Shakeel Butt wrote:
>>
>> > SGTM but note that usually Andrew squashes all the patches into one
>> > before sending to Linus. If you plan to replace the patch
nr_swap_pages(0) -= ngoals
> nr_swap_pages = -1
>
> Signed-off-by: Zhaoyang Huang
Better now.
Acked-by: Vlastimil Babka
> ---
> changes in v2: fix bug of unpaired spin_lock
> ---
> ---
> mm/swapfile.c
Also adjust max_order initialization so that it's lower by one than previously,
which hopefully makes the code clearer.
> Signed-off-by: Muchun Song
Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other
pageblocks")
Acked-by: Vlastimil Babka
Thanks!
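For reference, a sketch of why the merge loop walks one order at a time: the
buddy of a page at a given order is found by flipping a single pfn bit, so
each step can only combine two pages of equal order.

        /* same computation as the page allocator's __find_buddy_pfn() */
        static unsigned long buddy_pfn_of(unsigned long pfn, unsigned int order)
        {
                return pfn ^ (1UL << order);
        }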
On 12/4/20 5:03 AM, Muchun Song wrote:
> On Fri, Dec 4, 2020 at 1:37 AM Vlastimil Babka wrote:
>>
>> On 12/2/20 1:18 PM, Muchun Song wrote:
>> > When we free a page whose order is very close to MAX_ORDER and greater
>> > than pageblock_order, it wastes some
On 12/3/20 12:36 PM, Zhaoyang Huang wrote:
> The scenario in which "Free swap -4kB" happens on my system is caused by
> get_swap_page_of_type or get_swap_pages racing with show_mem. Remove the
> race here.
>
> Signed-off-by: Zhaoyang Huang
> ---
> mm/swapfile.c | 7 +++
> 1 file
On 12/2/20 1:18 PM, Muchun Song wrote:
> When we free a page whose order is very close to MAX_ORDER and greater
> than pageblock_order, it wastes some CPU cycles to increase max_order
> to MAX_ORDER one by one and check the pageblock migratetype of that page
But we have to do that. It's not the
On 12/3/20 5:26 PM, David Hildenbrand wrote:
> On 03.12.20 01:03, Vlastimil Babka wrote:
>> On 12/2/20 1:21 PM, Muchun Song wrote:
>>> The max order page has no buddy page and never merges to another order.
>>> So isolating and then freeing it is pointless.
>>
On 12/3/20 3:43 AM, Muchun Song wrote:
> On Thu, Dec 3, 2020 at 8:03 AM Vlastimil Babka wrote:
>>
>> On 12/2/20 1:21 PM, Muchun Song wrote:
>> > The max order page has no buddy page and never merges to another order.
>> > So isolating and then freeing it is pointless
On 12/2/20 1:21 PM, Muchun Song wrote:
> The max order page has no buddy page and never merges to another order.
> So isolating and then freeing it is pointless.
>
> Signed-off-by: Muchun Song
Acked-by: Vlastimil Babka
> ---
> mm/page_isolation.c | 2 +-
> 1 file chang
Hi,
there was a bit of debate on Twitter about this, so I thought I would bring it
here. Imagine a scenario where a patch sits as a commit in -next and there's a bug
report or fix, possibly by a bot or with some static analysis. The maintainer
decides to fold it into the original patch, which makes
On 11/30/20 2:45 PM, Michal Hocko wrote:
On Mon 30-11-20 21:36:49, Muchun Song wrote:
On Mon, Nov 30, 2020 at 9:23 PM Michal Hocko wrote:
>
> On Mon 30-11-20 21:15:12, Muchun Song wrote:
> > We found a case of kernel panic. The stack trace is as follows
> > (omitting some irrelevant information):
On 11/27/20 8:23 PM, Souptick Joarder wrote:
On Sat, Nov 28, 2020 at 12:36 AM Vlastimil Babka wrote:
On 11/27/20 7:57 PM, Georgi Djakov wrote:
> Hi Vlastimil,
>
> Thanks for the comment!
>
> On 11/27/20 19:52, Vlastimil Babka wrote:
>> On 11/12/20 8:14 PM, Andrew Morton
On 11/27/20 7:57 PM, Georgi Djakov wrote:
Hi Vlastimil,
Thanks for the comment!
On 11/27/20 19:52, Vlastimil Babka wrote:
On 11/12/20 8:14 PM, Andrew Morton wrote:
On Thu, 12 Nov 2020 20:41:06 +0200 Georgi Djakov
wrote:
From: Liam Mark
Collect the time for each allocation recorded
On 11/12/20 8:14 PM, Andrew Morton wrote:
On Thu, 12 Nov 2020 20:41:06 +0200 Georgi Djakov
wrote:
From: Liam Mark
Collect the time for each allocation recorded in page owner so that
allocation "surges" can be measured.
Record the pid for each allocation recorded in page owner so that
the
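A sketch of the two per-allocation fields being added (field names assumed,
not necessarily those in the final patch):

        struct page_owner {
                /* ... existing fields ... */
                u64 ts_nsec;    /* allocation time, e.g. ktime_get_ns() */
                pid_t pid;      /* current->pid of the allocating task */
        };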
ove the inline for func declaration in shmem_fs.h
v2->v3:
make shmem_aops global, and export it to modules.
Signed-off-by: Hui Su
Acked-by: Vlastimil Babka
---
include/linux/shmem_fs.h | 6 +-
mm/shmem.c | 16 ++--
2 files changed, 11 insertions(+), 11 del
On 11/15/20 6:40 PM, Hui Su wrote:
in shmem_get_inode():
new_inode();
  new_inode_pseudo();
    alloc_inode();
      ops->alloc_inode(); -> shmem_alloc_inode()
        kmem_cache_alloc();
memset(info, 0, (char *)inode - (char *)info);
So use kmem_cache_zalloc() in shmem_alloc_inode(),
and
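The replacement being proposed, as a minimal sketch (variable and cache names
as in shmem; note the original memset() zeroes only the bytes preceding the
embedded inode, while kmem_cache_zalloc() zeroes the whole object, relying on
the inode part being initialized afterwards):

        /* before: allocate, then zero the leading part by hand */
        info = kmem_cache_alloc(shmem_inode_cachep, GFP_KERNEL);
        ...
        memset(info, 0, (char *)inode - (char *)info);

        /* after: __GFP_ZERO makes the allocator hand back a zeroed object */
        info = kmem_cache_zalloc(shmem_inode_cachep, GFP_KERNEL);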
-off-by: Zou Wei
Acked-by: Vlastimil Babka
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63d8d8b..e7548344 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3037,7 +3037,7 @@ static void
of allocation patterns because the count value is not printed in
cma_release().
We already print the count value in the trace logs; extend the same
to the pr_debug logs too.
Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
---
mm/cma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
On 11/27/20 3:19 PM, Muchun Song wrote:
The current pageblock isolation logic can isolate each pageblock individually
since commit d9dddbf55667 ("mm/page_alloc: prevent merging between isolated
and other pageblocks"). So we no longer need to worry about the page allocator
merging buddies from different
On 11/26/20 7:14 PM, Rik van Riel wrote:
> On Thu, 2020-11-26 at 18:18 +0100, Vlastimil Babka wrote:
>> On 11/24/20 8:49 PM, Rik van Riel wrote:
>>> Currently if thp enabled=[madvise], mounting a tmpfs filesystem
>>> with huge=always and mmapping files from tha
On 11/24/20 8:49 PM, Rik van Riel wrote:
Currently if thp enabled=[madvise], mounting a tmpfs filesystem
with huge=always and mmapping files from that tmpfs does not
result in khugepaged collapsing those mappings, despite the
mount flag indicating that it should.
Fix that by breaking up the
ill be a little
more aggressive than today for files mmapped with MADV_HUGEPAGE,
and a little less aggressive for files that are not mmapped or
mapped without that flag.
Signed-off-by: Rik van Riel
Acked-by: Vlastimil Babka
On 11/26/20 12:22 PM, Vlastimil Babka wrote:
On 11/26/20 8:24 AM, Yu Zhao wrote:
On Thu, Nov 26, 2020 at 02:39:03PM +0800, Alex Shi wrote:
On 11/26/20 12:52 PM, Yu Zhao wrote:
>> */
>> void __pagevec_lru_add(struct pagevec *pvec)
>> {
>> - int i;
>> -
On 11/26/20 3:25 AM, Alex Shi wrote:
On 11/26/20 7:43 AM, Andrew Morton wrote:
On Tue, 24 Nov 2020 12:21:28 +0100 Vlastimil Babka wrote:
On 11/22/20 3:00 PM, Alex Shi wrote:
Thanks a lot for all comments, I picked all up and here is the v3:
From 167131dd106a96fd08af725df850e0da6ec899af Mon
On 11/19/20 11:57 AM, Oscar Salvador wrote:
get_hwpoison_page already drains pcplists, previously disabling
them when trying to grab a refcount.
We do not need shake_page to take care of it anymore.
Signed-off-by: Oscar Salvador
---
mm/memory-failure.c | 7 ++-
1 file changed, 2
soft_offline and memory_failure paths that is guarded by
zone_pcplist_disable/zone_pcplist_enable.
[1]
https://patchwork.kernel.org/project/linux-mm/cover/2020092812.11329-1-vba...@suse.cz/
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
Note as you say the series should go after [1] above
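The guard pattern referred to above, sketched with the names used in this
thread (the API naming may differ in the final series):

        zone_pcplist_disable(zone);     /* drain pcplists and keep them empty */
        /* ... safely check/take the poisoned page off the free lists ... */
        zone_pcplist_enable(zone);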
On 11/26/20 8:24 AM, Yu Zhao wrote:
On Thu, Nov 26, 2020 at 02:39:03PM +0800, Alex Shi wrote:
On 11/26/20 12:52 PM, Yu Zhao wrote:
>> */
>> void __pagevec_lru_add(struct pagevec *pvec)
>> {
>> - int i;
>> - struct lruvec *lruvec = NULL;
>> + int i, nr_lruvec;
>>
On 11/26/20 4:12 AM, Alex Shi wrote:
On 11/25/20 11:38 PM, Vlastimil Babka wrote:
On 11/20/20 9:27 AM, Alex Shi wrote:
The current relock logic changes lru_lock whenever a new lruvec is
found, so if 2 memcgs are reading files or allocating pages at the same
time, they could hold the lru_lock alternately
On 11/19/20 11:57 AM, Oscar Salvador wrote:
From: Naoya Horiguchi
The call to get_user_pages_fast is only to get the pointer to a struct
page of a given address, pinning it is memory-poisoning handler's job,
so drop the refcount grabbed by get_user_pages_fast().
Note that the target page is
get_any_page and __get_any_page, and let the message
be printed in soft_offline_page.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
On 11/19/20 11:57 AM, Oscar Salvador wrote:
pfn parameter is no longer needed, drop it.
Could have also been part of the previous patch.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
took the page off a buddy freelist 2) the page was
in-use and we migrated it 3) was a clean pagecache.
Because of that, a page can no longer be poisoned and be in a pcplist.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
---
mm/madvise.c | 5 -
1 file changed, 5 deletions
On 10/13/20 4:44 PM, Oscar Salvador wrote:
Currently, free hugetlb pages get dissolved, but we also need to make sure
to take the poisoned subpage off the buddy freelists, so no one stumbles
upon it (see previous patch for more information).
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
we should be on the safe
side.
[1] https://lore.kernel.org/linux-mm/20190826104144.GA7849@linux/T/#u
[2] https://patchwork.kernel.org/cover/11792607/
Signed-off-by: Oscar Salvador
Acked-by: Naoya Horiguchi
Makes a lot of sense.
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 27
pcplists whenever we find this kind of page and retry
the check. It might be that the pcplists have been spilled into the
buddy allocator and so we can handle it.
Signed-off-by: Oscar Salvador
Acked-by: Naoya Horiguchi
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 24
compaction will be cleared
Cc: Andrew Morton
Cc: Alexander Potapenko
Cc: Michal Hocko
Cc: Mike Kravetz
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Oscar Salvador
Cc: Kees Cook
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
---
This is the follow-up of:
&quo
On 11/20/20 9:27 AM, Alex Shi wrote:
The current relock logic changes lru_lock whenever a new lruvec is
found, so if 2 memcgs are reading files or allocating pages at the same
time, they could hold the lru_lock alternately, waiting for each other
due to the fairness attribute of the ticket spin lock.
This patch will
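A sketch of the relock pattern under discussion (helper names assumed): the
lock is only dropped and retaken when iteration crosses into a page that
belongs to a different lruvec.

        struct lruvec *lruvec = NULL;

        list_for_each_entry(page, list, lru) {
                struct lruvec *new = mem_cgroup_page_lruvec(page, pgdat);

                if (new != lruvec) {
                        if (lruvec)
                                spin_unlock_irq(&lruvec->lru_lock);
                        lruvec = new;
                        spin_lock_irq(&lruvec->lru_lock);
                }
                /* ... process page under its lruvec's lock ... */
        }
        if (lruvec)
                spin_unlock_irq(&lruvec->lru_lock);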
;10%) in
previous formula.
Signed-off-by: Lin Feng
Acked-by: Vlastimil Babka
---
init/main.c | 2 --
mm/page_alloc.c | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/init/main.c b/init/main.c
index 20baced721ad..a3f7c3416286 100644
--- a/init/main.c
+++ b/i
On 11/25/20 4:46 AM, Matthew Wilcox (Oracle) wrote:
Code outside mm/ should not be calling free_unref_page(). Also
move free_unref_page_list().
Good idea.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
There seems to be some effort to remove "extern" fro
andle pgtable_page_ctor() fail")
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
---
arch/sparc/mm/init_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 96edf64d4fb3..182bb7bdaa0a 100644
--- a/arch/sparc/
On 11/25/20 6:34 AM, Andrea Arcangeli wrote:
Hello,
On Mon, Nov 23, 2020 at 02:01:16PM +0100, Vlastimil Babka wrote:
On 11/21/20 8:45 PM, Andrea Arcangeli wrote:
> A corollary issue was fixed in
> 39639000-39814fff : Unknown E820 type
>
> pfn 0x7a200 -> 0x7a20 min_
Please CC linux-api on future versions.
On 10/26/20 5:05 PM, Topi Miettinen wrote:
Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
enables full randomization of memory mappings created with mmap(NULL,
...). With 2, the base of the VMA used for such mappings is random,
but the
On 11/23/20 4:10 PM, Charan Teja Kalla wrote:
Thanks Michal!
On 11/23/2020 7:43 PM, Michal Hocko wrote:
On Mon 23-11-20 19:33:16, Charan Teja Reddy wrote:
When pages fail to get isolated or migrated, the page owner
information along with the page info is dumped. If there are continuous
-by: Vlastimil Babka
---
include/linux/compaction.h | 12
mm/compaction.c | 8
2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index 1de5a1151ee7..ed4070ed41ef 100644
--- a/include/linux
stayed, but it's not repeating that much
without it (list_move() + continue, 3 times) so...
Acked-by: Vlastimil Babka
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc: linux...@kvack.org
Cc: linux-kernel
+CC John Hubbard
On 11/20/20 9:27 PM, Pavel Tatashin wrote:
Recently, I encountered a hang that is happening during memory hot
remove operation. It turns out that the hang is caused by pinned user
pages in ZONE_MOVABLE.
Kernel expects that all pages in ZONE_MOVABLE can be migrated, but
this is
On 11/21/20 8:45 PM, Andrea Arcangeli wrote:
A corollary issue was fixed in
e577c8b64d58fe307ea4d5149d31615df2d90861. A second issue remained in
v5.7:
https://lkml.kernel.org/r/8c537eb7-85ee-4dcf-943e-3cc0ed0df...@lca.pw
==
page:eaaa refcount:1 mapcount:0
when determining the
minimum objects, thereby increasing the chances of choosing
a lower conservative page order for the slab.
Signed-off-by: Bharata B Rao
Acked-by: Vlastimil Babka
Ideally, we would react to hotplug events and update existing caches
accordingly. But for that, recalculation
On 11/13/20 1:10 PM, David Hildenbrand wrote:
@@ -1186,12 +1194,12 @@ void clear_free_pages(void)
if (WARN_ON(!(free_pages_map)))
return;
- if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
+ if (page_poisoning_enabled() ||
We can use the same mechanism to instead poison free pages with PAGE_POISON
after resume. This covers both zero and 0xAA patterns. Thus we can remove the
Kconfig restriction that disables page poison sanity checking when hibernation
is enabled.
Signed-off-by: Vlastimil Babka
Acked-by: Rafael J. Wysock
t checking it back on alloc. Thus, remove this option and suggest
init_on_free instead in the main config's help.
Signed-off-by: Vlastimil Babka
Acked-by: David Hildenbrand
---
drivers/virtio/virtio_balloon.c | 4 +---
mm/Kconfig.debug | 15 ---
mm/page_poison.c
. This results in simpler and more
effective code.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Mike Rapoport
---
include/linux/mm.h | 20 ++-
init/main.c | 2 +-
mm/page_alloc.c | 88 ++
3 files changed
us, remove the CONFIG_PAGE_POISONING_ZERO option for
being redundant.
Signed-off-by: Vlastimil Babka
Acked-by: David Hildenbrand
---
include/linux/poison.h | 4
mm/Kconfig.debug | 12
mm/page_alloc.c | 8 +---
tools/include/linux/poi
oc support. Move the check to
init_mem_debugging_and_hardening() to enable a single static key instead of
having two static branches in page_poisoning_enabled_static().
Signed-off-by: Vlastimil Babka
---
drivers/virtio/virtio_balloon.c | 2 +-
include/linux/mm.h | 33 ++
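A minimal sketch of the single-static-key arrangement described above (the
init function and boot-flag names here are assumed):

        DEFINE_STATIC_KEY_FALSE(_page_poisoning_enabled);

        static void __init init_mem_debugging(void)
        {
                if (want_page_poisoning)        /* boot-time decision */
                        static_branch_enable(&_page_poisoning_enabled);
        }

        static inline bool page_poisoning_enabled_static(void)
        {
                /* one static branch in the fast path instead of two */
                return static_branch_unlikely(&_page_poisoning_enabled);
        }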
rnel.org/r/20201026173358.14704-1-vba...@suse.cz
[2] https://lore.kernel.org/linux-mm/20201103152237.9853-1-vba...@suse.cz/
Vlastimil Babka (5):
mm, page_alloc: do not rely on the order of page_poison and
init_on_alloc/free parameters
mm, page_poison: use static key more efficiently
kernel/po
On 11/11/20 6:58 PM, David Hildenbrand wrote:
On 11.11.20 10:28, Vlastimil Babka wrote:
- /*
-* per-cpu pages are drained after start_isolate_page_range, but
-* if there are still pages that are not free, make sure that we
-* drain
.
8<
From cae1e8ccfa57c28ed1b2f5f8a47319b86cbdcfbf Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Thu, 12 Nov 2020 15:33:07 +0100
Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
checking-fix
Adapt to __kernel_unpoison_pages fixup. Split
On 11/11/20 4:38 PM, David Hildenbrand wrote:
On 03.11.20 16:22, Vlastimil Babka wrote:
Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
changed page_poisoning_enabled() to a static key check. However, the function
is not inlined, so each check stil
Cc: Jann Horn
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: cgro...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Acked-by: Vlastimil Babka
Duyck
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andrey Ryabinin
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Tejun Heo
Cc: linux
On 11/5/20 9:55 AM, Alex Shi wrote:
This patch moves the per-node lru_lock into the lruvec, thus bringing an
lru_lock to each memcg per node. So on a large machine, each memcg no
longer has to suffer from per-node pgdat->lru_lock competition. They can
go faster with their own lru_lock.
After move
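A sketch of the before/after locking this describes (helper name assumed):

        /* before: one lock per node, shared by every memcg */
        spin_lock_irq(&pgdat->lru_lock);

        /* after: each memcg's per-node lruvec carries its own lock */
        struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
        spin_lock_irq(&lruvec->lru_lock);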
On 11/12/20 3:28 AM, Hugh Dickins wrote:
On Wed, 11 Nov 2020, Vlastimil Babka wrote:
On 11/5/20 9:55 AM, Alex Shi wrote:
> @@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
>goto isolate
On 11/12/20 3:03 AM, Hugh Dickins wrote:
On Wed, 11 Nov 2020, Vlastimil Babka wrote:
On 11/5/20 9:55 AM, Alex Shi wrote:
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1542,7 +1542,7 @@ unsigned int reclaim_clean_pages_from_list(struct
> zone *zone,
>*/
> int __isola
On 11/11/20 6:46 PM, Vlastimil Babka wrote:
Acked-by: Vlastimil Babka
Err, not yet, that was meant for patch 16/17
work, so __pagevec_lru_add() goes its own
way.
Reported-by: Hugh Dickins
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
---
mm/sw
rew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Yang Shi
Cc: Matthew Wilcox
Cc: Konstantin Khlebnikov
Cc: Tejun Heo
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Cc: cgro...@vger.kernel.org
Acked-by: Vlastimil Babka
Cc: Matthew Wilcox
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Acked-by: Vlastimil Babka
A question below:
@@ -979,10 +995,6 @@ static bool too_many_isolated(pg_data_t *pgdat)
goto isolate_abort;
}
- /* Recheck P
On 11/5/20 9:55 AM, Alex Shi wrote:
Currently lru_lock still guards both the lru list and the page's lru bit,
which is OK. But if we want to use a specific lruvec lock for the page, we
need to pin down the page's lruvec/memcg during locking. Just taking the
lruvec lock first may be undermined by the page's memcg
On 11/5/20 9:55 AM, Alex Shi wrote:
The function only has one caller; remove it to clean up and simplify the
code.
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Hugh Dickins
Cc: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc
on __mod_zone_page_state, which needs changing to
mod_zone_page_state. Thanks!
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Andrew Morton
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
Nit
tly
but not entirely prevented by page_count() check in ksm.c's
write_protect_page(): that risk being shared with page_referenced() and
not helped by lru_lock).
Signed-off-by: Hugh Dickins
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Vladimir Davydov
Cc: Vlastimil Babka
Cc: Minchan Kim
Cc: Alex
on.org: coding style fixes]
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Vlastimil Babka
Nice cleanup!
On 11/11/20 10:06 AM, David Hildenbrand wrote:
On 11.11.20 09:47, Michal Hocko wrote:
On Tue 10-11-20 20:32:40, David Hildenbrand wrote:
commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and
init_on_free=1 boot options") resulted with init_on_alloc=1 in all pages
leaving the buddy
ary read tearing, but mainly to alert anybody
making future changes to the code that special care is needed.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 40 ++--
1 file change
the zone_pageset_init() and __zone_pcp_update()
wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Acked-by: Michal Hocko
---
mm/page_alloc.c | 42 ++
1 file changed, 18 insertions(+), 24 deletions
-by: David Hildenbrand
Suggested-by: Pavel Tatashin
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/memory_hotplug.c | 11 ++-
mm/page_alloc.c | 2 ++
mm/page_isolation.c | 10 +-
3 files changed, 13
.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/page_alloc.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2fa432762908
m/3d3b53db-aeaa-ff24-260b-36427fac9...@suse.cz/
[7] https://lore.kernel.org/linux-mm/20200922143712.12048-1-vba...@suse.cz/
[8] https://lore.kernel.org/linux-mm/20201008114201.18824-1-vba...@suse.cz/
Vlastimil Babka (7):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: ca
users of
zone_pcp_disable()/enable().
Currently the only user of this functionality is offline_pages().
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/internal.h | 2 ++
mm/memory_hotplug.c |
wrappers was:
build_all_zonelists_init()
  setup_pageset()
    pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David
-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
include/linux/mmzone.h | 6 ++
mm/page_alloc.c | 16 ++--
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7385871768d4
On 11/8/20 7:57 AM, Mike Rapoport wrote:
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache
*cachep)
return false;
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int
On 10/28/20 6:50 AM, Bharata B Rao wrote:
slub_max_order
--
The most promising tunable that shows consistent reduction in slab memory
is slub_max_order. Here is a table that shows the number of slabs that
end up with different orders and the total slab consumption at boot
for
On 11/5/20 2:19 PM, Michal Hocko wrote:
On Thu 05-11-20 14:14:25, Vlastimil Babka wrote:
On 11/5/20 1:58 PM, Michal Hocko wrote:
> On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
> > On 11/5/20 1:08 PM, Michal Hocko wrote:
> > > On Thu 05-11-20 09:40:28, Feng Tang wrote:
>
On 11/5/20 1:58 PM, Michal Hocko wrote:
On Thu 05-11-20 13:53:24, Vlastimil Babka wrote:
On 11/5/20 1:08 PM, Michal Hocko wrote:
> On Thu 05-11-20 09:40:28, Feng Tang wrote:
> > > > Could you be more specific? This sounds like a bug. Allocations
> > > shouldn't sp
On 11/5/20 1:08 PM, Michal Hocko wrote:
On Thu 05-11-20 09:40:28, Feng Tang wrote:
>
> Could you be more specific? This sounds like a bug. Allocations
> shouldn't spill over to a node which is not in the cpuset. There are few
> exceptions like IRQ context but that shouldn't happen regurarly.
On 11/5/20 10:04 AM, Kalle Valo wrote:
(changing the subject, adding more lists and people)
Pavel Procopiuc writes:
On 04.11.2020 at 10:12, Kalle Valo wrote:
Yeah, it is unfortunately time consuming but it is the best way to get
bottom of this.
I have found the commit that breaks things