On 4/17/24 2:52 PM, Konstantin Ryabitsev wrote:
> On Wed, Apr 17, 2024 at 09:48:18AM +0200, Thorsten Leemhuis wrote:
>> Hi kernel.org helpdesk!
>>
>> Could you please create the email alias
>> do-not-apply-to-sta...@kernel.org which redirects all mail to /dev/null,
>> just like sta...@kernel.org
y for this part of mm in particular with regular contributors
> tagged as reviewers.
>
> Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
Would be nice if this targeted sub-reviewing could be managed in a simpler
way as part of the MM section instead of having to define a new o
10685.68 ( 0.00%)  11399.02 * -6.68%*
>
> Signed-off-by: Baolin Wang
> Acked-by: Mel Gorman
Reviewed-by: Vlastimil Babka
Thanks.
> ---
> Hi Andrew, please use this patch to replace below 2 old patches. Thanks.
> https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.g
> triggering KMSAN, so unpoison its return value.
>
> Signed-off-by: Ilya Leoshkevich
Acked-by: Vlastimil Babka
> ---
> mm/slub.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2d29d368894c..802702748925 100644
> --- a/mm
ctions to KMSAN.
>
> Signed-off-by: Ilya Leoshkevich
Acked-by: Vlastimil Babka
> ---
> mm/slub.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 169e5f645ea8..6e61c27951a4 100644
> --- a/mm/slub.c
> +++ b
+Cc workflows
On 11/24/23 12:43, Greg Kroah-Hartman wrote:
> On Thu, Nov 23, 2023 at 07:20:46PM +0100, Oleksandr Natalenko wrote:
>> Hello.
>>
>> Since v6.6.2 kernel release I'm experiencing a regression with regard
>> to USB ports behaviour after a suspend/resume cycle.
>>
>> If a USB port is
the parameter and with that the whole ISOLATE_UNMAPPED flag.
Signed-off-by: Vlastimil Babka
---
.../trace/postprocess/trace-vmscan-postprocess.pl | 8
include/linux/mmzone.h | 2 --
include/trace/events/vmscan.h | 8 ++--
list is being scanned. However the parameter currently
only indicates ISOLATE_UNMAPPED. We can use the lru parameter instead to
determine which list is scanned, and stop checking isolate_mode.
Signed-off-by: Vlastimil Babka
---
.../postprocess/trace-vmscan-postprocess.pl | 40
t to save stack trace.
>
> The benefits are smaller memory overhead and possibility to aggregate
> per-cache statistics in the future using the stackdepot handle
> instead of matching stacks manually.
>
> Signed-off-by: Oliver Glitta
Reviewed-by: Vlastimil Babka
(again
On 4/16/21 4:27 PM, Faiyaz Mohammed wrote:
> alloc_calls and free_calls implementation in sysfs have two issues,
> one is the PAGE_SIZE limitation of sysfs and the other is that it does not adhere
> to "one value per file" rule.
>
> To overcome these issues, move the alloc_calls and free_calls implementation
>
On 4/14/21 4:38 AM, lipeif...@oppo.com wrote:
> From: lipeifeng
>
> This patch would "sort" the free pages in buddy by page PFN to concentrate
> low-order-page allocations in the front area of memory and high-order-page
> allocations at the back, so that there is little memory pollution in the back area
On 4/14/21 3:39 PM, Mel Gorman wrote:
> struct per_cpu_pages is protected by the pagesets lock, but the lock can be
> embedded within struct per_cpu_pages at a minor cost. This is possible
> because per-cpu lookups are based on offsets. Paraphrasing an explanation
> from Peter Zijlstra
>
> The whole
On 4/14/21 3:39 PM, Mel Gorman wrote:
> VM events do not need explicit protection by disabling IRQs so
> update the counter with IRQs enabled in __free_pages_ok.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 3 ++-
> 1 file change
> Note that this may incur a performance penalty while memory hot-remove
> is running but that is not a common operation.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
A nit below:
> @@ -3294,6 +3295,7 @@ void free_unref_page_list(struct list_head *list)
> st
On 4/15/21 11:33 AM, Mel Gorman wrote:
> On Wed, Apr 14, 2021 at 07:21:42PM +0200, Vlastimil Babka wrote:
>> On 4/14/21 3:39 PM, Mel Gorman wrote:
>> > Both free_pcppages_bulk() and free_one_page() have very similar
>> > checks about whether a page's migratetype has
On 4/15/21 12:10 PM, Oliver Glitta wrote:
> On Tue, 13 Apr 2021 at 15:54, Marco Elver wrote:
>>
>> On Tue, 13 Apr 2021 at 12:07, wrote:
>> > From: Oliver Glitta
>> >
>> > SLUB has resiliency_test() function which is hidden behind #ifdef
>> > SLUB_RESILIENCY_TEST that is not part of Kconfig, so
never slab_bug() or slab_fix() is called or when
> the count of pages is wrong.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
(again with a disclaimer that I'm the advisor of Oliver's student project)
es structure either or a local_lock would
> be used.
>
> This patch explicitly acquires the lock with spin_lock_irqsave instead of
> relying on a helper. This removes the last instance of local_irq_save()
> in page_alloc.c.
\o/
> Signed-off-by: Mel Gorman
Acked-by:
On 4/14/21 3:39 PM, Mel Gorman wrote:
> Both free_pcppages_bulk() and free_one_page() have very similar
> checks about whether a page's migratetype has changed under the
> zone lock. Use a common helper.
>
> Signed-off-by: Mel Gorman
Seems like for free_pcppages_bulk() this patch makes it check
_[lock|unlock]_irq on !PREEMPT_RT kernels. One
> __mod_zone_freepage_state is still called with IRQs disabled. While this
> could be moved out, it's not free on all architectures as some require
> IRQs to be disabled for mod_zone_page_state on !PREEMPT_RT kernels.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> include/linux/vmstat.h | 8
> mm/page_alloc.c | 30 +-
> 2 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.
On 4/14/21 6:20 PM, Vlastimil Babka wrote:
> On 4/14/21 3:39 PM, Mel Gorman wrote:
>> __count_numa_event is small enough to be treated similarly to
>> __count_vm_event so inline it.
>>
>> Signed-off-by: Mel Gorman
>
> Acked-by: Vlastimil Babka
>
>
On 4/14/21 3:39 PM, Mel Gorman wrote:
> __count_numa_event is small enough to be treated similarly to
> __count_vm_event so inline it.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> include/linux/vmstat.h | 9 +
> mm/vmstat.c | 9
On 4/14/21 5:18 PM, Mel Gorman wrote:
> On Wed, Apr 14, 2021 at 02:56:45PM +0200, Vlastimil Babka wrote:
>> So it seems that this intermediate assignment to zone counters (using
>> atomic_long_set() even) is unnecessary and this could mimic sum_vm_events()
>> that
>
On 4/7/21 10:24 PM, Mel Gorman wrote:
> NUMA statistics are maintained on the zone level for hits, misses, foreign
> etc but nothing relies on them being perfectly accurate for functional
> correctness. The counters are used by userspace to get a general overview
> of a workload's NUMA behaviour
On 4/12/21 4:08 PM, Mel Gorman wrote:
> On Mon, Apr 12, 2021 at 02:40:18PM +0200, Vlastimil Babka wrote:
>> On 4/12/21 2:08 PM, Mel Gorman wrote:
>
> the pageset structures in place would be much more straight-forward
> assuming the structures were not allocated in the zone t
On 4/7/21 10:24 PM, Mel Gorman wrote:
> @@ -6691,7 +6697,7 @@ static __meminit void zone_pcp_init(struct zone *zone)
>* relies on the ability of the linker to provide the
>* offset of a (static) per cpu variable into the per cpu area.
>*/
> - zone->pageset = &boot_pageset;
>
ecause all
> the pages have been freed and there is no page to put on the PCP lists.
>
> Signed-off-by: Mel Gorman
Yeah the irq disabling here is clearly bogus, so:
Acked-by: Vlastimil Babka
But I think Michal has a point that we might best leave the pagesets around, by
a future ch
:0.295955434 sec time_interval:295955434)
> - (invoke count:1000 tsc_interval:1065447105)
>
> Before:
> - Per elem: 110 cycles(tsc) 30.633 ns (step:64)
>
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm
ned-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index be1e33a4df39..1ec18121268b 100644
ng all users of the bulk API to allocate and manage enough
> storage to store the pages.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
n is not very efficient and could be improved
> but it would require refactoring. The intent is to make it available early
> to determine what semantics are required by different callers. Once the
> full semantics are nailed down, it can be refactored.
>
> Signed-off-by: Mel Gorman
>
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
>
>
On 4/9/21 9:17 AM, Christian König wrote:
> To be able to switch to a spinlock and reduce lock contention in the TTM
> shrinker we don't want to hold a mutex while unmapping and freeing pages
> from the pool.
Does using spinlock instead of mutex really reduce lock contention?
> But then we
600
> free_unref_page+0x20/0x1c0
> __put_page+0x110/0x1a0
> migrate_pages+0x16d0/0x1dc0
> compact_zone+0xfc0/0x1aa0
> proactive_compact_node+0xd0/0x1e0
> kcompactd+0x550/0x600
> kthread+0x2c0/0x2e0
> call_payload+0x50/0x80
>
> Her
Did you observe this in
practice? But anyway, the change is not wrong.
> CC: Andrew Morton
> CC: linux...@kvack.org
> Signed-off-by: Sergei Trofimovich
Acked-by: Vlastimil Babka
> ---
> mm/page_owner.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
>
On 4/1/21 8:59 PM, Linus Torvalds wrote:
> On Thu, Apr 1, 2021 at 11:17 AM Suren Baghdasaryan wrote:
Thanks Suren for bringing this up!
>> We received a report that the copy-on-write issue repored by Jann Horn in
>> https://bugs.chromium.org/p/project-zero/issues/detail?id=2045 is still
>>
> unwind() [recursion]
>
> CC: Ingo Molnar
> CC: Peter Zijlstra
> CC: Juri Lelli
> CC: Vincent Guittot
> CC: Dietmar Eggemann
> CC: Steven Rostedt
> CC: Ben Segall
> CC: Mel Gorman
> CC: Daniel Bristot de Oliveira
> CC: Andrew Morton
> CC: linux...@k
On 4/2/21 1:50 PM, Sergei Trofimovich wrote:
> On Thu, 1 Apr 2021 17:05:19 -0700
> Andrew Morton wrote:
>
>> On Thu, 1 Apr 2021 23:30:10 +0100 Sergei Trofimovich
>> wrote:
>>
>> > Before the change page_owner recursion was detected via fetching
>> > backtrace and inspecting it for current
On 4/4/21 4:17 PM, Sergei Trofimovich wrote:
> When page_poison detects page corruption it's useful to see who
> freed a page recently to have a guess where write-after-free
> corruption happens.
>
> After this change corruption report has extra page_owner data.
> Example report from real
On 4/1/21 11:42 PM, Roman Gushchin wrote:
> In our production experience the percpu memory allocator is sometimes
> struggling
> with returning the memory to the system. A typical example is the creation of
> several thousand memory cgroups (each has several chunks of the percpu data
> used for
On 4/6/21 7:15 PM, Vlastimil Babka wrote:
> On 4/6/21 2:27 PM, Faiyaz Mohammed wrote:
>> alloc_calls and free_calls implementation in sysfs have two issues,
>> one is the PAGE_SIZE limitation of sysfs and the other is that it does not adhere
>> to "one value per file" rule.
On 4/6/21 2:27 PM, Faiyaz Mohammed wrote:
> alloc_calls and free_calls implementation in sysfs have two issues,
> one is the PAGE_SIZE limitation of sysfs and the other is that it does not adhere
> to "one value per file" rule.
>
> To overcome these issues, move the alloc_calls and free_calls implementation
>
ntexts, and kunit_find_named_resource will call
spin_lock(>lock) that's not irq safe. Can we make the lock irq safe? I
tried the change below and it made the problem go away. If you agree, the
question is how to proceed - make it part of Oliver's patch series and let
Andrew pick it all with eve
AB_NEVER_MERGE, SLAB_DEBUG_FLAGS,
> SLAB_FLAGS_PERMITTED macros.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
On 3/31/21 2:11 PM, Vlastimil Babka wrote:
> On 3/31/21 7:44 AM, Andrew Morton wrote:
>> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>>
>>> From: jun qian
>>>
>>> In our project, many business delays come from fork, so
>
On 3/31/21 7:44 AM, Andrew Morton wrote:
> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>
>> From: jun qian
>>
>> In our project, many business delays come from fork, so
>> we started looking for the reason why fork is time-consuming.
>> I used the ftrace with
written for __GFP_ZERO allocations.
>
> Fix by restoring the initial order. Also add a warning comment.
>
> Reported-by: Vlastimil Babka
> Reported-by: Sergei Trofimovich
> Signed-off-by: Andrey Konovalov
Tested that the bug indeed occurs in -next and is fixed by thi
s enabled.
Correction: This leads to check_poison_mem() complaining about memory corruption
because the poison pattern has already been overwritten by zeroes.
> Fix by restoring the initial order. Also add a warning comment.
>
> Reported-by: Vlastimil Babka
> Reported-by: Sergei Trofimovich
On 3/30/21 12:00 AM, Andrey Konovalov wrote:
> On Mon, Mar 29, 2021 at 2:10 PM Vlastimil Babka wrote:
>>
>> > commit 855a9c4018f3219db8be7e4b9a65ab22aebfde82
>> > Author: Andrey Konovalov
>> > Date: Thu Mar 18 17:01:40 2021 +1100
>> >
>> >
idate pfn first
> before touching the page.
>
> Signed-off-by: Kefeng Wang
> Signed-off-by: Liu Shixin
Acked-by: Vlastimil Babka
Agreed with Matthew's suggestion, also:
> @@ -2468,25 +2469,22 @@ static int move_freepages(struct zone *zone,
> int move_freepages_block(str
On 3/26/21 2:48 PM, David Hildenbrand wrote:
> On 26.03.21 12:26, Sergei Trofimovich wrote:
>> init_on_free=1 does not guarantee that free pages contain only zero bytes.
>>
>> Some examples:
>> 1. page_poison=on takes precedence over init_on_alloc=1 / ini_on_free=1
>
> s/ini_on_free/init_on_free/
Good catch, thanks for finding the root cause!
> After the change we execute only:
> - static_branch_enable(&_page_poisoning_enabled);
> and ignore init_on_free=1.
> CC: Vlastimil Babka
> CC: Andrew Morton
> CC: linux...@kvack.org
> CC: David Hildenbrand
>
On 3/26/21 12:26 PM, Sergei Trofimovich wrote:
> init_on_free=1 does not guarantee that free pages contain only zero bytes.
>
> Some examples:
> 1. page_poison=on takes precedence over init_on_alloc=1 / ini_on_free=1
Yes, and it spits out a message that you enabled both and poisoning takes
On 3/17/21 7:53 PM, David Rientjes wrote:
> On Wed, 17 Mar 2021, Vlastimil Babka wrote:
>> >
>> > [ 22.154049] random: get_random_u32 called from
>> > __kmem_cache_create+0x23/0x3e0 with crng_init=0
>> > [ 22.154070] random: get_random_u32 called
ons why it might be misleading.
> On Thu, Mar 18, 2021 at 8:56 PM Xunlei Pang wrote:
>>
>>
>>
>> On 3/18/21 8:18 PM, Vlastimil Babka wrote:
>> > On 3/17/21 8:54 AM, Xunlei Pang wrote:
>> >> The node list_lock in count_partial() spends a long time iterating
be refactored.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Although maybe premature, if it changes significantly due to the users'
performance feedback, let's see :)
Some nits below:
...
> @@ -4963,6 +4978,107 @@ static inline bool prepare_alloc_pages
https://ci.linaro.org/view/lkft/job/openembedded-lkft-linux-next/DISTRO=lkft,MACHINE=juno,label=docker-buster-lkft/984/consoleFull
>
Andrew, please add this -fix
Thanks.
8<
From f97312224278839321a5ff9be2b8487553a97c63 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Fri
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
>
>
On 3/12/21 4:43 PM, Mel Gorman wrote:
> __alloc_pages updates GFP flags to enforce what flags are allowed
> during a global context such as booting or suspend. This patch moves the
> enforcement from __alloc_pages to prepare_alloc_pages so the code can be
> shared between the single page allocator
On 3/18/21 12:47 PM, Marco Elver wrote:
> On Tue, Mar 16, 2021 at 01:41PM +0100, glit...@gmail.com wrote:
>> From: Oliver Glitta
>>
>> SLUB has resiliency_test() function which is hidden behind #ifdef
>> SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
>> runs it. Kselftest should
On 3/19/21 10:57 AM, Oscar Salvador wrote:
> On Thu, Mar 18, 2021 at 12:36:52PM +0100, Michal Hocko wrote:
>> Yeah, makes sense. I am not a fan of the above form of documentation.
>> Btw. maybe renaming the field would be even better, both from the
>> intention and review all existing users. I
On 3/18/21 6:48 AM, Kees Cook wrote:
> On Tue, Mar 09, 2021 at 07:18:32PM +0100, Vlastimil Babka wrote:
>> On 3/9/21 7:14 PM, Georgi Djakov wrote:
>> > Hi Vlastimil,
>> >
>> > Thanks for the comment!
>> >
>> > On 3/9/21 17:09, Vlastimil Bab
t;expected" state, which slightly optimizes the resulting
> assembly code.
>
> Reviewed-by: Alexander Potapenko
> Link:
> https://lore.kernel.org/lkml/CAG_fn=x0dvwqlahjto6jw7tgcmsm77gkhinrd0m_6y0szwo...@mail.gmail.com/
> Signed-off-by: Kees Cook
For the fixed version
Acked-by: Vlastimil Babka
74] sys_sendfile64+0x12c/0x140
> [ 20.195336] ret_fast_syscall+0x0/0x58
> [ 20.195491] 0xbeeacde4
>
> Co-developed-by: Vaneet Narang
> Signed-off-by: Vaneet Narang
> Signed-off-by: Maninder Singh
Acked-by: Vlastimil Babka
14.872621] splice_direct_to_actor+0xb8/0x290
> [ 14.872747] do_splice_direct+0xa0/0xe0
> [ 14.872896] do_sendfile+0x2d0/0x438
> [ 14.873044] sys_sendfile64+0x12c/0x140
> [ 14.873229] ret_fast_syscall+0x0/0x58
> [ 14.873372] 0xbe861de4
>
> Signed-off-by:
On 3/17/21 8:54 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() spends a long time iterating
> in the case of large partial page lists, which can cause a
> thundering herd effect on the list_lock contention.
>
> We have HSF RT(High-speed Service Framework Response-Time) monitors,
>
On 3/18/21 11:22 AM, Michal Hocko wrote:
> On Thu 18-03-21 10:50:38, Vlastimil Babka wrote:
>> On 3/17/21 3:59 PM, Michal Hocko wrote:
>> > On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> >> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> >
On 3/17/21 3:59 PM, Michal Hocko wrote:
> On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> > > Since isolate_migratepages_block will stop returning the next pfn to be
>> > > scanned, we reuse the cc->migrate_pfn field to keep track
On 3/17/21 8:54 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() spends a long time iterating
> in the case of large partial page lists, which can cause a
> thundering herd effect on the list_lock contention.
>
> We have HSF RT(High-speed Service Framework Response-Time) monitors,
>
On 3/17/21 9:36 AM, kernel test robot wrote:
>
>
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-9):
>
> commit: e48d82b67a2b760eedf7b95ca15f41267496386c ("[PATCH 1/2] selftests: add
> a kselftest for SLUB debugging functionality")
> url:
>
us patch "selftests: add a kselftest for SLUB
> debugging functionality".
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
>
> Add new option CONFIG_TEST_SLUB in Kconfig.
>
> Add parameter to function validate_slab_cache() to return
> number of errors in cache.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
Disclaimer: this is done as part of Oliver's university project that I'm
advising.
On 3/16/21 11:42 AM, Xunlei Pang wrote:
> On 3/16/21 2:49 AM, Vlastimil Babka wrote:
>> On 3/9/21 4:25 PM, Xunlei Pang wrote:
>>> count_partial() can hold n->list_lock spinlock for quite long, which
>>> makes much trouble for the system. This series eliminates this
On 3/16/21 11:07 AM, Christoph Lameter wrote:
> On Mon, 15 Mar 2021, Yang Shi wrote:
>
>> > It seems like CONFIG_SLUB_DEBUG is a more popular option than
>> > CONFIG_SLUB_STATS.
>> > CONFIG_SLUB_DEBUG is enabled on my Fedora workstation, CONFIG_SLUB_STATS
>> > is off.
>> > I doubt an average
On 3/9/21 4:25 PM, Xunlei Pang wrote:
> count_partial() can hold n->list_lock spinlock for quite long, which
> makes much trouble for the system. This series eliminates this problem.
Before I check the details, I have two high-level comments:
- patch 1 introduces some counting scheme that patch 4
On 3/15/21 6:32 PM, Paul E. McKenney wrote:
> On Mon, Mar 15, 2021 at 06:28:42PM +0100, Vlastimil Babka wrote:
>> On 3/15/21 6:16 PM, David Rientjes wrote:
>> > On Mon, 15 Mar 2021, Vlastimil Babka wrote:
>> >
>> >> Commit ca0cab65ea2b ("m
On 3/15/21 6:16 PM, David Rientjes wrote:
> On Mon, 15 Mar 2021, Vlastimil Babka wrote:
>
>> Commit ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
>> introduced a static key to optimize the case where no debugging is enabled
>> for
>> a
ing cpu hotplug lock"),
static_branch_enable_cpuslocked() should be used.
[1] https://lore.kernel.org/linux-btrfs/20210315141824.26099-1-dste...@suse.com/
Reported-by: Oliver Glitta
Fixes: ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
Signed-off-by: Vlast
pfn to be
> scanned, we reuse the cc->migrate_pfn field to keep track of that.
>
> Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
> ---
> mm/compaction.c | 48
> mm/internal.h | 2 +-
> mm/page_alloc.c
s (5 at the moment) instead of bailing out.
>
> migrate_pages bails out right away on -ENOMEM because it is considered a fatal
> error. Do the same here instead of keep going and retrying.
>
> Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
> ---
> mm/page_al
On 3/11/21 11:51 AM, Maninder Singh wrote:
> Hi,
>
>
>
>> Instead of your changes to SL*B, could you check mem_dump_obj() and others
>> added
>> by Paul in 5.12-rc1?
>
>> (+CC Paul, thus not trimming)
>
>
>
> checked mem_dump_obj(), but it only provides path of allocation and not of
>
On 2/25/21 8:56 AM, Maninder Singh wrote:
> In case of a "Use After Free" kernel oops, the free path of the object
> is required to debug further.
> And in most of cases object address is present in one of registers.
>
> Thus check for register address and if it belongs to slab,
> print its alloc and free
On 3/9/21 7:14 PM, Georgi Djakov wrote:
> Hi Vlastimil,
>
> Thanks for the comment!
>
> On 3/9/21 17:09, Vlastimil Babka wrote:
>> On 3/9/21 2:47 PM, Georgi Djakov wrote:
>>> Being able to stop the system immediately when a memory corruption
>>> is de
On 3/9/21 2:47 PM, Georgi Djakov wrote:
> Being able to stop the system immediately when a memory corruption
> is detected is crucial to finding the source of it. This is very
> useful when the memory can be inspected with kdump or other tools.
Is this in some testing scenarios where you would
gt; This will also eliminate the extern declaration from header file.
> No functionality is broken or changed this way.
>
> Signed-off-by: Pintu Kumar
> Signed-off-by: Pintu Agarwal
Reviewed-by: Vlastimil Babka
> ---
> v2: completely get rid of this variable and set .data t
On 3/2/21 6:56 PM, Pintu Kumar wrote:
> The sysctl_compact_memory variable is mostly unused in mm/compaction.c;
> it just acts as a placeholder for the sysctl.
>
> Thus we can remove it from here and move the declaration directly
> in kernel/sysctl.c itself.
> This will also eliminate the extern declaration
On 3/2/21 2:29 PM, Petr Mladek wrote:
> On Tue 2021-03-02 13:51:35, Geert Uytterhoeven wrote:
>> > > > +
>> > > > +
>> > > > pr_warn("**\n");
>> > > > + pr_warn("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE
>> > > >
ady. I have verified
the POC no longer reproduces afterwards.
[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=2045
Reported-by: Nicolai Stange
Signed-off-by: Vlastimil Babka
---
mm/huge_memory.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/mm/huge_memory.c b/mm
he DMA32 zone. Ensure the allocations resulting from
> the gfp_mask returned by limit_gfp_mask use the zone flags that were
> originally passed to shmem_getpage_gfp.
>
> Signed-off-by: Rik van Riel
> Suggested-by: Hugh Dickins
Acked-by: Vlastimil Babka
> ---
> mm/shmem.c
On 2/26/21 11:59 AM, Mike Rapoport wrote:
> On Thu, Feb 25, 2021 at 07:38:44PM +0100, Vlastimil Babka wrote:
>> On 2/25/21 7:05 PM, Mike Rapoport wrote:
>> >>
>> >> What if two zones are adjacent? I.e. if the hole was at a boundary
>> >> between
On 2/26/21 10:17 AM, Yu Zhao wrote:
> Patch series "mm: lru related cleanups" starting at commit 42895ea73bcd
> ("mm/vmscan.c: use add_page_to_lru_list()") bloated vmlinux by 1777
> bytes, according to:
>
> https://lore.kernel.org/linux-mm/85b3e8f2-5982-3329-c20d-cf062b8da...@suse.cz/
Huh, I
On 2/25/21 7:05 PM, Mike Rapoport wrote:
> On Thu, Feb 25, 2021 at 06:51:53PM +0100, Vlastimil Babka wrote:
>> >
>> > unset zone link in struct page will trigger
>> >
>> >VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
>>
>
t
> links to the adjacent zone/node.
What if two zones are adjacent? I.e. if the hole was at a boundary between two
zones.
> Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather
> that check each PFN")
> Signed-off-by: Mike
should be
only used for procfs and similar files, not dmesg buffer. This patch clarifies
the documentation in that regard.
Signed-off-by: Vlastimil Babka
---
Documentation/core-api/printk-formats.rst | 26 ++-
lib/vsprintf.c | 7 --
2 files changed
heir kernels with everything that's needed to decode stack
> traces later.
Looks good!
> Signed-off-by: Thorsten Leemhuis
> Reviewed-by: Qais Yousef
Acked-by: Vlastimil Babka
Thanks!
On 2/18/21 6:24 PM, Charan Teja Reddy wrote:
> I would like to start discussion about balancing the occupancy of
> memory zones in a node in the system whose imbalance may be caused by
> migration of pages to other zones during hotremove and then hotadding
> same memory. In this case there is a
On 2/17/21 6:33 PM, Vlastimil Babka wrote:
> Compaction always operates on pages from a single given zone when isolating
> both pages to migrate and freepages. Pageblock boundaries are intersected with
> zone boundaries to be safe in case zone starts or ends in the middle of
> pagebl
Let's add include/uapi/ and arch/*/include/uapi/ to API/ABI section, so that
for patches modifying them, get_maintainers.pl suggests CCing linux-api@ so
people don't forget.
Reported-by: David Hildenbrand
Signed-off-by: Vlastimil Babka
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions
() on a range of pfn's
from two different zones and end up e.g. isolating freepages under the wrong
zone's lock.
This patch should fix the above issues.
Fixes: 5a811889de10 ("mm, compaction: use free lists to quickly locate a
migration target")
Cc:
Signed-off-by: Vlastimil Babk