On 6/25/24 7:12 PM, Alexei Starovoitov wrote:
> On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka wrote:
>>
>> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -3874,13 +3874,37 @@ static __always_inlin
On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3874,13 +3874,37 @@ static __always_inline void
> maybe_wipe_obj_freeptr(struct kmem_cache *s,
> 0, sizeof(void *));
> }
>
> -noinline int should_failslab(str
s to touch poisoned
> metadata without triggering KMSAN, is to unpoison its return value.
> However, this approach is too fragile. So simply disable the KMSAN
> checks in the respective functions.
>
> Reviewed-by: Alexander Potapenko
> Signed-off-by: Ilya Leoshkevich
Ack
On 6/20/24 3:18 AM, Alexei Starovoitov wrote:
> On Wed, Jun 19, 2024 at 3:49 PM Vlastimil Babka wrote:
>>
>> When CONFIG_FUNCTION_ERROR_INJECTION is disabled,
>> within_error_injection_list() will return false for any address and the
>> result of check_non_sleepa
. A measurement with the analogous change
for should_failslab() suggests that for a page allocator intensive
workload there might be a noticeable improvement. It also makes
CONFIG_FAIL_PAGE_ALLOC an option suitable not only for debug kernels.
Reviewed-by: Roman Gushchin
Signed-off-by: Vlastimil Babka
tched kernel performance was
unaffected, as expected, while unpatched kernel's performance was worse,
resulting in the relative speedup being 10.5%. This means it no longer
needs to be an option suitable only for debug kernel builds.
Acked-by: Alexei Starovoitov
Reviewed-by: Roman Gushchin
Signed-
with
kprobe_override enabled, using get_injection_key() instead of
within_error_injection_list(). Introduce bpf_kprobe_ei_keys_control() to
control the static keys and call the control function when doing
multi_link_attach and release.
Signed-off-by: Vlastimil Babka
---
kernel/trace/bpf_trace.c | 59
sary preparatory changes, and not for any of the
configfs based fault injection users.
Signed-off-by: Vlastimil Babka
---
include/linux/fault-inject.h | 7 ++-
lib/fault-inject.c | 43 ++-
2 files changed, 48 insertions(+), 2 deletion
inject file enable
the static key when the function is added to the injection list, and
disable when removed.
Signed-off-by: Vlastimil Babka
---
include/asm-generic/error-injection.h | 13 -
include/asm-generic/vmlinux.lds.h | 2 +-
include/linux/error-injection.h | 12
//lore.kernel.org/6d5bb852-8703-4abf-a52b-90816bccb...@suse.cz/
[2]
https://lore.kernel.org/3j5d3p22ssv7xoaghzraa7crcfih3h2qqjlhmjppbp6f42pg2t@kg7qoicog5ye/
Signed-off-by: Vlastimil Babka
---
Changes in v2:
- Add error injection static key control for bpf programs with
kprobe_override.
- Add separ
.
This will allow inlining functions on the list when
CONFIG_FUNCTION_ERROR_INJECTION is disabled as there will be no BTF_ID()
reference for them.
Signed-off-by: Vlastimil Babka
---
kernel/bpf/verifier.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/kernel/bpf/verifier.c b
of within_error_injection_list(). Introduce
trace_kprobe_error_injection_control() to control the static key and
call the control function when attaching or detaching programs with
kprobe_override to perf events.
Signed-off-by: Vlastimil Babka
---
kernel/trace/bpf_trace.c| 6 ++
kernel
On 6/11/24 8:23 AM, Greg KH wrote:
> On Mon, Jun 10, 2024 at 11:40:54PM +0200, Vlastimil Babka wrote:
>> On 6/10/24 10:36 PM, Steven Rostedt wrote:
>> > On Mon, 10 Jun 2024 08:46:42 -0700
>> > "Paul E. McKenney" wrote:
>> >
>> >> >
On 6/10/24 10:36 PM, Steven Rostedt wrote:
> On Mon, 10 Jun 2024 08:46:42 -0700
> "Paul E. McKenney" wrote:
>
>> > > index 7c29f4afc23d..338c52168e61 100644
>> > > --- a/fs/tracefs/inode.c
>> > > +++ b/fs/tracefs/inode.c
>> > > @@ -53,14 +53,6 @@ static struct inode *tracefs_alloc_inode(struct
On 6/10/24 5:46 PM, Paul E. McKenney wrote:
> On Mon, Jun 10, 2024 at 11:22:23AM -0400, Steven Rostedt wrote:
>> On Sun, 9 Jun 2024 10:27:17 +0200
>> Julia Lawall wrote:
>>
>> > diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
>> > index 7c29f4afc23d..338c52168e61 100644
>> > ---
On 6/2/24 10:47 PM, David Rientjes wrote:
> On Fri, 31 May 2024, Vlastimil Babka wrote:
>
>> Patches 3 and 4 implement the static keys for the two mm fault injection
>> sites in slab and page allocators. For a quick demonstration I've run a
>> VM and the simple tes
On 5/31/24 6:43 PM, Alexei Starovoitov wrote:
> On Fri, May 31, 2024 at 2:33 AM Vlastimil Babka wrote:
>> might_alloc(flags);
>>
>> - if (unlikely(should_failslab(s, flags)))
>> - return NULL;
>> + if (static
On 6/1/24 1:39 AM, Roman Gushchin wrote:
> On Fri, May 31, 2024 at 11:33:31AM +0200, Vlastimil Babka wrote:
>> Incomplete, help needed from ftrace/kprobe and bpf folks.
>>
>> As previously mentioned by myself [1] and others [2] the functions
>> designed for error
On 5/31/24 5:31 PM, Mark Rutland wrote:
> Hi,
>
> On Fri, May 31, 2024 at 11:33:31AM +0200, Vlastimil Babka wrote:
>> Incomplete, help needed from ftrace/kprobe and bpf folks.
>
>> - the generic error injection using kretprobes with
>> override_function_wit
Similarly to should_failslab(), remove the overhead of calling the
noinline function should_fail_alloc_page() with a static key that guards
the allocation hotpath callsite and is controlled by the fault and error
injection frameworks.
Signed-off-by: Vlastimil Babka
---
mm/fail_page_alloc.c | 3
is point.
[1] https://lore.kernel.org/all/6d5bb852-8703-4abf-a52b-90816bccb...@suse.cz/
[2]
https://lore.kernel.org/all/3j5d3p22ssv7xoaghzraa7crcfih3h2qqjlhmjppbp6f42pg2t@kg7qoicog5ye/
Signed-off-by: Vlastimil Babka
---
Vlastimil Babka (4):
fault-inject: add support for static keys around f
the processing of writes to the debugfs inject file enable
the static key when the function is added to the injection list, and
disable when removed.
Signed-off-by: Vlastimil Babka
---
include/asm-generic/error-injection.h | 13 -
include/asm-generic/vmlinux.lds.h | 2 +-
include/linux
ccordingly.
Signed-off-by: Vlastimil Babka
---
mm/failslab.c | 2 +-
mm/slab.h | 3 +++
mm/slub.c | 10 +++---
3 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/failslab.c b/mm/failslab.c
index ffc420c0e767..878fd08e5dac 100644
--- a/mm/failslab.c
+++ b/mm/failslab
sary preparatory changes, and none of the configfs
based fault injection users.
Signed-off-by: Vlastimil Babka
---
include/linux/fault-inject.h | 7 ++-
lib/fault-inject.c | 43 ++-
2 files changed, 48 insertions(+), 2 deletions(-)
diff --git
On 5/10/24 9:59 AM, wuqiang.matt wrote:
> On 2024/5/7 21:55, Vlastimil Babka wrote:
>>
>>> + } while (!try_cmpxchg_acquire(&slot->tail, &tail, tail + 1));
>>> +
>>> + /* now the tail position is reserved for the given obj */
>>> + WRITE_ONCE(slot->
On 4/24/24 11:52 PM, Andrii Nakryiko wrote:
> objpool_push() and objpool_pop() are very performance-critical functions
> and can be called very frequently in kretprobe triggering path.
>
> As such, it makes sense to allow the compiler to inline them completely to
> eliminate function call overhead.
On 4/17/24 2:52 PM, Konstantin Ryabitsev wrote:
> On Wed, Apr 17, 2024 at 09:48:18AM +0200, Thorsten Leemhuis wrote:
>> Hi kernel.org helpdesk!
>>
>> Could you please create the email alias
>> do-not-apply-to-sta...@kernel.org which redirects all mail to /dev/null,
>> just like sta...@kernel.org
y for this part of mm in particular with regular contributors
> tagged as reviewers.
>
> Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
Would be nice if this targeted sub-reviewing could be managed in a simpler
way as part of the MM section instead of having to define a new o
10685.68 ( 0.00%)11399.02 * -6.68%*
>
> Signed-off-by: Baolin Wang
> Acked-by: Mel Gorman
Reviewed-by: Vlastimil Babka
Thanks.
> ---
> Hi Andrew, please use this patch to replace below 2 old patches. Thanks.
> https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.g
gt; triggering KMSAN, so unpoison its return value.
>
> Signed-off-by: Ilya Leoshkevich
Acked-by: Vlastimil Babka
> ---
> mm/slub.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2d29d368894c..802702748925 100644
> --- a/mm
ctions to KMSAN.
>
> Signed-off-by: Ilya Leoshkevich
Acked-by: Vlastimil Babka
> ---
> mm/slub.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 169e5f645ea8..6e61c27951a4 100644
> --- a/mm/slub.c
> +++ b
+Cc workflows
On 11/24/23 12:43, Greg Kroah-Hartman wrote:
> On Thu, Nov 23, 2023 at 07:20:46PM +0100, Oleksandr Natalenko wrote:
>> Hello.
>>
>> Since v6.6.2 kernel release I'm experiencing a regression with regard
>> to USB ports behaviour after a suspend/resume cycle.
>>
>> If a USB port is
the parameter and with that the whole ISOLATE_UNMAPPED flag.
Signed-off-by: Vlastimil Babka
---
.../trace/postprocess/trace-vmscan-postprocess.pl | 8
include/linux/mmzone.h| 2 --
include/trace/events/vmscan.h | 8 ++--
list is being scanned. However the parameter currently
only indicates ISOLATE_UNMAPPED. We can use the lru parameter instead to
determine which list is scanned, and stop checking isolate_mode.
Signed-off-by: Vlastimil Babka
---
.../postprocess/trace-vmscan-postprocess.pl | 40
t to save stack trace.
>
> The benefits are smaller memory overhead and possibility to aggregate
> per-cache statistics in the future using the stackdepot handle
> instead of matching stacks manually.
>
> Signed-off-by: Oliver Glitta
Reviewed-by: Vlastimil Babka
(again
On 4/16/21 4:27 PM, Faiyaz Mohammed wrote:
> alloc_calls and free_calls implementation in sysfs have two issues:
> one is the PAGE_SIZE limitation of sysfs and the other is that it does not
> adhere to the "one value per file" rule.
>
> To overcome these issues, move the alloc_calls and free_calls implementation
>
On 4/14/21 4:38 AM, lipeif...@oppo.com wrote:
> From: lipeifeng
>
> This patch would "sort" the free-pages in buddy by pages-PFN to concentrate
> low-order-pages allocation in the front area of memory and high-order-pages
> allocation on the contrary, so that there is less memory pollution in the back area
On 4/14/21 3:39 PM, Mel Gorman wrote:
> struct per_cpu_pages is protected by the pagesets lock, but that lock can be
> embedded within struct per_cpu_pages itself at a minor cost. This is possible
> because per-cpu lookups are based on offsets. Paraphrasing an explanation
> from Peter Zijlstra
>
> The whole
On 4/14/21 3:39 PM, Mel Gorman wrote:
> VM events do not need explicit protection by disabling IRQs so
> update the counter with IRQs enabled in __free_pages_ok.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 3 ++-
> 1 file change
> Note that this may incur a performance penalty while memory hot-remove
> is running but that is not a common operation.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
A nit below:
> @@ -3294,6 +3295,7 @@ void free_unref_page_list(struct list_head *list)
> st
On 4/15/21 11:33 AM, Mel Gorman wrote:
> On Wed, Apr 14, 2021 at 07:21:42PM +0200, Vlastimil Babka wrote:
>> On 4/14/21 3:39 PM, Mel Gorman wrote:
>> > Both free_pcppages_bulk() and free_one_page() have very similar
>> > checks about whether a page's migratetype has
On 4/15/21 12:10 PM, Oliver Glitta wrote:
> On Tue, 13 Apr 2021 at 15:54, Marco Elver wrote:
>>
>> On Tue, 13 Apr 2021 at 12:07, wrote:
>> > From: Oliver Glitta
>> >
>> > SLUB has resiliency_test() function which is hidden behind #ifdef
>> > SLUB_RESILIENCY_TEST that is not part of Kconfig, so
never slab_bug() or slab_fix() is called or when
> the count of pages is wrong.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
(again with a disclaimer that I'm the advisor of Oliver's student project)
es structure either or a local_lock would
> be used.
>
> This patch explicitly acquires the lock with spin_lock_irqsave instead of
> relying on a helper. This removes the last instance of local_irq_save()
> in page_alloc.c.
\o/
> Signed-off-by: Mel Gorman
Acked-by:
On 4/14/21 3:39 PM, Mel Gorman wrote:
> Both free_pcppages_bulk() and free_one_page() have very similar
> checks about whether a page's migratetype has changed under the
> zone lock. Use a common helper.
>
> Signed-off-by: Mel Gorman
Seems like for free_pcppages_bulk() this patch makes it check
_[lock|unlock]_irq on !PREEMPT_RT kernels. One
> __mod_zone_freepage_state is still called with IRQs disabled. While this
> could be moved out, it's not free on all architectures as some require
> IRQs to be disabled for mod_zone_page_state on !PREEMPT_RT kernels.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> include/linux/vmstat.h | 8
> mm/page_alloc.c| 30 +-
> 2 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/vmstat.h b/include/linux/vmstat.
On 4/14/21 6:20 PM, Vlastimil Babka wrote:
> On 4/14/21 3:39 PM, Mel Gorman wrote:
>> __count_numa_event is small enough to be treated similarly to
>> __count_vm_event so inline it.
>>
>> Signed-off-by: Mel Gorman
>
> Acked-by: Vlastimil Babka
>
&
On 4/14/21 3:39 PM, Mel Gorman wrote:
> __count_numa_event is small enough to be treated similarly to
> __count_vm_event so inline it.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> include/linux/vmstat.h | 9 +
> mm/vmstat.c| 9
On 4/14/21 5:18 PM, Mel Gorman wrote:
> On Wed, Apr 14, 2021 at 02:56:45PM +0200, Vlastimil Babka wrote:
>> So it seems that this intermediate assignment to zone counters (using
>> atomic_long_set() even) is unnecessary and this could mimic sum_vm_events()
>> that
>&
On 4/7/21 10:24 PM, Mel Gorman wrote:
> NUMA statistics are maintained on the zone level for hits, misses, foreign
> etc but nothing relies on them being perfectly accurate for functional
> correctness. The counters are used by userspace to get a general overview
> of a workloads NUMA behaviour
On 4/12/21 4:08 PM, Mel Gorman wrote:
> On Mon, Apr 12, 2021 at 02:40:18PM +0200, Vlastimil Babka wrote:
>> On 4/12/21 2:08 PM, Mel Gorman wrote:
>
> the pageset structures in place would be much more straight-forward
> assuming the structures were not allocated in the zone t
On 4/7/21 10:24 PM, Mel Gorman wrote:
> @@ -6691,7 +6697,7 @@ static __meminit void zone_pcp_init(struct zone *zone)
>* relies on the ability of the linker to provide the
>* offset of a (static) per cpu variable into the per cpu area.
>*/
> - zone->pageset = &boot_pageset;
>
ecause all
> the pages have been freed and there is no page to put on the PCP lists.
>
> Signed-off-by: Mel Gorman
Yeah the irq disabling here is clearly bogus, so:
Acked-by: Vlastimil Babka
But I think Michal has a point that we might best leave the pagesets around, by
a future ch
:0.295955434 sec time_interval:295955434)
> - (invoke count:1000 tsc_interval:1065447105)
>
> Before:
> - Per elem: 110 cycles(tsc) 30.633 ns (step:64)
>
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm
ned-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index be1e33a4df39..1ec18121268b 100644
ng all users of the bulk API to allocate and manage enough
> storage to store the pages.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
n is not very efficient and could be improved
> but it would require refactoring. The intent is to make it available early
> to determine what semantics are required by different callers. Once the
> full semantics are nailed down, it can be refactored.
>
> Signed-off-by: Mel Gorman
>
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
>
&
On 4/9/21 9:17 AM, Christian König wrote:
> To be able to switch to a spinlock and reduce lock contention in the TTM
> shrinker we don't want to hold a mutex while unmapping and freeing pages
> from the pool.
Does using spinlock instead of mutex really reduce lock contention?
> But then we
600
> free_unref_page+0x20/0x1c0
> __put_page+0x110/0x1a0
> migrate_pages+0x16d0/0x1dc0
> compact_zone+0xfc0/0x1aa0
> proactive_compact_node+0xd0/0x1e0
> kcompactd+0x550/0x600
> kthread+0x2c0/0x2e0
> call_payload+0x50/0x80
>
> Her
d you observe this in
practice? But anyway, the change is not wrong.
> CC: Andrew Morton
> CC: linux...@kvack.org
> Signed-off-by: Sergei Trofimovich
Acked-by: Vlastimil Babka
> ---
> mm/page_owner.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
&
On 4/1/21 8:59 PM, Linus Torvalds wrote:
> On Thu, Apr 1, 2021 at 11:17 AM Suren Baghdasaryan wrote:
Thanks Suren for bringing this up!
>> We received a report that the copy-on-write issue repored by Jann Horn in
>> https://bugs.chromium.org/p/project-zero/issues/detail?id=2045 is still
>>
t; unwind() [recursion]
>
> CC: Ingo Molnar
> CC: Peter Zijlstra
> CC: Juri Lelli
> CC: Vincent Guittot
> CC: Dietmar Eggemann
> CC: Steven Rostedt
> CC: Ben Segall
> CC: Mel Gorman
> CC: Daniel Bristot de Oliveira
> CC: Andrew Morton
> CC: linux...@k
On 4/2/21 1:50 PM, Sergei Trofimovich wrote:
> On Thu, 1 Apr 2021 17:05:19 -0700
> Andrew Morton wrote:
>
>> On Thu, 1 Apr 2021 23:30:10 +0100 Sergei Trofimovich
>> wrote:
>>
>> > Before the change page_owner recursion was detected via fetching
>> > backtrace and inspecting it for current
On 4/4/21 4:17 PM, Sergei Trofimovich wrote:
> When page_poison detects page corruption it's useful to see who
> freed a page recently to have a guess where write-after-free
> corruption happens.
>
> After this change corruption report has extra page_owner data.
> Example report from real
On 4/1/21 11:42 PM, Roman Gushchin wrote:
> In our production experience the percpu memory allocator is sometimes
> struggling
> with returning the memory to the system. A typical example is a creation of
> several thousands memory cgroups (each has several chunks of the percpu data
> used for
On 4/6/21 7:15 PM, Vlastimil Babka wrote:
> On 4/6/21 2:27 PM, Faiyaz Mohammed wrote:
>> alloc_calls and free_calls implementation in sysfs have two issues:
>> one is the PAGE_SIZE limitation of sysfs and the other is that it does not
>> adhere to the "one value per file" rule.
On 4/6/21 2:27 PM, Faiyaz Mohammed wrote:
> alloc_calls and free_calls implementation in sysfs have two issues:
> one is the PAGE_SIZE limitation of sysfs and the other is that it does not
> adhere to the "one value per file" rule.
>
> To overcome these issues, move the alloc_calls and free_calls implementation
>
ntexts, and kunit_find_named_resource will call
spin_lock(&test->lock), which is not irq safe. Can we make the lock irq safe? I
tried the change below and it made the problem go away. If you agree, the
question is how to proceed - make it part of Oliver's patch series and let
Andrew pick it all with eve
AB_NEVER_MERGE, SLAB_DEBUG_FLAGS,
> SLAB_FLAGS_PERMITTED macros.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
On 3/31/21 2:11 PM, Vlastimil Babka wrote:
> On 3/31/21 7:44 AM, Andrew Morton wrote:
>> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>>
>>> From: jun qian
>>>
>>> In our project, many business delays come from fork, so
>
On 3/31/21 7:44 AM, Andrew Morton wrote:
> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>
>> From: jun qian
>>
>> In our project, many business delays come from fork, so
>> we started looking for the reason why fork is time-consuming.
>> I used the ftrace with
written for __GFP_ZERO allocations.
>
> Fix by restoring the initial order. Also add a warning comment.
>
> Reported-by: Vlastimil Babka
> Reported-by: Sergei Trofimovich
> Signed-off-by: Andrey Konovalov
Tested that the bug indeed occurs in -next and is fixed by thi
s enabled.
Correction: This leads to check_poison_mem() complaining about memory corruption
because the poison pattern has already been overwritten by zeroes.
> Fix by restoring the initial order. Also add a warning comment.
>
> Reported-by: Vlastimil Babka
> Reported-by: Sergei Trofimovich
On 3/30/21 12:00 AM, Andrey Konovalov wrote:
> On Mon, Mar 29, 2021 at 2:10 PM Vlastimil Babka wrote:
>>
>> > commit 855a9c4018f3219db8be7e4b9a65ab22aebfde82
>> > Author: Andrey Konovalov
>> > Date: Thu Mar 18 17:01:40 2021 +1100
>> >
>> >
idate pfn first
> before touching the page.
>
> Signed-off-by: Kefeng Wang
> Signed-off-by: Liu Shixin
Acked-by: Vlastimil Babka
Agreed with Matthew's suggestion, also:
> @@ -2468,25 +2469,22 @@ static int move_freepages(struct zone *zone,
> int move_freepages_block(str
On 3/26/21 2:48 PM, David Hildenbrand wrote:
> On 26.03.21 12:26, Sergei Trofimovich wrote:
>> init_on_free=1 does not guarantee that free pages contain only zero bytes.
>>
>> Some examples:
>> 1. page_poison=on takes precedence over init_on_alloc=1 / ini_on_free=1
>
> s/ini_on_free/init_on_free/
Good catch, thanks for finding the root cause!
> After the change we execute only:
> - static_branch_enable(&_page_poisoning_enabled);
> and ignore init_on_free=1.
> CC: Vlastimil Babka
> CC: Andrew Morton
> CC: linux...@kvack.org
> CC: David Hildenbrand
>
On 3/26/21 12:26 PM, Sergei Trofimovich wrote:
> init_on_free=1 does not guarantee that free pages contain only zero bytes.
>
> Some examples:
> 1. page_poison=on takes precedence over init_on_alloc=1 / ini_on_free=1
Yes, and it spits out a message that you enabled both and poisoning takes
On 3/17/21 7:53 PM, David Rientjes wrote:
> On Wed, 17 Mar 2021, Vlastimil Babka wrote:
>> >
>> > [ 22.154049] random: get_random_u32 called from
>> > __kmem_cache_create+0x23/0x3e0 with crng_init=0
>> > [ 22.154070] random: get_random_u32 called
ons why it might be misleading.
> On Thu, Mar 18, 2021 at 8:56 PM Xunlei Pang wrote:
>>
>>
>>
>> On 3/18/21 8:18 PM, Vlastimil Babka wrote:
>> > On 3/17/21 8:54 AM, Xunlei Pang wrote:
>> >> The node list_lock in count_partial() spends long time iteratin
be refactored.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Although maybe premature, if it changes significantly due to the users'
performance feedback, let's see :)
Some nits below:
...
> @@ -4963,6 +4978,107 @@ static inline bool prepare_alloc_pages
ttps://ci.linaro.org/view/lkft/job/openembedded-lkft-linux-next/DISTRO=lkft,MACHINE=juno,label=docker-buster-lkft/984/consoleFull
>
Andrew, please add this -fix
Thanks.
8<
From f97312224278839321a5ff9be2b8487553a97c63 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Fri
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
>
&
On 3/12/21 4:43 PM, Mel Gorman wrote:
> __alloc_pages updates GFP flags to enforce what flags are allowed
> during a global context such as booting or suspend. This patch moves the
> enforcement from __alloc_pages to prepare_alloc_pages so the code can be
> shared between the single page allocator
On 3/18/21 12:47 PM, Marco Elver wrote:
> On Tue, Mar 16, 2021 at 01:41PM +0100, glit...@gmail.com wrote:
>> From: Oliver Glitta
>>
>> SLUB has resiliency_test() function which is hidden behind #ifdef
>> SLUB_RESILIENCY_TEST that is not part of Kconfig, so nobody
>> runs it. Kselftest should
On 3/19/21 10:57 AM, Oscar Salvador wrote:
> On Thu, Mar 18, 2021 at 12:36:52PM +0100, Michal Hocko wrote:
>> Yeah, makes sense. I am not a fan of the above form of documentation.
>> Btw. maybe renaming the field would be even better, both from the
>> intention and review all existing users. I
On 3/18/21 6:48 AM, Kees Cook wrote:
> On Tue, Mar 09, 2021 at 07:18:32PM +0100, Vlastimil Babka wrote:
>> On 3/9/21 7:14 PM, Georgi Djakov wrote:
>> > Hi Vlastimil,
>> >
>> > Thanks for the comment!
>> >
>> > On 3/9/21 17:09, Vlastimil Bab
"expected" state, which slightly optimizes the resulting
> assembly code.
>
> Reviewed-by: Alexander Potapenko
> Link:
> https://lore.kernel.org/lkml/CAG_fn=x0dvwqlahjto6jw7tgcmsm77gkhinrd0m_6y0szwo...@mail.gmail.com/
> Signed-off-by: Kees Cook
For the fixed version
Acked-by: Vlastimil Babka
74] sys_sendfile64+0x12c/0x140
> [ 20.195336] ret_fast_syscall+0x0/0x58
> [ 20.195491] 0xbeeacde4
>
> Co-developed-by: Vaneet Narang
> Signed-off-by: Vaneet Narang
> Signed-off-by: Maninder Singh
Acked-by: Vlastimil Babka
14.872621] splice_direct_to_actor+0xb8/0x290
> [ 14.872747] do_splice_direct+0xa0/0xe0
> [ 14.872896] do_sendfile+0x2d0/0x438
> [ 14.873044] sys_sendfile64+0x12c/0x140
> [ 14.873229] ret_fast_syscall+0x0/0x58
> [ 14.873372] 0xbe861de4
>
> Signed-off-by:
On 3/17/21 8:54 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() spends a long time iterating
> in case of a large amount of partial page lists, which can cause a
> thundering herd effect on the list_lock contention.
>
> We have HSF RT(High-speed Service Framework Response-Time) monitors,
>
On 3/18/21 11:22 AM, Michal Hocko wrote:
> On Thu 18-03-21 10:50:38, Vlastimil Babka wrote:
>> On 3/17/21 3:59 PM, Michal Hocko wrote:
>> > On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> >> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> >
On 3/17/21 3:59 PM, Michal Hocko wrote:
> On Wed 17-03-21 15:38:35, Oscar Salvador wrote:
>> On Wed, Mar 17, 2021 at 03:12:29PM +0100, Michal Hocko wrote:
>> > > Since isolate_migratepages_block will stop returning the next pfn to be
>> > > scanned, we reuse the cc->migrate_pfn field to keep track
On 3/17/21 8:54 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() spends a long time iterating
> in case of a large amount of partial page lists, which can cause a
> thundering herd effect on the list_lock contention.
>
> We have HSF RT(High-speed Service Framework Response-Time) monitors,
>
On 3/17/21 9:36 AM, kernel test robot wrote:
>
>
> Greeting,
>
> FYI, we noticed the following commit (built with gcc-9):
>
> commit: e48d82b67a2b760eedf7b95ca15f41267496386c ("[PATCH 1/2] selftests: add
> a kselftest for SLUB debugging functionality")
> url:
>
us patch "selftests: add a kselftest for SLUB
> debugging functionality".
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
>
> Add new option CONFIG_TEST_SLUB in Kconfig.
>
> Add parameter to function validate_slab_cache() to return
> number of errors in cache.
>
> Signed-off-by: Oliver Glitta
Acked-by: Vlastimil Babka
Disclaimer: this is done as part of Oliver's university project that I'm
advising.
On 3/16/21 11:42 AM, Xunlei Pang wrote:
> On 3/16/21 2:49 AM, Vlastimil Babka wrote:
>> On 3/9/21 4:25 PM, Xunlei Pang wrote:
>>> count_partial() can hold the n->list_lock spinlock for quite a long time, which
>>> causes much trouble for the system. This series eliminates this