Re: [PATCH] mm: kfence: Fix false positives on big endian

2023-05-05 Thread Alexander Potapenko
On Fri, May 5, 2023 at 5:51 AM Michael Ellerman  wrote:

> Since commit 1ba3cbf3ec3b ("mm: kfence: improve the performance of
> __kfence_alloc() and __kfence_free()"), kfence reports failures in
> random places at boot on big endian machines.
>
> The problem is that the new KFENCE_CANARY_PATTERN_U64 encodes the
> address of each byte in its value, so it needs to be byte swapped on big
> endian machines.
>
> The compiler is smart enough to do the le64_to_cpu() at compile time, so
> there is no runtime overhead.
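>
> For reference, the shape of the fix (a sketch based on the description
> above, not necessarily the exact hunk) keeps the per-byte index
> encoding little-endian in memory:
>
> /*
>  * The low bits of each canary byte encode that byte's index within
>  * the u64. le64_to_cpu() keeps the in-memory byte layout identical on
>  * both endiannesses and folds to a constant at compile time.
>  */
> #define KFENCE_CANARY_PATTERN_U64 \
> 	((u64)0xaaaaaaaaaaaaaaaa ^ (u64)(le64_to_cpu(0x0706050403020100)))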
>
> Fixes: 1ba3cbf3ec3b ("mm: kfence: improve the performance of __kfence_alloc() and __kfence_free()")
> Signed-off-by: Michael Ellerman 
>
Reviewed-by: Alexander Potapenko 


Re: [PATCH v3 1/2] mm, hwpoison: Try to recover from copy-on write faults

2022-11-02 Thread Alexander Potapenko
On Wed, Nov 2, 2022 at 3:27 PM Alexander Potapenko  wrote:
>
> On Fri, Oct 28, 2022 at 6:14 PM Luck, Tony  wrote:
> >
> > >> +vfrom = kmap_local_page(from);
> > >> +vto = kmap_local_page(to);
> > >> +ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> > >
> > > In copy_user_highpage(), kmsan_unpoison_memory(page_address(to),
> > > PAGE_SIZE) is done after the copy when
> > > __HAVE_ARCH_COPY_USER_HIGHPAGE isn't defined. Do we need to do
> > > something similar here? But I'm not familiar with kmsan, so I can
> > > easily be wrong.
> >
> > It looks like that kmsan_unpoison_memory() call was added recently,
> > after I copied copy_user_highpage() to create copy_mc_user_highpage().
> > I'm not familiar with kmsan either. Adding Alexander to this thread
> > since they added that code.
> >
>
> Given that copy_mc_user_highpage() replaces one of the calls to
> copy_user_highpage(), it sure makes sense to call
> kmsan_unpoison_memory() here.
>
> KMSAN tracks the status (initialized/uninitialized) of kernel memory.
> Newly allocated memory is marked uninitialized, copying memory
> preserves its status, and writing constants to that memory makes it
> initialized.
> Userspace memory does not have its status tracked by KMSAN, so when
> values are copied from userspace, KMSAN does nothing with their
> status.
> That's why every (successful) copy_from_user event should be followed
> by kmsan_unpoison_memory(), which marks the corresponding kernel
> buffer as initialized - otherwise the status of that buffer goes
> stale.
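>
> As a minimal sketch of that pattern (hypothetical helper name,
> assuming <linux/highmem.h>, <linux/kmsan-checks.h> and
> <linux/errno.h>):
>
> /* Illustrative only - not the actual patch under review. */
> static int copy_mc_user_highpage_sketch(struct page *to, struct page *from)
> {
> 	char *vfrom, *vto;
> 	unsigned long ret;
>
> 	vfrom = kmap_local_page(from);
> 	vto = kmap_local_page(to);
> 	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> 	if (!ret)
> 		/* The copy fully succeeded: the kernel page is now initialized. */
> 		kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
> 	kunmap_local(vto);
> 	kunmap_local(vfrom);
>
> 	return ret ? -EHWPOISON : 0;
> }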
>
> > > Anyway, this patch looks good to me. Thanks.
> > >
> > > Reviewed-by: Miaohe Lin 
Reviewed-by: Alexander Potapenko 

> >
> > Thanks for the review.
> >
> > -Tony
>
>
>
> --
> Alexander Potapenko
> Software Engineer
>
> Google Germany GmbH
> Erika-Mann-Straße, 33
> 80636 München
>
> Geschäftsführer: Paul Manicle, Liana Sebastian
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg


Re: [PATCH v4 07/11] mm: kasan: Use is_kernel() helper

2021-09-30 Thread Alexander Potapenko
On Thu, Sep 30, 2021 at 9:09 AM Kefeng Wang  wrote:
>
> Directly use is_kernel() helper in kernel_or_module_addr().
>
> Cc: Andrey Ryabinin 
> Cc: Alexander Potapenko 
> Cc: Andrey Konovalov 
> Cc: Dmitry Vyukov 
> Signed-off-by: Kefeng Wang 
Reviewed-by: Alexander Potapenko 

> ---
>  mm/kasan/report.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 3239fd8f8747..1c955e1c98d5 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -226,7 +226,7 @@ static void describe_object(struct kmem_cache *cache, void *object,
>
>  static inline bool kernel_or_module_addr(const void *addr)
>  {
> -   if (addr >= (void *)_stext && addr < (void *)_end)
> +   if (is_kernel((unsigned long)addr))
> return true;
> if (is_module_address((unsigned long)addr))
> return true;
> --
> 2.26.2
>
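
For context, is_kernel() presumably has the same shape as the
open-coded check it replaces - something like this sketch (assuming
<asm-generic/sections.h> for _stext/_end; not necessarily the exact
definition from the series):

static inline bool is_kernel(unsigned long addr)
{
	/* Covers the kernel image proper, from _stext to _end. */
	return addr >= (unsigned long)_stext && addr < (unsigned long)_end;
}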


-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg


Re: [RFC PATCH v1] powerpc: Enable KFENCE for PPC32

2021-03-02 Thread Alexander Potapenko
> [   14.998426] BUG: KFENCE: invalid read in finish_task_switch.isra.0+0x54/0x23c
> [   14.998426]
> [   15.007061] Invalid read at 0x(ptrval):
> [   15.010906]  finish_task_switch.isra.0+0x54/0x23c
> [   15.015633]  kunit_try_run_case+0x5c/0xd0
> [   15.019682]  kunit_generic_run_threadfn_adapter+0x24/0x30
> [   15.025099]  kthread+0x15c/0x174
> [   15.028359]  ret_from_kernel_thread+0x14/0x1c
> [   15.032747]
> [   15.034251] CPU: 0 PID: 111 Comm: kunit_try_catch Tainted: GB 5.12.0-rc1-s3k-dev-01534-g4f14ae75edf0-dirty #4674
> [   15.045811] ==
> [   15.053324] # test_invalid_access: EXPECTATION FAILED at mm/kfence/kfence_test.c:636
> [   15.053324] Expected report_matches() to be true, but is false
> [   15.068359] not ok 21 - test_invalid_access

The test expects the function name to be test_invalid_access, i.e.
the first line should be "BUG: KFENCE: invalid read in
test_invalid_access".
The error reporting function unwinds the stack, skips a couple of
"uninteresting" frames
(https://elixir.bootlin.com/linux/v5.12-rc1/source/mm/kfence/report.c#L43)
and uses the first "interesting" frame to print the report header
(https://elixir.bootlin.com/linux/v5.12-rc1/source/mm/kfence/report.c#L226).
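
A rough sketch of that idea (illustrative only - the helper name and
prefix list here are made up, not the actual mm/kfence/report.c code):

static int first_interesting_frame(const unsigned long *entries, int num)
{
	char buf[64];
	int i;

	for (i = 0; i < num; i++) {
		/* Render the symbol name, then skip KFENCE's own plumbing. */
		snprintf(buf, sizeof(buf), "%ps", (void *)entries[i]);
		if (strncmp(buf, "kfence_", 7) && strncmp(buf, "__kfence_", 9))
			return i; /* First frame not owned by KFENCE itself. */
	}
	return 0; /* Fall back to the top frame. */
}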

It's strange that test_invalid_access is missing altogether from the
stack trace - is that expected?
Can you try printing the whole stacktrace without skipping any frames
to see if that function is there?
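
For example, something like this debugging-only sketch (made-up helper,
assuming <linux/stacktrace.h> and <linux/printk.h>) placed in the
report path would show every saved frame:

static void dump_all_frames(void)
{
	unsigned long entries[64];
	unsigned int i, num;

	/* Save the current stack without skipping any frames. */
	num = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	for (i = 0; i < num; i++)
		pr_info("frame %u: %pS\n", i, (void *)entries[i]);
}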