Re: [PATCH] x86/mm: fix a crash with kmemleak_scan()

2019-04-24 Thread Catalin Marinas
On Tue, Apr 23, 2019 at 12:58:11PM -0400, Qian Cai wrote:
> The first kmemleak_scan() after boot triggers the crash below because the call chain
> 
> kernel_init
>   free_initmem
> mem_encrypt_free_decrypted_mem
>   free_init_pages
> 
> unmaps some memory inside .bss when DEBUG_PAGEALLOC=y. kmemleak_init()
> registers the .data/.bss sections (and .data..ro_after_init when it is
> not contained in .data), and kmemleak_scan() then scans those addresses,
> dereferencing them while looking for pointer references. If
> free_init_pages() frees and unmaps pages in those sections,
> kmemleak_scan() crashes when it dereferences one of those addresses.
> 
> BUG: unable to handle kernel paging request at bd402000
> CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
> RIP: 0010:scan_block+0x58/0x160
> Call Trace:
>  scan_gray_list+0x1d9/0x280
>  kmemleak_scan+0x485/0xad0
>  kmemleak_scan_thread+0x9f/0xc4
>  kthread+0x1d2/0x1f0
>  ret_from_fork+0x35/0x40
> 
> Since kmemleak_free_part() is tolerant of unknown objects (those not
> tracked by kmemleak), it is safe to call it from free_init_pages() even
> if not all address ranges passed to it are known to kmemleak.
> 
> Signed-off-by: Qian Cai 

Reviewed-by: Catalin Marinas 


[PATCH] x86/mm: fix a crash with kmemleak_scan()

2019-04-23 Thread Qian Cai
The first kmemleak_scan() after boot triggers the crash below because the call chain

kernel_init
  free_initmem
mem_encrypt_free_decrypted_mem
  free_init_pages

unmaps some memory inside .bss when DEBUG_PAGEALLOC=y. kmemleak_init()
registers the .data/.bss sections (and .data..ro_after_init when it is
not contained in .data), and kmemleak_scan() then scans those addresses,
dereferencing them while looking for pointer references. If
free_init_pages() frees and unmaps pages in those sections,
kmemleak_scan() crashes when it dereferences one of those addresses.

BUG: unable to handle kernel paging request at bd402000
CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
RIP: 0010:scan_block+0x58/0x160
Call Trace:
 scan_gray_list+0x1d9/0x280
 kmemleak_scan+0x485/0xad0
 kmemleak_scan_thread+0x9f/0xc4
 kthread+0x1d2/0x1f0
 ret_from_fork+0x35/0x40

Since kmemleak_free_part() is tolerant of unknown objects (those not
tracked by kmemleak), it is safe to call it from free_init_pages() even
if not all address ranges passed to it are known to kmemleak.

Signed-off-by: Qian Cai 
---
 arch/x86/mm/init.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f905a2371080..8dacdb96899e 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -5,6 +5,7 @@
 #include 
 #include 
 #include 
+#include <linux/kmemleak.h>
 
 #include 
 #include 
@@ -766,6 +767,11 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
if (debug_pagealloc_enabled()) {
pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
begin, end - 1);
+   /*
+* Inform kmemleak about the hole in the memory since the
+* corresponding pages will be unmapped.
+*/
+   kmemleak_free_part((void *)begin, end - begin);
set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
} else {
/*
-- 
2.20.1 (Apple Git-117)