On Tue, Feb 23, 2021 at 01:55:44PM -0800, Mike Kravetz wrote:
> Gerald Schaefer reported a panic on s390 in hugepage_subpool_put_pages()
> with linux-next 5.12.0-20210222.
> Call trace:
>   hugepage_subpool_put_pages.part.0+0x2c/0x138
>   __free_huge_page+0xce/0x310
>   alloc_pool_huge_page+0x102/0x120
>   set_max_huge_pages+0x13e/0x350
>   hugetlb_sysctl_handler_common+0xd8/0x110
>   hugetlb_sysctl_handler+0x48/0x58
>   proc_sys_call_handler+0x138/0x238
>   new_sync_write+0x10e/0x198
>   vfs_write.part.0+0x12c/0x238
>   ksys_write+0x68/0xf8
>   do_syscall+0x82/0xd0
>   __do_syscall+0xb4/0xc8
>   system_call+0x72/0x98
> 
> This is a result of the change which moved the hugetlb page subpool
> pointer from page->private to page[1]->private.  When new pages are
> allocated from the buddy allocator, the private field of the head
> page is cleared, but the private fields of the tail pages are left
> untouched.  A stale value in page[1]->private can therefore later be
> dereferenced as a subpool pointer, as in the panic above.
> 
> Fix by initializing hugetlb page subpool pointer in prep_new_huge_page().
> 
> Fixes: f1280272ae4d ("hugetlb: use page.private for hugetlb specific page flags")
> Reported-by: Gerald Schaefer <gerald.schae...@linux.ibm.com>
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>

Do we still need the hugetlb_set_page_subpool() call in __free_huge_page()?

Reviewed-by: Oscar Salvador <osalva...@suse.de>


-- 
Oscar Salvador
SUSE L3
