Michael Ellerman's on July 23, 2019 8:52 pm:
> Nicholas Piggin <npig...@gmail.com> writes:
>> create_physical_mapping expects physical addresses, but creating and
>> splitting these mappings after boot is supplying virtual (effective)
>> addresses. This can be hit by booting with limited memory then probing
>> new physical memory sections.
>>
>> Cc: Reza Arbab <ar...@linux.vnet.ibm.com>
>> Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
>> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
>
> This is not catastrophic because create_physical_mapping() just uses
> start/end to construct virtual addresses anyway, and
> __va(__va(x)) == __va(x) ?
A bit more subtle, it calls __map_kernel_page with the pa as well.
pfn_pte ends up masking the top 0xc bits out with PTE_RPN_MASK, which
is what saves us.

Hmm, so we really should also have a VM_BUG_ON in pfn_pte if it's given
a pfn with the top PAGE_SHIFT bit or PTE_RPN_MASK bits set. I'll add
that as a patch 5.

> Although we do pass those through as region_start/end which then go to
> memblock_alloc_try_nid(). But I guess that doesn't happen after boot,
> which is the case you're talking about.
>
> So I think looks good, change log could use a bit more detail though :)

Thanks for taking a look. I'll resend after a bit more testing and some
changelog improvement.

Thanks,
Nick