On Tue, Jun 6, 2017 at 3:48 PM, Michael Ellerman <m...@ellerman.id.au> wrote:
> Currently we map the whole linear mapping with PAGE_KERNEL_X. Instead we
> should check if the page overlaps the kernel text and only then add
> PAGE_KERNEL_X.
>
> Note that we still use 1G pages if they're available, so this will
> typically still result in a 1G executable page at KERNELBASE. So this fix
> is primarily useful for catching stray branches to high linear mapping
> addresses.
>
> Without this patch, we can execute at 1G in xmon using:
>
>   0:mon> m c000000040000000
>   c000000040000000  00 l
>   c000000040000000  00000000 01006038
>   c000000040000004  00000000 2000804e
>   c000000040000008  00000000 x
>   0:mon> di c000000040000000
>   c000000040000000  38600001      li      r3,1
>   c000000040000004  4e800020      blr
>   0:mon> p c000000040000000
>   return value is 0x1
>
> After we get a 400 as expected:
>
>   0:mon> p c000000040000000
>   *** 400 exception occurred
>
> Fixes: 2bfd65e45e87 ("powerpc/mm/radix: Add radix callbacks for early init routines")
> Cc: sta...@vger.kernel.org # v4.7+
> Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
> ---
>  arch/powerpc/mm/pgtable-radix.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
> index c28165d8970b..6c062f92b9e4 100644
> --- a/arch/powerpc/mm/pgtable-radix.c
> +++ b/arch/powerpc/mm/pgtable-radix.c
> @@ -19,6 +19,7 @@
>  #include <asm/mmu.h>
>  #include <asm/firmware.h>
>  #include <asm/powernv.h>
> +#include <asm/sections.h>
>
>  #include <trace/events/thp.h>
>
> @@ -121,7 +122,8 @@ static inline void __meminit print_mapping(unsigned long start,
>  static int __meminit create_physical_mapping(unsigned long start,
>  					     unsigned long end)
>  {
> -	unsigned long addr, mapping_size = 0;
> +	unsigned long vaddr, addr, mapping_size = 0;
> +	pgprot_t prot;
>
>  	start = _ALIGN_UP(start, PAGE_SIZE);
>  	for (addr = start; addr < end; addr += mapping_size) {
> @@ -145,8 +147,14 @@ static int __meminit create_physical_mapping(unsigned long start,
>  			start = addr;
>  		}
>
> -		rc = radix__map_kernel_page((unsigned long)__va(addr), addr,
> -					    PAGE_KERNEL_X, mapping_size);
> +		vaddr = (unsigned long)__va(addr);
> +
> +		if (overlaps_kernel_text(vaddr, vaddr + mapping_size))
> +			prot = PAGE_KERNEL_X;
> +		else
> +			prot = PAGE_KERNEL;
Do we need the kvm_tmp/trampoline bits here, like hash has?

Balbir Singh.