Re: [PATCH 2/6] powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()
Daniel Axtens writes:
> Michael Ellerman writes:
>
>> The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes
>> the key in bits 9-13, but currently we always set those bits to zero.
>>
>> In the past that hasn't been a problem because we always used key 0
>> for the kernel, and updateboltedpp() is only used for kernel mappings.
>>
>> However since commit d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3
>> for kernel mapping with hash translation") we are now inadvertently
>> changing the key (to zero) when we call plpar_pte_protect().
>>
>> That hasn't broken anything because updateboltedpp() is only used for
>> STRICT_KERNEL_RWX, which is currently disabled on 64s due to other
>> bugs.
>>
>> But we want to fix that, so first we need to pass the key correctly to
>> plpar_pte_protect(). In the `newpp` value the low 3 bits of the key
>> are already in the correct spot, but the high 2 bits of the key need
>> to be shifted down.
>>
>> Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
>> Signed-off-by: Michael Ellerman
>> ---
>>  arch/powerpc/platforms/pseries/lpar.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
>> index 764170fdb0f7..8bbbddff7226 100644
>> --- a/arch/powerpc/platforms/pseries/lpar.c
>> +++ b/arch/powerpc/platforms/pseries/lpar.c
>> @@ -976,11 +976,13 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
>>  	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
>>  	BUG_ON(slot == -1);
>>
>> -	flags = newpp & 7;
>> +	flags = newpp & (HPTE_R_PP | HPTE_R_N);
>>  	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
>>  		/* Move pp0 into bit 8 (IBM 55) */
>>  		flags |= (newpp & HPTE_R_PP0) >> 55;
>>
>> +	flags |= ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);
>> +
>
> I'm really confused about how these bits are getting packed into the
> flags parameter.
> It seems to match how they are unpacked in
> kvmppc_h_pr_protect, but I cannot figure out why they are packed in that
> order, and the LoPAR doesn't seem especially illuminating on this topic
> - although I may have missed the relevant section.

Yeah I agree it's not very clearly specified.

The hcall we're using here is H_PROTECT, which is specified in section
14.5.4.1.6 of LoPAPR v1.1. It takes a `flags` parameter, and the
description for flags says:

 * flags: AVPN, pp0, pp1, pp2, key0-key4, n, and for the CMO option:
   CMO Option flags as defined in Table 189.

If you then go to the start of the parent section, 14.5.4.1, on page
405, it says:

 Register Linkage (For hcall() tokens 0x04 - 0x18)
  * On Call
    * R3 function call token
    * R4 flags (see Table 178, "Page Frame Table Access flags field
      definition," on page 401)

Then you have to go to section 14.5.3, and on page 394 there is a list
of hcalls and their tokens (table 176), and there you can see that
H_PROTECT == 0x18.

Finally you can look at table 178, on page 401, where it specifies the
layout of the bits for the key:

 Bit    Function
 50-54  key0-key4

Those are big-endian (IBM) bit numbers; converting to normal bit numbers
you get bits 9-13, or 0x3e00.

If you look at the kernel source we have:

#define HPTE_R_KEY_HI		ASM_CONST(0x3000000000000000)
#define HPTE_R_KEY_LO		ASM_CONST(0x0000000000000e00)

So the LO bits are already in the right place, and the HI bits just need
to be shifted down by 48.

Hope that makes it clearer :)

cheers
Re: [PATCH 2/6] powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()
Michael Ellerman writes:
> The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes
> the key in bits 9-13, but currently we always set those bits to zero.
>
> In the past that hasn't been a problem because we always used key 0
> for the kernel, and updateboltedpp() is only used for kernel mappings.
>
> However since commit d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3
> for kernel mapping with hash translation") we are now inadvertently
> changing the key (to zero) when we call plpar_pte_protect().
>
> That hasn't broken anything because updateboltedpp() is only used for
> STRICT_KERNEL_RWX, which is currently disabled on 64s due to other
> bugs.
>
> But we want to fix that, so first we need to pass the key correctly to
> plpar_pte_protect(). In the `newpp` value the low 3 bits of the key
> are already in the correct spot, but the high 2 bits of the key need
> to be shifted down.
>
> Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
> Signed-off-by: Michael Ellerman
> ---
>  arch/powerpc/platforms/pseries/lpar.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> index 764170fdb0f7..8bbbddff7226 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -976,11 +976,13 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
>  	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
>  	BUG_ON(slot == -1);
>
> -	flags = newpp & 7;
> +	flags = newpp & (HPTE_R_PP | HPTE_R_N);
>  	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
>  		/* Move pp0 into bit 8 (IBM 55) */
>  		flags |= (newpp & HPTE_R_PP0) >> 55;
>
> +	flags |= ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);
> +

I'm really confused about how these bits are getting packed into the
flags parameter.
It seems to match how they are unpacked in
kvmppc_h_pr_protect, but I cannot figure out why they are packed in that
order, and the LoPAR doesn't seem especially illuminating on this topic
- although I may have missed the relevant section.

Kind regards,
Daniel

> 	lpar_rc = plpar_pte_protect(flags, slot, 0);
>
> 	BUG_ON(lpar_rc != H_SUCCESS);
> --
> 2.25.1
[PATCH 2/6] powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp()
The flags argument to plpar_pte_protect() (aka. H_PROTECT), includes
the key in bits 9-13, but currently we always set those bits to zero.

In the past that hasn't been a problem because we always used key 0
for the kernel, and updateboltedpp() is only used for kernel mappings.

However since commit d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3
for kernel mapping with hash translation") we are now inadvertently
changing the key (to zero) when we call plpar_pte_protect().

That hasn't broken anything because updateboltedpp() is only used for
STRICT_KERNEL_RWX, which is currently disabled on 64s due to other
bugs.

But we want to fix that, so first we need to pass the key correctly to
plpar_pte_protect(). In the `newpp` value the low 3 bits of the key
are already in the correct spot, but the high 2 bits of the key need
to be shifted down.

Fixes: d94b827e89dc ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
Signed-off-by: Michael Ellerman
---
 arch/powerpc/platforms/pseries/lpar.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 764170fdb0f7..8bbbddff7226 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -976,11 +976,13 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
 	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
 	BUG_ON(slot == -1);
 
-	flags = newpp & 7;
+	flags = newpp & (HPTE_R_PP | HPTE_R_N);
 	if (mmu_has_feature(MMU_FTR_KERNEL_RO))
 		/* Move pp0 into bit 8 (IBM 55) */
 		flags |= (newpp & HPTE_R_PP0) >> 55;
 
+	flags |= ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);
+
 	lpar_rc = plpar_pte_protect(flags, slot, 0);
 
 	BUG_ON(lpar_rc != H_SUCCESS);
-- 
2.25.1