On Wed, Feb 20, 2019 at 03:59:58PM +1100, Russell Currey wrote:
> Using the hash MMU on P7+, the AMR is used for pkeys.  It's important
This needs a bit of rewording.  The "Using" ... "used" construct is a
bit confusing on the first read.  Also, there was a processor called
P7+, but I think you're trying to say P7 and subsequent processors.

> that the host and guest never end up with each other's AMR value, since
> this could disrupt operations and break things.

What sort of breakage?  Guest kernel crash?  Host kernel crash?

> The AMR gets correctly restored on context switch, however before this
> happens (i.e. in a program like qemu) having the host value of the AMR
> be zero would interfere with that program using pkeys.
> 
> In addition, the AMR on Radix can control kernel access to userspace
> data, which you wouldn't want to be zeroed.
> 
> So, just save and restore it like the other registers that get saved and
> restored.
> 
> Fixes: cf43d3b26452 ("powerpc: Enable pkey subsystem")
> Cc: <sta...@vger.kernel.org> # v4.16+
> Signed-off-by: Russell Currey <rus...@russell.cc>
> ---
> I'm not entirely sure the stack frame numbers are correct, I've tested it
> and it works but it'd be good if someone could double check this.
> 
>  arch/powerpc/kvm/book3s_hv_rmhandlers.S | 13 +++++++++----
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 9b8d50a7cbaf..6291751c4ad9 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -47,7 +47,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>  #define NAPPING_NOVCPU	2
>  
>  /* Stack frame offsets for kvmppc_hv_entry */
> -#define SFS			208
> +#define SFS			224 /* must be divisible by 16 */

I don't think this needs to be increased.  What's happening here is
that we have a "short path" where most SPRs are saved/loaded/restored
etc. in C code in book3s_hv.c, and that path uses 144 bytes on the
stack to save/restore 18 GPR values (r14 - r31).  Your patch needs to
add code to context-switch AMR in kvmhv_p9_guest_entry() to fix that
path.

Alternatively there is the slow path where we go to real mode and do
most of the SPR loading in assembler code.  That path uses the stack
to store host SPR values in STACK_SLOT_TID, PSSCR, PID, etc.

>  #define STACK_SLOT_TRAP		(SFS-4)
>  #define STACK_SLOT_SHORT_PATH	(SFS-8)
>  #define STACK_SLOT_TID		(SFS-16)
> @@ -58,8 +58,9 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>  #define STACK_SLOT_DAWR		(SFS-56)
>  #define STACK_SLOT_DAWRX	(SFS-64)
>  #define STACK_SLOT_HFSCR	(SFS-72)
> +#define STACK_SLOT_AMR		(SFS-80)
>  /* the following is used by the P9 short path */
> -#define STACK_SLOT_NVGPRS	(SFS-152)	/* 18 gprs */
> +#define STACK_SLOT_NVGPRS	(SFS-160)	/* 18 gprs */

I don't see why you need to change this either.

> 
>  /*
>   * Call kvmppc_hv_entry in real mode.
> @@ -743,6 +744,9 @@ BEGIN_FTR_SECTION
>  	std	r7, STACK_SLOT_DAWRX(r1)
>  END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
>  
> +	mfspr	r5, SPRN_AMR
> +	std	r5, STACK_SLOT_AMR(r1)
> +
>  BEGIN_FTR_SECTION
>  	/* Set partition DABR */
>  	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
> @@ -1640,13 +1644,14 @@ BEGIN_FTR_SECTION
>  END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>  8:
>  
> -	/* Save and reset AMR and UAMOR before turning on the MMU */
> +	/* Save and restore/reset AMR and UAMOR before turning on the MMU */
>  	mfspr	r5,SPRN_AMR
>  	mfspr	r6,SPRN_UAMOR
>  	std	r5,VCPU_AMR(r9)
>  	std	r6,VCPU_UAMOR(r9)
> +	ld	r5,STACK_SLOT_AMR(r1)
>  	li	r6,0
> -	mtspr	SPRN_AMR,r6
> +	mtspr	SPRN_AMR, r5
>  	mtspr	SPRN_UAMOR, r6
>  
>  	/* Switch DSCR back to host value */
> -- 
> 2.20.1

Rest looks fine, but as I said, the patch needs to hit
kvmhv_p9_guest_entry() also.

Paul.
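P.S.  For the short path, what I have in mind is roughly the sketch
below for kvmhv_p9_guest_entry() in book3s_hv.c.  Treat it as an
illustration only: the host_amr local and exactly where these lines
sit relative to the existing host SPR save/restore in that function
are assumptions for the example, not the final code.

	/* save the host AMR alongside the other host SPR values
	 * (host_amr is just an illustrative name) */
	unsigned long host_amr = mfspr(SPRN_AMR);

	/* load the guest's AMR before entering the guest */
	mtspr(SPRN_AMR, vcpu->arch.amr);

	/* ... guest runs ... */

	/* on exit, save the guest value and put the host value back */
	vcpu->arch.amr = mfspr(SPRN_AMR);
	mtspr(SPRN_AMR, host_amr);

That would keep the fast path consistent with what your assembler
change does on the slow path.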