David Gibson <da...@gibson.dropbear.id.au> writes:
Hi, just a nitpick, feel free to ignore.

> When in VRMA mode (i.e. a guest thinks it has the MMU off, but the
> hypervisor is still applying translation) we use a special SLB entry,
> rather than looking up an SLBE by address as we do when guest translation
> is on.
>
> We build that special entry in ppc_hash64_update_vrma() along with some
> logic for handling some non-VRMA cases.  Split the actual build of the
> VRMA SLBE into a separate helper and streamline it a bit.
>
> Signed-off-by: David Gibson <da...@gibson.dropbear.id.au>
> ---
>  target/ppc/mmu-hash64.c | 79 ++++++++++++++++++++---------------------
>  1 file changed, 38 insertions(+), 41 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 170a78bd2e..06cfff9860 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -789,6 +789,39 @@ static target_ulong rmls_limit(PowerPCCPU *cpu)
>      }
>  }
>
> +static int build_vrma_slbe(PowerPCCPU *cpu, ppc_slb_t *slb)
> +{
> +    CPUPPCState *env = &cpu->env;
> +    target_ulong lpcr = env->spr[SPR_LPCR];
> +    uint32_t vrmasd = (lpcr & LPCR_VRMASD) >> LPCR_VRMASD_SHIFT;
> +    target_ulong vsid = SLB_VSID_VRMA | ((vrmasd << 4) & SLB_VSID_LLP_MASK);
> +    int i;
> +
> +    /*
> +     * Make one up. Mostly ignore the ESID which will not be needed
> +     * for translation
> +     */

I find this comment a bit vague.  I suggest we either drop it or make it
more precise.  The ISA says:

  "translation of effective addresses to virtual addresses use the SLBE
   values in Figure 18 instead of the entry in the SLB corresponding to
   the ESID"