> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Saturday, August 03, 2013 5:05 AM
> To: Bhushan Bharat-R65777
> Cc: b...@kernel.crashing.org; ag...@suse.de; kvm-ppc@vger.kernel.org;
> k...@vger.kernel.org; linuxppc-...@lists.ozlabs.org; Bhushan Bharat-R65777
> Subject: Re: [PATCH 6/6 v2] kvm: powerpc: use caching attributes as per linux pte
> 
> On Thu, Aug 01, 2013 at 04:42:38PM +0530, Bharat Bhushan wrote:
> > diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> > index 17722d8..ebcccc2 100644
> > --- a/arch/powerpc/kvm/booke.c
> > +++ b/arch/powerpc/kvm/booke.c
> > @@ -697,7 +697,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> >  #endif
> >
> >     kvmppc_fix_ee_before_entry();
> > -
> > +   vcpu->arch.pgdir = current->mm->pgd;
> >     ret = __kvmppc_vcpu_run(kvm_run, vcpu);
> 
> kvmppc_fix_ee_before_entry() is supposed to be the last thing that happens
> before __kvmppc_vcpu_run().
> 
> > @@ -332,6 +324,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> >     unsigned long hva;
> >     int pfnmap = 0;
> >     int tsize = BOOK3E_PAGESZ_4K;
> > +   pte_t pte;
> > +   int wimg = 0;
> >
> >     /*
> >      * Translate guest physical to true physical, acquiring
> > @@ -437,6 +431,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> >
> >     if (likely(!pfnmap)) {
> >             unsigned long tsize_pages = 1 << (tsize + 10 - PAGE_SHIFT);
> > +           pgd_t *pgdir;
> > +
> >             pfn = gfn_to_pfn_memslot(slot, gfn);
> >             if (is_error_noslot_pfn(pfn)) {
> >                     printk(KERN_ERR "Couldn't get real page for gfn %lx!\n",
> > @@ -447,9 +443,18 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
> >             /* Align guest and physical address to page map boundaries */
> >             pfn &= ~(tsize_pages - 1);
> >             gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
> > +           pgdir = vcpu_e500->vcpu.arch.pgdir;
> > +           pte = lookup_linux_pte(pgdir, hva, 1, &tsize_pages);
> > +           if (pte_present(pte)) {
> > +                   wimg = (pte >> PTE_WIMGE_SHIFT) & MAS2_WIMGE_MASK;
> > +           } else {
> > +                   printk(KERN_ERR "pte not present: gfn %lx, pfn %lx\n",
> > +                                   (long)gfn, pfn);
> > +                   return -EINVAL;
> > +           }
> >     }
> 
> How does wimg get set in the pfnmap case?

pfnmap pages are not kernel-managed, right? So should we set I+G there?

> 
> Could you explain why we need to set dirty/referenced on the PTE, when we 
> didn't
> need to do that before? All we're getting from the PTE is wimg.
> We have MMU notifiers to take care of the page being unmapped, and we've 
> already
> marked the page itself as dirty if the TLB entry is writeable.

I pulled this code from book3s.

Ben, can you describe why we need this on book3s?

Thanks
-Bharat
> 
> -Scott
