On Thu, Aug 29, 2013 at 12:00:53AM +0200, Alexander Graf wrote:
> 
> On 06.08.2013, at 06:16, Paul Mackerras wrote:
> 
> > kvm_start_lightweight:
> > +   /* Copy registers into shadow vcpu so we can access them in real mode */
> > +   GET_SHADOW_VCPU(r3)
> > +   bl      FUNC(kvmppc_copy_to_svcpu)
> 
> This will clobber r3 and r4, no? We need to restore them from the stack here 
> I would think.

You're right.  We don't need to restore r3, since we don't actually use
it after the call, but we do need to restore r4 from the stack.
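Concretely, something along these lines — this is only a sketch, assuming r4
holds the vcpu pointer and that the ELF ABI volatile registers (r3–r12) may
be clobbered across the call; the `_VCPU` frame offset name is illustrative,
not the real symbol:

```
	/* Save r4 (vcpu) across the call; r3 is reloaded below anyway */
	PPC_STL	r4, _VCPU(r1)		/* _VCPU: illustrative stack offset */

	/* Copy registers into shadow vcpu so we can access them in real mode */
	GET_SHADOW_VCPU(r3)
	bl	FUNC(kvmppc_copy_to_svcpu)

	/* r3/r4 are volatile and were clobbered; restore the vcpu pointer */
	PPC_LL	r4, _VCPU(r1)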

> > #ifdef CONFIG_PPC_BOOK3S_32
> >             /* We set segments as unused segments when invalidating them. So
> >              * treat the respective fault as segment fault. */
> > -           if (svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT] == SR_INVALID) {
> > -                   kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
> > -                   r = RESUME_GUEST;
> > +           {
> > +                   struct kvmppc_book3s_shadow_vcpu *svcpu;
> > +                   u32 sr;
> > +
> > +                   svcpu = svcpu_get(vcpu);
> > +                   sr = svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT];
> 
> Doesn't this break two concurrently running guests now that we don't copy the 
> shadow vcpu anymore? Just move the sr array to a kmalloc'ed area until the 
> whole vcpu is kmalloc'ed. Then you can get rid of all shadow vcpu code.

This is 32-bit only, and there the svcpu is already kmalloc'ed, so I'm
not sure what you're asking for here, or why you think this would break
with multiple guests.
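For reference, here is roughly how that hunk continues — a sketch based on
the deleted lines quoted above, with the usual svcpu_get/svcpu_put pairing;
treat the exact placement as illustrative rather than the final patch:

```
#ifdef CONFIG_PPC_BOOK3S_32
		/* We set segments as unused segments when invalidating them. So
		 * treat the respective fault as segment fault. */
		{
			struct kvmppc_book3s_shadow_vcpu *svcpu;
			u32 sr;

			svcpu = svcpu_get(vcpu);
			sr = svcpu->sr[kvmppc_get_pc(vcpu) >> SID_SHIFT];
			svcpu_put(svcpu);
			if (sr == SR_INVALID) {
				kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
				r = RESUME_GUEST;
				break;
			}
		}
#endif
```

Since each vcpu carries its own svcpu (and on 32-bit it is part of the
kmalloc'ed vcpu), two concurrently running guests each read their own sr
array here; there is no shared state to race on.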

Paul.
