On Sep 1, 2008, at 3:41 PM, Avi Kivity wrote:

Alexander Graf wrote:
This patch implements VMRUN. VMRUN enters a virtual CPU and runs it
in the same context as the normal guest CPU would run in.
So basically it is implemented the same way a normal CPU would do it.

We also prepare all intercepts that get OR'ed with the original
intercepts, as we do not allow a level 2 guest to be intercepted on
fewer events than the first-level guest.

 +/* Not needed until device passthrough */
+/* #define NESTED_KVM_MERGE_IOPM */
+


I'd like to drop port 80 passthrough anyway. Device assignment is unlikely to make heavy use of ioports.

@@ -663,9 +674,21 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
        msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
        if (!msrpm_pages)
                goto uninit;
+
+       nested_msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
+       if (!nested_msrpm_pages)
+               goto uninit;
+
+       nested_iopm_pages = alloc_pages(GFP_KERNEL, IOPM_ALLOC_ORDER);
+       if (!nested_iopm_pages)
+               goto uninit;
+


Maybe we should do that on the first time the guest enters nested svm, to save a bit of memory.

We can do that in a later patch, though.

+
+static int nested_svm_vmrun_msrpm(struct vcpu_svm *svm, void *arg1,
+                                 void *arg2, void *opaque)
+{
+       int i;
+       u32 *nested_msrpm = (u32 *)arg1;
+
+       for (i = 0; i < PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER) / 4; i++)
+               svm->nested_msrpm[i] = svm->msrpm[i] | nested_msrpm[i];
+       svm->vmcb->control.msrpm_base_pa = __pa(svm->nested_msrpm);
+
+       return 0;
+}


Hm. Have you verified that kvm actually has msr emulation for all the msrs it allows through msrpm?

I guess it has to, since the msrs can be set through save/restore.


(vmrun emulation)
+
+       force_new_asid(&svm->vcpu);


It would be nice not to do this (can be left for later, of course; it could be quite complex).

+
+static int vmrun_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+       nsvm_printk("VMrun\n");
+
+       svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+       skip_emulated_instruction(&svm->vcpu);
+
+       if (svm->vmcb->save.cpl) {
+               printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n",
+                      __func__, svm->vmcb->save.cpl, kvm_rip_read(&svm->vcpu));
+               kvm_queue_exception(&svm->vcpu, GP_VECTOR);
+               return 1;
+       }


Skip after check.

I think you also need special treatment for the guest's eflags.if. If interrupts are enabled for the guest when vmrun is executed, and kvm tries to inject a virtual interrupt, then it should result in a virtual #VMEXIT.

For now I just always assume that's the case. It might be a good idea to store the real eflags.if somewhere in the hflags though.

Btw: Thanks a bunch for reviewing all this!



--
error compiling committee.c: too many arguments to function

