On 08/24/2010 12:33 PM, Gleb Natapov wrote:
@@ -505,6 +506,37 @@ out_unlock:
return 0;
}
+static int FNAME(page_fault_other_cr3)(struct kvm_vcpu *vcpu, gpa_t cr3,
+ gva_t addr, u32 error_code)
+{
+ int r = 0;
+ gpa_t curr_cr3 = vcpu->arch.cr3;
+
+ if (curr_cr3 != cr3) {
+ /*
+ * We are doing a page fault on behalf of a process that is
+ * sleeping because of async PF. The PV guest takes a reference
+ * to the mm that cr3 belongs to, so it has to be valid here.
+ */
+ kvm_set_cr3(vcpu, cr3);
+ if (kvm_mmu_reload(vcpu))
+ goto switch_cr3;
+ }
With nested virtualization, we need to switch cr0, cr4, and efer as well...
On SVM or VMX or both?
Both. Let's defer this patch since it's an optimization; this is really
complicated.
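Purely as an illustration of why this gets complicated (a hypothetical sketch, not part of the posted patch): with a nested guest the temporary context switch would have to save and restore more than cr3, roughly along these lines:

	/* hypothetical: extra guest state a temporary switch would need */
	unsigned long curr_cr0 = kvm_read_cr0(vcpu);
	unsigned long curr_cr4 = kvm_read_cr4(vcpu);
	u64 curr_efer = vcpu->arch.efer;

	/*
	 * ... switch cr0/cr4/efer together with cr3, handle the fault,
	 * then restore all of them before reentering the guest ...
	 */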
+
+ r = FNAME(page_fault)(vcpu, addr, error_code, true);
+
+ if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
+ kvm_mmu_sync_roots(vcpu);
Why is this needed?
http://www.mail-archive.com/kvm@vger.kernel.org/msg37827.html
The KVM_REQ_MMU_SYNC request generated here must be processed before
switching to a different cr3 (otherwise vcpu_enter_guest will process it
with the wrong cr3 in place).
Ah, it should be part of the cr3 switch block above.
+
+switch_cr3:
+ if (curr_cr3 != vcpu->arch.cr3) {
+ kvm_set_cr3(vcpu, curr_cr3);
+ kvm_mmu_reload(vcpu);
+ }
+
+ return r;
+}
This has the nasty effect of flushing the TLB on AMD.
Which is more expensive: re-entering the guest and handling one more fault, or flushing the TLB here?
No idea. Probably the reentry. On Intel the TLB is flushed anyway.
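For reference, a minimal sketch (my reading of the suggestion above, not the posted patch) of folding the pending MMU sync into the switch-back block, so the sync request is processed while the temporary cr3 is still loaded:

switch_cr3:
	if (curr_cr3 != vcpu->arch.cr3) {
		/*
		 * Handle a pending sync request before restoring the
		 * original cr3, otherwise vcpu_enter_guest() would
		 * process it against the wrong root.
		 */
		if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
			kvm_mmu_sync_roots(vcpu);
		kvm_set_cr3(vcpu, curr_cr3);	/* flushes the TLB on AMD */
		kvm_mmu_reload(vcpu);
	}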