MMIO is slightly slower than port IO because it goes through the page
tables, so the CPU must do a page walk on each access.

This overhead is normally masked by the TLB, but that does not help
KVM MMIO, where the PTEs are marked reserved and so are never cached
in the TLB.

As ioeventfd memory is never read, make it possible to use RO pages
on the host for ioeventfds instead.
The result is that the translations get cached in the TLB, which
finally makes MMIO as fast as port IO.

Warning: untested.

Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
---
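For context (not part of the patch): a rough sketch of the userspace side.
A wildcard, zero-length ioeventfd is what ends up on KVM_FAST_MMIO_BUS, so
the in-kernel match added below can fire on a plain guest write.  vm_fd and
notify_addr are illustrative; KVM_IOEVENTFD and struct kvm_ioeventfd are the
existing uAPI.

#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Register a wildcard, zero-length ioeventfd for a guest MMIO address. */
static int register_fast_mmio_notify(int vm_fd, __u64 notify_addr)
{
	int efd = eventfd(0, EFD_NONBLOCK);
	struct kvm_ioeventfd ioev = {
		.addr = notify_addr,	/* guest-physical MMIO address */
		.len  = 0,		/* zero length: any write matches (fast MMIO bus) */
		.fd   = efd,
	};

	if (efd < 0)
		return -1;
	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
		return -1;
	return efd;	/* read/poll this fd to observe guest kicks */
}
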
 arch/x86/kvm/svm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8e0c084..6422fac 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1812,6 +1812,11 @@ static int pf_interception(struct vcpu_svm *svm)
        switch (svm->apf_reason) {
        default:
                error_code = svm->vmcb->control.exit_info_1;
+               if (!kvm_io_bus_write(&svm->vcpu, KVM_FAST_MMIO_BUS,
+                                     fault_address, 0, NULL)) {
+                       skip_emulated_instruction(&svm->vcpu);
+                       return 1;
+               }
 
                trace_kvm_page_fault(fault_address, error_code);
                if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
-- 
MST
