From: Paul Durrant <pdurr...@amazon.com>

If the guest sets an explicit vcpu_info GPA then, for any of the first 32
vCPUs, the content of the default vcpu_info in the shared_info page must be
copied into the new location. Because this copy may race with event
delivery (which updates the 'evtchn_pending_sel' field in vcpu_info),
there needs to be a way to defer delivery until the copy is complete.
Happily, there is already a shadow of 'evtchn_pending_sel' in
kvm_vcpu_xen that is used in atomic context if the vcpu_info PFN cache
has been invalidated, so that the update of vcpu_info can be deferred
until the cache can be refreshed (on the vCPU thread's way back into
guest context).
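
As an illustration only (not kernel code, and not part of this patch),
a minimal user-space model of that deferral scheme is sketched below;
the type and function names are invented for the sketch:

  #include <stdatomic.h>
  #include <stdio.h>

  struct vcpu_info_model {
  	atomic_ulong evtchn_pending_sel;	/* lives in guest memory */
  };

  struct vcpu_model {
  	struct vcpu_info_model *vcpu_info;	/* NULL => cache invalid/inactive */
  	atomic_ulong shadow_pending_sel;	/* models the kvm_vcpu_xen shadow */
  };

  /* Event delivery: use the mapping if usable, else park the bit in the shadow. */
  static void set_pending_sel(struct vcpu_model *v, unsigned int bit)
  {
  	if (v->vcpu_info)
  		atomic_fetch_or(&v->vcpu_info->evtchn_pending_sel, 1UL << bit);
  	else
  		atomic_fetch_or(&v->shadow_pending_sel, 1UL << bit);
  }

  /* On the way back into guest context, fold the shadow into vcpu_info. */
  static void fold_shadow(struct vcpu_model *v)
  {
  	unsigned long pending = atomic_exchange(&v->shadow_pending_sel, 0);

  	if (pending)
  		atomic_fetch_or(&v->vcpu_info->evtchn_pending_sel, pending);
  }

  int main(void)
  {
  	struct vcpu_info_model vi = { 0 };
  	struct vcpu_model v = { .vcpu_info = NULL };	/* cache deactivated */

  	set_pending_sel(&v, 3);		/* deferred into the shadow */
  	v.vcpu_info = &vi;		/* cache re-activated at the new GPA */
  	fold_shadow(&v);		/* delivered on re-entry */

  	printf("evtchn_pending_sel = %#lx\n",
  	       (unsigned long)atomic_load(&vi.evtchn_pending_sel));
  	return 0;
  }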

Also use this shadow if the vcpu_info cache has been *deactivated*, so that
the VMM can safely copy the vcpu_info content and then re-activate the
cache with the new GPA. To do this, stop considering an inactive vcpu_info
cache as a hard error in kvm_xen_set_evtchn_fast().
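
For context, the VMM-side sequence that this enables could look roughly
like the sketch below. It assumes the existing KVM Xen vCPU attribute
interface (KVM_XEN_VCPU_SET_ATTR with KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
where KVM_XEN_INVALID_GPA deactivates the cache); gpa_to_hva() is a
placeholder for the VMM's own guest-memory mapping, and error handling
is mostly elided:

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Placeholder: translate a guest GPA to a host pointer in the VMM. */
  void *gpa_to_hva(unsigned long long gpa);

  static int relocate_vcpu_info(int vcpu_fd,
  			      unsigned long long shinfo_vcpu_info_gpa,
  			      unsigned long long new_gpa,
  			      size_t vcpu_info_size)
  {
  	struct kvm_xen_vcpu_attr va = {
  		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
  	};
  	int ret;

  	/* Deactivate: events now accumulate in the in-kernel shadow. */
  	va.u.gpa = KVM_XEN_INVALID_GPA;
  	ret = ioctl(vcpu_fd, KVM_XEN_VCPU_SET_ATTR, &va);
  	if (ret)
  		return ret;

  	/* Safe to copy the default vcpu_info out of the shared_info page. */
  	memcpy(gpa_to_hva(new_gpa), gpa_to_hva(shinfo_vcpu_info_gpa),
  	       vcpu_info_size);

  	/* Re-activate at the new GPA; deferred events are delivered there. */
  	va.u.gpa = new_gpa;
  	return ioctl(vcpu_fd, KVM_XEN_VCPU_SET_ATTR, &va);
  }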

Signed-off-by: Paul Durrant <pdurr...@amazon.com>
Reviewed-by: David Woodhouse <d...@amazon.co.uk>
---
Cc: David Woodhouse <dw...@infradead.org>
Cc: Sean Christopherson <sea...@google.com>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Borislav Petkov <b...@alien8.de>
Cc: Dave Hansen <dave.han...@linux.intel.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x...@kernel.org

v13:
 - Patch title change.

v8:
 - Update commit comment.

v6:
 - New in this version.
---
 arch/x86/kvm/xen.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 8650141b266e..11ab62ca011d 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1802,9 +1802,6 @@ int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
                WRITE_ONCE(xe->vcpu_idx, vcpu->vcpu_idx);
        }
 
-       if (!vcpu->arch.xen.vcpu_info_cache.active)
-               return -EINVAL;
-
        if (xe->port >= max_evtchn_port(kvm))
                return -EINVAL;
 
-- 
2.39.2

