Re: [PATCH v2 06/10] KVM: x86: acknowledgment mechanism for async pf page ready notifications

2020-05-28 Thread Paolo Bonzini
On 28/05/20 13:39, Vitaly Kuznetsov wrote:
>> How is the pageready_pending flag migrated?  Should we invert the
>> direction of the MSR (i.e. read the flag, and write 0 to clear it)?
> The flag is not migrated so it will be 'false'. This can just cause an
> extra kick in kvm_arch_async_page_present_queued() but this shouldn't be
> a big deal. Also, after migration we will just send a 'wakeup all' event
> and the async pf queue will be empty.

Ah, that's kvm_pv_enable_async_pf, where the destination writes to
MSR_KVM_ASYNC_PF_EN.  Cool.

> MSR_KVM_ASYNC_PF_ACK by itself is not
> migrated, we don't even store it; not sure how inverting it would change
> things.

Yes, it would only be useful to invert it if it needs to be stored and
migrated.
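
What I had in mind, as a hypothetical sketch only (not what this series
does): make the flag readable so userspace can save/restore it, and make
a write of '0' act as the acknowledgment:

	/* kvm_get_msr_common(): expose the flag for save/restore */
	case MSR_KVM_ASYNC_PF_ACK:
		msr_info->data = vcpu->arch.apf.pageready_pending;
		break;

	/* kvm_set_msr_common(): writing '0' clears it and re-scans the queue */
	case MSR_KVM_ASYNC_PF_ACK:
		if (!data) {
			vcpu->arch.apf.pageready_pending = false;
			kvm_check_async_pf_completion(vcpu);
		}
		break;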

Thanks,

Paolo



Re: [PATCH v2 06/10] KVM: x86: acknowledgment mechanism for async pf page ready notifications

2020-05-28 Thread Vitaly Kuznetsov
Paolo Bonzini  writes:

> On 25/05/20 16:41, Vitaly Kuznetsov wrote:
>> +	case MSR_KVM_ASYNC_PF_ACK:
>> +		if (data & 0x1) {
>> +			vcpu->arch.apf.pageready_pending = false;
>> +			kvm_check_async_pf_completion(vcpu);
>> +		}
>> +		break;
>>  	case MSR_KVM_STEAL_TIME:
>>  
>>  		if (unlikely(!sched_info_on()))
>> @@ -3183,6 +3189,9 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>  	case MSR_KVM_ASYNC_PF_INT:
>>  		msr_info->data = vcpu->arch.apf.msr_int_val;
>>  		break;
>> +	case MSR_KVM_ASYNC_PF_ACK:
>> +		msr_info->data = 0;
>> +		break;
>
> How is the pageready_pending flag migrated?  Should we invert the
> direction of the MSR (i.e. read the flag, and write 0 to clear it)?

The flag is not migrated so it will be 'false'. This can just cause an
extra kick in kvm_arch_async_page_present_queued() but this shouldn't be
a big deal. Also, after migration we will just send a 'wakeup all' event
and the async pf queue will be empty. MSR_KVM_ASYNC_PF_ACK by itself is not
migrated, we don't even store it; not sure how inverting it would change
things.
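
For reference, the 'extra kick' above comes from the new arch hook; its
x86 implementation in this series is essentially (sketch):

void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu)
{
	/* Have vcpu_run() re-check the async PF 'done' queue ... */
	kvm_make_request(KVM_REQ_APF_READY, vcpu);
	/* ... and kick the vCPU unless a 'page ready' notification is
	 * already pending; the guest's ACK will trigger a re-scan anyway. */
	if (!vcpu->arch.apf.pageready_pending)
		kvm_vcpu_kick(vcpu);
}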

-- 
Vitaly



Re: [PATCH v2 06/10] KVM: x86: acknowledgment mechanism for async pf page ready notifications

2020-05-28 Thread Paolo Bonzini
On 25/05/20 16:41, Vitaly Kuznetsov wrote:
> +	case MSR_KVM_ASYNC_PF_ACK:
> +		if (data & 0x1) {
> +			vcpu->arch.apf.pageready_pending = false;
> +			kvm_check_async_pf_completion(vcpu);
> +		}
> +		break;
>  	case MSR_KVM_STEAL_TIME:
>  
>  		if (unlikely(!sched_info_on()))
> @@ -3183,6 +3189,9 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_KVM_ASYNC_PF_INT:
>  		msr_info->data = vcpu->arch.apf.msr_int_val;
>  		break;
> +	case MSR_KVM_ASYNC_PF_ACK:
> +		msr_info->data = 0;
> +		break;

How is the pageready_pending flag migrated?  Should we invert the
direction of the MSR (i.e. read the flag, and write 0 to clear it)?

Paolo



[PATCH v2 06/10] KVM: x86: acknowledgment mechanism for async pf page ready notifications

2020-05-25 Thread Vitaly Kuznetsov
If two 'page ready' notifications happen back to back, the second one is
not delivered and the only mechanism we currently have is the
kvm_check_async_pf_completion() check in the vcpu_run() loop. The check is
only performed on the next vmexit, whenever that happens, and in some cases
it may take a while. With interrupt based 'page ready' notification
delivery the situation is even worse: unlike exceptions, interrupts are not
handled immediately, so we must check if the slot is empty. This is slow
and unnecessary. Introduce a dedicated MSR_KVM_ASYNC_PF_ACK MSR to
communicate the fact that the slot is free and the host should check its
notification queue. Mandate using it for interrupt based 'page ready' APF
event delivery.
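
From the guest's side the expected flow is: consume the token, clear the
slot, write '1' to the new MSR. A sketch of a guest 'page ready' handler
under this scheme (the per-CPU 'apf_reason' structure with a 'token' field
and the kvm_async_pf_task_wake() helper are the existing Linux guest bits,
assumed here for illustration):

static void kvm_async_pf_page_ready(void)
{
	u32 token = __this_cpu_read(apf_reason.token);

	kvm_async_pf_task_wake(token);		/* wake the waiting task */
	__this_cpu_write(apf_reason.token, 0);	/* free the slot ... */
	wrmsrl(MSR_KVM_ASYNC_PF_ACK, 1);	/* ... and tell the host to
						 * re-scan its queue */
}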

As kvm_check_async_pf_completion() is going away from the vcpu_run() loop,
we need a way to communicate the fact that the vcpu->async_pf.done queue
has transitioned from empty to non-empty state. Introduce
kvm_arch_async_page_present_queued() and KVM_REQ_APF_READY to do the job.
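
In vcpu_enter_guest() the new request then amounts to (sketch):

	if (kvm_check_request(KVM_REQ_APF_READY, vcpu))
		kvm_check_async_pf_completion(vcpu);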

Signed-off-by: Vitaly Kuznetsov 
---
 Documentation/virt/kvm/msr.rst   | 16 +++-
 arch/s390/include/asm/kvm_host.h |  2 ++
 arch/x86/include/asm/kvm_host.h  |  3 +++
 arch/x86/include/uapi/asm/kvm_para.h |  1 +
 arch/x86/kvm/x86.c   | 26 ++
 virt/kvm/async_pf.c  | 10 ++
 6 files changed, 53 insertions(+), 5 deletions(-)

diff --git a/Documentation/virt/kvm/msr.rst b/Documentation/virt/kvm/msr.rst
index be08df12f31a..8ea3fbcc67fd 100644
--- a/Documentation/virt/kvm/msr.rst
+++ b/Documentation/virt/kvm/msr.rst
@@ -236,7 +236,10 @@ data:
of these bytes is a token which was previously delivered as 'page not
present' event. The event indicates the page is now available. Guest is
supposed to write '0' to 'token' when it is done handling 'page ready'
-   event so the next one can be delivered.
+   event so the next one can be delivered.  It is also supposed to write
+   '1' to MSR_KVM_ASYNC_PF_ACK every time after clearing the location;
+   this forces KVM to re-scan its queue and deliver the next pending
+   notification.
 
Note, MSR_KVM_ASYNC_PF_INT MSR specifying the interrupt vector for 'page
ready' APF delivery needs to be written to before enabling APF mechanism
@@ -359,3 +362,14 @@ data:
Interrupt vector for asynchronous 'page ready' notifications delivery.
The vector has to be set up before asynchronous page fault mechanism
is enabled in MSR_KVM_ASYNC_PF_EN.
+
+MSR_KVM_ASYNC_PF_ACK:
+   0x4b564d07
+
+data:
+   Asynchronous page fault (APF) acknowledgment.
+
+   When the guest is done processing a 'page ready' APF event and the
+   'token' field in 'struct kvm_vcpu_pv_apf_data' is cleared, it is
+   supposed to write '1' to bit 0 of the MSR; this causes the host to
+   re-scan its queue and check if there are more notifications pending.
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 5ba9968c3436..bb1ede017b7e 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -982,6 +982,8 @@ void kvm_arch_async_page_not_present(struct kvm_vcpu *vcpu,
 void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 struct kvm_async_pf *work);
 
+static inline void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu) {}
+
 void kvm_arch_crypto_clear_masks(struct kvm *kvm);
 void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
   unsigned long *aqm, unsigned long *adm);
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c2a70e25a1f3..356c02bfa587 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -83,6 +83,7 @@
 #define KVM_REQ_GET_VMCS12_PAGES   KVM_ARCH_REQ(24)
 #define KVM_REQ_APICV_UPDATE \
KVM_ARCH_REQ_FLAGS(25, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
+#define KVM_REQ_APF_READY  KVM_ARCH_REQ(26)
 
 #define CR0_RESERVED_BITS   \
(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
@@ -772,6 +773,7 @@ struct kvm_vcpu_arch {
u32 host_apf_flags;
unsigned long nested_apf_token;
bool delivery_as_pf_vmexit;
+   bool pageready_pending;
} apf;
 
/* OSVW MSRs (AMD only) */
@@ -1643,6 +1645,7 @@ void kvm_arch_async_page_present(struct kvm_vcpu *vcpu,
 struct kvm_async_pf *work);
 void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu,
   struct kvm_async_pf *work);
+void kvm_arch_async_page_present_queued(struct kvm_vcpu *vcpu);
 bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu);
 extern bool kvm_find_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn);
 
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h