Re: [PATCH v2 2/2] KVM: x86: emulate wait-for-SIPI and SIPI-VMExit

2020-11-06 Thread Paolo Bonzini

On 06/11/20 07:51, yadong...@intel.com wrote:

@@ -4036,6 +4060,8 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
  
 	if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
 		vmcs12->guest_activity_state = GUEST_ACTIVITY_HLT;
+	else if (vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED)
+		vmcs12->guest_activity_state = GUEST_ACTIVITY_WAIT_SIPI;
 	else
 		vmcs12->guest_activity_state = GUEST_ACTIVITY_ACTIVE;
  


Updated, thanks.

Paolo



[PATCH v2 2/2] KVM: x86: emulate wait-for-SIPI and SIPI-VMExit

2020-11-05 Thread yadong . qi
From: Yadong Qi 

Background: We have a lightweight HV that needs INIT-VMExit and
SIPI-VMExit to wake up APs for its guests, since it does not
monitor the local APIC. But the virtual wait-for-SIPI (WFS) state
is currently not supported in nVMX, so when running on top of KVM
the L1 HV cannot receive the INIT-VMExit and SIPI-VMExit, and the
L2 guest therefore cannot wake up its APs.

According to Intel SDM Chapter 25.2 Other Causes of VM Exits,
SIPIs cause VM exits when a logical processor is in
wait-for-SIPI state.

In this patch:
1. introduce SIPI exit reason,
2. introduce wait-for-SIPI state for nVMX,
3. advertise wait-for-SIPI support to guest.

When the L1 hypervisor is not monitoring the local APIC, L0 needs to
emulate INIT-VMExit and SIPI-VMExit to L1 in order to emulate the
INIT-SIPI-SIPI sequence for L2. L2's LAPIC writes are trapped by the
L0 hypervisor (KVM), which should emulate the INIT/SIPI VM-exits to
the L1 hypervisor so it can set the proper state for L2's vCPU.

Handling procedure:
Source vCPU:
L2 writes LAPIC.ICR(INIT).
L0 traps the LAPIC.ICR write (INIT): inject a latched INIT event to
   the target vCPU.
Target vCPU:
L0 emulates an INIT VMExit to L1 if the vCPU is in guest mode.
L1 sets guest VMCS, guest_activity_state=WAIT_SIPI, vmresume.
L0 sets vcpu.mp_state to INIT_RECEIVED if (vmcs12.guest_activity_state
   == WAIT_SIPI).

Source vCPU:
L2 writes LAPIC.ICR(SIPI).
L0 traps the LAPIC.ICR write (SIPI): inject a latched SIPI event to
   the target vCPU.
Target vCPU:
L0 emulates a SIPI VMExit to L1 if (vcpu.mp_state == INIT_RECEIVED).
L1 sets CS:IP, guest_activity_state=ACTIVE, vmresume.
L0 resumes to L2.
L2 starts up.

Signed-off-by: Yadong Qi 
Message-Id: <20200922052343.84388-1-yadong...@intel.com>
Signed-off-by: Paolo Bonzini 
---
v1->v2:
 * sync_vmcs02_to_vmcs12(): set vmcs12->guest_activity_state to WAIT_SIPI
   if the vcpu's mp_state is INIT_RECEIVED.
---
 arch/x86/include/asm/vmx.h  |  1 +
 arch/x86/include/uapi/asm/vmx.h |  2 ++
 arch/x86/kvm/vmx/nested.c   | 55 +++++++++++++++++++++++++++++++++++++++++--------------
 3 files changed, 44 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index f8ba5289ecb0..38ca445a8429 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -113,6 +113,7 @@
 #define VMX_MISC_PREEMPTION_TIMER_RATE_MASK0x001f
 #define VMX_MISC_SAVE_EFER_LMA 0x0020
 #define VMX_MISC_ACTIVITY_HLT  0x0040
+#define VMX_MISC_ACTIVITY_WAIT_SIPI0x0100
 #define VMX_MISC_ZERO_LEN_INS  0x4000
 #define VMX_MISC_MSR_LIST_MULTIPLIER   512
 
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index b8ff9e8ac0d5..ada955c5ebb6 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -32,6 +32,7 @@
 #define EXIT_REASON_EXTERNAL_INTERRUPT  1
 #define EXIT_REASON_TRIPLE_FAULT2
 #define EXIT_REASON_INIT_SIGNAL3
+#define EXIT_REASON_SIPI_SIGNAL 4
 
 #define EXIT_REASON_INTERRUPT_WINDOW7
 #define EXIT_REASON_NMI_WINDOW  8
@@ -94,6 +95,7 @@
{ EXIT_REASON_EXTERNAL_INTERRUPT,"EXTERNAL_INTERRUPT" }, \
{ EXIT_REASON_TRIPLE_FAULT,  "TRIPLE_FAULT" }, \
{ EXIT_REASON_INIT_SIGNAL,   "INIT_SIGNAL" }, \
+   { EXIT_REASON_SIPI_SIGNAL,   "SIPI_SIGNAL" }, \
{ EXIT_REASON_INTERRUPT_WINDOW,  "INTERRUPT_WINDOW" }, \
{ EXIT_REASON_NMI_WINDOW,"NMI_WINDOW" }, \
{ EXIT_REASON_TASK_SWITCH,   "TASK_SWITCH" }, \
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 89af692deb7e..6dc9017289ba 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2952,7 +2952,8 @@ static int nested_vmx_check_vmcs_link_ptr(struct kvm_vcpu *vcpu,
 static int nested_check_guest_non_reg_state(struct vmcs12 *vmcs12)
 {
if (CC(vmcs12->guest_activity_state != GUEST_ACTIVITY_ACTIVE &&
-  vmcs12->guest_activity_state != GUEST_ACTIVITY_HLT))
+  vmcs12->guest_activity_state != GUEST_ACTIVITY_HLT &&
+  vmcs12->guest_activity_state != GUEST_ACTIVITY_WAIT_SIPI))
return -EINVAL;
 
return 0;
@@ -3559,19 +3560,29 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 */
nested_cache_shadow_vmcs12(vcpu, vmcs12);
 
-   /*
-* If we're entering a halted L2 vcpu and the L2 vcpu won't be
-* awakened by event injection or by an NMI-window VM-exit or
-* by an interrupt-window VM-exit, halt the vcpu.
-*/
-	if ((vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) &&
-	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
-	    !(vmcs12->cpu_based_vm_exec_control & CPU_BASED_NMI_WINDOW_EXITING) &&
-	    !((vmcs12->cpu_based_vm_exec_control & CPU_BASED_INTR_WINDOW_EXITING) &&
-	      (vmcs12->guest_rflags & X86_EFLAGS_IF))) {