Re: [PATCH 1/2] nVMX: Add KVM_REQ_IMMEDIATE_EXIT

2011-09-26 Thread Marcelo Tosatti
On Sun, Sep 25, 2011 at 11:13:06AM +0300, Nadav Har'El wrote:
> On Fri, Sep 23, 2011, Marcelo Tosatti wrote about "Re: [PATCH 1/2] nVMX: Add 
> KVM_REQ_IMMEDIATE_EXIT":
> > On Thu, Sep 22, 2011 at 01:52:56PM +0300, Nadav Har'El wrote:
> > > This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
> > > This bit requests that the next time we enter the guest, we run it
> > > for as short a time as possible and then exit again.
> > > 
> > > We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
> >...
> > > @@ -5647,6 +5648,8 @@ static int vcpu_enter_guest(struct kvm_v
> > >   }
> > >   if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
> > >   record_steal_time(vcpu);
> > > + req_immediate_exit =
> > > + kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
> >...
> > The immediate exit information can be lost if entry decides to bail out.
> > You can do 
> > 
> > req_immediate_exit = kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu)
> > after preempt_disable()
> > and then transfer the bit back in the bail-out case, in
> > if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
> 
> Thanks.
> 
> But thinking about this a bit, it seems to me that in my case *losing* this
> bit on a canceled entry is the correct thing to do: turning on this bit was
> decided in the injection phase (in enable_irq_window()), and next time, if
> the reason to turn it on still exists (i.e., L0 has something to inject into
> L1, but L2 needs to run), we will turn it on again.

Correct, the loss is irrelevant.



Re: [PATCH 1/2] nVMX: Add KVM_REQ_IMMEDIATE_EXIT

2011-09-25 Thread Nadav Har'El
On Fri, Sep 23, 2011, Marcelo Tosatti wrote about "Re: [PATCH 1/2] nVMX: Add 
KVM_REQ_IMMEDIATE_EXIT":
> On Thu, Sep 22, 2011 at 01:52:56PM +0300, Nadav Har'El wrote:
> > This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
> > This bit requests that the next time we enter the guest, we run it
> > for as short a time as possible and then exit again.
> > 
> > We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
>...
> > @@ -5647,6 +5648,8 @@ static int vcpu_enter_guest(struct kvm_v
> > }
> > if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
> > record_steal_time(vcpu);
> > +   req_immediate_exit =
> > +   kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
>...
> The immediate exit information can be lost if entry decides to bail out.
> You can do 
> 
> req_immediate_exit = kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu)
> after preempt_disable()
> and then transfer the bit back in the bail-out case, in
> if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests

Thanks.

But thinking about this a bit, it seems to me that in my case *losing* this
bit on a canceled entry is the correct thing to do: turning on this bit was
decided in the injection phase (in enable_irq_window()), and next time, if
the reason to turn it on still exists (i.e., L0 has something to inject into
L1, but L2 needs to run), we will turn it on again.

-- 
Nadav Har'El                        |          Sunday, Sep 25 2011,
n...@math.technion.ac.il            |---------------------------------------
Phone +972-523-790466, ICQ 13349191 |Guarantee: this email is 100% free of
http://nadav.harel.org.il           |magnetic monopoles, or your money back!


Re: [PATCH 1/2] nVMX: Add KVM_REQ_IMMEDIATE_EXIT

2011-09-23 Thread Marcelo Tosatti
On Thu, Sep 22, 2011 at 01:52:56PM +0300, Nadav Har'El wrote:
> This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
> This bit requests that the next time we enter the guest, we run it
> for as short a time as possible and then exit again.
> 
> We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
> to continue running so it can inject an event into it, we unfortunately
> cannot just pretend to have run L2 for a little while; we must really launch
> L2, otherwise certain one-off vmcs12 parameters (namely, L1's injection into
> L2) will be lost. So the existing code runs L2 in this case. But L2 could
> potentially run for a long time until it exits, delaying the injection into
> L1. The new KVM_REQ_IMMEDIATE_EXIT allows us to request that L2 be entered,
> as necessary, but exit as soon as possible after entry.
> 
> Our implementation of this request uses smp_send_reschedule() to send a
> self-IPI with interrupts disabled. Interrupts remain disabled until the
> guest is entered; then, after the entry is complete (often including
> processing an injection and jumping to the relevant handler), the pending
> physical interrupt is noticed and causes an exit.
> 
> On recent Intel processors, we could have achieved the same goal by using
> MTF (the Monitor Trap Flag) instead of a self-IPI. Another technique worth
> considering in the future is to use VM_EXIT_ACK_INTR_ON_EXIT and an IPI on
> the highest-priority vector, to slightly improve performance by avoiding
> the useless interrupt handler that ends up being called when
> smp_send_reschedule() is used.
> 
> Signed-off-by: Nadav Har'El 
> ---
>  arch/x86/kvm/vmx.c       |   11 +++++++----
>  arch/x86/kvm/x86.c       |    6 ++++++
>  include/linux/kvm_host.h |    1 +
>  3 files changed, 14 insertions(+), 4 deletions(-)
> 
> --- .before/include/linux/kvm_host.h  2011-09-22 13:51:31.0 +0300
> +++ .after/include/linux/kvm_host.h   2011-09-22 13:51:31.0 +0300
> @@ -48,6 +48,7 @@
>  #define KVM_REQ_EVENT 11
>  #define KVM_REQ_APF_HALT  12
>  #define KVM_REQ_STEAL_UPDATE  13
> +#define KVM_REQ_IMMEDIATE_EXIT 14
>  
>  #define KVM_USERSPACE_IRQ_SOURCE_ID  0
>  
> --- .before/arch/x86/kvm/x86.c2011-09-22 13:51:31.0 +0300
> +++ .after/arch/x86/kvm/x86.c 2011-09-22 13:51:31.0 +0300
> @@ -5610,6 +5610,7 @@ static int vcpu_enter_guest(struct kvm_v
>   bool nmi_pending;
>   bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
>   vcpu->run->request_interrupt_window;
> + bool req_immediate_exit = false;
>  
>   if (vcpu->requests) {
>   if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
> @@ -5647,6 +5648,8 @@ static int vcpu_enter_guest(struct kvm_v
>   }
>   if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
>   record_steal_time(vcpu);
> + req_immediate_exit =
> + kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);

The immediate exit information can be lost if entry decides to bail out.

You can do

req_immediate_exit = kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu)

after preempt_disable()

and then transfer the bit back in the bail-out case, in

if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
...
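
A rough sketch of that suggestion (illustrative only; the surrounding
vcpu_enter_guest() code is elided and the bail-out condition abbreviated,
as in the quote above):

	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
	{
		bool req_immediate_exit;
		...
		preempt_disable();

		/* Test-and-clear only after the last bail-out point, so a
		 * consumed request cannot be dropped on a canceled entry. */
		req_immediate_exit =
			kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
		...
		if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
		    /* ... */) {
			/* Bailing out before entry: put the bit back so the
			 * next entry attempt still sees the request. */
			if (req_immediate_exit)
				kvm_make_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
			...
			goto out;
		}
		...
	}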



[PATCH 1/2] nVMX: Add KVM_REQ_IMMEDIATE_EXIT

2011-09-22 Thread Nadav Har'El
This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
This bit requests that the next time we enter the guest, we run it
for as short a time as possible and then exit again.

We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
to continue running so it can inject an event into it, we unfortunately
cannot just pretend to have run L2 for a little while; we must really launch
L2, otherwise certain one-off vmcs12 parameters (namely, L1's injection into
L2) will be lost. So the existing code runs L2 in this case. But L2 could
potentially run for a long time until it exits, delaying the injection into
L1. The new KVM_REQ_IMMEDIATE_EXIT allows us to request that L2 be entered,
as necessary, but exit as soon as possible after entry.

Our implementation of this request uses smp_send_reschedule() to send a
self-IPI with interrupts disabled. Interrupts remain disabled until the
guest is entered; then, after the entry is complete (often including
processing an injection and jumping to the relevant handler), the pending
physical interrupt is noticed and causes an exit.
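
The sequence around entry thus looks roughly as follows (illustrative
sketch; the real code is the arch/x86/kvm/x86.c hunk below):

	local_irq_disable();
	...
	if (req_immediate_exit)
		/* Self-IPI; it stays pending because interrupts are off. */
		smp_send_reschedule(vcpu->cpu);

	kvm_guest_enter();
	...
	/* VM entry completes (delivering any pending injection), and the
	 * still-pending physical interrupt then forces an immediate exit. */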

On recent Intel processors, we could have achieved the same goal by using
MTF (the Monitor Trap Flag) instead of a self-IPI. Another technique worth
considering in the future is to use VM_EXIT_ACK_INTR_ON_EXIT and an IPI on
the highest-priority vector, to slightly improve performance by avoiding the
useless interrupt handler that ends up being called when
smp_send_reschedule() is used.
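
For reference, the MTF variant would amount to something like the following
(hypothetical sketch, not part of this patch): set the
CPU_BASED_MONITOR_TRAP_FLAG execution control before entry, so the guest
exits at the first instruction boundary, and clear it again when handling
the resulting EXIT_REASON_MONITOR_TRAP_FLAG exit:

	u32 cpu_based_vm_exec_control;

	cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
	cpu_based_vm_exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);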

Signed-off-by: Nadav Har'El 
---
 arch/x86/kvm/vmx.c       |   11 +++++++----
 arch/x86/kvm/x86.c       |    6 ++++++
 include/linux/kvm_host.h |    1 +
 3 files changed, 14 insertions(+), 4 deletions(-)

--- .before/include/linux/kvm_host.h2011-09-22 13:51:31.0 +0300
+++ .after/include/linux/kvm_host.h 2011-09-22 13:51:31.0 +0300
@@ -48,6 +48,7 @@
 #define KVM_REQ_EVENT 11
 #define KVM_REQ_APF_HALT  12
 #define KVM_REQ_STEAL_UPDATE  13
+#define KVM_REQ_IMMEDIATE_EXIT 14
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID 0
 
--- .before/arch/x86/kvm/x86.c  2011-09-22 13:51:31.0 +0300
+++ .after/arch/x86/kvm/x86.c   2011-09-22 13:51:31.0 +0300
@@ -5610,6 +5610,7 @@ static int vcpu_enter_guest(struct kvm_v
bool nmi_pending;
bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
vcpu->run->request_interrupt_window;
+   bool req_immediate_exit = false;
 
if (vcpu->requests) {
if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
@@ -5647,6 +5648,8 @@ static int vcpu_enter_guest(struct kvm_v
}
if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
record_steal_time(vcpu);
+   req_immediate_exit =
+   kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
 
}
 
@@ -5706,6 +5709,9 @@ static int vcpu_enter_guest(struct kvm_v
 
srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
 
+   if (req_immediate_exit)
+   smp_send_reschedule(vcpu->cpu);
+
kvm_guest_enter();
 
if (unlikely(vcpu->arch.switch_db_regs)) {
--- .before/arch/x86/kvm/vmx.c  2011-09-22 13:51:31.0 +0300
+++ .after/arch/x86/kvm/vmx.c   2011-09-22 13:51:31.0 +0300
@@ -3858,12 +3858,15 @@ static bool nested_exit_on_intr(struct k
 static void enable_irq_window(struct kvm_vcpu *vcpu)
 {
u32 cpu_based_vm_exec_control;
-   if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
-   /* We can get here when nested_run_pending caused
-* vmx_interrupt_allowed() to return false. In this case, do
-* nothing - the interrupt will be injected later.
+   if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) {
+   /*
+* We get here if vmx_interrupt_allowed() said we can't
+* inject to L1 now because L2 must run. Ask L2 to exit
+* right after entry, so we can inject to L1 more promptly.
 */
+   kvm_make_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);
return;
+   }
 
cpu_based_vm_exec_control = vmcs_read32(CPU_BASED_VM_EXEC_CONTROL);
cpu_based_vm_exec_control |= CPU_BASED_VIRTUAL_INTR_PENDING;
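
For context, the kvm_make_request()/kvm_check_request() pair used above is
implemented in this era's include/linux/kvm_host.h roughly as:

	static inline void kvm_make_request(int req, struct kvm_vcpu *vcpu)
	{
		set_bit(req, &vcpu->requests);
	}

	/* Test-and-clear: returns true, and clears the bit, iff it was set. */
	static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
	{
		if (test_bit(req, &vcpu->requests)) {
			clear_bit(req, &vcpu->requests);
			return true;
		}
		return false;
	}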