Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-20 Thread Suresh Siddha
On Thu, 2012-09-20 at 12:50 +0300, Avi Kivity wrote:
> On 09/20/2012 03:10 AM, Suresh Siddha wrote:
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index b06737d..8ff328b 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -1493,7 +1493,8 @@ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
> >  #ifdef CONFIG_X86_64
> > wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
> >  #endif
> > -   if (user_has_fpu())
> > +   /* Did the host task or the guest vcpu have the FPU restored lazily? */
> > +   if (!use_eager_fpu() && (user_has_fpu() || vmx->vcpu.guest_fpu_loaded))
> > clts();
> 
> Why do the clts() if guest_fpu_loaded()?
> 
> An interrupt might arrive after this, look at TS
> (interrupted_kernel_fpu_idle()), and stomp on the guest's fpu.

Actually clts() is harmless, as the condition
(read_cr0() & X86_CR0_TS)
in interrupted_kernel_fpu_idle() will then return false.

But you raise a good point: any interrupt between the vmexit and
__vmx_load_host_state() can stomp on the guest FPU, because the vmexit
unconditionally sets the host's cr0.TS bit, and with kvm using
kernel_fpu_begin/end(), !__thread_has_fpu(current) in
interrupted_kernel_fpu_idle() will always be true.
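
For reference, the check being discussed, roughly as it reads in
arch/x86/kernel/i387.c at this point (a sketch for context, not part
of the patch):

static inline bool interrupted_kernel_fpu_idle(void)
{
        /*
         * An interrupt may use the FPU only when the interrupted
         * context did not own the FPU (!__thread_has_fpu(current))
         * and cr0.TS is set, i.e. no live register state would be
         * clobbered.
         */
        return !__thread_has_fpu(current) &&
                (read_cr0() & X86_CR0_TS);
}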

So the right thing to do here is to always have the cr0.TS bit clear
during vmexit and set that bit back in __vmx_load_host_state() if the
FPU state is not active.

Appended the modified patch.

thanks,
suresh
--8<--
From: Suresh Siddha 
Subject: x86, kvm: fix kvm's usage of kernel_fpu_begin/end()

Preemption is disabled between kernel_fpu_begin/end(), so it is not a
good idea to use these routines in kvm_load/put_guest_fpu(), which can
be very far apart.

The kvm_load/put_guest_fpu() routines are already called with
preemption disabled, and KVM already uses the preempt notifier to save
the guest fpu state using kvm_put_guest_fpu().

So introduce __kernel_fpu_begin/end() routines which don't touch
preemption, and use them instead of kernel_fpu_begin/end() for KVM's
model of saving/restoring guest FPU state.
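
As a rough sketch of the resulting call flow (not literal code;
function names as used elsewhere in this thread):

        vcpu_enter_guest()
            kvm_load_guest_fpu(vcpu);      /* preemption disabled here */
                __kernel_fpu_begin();
            ... guest runs, preemption may be re-enabled ...

        kvm_arch_vcpu_put()                /* e.g. via the preempt notifier */
            kvm_put_guest_fpu(vcpu);
                __kernel_fpu_end();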

Also, with this change (and with the eagerFPU model), fix the host
cr0.TS vm-exit state in the case of VMX. In the eagerFPU case, host
cr0.TS is always clear, so there is nothing to worry about. For the
traditional lazyFPU restore case, make the host cr0.TS bit always
clear during vm-exit, and set it in __vmx_load_host_state() when the
FPU state (guest FPU or the host task's FPU) is not active. This
ensures that the host/guest FPU state is properly saved and restored
during context switches, and that interrupts (which check
irq_fpu_usable()) do not stomp on the active FPU state.

Signed-off-by: Suresh Siddha 
---
 arch/x86/include/asm/i387.h |   28 ++--
 arch/x86/kernel/i387.c  |   13 +
 arch/x86/kvm/vmx.c  |   10 +++---
 arch/x86/kvm/x86.c  |4 ++--
 4 files changed, 40 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 6c3bd37..ed8089d 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -24,8 +24,32 @@ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
 extern void math_state_restore(void);
 
 extern bool irq_fpu_usable(void);
-extern void kernel_fpu_begin(void);
-extern void kernel_fpu_end(void);
+
+/*
+ * Careful: __kernel_fpu_begin/end() must be called with preempt disabled
+ * and they don't touch the preempt state on their own.
+ * If you enable preemption after __kernel_fpu_begin(), preempt notifier
+ * should call the __kernel_fpu_end() to prevent the kernel/user FPU
+ * state from getting corrupted. KVM for example uses this model.
+ *
+ * All other cases use kernel_fpu_begin/end() which disable preemption
+ * during kernel FPU usage.
+ */
+extern void __kernel_fpu_begin(void);
+extern void __kernel_fpu_end(void);
+
+static inline void kernel_fpu_begin(void)
+{
+   WARN_ON_ONCE(!irq_fpu_usable());
+   preempt_disable();
+   __kernel_fpu_begin();
+}
+
+static inline void kernel_fpu_end(void)
+{
+   __kernel_fpu_end();
+   preempt_enable();
+}
 
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 6782e39..675a050 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -73,32 +73,29 @@ bool irq_fpu_usable(void)
 }
 EXPORT_SYMBOL(irq_fpu_usable);
 
-void kernel_fpu_begin(void)
+void __kernel_fpu_begin(void)
 {
struct task_struct *me = current;
 
-   WARN_ON_ONCE(!irq_fpu_usable());
-   preempt_disable();
if (__thread_has_fpu(me)) {
__save_init_fpu(me);
__thread_clear_has_fpu(me);
-   /* We do 'stts()' in kernel_fpu_end() */
+   /* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
clts();
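
The vmx.c hunks are truncated above; based on the changelog, the
intended VMX change is roughly the following sketch (assuming HOST_CR0
is set up via vmcs_writel() as elsewhere in vmx.c):

        /* During vm-exit, always have cr0.TS clear in the host state: */
        vmcs_writel(HOST_CR0, read_cr0() & ~X86_CR0_TS);  /* 22.2.3 */

        /* ... and in __vmx_load_host_state(): */
        /*
         * If neither the host task nor the guest vcpu owns the FPU
         * state, restore the cr0.TS bit.
         */
        if (!user_has_fpu() && !vmx->vcpu.guest_fpu_loaded)
                stts();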
   

Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-20 Thread Avi Kivity
On 09/20/2012 03:10 AM, Suresh Siddha wrote:
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index b06737d..8ff328b 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -1493,7 +1493,8 @@ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
>  #ifdef CONFIG_X86_64
>   wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
>  #endif
> - if (user_has_fpu())
> + /* Did the host task or the guest vcpu have the FPU restored lazily? */
> + if (!use_eager_fpu() && (user_has_fpu() || vmx->vcpu.guest_fpu_loaded))
>   clts();

Why do the clts() if guest_fpu_loaded()?

An interrupt might arrive after this, look at TS
(interrupted_kernel_fpu_idle()), and stomp on the guest's fpu.

-- 
error compiling committee.c: too many arguments to function


Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-20 Thread Avi Kivity
On 09/19/2012 08:26 PM, H. Peter Anvin wrote:
> On 09/19/2012 10:22 AM, Avi Kivity wrote:
>> 
>> Note, we could also go in a different direction and make
>> kernel_fpu_begin() use preempt notifiers and thus make its users
>> preemptible.  But that's for a separate patchset.
>> 
> 
> Where would you put the state if you were preempted?  You want to
> allocate a full extra buffer for the kernel xstate for each thread just
> in case?  ("Yes" is a valid answer to that question, but it is a fair
> chunk of memory.)

kernel_fpu_begin() could receive a pointer to a struct fpu, with
fpu->state either preallocated by the caller, or allocated by
kernel_fpu_begin() itself.
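
A hypothetical shape for such an API (names invented here purely for
illustration):

        /*
         * Save the interrupted FPU context into 'fpu' so the section
         * can be made preemptible; restore it in the matching end().
         * fpu->state is preallocated by the caller, or allocated here.
         */
        int kernel_fpu_begin_state(struct fpu *fpu);
        void kernel_fpu_end_state(struct fpu *fpu);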

-- 
error compiling committee.c: too many arguments to function


Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-20 Thread Avi Kivity
On 09/19/2012 08:25 PM, Suresh Siddha wrote:
> On Wed, 2012-09-19 at 20:22 +0300, Avi Kivity wrote:
>> On 09/19/2012 08:18 PM, Suresh Siddha wrote:
>> 
>> > These routines (kvm_load/put_guest_fpu()) are already called with
>> > preemption disabled but as you mentioned, we don't want the preemption
>> > to be disabled completely between the kvm_load_guest_fpu() and
>> > kvm_put_guest_fpu().
>> > 
>> > Also KVM already has the preempt notifier which is doing the
>> > kvm_put_guest_fpu(), so something like the appended should address this.
>> > I will test this shortly.
>> > 
>> 
>> Note, we could also go in a different direction and make
>> kernel_fpu_begin() use preempt notifiers and thus make its users
>> preemptible.  But that's for a separate patchset.
> 
> yep, but we need the fpu buffer to save/restore the kernel fpu state.
> 
> KVM already has those buffers allocated in the guest cpu state and hence
> it all works out ok. But yes, we can revisit this in future.

kernel_fpu_begin() can allocate it.  It means changing the APIs, but
changing the behaviour to be preemptible is a bigger change anyway.


-- 
error compiling committee.c: too many arguments to function


Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread Suresh Siddha
On Wed, 2012-09-19 at 10:18 -0700, Suresh Siddha wrote:
> These routines (kvm_load/put_guest_fpu()) are already called with
> preemption disabled but as you mentioned, we don't want the preemption
> to be disabled completely between the kvm_load_guest_fpu() and
> kvm_put_guest_fpu().
> 
> Also KVM already has the preempt notifier which is doing the
> kvm_put_guest_fpu(), so something like the appended should address this.
> I will test this shortly.

Appended the tested fix (one more VMX-based change was needed, as VMX
fiddles with the host cr0.TS bit).

Thanks.
--8<--

From: Suresh Siddha 
Subject: x86, kvm: fix kvm's usage of kernel_fpu_begin/end()

Preemption is disabled between kernel_fpu_begin/end(), so it is not a
good idea to use these routines in kvm_load/put_guest_fpu(), which can
be very far apart.

The kvm_load/put_guest_fpu() routines are already called with
preemption disabled, and KVM already uses the preempt notifier to save
the guest fpu state using kvm_put_guest_fpu().

So introduce __kernel_fpu_begin/end() routines which don't touch
preemption, and use them instead of kernel_fpu_begin/end() for KVM's
model of saving/restoring guest FPU state.

Also, with this change (and with the eagerFPU model), fix the host
cr0.TS vm-exit state in the case of VMX. In the eagerFPU case, host
cr0.TS is always clear, so there is nothing to worry about. For the
traditional lazyFPU restore case, the cr0.TS bit is always set during
vm-exit, and it is cleared when needed, depending on the guest FPU
state and the host task's FPU state.

Signed-off-by: Suresh Siddha 
---
 arch/x86/include/asm/fpu-internal.h |5 -
 arch/x86/include/asm/i387.h |   28 ++--
 arch/x86/include/asm/processor.h|5 +
 arch/x86/kernel/i387.c  |   13 +
 arch/x86/kvm/vmx.c  |   11 +--
 arch/x86/kvm/x86.c  |4 ++--
 6 files changed, 47 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 92f3c6e..a6b60c7 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -85,11 +85,6 @@ static inline int is_x32_frame(void)
 
 #define X87_FSW_ES (1 << 7)   /* Exception Summary */
 
-static __always_inline __pure bool use_eager_fpu(void)
-{
-   return static_cpu_has(X86_FEATURE_EAGER_FPU);
-}
-
 static __always_inline __pure bool use_xsaveopt(void)
 {
return static_cpu_has(X86_FEATURE_XSAVEOPT);
diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 6c3bd37..ed8089d 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -24,8 +24,32 @@ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
 extern void math_state_restore(void);
 
 extern bool irq_fpu_usable(void);
-extern void kernel_fpu_begin(void);
-extern void kernel_fpu_end(void);
+
+/*
+ * Careful: __kernel_fpu_begin/end() must be called with preempt disabled
+ * and they don't touch the preempt state on their own.
+ * If you enable preemption after __kernel_fpu_begin(), preempt notifier
+ * should call the __kernel_fpu_end() to prevent the kernel/user FPU
+ * state from getting corrupted. KVM for example uses this model.
+ *
+ * All other cases use kernel_fpu_begin/end() which disable preemption
+ * during kernel FPU usage.
+ */
+extern void __kernel_fpu_begin(void);
+extern void __kernel_fpu_end(void);
+
+static inline void kernel_fpu_begin(void)
+{
+   WARN_ON_ONCE(!irq_fpu_usable());
+   preempt_disable();
+   __kernel_fpu_begin();
+}
+
+static inline void kernel_fpu_end(void)
+{
+   __kernel_fpu_end();
+   preempt_enable();
+}
 
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b98c0d9..d0e9adb 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -402,6 +402,11 @@ struct fpu {
union thread_xstate *state;
 };
 
+static __always_inline __pure bool use_eager_fpu(void)
+{
+   return static_cpu_has(X86_FEATURE_EAGER_FPU);
+}
+
 #ifdef CONFIG_X86_64
 DECLARE_PER_CPU(struct orig_ist, orig_ist);
 
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 6782e39..675a050 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -73,32 +73,29 @@ bool irq_fpu_usable(void)
 }
 EXPORT_SYMBOL(irq_fpu_usable);
 
-void kernel_fpu_begin(void)
+void __kernel_fpu_begin(void)
 {
struct task_struct *me = current;
 
-   WARN_ON_ONCE(!irq_fpu_usable());
-   preempt_disable();
if (__thread_has_fpu(me)) {
__save_init_fpu(me);
__thread_clear_has_fpu(me);
-   /* We do 'stts()' in kernel_fpu_end() */
+   /* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
clts();
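
The remaining vmx.c hunk of this version, as quoted in Avi's
2012-09-20 reply above, clears cr0.TS when either the host task or the
guest vcpu had its FPU state restored lazily:

        -       if (user_has_fpu())
        +       /* Did the host task or the guest vcpu have the FPU restored lazily? */
        +       if (!use_eager_fpu() && (user_has_fpu() || vmx->vcpu.guest_fpu_loaded))
                        clts();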

Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread H. Peter Anvin
On 09/19/2012 10:22 AM, Avi Kivity wrote:
> 
> Note, we could also go in a different direction and make
> kernel_fpu_begin() use preempt notifiers and thus make its users
> preemptible.  But that's for a separate patchset.
> 

Where would you put the state if you were preempted?  You want to
allocate a full extra buffer for the kernel xstate for each thread just
in case?  ("Yes" is a valid answer to that question, but it is a fair
chunk of memory.)

-hpa



Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread Suresh Siddha
On Wed, 2012-09-19 at 20:22 +0300, Avi Kivity wrote:
> On 09/19/2012 08:18 PM, Suresh Siddha wrote:
> 
> > These routines (kvm_load/put_guest_fpu()) are already called with
> > preemption disabled but as you mentioned, we don't want the preemption
> > to be disabled completely between the kvm_load_guest_fpu() and
> > kvm_put_guest_fpu().
> > 
> > Also KVM already has the preempt notifier which is doing the
> > kvm_put_guest_fpu(), so something like the appended should address this.
> > I will test this shortly.
> > 
> 
> Note, we could also go in a different direction and make
> kernel_fpu_begin() use preempt notifiers and thus make its users
> preemptible.  But that's for a separate patchset.

yep, but we need the fpu buffer to save/restore the kernel fpu state.

KVM already has those buffers allocated in the guest cpu state and hence
it all works out ok. But yes, we can revisit this in future.
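
For reference, the buffer in question is the per-vcpu guest_fpu (in
struct kvm_vcpu_arch, arch/x86/include/asm/kvm_host.h of this era;
shown as a sketch):

        struct kvm_vcpu_arch {
                ...
                struct fpu guest_fpu;   /* saved/restored by
                                         * kvm_load/put_guest_fpu() */
                ...
        };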

thanks,
suresh




Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread Avi Kivity
On 09/19/2012 08:18 PM, Suresh Siddha wrote:

> These routines (kvm_load/put_guest_fpu()) are already called with
> preemption disabled but as you mentioned, we don't want the preemption
> to be disabled completely between the kvm_load_guest_fpu() and
> kvm_put_guest_fpu().
> 
> Also KVM already has the preempt notifier which is doing the
> kvm_put_guest_fpu(), so something like the appended should address this.
> I will test this shortly.
> 

Note, we could also go in a different direction and make
kernel_fpu_begin() use preempt notifiers and thus make its users
preemptible.  But that's for a separate patchset.

-- 
error compiling committee.c: too many arguments to function


Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread Suresh Siddha
On Wed, 2012-09-19 at 13:13 +0300, Avi Kivity wrote:
> On 08/25/2012 12:12 AM, Suresh Siddha wrote:
> > kvm's guest fpu save/restore should be wrapped around
> > kernel_fpu_begin/end(). This will avoid for example taking a DNA
> > in kvm_load_guest_fpu() when it tries to load the fpu immediately
> > after doing unlazy_fpu() on the host side.
> > 
> > More importantly this will prevent the host process fpu from being
> > corrupted.
> > 
> > Signed-off-by: Suresh Siddha 
> > Cc: Avi Kivity 
> > ---
> >  arch/x86/kvm/x86.c |3 ++-
> >  1 files changed, 2 insertions(+), 1 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 42bce48..67e773c 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -5969,7 +5969,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
> >  */
> > kvm_put_guest_xcr0(vcpu);
> > vcpu->guest_fpu_loaded = 1;
> > -   unlazy_fpu(current);
> > +   kernel_fpu_begin();
> > fpu_restore_checking(&vcpu->arch.guest_fpu);
> > trace_kvm_fpu(1);
> 
> This breaks kvm, since it disables preemption.  What we want here is to
> save the user fpu state if it was loaded, and do nothing if it wasn't.
> Don't know what the new API for that is.

These routines (kvm_load/put_guest_fpu()) are already called with
preemption disabled but as you mentioned, we don't want the preemption
to be disabled completely between the kvm_load_guest_fpu() and
kvm_put_guest_fpu().

Also KVM already has the preempt notifier which is doing the
kvm_put_guest_fpu(), so something like the appended should address this.
I will test this shortly.

Signed-off-by: Suresh Siddha 
---
 arch/x86/include/asm/i387.h |   17 +++--
 arch/x86/kernel/i387.c  |   13 +
 arch/x86/kvm/x86.c  |4 ++--
 3 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/i387.h b/arch/x86/include/asm/i387.h
index 6c3bd37..29429b1 100644
--- a/arch/x86/include/asm/i387.h
+++ b/arch/x86/include/asm/i387.h
@@ -24,8 +24,21 @@ extern int dump_fpu(struct pt_regs *, struct user_i387_struct *);
 extern void math_state_restore(void);
 
 extern bool irq_fpu_usable(void);
-extern void kernel_fpu_begin(void);
-extern void kernel_fpu_end(void);
+extern void __kernel_fpu_begin(void);
+extern void __kernel_fpu_end(void);
+
+static inline void kernel_fpu_begin(void)
+{
+   WARN_ON_ONCE(!irq_fpu_usable());
+   preempt_disable();
+   __kernel_fpu_begin();
+}
+
+static inline void kernel_fpu_end(void)
+{
+   __kernel_fpu_end();
+   preempt_enable();
+}
 
 /*
  * Some instructions like VIA's padlock instructions generate a spurious
diff --git a/arch/x86/kernel/i387.c b/arch/x86/kernel/i387.c
index 6782e39..675a050 100644
--- a/arch/x86/kernel/i387.c
+++ b/arch/x86/kernel/i387.c
@@ -73,32 +73,29 @@ bool irq_fpu_usable(void)
 }
 EXPORT_SYMBOL(irq_fpu_usable);
 
-void kernel_fpu_begin(void)
+void __kernel_fpu_begin(void)
 {
struct task_struct *me = current;
 
-   WARN_ON_ONCE(!irq_fpu_usable());
-   preempt_disable();
if (__thread_has_fpu(me)) {
__save_init_fpu(me);
__thread_clear_has_fpu(me);
-   /* We do 'stts()' in kernel_fpu_end() */
+   /* We do 'stts()' in __kernel_fpu_end() */
} else if (!use_eager_fpu()) {
this_cpu_write(fpu_owner_task, NULL);
clts();
}
 }
-EXPORT_SYMBOL(kernel_fpu_begin);
+EXPORT_SYMBOL(__kernel_fpu_begin);
 
-void kernel_fpu_end(void)
+void __kernel_fpu_end(void)
 {
if (use_eager_fpu())
math_state_restore();
else
stts();
-   preempt_enable();
 }
-EXPORT_SYMBOL(kernel_fpu_end);
+EXPORT_SYMBOL(__kernel_fpu_end);
 
 void unlazy_fpu(struct task_struct *tsk)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3ddefb4..1f09552 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5979,7 +5979,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 */
kvm_put_guest_xcr0(vcpu);
vcpu->guest_fpu_loaded = 1;
-   kernel_fpu_begin();
+   __kernel_fpu_begin();
fpu_restore_checking(&vcpu->arch.guest_fpu);
trace_kvm_fpu(1);
 }
@@ -5993,7 +5993,7 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 
vcpu->guest_fpu_loaded = 0;
fpu_save_init(&vcpu->arch.guest_fpu);
-   kernel_fpu_end();
+   __kernel_fpu_end();
++vcpu->stat.fpu_reload;
kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
trace_kvm_fpu(0);




Re: [PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-09-19 Thread Avi Kivity
On 08/25/2012 12:12 AM, Suresh Siddha wrote:
> kvm's guest fpu save/restore should be wrapped around
> kernel_fpu_begin/end(). This will avoid for example taking a DNA
> in kvm_load_guest_fpu() when it tries to load the fpu immediately
> after doing unlazy_fpu() on the host side.
> 
> More importantly this will prevent the host process fpu from being
> corrupted.
> 
> Signed-off-by: Suresh Siddha 
> Cc: Avi Kivity 
> ---
>  arch/x86/kvm/x86.c |3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 42bce48..67e773c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5969,7 +5969,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
>*/
>   kvm_put_guest_xcr0(vcpu);
>   vcpu->guest_fpu_loaded = 1;
> - unlazy_fpu(current);
> + kernel_fpu_begin();
>   fpu_restore_checking(&vcpu->arch.guest_fpu);
>   trace_kvm_fpu(1);

This breaks kvm, since it disables preemption.  What we want here is to
save the user fpu state if it was loaded, and do nothing if it wasn't.
I don't know what the new API for that is.
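
For reference, the replaced unlazy_fpu() did roughly that (as it reads
in arch/x86/kernel/i387.c of this era; a sketch):

        void unlazy_fpu(struct task_struct *tsk)
        {
                preempt_disable();
                if (__thread_has_fpu(tsk)) {
                        __save_init_fpu(tsk);   /* save the live user state */
                        __thread_fpu_end(tsk);
                } else
                        tsk->fpu_counter = 0;   /* nothing was loaded */
                preempt_enable();
        }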

>  }
> @@ -5983,6 +5983,7 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
>  
>   vcpu->guest_fpu_loaded = 0;
>   fpu_save_init(&vcpu->arch.guest_fpu);
> + kernel_fpu_end();
>   ++vcpu->stat.fpu_reload;
>   kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
>   trace_kvm_fpu(0);
> 


-- 
error compiling committee.c: too many arguments to function


[PATCH 3/6] x86, kvm: use kernel_fpu_begin/end() in kvm_load/put_guest_fpu()

2012-08-24 Thread Suresh Siddha
kvm's guest fpu save/restore should be wrapped around
kernel_fpu_begin/end(). This will avoid, for example, taking a DNA
(device-not-available, #NM) exception in kvm_load_guest_fpu() when it
tries to load the fpu immediately after doing unlazy_fpu() on the host
side.

More importantly, this will prevent the host process fpu from being
corrupted.

Signed-off-by: Suresh Siddha 
Cc: Avi Kivity 
---
 arch/x86/kvm/x86.c |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 42bce48..67e773c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5969,7 +5969,7 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 */
kvm_put_guest_xcr0(vcpu);
vcpu->guest_fpu_loaded = 1;
-   unlazy_fpu(current);
+   kernel_fpu_begin();
fpu_restore_checking(&vcpu->arch.guest_fpu);
trace_kvm_fpu(1);
 }
@@ -5983,6 +5983,7 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 
vcpu->guest_fpu_loaded = 0;
fpu_save_init(&vcpu->arch.guest_fpu);
+   kernel_fpu_end();
++vcpu->stat.fpu_reload;
kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);
trace_kvm_fpu(0);
-- 
1.7.6.5
