Re: [PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-10 Thread Sasha Levin

On Thu, Jul 09, 2020 at 12:50:23PM -0700, Florian Fainelli wrote:

From: Will Deacon 

commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream

Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.

This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.
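
For illustration only: when the SB instruction is not implemented, the
barrier sequence falls back to DSB NSH followed by ISB. A minimal sketch of
that fallback as a C inline-asm helper is below (the helper name is
hypothetical; the patch itself adds the sequence directly in assembly after
each ERET in the kernel and hyp entry code):

static inline void speculation_barrier(void)
{
        /*
         * Assumed fallback for SB: DSB NSH completes outstanding memory
         * accesses, then ISB flushes the pipeline so younger instructions
         * are re-fetched and cannot be speculated past this point.
         */
        asm volatile("dsb nsh\n\tisb" : : : "memory");
}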

Signed-off-by: Will Deacon 
[florian: Adjust hyp-entry.S to account for the label added by the
change to hyp/entry.S]
Signed-off-by: Florian Fainelli 


I've queued it up, thanks!

--
Thanks,
Sasha


Re: [PATCH v6 0/5] clean up redundant 'kvm_run' parameters

2020-07-10 Thread Tianjia Zhang

Hi Paolo,

Do you have any opinion on this patch series? Is there anything I can do to help with this patchset?

Thanks and best,
Tianjia

On 2020/6/23 21:14, Tianjia Zhang wrote:

In the current KVM code, 'kvm_run' is already embedded in the 'kvm_vcpu'
structure. For historical reasons, many KVM-related functions still take
both a 'kvm_run' and a 'kvm_vcpu' parameter. This series does a unified
cleanup of these remaining redundant parameters.

The series completely cleans up the arm64, mips, ppc, and s390
architectures (x86 has no such redundant code). Because of the large
amount of modified code, a separate patch is made for each platform. On
the ppc platform there is also a redundant 'kvm_run' pointer in
'vcpu_arch', which is cleaned up separately.
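
As a hedged sketch of the shape of this cleanup (not taken from the actual
diffs; the function names below are hypothetical), a handler that used to
take both parameters now derives 'kvm_run' from the vcpu:

#include <linux/kvm_host.h>

/* Before: both parameters passed, although run is reachable from vcpu. */
static int example_handle_mmio_old(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
        run->exit_reason = KVM_EXIT_MMIO;
        return 0;
}

/* After: only the vcpu is passed; kvm_run is taken from vcpu->run. */
static int example_handle_mmio_new(struct kvm_vcpu *vcpu)
{
        struct kvm_run *run = vcpu->run;

        run->exit_reason = KVM_EXIT_MMIO;
        return 0;
}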

---
v6 changes:
   Rearrange patch sets, only keep the unmerged patch.
   rebase on mainline.

v5 change:
   ppc: fix for review.

v4 change:
   mips: fixes two errors in entry.c.

v3 change:
   Keep the existing `vcpu->run` in the function body unchanged.

v2 change:
   s390 retains the original variable name and minimizes modification.

Tianjia Zhang (5):
   KVM: s390: clean up redundant 'kvm_run' parameters
   KVM: arm64: clean up redundant 'kvm_run' parameters
   KVM: PPC: clean up redundant kvm_run parameters in assembly
   KVM: MIPS: clean up redundant 'kvm_run' parameters
   KVM: MIPS: clean up redundant kvm_run parameters in assembly

  arch/arm64/include/asm/kvm_coproc.h   |  12 +--
  arch/arm64/include/asm/kvm_host.h |  11 +--
  arch/arm64/include/asm/kvm_mmu.h  |   2 +-
  arch/arm64/kvm/arm.c  |   6 +-
  arch/arm64/kvm/handle_exit.c  |  36 
  arch/arm64/kvm/mmio.c |  11 +--
  arch/arm64/kvm/mmu.c  |   5 +-
  arch/arm64/kvm/sys_regs.c |  13 ++-
  arch/mips/include/asm/kvm_host.h  |  32 ++--
  arch/mips/kvm/emulate.c   |  59 +
  arch/mips/kvm/entry.c |  21 ++---
  arch/mips/kvm/mips.c  |  14 ++--
  arch/mips/kvm/trap_emul.c | 114 +++---
  arch/mips/kvm/vz.c                    |  26 +++---
  arch/powerpc/include/asm/kvm_ppc.h    |   2 +-
  arch/powerpc/kvm/book3s_interrupts.S  |  22 +++--
  arch/powerpc/kvm/book3s_pr.c  |   9 +-
  arch/powerpc/kvm/booke.c  |   9 +-
  arch/powerpc/kvm/booke_interrupts.S   |   9 +-
  arch/powerpc/kvm/bookehv_interrupts.S |  10 +--
  arch/s390/kvm/kvm-s390.c  |  23 --
  21 files changed, 188 insertions(+), 258 deletions(-)




[PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host

2020-07-10 Thread Andrew Scull
The task state can be checked by the host and the vcpu flags updated
before calling into hyp. This more neatly separates the concerns and
removes the need to map the task flags to EL2.

Hyp acts on the state provided to it by the host and updates it when
switching to the vcpu state.

No functional change.

Signed-off-by: Andrew Scull 
---
 arch/arm64/include/asm/kvm_host.h   |  5 +--
 arch/arm64/kvm/arm.c|  4 +-
 arch/arm64/kvm/fpsimd.c | 57 ++---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 19 -
 arch/arm64/kvm/hyp/nvhe/switch.c|  3 +-
 arch/arm64/kvm/hyp/vhe/switch.c |  3 +-
 6 files changed, 48 insertions(+), 43 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b06f24b5f443..ca1621eeb9d9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -24,7 +24,6 @@
 #include 
 #include 
 #include 
-#include 
 
 #define __KVM_HAVE_ARCH_INTC_INITIALIZED
 
@@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
struct kvm_guest_debug_arch vcpu_debug_state;
struct kvm_guest_debug_arch external_debug_state;
 
-   struct thread_info *host_thread_info;   /* hyp VA */
struct user_fpsimd_state *host_fpsimd_state;/* hyp VA */
 
struct {
@@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
-void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
+void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
 
 static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 98f05bdac3c1..c7a711ca840e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 
local_irq_disable();
 
+   kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
+
kvm_vgic_flush_hwstate(vcpu);
 
/*
@@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
if (static_branch_unlikely(&userspace_irqchip_in_use))
kvm_timer_sync_user(vcpu);
 
-   kvm_arch_vcpu_ctxsync_fp(vcpu);
+   kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
 
/*
 * We may have taken a host interrupt in HYP mode (ie
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 3e081d556e81..aec91f43df62 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
 {
int ret;
 
-   struct thread_info *ti = &current->thread_info;
struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
 
-   /*
-* Make sure the host task thread flags and fpsimd state are
-* visible to hyp:
-*/
-   ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
-   if (ret)
-   goto error;
-
+   /* Make sure the host task fpsimd state is visible to hyp: */
ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
if (ret)
goto error;
 
-   vcpu->arch.host_thread_info = kern_hyp_va(ti);
vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
 error:
return ret;
@@ -52,7 +43,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
  * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
  * The actual loading is done by the FPSIMD access trap taken to hyp.
  *
- * Here, we just set the correct metadata to indicate that the FPSIMD
+ * Here, we just set the correct metadata to indicate whether the FPSIMD
  * state in the cpu regs (if any) belongs to current on the host.
  *
  * TIF_SVE is backed up here, since it may get clobbered with guest state.
@@ -63,15 +54,46 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
BUG_ON(!current->mm);
 
vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
+ KVM_ARM64_FP_HOST |
  KVM_ARM64_HOST_SVE_IN_USE |
  KVM_ARM64_HOST_SVE_ENABLED);
+
+   if (!system_supports_fpsimd())
+   return;
+
+   /*
+* Having just come from the user task, if any FP state is loaded it
+* will be that of the task. Make a note of this but, just before
+* entering the vcpu, it will be double checked that the loaded FP
+* state isn't transient because things could change between now and
+* then.
+*/
vcpu->arch.flags |= KVM_ARM64_FP_HOST;
 
-   if (test_thread_flag(TIF_SVE))
-   vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
+   if (system_supports_sve())

[PATCH 0/2] Manage vcpu flags from the host

2020-07-10 Thread Andrew Scull
This is a clean up and no functional change is intended.

The aim is to keep management of the flags in the host and out of hyp. I
find this makes it easier to understand how the flags are used.

This follows a previous thread with Dave, in which it was explained that
the intent is to synchronize the flags at clearly defined places:
<20200707145713.287710-1-asc...@google.com>

Andrew Scull (2):
  KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
  KVM: arm64: Leave vcpu FPSIMD synchronization in host

 arch/arm64/include/asm/kvm_host.h |  7 ++-
 arch/arm64/kvm/arm.c  |  4 +-
 arch/arm64/kvm/debug.c|  2 +
 arch/arm64/kvm/fpsimd.c   | 57 ---
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h |  2 -
 arch/arm64/kvm/hyp/include/hyp/switch.h   | 19 
 arch/arm64/kvm/hyp/nvhe/switch.c  |  3 +-
 arch/arm64/kvm/hyp/vhe/switch.c   |  3 +-
 8 files changed, 51 insertions(+), 46 deletions(-)

-- 
2.27.0.383.g050319c2ae-goog



[PATCH 1/2] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host

2020-07-10 Thread Andrew Scull
Move the clearing of KVM_ARM64_DEBUG_DIRTY from being one of the last
things hyp does before exiting to the host to being one of the first
things the host does after hyp exits.

This means the host always manages the state of the bit and hyp simply
respects that in the context switch.

No functional change.

Signed-off-by: Andrew Scull 
---
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/kvm/debug.c| 2 ++
 arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 2 --
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e1a32c0707bb..b06f24b5f443 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,7 +404,7 @@ struct kvm_vcpu_arch {
 })
 
 /* vcpu_arch flags field values: */
-#define KVM_ARM64_DEBUG_DIRTY  (1 << 0)
+#define KVM_ARM64_DEBUG_DIRTY  (1 << 0) /* vcpu is using debug */
 #define KVM_ARM64_FP_ENABLED   (1 << 1) /* guest FP regs loaded */
 #define KVM_ARM64_FP_HOST  (1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE  (1 << 3) /* backup for host TIF_SVE */
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 7a7e425616b5..e9932618a362 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -209,6 +209,8 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
 {
trace_kvm_arm_clear_debug(vcpu->guest_debug);
 
+   vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
+
if (vcpu->guest_debug) {
restore_guest_debug_regs(vcpu);
 
diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
index 0297dc63988c..50ca5d048017 100644
--- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
@@ -161,8 +161,6 @@ static inline void __debug_switch_to_host_common(struct kvm_vcpu *vcpu)
 
__debug_save_state(guest_dbg, guest_ctxt);
__debug_restore_state(host_dbg, host_ctxt);
-
-   vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
 #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
-- 
2.27.0.383.g050319c2ae-goog



Re: [PATCH v6 0/5] clean up redundant 'kvm_run' parameters

2020-07-10 Thread Paolo Bonzini
On 10/07/20 09:32, Tianjia Zhang wrote:
> Hi Paolo,
> 
> Do you have any opinion on this patch series? Is there anything I can do to help with this patchset?

I was hoping to get some Tested-by tags; for now I'm queuing patches 1
and 2. Thanks,

Paolo

> Thanks and best,
> Tianjia
> 
> On 2020/6/23 21:14, Tianjia Zhang wrote:
>> In the current KVM code, 'kvm_run' is already embedded in the 'kvm_vcpu'
>> structure. For historical reasons, many KVM-related functions still take
>> both a 'kvm_run' and a 'kvm_vcpu' parameter. This series does a unified
>> cleanup of these remaining redundant parameters.
>>
>> The series completely cleans up the arm64, mips, ppc, and s390
>> architectures (x86 has no such redundant code). Because of the large
>> amount of modified code, a separate patch is made for each platform. On
>> the ppc platform there is also a redundant 'kvm_run' pointer in
>> 'vcpu_arch', which is cleaned up separately.
>>
>> ---
>> v6 changes:
>>    Rearrange patch sets, only keep the unmerged patch.
>>    rebase on mainline.
>>
>> v5 change:
>>    ppc: fix for review.
>>
>> v4 change:
>>    mips: fixes two errors in entry.c.
>>
>> v3 change:
>>    Keep the existing `vcpu->run` in the function body unchanged.
>>
>> v2 change:
>>    s390 retains the original variable name and minimizes modification.
>>
>> Tianjia Zhang (5):
>>    KVM: s390: clean up redundant 'kvm_run' parameters
>>    KVM: arm64: clean up redundant 'kvm_run' parameters
>>    KVM: PPC: clean up redundant kvm_run parameters in assembly
>>    KVM: MIPS: clean up redundant 'kvm_run' parameters
>>    KVM: MIPS: clean up redundant kvm_run parameters in assembly
>>
>>   arch/arm64/include/asm/kvm_coproc.h   |  12 +--
>>   arch/arm64/include/asm/kvm_host.h |  11 +--
>>   arch/arm64/include/asm/kvm_mmu.h  |   2 +-
>>   arch/arm64/kvm/arm.c  |   6 +-
>>   arch/arm64/kvm/handle_exit.c  |  36 
>>   arch/arm64/kvm/mmio.c |  11 +--
>>   arch/arm64/kvm/mmu.c  |   5 +-
>>   arch/arm64/kvm/sys_regs.c |  13 ++-
>>   arch/mips/include/asm/kvm_host.h  |  32 ++--
>>   arch/mips/kvm/emulate.c   |  59 +
>>   arch/mips/kvm/entry.c |  21 ++---
>>   arch/mips/kvm/mips.c  |  14 ++--
>>   arch/mips/kvm/trap_emul.c | 114 +++---
>>   arch/mips/kvm/vz.c                    |  26 +++---
>>   arch/powerpc/include/asm/kvm_ppc.h    |   2 +-
>>   arch/powerpc/kvm/book3s_interrupts.S  |  22 +++--
>>   arch/powerpc/kvm/book3s_pr.c  |   9 +-
>>   arch/powerpc/kvm/booke.c  |   9 +-
>>   arch/powerpc/kvm/booke_interrupts.S   |   9 +-
>>   arch/powerpc/kvm/bookehv_interrupts.S |  10 +--
>>   arch/s390/kvm/kvm-s390.c  |  23 --
>>   21 files changed, 188 insertions(+), 258 deletions(-)
>>
> 



Re: [PATCH v6 3/5] KVM: PPC: clean up redundant kvm_run parameters in assembly

2020-07-10 Thread Paolo Bonzini
On 23/06/20 15:14, Tianjia Zhang wrote:
>  
>   /* Load non-volatile guest state from the vcpu */
> - VCPU_LOAD_NVGPRS(r4)
> + VCPU_LOAD_NVGPRS(r3)
>  
>  kvm_start_lightweight:
>   /* Copy registers into shadow vcpu so we can access them in real mode */
> - mr  r3, r4
>   bl  FUNC(kvmppc_copy_to_svcpu)
>   nop
> - REST_GPR(4, r1)
> + REST_GPR(3, r1)
>  
>  #ifdef CONFIG_PPC_BOOK3S_64
>   /* Get the dcbz32 flag */
> @@ -146,7 +144,7 @@ after_sprg3_load:

Below this hunk, there are a bunch of references to r3 and r4 left:

rldicl  r3, r3, 0, 63   /* r3 &= 1 */
stb r3, HSTATE_RESTORE_HID5(r13)

/* Load up guest SPRG3 value, since it's user readable */
lwz r3, VCPU_SHAREDBE(r4) <<<
cmpwi   r3, 0
ld  r5, VCPU_SHARED(r4)   <<<

where r3 is also destroyed.  So I'd rather have three patches:

- one that is as small as possible, changing the prototypes and adding

mr r4, r3

- one that entirely swaps out r3 and r4.  This would be the hard one to
review!

- one that cleans up the prologue and epilogue

But overall I think it's simplest to just leave out this patch.

Paolo



Re: [PATCH v6 1/5] KVM: s390: clean up redundant 'kvm_run' parameters

2020-07-10 Thread Paolo Bonzini
On 23/06/20 17:31, Christian Borntraeger wrote:
> 
> I have trouble seeing the value in this particular patch. We add LOCs
> without providing any noticeable benefit. All the other patches in this
> series at least reduce the amount of code. So I would defer to Paolo if he
> prefers to have it this way across all architectures.

Yes, it adds lines of code but they're just

+   struct kvm_run *kvm_run = vcpu->run;

You could avoid the LoC increase by repeating vcpu->run over and over,
but I think the code overall is clearer.
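
As a rough sketch of the two styles being compared (illustrative only, not
the actual s390 diff; the function names are made up):

#include <linux/kvm_host.h>

/* Without the local alias: every access repeats vcpu->run. */
static void mark_mmio_exit_repeat(struct kvm_vcpu *vcpu, gpa_t gpa)
{
        vcpu->run->exit_reason = KVM_EXIT_MMIO;
        vcpu->run->mmio.phys_addr = gpa;
        vcpu->run->mmio.is_write = 0;
}

/* With the local alias, as the series does: one extra line, shorter uses. */
static void mark_mmio_exit_alias(struct kvm_vcpu *vcpu, gpa_t gpa)
{
        struct kvm_run *kvm_run = vcpu->run;

        kvm_run->exit_reason = KVM_EXIT_MMIO;
        kvm_run->mmio.phys_addr = gpa;
        kvm_run->mmio.is_write = 0;
}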

Paolo
