[PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-09 Thread Florian Fainelli
From: Will Deacon 

commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream

Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.

This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.

Signed-off-by: Will Deacon 
[florian: Adjust hyp-entry.S to account for the label; added the change to hyp/entry.S]
Signed-off-by: Florian Fainelli 
---
Changes in v2:

- added missing hunk in hyp/entry.S per Will's feedback

 arch/arm64/kernel/entry.S  | 2 ++
 arch/arm64/kvm/hyp/entry.S | 2 ++
 arch/arm64/kvm/hyp/hyp-entry.S | 4 ++++
 3 files changed, 8 insertions(+)
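
For reference, "sb" in the hunks below is an assembler macro providing a speculation
barrier. On CPUs that implement the ARMv8.5 SB instruction the upstream kernel patches
in the real instruction via the alternatives framework (ARM64_HAS_SB); elsewhere the
architected fallback is a DSB NSH followed by an ISB. A minimal, illustrative sketch of
such a macro follows (not part of this backport; the simplified body without alternatives
handling is an assumption for illustration only):

/*
 * Illustrative sketch only: a speculation barrier for CPUs without the
 * ARMv8.5 SB instruction.  Upstream emits the real SB instruction via the
 * alternatives framework when ARM64_HAS_SB is detected; the fallback is
 * the architected DSB NSH + ISB sequence shown here.
 */
	.macro	sb
	dsb	nsh
	isb
	.endm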

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ca978d7d98eb..3408c782702c 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -255,6 +255,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
.else
eret
.endif
+   sb
.endm
 
.macro  get_thread_info, rd
@@ -945,6 +946,7 @@ __ni_sys_trace:
mrs x30, far_el1
.endif
eret
+   sb
.endm
 
.align  11
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index a360ac6e89e9..93704e6894d2 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,6 +83,7 @@ ENTRY(__guest_enter)
 
// Do not touch any register after this!
eret
+   sb
 ENDPROC(__guest_enter)
 
 ENTRY(__guest_exit)
@@ -195,4 +196,5 @@ alternative_endif
ldp x0, x1, [sp], #16
 
eret
+   sb
 ENDPROC(__fpsimd_guest_restore)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index bf4988f9dae8..3675e7f0ab72 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -97,6 +97,7 @@ el1_sync: // Guest trapped into EL2
do_el2_call
 
 2: eret
+   sb
 
 el1_hvc_guest:
/*
@@ -147,6 +148,7 @@ wa_epilogue:
mov x0, xzr
add sp, sp, #16
eret
+   sb
 
 el1_trap:
get_vcpu_ptr x1, x0
@@ -198,6 +200,7 @@ el2_error:
b.ne__hyp_panic
mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
eret
+   sb
 
 ENTRY(__hyp_do_panic)
mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
@@ -206,6 +209,7 @@ ENTRY(__hyp_do_panic)
ldr lr, =panic
msr elr_el2, lr
eret
+   sb
 ENDPROC(__hyp_do_panic)
 
 ENTRY(__hyp_panic)
-- 
2.17.1



Re: [PATCH v3 00/21] KVM: Cleanup and unify kvm_mmu_memory_cache usage

2020-07-09 Thread Paolo Bonzini
On 03/07/20 04:35, Sean Christopherson wrote:
> The only interesting delta from v2 is that patch 18 is updated to handle
> a conflict with arm64's p4d rework.  Resolution was straightforward
> (famous last words).
> 
> 
> This series resurrects Christoffer Dall's series[1] to provide a common
> MMU memory cache implementation that can be shared by x86, arm64 and MIPS.
> 
> It also picks up a suggested change from Ben Gardon[2] to clear shadow
> page tables during initial allocation so as to avoid clearing entire
> pages while holding mmu_lock.
> 
> The front half of the patches do house cleaning on x86's memory cache
> implementation in preparation for moving it to common code, along with a
> fair bit of cleanup on the usage.  The middle chunk moves the patches to
> common KVM, and the last two chunks convert arm64 and MIPS to the common
> implementation.
> 
> Fully tested on x86 only.  Compile tested patches 14-21 on arm64, MIPS,
> s390 and PowerPC.

Queued, thanks.

Paolo

> v3:
>   - Rebased to kvm/queue, commit a037ff353ba6 ("Merge ... into HEAD")
>   - Collect more review tags. [Ben]
> 
> v2:
>   - Rebase to kvm-5.8-2, commit 49b3deaad345 ("Merge tag ...").
>   - Use an asm-generic kvm_types.h for s390 and PowerPC instead of an
> empty arch-specific file. [Marc]
>   - Explicit document "GFP_PGTABLE_USER == GFP_KERNEL_ACCOUNT | GFP_ZERO"
> in the arm64 conversion patch. [Marc]
>   - Collect review tags. [Ben]
> 
> Sean Christopherson (21):
>   KVM: x86/mmu: Track the associated kmem_cache in the MMU caches
>   KVM: x86/mmu: Consolidate "page" variant of memory cache helpers
>   KVM: x86/mmu: Use consistent "mc" name for kvm_mmu_memory_cache locals
>   KVM: x86/mmu: Remove superfluous gotos from mmu_topup_memory_caches()
>   KVM: x86/mmu: Try to avoid crashing KVM if a MMU memory cache is empty
>   KVM: x86/mmu: Move fast_page_fault() call above
> mmu_topup_memory_caches()
>   KVM: x86/mmu: Topup memory caches after walking GVA->GPA
>   KVM: x86/mmu: Clean up the gorilla math in mmu_topup_memory_caches()
>   KVM: x86/mmu: Separate the memory caches for shadow pages and gfn
> arrays
>   KVM: x86/mmu: Make __GFP_ZERO a property of the memory cache
>   KVM: x86/mmu: Zero allocate shadow pages (outside of mmu_lock)
>   KVM: x86/mmu: Skip filling the gfn cache for guaranteed direct MMU
> topups
>   KVM: x86/mmu: Prepend "kvm_" to memory cache helpers that will be
> global
>   KVM: Move x86's version of struct kvm_mmu_memory_cache to common code
>   KVM: Move x86's MMU memory cache helpers to common KVM code
>   KVM: arm64: Drop @max param from mmu_topup_memory_cache()
>   KVM: arm64: Use common code's approach for __GFP_ZERO with memory
> caches
>   KVM: arm64: Use common KVM implementation of MMU memory caches
>   KVM: MIPS: Drop @max param from mmu_topup_memory_cache()
>   KVM: MIPS: Account pages used for GPA page tables
>   KVM: MIPS: Use common KVM implementation of MMU memory caches
> 
>  arch/arm64/include/asm/kvm_host.h  |  11 ---
>  arch/arm64/include/asm/kvm_types.h |   8 ++
>  arch/arm64/kvm/arm.c   |   2 +
>  arch/arm64/kvm/mmu.c   |  56 +++--
>  arch/mips/include/asm/kvm_host.h   |  11 ---
>  arch/mips/include/asm/kvm_types.h  |   7 ++
>  arch/mips/kvm/mmu.c|  44 ++
>  arch/powerpc/include/asm/Kbuild|   1 +
>  arch/s390/include/asm/Kbuild   |   1 +
>  arch/x86/include/asm/kvm_host.h|  14 +---
>  arch/x86/include/asm/kvm_types.h   |   7 ++
>  arch/x86/kvm/mmu/mmu.c | 129 +
>  arch/x86/kvm/mmu/paging_tmpl.h |  10 +--
>  include/asm-generic/kvm_types.h|   5 ++
>  include/linux/kvm_host.h   |   7 ++
>  include/linux/kvm_types.h  |  19 +
>  virt/kvm/kvm_main.c|  55 
>  17 files changed, 176 insertions(+), 211 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_types.h
>  create mode 100644 arch/mips/include/asm/kvm_types.h
>  create mode 100644 arch/x86/include/asm/kvm_types.h
>  create mode 100644 include/asm-generic/kvm_types.h
> 
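
For readers not familiar with the cache being unified above: the common
kvm_mmu_memory_cache is a small per-vCPU stack of pre-allocated page-table objects.
It is topped up with a sleeping allocation before mmu_lock is taken and drained with
a non-sleeping pop while the lock is held; folding __GFP_ZERO into the topup is what
lets shadow pages arrive pre-zeroed instead of being cleared under the lock. A hedged
sketch follows, with names taken from the patch titles above (the exact signatures,
capacity constant and error handling are assumptions, not the merged code):

#include <linux/bug.h>
#include <linux/kernel.h>
#include <linux/slab.h>

#define KVM_NR_MEM_OBJS 40			/* illustrative capacity */

struct kvm_mmu_memory_cache {
	int nobjs;				/* objects currently cached */
	gfp_t gfp_zero;				/* __GFP_ZERO or 0 */
	struct kmem_cache *kmem_cache;		/* NULL: allocate whole pages */
	void *objects[KVM_NR_MEM_OBJS];
};

/* May sleep; called on the fault path before mmu_lock is taken. */
int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
{
	gfp_t gfp = GFP_KERNEL_ACCOUNT | mc->gfp_zero;
	void *obj;

	while (mc->nobjs < min) {
		if (mc->nobjs >= (int)ARRAY_SIZE(mc->objects))
			return -ENOMEM;
		obj = mc->kmem_cache ? kmem_cache_alloc(mc->kmem_cache, gfp)
				     : (void *)__get_free_page(gfp);
		if (!obj)
			return -ENOMEM;
		mc->objects[mc->nobjs++] = obj;
	}
	return 0;
}

/* Never sleeps and never zeroes: safe to call with mmu_lock held. */
void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
{
	BUG_ON(!mc->nobjs);
	return mc->objects[--mc->nobjs];
}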
