Re: [PATCH 0/9] arm64: Stolen time support

2020-07-20 Thread zhukeqian
Hi Steven,

On 2019/8/2 22:50, Steven Price wrote:
> This series add support for paravirtualized time for arm64 guests and
> KVM hosts following the specification in Arm's document DEN 0057A:
> 
> https://developer.arm.com/docs/den0057/a
> 
> It implements support for stolen time, allowing the guest to
> identify time when it is forcibly not executing.
> 
> It doesn't implement support for Live Physical Time (LPT) as there are
> some concerns about the overheads and approach in the above
Do you plan to pick up LPT support? There is demand for cross-frequency
migration (from older platforms to newer platforms).

I am not clear about the overhead and approach concerns here; could you please
give some more detailed information? Maybe we can work together to resolve
these concerns. :-)

Thanks,
Keqian
> specification, and I expect an updated version of the specification to
> be released soon with just the stolen time parts.
> 
> I previously posted a series including LPT (as well as stolen time):
> https://lore.kernel.org/kvmarm/20181212150226.38051-1-steven.pr...@arm.com/
> 
> Patches 2, 5, 7 and 8 are cleanup patches and could be taken separately.
> 
> Christoffer Dall (1):
>   KVM: arm/arm64: Factor out hypercall handling from PSCI code
> 
> Steven Price (8):
>   KVM: arm64: Document PV-time interface
>   KVM: arm64: Implement PV_FEATURES call
>   KVM: arm64: Support stolen time reporting via shared structure
>   KVM: Allow kvm_device_ops to be const
>   KVM: arm64: Provide a PV_TIME device to user space
>   arm/arm64: Provide a wrapper for SMCCC 1.1 calls
>   arm/arm64: Make use of the SMCCC 1.1 wrapper
>   arm64: Retrieve stolen time as paravirtualized guest
> 
>  Documentation/virtual/kvm/arm/pvtime.txt | 107 +
>  arch/arm/kvm/Makefile|   2 +-
>  arch/arm/kvm/handle_exit.c   |   2 +-
>  arch/arm/mm/proc-v7-bugs.c   |  13 +-
>  arch/arm64/include/asm/kvm_host.h|  13 +-
>  arch/arm64/include/asm/kvm_mmu.h |   2 +
>  arch/arm64/include/asm/pvclock-abi.h |  20 +++
>  arch/arm64/include/uapi/asm/kvm.h|   6 +
>  arch/arm64/kernel/Makefile   |   1 +
>  arch/arm64/kernel/cpu_errata.c   |  80 --
>  arch/arm64/kernel/kvm.c  | 155 ++
>  arch/arm64/kvm/Kconfig   |   1 +
>  arch/arm64/kvm/Makefile  |   2 +
>  arch/arm64/kvm/handle_exit.c |   4 +-
>  include/kvm/arm_hypercalls.h |  44 ++
>  include/kvm/arm_psci.h   |   2 +-
>  include/linux/arm-smccc.h|  58 +++
>  include/linux/cpuhotplug.h   |   1 +
>  include/linux/kvm_host.h |   4 +-
>  include/linux/kvm_types.h|   2 +
>  include/uapi/linux/kvm.h |   2 +
>  virt/kvm/arm/arm.c   |  18 +++
>  virt/kvm/arm/hypercalls.c| 138 
>  virt/kvm/arm/mmu.c   |  44 ++
>  virt/kvm/arm/psci.c  |  84 +-
>  virt/kvm/arm/pvtime.c| 190 +++
>  virt/kvm/kvm_main.c  |   6 +-
>  27 files changed, 848 insertions(+), 153 deletions(-)
>  create mode 100644 Documentation/virtual/kvm/arm/pvtime.txt
>  create mode 100644 arch/arm64/include/asm/pvclock-abi.h
>  create mode 100644 arch/arm64/kernel/kvm.c
>  create mode 100644 include/kvm/arm_hypercalls.h
>  create mode 100644 virt/kvm/arm/hypercalls.c
>  create mode 100644 virt/kvm/arm/pvtime.c
> 
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism

2020-07-20 Thread Andrew Scull
On Wed, Jul 15, 2020 at 07:44:08PM +0100, Andrew Scull wrote:
> diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> index e727bee8e110..c441aabb8ab0 100644
> --- a/arch/arm64/kvm/hyp/hyp-entry.S
> +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> @@ -177,7 +177,6 @@ el2_error:
>   adr x1, abort_guest_exit_end
>   ccmp    x0, x1, #4, ne
>   b.ne    __hyp_panic
> - mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
>   eret
>   sb

Having looked at this again, I also meant to remove the hunk below. It
used to check that the SError had actually been taken through the
exception vector but I am removing that.

However, do I need to continue doing that check to make sure the SError
didn't get cancelled (if that is possible) or some other architectural
phenomenon happened that I haven't factored in to my thinking?

--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -210,13 +222,8 @@ abort_guest_exit_end:
 
 msr daifset, #4 // Mask aborts
  
-   // If the exception took place, restore the EL1 exception
-   // context so that we can report some information.
-   // Merge the exception code with the SError pending bit.
-   tbz x0, #ARM_EXIT_WITH_SERROR_BIT, 1f


[PATCH stable 4.14.y] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-20 Thread Florian Fainelli
From: Will Deacon 

commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream

Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.

This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.

Signed-off-by: Will Deacon 
[florian: update arch/arm64/kvm/entry.S::__fpsimd_guest_restore]
Signed-off-by: Florian Fainelli 
---
 arch/arm64/kernel/entry.S  | 2 ++
 arch/arm64/kvm/hyp/entry.S | 2 ++
 arch/arm64/kvm/hyp/hyp-entry.S | 4 
 3 files changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index c1ffa95c0ad2..f70e0893ba51 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -367,6 +367,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
.else
eret
.endif
+   sb
.endm
 
.macro  irq_stack_entry
@@ -1046,6 +1047,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
mrs x30, far_el1
.endif
eret
+   sb
.endm
 
.align  11
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index a360ac6e89e9..93704e6894d2 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,6 +83,7 @@ ENTRY(__guest_enter)
 
// Do not touch any register after this!
eret
+   sb
 ENDPROC(__guest_enter)
 
 ENTRY(__guest_exit)
@@ -195,4 +196,5 @@ alternative_endif
ldp x0, x1, [sp], #16
 
eret
+   sb
 ENDPROC(__fpsimd_guest_restore)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 3c283fd8c8f5..b4d6a6c6c6ce 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -96,6 +96,7 @@ el1_sync:   // Guest trapped into EL2
do_el2_call
 
eret
+   sb
 
 el1_hvc_guest:
/*
@@ -146,6 +147,7 @@ wa_epilogue:
mov x0, xzr
add sp, sp, #16
eret
+   sb
 
 el1_trap:
get_vcpu_ptr    x1, x0
@@ -204,6 +206,7 @@ el2_error:
b.ne__hyp_panic
mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
eret
+   sb
 
 ENTRY(__hyp_do_panic)
mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
@@ -212,6 +215,7 @@ ENTRY(__hyp_do_panic)
ldr lr, =panic
msr elr_el2, lr
eret
+   sb
 ENDPROC(__hyp_do_panic)
 
 ENTRY(__hyp_panic)
-- 
2.17.1



[PATCH stable 4.19.y] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-20 Thread Florian Fainelli
From: Will Deacon 

commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream

Some CPUs can speculate past an ERET instruction and potentially perform
speculative accesses to memory before processing the exception return.
Since the register state is often controlled by a lower privilege level
at the point of an ERET, this could potentially be used as part of a
side-channel attack.

This patch emits an SB sequence after each ERET so that speculation is
held up on exception return.

Signed-off-by: Will Deacon 
Signed-off-by: Florian Fainelli 
---
 arch/arm64/kernel/entry.S  | 2 ++
 arch/arm64/kvm/hyp/entry.S | 1 +
 arch/arm64/kvm/hyp/hyp-entry.S | 4 
 3 files changed, 7 insertions(+)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5f800384cb9a..49f80b5627fa 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -363,6 +363,7 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
.else
eret
.endif
+   sb
.endm
 
.macro  irq_stack_entry
@@ -994,6 +995,7 @@ alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
mrs x30, far_el1
.endif
eret
+   sb
.endm
 
.align  11
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index fad1e164fe48..675fdc186e3b 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,6 +83,7 @@ ENTRY(__guest_enter)
 
// Do not touch any register after this!
eret
+   sb
 ENDPROC(__guest_enter)
 
 ENTRY(__guest_exit)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 24b4fbafe3e4..e35abf84eb96 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -96,6 +96,7 @@ el1_sync:   // Guest trapped into EL2
do_el2_call
 
eret
+   sb
 
 el1_hvc_guest:
/*
@@ -146,6 +147,7 @@ wa_epilogue:
mov x0, xzr
add sp, sp, #16
eret
+   sb
 
 el1_trap:
get_vcpu_ptr    x1, x0
@@ -185,6 +187,7 @@ el2_error:
b.ne__hyp_panic
mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
eret
+   sb
 
 ENTRY(__hyp_do_panic)
mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
@@ -193,6 +196,7 @@ ENTRY(__hyp_do_panic)
ldr lr, =panic
msr elr_el2, lr
eret
+   sb
 ENDPROC(__hyp_do_panic)
 
 ENTRY(__hyp_panic)
-- 
2.17.1



Re: [PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-20 Thread Florian Fainelli
On 7/20/20 6:04 AM, Greg KH wrote:
> On Thu, Jul 09, 2020 at 12:50:23PM -0700, Florian Fainelli wrote:
>> From: Will Deacon 
>>
>> commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream
>>
>> Some CPUs can speculate past an ERET instruction and potentially perform
>> speculative accesses to memory before processing the exception return.
>> Since the register state is often controlled by a lower privilege level
>> at the point of an ERET, this could potentially be used as part of a
>> side-channel attack.
>>
>> This patch emits an SB sequence after each ERET so that speculation is
>> held up on exception return.
>>
>> Signed-off-by: Will Deacon 
>> [florian: Adjust hyp-entry.S to account for the label added in the
>>  change to hyp/entry.S]
>> Signed-off-by: Florian Fainelli 
>> ---
>> Changes in v2:
>>
>> - added missing hunk in hyp/entry.S per Will's feedback
> 
> What about 4.19.y and 4.14.y trees?  I can't take something for 4.9.y
> and then have a regression if someone moves to a newer release, right?

Sure, I will send you candidates for 4.14 and 4.19.
-- 
Florian


Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism

2020-07-20 Thread Andrew Scull
On Sat, Jul 18, 2020 at 10:00:30AM +0100, Marc Zyngier wrote:
> Hi Andrew,
> 
> On Wed, 15 Jul 2020 19:44:08 +0100,
> Andrew Scull  wrote:
> > 
> > When exiting a guest, just check whether there is an SError pending and
> > set the bit in the exit code. The fixup then initiates the ceremony
> > should it be required.
> > 
> > The separation allows for easier choices to be made as to whether the
> > demonic consultation should proceed.
> 
> Such as?

It's used in the next patch to keep host SErrors pending and left for
the host to handle when reentering the host vcpu. IIUC, this matches the
previous behaviour since hyp would mask SErrors.

We wanted to avoid the need to convert host SErrors into virtual ones
and I opted for this approach to keep as much of the logic/policy as
possible in C.

Let me know if you'd prefer a different split of the patches or there
should be different design goals.

> > 
> > Signed-off-by: Andrew Scull 
> > ---
> >  arch/arm64/include/asm/kvm_hyp.h|  2 ++
> >  arch/arm64/kvm/hyp/entry.S  | 27 +
> >  arch/arm64/kvm/hyp/hyp-entry.S  |  1 -
> >  arch/arm64/kvm/hyp/include/hyp/switch.h |  4 
> >  4 files changed, 25 insertions(+), 9 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h 
> > b/arch/arm64/include/asm/kvm_hyp.h
> > index 07745d9c49fc..50a774812761 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -91,6 +91,8 @@ void deactivate_traps_vhe_put(void);
> >  
> >  u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context 
> > *host_ctxt);
> >  
> > +void __vaxorcize_serror(void);
> 
> I think a VAXorsist reference in the comments is plenty. The code can
> definitely stay architectural. Something like "__handle_SEI()" should
> be good. I'm not *that* fun.

Fine... ;)

> > +
> >  void __noreturn hyp_panic(struct kvm_cpu_context *host_ctxt);
> >  #ifdef __KVM_NVHE_HYPERVISOR__
> >  void __noreturn __hyp_do_panic(unsigned long, ...);
> > diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> > index 6a641fcff4f7..dc4e3e7e7407 100644
> > --- a/arch/arm64/kvm/hyp/entry.S
> > +++ b/arch/arm64/kvm/hyp/entry.S
> > @@ -174,18 +174,31 @@ alternative_if ARM64_HAS_RAS_EXTN
> > mrs_s   x2, SYS_DISR_EL1
> > str x2, [x1, #(VCPU_FAULT_DISR - VCPU_CONTEXT)]
> > cbz x2, 1f
> > -   msr_s   SYS_DISR_EL1, xzr
> > 	orr x0, x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
> > -1: ret
> > +   nop
> > +1:
> >  alternative_else
> > dsb sy  // Synchronize against in-flight ld/st
> > isb // Prevent an early read of side-effect free ISR
> > mrs x2, isr_el1
> > -   tbnz x2, #8, 2f  // ISR_EL1.A
> > -   ret
> > -   nop
> > +   tbz x2, #8, 2f  // ISR_EL1.A
> > +   orr x0, x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
> >  2:
> >  alternative_endif
> > +
> > +   ret
> > +SYM_FUNC_END(__guest_enter)
> > +
> > +/*
> > + * void __vaxorcize_serror(void);
> > + */
> > +SYM_FUNC_START(__vaxorcize_serror)
> > +
> > +alternative_if ARM64_HAS_RAS_EXTN
> > +   msr_s   SYS_DISR_EL1, xzr
> > +   ret
> > +alternative_else_nop_endif
> > +
> > // We know we have a pending asynchronous abort, now is the
> > // time to flush it out. From your VAXorcist book, page 666:
> > // "Threaten me not, oh Evil one!  For I speak with
> > @@ -193,7 +206,6 @@ alternative_endif
> > mrs x2, elr_el2
> > mrs x3, esr_el2
> > mrs x4, spsr_el2
> > -   mov x5, x0
> >  
> > msr daifclr, #4 // Unmask aborts
> >  
> > @@ -217,6 +229,5 @@ abort_guest_exit_end:
> > msr elr_el2, x2
> > msr esr_el2, x3
> > msr spsr_el2, x4
> > -   orr x0, x0, x5
> >  1: ret
> > -SYM_FUNC_END(__guest_enter)
> > +SYM_FUNC_END(__vaxorcize_serror)
> > diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
> > index e727bee8e110..c441aabb8ab0 100644
> > --- a/arch/arm64/kvm/hyp/hyp-entry.S
> > +++ b/arch/arm64/kvm/hyp/hyp-entry.S
> > @@ -177,7 +177,6 @@ el2_error:
> > adr x1, abort_guest_exit_end
> > ccmpx0, x1, #4, ne
> > b.ne__hyp_panic
> > -   mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
> > eret
> > sb
> >  
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
> > b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 0511af14dc81..14a774d1a35a 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -405,6 +405,10 @@ static inline bool __hyp_handle_ptrauth(struct 
> > kvm_vcpu *vcpu)
> >   */
> >  static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  {
> > +   /* Flush guest SErrors. */
> > +   if (ARM_SERROR_PENDING(*exit_code))
> > +   __vaxorcize_serror();
> > +
> > if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
> > vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
> >  
> > -- 
> > 2.27.0.389.gc38d7665816-goog
> > 
> > 
> 
> I'm not against this patch, but 

Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism

2020-07-20 Thread Marc Zyngier

On 2020-07-20 16:40, Andrew Scull wrote:

On Wed, Jul 15, 2020 at 07:44:08PM +0100, Andrew Scull wrote:
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index e727bee8e110..c441aabb8ab0 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -177,7 +177,6 @@ el2_error:
adr x1, abort_guest_exit_end
ccmp    x0, x1, #4, ne
b.ne    __hyp_panic
-   mov x0, #(1 << ARM_EXIT_WITH_SERROR_BIT)
eret
sb


Having looked at this again, I also meant to remove the hunk below. It
used to check that the SError had actually been taken through the
exception vector but I am removing that.

However, do I need to continue doing that check to make sure the SError
didn't get cancelled (if that is possible) or some other architectural
phenomenon happened that I haven't factored in to my thinking?

--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -210,13 +222,8 @@ abort_guest_exit_end:

 msr daifset, #4 // Mask aborts

-   // If the exception took place, restore the EL1 exception
-   // context so that we can report some information.
-   // Merge the exception code with the SError pending bit.
-   tbz x0, #ARM_EXIT_WITH_SERROR_BIT, 1f


You have now inverted the logic: you detect the SError before
handling it, and set the flag accordingly. Which means that:

- you always enter this function having set ARM_EXIT_WITH_SERROR_BIT
  in the error code (which isn't passed to this function).

- you always take an exception, because you *know* there is a pending
  SError. Taking the exception on a non-RAS systems consumes it.

- If there is another SError pending after that, something bad happened
  at EL2, and we'll explode later on (on return to EL1).

So I think removing this hunk is the right thing to do.

M.
--
Jazz is not dead. It just smells funny...


Re: [PATCH 07/37] KVM: arm64: Separate SError detection from VAXorcism

2020-07-20 Thread Marc Zyngier

On 2020-07-20 15:13, Andrew Scull wrote:

On Sat, Jul 18, 2020 at 10:00:30AM +0100, Marc Zyngier wrote:

Hi Andrew,

On Wed, 15 Jul 2020 19:44:08 +0100,
Andrew Scull  wrote:
>
> When exiting a guest, just check whether there is an SError pending and
> set the bit in the exit code. The fixup then initiates the ceremony
> should it be required.
>
> The separation allows for easier choices to be made as to whether the
> demonic consultation should proceed.

Such as?


It's used in the next patch to keep host SErrors pending and left for
the host to handle when reentering the host vcpu. IIUC, this matches the
previous behaviour since hyp would mask SErrors.

We wanted to avoid the need to convert host SErrors into virtual ones
and I opted for this approach to keep as much of the logic/policy as
possible in C.


Right. That's the kind of information you should put in your commit
message, as it makes your intent much clearer.

Thanks,

M.
--
Jazz is not dead. It just smells funny...


Re: [PATCH stable v4.9 v2] arm64: entry: Place an SB sequence following an ERET instruction

2020-07-20 Thread Greg KH
On Thu, Jul 09, 2020 at 12:50:23PM -0700, Florian Fainelli wrote:
> From: Will Deacon 
> 
> commit 679db70801da9fda91d26caf13bf5b5ccc74e8e8 upstream
> 
> Some CPUs can speculate past an ERET instruction and potentially perform
> speculative accesses to memory before processing the exception return.
> Since the register state is often controlled by a lower privilege level
> at the point of an ERET, this could potentially be used as part of a
> side-channel attack.
> 
> This patch emits an SB sequence after each ERET so that speculation is
> held up on exception return.
> 
> Signed-off-by: Will Deacon 
> [florian: Adjust hyp-entry.S to account for the label added in the
>  change to hyp/entry.S]
> Signed-off-by: Florian Fainelli 
> ---
> Changes in v2:
> 
> - added missing hunk in hyp/entry.S per Will's feedback

What about 4.19.y and 4.14.y trees?  I can't take something for 4.9.y
and then have a regression if someone moves to a newer release, right?

thanks,

greg k-h