On 05/21/2018 04:44 AM, Russell King wrote:
> Harden the branch predictor against Spectre v2 attacks on context
> switches for ARMv7 and later CPUs. We do this by:
>
> Cortex A9, A12, A17, A73, A75: invalidating the BTB.
> Cortex A15, Brahma B15: invalidating the instruction cache.
>
> Cortex
Hi Eric,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on kvmarm/next]
[also build test WARNING on v4.17-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
On Mon, May 21, 2018 at 03:40:02PM +0100, Marc Zyngier wrote:
> On 21/05/18 15:17, Dave Martin wrote:
> > This patch adds SVE context saving to the hyp FPSIMD context switch
> > path. This means that it is no longer necessary to save the host
> > SVE state in advance of entering the guest, when
The conversion of the FPSIMD context switch trap code to C has added
some overhead to calling it, due to the need to save registers that
the procedure call standard defines as caller-saved.
So, perhaps it is no longer worth invoking this trap handler quite
so early.
Instead, we can invoke it
The entire tail of fixup_guest_exit() is contained in if statements
of the form if (x && *exit_code == ARM_EXCEPTION_TRAP). As a result,
we can check just once and bail out of the function early, allowing
the remaining if conditions to be simplified.
The only awkward case is where *exit_code is
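The refactoring described above can be sketched as follows. This is an illustrative model, not the real KVM code: the exit-code constants and the handler body are mocked, and the return convention is simplified. The point is that the repeated `x && *exit_code == ARM_EXCEPTION_TRAP` guards collapse into one early check.

```c
#include <stdbool.h>

/* Mock exit codes standing in for the real ARM_EXCEPTION_* values. */
#define ARM_EXCEPTION_TRAP 0
#define ARM_EXCEPTION_IRQ  1

/*
 * Sketch of the simplified fixup_guest_exit(): the trap check is done
 * once up front, and the function bails out early for any other exit,
 * instead of repeating the check in every condition of the tail.
 * Returns true when the trap-handling tail was entered.
 */
static bool fixup_guest_exit(int *exit_code)
{
	if (*exit_code != ARM_EXCEPTION_TRAP)
		return false;	/* nothing to fix up for non-trap exits */

	/*
	 * ... individual trap handlers go here, each formerly guarded
	 * by "cond && *exit_code == ARM_EXCEPTION_TRAP" ...
	 */
	return true;
}
```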
In fixup_guest_exit(), there are a couple of cases where after
checking what the exit code was, we assign it explicitly with the
value it already had.
Assuming this is not indicative of a bug, these assignments are not
needed.
This patch removes the redundant assignments, and simplifies some
Now that the host SVE context can be saved on demand from Hyp,
there is no longer any need to save this state in advance before
entering the guest.
This patch removes the relevant call to
kvm_fpsimd_flush_cpu_state().
Since the problem that function was intended to solve now no longer
exists,
This patch adds SVE context saving to the hyp FPSIMD context switch
path. This means that it is no longer necessary to save the host
SVE state in advance of entering the guest, when in use.
In order to avoid adding pointless complexity to the code, VHE is
assumed if SVE is in use. VHE is an
In order to make sve_save_state()/sve_load_state() more easily
reusable and to get rid of a potential branch on context switch
critical paths, this patch makes sve_pffr() inline and moves it to
fpsimd.h.
must be included in fpsimd.h in order to make
this work, and this creates an #include cycle
sve_pffr(), which is used to derive the base address used for
low-level SVE save/restore routines, currently takes the relevant
task_struct as an argument.
The only accessed fields are actually part of thread_struct, so
this patch changes the argument type accordingly. This is done in
Having read_zcr_features() inline in cpufeature.h results in that
header requiring #includes which make it hard to include
elsewhere without triggering header inclusion
cycles.
This is not a hot-path function and arguably should not be in
cpufeature.h in the first place, so this patch moves it
This patch refactors KVM to align the host and guest FPSIMD
save/restore logic with each other for arm64. This reduces the
number of redundant save/restore operations that must occur, and
reduces the common-case IRQ blackout time during guest exit storms
by saving the host state lazily and
In preparation for allowing non-task (i.e., KVM vcpu) FPSIMD
contexts to be handled by the fpsimd common code, this patch adapts
task_fpsimd_save() to save back the currently loaded context,
removing the explicit dependency on current.
The relevant storage to write back to in memory is now found
In preparation for optimising the way KVM manages switching the
guest and host FPSIMD state, it is necessary to provide a means for
code outside arch/arm64/kernel/fpsimd.c to restore the user trap
configuration for SVE correctly for the current task.
Rather than requiring external code to
In struct vcpu_arch, the debug_flags field is used to store
debug-related flags about the vcpu state.
Since we are about to add some more flags related to FPSIMD and
SVE, it makes sense to add them to the existing flags field rather
than adding new fields. Since there is only one debug_flags
From: Christoffer Dall
KVM/ARM differs from other architectures in having to maintain an
additional virtual address space from that of the host and the
guest, because we split the execution of KVM across both EL1 and
EL2.
This results in a need to explicitly map
This patch uses the new update_thread_flag() helpers to simplify a
couple of if () set; else clear; constructs.
No functional change.
Signed-off-by: Dave Martin
Acked-by: Marc Zyngier
Acked-by: Catalin Marinas
Cc: Will Deacon
To make the lazy FPSIMD context switch trap code easier to hack on,
this patch converts it to C.
This is not amazingly efficient, but the trap should typically only
be taken once per host context switch.
Signed-off-by: Dave Martin
Reviewed-by: Marc Zyngier
There are a number of bits of code sprinkled around the kernel to
set a thread flag if a certain condition is true, and clear it
otherwise.
To help make those call sites terser and less cumbersome, this
patch adds a new family of thread flag manipulators
update*_thread_flag([...,] flag,
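The shape of the helper can be modelled on a plain flags word (the real kernel versions operate on the current thread's `thread_info` and use atomic bit operations). The benefit is that the four-line `if (cond) set; else clear;` pattern at each call site becomes a single call.

```c
#include <stdbool.h>

/* Stand-in for the per-thread flags word in thread_info. */
static unsigned long thread_flags;

/*
 * Sketch of update_thread_flag(): set the flag when value is true,
 * clear it otherwise. The real helper dispatches to
 * set_thread_flag()/clear_thread_flag().
 */
static inline void update_thread_flag(int flag, bool value)
{
	if (value)
		thread_flags |= 1UL << flag;
	else
		thread_flags &= ~(1UL << flag);
}
```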
Note: Most of these patches are Arm-specific. People not Cc'd on the
whole series can find it in the linux-arm-kernel archive [2].
This series aims to improve the way FPSIMD context is handled by KVM.
Only minor changes have been made since the previous v8 [1], though
this posting does apply a
We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified a HVC call
coming from the guest.
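The fast-path test amounts to comparing the guest's first argument register against the SMCCC-defined function ID as soon as the exit is known to be an HVC. The ID below is the one the SMC Calling Convention assigns to ARCH_WORKAROUND_1; the surrounding function is an illustrative stand-in for the real hyp entry code, which performs this check in assembly before any slower exit handling runs.

```c
#include <stdint.h>
#include <stdbool.h>

/* SMCCC fast call, SMC32, arch-owned, function 0x8000. */
#define ARM_SMCCC_ARCH_WORKAROUND_1 0x80008000U

/*
 * Sketch of the early intercept: given the guest's r0/x0 at the point
 * of the HVC, decide whether this is the workaround call that should
 * be handled immediately (it is a no-op from the caller's view, since
 * the BP invalidation happens on the guest-exit path anyway).
 */
static bool is_arch_workaround_1(uint32_t guest_r0)
{
	return guest_r0 == ARM_SMCCC_ARCH_WORKAROUND_1;
}
```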
Signed-off-by: Russell King
---
Report support for SMCCC_ARCH_WORKAROUND_1 to KVM guests for affected
CPUs.
Signed-off-by: Russell King
---
arch/arm/include/asm/kvm_host.h | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h
Include Brahma B15 in the Spectre v2 KVM workarounds.
Signed-off-by: Russell King
---
arch/arm/include/asm/kvm_mmu.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 48edb1f4ced4..fea770f78144
From: Marc Zyngier
In order to avoid aliasing attacks against the branch predictor
on Cortex-A15, let's invalidate the BTB on guest exit, which can
only be done by invalidating the icache (with ACTLR[0] being set).
We use the same hack as for A12/A17 to perform the vector
From: Marc Zyngier
In order to avoid aliasing attacks against the branch predictor,
let's invalidate the BTB on guest exit. This is made complicated
by the fact that we cannot take a branch before invalidating the
BTB.
We only apply this to A12 and A17, which are the only
In order to prevent aliasing attacks on the branch predictor,
invalidate the BTB or instruction cache on CPUs that are known to be
affected when taking an abort on an address that is outside of a user
task limit:
Cortex A8, A9, A12, A17, A73, A75: flush BTB.
Cortex A15, Brahma B15: invalidate
When the branch predictor hardening is enabled, firmware must have set
the IBE bit in the auxiliary control register. If this bit has not
been set, the Spectre workarounds will not be functional.
Add validation that this bit is set, and print a warning at alert level
if this is not the case.
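The check described above can be sketched as below. On real hardware the auxiliary control register is read with an `mrc` instruction and the warning goes through `pr_alert()`; here the value is passed in so the logic is testable, and the IBE bit position (bit 0) should be treated as illustrative for the affected cores.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the IBE validation: if firmware has not set the IBE bit
 * in ACTLR, the BTB/icache invalidation performed by the workarounds
 * has no effect, so warn loudly. Returns true when the bit is set.
 */
static bool check_spectre_auxcr(uint32_t actlr)
{
	if (!(actlr & 1u)) {
		fprintf(stderr,
			"Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n");
		return false;
	}
	return true;
}
```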
Add PSCI based hardening for cores that require more complex handling in
firmware.
Signed-off-by: Russell King
Acked-by: Marc Zyngier
---
arch/arm/mm/proc-v7-bugs.c | 50 ++
arch/arm/mm/proc-v7.S
Add a Kconfig symbol for CPUs which are vulnerable to the Spectre
attacks.
Signed-off-by: Russell King
Reviewed-by: Florian Fainelli
---
arch/arm/mm/Kconfig | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm/mm/Kconfig
Add support for per-processor bug checking - each processor function
descriptor gains a function pointer for this check, which must not be
an __init function. If non-NULL, this will be called whenever a CPU
enters the kernel via whichever path (boot CPU, secondary CPU startup,
CPU resuming,
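The infrastructure above can be modelled as follows. The struct and names are simplified stand-ins for the real processor function descriptor (`proc_fns`); the essential points are that the hook is optional (NULL means no per-CPU check) and must not be `__init`, because it also runs on secondary bring-up and resume.

```c
#include <stddef.h>

/* Simplified stand-in for the processor function descriptor. */
struct processor {
	void (*check_bugs)(void);	/* must not be __init: also runs
					 * on secondary CPU startup and
					 * on resume from low power */
};

static int bug_checks_run;

/* Example per-processor check, as a v7 implementation might provide. */
static void v7_check_bugs(void)
{
	bug_checks_run++;
}

static struct processor processor_fns = { .check_bugs = v7_check_bugs };

/*
 * Called on every path a CPU enters the kernel; dispatches to the
 * processor-specific check if one is registered.
 */
static void check_other_bugs(void)
{
	if (processor_fns.check_bugs)
		processor_fns.check_bugs();
}
```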
Prepare the processor bug infrastructure so that it can be expanded to
check for per-processor bugs.
Signed-off-by: Russell King
Reviewed-by: Florian Fainelli
---
arch/arm/include/asm/bugs.h | 4 ++--
arch/arm/kernel/Makefile | 1 +
Check for CPU bugs when secondary processors are being brought online,
and also when CPUs are resuming from a low power mode. This gives an
opportunity to check that processor specific bug workarounds are
correctly enabled for all paths that a CPU re-enters the kernel.
Signed-off-by: Russell
Add CPU part numbers for the above-mentioned CPUs.
Signed-off-by: Russell King
Acked-by: Florian Fainelli
---
arch/arm/include/asm/cputype.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/arm/include/asm/cputype.h
This is the second posting - the original cover note is below. Comments
from the previous series addressed:
- Drop R7 and R8 changes.
- Remove "PSCI" from the hypervisor version of the workaround.
arch/arm/include/asm/bugs.h | 6 +-
arch/arm/include/asm/cp15.h | 3 +
On Mon, May 21, 2018 at 06:12:16AM +0800, kbuild test robot wrote:
> tree: https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git
> queue
> head: f2d1aab22d57235b58db391d318727d3e5ef1e89
> commit: 61d47b5d95db9a4ce12c50ffaa6918a40619984f [13/29] KVM: arm64: Save
> host SVE context
On Sun, May 20, 2018 at 02:14:41PM +0100, Marc Zyngier wrote:
> On Wed, 16 May 2018 11:49:42 +0100
> Dave Martin wrote:
>
> Hi Dave,
>
> > Hi Marc,
> >
> > This is a trivial update to the previously posted v7 [1]. The only
> > changes are a couple of minor cosmetic