4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier <marc.zyng...@arm.com>

commit 21cbe3cc8a48ff17059912e019fbde28ed54745a upstream.

The ARMv8 architecture allows the cycle counter to be configured
by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
which in that case aliases PMCCFILTR_EL0. But it disallows the use
of PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
PMXEVCNTR_EL0.
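
For illustration, here is a minimal sketch of the two access patterns,
assuming kernel code running at EL1 and the arm64 read_sysreg()/
write_sysreg() helpers; it is purely illustrative and not part of the
fix below:

  /*
   * Allowed: SEL == 0x1f selects the cycle counter, so this write
   * to PMXEVTYPER_EL0 actually programs PMCCFILTR_EL0.
   */
  write_sysreg(0x1f, pmselr_el0);
  write_sysreg(0, pmxevtyper_el0);

  /*
   * Disallowed: with SEL == 0x1f, PMXEVCNTR_EL0 does not give
   * access to the cycle counter, and the access may UNDEF at EL1.
   * This is the access a guest can trip over.
   */
  (void)read_sysreg(pmxevcntr_el0);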

Linux itself doesn't violate this rule, but we may end up with
PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
despite the guest not having done anything wrong.

In order to avoid this unfortunate course of events (haha!), let's
sanitize PMSELR_EL0 on guest entry. This ensures that the guest
won't explode unexpectedly.

Acked-by: Will Deacon <will.dea...@arm.com>
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 arch/arm64/kvm/hyp/switch.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -82,7 +82,13 @@ static void __hyp_text __activate_traps(
        write_sysreg(val, hcr_el2);
        /* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
        write_sysreg(1 << 15, hstr_el2);
-       /* Make sure we trap PMU access from EL0 to EL2 */
+       /*
+        * Make sure we trap PMU access from EL0 to EL2. Also sanitize
+        * PMSELR_EL0 to make sure it never contains the cycle
+        * counter, which could make a PMXEVCNTR_EL0 access UNDEF at
+        * EL1 instead of being trapped to EL2.
+        */
+       write_sysreg(0, pmselr_el0);
        write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
        write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
        __activate_traps_arch()();

