4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier <marc.zyng...@arm.com>

Commit 44a497abd621a71c645f06d3d545ae2f46448830 upstream.

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way the compiler can guarantee that such an access is always
PC-relative.

So let's bite the bullet and provide our own accessor.
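
To illustrate the point (a sketch, not part of the upstream commit
message): both sequences below are legitimate ways for the toolchain
to materialize &kvm_vgic_global_state, but only the first remains
correct once the code executes from the HYP VA range, since it follows
whatever mapping the code is actually running from:

	/* PC-relative generation: valid at both kernel and HYP VAs */
	adrp	x0, kvm_vgic_global_state
	add	x0, x0, :lo12:kvm_vgic_global_state

	/* Literal-pool load (assembler pseudo-op shown for brevity):
	 * yields the absolute kernel VA, which is wrong at HYP */
	ldr	x0, =kvm_vgic_global_state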

Acked-by: Catalin Marinas <catalin.mari...@arm.com>
Reviewed-by: James Morse <james.mo...@arm.com>
Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>
---
 arch/arm/include/asm/kvm_mmu.h   |    7 +++++++
 arch/arm64/include/asm/kvm_mmu.h |   20 ++++++++++++++++++++
 virt/kvm/arm/hyp/vgic-v2-sr.c    |    2 +-
 3 files changed, 28 insertions(+), 1 deletion(-)

--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -28,6 +28,13 @@
  */
 #define kern_hyp_va(kva)       (kva)
 
+/* Contrary to arm64, there is no need to generate a PC-relative address */
+#define hyp_symbol_addr(s)                                             \
+       ({                                                              \
+               typeof(s) *addr = &(s);                                 \
+               addr;                                                   \
+       })
+
 /*
 * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
  */
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -131,6 +131,26 @@ static inline unsigned long __kern_hyp_v
 #define kern_hyp_va(v)         ((typeof(v))(__kern_hyp_va((unsigned long)(v))))
 
 /*
+ * Obtain the PC-relative address of a kernel symbol
+ * s: symbol
+ *
+ * The goal of this macro is to return a symbol's address based on a
+ * PC-relative computation, as opposed to loading the VA from a
+ * constant pool or something similar. This works well for HYP, as an
+ * absolute VA is guaranteed to be wrong. Only use this if trying to
+ * obtain the address of a symbol (i.e. not something you obtained by
+ * following a pointer).
+ */
+#define hyp_symbol_addr(s)                                             \
+       ({                                                              \
+               typeof(s) *addr;                                        \
+               asm("adrp       %0, %1\n"                               \
+                   "add        %0, %0, :lo12:%1\n"                     \
+                   : "=r" (addr) : "S" (&s));                          \
+               addr;                                                   \
+       })
+
+/*
  * We currently only support a 40bit IPA.
  */
 #define KVM_PHYS_SHIFT (40)
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -139,7 +139,7 @@ int __hyp_text __vgic_v2_perform_cpuif_a
                return -1;
 
        rd = kvm_vcpu_dabt_get_rd(vcpu);
-       addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
+       addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
        addr += fault_ipa - vgic->vgic_cpu_base;
 
        if (kvm_vcpu_dabt_iswrite(vcpu)) {
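
A note on the final hunk: hyp_symbol_addr() yields a HYP-reachable
pointer to the global itself, while the vcpu_base_va value read
through it still needs conversion, hence the remaining outer
kern_hyp_va().

For reference, a worked sketch of what the adrp/add pair in
hyp_symbol_addr() computes (addresses invented for illustration):

	/* adrp writes the 4KiB page base of the target, computed
	 * relative to the current PC; the :lo12: add supplies the
	 * low 12 bits.
	 *
	 * PC   = 0xffff000008123456
	 * &sym = 0xffff000008987654
	 */
	adrp	x0, sym			/* x0 = 0xffff000008987000 */
	add	x0, x0, :lo12:sym	/* x0 = 0xffff000008987654 */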

