Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
> > > +	/* VTCR_EL2 value for this VM */
> > > +	u64    vtcr;
> >
> > VTCR seems quite strongly tied to the MMU config. Is it not controlled
> > independently for the nested MMUs and so remains in this struct?
>
> This particular instance of VTCR_EL2 is the host's version. Which
> means it describes the virtual HW for the EL1 guest. It constrains,
> among other things, the number of IPA bits for the guest, and is
> configured by the VMM.
>
> Once you start nesting, each vcpu has its own VTCR_EL2 which is still
> constrained by the main one (no nested guest can have a T0SZ bigger
> than the value imposed by userspace for this guest as a whole).
>
> Does it make sense?

It does up to my ignorance of the spec in this regard.

Similar to James's question, should `vtcr` live inside the mmu struct,
with the top-level `kvm::mmu` field containing the host's version and
the nested mmus containing the nested version of vtcr to be applied to
the vCPU? I didn't notice there being a vtcr for the nested version in
the ~90-patch series, so maybe that just isn't something that needs
thinking about?
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
Re: [PATCH 06/26] arm64: Add level-hinted TLB invalidation helper
> +#define __tlbi_level(op, addr, level)				\
> +	do {							\
> +		u64 arg = addr;					\
> +								\
> +		if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) && \
> +		    level) {					\
> +			u64 ttl = level;			\
> +								\
> +			switch (PAGE_SIZE) {			\
> +			case SZ_4K:				\
> +				ttl |= 1 << 2;			\
> +				break;				\
> +			case SZ_16K:				\
> +				ttl |= 2 << 2;			\
> +				break;				\
> +			case SZ_64K:				\
> +				ttl |= 3 << 2;			\
> +				break;				\
> +			}					\
> +								\
> +			arg &= ~TLBI_TTL_MASK;			\
> +			arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);	\

Despite the spec saying both tables apply to TLB maintenance
instructions that do not apply to a range of addresses, I think it only
means the 4-bit version (bug report to Arm, or I'm on the wrong spec).
This is consistent with Table D5-53, and the macro takes a single
address argument to make misuse with range-based tlbi less likely.

It relies on the caller to get the level right, and getting it wrong
could be pretty bad as the spec says all bets are off in that case. Is
it worth adding a check of the level against the address (seems a bit
involved), or that it is just 2 bits, or adding a short doc comment to
explain it? (Looks like we get some constants for the levels in a later
patch that could be referenced with some form of time travel.)

> +		}						\
> +								\
> +		__tlbi(op,  arg);				\

cosmetic nit: double space in here
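For reference, the 4-bit TTL hint the macro builds can be modelled as
standalone C. This is an illustrative sketch only, not kernel code: the
helper name and the bare constants stand in for the kernel's
PAGE_SIZE/SZ_* definitions and the FIELD_PREP machinery.

```c
#include <stdint.h>

/*
 * ARMv8.4-TTL hint: bits [3:2] encode the translation granule
 * (1 = 4K, 2 = 16K, 3 = 64K) and bits [1:0] the level of the
 * entry being invalidated. A value of 0 means "no hint".
 */
static uint64_t ttl_hint(uint64_t page_size, unsigned int level)
{
	uint64_t granule;

	switch (page_size) {
	case 4096:  granule = 1; break;
	case 16384: granule = 2; break;
	case 65536: granule = 3; break;
	default:    return 0; /* unknown granule: provide no hint */
	}

	/* TTL is a 4-bit field: granule in [3:2], level in [1:0] */
	return (granule << 2) | (level & 3);
}
```

As in the macro, the caller is trusted to pass the correct level; the
sketch only validates the granule, not the level/address pairing.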
Re: [PATCH 05/26] arm64: Document SW reserved PTE/PMD bits in Stage-2 descriptors
On Wed, Apr 22, 2020 at 01:00:29PM +0100, Marc Zyngier wrote:
> Advertise bits [58:55] as reserved for SW in the S2 descriptors.
>
> Signed-off-by: Marc Zyngier
> ---
>  arch/arm64/include/asm/pgtable-hwdef.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index 6bf5e650da788..7eab0d23cdb52 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -177,10 +177,12 @@
>  #define PTE_S2_RDONLY		(_AT(pteval_t, 1) << 6)   /* HAP[2:1] */
>  #define PTE_S2_RDWR		(_AT(pteval_t, 3) << 6)   /* HAP[2:1] */
>  #define PTE_S2_XN		(_AT(pteval_t, 2) << 53)  /* XN[1:0] */
> +#define PTE_S2_SW_RESVD		(_AT(pteval_t, 15) << 55) /* Reserved for SW */
>
>  #define PMD_S2_RDONLY		(_AT(pmdval_t, 1) << 6)   /* HAP[2:1] */
>  #define PMD_S2_RDWR		(_AT(pmdval_t, 3) << 6)   /* HAP[2:1] */
>  #define PMD_S2_XN		(_AT(pmdval_t, 2) << 53)  /* XN[1:0] */
> +#define PMD_S2_SW_RESVD		(_AT(pmdval_t, 15) << 55) /* Reserved for SW */
>
>  #define PUD_S2_RDONLY		(_AT(pudval_t, 1) << 6)   /* HAP[2:1] */
>  #define PUD_S2_RDWR		(_AT(pudval_t, 3) << 6)   /* HAP[2:1] */
> --
> 2.26.1

This is consistent with "Attribute fields in stage 1 VMSAv8-64 Block
and Page descriptors"

Reviewed-by: Andrew Scull
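The new masks select a contiguous 4-bit field. A standalone sketch of
packing and unpacking a software value in bits [58:55] — with
hypothetical helper names and plain uint64_t in place of the kernel's
pteval_t and _AT() macro — looks like this:

```c
#include <stdint.h>

/* Bits [58:55] of a stage-2 descriptor are reserved for software use. */
#define S2_SW_RESVD_SHIFT 55
#define S2_SW_RESVD_MASK  (UINT64_C(0xf) << S2_SW_RESVD_SHIFT)

/* Pack a 4-bit software value into a descriptor (hypothetical helper). */
static uint64_t s2_set_sw_bits(uint64_t desc, uint64_t sw)
{
	return (desc & ~S2_SW_RESVD_MASK) |
	       ((sw & 0xf) << S2_SW_RESVD_SHIFT);
}

/* Read the software field back out (hypothetical helper). */
static uint64_t s2_get_sw_bits(uint64_t desc)
{
	return (desc & S2_SW_RESVD_MASK) >> S2_SW_RESVD_SHIFT;
}
```

Note that `_AT(pteval_t, 15) << 55` in the patch builds exactly the
same 0xf-at-bit-55 mask as `S2_SW_RESVD_MASK` above.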
[PATCH v5 3/4] KVM: arm64: Change CONFIG_KVM to a menuconfig entry
From: Will Deacon

Changing CONFIG_KVM to be a 'menuconfig' entry in Kconfig means that we
can straightforwardly enumerate optional features, such as the virtual
PMU device, as dependent options.

Signed-off-by: Will Deacon
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/Kconfig | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index d2cf4f099454..f1c1f981482c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -3,7 +3,6 @@
 #
 # KVM configuration
 #
-source "virt/kvm/Kconfig"
 source "virt/lib/Kconfig"

 menuconfig VIRTUALIZATION
@@ -18,7 +17,7 @@ menuconfig VIRTUALIZATION

 if VIRTUALIZATION

-config KVM
+menuconfig KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	depends on OF
 	# for TASKSTATS/TASK_DELAY_ACCT:
@@ -33,7 +32,6 @@ config KVM
 	select KVM_VFIO
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
-	select KVM_ARM_PMU if HW_PERF_EVENTS
 	select HAVE_KVM_MSI
 	select HAVE_KVM_IRQCHIP
 	select HAVE_KVM_IRQ_ROUTING
@@ -47,13 +45,21 @@ config KVM

 	  If unsure, say N.

+if KVM
+
+source "virt/kvm/Kconfig"
+
 config KVM_ARM_PMU
-	bool
+	bool "Virtual Performance Monitoring Unit (PMU) support"
+	depends on HW_PERF_EVENTS
+	default y
 	---help---
 	  Adds support for a virtual Performance Monitoring Unit (PMU) in
 	  virtual machines.

 config KVM_INDIRECT_VECTORS
-	def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
+	def_bool HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS
+
+endif # KVM

 endif # VIRTUALIZATION
--
2.26.2.526.g744177e7f7-goog
[PATCH v5 1/4] KVM: arm64: Kill off CONFIG_KVM_ARM_HOST
From: Will Deacon

CONFIG_KVM_ARM_HOST is just a proxy for CONFIG_KVM, so remove it in
favour of the latter.

Signed-off-by: Will Deacon
Signed-off-by: Fuad Tabba
---
 arch/arm64/kernel/asm-offsets.c |  2 +-
 arch/arm64/kernel/cpu_errata.c  |  2 +-
 arch/arm64/kernel/smp.c         |  2 +-
 arch/arm64/kvm/Kconfig          |  6
 arch/arm64/kvm/Makefile         | 52 -
 arch/arm64/kvm/hyp/Makefile     | 22 +++---
 6 files changed, 40 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9981a0a5a87f..a27e0cd731e9 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -96,7 +96,7 @@ int main(void)
   DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key));
 #endif
   BLANK();
-#ifdef CONFIG_KVM_ARM_HOST
+#ifdef CONFIG_KVM
   DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS, offsetof(struct kvm_vcpu, arch.workaround_flags));
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index df56d2295d16..a102321fc8a2 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -234,7 +234,7 @@ static int detect_harden_bp_fw(void)
 		smccc_end = NULL;
 		break;

-#if IS_ENABLED(CONFIG_KVM_ARM_HOST)
+#if IS_ENABLED(CONFIG_KVM)
 	case SMCCC_CONDUIT_SMC:
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 061f60fe452f..0a3045d9f33f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -430,7 +430,7 @@ static void __init hyp_mode_check(void)
 			   "CPU: CPUs started in inconsistent modes");
 	else
 		pr_info("CPU: All CPU(s) started at EL1\n");
-	if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
+	if (IS_ENABLED(CONFIG_KVM))
 		kvm_compute_layout();
 }
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 449386d76441..ce724e526689 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -28,7 +28,6 @@ config KVM
 	select HAVE_KVM_CPU_RELAX_INTERCEPT
 	select HAVE_KVM_ARCH_TLB_FLUSH_ALL
 	select KVM_MMIO
-	select KVM_ARM_HOST
 	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
 	select SRCU
 	select KVM_VFIO
@@ -50,11 +49,6 @@ config KVM

 	  If unsure, say N.

-config KVM_ARM_HOST
-	bool
-	---help---
-	  Provides host support for ARM processors.
-
 config KVM_ARM_PMU
 	bool
 	---help---
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 7a3768538343..419696e615b3 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -7,33 +7,33 @@ ccflags-y += -I $(srctree)/$(src)

 KVM=../../../virt/kvm

-obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
-obj-$(CONFIG_KVM_ARM_HOST) += hyp/
+obj-$(CONFIG_KVM) += kvm.o
+obj-$(CONFIG_KVM) += hyp/

-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
-kvm-$(CONFIG_KVM_ARM_HOST) += arm.o mmu.o mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += psci.o perf.o
-kvm-$(CONFIG_KVM_ARM_HOST) += hypercalls.o
-kvm-$(CONFIG_KVM_ARM_HOST) += pvtime.o
+kvm-$(CONFIG_KVM) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-$(CONFIG_KVM) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
+kvm-$(CONFIG_KVM) += arm.o mmu.o mmio.o
+kvm-$(CONFIG_KVM) += psci.o perf.o
+kvm-$(CONFIG_KVM) += hypercalls.o
+kvm-$(CONFIG_KVM) += pvtime.o

-kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
-kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
-kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o fpsimd.o pmu.o
-kvm-$(CONFIG_KVM_ARM_HOST) += aarch32.o
-kvm-$(CONFIG_KVM_ARM_HOST) += arch_timer.o
+kvm-$(CONFIG_KVM) += inject_fault.o regmap.o va_layout.o
+kvm-$(CONFIG_KVM) += hyp.o hyp-init.o handle_exit.o
+kvm-$(CONFIG_KVM) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
+kvm-$(CONFIG_KVM) += vgic-sys-reg-v3.o fpsimd.o pmu.o
+kvm-$(CONFIG_KVM) += aarch32.o
+kvm-$(CONFIG_KVM) += arch_timer.o
 kvm-$(CONFIG_KVM_ARM_PMU) += pmu-emul.o

-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-init.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-irqfd.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v2.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v3.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v4.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio-v2.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio-v3.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-kvm-device.o
-kvm-$(CONFIG_KVM_ARM_HOST) +=
[PATCH v5 0/4] KVM: arm64: Tidy up arch Kconfig and Makefiles
Hi,

This small patch series tidies up the arm64 KVM build system by
rejigging config options, removing some redundant help text, and
consolidating some of the Makefile rules. The changes are cosmetic, but
it seemed worthwhile to send this out for consideration.

This series is a refresh on top of 5.7-rc1. It depends on Marc's
kvm-arm64/welcome-home branch [1] plus the fix from Will [2].

Changes since v4 [3]:
* Restore the KVM_ARM_PMU conditional for pmu-emul.o

Cheers,
/fuad

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/welcome-home
[2] https://lists.cs.columbia.edu/pipermail/kvmarm/2020-April/040336.html
[3] https://lists.cs.columbia.edu/pipermail/kvmarm/2020-April/040346.html

Fuad Tabba (1):
  KVM: arm64: Clean up kvm makefiles

Will Deacon (3):
  KVM: arm64: Kill off CONFIG_KVM_ARM_HOST
  KVM: arm64: Update help text
  KVM: arm64: Change CONFIG_KVM to a menuconfig entry

 arch/arm64/kernel/asm-offsets.c |  2 +-
 arch/arm64/kernel/cpu_errata.c  |  2 +-
 arch/arm64/kernel/smp.c         |  2 +-
 arch/arm64/kvm/Kconfig          | 22 -
 arch/arm64/kvm/Makefile         | 42 -
 arch/arm64/kvm/hyp/Makefile     | 15
 6 files changed, 32 insertions(+), 53 deletions(-)

base-commit: 3dd059fcb44bae19f503377d14d7afff0a1ea229
--
2.26.2.526.g744177e7f7-goog
[PATCH v5 4/4] KVM: arm64: Clean up kvm makefiles
Consolidate references to the CONFIG_KVM configuration item so that
they cover entire directories rather than individual lines.

Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/Makefile     | 38 +
 arch/arm64/kvm/hyp/Makefile | 15 ---
 2 files changed, 17 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 419696e615b3..8d3d9513cbfe 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -10,30 +10,18 @@ KVM=../../../virt/kvm
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(CONFIG_KVM) += hyp/

-kvm-$(CONFIG_KVM) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
-kvm-$(CONFIG_KVM) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
-kvm-$(CONFIG_KVM) += arm.o mmu.o mmio.o
-kvm-$(CONFIG_KVM) += psci.o perf.o
-kvm-$(CONFIG_KVM) += hypercalls.o
-kvm-$(CONFIG_KVM) += pvtime.o
+kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
+	 $(KVM)/vfio.o $(KVM)/irqchip.o \
+	 arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
+	 inject_fault.o regmap.o va_layout.o hyp.o hyp-init.o handle_exit.o \
+	 guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o \
+	 vgic-sys-reg-v3.o fpsimd.o pmu.o \
+	 aarch32.o arch_timer.o \
+	 vgic/vgic.o vgic/vgic-init.o \
+	 vgic/vgic-irqfd.o vgic/vgic-v2.o \
+	 vgic/vgic-v3.o vgic/vgic-v4.o \
+	 vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
+	 vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
+	 vgic/vgic-its.o vgic/vgic-debug.o

-kvm-$(CONFIG_KVM) += inject_fault.o regmap.o va_layout.o
-kvm-$(CONFIG_KVM) += hyp.o hyp-init.o handle_exit.o
-kvm-$(CONFIG_KVM) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
-kvm-$(CONFIG_KVM) += vgic-sys-reg-v3.o fpsimd.o pmu.o
-kvm-$(CONFIG_KVM) += aarch32.o
-kvm-$(CONFIG_KVM) += arch_timer.o
 kvm-$(CONFIG_KVM_ARM_PMU) += pmu-emul.o
-
-kvm-$(CONFIG_KVM) += vgic/vgic.o
-kvm-$(CONFIG_KVM) += vgic/vgic-init.o
-kvm-$(CONFIG_KVM) += vgic/vgic-irqfd.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v2.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v3.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v4.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio-v2.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio-v3.o
-kvm-$(CONFIG_KVM) += vgic/vgic-kvm-device.o
-kvm-$(CONFIG_KVM) += vgic/vgic-its.o
-kvm-$(CONFIG_KVM) += vgic/vgic-debug.o
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8229e47ba870..8c9880783839 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -6,17 +6,10 @@
 ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
 	     $(DISABLE_STACKLEAK_PLUGIN)

-obj-$(CONFIG_KVM) += vgic-v3-sr.o
-obj-$(CONFIG_KVM) += timer-sr.o
-obj-$(CONFIG_KVM) += aarch32.o
-obj-$(CONFIG_KVM) += vgic-v2-cpuif-proxy.o
-obj-$(CONFIG_KVM) += sysreg-sr.o
-obj-$(CONFIG_KVM) += debug-sr.o
-obj-$(CONFIG_KVM) += entry.o
-obj-$(CONFIG_KVM) += switch.o
-obj-$(CONFIG_KVM) += fpsimd.o
-obj-$(CONFIG_KVM) += tlb.o
-obj-$(CONFIG_KVM) += hyp-entry.o
+obj-$(CONFIG_KVM) += hyp.o
+
+hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
+	 debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o

 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
--
2.26.2.526.g744177e7f7-goog
[PATCH v5 2/4] KVM: arm64: Update help text
From: Will Deacon

arm64 KVM supports 16k pages since 02e0b7600f83 ("arm64: kvm: Add
support for 16K pages"), so update the Kconfig help text accordingly.

Signed-off-by: Will Deacon
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/Kconfig | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ce724e526689..d2cf4f099454 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -44,8 +44,6 @@ config KVM
 	select TASK_DELAY_ACCT
 	---help---
 	  Support hosting virtualized guest machines.
-	  We don't support KVM with 16K page tables yet, due to the multiple
-	  levels of fake page tables.

 	  If unsure, say N.
--
2.26.2.526.g744177e7f7-goog
Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
Having a go at reviewing. Might turn out to be more useful as a
learning exercise for me rather than useful feedback, but we've got to
start somewhere...

> -struct kvm_arch {
> +struct kvm_s2_mmu {
> 	struct kvm_vmid vmid;
>
> -	/* stage2 entry level table */
> -	pgd_t *pgd;
> -	phys_addr_t pgd_phys;
> -
> -	/* VTCR_EL2 value for this VM */
> -	u64    vtcr;
> +	/*
> +	 * stage2 entry level table
> +	 *
> +	 * Two kvm_s2_mmu structures in the same VM can point to the same pgd
> +	 * here. This happens when running a non-VHE guest hypervisor which
> +	 * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> +	 * with vHCR_EL2.VM == 0.
> +	 */
> +	pgd_t *pgd;
> +	phys_addr_t pgd_phys;
>
> 	/* The last vcpu id that ran on each physical CPU */
> 	int __percpu *last_vcpu_ran;
>
> +	struct kvm *kvm;
> +};
> +
> +struct kvm_arch {
> +	struct kvm_s2_mmu mmu;
> +
> +	/* VTCR_EL2 value for this VM */
> +	u64    vtcr;

VTCR seems quite strongly tied to the MMU config. Is it not controlled
independently for the nested MMUs and so remains in this struct?

> -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
> +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, pmd_t *pmd)

How strictly is the long line style rule enforced? checkpatch has 16
such warnings on this patch.

> -static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pudp)
> +static void stage2_dissolve_pud(struct kvm_s2_mmu *mmu, phys_addr_t addr, pud_t *pudp)
> {
> +	struct kvm *kvm __maybe_unused = mmu->kvm;
> +
> 	if (!stage2_pud_huge(kvm, *pudp))
> 		return;

There are a couple of places with `__maybe_unused` on variables that
are then used soon after. Can they be dropped in these cases so as not
to hide legitimate warnings?
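For context on the __maybe_unused point: the attribute tells the
compiler not to warn if the variable ends up unused, which also means
it stays silent if a later change deletes the only real use. A minimal
userspace sketch of the effect, using the GCC/Clang `unused` attribute
(the kernel defines __maybe_unused the same way) and a hypothetical
function standing in for the real code:

```c
#ifndef __maybe_unused
#define __maybe_unused __attribute__((unused))
#endif

/*
 * 'kvm' is annotated __maybe_unused, so no -Wunused-variable warning is
 * emitted even if every use below is later removed -- which is exactly
 * the "hidden legitimate warning" concern raised above.
 */
static int dissolve(const int *mmu)
{
	int kvm __maybe_unused = *mmu; /* stand-in for mmu->kvm */

	return kvm + 1; /* the variable is in fact used here */
}
```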
Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
On Tue, 05 May 2020 18:23:51 +0100, Andrew Scull wrote:
>
> > > > +	/* VTCR_EL2 value for this VM */
> > > > +	u64    vtcr;
> > >
> > > VTCR seems quite strongly tied to the MMU config. Is it not controlled
> > > independently for the nested MMUs and so remains in this struct?
> >
> > This particular instance of VTCR_EL2 is the host's version. Which
> > means it describes the virtual HW for the EL1 guest. It constrains,
> > among other things, the number of IPA bits for the guest, and is
> > configured by the VMM.
> >
> > Once you start nesting, each vcpu has its own VTCR_EL2 which is still
> > constrained by the main one (no nested guest can have a T0SZ bigger
> > than the value imposed by userspace for this guest as a whole).
> >
> > Does it make sense?
>
> It does up to my ignorance of the spec in this regard.
>
> Similar to James's question, should `vtcr` live inside the mmu struct,
> with the top-level `kvm::mmu` field containing the host's version and
> the nested mmus containing the nested version of vtcr to be applied to
> the vCPU? I didn't notice there being a vtcr for the nested version in
> the ~90-patch series, so maybe that just isn't something that needs
> thinking about?

They serve two very different purposes. One defines the virtual HW, the
other one is the view that a guest hypervisor gives to its own guests.

The latter is also per-vcpu, and not per-VM (yes, NV implies the
"de-facto CnP", for those who remember an intense whiteboard session).
It thus lives in the system register file (being private to each vcpu).
Another reason is that the HW can directly access the in-memory version
of VTCR_EL2 when ARMv8.4-NV is present.

Thanks,

M.

--
Jazz is not dead, it just smells funny.
Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
Hi James,

On Tue, 05 May 2020 17:03:15 +0100, James Morse wrote:
>
> Hi Marc,
>
> On 22/04/2020 13:00, Marc Zyngier wrote:
> > From: Christoffer Dall
> >
> > As we are about to reuse our stage 2 page table manipulation code for
> > shadow stage 2 page tables in the context of nested virtualization, we
> > are going to manage multiple stage 2 page tables for a single VM.
> >
> > This requires some pretty invasive changes to our data structures,
> > which moves the vmid and pgd pointers into a separate structure and
> > change pretty much all of our mmu code to operate on this structure
> > instead.
> >
> > The new structure is called struct kvm_s2_mmu.
> >
> > There is no intended functional change by this patch alone.
>
> It's not obvious to me that VTCR_EL2.T0SZ is a per-vm thing; today
> the size of the IPA space comes from the VMM, it's not a
> hardware/compile-time property. Where does the vEL2's T0SZ go?
> ... but using this for nested guests would 'only' cause a
> translation fault, it would still need handling in the emulation
> code. So making it per-vm should be simpler.

My reasoning is that this VTCR defines the virtual HW, and the guest's
own VTCR_EL2 is just another guest system register. It is the role of
the NV code to compose the two in a way that makes sense (delivering
translation faults if the NV guest's S2 output range doesn't fit in the
host's view of the VM IPA range).

> But accessing VTCR is why the stage2_dissolve_p?d() stuff still
> needs the kvm pointer, hence the backreference... it might be neater
> to push the vtcr properties into kvm_s2_mmu; that way you could drop
> the kvm backref, and only things that take vm-wide locks would need
> the kvm pointer. But I don't think it matters.

That's an interesting consideration. I'll have a look.

> I think I get it. I can't see anything that should be the other
> vm/vcpu pointer.
>
> Reviewed-by: James Morse

Thanks!

> Some boring fiddly stuff:
>
> [...]
>
> > @@ -125,24 +123,24 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm,
> > 	}
> > }
> >
> > -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm,
> > +static void __hyp_text __tlb_switch_to_host(struct kvm_s2_mmu *mmu,
> > 					    struct tlb_inv_context *cxt)
> > {
> > 	if (has_vhe())
> > -		__tlb_switch_to_host_vhe(kvm, cxt);
> > +		__tlb_switch_to_host_vhe(cxt);
> > 	else
> > -		__tlb_switch_to_host_nvhe(kvm, cxt);
> > +		__tlb_switch_to_host_nvhe(cxt);
> > }
>
> What does __tlb_switch_to_host() need the kvm_s2_mmu for?

Not much. Obviously mechanical conversion of kvm->kvm_s2_mmu, and not
finishing the job. I'll fix that.

> [...]
>
> > void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
> > {
> > -	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
> > +	struct kvm_s2_mmu *mmu = kern_hyp_va(kern_hyp_va(vcpu)->arch.hw_mmu);
> > 	struct tlb_inv_context cxt;
> >
> > 	/* Switch to requested VMID */
> > -	__tlb_switch_to_guest(kvm, &cxt);
> > +	__tlb_switch_to_guest(mmu, &cxt);
> >
> > 	__tlbi(vmalle1);
> > 	dsb(nsh);
> > 	isb();
> >
> > -	__tlb_switch_to_host(kvm, &cxt);
> > +	__tlb_switch_to_host(mmu, &cxt);
> > }
>
> Does this need the vcpu in the future?
> Its the odd one out, the other tlb functions here take the s2_mmu, or nothing.
> We only use the s2_mmu here.

I think this was done as a way to not impact the 32bit code (rest in
peace). Definitely a candidate for immediate cleanup.

> [...]
>
> > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > index e3b9ee268823b..2f99749048285 100644
> > --- a/virt/kvm/arm/mmu.c
> > +++ b/virt/kvm/arm/mmu.c
> > @@ -96,31 +96,33 @@ static bool kvm_is_device_pfn(unsigned long pfn)
> >  *
> >  * Function clears a PMD entry, flushes addr 1st and 2nd stage TLBs.
> >  */
> > -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
> > +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, pmd_t *pmd)
>
> The comment above this function still has '@kvm: pointer to kvm
> structure.'
>
> [...]
>
> > @@ -331,8 +339,9 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd,
> >  * destroying the VM), otherwise another faulting VCPU may come in and mess
> >  * with things behind our backs.
> >  */
> > -static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
> > +static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
>
> The comment above this function still has '@kvm: The VM pointer'
>
> [...]
>
> > -static void stage2_flush_memslot(struct kvm *kvm,
> > +static void stage2_flush_memslot(struct kvm_s2_mmu *mmu,
> > 				 struct kvm_memory_slot *memslot)
> > {
>
> Wouldn't something manipulating a memslot have to mess with a set of
> kvm_s2_mmu once this is all assembled? stage2_unmap_memslot() takes
Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
Hi Andrew,

On Tue, 05 May 2020 16:26:48 +0100, Andrew Scull wrote:
>
> Having a go at reviewing. Might turn out to be more useful as a
> learning exercise for me rather than useful feedback but we've got to
> start somewhere..

Thanks for making the effort. Asking questions is never a pointless
exercise, as it usually means that something isn't as crystal clear as
the author expects... ;-)

> > -struct kvm_arch {
> > +struct kvm_s2_mmu {
> > 	struct kvm_vmid vmid;
> >
> > -	/* stage2 entry level table */
> > -	pgd_t *pgd;
> > -	phys_addr_t pgd_phys;
> > -
> > -	/* VTCR_EL2 value for this VM */
> > -	u64    vtcr;
> > +	/*
> > +	 * stage2 entry level table
> > +	 *
> > +	 * Two kvm_s2_mmu structures in the same VM can point to the same pgd
> > +	 * here. This happens when running a non-VHE guest hypervisor which
> > +	 * uses the canonical stage 2 page table for both vEL2 and for vEL1/0
> > +	 * with vHCR_EL2.VM == 0.
> > +	 */
> > +	pgd_t *pgd;
> > +	phys_addr_t pgd_phys;
> >
> > 	/* The last vcpu id that ran on each physical CPU */
> > 	int __percpu *last_vcpu_ran;
> >
> > +	struct kvm *kvm;
> > +};
> > +
> > +struct kvm_arch {
> > +	struct kvm_s2_mmu mmu;
> > +
> > +	/* VTCR_EL2 value for this VM */
> > +	u64    vtcr;
>
> VTCR seems quite strongly tied to the MMU config. Is it not controlled
> independently for the nested MMUs and so remains in this struct?

This particular instance of VTCR_EL2 is the host's version. Which
means it describes the virtual HW for the EL1 guest. It constrains,
among other things, the number of IPA bits for the guest, and is
configured by the VMM.

Once you start nesting, each vcpu has its own VTCR_EL2 which is still
constrained by the main one (no nested guest can have a T0SZ bigger
than the value imposed by userspace for this guest as a whole).

Does it make sense?

> > -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
> > +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, pmd_t *pmd)
>
> How strictly is the long line style rule enforced? checkpatch has 16
> such warnings on this patch.

It isn't enforced at all for KVM/arm. I am perfectly happy with longish
lines (I stupidly gave away my vt100 a very long time ago). In general,
checkpatch warnings are to be looked into (it sometimes brings
interesting stuff up), but this falls into the *cosmetic* department,
and I cannot be bothered.

> > -static void stage2_dissolve_pud(struct kvm *kvm, phys_addr_t addr, pud_t *pudp)
> > +static void stage2_dissolve_pud(struct kvm_s2_mmu *mmu, phys_addr_t addr, pud_t *pudp)
> > {
> > +	struct kvm *kvm __maybe_unused = mmu->kvm;
> > +
> > 	if (!stage2_pud_huge(kvm, *pudp))
> > 		return;
>
> There are a couple of places with `__maybe_unused` on variables that
> are then used soon after. Can they be dropped in these cases so as not
> to hide legitimate warnings?

Absolutely. I'll have a look.

Thanks for the review!

M.

--
Jazz is not dead, it just smells funny.
Re: [PATCH 03/26] KVM: arm64: Factor out stage 2 page table data from struct kvm
Hi Marc, On 22/04/2020 13:00, Marc Zyngier wrote: > From: Christoffer Dall > > As we are about to reuse our stage 2 page table manipulation code for > shadow stage 2 page tables in the context of nested virtualization, we > are going to manage multiple stage 2 page tables for a single VM. > > This requires some pretty invasive changes to our data structures, > which moves the vmid and pgd pointers into a separate structure and > change pretty much all of our mmu code to operate on this structure > instead. > > The new structure is called struct kvm_s2_mmu. > > There is no intended functional change by this patch alone. It's not obvious to me that VTCR_EL2.T0SZ is a per-vm thing, today the size of the IPA space comes from the VMM, its not a hardware/compile-time property. Where does the vEL2's T0SZ go? ... but using this for nested guests would 'only' cause a translation fault, it would still need handling in the emulation code. So making it per-vm should be simpler. But accessing VTCR is why the stage2_dissolve_p?d() stuff still needs the kvm pointer, hence the backreference... it might be neater to push the vtcr properties into kvm_s2_mmu that way you could drop the kvm backref, and only things that take vm-wide locks would need the kvm pointer. But I don't think it matters. I think I get it. I can't see anything that should be the other vm/vcpu pointer. Reviewed-by: James Morse Some boring fiddly stuff: [...] > @@ -125,24 +123,24 @@ static void __hyp_text __tlb_switch_to_host_nvhe(struct > kvm *kvm, > } > } > > -static void __hyp_text __tlb_switch_to_host(struct kvm *kvm, > +static void __hyp_text __tlb_switch_to_host(struct kvm_s2_mmu *mmu, > struct tlb_inv_context *cxt) > { > if (has_vhe()) > - __tlb_switch_to_host_vhe(kvm, cxt); > + __tlb_switch_to_host_vhe(cxt); > else > - __tlb_switch_to_host_nvhe(kvm, cxt); > + __tlb_switch_to_host_nvhe(cxt); > } What does __tlb_switch_to_host() need the kvm_s2_mmu for? [...] 
> void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu) > { > - struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm); > + struct kvm_s2_mmu *mmu = kern_hyp_va(kern_hyp_va(vcpu)->arch.hw_mmu); > struct tlb_inv_context cxt; > > /* Switch to requested VMID */ > - __tlb_switch_to_guest(kvm, ); > + __tlb_switch_to_guest(mmu, ); > > __tlbi(vmalle1); > dsb(nsh); > isb(); > > - __tlb_switch_to_host(kvm, ); > + __tlb_switch_to_host(mmu, ); > } Does this need the vcpu in the future? Its the odd one out, the other tlb functions here take the s2_mmu, or nothing. We only use the s2_mmu here. [...] > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c > index e3b9ee268823b..2f99749048285 100644 > --- a/virt/kvm/arm/mmu.c > +++ b/virt/kvm/arm/mmu.c > @@ -96,31 +96,33 @@ static bool kvm_is_device_pfn(unsigned long pfn) > * > * Function clears a PMD entry, flushes addr 1st and 2nd stage TLBs. > */ > -static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t > *pmd) > +static void stage2_dissolve_pmd(struct kvm_s2_mmu *mmu, phys_addr_t addr, > pmd_t *pmd) The comment above this function still has '@kvm:pointer to kvm structure.' [...] > @@ -331,8 +339,9 @@ static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd, > * destroying the VM), otherwise another faulting VCPU may come in and mess > * with things behind our backs. > */ > -static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size) > +static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, > u64 size) The comment above this function still has '@kvm: The VM pointer' [...] > -static void stage2_flush_memslot(struct kvm *kvm, > +static void stage2_flush_memslot(struct kvm_s2_mmu *mmu, >struct kvm_memory_slot *memslot) > { Wouldn't something manipulating a memslot have to mess with a set of kvm_s2_mmu once this is all assembled? stage2_unmap_memslot() takes struct kvm, it seems odd to pass one kvm_s2_mmu here. [...] 
> @@ -886,21 +898,23 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
> -int kvm_alloc_stage2_pgd(struct kvm *kvm)
> +int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
> {
> 	phys_addr_t pgd_phys;
> 	pgd_t *pgd;
> +	int cpu;
>
> -	if (kvm->arch.pgd != NULL) {
> +	if (mmu->pgd != NULL) {
> 		kvm_err("kvm_arch already initialized?\n");

Does this error message still make sense?

> 		return -EINVAL;
> 	}

[...]

> @@ -1439,9 +1467,10 @@ static void stage2_wp_ptes(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
>  * @addr:	range start address
>  * @end:	range end address
>  */
> -static void stage2_wp_pmds(struct kvm *kvm, pud_t *pud,
> +static void stage2_wp_pmds(struct kvm_s2_mmu *mmu, pud_t *pud,
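For readers following the refactor under review, the shape of the factored-out state can be sketched as a plain C structure. This is an illustrative reconstruction pieced together from the hunks quoted above, not the real kernel definition: the field types are stand-in typedefs, and whether `vtcr` belongs here (per-MMU) or stays in `struct kvm` (per-VM) is exactly the open question in this thread.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stub types standing in for the real kernel definitions. */
typedef uint64_t u64;
typedef u64 phys_addr_t;
typedef struct { u64 pgd; } pgd_t;
struct kvm;

/*
 * Illustrative sketch of the stage 2 MMU state factored out of
 * struct kvm. The kvm back-reference exists because vm-wide state
 * (such as VTCR, in the patch as posted) still lives in struct kvm.
 */
struct kvm_s2_mmu {
	u64 vmid;		/* stage 2 VMID for this MMU */
	pgd_t *pgd;		/* stage 2 page table root */
	phys_addr_t pgd_phys;	/* physical address of the root */
	struct kvm *kvm;	/* back-reference for vm-wide state */
};
```

For nested virtualization, a VM would then carry one such structure for its canonical stage 2 plus one per shadow stage 2, which is why per-MMU vs per-VM placement of VTCR matters.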
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 01:12:39PM +0100, Will Deacon wrote:
> On Tue, May 05, 2020 at 12:50:54PM +0100, Mark Rutland wrote:
> > On Tue, May 05, 2020 at 12:27:19PM +0100, Will Deacon wrote:
> > > On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> > > > On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> > > > > On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > > > > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > > > > index e5317a6367b6..c977449e02db 100644
> > > > > > --- a/arch/arm64/include/asm/sysreg.h
> > > > > > +++ b/arch/arm64/include/asm/sysreg.h
> > > > > > @@ -153,6 +153,7 @@
> > > > > >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> > > > > >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> > > > > >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > > > > > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
> > > > >
> > > > > nit: but please group these defines by name rather than encoding.
> > > >
> > > > So far we've *always* grouped these by encoding in this file, so can we
> > > > keep things that way for now? Otherwise we're inconsistent with both
> > > > schemes.
> > >
> > > Hmm, but it's really hard to read sorted that way and we'll end up with
> > > duplicate definitions like we had for some of the field offsets already.
> >
> > I appreciate that, and don't disagree that the current scheme is not
> > obvious.
> >
> > I just want to ensure that we don't make things less consistent, and if
> > we're going to change the scheme in order to make that easier, it should
> > be a separate patch. There'll be other changes like MMFR4_EL1, and we
> > should probably add a comment as to what the policy is either way (e.g.
> > if we're just grouping at the top level, or if that should be sorted
> > too).
>
> Ok, I added a comment below.

Thanks!

Acked-by: Mark Rutland

Mark.
> Will
>
> --->8
>
> commit be7ab6a6cdb0a6d7b10883094c2adf96f5d4e1e8
> Author: Will Deacon
> Date:   Tue May 5 13:08:02 2020 +0100
>
>     arm64: cpufeature: Group indexed system register definitions by name
>
>     Some system registers contain an index in the name (e.g. ID_MMFR_EL1)
>     and, while this index often follows the register encoding, newer additions
>     to the architecture are necessarily tacked on the end. Sorting these
>     registers by encoding therefore becomes a bit of a mess.
>
>     Group the indexed system register definitions by name so that it's easier to
>     read and will hopefully reduce the chance of us accidentally introducing
>     duplicate definitions in the future.
>
>     Signed-off-by: Will Deacon
>
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 2dd3f4ca9780..194684301df0 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -105,6 +105,10 @@
>  #define SYS_DC_CSW	sys_insn(1, 0, 7, 10, 2)
>  #define SYS_DC_CISW	sys_insn(1, 0, 7, 14, 2)
>
> +/*
> + * System registers, organised loosely by encoding but grouped together
> + * where the architected name contains an index. e.g. ID_MMFR_EL1.
> + */
>  #define SYS_OSDTRRX_EL1	sys_reg(2, 0, 0, 0, 2)
>  #define SYS_MDCCINT_EL1	sys_reg(2, 0, 0, 2, 0)
>  #define SYS_MDSCR_EL1	sys_reg(2, 0, 0, 2, 2)
> @@ -140,6 +144,7 @@
>  #define SYS_ID_MMFR1_EL1	sys_reg(3, 0, 0, 1, 5)
>  #define SYS_ID_MMFR2_EL1	sys_reg(3, 0, 0, 1, 6)
>  #define SYS_ID_MMFR3_EL1	sys_reg(3, 0, 0, 1, 7)
> +#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
>
>  #define SYS_ID_ISAR0_EL1	sys_reg(3, 0, 0, 2, 0)
>  #define SYS_ID_ISAR1_EL1	sys_reg(3, 0, 0, 2, 1)
> @@ -147,7 +152,6 @@
>  #define SYS_ID_ISAR3_EL1	sys_reg(3, 0, 0, 2, 3)
>  #define SYS_ID_ISAR4_EL1	sys_reg(3, 0, 0, 2, 4)
>  #define SYS_ID_ISAR5_EL1	sys_reg(3, 0, 0, 2, 5)
> -#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
>  #define SYS_ID_ISAR6_EL1	sys_reg(3, 0, 0, 2, 7)
>
>  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
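For readers unfamiliar with the `sys_reg()` macro being argued over: it packs a system register's (op0, op1, CRn, CRm, op2) encoding into one value, which is why "sorted by encoding" and "grouped by name" can diverge. The sketch below is a hedged reconstruction; the shift positions follow the arm64 header's layout as I understand it, so treat the exact constants as an assumption rather than a quote of the kernel source.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Assumed shift layout for the arm64 sys_reg() encoding:
 * op0 at bit 19, op1 at 16, CRn at 12, CRm at 8, op2 at 5.
 */
#define Op0_shift 19
#define Op1_shift 16
#define CRn_shift 12
#define CRm_shift 8
#define Op2_shift 5

#define sys_reg(op0, op1, crn, crm, op2)	\
	((uint32_t)(op0) << Op0_shift |		\
	 (uint32_t)(op1) << Op1_shift |		\
	 (uint32_t)(crn) << CRn_shift |		\
	 (uint32_t)(crm) << CRm_shift |		\
	 (uint32_t)(op2) << Op2_shift)
```

Under this encoding, ID_PFR2_EL1 (3, 0, 0, 3, 4) lands between the MVFR registers (3, 0, 0, 3, 0..2) and later entries, while ID_MMFR4_EL1 (3, 0, 0, 2, 6) falls in the middle of the ISAR block — exactly the sorting mess the patch above addresses.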
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 12:50:54PM +0100, Mark Rutland wrote:
> On Tue, May 05, 2020 at 12:27:19PM +0100, Will Deacon wrote:
> > On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> > > On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> > > > On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > > > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > > > index e5317a6367b6..c977449e02db 100644
> > > > > --- a/arch/arm64/include/asm/sysreg.h
> > > > > +++ b/arch/arm64/include/asm/sysreg.h
> > > > > @@ -153,6 +153,7 @@
> > > > >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> > > > >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> > > > >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > > > > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
> > > >
> > > > nit: but please group these defines by name rather than encoding.
> > >
> > > So far we've *always* grouped these by encoding in this file, so can we
> > > keep things that way for now? Otherwise we're inconsistent with both
> > > schemes.
> >
> > Hmm, but it's really hard to read sorted that way and we'll end up with
> > duplicate definitions like we had for some of the field offsets already.
>
> I appreciate that, and don't disagree that the current scheme is not
> obvious.
>
> I just want to ensure that we don't make things less consistent, and if
> we're going to change the scheme in order to make that easier, it should
> be a separate patch. There'll be other changes like MMFR4_EL1, and we
> should probably add a comment as to what the policy is either way (e.g.
> if we're just grouping at the top level, or if that should be sorted
> too).

Ok, I added a comment below.

Will

--->8

commit be7ab6a6cdb0a6d7b10883094c2adf96f5d4e1e8
Author: Will Deacon
Date:   Tue May 5 13:08:02 2020 +0100

    arm64: cpufeature: Group indexed system register definitions by name

    Some system registers contain an index in the name (e.g. ID_MMFR_EL1)
    and, while this index often follows the register encoding, newer additions
    to the architecture are necessarily tacked on the end. Sorting these
    registers by encoding therefore becomes a bit of a mess.

    Group the indexed system register definitions by name so that it's easier
    to read and will hopefully reduce the chance of us accidentally introducing
    duplicate definitions in the future.

    Signed-off-by: Will Deacon

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2dd3f4ca9780..194684301df0 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -105,6 +105,10 @@
 #define SYS_DC_CSW	sys_insn(1, 0, 7, 10, 2)
 #define SYS_DC_CISW	sys_insn(1, 0, 7, 14, 2)
 
+/*
+ * System registers, organised loosely by encoding but grouped together
+ * where the architected name contains an index. e.g. ID_MMFR_EL1.
+ */
 #define SYS_OSDTRRX_EL1	sys_reg(2, 0, 0, 0, 2)
 #define SYS_MDCCINT_EL1	sys_reg(2, 0, 0, 2, 0)
 #define SYS_MDSCR_EL1	sys_reg(2, 0, 0, 2, 2)
@@ -140,6 +144,7 @@
 #define SYS_ID_MMFR1_EL1	sys_reg(3, 0, 0, 1, 5)
 #define SYS_ID_MMFR2_EL1	sys_reg(3, 0, 0, 1, 6)
 #define SYS_ID_MMFR3_EL1	sys_reg(3, 0, 0, 1, 7)
+#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
 
 #define SYS_ID_ISAR0_EL1	sys_reg(3, 0, 0, 2, 0)
 #define SYS_ID_ISAR1_EL1	sys_reg(3, 0, 0, 2, 1)
@@ -147,7 +152,6 @@
 #define SYS_ID_ISAR3_EL1	sys_reg(3, 0, 0, 2, 3)
 #define SYS_ID_ISAR4_EL1	sys_reg(3, 0, 0, 2, 4)
 #define SYS_ID_ISAR5_EL1	sys_reg(3, 0, 0, 2, 5)
-#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
 #define SYS_ID_ISAR6_EL1	sys_reg(3, 0, 0, 2, 7)
 
 #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 12:27:19PM +0100, Will Deacon wrote:
> On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> > On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> > > On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > > > This adds basic building blocks required for ID_PFR2 CPU register which
> > > > provides information about the AArch32 programmers model which must be
> > > > interpreted along with ID_PFR0 and ID_PFR1 CPU registers. This is added
> > > > per ARM DDI 0487F.a specification.
> > > >
> > > > Cc: Catalin Marinas
> > > > Cc: Will Deacon
> > > > Cc: Marc Zyngier
> > > > Cc: Mark Rutland
> > > > Cc: James Morse
> > > > Cc: Suzuki K Poulose
> > > > Cc: kvmarm@lists.cs.columbia.edu
> > > > Cc: linux-arm-ker...@lists.infradead.org
> > > > Cc: linux-ker...@vger.kernel.org
> > > >
> > > > Suggested-by: Mark Rutland
> > > > Reviewed-by: Suzuki K Poulose
> > > > Signed-off-by: Anshuman Khandual
> > > > ---
> > > >  arch/arm64/include/asm/cpu.h    | 1 +
> > > >  arch/arm64/include/asm/sysreg.h | 4
> > > >  arch/arm64/kernel/cpufeature.c  | 11 +++
> > > >  arch/arm64/kernel/cpuinfo.c     | 1 +
> > > >  arch/arm64/kvm/sys_regs.c       | 2 +-
> > > >  5 files changed, 18 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> > > > index b4a40535a3d8..464e828a994d 100644
> > > > --- a/arch/arm64/include/asm/cpu.h
> > > > +++ b/arch/arm64/include/asm/cpu.h
> > > > @@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
> > > >  	u32 reg_id_mmfr3;
> > > >  	u32 reg_id_pfr0;
> > > >  	u32 reg_id_pfr1;
> > > > +	u32 reg_id_pfr2;
> > > >
> > > >  	u32 reg_mvfr0;
> > > >  	u32 reg_mvfr1;
> > > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > > index e5317a6367b6..c977449e02db 100644
> > > > --- a/arch/arm64/include/asm/sysreg.h
> > > > +++ b/arch/arm64/include/asm/sysreg.h
> > > > @@ -153,6 +153,7 @@
> > > >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> > > >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> > > >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > > > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
> > >
> > > nit: but please group these defines by name rather than encoding.
> >
> > So far we've *always* grouped these by encoding in this file, so can we
> > keep things that way for now? Otherwise we're inconsistent with both
> > schemes.
>
> Hmm, but it's really hard to read sorted that way and we'll end up with
> duplicate definitions like we had for some of the field offsets already.

I appreciate that, and don't disagree that the current scheme is not
obvious.

I just want to ensure that we don't make things less consistent, and if
we're going to change the scheme in order to make that easier, it should
be a separate patch. There'll be other changes like MMFR4_EL1, and we
should probably add a comment as to what the policy is either way (e.g.
if we're just grouping at the top level, or if that should be sorted
too).

Thanks,
Mark.

> The only ID register that seems to be out of place atm is MMFR4, which I
> can move (see below)
>
> Will
>
> --->8
>
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 2dd3f4ca9780..137201ea383b 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -140,6 +140,7 @@
>  #define SYS_ID_MMFR1_EL1	sys_reg(3, 0, 0, 1, 5)
>  #define SYS_ID_MMFR2_EL1	sys_reg(3, 0, 0, 1, 6)
>  #define SYS_ID_MMFR3_EL1	sys_reg(3, 0, 0, 1, 7)
> +#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
>
>  #define SYS_ID_ISAR0_EL1	sys_reg(3, 0, 0, 2, 0)
>  #define SYS_ID_ISAR1_EL1	sys_reg(3, 0, 0, 2, 1)
> @@ -147,7 +148,6 @@
>  #define SYS_ID_ISAR3_EL1	sys_reg(3, 0, 0, 2, 3)
>  #define SYS_ID_ISAR4_EL1	sys_reg(3, 0, 0, 2, 4)
>  #define SYS_ID_ISAR5_EL1	sys_reg(3, 0, 0, 2, 5)
> -#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
>  #define SYS_ID_ISAR6_EL1	sys_reg(3, 0, 0, 2, 7)
>
>  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> > On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > > This adds basic building blocks required for ID_PFR2 CPU register which
> > > provides information about the AArch32 programmers model which must be
> > > interpreted along with ID_PFR0 and ID_PFR1 CPU registers. This is added
> > > per ARM DDI 0487F.a specification.
> > >
> > > Cc: Catalin Marinas
> > > Cc: Will Deacon
> > > Cc: Marc Zyngier
> > > Cc: Mark Rutland
> > > Cc: James Morse
> > > Cc: Suzuki K Poulose
> > > Cc: kvmarm@lists.cs.columbia.edu
> > > Cc: linux-arm-ker...@lists.infradead.org
> > > Cc: linux-ker...@vger.kernel.org
> > >
> > > Suggested-by: Mark Rutland
> > > Reviewed-by: Suzuki K Poulose
> > > Signed-off-by: Anshuman Khandual
> > > ---
> > >  arch/arm64/include/asm/cpu.h    | 1 +
> > >  arch/arm64/include/asm/sysreg.h | 4
> > >  arch/arm64/kernel/cpufeature.c  | 11 +++
> > >  arch/arm64/kernel/cpuinfo.c     | 1 +
> > >  arch/arm64/kvm/sys_regs.c       | 2 +-
> > >  5 files changed, 18 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> > > index b4a40535a3d8..464e828a994d 100644
> > > --- a/arch/arm64/include/asm/cpu.h
> > > +++ b/arch/arm64/include/asm/cpu.h
> > > @@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
> > >  	u32 reg_id_mmfr3;
> > >  	u32 reg_id_pfr0;
> > >  	u32 reg_id_pfr1;
> > > +	u32 reg_id_pfr2;
> > >
> > >  	u32 reg_mvfr0;
> > >  	u32 reg_mvfr1;
> > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > index e5317a6367b6..c977449e02db 100644
> > > --- a/arch/arm64/include/asm/sysreg.h
> > > +++ b/arch/arm64/include/asm/sysreg.h
> > > @@ -153,6 +153,7 @@
> > >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> > >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> > >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
> >
> > nit: but please group these defines by name rather than encoding.
>
> So far we've *always* grouped these by encoding in this file, so can we
> keep things that way for now? Otherwise we're inconsistent with both
> schemes.

Hmm, but it's really hard to read sorted that way and we'll end up with
duplicate definitions like we had for some of the field offsets already.

The only ID register that seems to be out of place atm is MMFR4, which I
can move (see below)

Will

--->8

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2dd3f4ca9780..137201ea383b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -140,6 +140,7 @@
 #define SYS_ID_MMFR1_EL1	sys_reg(3, 0, 0, 1, 5)
 #define SYS_ID_MMFR2_EL1	sys_reg(3, 0, 0, 1, 6)
 #define SYS_ID_MMFR3_EL1	sys_reg(3, 0, 0, 1, 7)
+#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
 
 #define SYS_ID_ISAR0_EL1	sys_reg(3, 0, 0, 2, 0)
 #define SYS_ID_ISAR1_EL1	sys_reg(3, 0, 0, 2, 1)
@@ -147,7 +148,6 @@
 #define SYS_ID_ISAR3_EL1	sys_reg(3, 0, 0, 2, 3)
 #define SYS_ID_ISAR4_EL1	sys_reg(3, 0, 0, 2, 4)
 #define SYS_ID_ISAR5_EL1	sys_reg(3, 0, 0, 2, 5)
-#define SYS_ID_MMFR4_EL1	sys_reg(3, 0, 0, 2, 6)
 #define SYS_ID_ISAR6_EL1	sys_reg(3, 0, 0, 2, 7)
 
 #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 12:16:07PM +0100, Mark Rutland wrote:
> On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> > On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > > This adds basic building blocks required for ID_PFR2 CPU register which
> > > provides information about the AArch32 programmers model which must be
> > > interpreted along with ID_PFR0 and ID_PFR1 CPU registers. This is added
> > > per ARM DDI 0487F.a specification.
> > >
> > > Cc: Catalin Marinas
> > > Cc: Will Deacon
> > > Cc: Marc Zyngier
> > > Cc: Mark Rutland
> > > Cc: James Morse
> > > Cc: Suzuki K Poulose
> > > Cc: kvmarm@lists.cs.columbia.edu
> > > Cc: linux-arm-ker...@lists.infradead.org
> > > Cc: linux-ker...@vger.kernel.org
> > >
> > > Suggested-by: Mark Rutland
> > > Reviewed-by: Suzuki K Poulose
> > > Signed-off-by: Anshuman Khandual
> > > ---
> > >  arch/arm64/include/asm/cpu.h    | 1 +
> > >  arch/arm64/include/asm/sysreg.h | 4
> > >  arch/arm64/kernel/cpufeature.c  | 11 +++
> > >  arch/arm64/kernel/cpuinfo.c     | 1 +
> > >  arch/arm64/kvm/sys_regs.c       | 2 +-
> > >  5 files changed, 18 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> > > index b4a40535a3d8..464e828a994d 100644
> > > --- a/arch/arm64/include/asm/cpu.h
> > > +++ b/arch/arm64/include/asm/cpu.h
> > > @@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
> > >  	u32 reg_id_mmfr3;
> > >  	u32 reg_id_pfr0;
> > >  	u32 reg_id_pfr1;
> > > +	u32 reg_id_pfr2;
> > >
> > >  	u32 reg_mvfr0;
> > >  	u32 reg_mvfr1;
> > > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > > index e5317a6367b6..c977449e02db 100644
> > > --- a/arch/arm64/include/asm/sysreg.h
> > > +++ b/arch/arm64/include/asm/sysreg.h
> > > @@ -153,6 +153,7 @@
> > >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> > >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> > >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
> >
> > nit: but please group these defines by name rather than encoding.
>
> So far we've *always* grouped these by encoding in this file, so can we
> keep things that way for now? Otherwise we're inconsistent with both
> schemes.

Unless you just meant "please put a newline before this" to avoid
grouping without affecting ordering, in which case agreed!

Mark.
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Tue, May 05, 2020 at 12:12:41PM +0100, Will Deacon wrote:
> On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> > This adds basic building blocks required for ID_PFR2 CPU register which
> > provides information about the AArch32 programmers model which must be
> > interpreted along with ID_PFR0 and ID_PFR1 CPU registers. This is added
> > per ARM DDI 0487F.a specification.
> >
> > Cc: Catalin Marinas
> > Cc: Will Deacon
> > Cc: Marc Zyngier
> > Cc: Mark Rutland
> > Cc: James Morse
> > Cc: Suzuki K Poulose
> > Cc: kvmarm@lists.cs.columbia.edu
> > Cc: linux-arm-ker...@lists.infradead.org
> > Cc: linux-ker...@vger.kernel.org
> >
> > Suggested-by: Mark Rutland
> > Reviewed-by: Suzuki K Poulose
> > Signed-off-by: Anshuman Khandual
> > ---
> >  arch/arm64/include/asm/cpu.h    | 1 +
> >  arch/arm64/include/asm/sysreg.h | 4
> >  arch/arm64/kernel/cpufeature.c  | 11 +++
> >  arch/arm64/kernel/cpuinfo.c     | 1 +
> >  arch/arm64/kvm/sys_regs.c       | 2 +-
> >  5 files changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> > index b4a40535a3d8..464e828a994d 100644
> > --- a/arch/arm64/include/asm/cpu.h
> > +++ b/arch/arm64/include/asm/cpu.h
> > @@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
> >  	u32 reg_id_mmfr3;
> >  	u32 reg_id_pfr0;
> >  	u32 reg_id_pfr1;
> > +	u32 reg_id_pfr2;
> >
> >  	u32 reg_mvfr0;
> >  	u32 reg_mvfr1;
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index e5317a6367b6..c977449e02db 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -153,6 +153,7 @@
> >  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
> >  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
> >  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> > +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)
>
> nit: but please group these defines by name rather than encoding.

So far we've *always* grouped these by encoding in this file, so can we
keep things that way for now? Otherwise we're inconsistent with both
schemes.

Mark.
Re: [PATCH V3 04/16] arm64/cpufeature: Introduce ID_PFR2 CPU register
On Sat, May 02, 2020 at 07:03:53PM +0530, Anshuman Khandual wrote:
> This adds basic building blocks required for ID_PFR2 CPU register which
> provides information about the AArch32 programmers model which must be
> interpreted along with ID_PFR0 and ID_PFR1 CPU registers. This is added
> per ARM DDI 0487F.a specification.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Marc Zyngier
> Cc: Mark Rutland
> Cc: James Morse
> Cc: Suzuki K Poulose
> Cc: kvmarm@lists.cs.columbia.edu
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-ker...@vger.kernel.org
>
> Suggested-by: Mark Rutland
> Reviewed-by: Suzuki K Poulose
> Signed-off-by: Anshuman Khandual
> ---
>  arch/arm64/include/asm/cpu.h    | 1 +
>  arch/arm64/include/asm/sysreg.h | 4
>  arch/arm64/kernel/cpufeature.c  | 11 +++
>  arch/arm64/kernel/cpuinfo.c     | 1 +
>  arch/arm64/kvm/sys_regs.c       | 2 +-
>  5 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
> index b4a40535a3d8..464e828a994d 100644
> --- a/arch/arm64/include/asm/cpu.h
> +++ b/arch/arm64/include/asm/cpu.h
> @@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
>  	u32 reg_id_mmfr3;
>  	u32 reg_id_pfr0;
>  	u32 reg_id_pfr1;
> +	u32 reg_id_pfr2;
>
>  	u32 reg_mvfr0;
>  	u32 reg_mvfr1;
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index e5317a6367b6..c977449e02db 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -153,6 +153,7 @@
>  #define SYS_MVFR0_EL1	sys_reg(3, 0, 0, 3, 0)
>  #define SYS_MVFR1_EL1	sys_reg(3, 0, 0, 3, 1)
>  #define SYS_MVFR2_EL1	sys_reg(3, 0, 0, 3, 2)
> +#define SYS_ID_PFR2_EL1	sys_reg(3, 0, 0, 3, 4)

nit: but please group these defines by name rather than encoding.

Will
Re: [PATCH v4 2/7] KVM: arm64: clean up redundant 'kvm_run' parameters
Hi Tianjia,

On 2020-04-27 05:35, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
>
> Signed-off-by: Tianjia Zhang

On the face of it, this looks OK, but I haven't tried to run the resulting
kernel.

I'm not opposed to taking this patch *if* there is an agreement across
architectures to take the series (I value consistency over the janitorial
exercise).

Another thing is that this is going to conflict with the set of patches
that move the KVM/arm code back where it belongs (arch/arm64/kvm), so I'd
probably cherry-pick that one directly.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...
Re: [PATCH v4 0/7] clean up redundant 'kvm_run' parameters
Paolo Bonzini, any opinion on this?

Thanks and best,
Tianjia

On 2020/4/27 12:35, Tianjia Zhang wrote:
> In the current kvm version, 'kvm_run' has been included in the 'kvm_vcpu'
> structure. For historical reasons, many kvm-related function parameters
> retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
> patch does a unified cleanup of these remaining redundant parameters.
>
> This series of patches has completely cleaned the architecture of arm64,
> mips, ppc, and s390 (no such redundant code on x86). Due to the large
> number of modified codes, a separate patch is made for each platform. On
> the ppc platform, there is also a redundant structure pointer of
> 'kvm_run' in 'vcpu_arch', which has also been cleaned separately.
>
> ---
> v4 change:
>   mips: fixes two errors in entry.c.
> v3 change:
>   Keep the existing `vcpu->run` in the function body unchanged.
> v2 change:
>   s390 retains the original variable name and minimizes modification.
>
> Tianjia Zhang (7):
>   KVM: s390: clean up redundant 'kvm_run' parameters
>   KVM: arm64: clean up redundant 'kvm_run' parameters
>   KVM: PPC: Remove redundant kvm_run from vcpu_arch
>   KVM: PPC: clean up redundant 'kvm_run' parameters
>   KVM: PPC: clean up redundant kvm_run parameters in assembly
>   KVM: MIPS: clean up redundant 'kvm_run' parameters
>   KVM: MIPS: clean up redundant kvm_run parameters in assembly
>
>  arch/arm64/include/asm/kvm_coproc.h      | 12 +--
>  arch/arm64/include/asm/kvm_host.h        | 11 +--
>  arch/arm64/include/asm/kvm_mmu.h         | 2 +-
>  arch/arm64/kvm/handle_exit.c             | 36 +++
>  arch/arm64/kvm/sys_regs.c                | 13 ++-
>  arch/mips/include/asm/kvm_host.h         | 32 +--
>  arch/mips/kvm/emulate.c                  | 59
>  arch/mips/kvm/entry.c                    | 21 ++---
>  arch/mips/kvm/mips.c                     | 14 +--
>  arch/mips/kvm/trap_emul.c                | 114 ++-
>  arch/mips/kvm/vz.c                       | 26 ++
>  arch/powerpc/include/asm/kvm_book3s.h    | 16 ++--
>  arch/powerpc/include/asm/kvm_host.h      | 1 -
>  arch/powerpc/include/asm/kvm_ppc.h       | 27 +++---
>  arch/powerpc/kvm/book3s.c                | 4 +-
>  arch/powerpc/kvm/book3s.h                | 2 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c      | 12 +--
>  arch/powerpc/kvm/book3s_64_mmu_radix.c   | 4 +-
>  arch/powerpc/kvm/book3s_emulate.c        | 10 +-
>  arch/powerpc/kvm/book3s_hv.c             | 64 ++---
>  arch/powerpc/kvm/book3s_hv_nested.c      | 12 +--
>  arch/powerpc/kvm/book3s_interrupts.S     | 17 ++--
>  arch/powerpc/kvm/book3s_paired_singles.c | 72 +++---
>  arch/powerpc/kvm/book3s_pr.c             | 33 ---
>  arch/powerpc/kvm/booke.c                 | 39
>  arch/powerpc/kvm/booke.h                 | 8 +-
>  arch/powerpc/kvm/booke_emulate.c         | 2 +-
>  arch/powerpc/kvm/booke_interrupts.S      | 9 +-
>  arch/powerpc/kvm/bookehv_interrupts.S    | 10 +-
>  arch/powerpc/kvm/e500_emulate.c          | 15 ++-
>  arch/powerpc/kvm/emulate.c               | 10 +-
>  arch/powerpc/kvm/emulate_loadstore.c     | 32 +++
>  arch/powerpc/kvm/powerpc.c               | 72 +++---
>  arch/powerpc/kvm/trace_hv.h              | 6 +-
>  arch/s390/kvm/kvm-s390.c                 | 23 +++--
>  virt/kvm/arm/arm.c                       | 6 +-
>  virt/kvm/arm/mmio.c                      | 11 ++-
>  virt/kvm/arm/mmu.c                       | 5 +-
>  38 files changed, 392 insertions(+), 470 deletions(-)
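The cleanup pattern in this series is mechanical and can be illustrated with a stripped-down sketch. The struct layouts and the handler below are hypothetical stand-ins rather than KVM's real definitions; the point is only that once `kvm_vcpu` carries its `kvm_run`, passing the run as a second parameter is pure redundancy.

```c
#include <assert.h>

/* Hypothetical stand-ins for the real KVM structures. */
struct kvm_run { int exit_reason; };
struct kvm_vcpu { struct kvm_run *run; };

/* Before: 'run' passed alongside the vcpu that already owns it. */
static int handle_exit_old(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
	(void)vcpu;	/* the vcpu->run pointer is ignored, yet always equal */
	return run->exit_reason;
}

/* After: derive the run from the vcpu, as this series does. */
static int handle_exit_new(struct kvm_vcpu *vcpu)
{
	return vcpu->run->exit_reason;
}
```

Both handlers behave identically whenever `run == vcpu->run`, which is the invariant the kernel already maintains; dropping the parameter removes the possibility of the two ever disagreeing.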
Re: [PATCH V3 06/16] arm64/cpufeature: Introduce ID_MMFR5 CPU register
On 05/05/2020 02:03 AM, Will Deacon wrote:
> On Sat, May 02, 2020 at 07:03:55PM +0530, Anshuman Khandual wrote:
>> This adds basic building blocks required for ID_MMFR5 CPU register which
>> provides information about the implemented memory model and memory
>> management support in AArch32 state. This is added per ARM DDI 0487F.a
>> specification.
>>
>> Cc: Catalin Marinas
>> Cc: Will Deacon
>> Cc: Marc Zyngier
>> Cc: Mark Rutland
>> Cc: James Morse
>> Cc: Suzuki K Poulose
>> Cc: kvmarm@lists.cs.columbia.edu
>> Cc: linux-arm-ker...@lists.infradead.org
>> Cc: linux-ker...@vger.kernel.org
>>
>> Suggested-by: Will Deacon
>> Signed-off-by: Anshuman Khandual
>> ---
>>  arch/arm64/include/asm/cpu.h    | 1 +
>>  arch/arm64/include/asm/sysreg.h | 3 +++
>>  arch/arm64/kernel/cpufeature.c  | 10 ++
>>  arch/arm64/kernel/cpuinfo.c     | 1 +
>>  arch/arm64/kvm/sys_regs.c       | 2 +-
>>  5 files changed, 16 insertions(+), 1 deletion(-)
>
> [...]
>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 2ce952d9668d..c790cc200bb1 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -403,6 +403,11 @@ static const struct arm64_ftr_bits ftr_id_isar4[] = {
>>  	ARM64_FTR_END,
>>  };
>>
>> +static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>>  static const struct arm64_ftr_bits ftr_id_isar6[] = {
>>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0),
>>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0),
>> @@ -527,6 +532,7 @@ static const struct __ftr_reg_entry {
>>  	ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2),
>>  	ARM64_FTR_REG(SYS_ID_PFR2_EL1, ftr_id_pfr2),
>>  	ARM64_FTR_REG(SYS_ID_DFR1_EL1, ftr_id_dfr1),
>> +	ARM64_FTR_REG(SYS_ID_MMFR5_EL1, ftr_id_mmfr5),
>>
>>  	/* Op1 = 0, CRn = 0, CRm = 4 */
>>  	ARM64_FTR_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0),
>> @@ -732,6 +738,7 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
>>  	init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
>>  	init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
>>  	init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
>> +	init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
>>  	init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
>>  	init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
>>  	init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
>> @@ -866,6 +873,8 @@ static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
>>  				      info->reg_id_mmfr2, boot->reg_id_mmfr2);
>>  	taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
>>  				      info->reg_id_mmfr3, boot->reg_id_mmfr3);
>
> Looks like MMFR4 is missing here?

ID_MMFR4 is missing from cpuinfo_arm64 itself, hence from
init_cpu_features() and update_cpu_features() as well. But it is defined
in arm64_ftr_regs[]. I was wondering about it but left it as is (due to
lack of complete context). Unless there is any other concern, I will add
it up in cpuinfo_arm64 and make it a part of the CPU context.
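Each ARM64_FTR_BITS() entry in the patch above describes one ID register field, e.g. ID_MMFR5_EL1.ETS as a 4-bit unsigned field at shift 0, which the cpufeature code extracts and sanitises across CPUs. A minimal sketch of the extraction step follows, assuming the kernel's convention of 4-bit ID register fields; the helper name here is invented for illustration and is not the kernel's.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative helper (hypothetical name): pull a 4-bit unsigned
 * feature field out of an ID register value, the way the arm64
 * cpufeature code reads fields such as ID_MMFR5_EL1.ETS
 * (shift 0, width 4 in the entry above).
 */
static inline unsigned int id_field(uint64_t reg, unsigned int shift)
{
	return (unsigned int)((reg >> shift) & 0xf);
}
```

With FTR_LOWER_SAFE, the sanitised system-wide value for such a field is the minimum of the per-CPU values, so a mixed system only advertises the capability level every CPU supports.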