Re: Unexpected interrupt received in Guest OS when booting after "system_reset"
On 2019/3/29 18:54, Marc Zyngier wrote: On 29/03/2019 09:19, Heyi Guo wrote: Hi Marc, The patch works. I tested for 1.5 hours and 52 VM resets. There were 16 times that a virtual LPI was left in the ap_list (seen by an additional printk) during reset, and we never saw "Unexpected interrupt received" any more. Thanks for testing, much appreciated. Just a minor comment: how about replacing /vcpu->arch.vgic_cpu./ with /vgic_cpu->/ in the lock/unlock code line, to reduce some words? Well, as I said, the patch is wrong in other ways, so I wouldn't bother with that. It only serves as a test for my theory. Sure, I hadn't caught the last sentence of your previous mail... I think I'm slowly warming up to your initial proposal to hook things into the PROPBASER/PENDBASER registers, as the LPIs do have a life outside of the ITS itself. I'll try to respin something next week. Thanks, Heyi Thanks, M. Thanks, Heyi On 2019/3/29 9:19, Heyi Guo wrote: On 2019/3/29 1:18, Marc Zyngier wrote: [Please do not send HTML emails] Sorry; will keep in mind next time :) On 28/03/2019 15:44, Heyi Guo wrote: Hi Marc and Christoffer, When we issue "system_reset" from the qemu monitor to a running VM, guest Linux will occasionally get an "Unexpected interrupt" after rebooting, with the kernel message at the bottom. After some investigation, we found it might be caused by the preservation of a virtual LPI during system reset: it seems the virtual LPI remains in the ap_list during VM reset, as does its "enabled" and "pending_latch" status, and this causes the virtual LPI to be injected wrongly after the VCPU reboots and enables interrupts. We propose to clear the "enabled" flag of the virtual LPI when PROPBASER (or GICR_CTLR) of the virtual GICR is written to 0, and to update the virtual LPI properties when GICR_CTLR.EnableLPIs is set to 1 again. Any advice? Or did we miss something? We're clearly missing a trick here, but I'm not convinced of your approach. To be honest, we were not fully convinced by ourselves either. I was worrying about the guest switching GICR_CTLR or GICR_PROPBASER at runtime, which would probably cause issues for our rough approach. What should happen is that the redistributors should be reset as well, and that this should recall any LPI that has been made pending. Unfortunately, we don't seem to have such code in place, which is embarrassing. Can you give the following, untested patch a go? It isn't right either, but it should have the right effect. If you confirm that it solves your problem, we can look at adding the right hooks... Thanks, I'll test this and get back to you. Heyi Thanks, M.
diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
index ab3f47745d9c..bd9a9250f323 100644
--- a/virt/kvm/arm/vgic/vgic-its.c
+++ b/virt/kvm/arm/vgic/vgic-its.c
@@ -2403,8 +2403,32 @@ static int vgic_its_commit_v0(struct vgic_its *its)
 	return 0;
 }
 
+static void vgic_nuke_pending_lpis(struct kvm_vcpu *vcpu)
+{
+	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+	struct vgic_irq *irq, *tmp;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+
+	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
+		if (irq->intid >= VGIC_MIN_LPI) {
+			list_del(&irq->ap_list);
+			vgic_put_irq(vcpu->kvm, irq);
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&vcpu->arch.vgic_cpu.ap_list_lock, flags);
+}
+
 static void vgic_its_reset(struct kvm *kvm, struct vgic_its *its)
 {
+	struct kvm_vcpu *vcpu;
+	int c;
+
+	kvm_for_each_vcpu(c, vcpu, kvm)
+		vgic_nuke_pending_lpis(vcpu);
+
 	/* We need to keep the ABI specific field values */
 	its->baser_coll_table &= ~GITS_BASER_VALID;
 	its->baser_device_table &= ~GITS_BASER_VALID;
[PATCH 1/1] KVM: arm/arm64: arch_timer: Fix TimerValue TVAL calculation in KVM
Recently the generic timer test of kvm-unit-tests failed to complete (stalled) when a physical timer is being used. This issue is caused by an incorrect update of CNT_CVAL when TimerValue is being accessed, introduced by commit 84135d3d18da ("KVM: arm/arm64: consolidate arch timer trap handlers"). According to the Arm ARM, the read/write behavior of accesses to the TimerValue registers is expected to be:

 * READ:  TimerValue = CompareValue - (Counter - Offset)
 * WRITE: CompareValue = (Counter - Offset) + SignExtend(TimerValue)

This patch fixes the TVAL read/write code path according to the specification.

Signed-off-by: Wei Huang
---
 virt/kvm/arm/arch_timer.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 3417f2dbc366..d43308dc3617 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -812,7 +812,7 @@ static u64 kvm_arm_timer_read(struct kvm_vcpu *vcpu,
 
 	switch (treg) {
 	case TIMER_REG_TVAL:
-		val = kvm_phys_timer_read() - timer->cntvoff - timer->cnt_cval;
+		val = timer->cnt_cval - kvm_phys_timer_read() + timer->cntvoff;
 		break;
 
 	case TIMER_REG_CTL:
@@ -858,7 +858,7 @@ static void kvm_arm_timer_write(struct kvm_vcpu *vcpu,
 {
 	switch (treg) {
 	case TIMER_REG_TVAL:
-		timer->cnt_cval = val - kvm_phys_timer_read() - timer->cntvoff;
+		timer->cnt_cval = kvm_phys_timer_read() - timer->cntvoff + val;
 		break;
 
 	case TIMER_REG_CTL:
-- 
2.14.5
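For illustration, the architectural relationship the fix restores can be written as two standalone helpers (a minimal sketch using kernel-style u64/s32 types; this is not code from the patch itself):

static u64 tval_on_read(u64 cnt_cval, u64 counter, u64 cntvoff)
{
	/* TimerValue = CompareValue - (Counter - Offset) */
	return cnt_cval - (counter - cntvoff);
}

static u64 cval_on_write(u64 counter, u64 cntvoff, s32 timer_value)
{
	/* CompareValue = (Counter - Offset) + SignExtend(TimerValue) */
	return (counter - cntvoff) + timer_value;
}

TVAL is architecturally a signed 32-bit quantity, hence the s32 parameter: the sign extension happens implicitly when C promotes it to u64.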
Re: [PATCH v7 00/27] KVM: arm64: SVE guest support
On Fri, Mar 29, 2019 at 02:56:36PM +0000, Marc Zyngier wrote: > Hi Dave, > > On 29/03/2019 13:00, Dave Martin wrote: > > This series implements support for allowing KVM guests to use the Arm > > Scalable Vector Extension (SVE), superseding the previous v6 series [1]. > > > > The patches are also available on a branch for reviewer convenience. [2] > > > > The patches are based on v5.1-rc2. > > > > This series addresses a couple of minor review comments received on v6 > > and otherwise applies reviewer tags only. The code differences > > between v6 and this series consist of minor cosmetic fixups only. > > > > Draft kvmtool patches were posted separately [3], [4]. > > > > For a description of minor updates, see the individual patches. > > > > > > Thanks go to Julien Thierry and Julien Grall for their review efforts, > > and to Zhang Lei for testing the series -- many thanks for their help > > in getting the series to this point! > > > > > > Reviewers' attention is drawn to the following patches, which have no > > Reviewed-by/Acked-by. Please take a look if you have a moment. > > > > * Patch 11 (KVM: arm64: Support runtime sysreg visibility filtering) > > > >Previously Reviewed-by Julien Thierry, but this version of the > >patch contains some minor rework suggested by Mark Rutland during > >the v5 review [5]. > > > > * Patch 15 (KVM: arm64: Add missing #include of <string.h> > >in guest.c) > > > >(Trivial patch.) > > > > * Patch 26: (KVM: Document errors for KVM_GET_ONE_REG and > >KVM_SET_ONE_REG) > > > >(Documentation only.) > > > > * Patch 27: KVM: arm64/sve: Document KVM API extensions for SVE > > > >(Documentation only.) > > In order to get people to test this a bit more, I've now pushed this > series into -next. I'm pretty confident that it won't break a thing, but > better safe than sorry. I also expect any fix to be pushed on top of > this series. > > This doesn't mean that the review is over, quite the opposite. I intend > to go over it once more, and I'd like other people to do the same > (especially anyone looking at implementing the required support in QEMU). Agreed, thanks for pushing it to next. I'll continue testing and await further review from people. Cheers ---Dave
Re: [PATCH v7 00/27] KVM: arm64: SVE guest support
Hi Dave, On 29/03/2019 13:00, Dave Martin wrote: > This series implements support for allowing KVM guests to use the Arm > Scalable Vector Extension (SVE), superseding the previous v6 series [1]. > > The patches are also available on a branch for reviewer convenience. [2] > > The patches are based on v5.1-rc2. > > This series addresses a couple of minor review comments received on v6 > and otherwise applies reviewer tags only. The code differences > between v6 and this series consist of minor cosmetic fixups only. > > Draft kvmtool patches were posted separately [3], [4]. > > For a description of minor updates, see the individual patches. > > > Thanks go to Julien Thierry and Julien Grall for their review efforts, > and to Zhang Lei for testing the series -- many thanks for their help > in getting the series to this point! > > > Reviewers' attention is drawn to the following patches, which have no > Reviewed-by/Acked-by. Please take a look if you have a moment. > > * Patch 11 (KVM: arm64: Support runtime sysreg visibility filtering) > >Previously Reviewed-by Julien Thierry, but this version of the >patch contains some minor rework suggested by Mark Rutland during >the v5 review [5]. > > * Patch 15 (KVM: arm64: Add missing #include of <string.h> >in guest.c) > >(Trivial patch.) > > * Patch 26: (KVM: Document errors for KVM_GET_ONE_REG and >KVM_SET_ONE_REG) > >(Documentation only.) > > * Patch 27: KVM: arm64/sve: Document KVM API extensions for SVE > >(Documentation only.) In order to get people to test this a bit more, I've now pushed this series into -next. I'm pretty confident that it won't break a thing, but better safe than sorry. I also expect any fix to be pushed on top of this series. This doesn't mean that the review is over, quite the opposite. I intend to go over it once more, and I'd like other people to do the same (especially anyone looking at implementing the required support in QEMU). Thanks, M. -- Jazz is not dead. It just smells funny...
[PATCH v7 27/27] KVM: arm64/sve: Document KVM API extensions for SVE
This patch adds sections to the KVM API documentation describing the extensions for supporting the Scalable Vector Extension (SVE) in guests.

Signed-off-by: Dave Martin

---

Changes since v5:

 * Document KVM_ARM_VCPU_FINALIZE and its interactions with SVE.
---
 Documentation/virtual/kvm/api.txt | 132 +++++++++++++++++++++++++++++++++++-
 1 file changed, 129 insertions(+), 3 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index cd920dd..68509de 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -1873,6 +1873,7 @@ Parameters: struct kvm_one_reg (in)
 Returns: 0 on success, negative value on failure
 Errors:
   ENOENT:   no such register
+  EPERM:    register access forbidden for architecture-dependent reasons
   EINVAL:   other errors, such as bad size encoding for a known register
 
 struct kvm_one_reg {
@@ -2127,13 +2128,20 @@ Specifically:
   0x6030 0000 0010 004c SPSR_UND    64  spsr[KVM_SPSR_UND]
   0x6030 0000 0010 004e SPSR_IRQ    64  spsr[KVM_SPSR_IRQ]
   0x6030 0000 0010 0050 SPSR_FIQ    64  spsr[KVM_SPSR_FIQ]
-  0x6040 0000 0010 0054 V0         128  fp_regs.vregs[0]
-  0x6040 0000 0010 0058 V1         128  fp_regs.vregs[1]
-    ...
-  0x6040 0000 0010 00d0 V31        128  fp_regs.vregs[31]
+  0x6040 0000 0010 0054 V0         128  fp_regs.vregs[0]    (*)
+  0x6040 0000 0010 0058 V1         128  fp_regs.vregs[1]    (*)
+    ...
+  0x6040 0000 0010 00d0 V31        128  fp_regs.vregs[31]   (*)
   0x6020 0000 0010 00d4 FPSR    32  fp_regs.fpsr
   0x6020 0000 0010 00d5 FPCR    32  fp_regs.fpcr
 
+(*) These encodings are not accepted for SVE-enabled vcpus.  See
+KVM_ARM_VCPU_INIT.
+
+The equivalent register content can be accessed via bits [127:0] of
+the corresponding SVE Zn registers instead for vcpus that have SVE
+enabled (see below).
+
 arm64 CCSIDR registers are demultiplexed by CSSELR value:
   0x6020 0000 0011 00 <csselr:8>
 
@@ -2143,6 +2151,61 @@ arm64 system registers have the following id bit patterns:
 arm64 firmware pseudo-registers have the following bit pattern:
   0x6030 0000 0014 <regno:16>
 
+arm64 SVE registers have the following bit patterns:
+  0x6080 0000 0015 00<n5> <slice5>   Zn bits[2048*slice + 2047 : 2048*slice]
+  0x6050 0000 0015 04<n4> <slice5>   Pn bits[256*slice + 255 : 256*slice]
+  0x6050 0000 0015 060 <slice5>      FFR bits[256*slice + 255 : 256*slice]
+  0x6060 0000 0015 ffff              KVM_REG_ARM64_SVE_VLS pseudo-register
+
+Access to slices beyond the maximum vector length configured for the
+vcpu (i.e., where 16 * slice >= max_vq (**)) will fail with ENOENT.
+
+These registers are only accessible on vcpus for which SVE is enabled.
+See KVM_ARM_VCPU_INIT for details.
+
+In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not
+accessible until the vcpu's SVE configuration has been finalized
+using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).  See KVM_ARM_VCPU_INIT
+and KVM_ARM_VCPU_FINALIZE for more information about this procedure.
+
+KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector
+lengths supported by the vcpu to be discovered and configured by
+userspace.  When transferred to or from user memory via KVM_GET_ONE_REG
+or KVM_SET_ONE_REG, the value of this register is of type __u64[8], and
+encodes the set of vector lengths as follows:
+
+__u64 vector_lengths[8];
+
+if (vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX &&
+    ((vector_lengths[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1))
+	/* Vector length vq * 16 bytes supported */
+else
+	/* Vector length vq * 16 bytes not supported */
+
+(**) The maximum value vq for which the above condition is true is
+max_vq.  This is the maximum vector length available to the guest on
+this vcpu, and determines which register slices are visible through
+this ioctl interface.
+
+(See Documentation/arm64/sve.txt for an explanation of the "vq"
+nomenclature.)
+
+KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT.
+KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that
+the host supports.
+
+Userspace may subsequently modify it if desired until the vcpu's SVE
+configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
+
+Apart from simply removing all vector lengths from the host set that
+exceed some value, support for arbitrarily chosen sets of vector lengths
+is hardware-dependent and may not be available.  Attempting to configure
+an invalid set of vector lengths via KVM_SET_ONE_REG will fail with
+EINVAL.
+
+After the vcpu's SVE configuration is finalized, further attempts to
+write this register will fail with EPERM.
+
 MIPS registers are mapped using the lower 32 bits.  The upper 16 of that
 is the register group type:
 
@@ -2197,6 +2260,7 @@ Parameters: struct kvm_one_reg (in and out)
 Returns: 0 on success, negative value on failure
 Errors:
   ENOENT:   no such register
+  EPERM:    register access forbidden for architecture-dependent reasons
   EINVAL:   other errors, such as
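As a concrete illustration of the KVM_REG_ARM64_SVE_VLS encoding documented above, a user-space caller could query whether a given vector length is supported roughly as follows (a sketch only: it assumes vcpu_fd refers to a vcpu initialised with the KVM_ARM_VCPU_SVE feature, and that the uapi headers defining KVM_REG_ARM64_SVE_VLS are available):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 1 if vector length vq * 16 bytes is supported, 0 if not, -1 on error. */
static int sve_vq_supported(int vcpu_fd, unsigned int vq)
{
	uint64_t vls[8];
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SVE_VLS,
		.addr = (uint64_t)(uintptr_t)vls,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		return -1;

	return (vls[(vq - 1) / 64] >> ((vq - 1) % 64)) & 1;
}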
[PATCH v7 15/27] KVM: arm64: Add missing #include of <string.h> in guest.c
arch/arm64/kvm/guest.c uses the string functions, but the corresponding header is not included. We seem to get away with this for now, but for completeness this patch adds the #include, in preparation for adding yet more memset() calls.

Signed-off-by: Dave Martin
Tested-by: zhang.lei
---
 arch/arm64/kvm/guest.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 62514cb..3e38eb2 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <linux/string.h>
 #include
 #include
 #include
-- 
2.1.4
[PATCH v7 11/27] KVM: arm64: Support runtime sysreg visibility filtering
Some optional features of the Arm architecture add new system registers that are not present in the base architecture. Where these features are optional for the guest, the visibility of these registers may need to depend on some runtime configuration, such as a flag passed to KVM_ARM_VCPU_INIT. For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE is not enabled for the guest, even though these registers may be present in the hardware and visible to the host at EL2. Adding special-case checks all over the place for individual registers is going to get messy as the number of conditionally-visible registers grows. In order to help solve this problem, this patch adds a new sysreg method visibility() that can be used to hook in any needed runtime visibility checks. This method can currently return REG_HIDDEN_USER to inhibit enumeration and ioctl access to the register for userspace, and REG_HIDDEN_GUEST to inhibit runtime access by the guest using MSR/MRS. Wrappers are added to allow these flags to be conveniently queried. This approach allows a conditionally modified view of individual system registers such as the CPU ID registers, in addition to completely hiding registers where appropriate. Signed-off-by: Dave Martin Tested-by: zhang.lei --- Changes since v5: * Rename the visibility override flags, add some comments, and rename/ introduce helpers to make the purpose of this code clearer. --- arch/arm64/kvm/sys_regs.c | 24 +--- arch/arm64/kvm/sys_regs.h | 25 + 2 files changed, 46 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a5d14b5..c86a7b0 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1927,6 +1927,12 @@ static void perform_access(struct kvm_vcpu *vcpu, { trace_kvm_sys_access(*vcpu_pc(vcpu), params, r); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_guest(vcpu, r)) { + kvm_inject_undefined(vcpu); + return; + } + /* * Not having an accessor means that we have configured a trap * that we don't know how to handle. This certainly qualifies @@ -2438,6 +2444,10 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (!r) return get_invariant_sys_reg(reg->id, uaddr); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_user(vcpu, r)) + return -ENOENT; + if (r->get_user) return (r->get_user)(vcpu, r, reg, uaddr); @@ -2459,6 +2469,10 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg if (!r) return set_invariant_sys_reg(reg->id, uaddr); + /* Check for regs disabled by runtime config */ + if (sysreg_hidden_from_user(vcpu, r)) + return -ENOENT; + if (r->set_user) return (r->set_user)(vcpu, r, reg, uaddr); @@ -2515,7 +2529,8 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) return true; } -static int walk_one_sys_reg(const struct sys_reg_desc *rd, +static int walk_one_sys_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, u64 __user **uind, unsigned int *total) { @@ -2526,6 +2541,9 @@ static int walk_one_sys_reg(const struct sys_reg_desc *rd, if (!(rd->reg || rd->get_user)) return 0; + if (sysreg_hidden_from_user(vcpu, rd)) + return 0; + if (!copy_reg_to_user(rd, uind)) return -EFAULT; @@ -2554,9 +2572,9 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind) int cmp = cmp_sys_reg(i1, i2); /* target-specific overrides generic entry. */ if (cmp <= 0) - err = walk_one_sys_reg(i1, &uind, &total); + err = walk_one_sys_reg(vcpu, i1, &uind, &total); else - err = walk_one_sys_reg(i2, &uind, &total); + err = walk_one_sys_reg(vcpu, i2, &uind, &total); if (err) return err; diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h index 3b1bc7f..2be9950 100644 --- a/arch/arm64/kvm/sys_regs.h +++ b/arch/arm64/kvm/sys_regs.h @@ -64,8 +64,15 @@ struct sys_reg_desc { const struct kvm_one_reg *reg, void __user *uaddr); int (*set_user)(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr); + + /* Return mask of REG_* runtime visibility overrides */ + unsigned int (*visibility)(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd); }; +#define REG_HIDDEN_USER (1 << 0) /*
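To illustrate how the new hook is meant to be used, here is a sketch of a visibility handler modelled on what a later patch in this series does for the SVE registers (illustrative only, not part of this patch):

static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
				   const struct sys_reg_desc *rd)
{
	if (vcpu_has_sve(vcpu))
		return 0;	/* register visible as normal */

	/* Hide the register from both the guest and userspace */
	return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
}

A sys_reg_desc entry for, e.g., ZCR_EL1 would then set .visibility = sve_visibility, and the checks added above in perform_access(), kvm_arm_sys_reg_{get,set}_reg() and walk_one_sys_reg() do the rest.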
[PATCH v7 18/27] KVM: arm64/sve: Add SVE support to register access ioctl interface
This patch adds the following registers for access via the KVM_{GET,SET}_ONE_REG interface: * KVM_REG_ARM64_SVE_ZREG(n, i) (n = 0..31) (in 2048-bit slices) * KVM_REG_ARM64_SVE_PREG(n, i) (n = 0..15) (in 256-bit slices) * KVM_REG_ARM64_SVE_FFR(i) (in 256-bit slices) In order to adapt gracefully to future architectural extensions, the registers are logically divided up into slices as noted above: the i parameter denotes the slice index. This allows us to reserve space in the ABI for future expansion of these registers. However, as of today the architecture does not permit registers to be larger than a single slice, so no code is needed in the kernel to expose additional slices, for now. The code can be extended later as needed to expose them up to a maximum of 32 slices (as carved out in the architecture itself) if they really exist someday. The registers are only visible for vcpus that have SVE enabled. They are not enumerated by KVM_GET_REG_LIST on vcpus that do not have SVE. Accesses to the FPSIMD registers via KVM_REG_ARM_CORE are not allowed for SVE-enabled vcpus: SVE-aware userspace can use the KVM_REG_ARM64_SVE_ZREG() interface instead to access the same register state. This avoids some complex and pointless emulation in the kernel to convert between the two views of these aliased registers. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- Changes since v5: * [Julien Thierry] rename sve_reg_region() to sve_reg_to_region() to make its purpose a bit clearer. * [Julien Thierry] rename struct sve_state_region to sve_state_reg_region to make it clearer that this struct only describes the bounds of (part of) a single register within sve_state. * [Julien Thierry] Add a comment to clarify the purpose of struct sve_state_reg_region.
--- arch/arm64/include/asm/kvm_host.h | 14 arch/arm64/include/uapi/asm/kvm.h | 17 + arch/arm64/kvm/guest.c| 139 ++ 3 files changed, 158 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 4fabfd2..205438a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -329,6 +329,20 @@ struct kvm_vcpu_arch { #define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \ sve_ffr_offset((vcpu)->arch.sve_max_vl))) +#define vcpu_sve_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + __size_ret = 0; \ + } else {\ + __vcpu_vq = sve_vq_from_vl((vcpu)->arch.sve_max_vl);\ + __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ + } \ + \ + __size_ret; \ +}) + /* vcpu_arch flags field values: */ #define KVM_ARM64_DEBUG_DIRTY (1 << 0) #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 97c3478..ced760c 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -226,6 +226,23 @@ struct kvm_vcpu_events { KVM_REG_ARM_FW | ((r) & 0x)) #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) +/* SVE registers */ +#define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) + +/* Z- and P-regs occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SVE_ZREG_BASE0 +#define KVM_REG_ARM64_SVE_PREG_BASE0x400 + +#define KVM_REG_ARM64_SVE_ZREG(n, i) (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \ +KVM_REG_ARM64_SVE_ZREG_BASE | \ +KVM_REG_SIZE_U2048 | \ +((n) << 5) | (i)) +#define KVM_REG_ARM64_SVE_PREG(n, i) (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \ +KVM_REG_ARM64_SVE_PREG_BASE | \ +KVM_REG_SIZE_U256 |\ +((n) << 5) | (i)) +#define KVM_REG_ARM64_SVE_FFR(i) KVM_REG_ARM64_SVE_PREG(16, i) + /* Device Control API: ARM VGIC */ #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1 diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index
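For reference, user space would read one of the new registers along these lines (a sketch, assuming an SVE-enabled and finalized vcpu; error handling elided):

/* Read bits [2047:0] of Z0, i.e. slice 0 of the Z0 register. */
uint8_t z0[2048 / 8];
struct kvm_one_reg reg = {
	.id   = KVM_REG_ARM64_SVE_ZREG(0, 0),
	.addr = (uint64_t)(uintptr_t)z0,
};

ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);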
[PATCH v7 05/27] KVM: arm64: Add missing #includes to kvm_host.h
kvm_host.h uses some declarations from other headers that are currently included by accident, without an explicit #include. This patch adds a few #includes that are clearly missing. Although the header builds without them today, this should help to avoid future surprises. Signed-off-by: Dave Martin Acked-by: Mark Rutland Tested-by: zhang.lei --- Changes since v5: * [Mark Rutland] Add additional missing #includes , , , while we're about it. Commit message reworded to match. --- arch/arm64/include/asm/kvm_host.h | 4 1 file changed, 4 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a01fe087..6d10100 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -22,9 +22,13 @@ #ifndef __ARM64_KVM_HOST_H__ #define __ARM64_KVM_HOST_H__ +#include #include +#include #include +#include #include +#include #include #include #include -- 2.1.4
[PATCH v7 25/27] KVM: arm64: Add a capability to advertise SVE support
To provide a uniform way to check for KVM SVE support amongst other features, this patch adds a suitable capability KVM_CAP_ARM_SVE, and reports it as present when SVE is available. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- Changes since v5: * [Julien Thierry] Strip out has_vhe() sanity-check, which wasn't in the most logical place, and anyway doesn't really belong in this patch. Moved to KVM: arm64/sve: Allow userspace to enable SVE for vcpus instead. --- arch/arm64/kvm/reset.c | 3 +++ include/uapi/linux/kvm.h | 1 + 2 files changed, 4 insertions(+) diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 32c5ac0..f13378d 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -98,6 +98,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_VM_IPA_SIZE: r = kvm_ipa_limit; break; + case KVM_CAP_ARM_SVE: + r = system_supports_sve(); + break; default: r = 0; } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index c3b8e7a..1d56444 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_ARM_VM_IPA_SIZE 165 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166 #define KVM_CAP_HYPERV_CPUID 167 +#define KVM_CAP_ARM_SVE 168 #ifdef KVM_CAP_IRQ_ROUTING -- 2.1.4
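Probing the new capability from user space is the usual KVM_CHECK_EXTENSION pattern (a sketch; kvm_fd is an open /dev/kvm file descriptor):

/* Non-zero means SVE-enabled vcpus can be created on this host. */
int have_sve = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE);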
[PATCH v7 23/27] KVM: arm64/sve: Add pseudo-register for the guest's vector lengths
This patch adds a new pseudo-register KVM_REG_ARM64_SVE_VLS to allow userspace to set and query the set of vector lengths visible to the guest. In the future, multiple register slices per SVE register may be visible through the ioctl interface. Once the set of slices has been determined we would not be able to allow the vector length set to be changed any more, in order to avoid userspace seeing inconsistent sets of registers. For this reason, this patch adds support for explicit finalization of the SVE configuration via the KVM_ARM_VCPU_FINALIZE ioctl. Finalization is the proper place to allocate the SVE register state storage in vcpu->arch.sve_state, so this patch adds that as appropriate. The data is freed via kvm_arch_vcpu_uninit(), which was previously a no-op on arm64. To simplify the logic for determining what vector lengths can be supported, some code is added to KVM init to work this out, in the kvm_arm_init_arch_resources() hook. The KVM_REG_ARM64_SVE_VLS pseudo-register is not exposed yet. Subsequent patches will allow SVE to be turned on for guest vcpus, making it visible. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- Changes since v5: * [Julien Thierry] Delete overzealous BUILD_BUG_ON() checks. It also turns out that these could cause kernel build failures in some configurations, even though the checked condition is compile-time constant. Because of the way the affected functions are called, the checks are superfluous, so the simplest option is simply to get rid of them. * [Julien Thierry] Free vcpu->arch.sve_state (if any) in kvm_arch_vcpu_uninit() (which is currently a no-op). This was accidentally lost during a previous rebase. * Add kvm_arm_init_arch_resources() hook, and use it to probe SVE configuration for KVM, to avoid duplicating the logic elsewhere. We only need to do this once. * Move sve_state buffer allocation to kvm_arm_vcpu_finalize(). As well as making the code more straightforward, this avoids the need to allocate memory in kvm_reset_vcpu(), the meat of which is non-preemptible since commit 358b28f09f0a ("arm/arm64: KVM: Allow a VCPU to fully reset itself"). The refactoring means that if this has not been done by the time we hit KVM_RUN, then this allocation will happen on the kvm_arm_first_run_init() path, where preemption remains enabled. * Add a couple of comments in {get,set}_sve_reg() to make the handling of the KVM_REG_ARM64_SVE_VLS special case a little clearer. * Move mis-split rework to avoid put_user() being the correct size by accident in KVM_GET_REG_LIST to KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST. * Fix wrong WARN_ON() check sense when checking whether the implementation may need more SVE register slices than KVM can support. * Fix erroneous setting of vcpu->arch.sve_max_vl based on stale loop control variable vq. * Move definition of KVM_ARM_VCPU_SVE from KVM: arm64/sve: Allow userspace to enable SVE for vcpus. * Migrate to explicit finalization of the SVE configuration, using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
--- arch/arm64/include/asm/kvm_host.h | 15 +++-- arch/arm64/include/uapi/asm/kvm.h | 5 ++ arch/arm64/kvm/guest.c| 114 +- arch/arm64/kvm/reset.c| 89 + 4 files changed, 215 insertions(+), 8 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 98658f7..5475cc4 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -23,7 +23,6 @@ #define __ARM64_KVM_HOST_H__ #include -#include #include #include #include @@ -50,6 +49,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS +/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */ #define KVM_VCPU_MAX_FEATURES 4 #define KVM_REQ_SLEEP \ @@ -59,10 +59,12 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); -static inline int kvm_arm_init_arch_resources(void) { return 0; } +extern unsigned int kvm_sve_max_vl; +int kvm_arm_init_arch_resources(void); int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); +void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu); int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext); void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start); @@ -353,6 +355,7 @@ struct kvm_vcpu_arch { #define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ #define KVM_ARM64_HOST_SVE_ENABLED (1 << 4) /* SVE enabled for EL0 */ #define KVM_ARM64_GUEST_HAS_SVE(1 << 5) /* SVE exposed to guest */ +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */ #define vcpu_has_sve(vcpu) (system_supports_sve() && \ ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE)) @@ -525,7
[PATCH v7 20/27] arm64/sve: In-kernel vector length availability query interface
KVM will need to interrogate the set of SVE vector lengths available on the system. This patch exposes the relevant bits to the kernel, along with a sve_vq_available() helper to check whether a particular vector length is supported. __vq_to_bit() and __bit_to_vq() are not intended for use outside these functions: now that these are exposed outside fpsimd.c, they are prefixed with __ in order to provide an extra hint that they are not intended for general-purpose use. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei --- arch/arm64/include/asm/fpsimd.h | 29 + arch/arm64/kernel/fpsimd.c | 35 --- 2 files changed, 37 insertions(+), 27 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index df7a143..ad6d2e4 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -24,10 +24,13 @@ #ifndef __ASSEMBLY__ +#include #include +#include #include #include #include +#include #if defined(__KERNEL__) && defined(CONFIG_COMPAT) /* Masks for extracting the FPSR and FPCR from the FPSCR */ @@ -89,6 +92,32 @@ extern u64 read_zcr_features(void); extern int __ro_after_init sve_max_vl; extern int __ro_after_init sve_max_virtualisable_vl; +/* Set of available vector lengths, as vq_to_bit(vq): */ +extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); + +/* + * Helpers to translate bit indices in sve_vq_map to VQ values (and + * vice versa). This allows find_next_bit() to be used to find the + * _maximum_ VQ not exceeding a certain value. + */ +static inline unsigned int __vq_to_bit(unsigned int vq) +{ + return SVE_VQ_MAX - vq; +} + +static inline unsigned int __bit_to_vq(unsigned int bit) +{ + if (WARN_ON(bit >= SVE_VQ_MAX)) + bit = SVE_VQ_MAX - 1; + + return SVE_VQ_MAX - bit; +} + +/* Ensure vq >= SVE_VQ_MIN && vq <= SVE_VQ_MAX before calling this function */ +static inline bool sve_vq_available(unsigned int vq) +{ + return test_bit(__vq_to_bit(vq), sve_vq_map); +} #ifdef CONFIG_ARM64_SVE diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 8a93afa..577296b 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -136,7 +136,7 @@ static int sve_default_vl = -1; int __ro_after_init sve_max_vl = SVE_VL_MIN; int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN; /* Set of available vector lengths, as vq_to_bit(vq): */ -static __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); +__ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); /* Set of vector lengths present on at least one cpu: */ static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX); static void __percpu *efi_sve_state; @@ -270,25 +270,6 @@ void fpsimd_save(void) } /* - * Helpers to translate bit indices in sve_vq_map to VQ values (and - * vice versa). This allows find_next_bit() to be used to find the - * _maximum_ VQ not exceeding a certain value. - */ - -static unsigned int vq_to_bit(unsigned int vq) -{ - return SVE_VQ_MAX - vq; -} - -static unsigned int bit_to_vq(unsigned int bit) -{ - if (WARN_ON(bit >= SVE_VQ_MAX)) - bit = SVE_VQ_MAX - 1; - - return SVE_VQ_MAX - bit; -} - -/* * All vector length selection from userspace comes through here. * We're on a slow path, so some sanity-checks are included. 
* If things go wrong there's a bug somewhere, but try to fall back to a @@ -309,8 +290,8 @@ static unsigned int find_supported_vector_length(unsigned int vl) vl = max_vl; bit = find_next_bit(sve_vq_map, SVE_VQ_MAX, - vq_to_bit(sve_vq_from_vl(vl))); - return sve_vl_from_vq(bit_to_vq(bit)); + __vq_to_bit(sve_vq_from_vl(vl))); + return sve_vl_from_vq(__bit_to_vq(bit)); } #ifdef CONFIG_SYSCTL @@ -648,7 +629,7 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX)) write_sysreg_s(zcr | (vq - 1), SYS_ZCR_EL1); /* self-syncing */ vl = sve_get_vl(); vq = sve_vq_from_vl(vl); /* skip intervening lengths */ - set_bit(vq_to_bit(vq), map); + set_bit(__vq_to_bit(vq), map); } } @@ -717,7 +698,7 @@ int sve_verify_vq_map(void) * Mismatches above sve_max_virtualisable_vl are fine, since * no guest is allowed to configure ZCR_EL2.LEN to exceed this: */ - if (sve_vl_from_vq(bit_to_vq(b)) <= sve_max_virtualisable_vl) { + if (sve_vl_from_vq(__bit_to_vq(b)) <= sve_max_virtualisable_vl) { pr_warn("SVE: cpu%d: Unsupported vector length(s) present\n", smp_processor_id()); return -EINVAL; @@ -801,8 +782,8 @@ void __init sve_setup(void) * so sve_vq_map must have at least SVE_VQ_MIN set. * If something went wrong, at least try to patch it up: */ -
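The reversed bit numbering is what makes the find_next_bit() call in find_supported_vector_length() return the largest supported VQ not exceeding the request: searching upwards through bit positions walks VQs in decreasing order. A worked example, assuming only VQs 1, 2 and 4 are supported:

/* sve_vq_map has bits __vq_to_bit(1), __vq_to_bit(2) and __vq_to_bit(4)
 * set, i.e. SVE_VQ_MAX - 1, SVE_VQ_MAX - 2 and SVE_VQ_MAX - 4.
 */
bit = find_next_bit(sve_vq_map, SVE_VQ_MAX, __vq_to_bit(3));
/* bit == SVE_VQ_MAX - 2, and __bit_to_vq(bit) == 2: the largest
 * supported VQ that does not exceed the requested VQ of 3.
 */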
[PATCH v7 02/27] arm64: fpsimd: Always set TIF_FOREIGN_FPSTATE on task state flush
This patch updates fpsimd_flush_task_state() to mirror the new semantics of fpsimd_flush_cpu_state() introduced by commit d8ad71fa38a9 ("arm64: fpsimd: Fix TIF_FOREIGN_FPSTATE after invalidating cpu regs"). Both functions now implicitly set TIF_FOREIGN_FPSTATE to indicate that the task's FPSIMD state is not loaded into the cpu. As a side-effect, fpsimd_flush_task_state() now sets TIF_FOREIGN_FPSTATE even for non-running tasks. In the case of non-running tasks this is not useful but also harmless, because the flag is live only while the corresponding task is running. This function is not called from fast paths, so special-casing this for the task == current case is not really worth it. Compiler barriers previously present in restore_sve_fpsimd_context() are pulled into fpsimd_flush_task_state() so that it can be safely called with preemption enabled if necessary. Explicit calls to set TIF_FOREIGN_FPSTATE that accompany fpsimd_flush_task_state() calls and are now redundant are removed as appropriate. fpsimd_flush_task_state() is used to get exclusive access to the representation of the task's state via task_struct, for the purpose of replacing the state. Thus, the call to this function should happen before manipulating fpsimd_state or sve_state etc. in task_struct. Anomalous cases are reordered appropriately in order to make the code more consistent, although there should be no functional difference since these cases are protected by local_bh_disable() anyway. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Reviewed-by: Julien Grall Tested-by: zhang.lei --- arch/arm64/kernel/fpsimd.c | 25 +++-- arch/arm64/kernel/signal.c | 5 - 2 files changed, 19 insertions(+), 11 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 5ebe73b..62c37f0 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -550,7 +550,6 @@ int sve_set_vector_length(struct task_struct *task, local_bh_disable(); fpsimd_save(); - set_thread_flag(TIF_FOREIGN_FPSTATE); } fpsimd_flush_task_state(task); @@ -816,12 +815,11 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs) local_bh_disable(); fpsimd_save(); - fpsimd_to_sve(current); /* Force ret_to_user to reload the registers: */ fpsimd_flush_task_state(current); - set_thread_flag(TIF_FOREIGN_FPSTATE); + fpsimd_to_sve(current); if (test_and_set_thread_flag(TIF_SVE)) WARN_ON(1); /* SVE access shouldn't have trapped */ @@ -894,9 +892,9 @@ void fpsimd_flush_thread(void) local_bh_disable(); + fpsimd_flush_task_state(current); memset(&current->thread.uw.fpsimd_state, 0, sizeof(current->thread.uw.fpsimd_state)); - fpsimd_flush_task_state(current); if (system_supports_sve()) { clear_thread_flag(TIF_SVE); @@ -933,8 +931,6 @@ void fpsimd_flush_thread(void) current->thread.sve_vl_onexec = 0; } - set_thread_flag(TIF_FOREIGN_FPSTATE); - local_bh_enable(); } @@ -1043,12 +1039,29 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state) /* * Invalidate live CPU copies of task t's FPSIMD state + * + * This function may be called with preemption enabled. The barrier() + * ensures that the assignment to fpsimd_cpu is visible to any + * preemption/softirq that could race with set_tsk_thread_flag(), so + * that TIF_FOREIGN_FPSTATE cannot be spuriously re-cleared. + * + * The final barrier ensures that TIF_FOREIGN_FPSTATE is seen set by any + * subsequent code.
*/ void fpsimd_flush_task_state(struct task_struct *t) { t->thread.fpsimd_cpu = NR_CPUS; + + barrier(); + set_tsk_thread_flag(t, TIF_FOREIGN_FPSTATE); + + barrier(); } +/* + * Invalidate any task's FPSIMD state that is present on this cpu. + * This function must be called with softirqs disabled. + */ void fpsimd_flush_cpu_state(void) { __this_cpu_write(fpsimd_last_state.st, NULL); diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 867a7ce..a9b0485 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -296,11 +296,6 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) */ fpsimd_flush_task_state(current); - barrier(); - /* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */ - - set_thread_flag(TIF_FOREIGN_FPSTATE); - barrier(); /* From now, fpsimd_thread_switch() won't touch thread.sve_state */ sve_alloc(current); -- 2.1.4
[PATCH v7 07/27] arm64/sve: Check SVE virtualisability
Due to the way the effective SVE vector length is controlled and trapped at different exception levels, certain mismatches in the sets of vector lengths supported by different physical CPUs in the system may prevent straightforward virtualisation of SVE at parity with the host. This patch analyses the extent to which SVE can be virtualised safely without interfering with migration of vcpus between physical CPUs, and rejects late secondary CPUs that would erode the situation further. It is left up to KVM to decide what to do with this information. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- QUESTION: The final structure of this code makes it quite natural to clamp the vector length for KVM guests to the maximum value supportable across all CPUs; such a value is guaranteed to exist, but may be surprisingly small on a given hardware platform. Should we be stricter and actually refuse to support KVM at all on such hardware? This may help to force people to fix Linux (or the architecture) if/when this issue comes up. For now, I stick with pr_warn() and make do with a limited SVE vector length for guests. Changes since v5: * pr_info() about the presence of unvirtualisable vector lengths in sve_setup() upgrade to pr_warn(), for consistency with sve_verify_vq_map(). --- arch/arm64/include/asm/fpsimd.h | 1 + arch/arm64/kernel/cpufeature.c | 2 +- arch/arm64/kernel/fpsimd.c | 86 ++--- 3 files changed, 73 insertions(+), 16 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index dd1ad39..964adc9 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -87,6 +87,7 @@ extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); extern u64 read_zcr_features(void); extern int __ro_after_init sve_max_vl; +extern int __ro_after_init sve_max_virtualisable_vl; #ifdef CONFIG_ARM64_SVE diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 4061de1..7f8cc51 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -1863,7 +1863,7 @@ static void verify_sve_features(void) unsigned int len = zcr & ZCR_ELx_LEN_MASK; if (len < safe_len || sve_verify_vq_map()) { - pr_crit("CPU%d: SVE: required vector length(s) missing\n", + pr_crit("CPU%d: SVE: vector length support mismatch\n", smp_processor_id()); cpu_die_early(); } diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index f59ea67..b219796a 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -18,6 +18,7 @@ */ #include +#include #include #include #include @@ -48,6 +49,7 @@ #include #include #include +#include #define FPEXC_IOF (1 << 0) #define FPEXC_DZF (1 << 1) @@ -130,14 +132,18 @@ static int sve_default_vl = -1; /* Maximum supported vector length across all CPUs (initially poisoned) */ int __ro_after_init sve_max_vl = SVE_VL_MIN; +int __ro_after_init sve_max_virtualisable_vl = SVE_VL_MIN; /* Set of available vector lengths, as vq_to_bit(vq): */ static __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); +/* Set of vector lengths present on at least one cpu: */ +static __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX); static void __percpu *efi_sve_state; #else /* ! CONFIG_ARM64_SVE */ /* Dummy declaration for code that will be optimised out: */ extern __ro_after_init DECLARE_BITMAP(sve_vq_map, SVE_VQ_MAX); +extern __ro_after_init DECLARE_BITMAP(sve_vq_partial_map, SVE_VQ_MAX); extern void __percpu *efi_sve_state; #endif /* ! 
CONFIG_ARM64_SVE */ @@ -623,12 +629,6 @@ int sve_get_current_vl(void) return sve_prctl_status(0); } -/* - * Bitmap for temporary storage of the per-CPU set of supported vector lengths - * during secondary boot. - */ -static DECLARE_BITMAP(sve_secondary_vq_map, SVE_VQ_MAX); - static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX)) { unsigned int vq, vl; @@ -654,6 +654,7 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX)) void __init sve_init_vq_map(void) { sve_probe_vqs(sve_vq_map); + bitmap_copy(sve_vq_partial_map, sve_vq_map, SVE_VQ_MAX); } /* @@ -663,8 +664,11 @@ void __init sve_init_vq_map(void) */ void sve_update_vq_map(void) { - sve_probe_vqs(sve_secondary_vq_map); - bitmap_and(sve_vq_map, sve_vq_map, sve_secondary_vq_map, SVE_VQ_MAX); + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + + sve_probe_vqs(tmp_map); + bitmap_and(sve_vq_map, sve_vq_map, tmp_map, SVE_VQ_MAX); + bitmap_or(sve_vq_partial_map, sve_vq_partial_map, tmp_map, SVE_VQ_MAX); } /* @@ -673,18 +677,48 @@ void sve_update_vq_map(void) */ int sve_verify_vq_map(void) { - int ret = 0; + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + unsigned
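The two bitmaps maintained above play complementary roles: sve_vq_map converges on the intersection of the per-CPU sets (lengths every CPU supports), while sve_vq_partial_map accumulates the union (lengths at least one CPU supports). A sketch of how the problematic set falls out of the two (illustrative only; the truncated hunk above performs this kind of computation when deciding what is virtualisable):

DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);

/* VQs that some CPU supports but not all CPUs do... */
bitmap_andnot(tmp_map, sve_vq_partial_map, sve_vq_map, SVE_VQ_MAX);

/* ...cannot safely be offered to a guest, since its vcpus may migrate
 * between physical CPUs. */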
[PATCH v7 16/27] KVM: arm64: Factor out core register ID enumeration
In preparation for adding logic to filter out some KVM_REG_ARM_CORE registers from the KVM_GET_REG_LIST output, this patch factors out the core register enumeration into a separate function and rebuilds num_core_regs() on top of it. This may be a little more expensive (depending on how good a job the compiler does of specialising the code), but KVM_GET_REG_LIST is not a hot path. This will make it easier to consolidate ID filtering code in one place. No functional change. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- Changes since v5: * New patch. This reimplements part of the separately-posted patch "KVM: arm64: Factor out KVM_GET_REG_LIST core register enumeration", minus aspects that potentially break the ABI. As a result, the opportunity to truly consolidate all the ID reg filtering in one place is deliberately left on the floor, for now. This will be addressed in a separate series later on. --- arch/arm64/kvm/guest.c | 33 + 1 file changed, 25 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 3e38eb2..a391a61 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -194,9 +195,28 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) return -EINVAL; } +static int kvm_arm_copy_core_reg_indices(u64 __user *uindices) +{ + unsigned int i; + int n = 0; + const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE; + + for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) { + if (uindices) { + if (put_user(core_reg | i, uindices)) + return -EFAULT; + uindices++; + } + + n++; + } + + return n; +} + static unsigned long num_core_regs(void) { - return sizeof(struct kvm_regs) / sizeof(__u32); + return kvm_arm_copy_core_reg_indices(NULL); } /** @@ -276,15 +296,12 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) */ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) { - unsigned int i; - const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE; int ret; - for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) { - if (put_user(core_reg | i, uindices)) - return -EFAULT; - uindices++; - } + ret = kvm_arm_copy_core_reg_indices(uindices); + if (ret < 0) + return ret; + uindices += ret; ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); if (ret) -- 2.1.4
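The NULL-uindices convention lets the same walker serve both phases of KVM_GET_REG_LIST (a usage sketch):

/* Phase 1: count only, nothing is copied to userspace. */
n = kvm_arm_copy_core_reg_indices(NULL);

/* Phase 2: fill in the n register IDs the caller made room for. */
ret = kvm_arm_copy_core_reg_indices(uindices);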
[PATCH v7 10/27] KVM: arm64: Propagate vcpu into read_id_reg()
Architecture features that are conditionally visible to the guest will require run-time checks in the ID register accessor functions. In particular, read_id_reg() will need to perform checks in order to generate the correct emulated value for certain ID register fields such as ID_AA64PFR0_EL1.SVE for example. This patch propagates vcpu into read_id_reg() so that future patches can add run-time checks on the guest configuration here. For now, there is no functional change. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei --- arch/arm64/kvm/sys_regs.c | 23 +-- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 539feec..a5d14b5 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1044,7 +1044,8 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu, } /* Read a sanitised cpufeature ID register by sys_reg_desc */ -static u64 read_id_reg(struct sys_reg_desc const *r, bool raz) +static u64 read_id_reg(const struct kvm_vcpu *vcpu, + struct sys_reg_desc const *r, bool raz) { u32 id = sys_reg((u32)r->Op0, (u32)r->Op1, (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); @@ -1078,7 +1079,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu, if (p->is_write) return write_to_read_only(vcpu, p, r); - p->regval = read_id_reg(r, raz); + p->regval = read_id_reg(vcpu, r, raz); return true; } @@ -1107,16 +1108,18 @@ static u64 sys_reg_to_index(const struct sys_reg_desc *reg); * are stored, and for set_id_reg() we don't allow the effective value * to be changed. */ -static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, +static int __get_id_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, void __user *uaddr, bool raz) { const u64 id = sys_reg_to_index(rd); - const u64 val = read_id_reg(rd, raz); + const u64 val = read_id_reg(vcpu, rd, raz); return reg_to_user(uaddr, &val, id); } -static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, +static int __set_id_reg(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd, void __user *uaddr, bool raz) { const u64 id = sys_reg_to_index(rd); @@ -1128,7 +1131,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, return err; /* This is what we mean by invariant: you can't change it.
*/ - if (val != read_id_reg(rd, raz)) + if (val != read_id_reg(vcpu, rd, raz)) return -EINVAL; return 0; @@ -1137,25 +1140,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr, static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __get_id_reg(rd, uaddr, false); + return __get_id_reg(vcpu, rd, uaddr, false); } static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __set_id_reg(rd, uaddr, false); + return __set_id_reg(vcpu, rd, uaddr, false); } static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __get_id_reg(rd, uaddr, true); + return __get_id_reg(vcpu, rd, uaddr, true); } static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, const struct kvm_one_reg *reg, void __user *uaddr) { - return __set_id_reg(rd, uaddr, true); + return __set_id_reg(vcpu, rd, uaddr, true); } static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, -- 2.1.4
[PATCH v7 24/27] KVM: arm64/sve: Allow userspace to enable SVE for vcpus
Now that all the pieces are in place, this patch offers a new flag KVM_ARM_VCPU_SVE that userspace can pass to KVM_ARM_VCPU_INIT to turn on SVE for the guest, on a per-vcpu basis. As part of this, support for initialisation and reset of the SVE vector length set and registers is added in the appropriate places, as well as finally setting the KVM_ARM64_GUEST_HAS_SVE vcpu flag, to turn on the SVE support code. Allocation of the SVE register storage in vcpu->arch.sve_state is deferred until the SVE configuration is finalized, by which time the size of the registers is known. Setting the vector lengths supported by the vcpu is considered configuration of the emulated hardware rather than runtime configuration, so no support is offered for changing the vector lengths available to an existing vcpu across reset. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- Changes since v6: * [Kristina Martšenko] Amend comments explaining the kvm_arm_vcpu_sve_finalized() versus !kvm_arm_vcpu_sve_finalized() cases in kvm_reset_vcpu(). Actually, I've just deleted the comments, since if anything they're more confusing than the code they're supposed to describe. Changes since v5: * Refactored to make the code flow clearer and clarify responsibility for the various initialisation phases/checks. In place of the previous, confusingly dual-purpose kvm_reset_sve(), enabling and resetting of SVE are split into separate functions and called as appropriate from kvm_reset_vcpu(). To avoid interactions with preempt_disable(), memory allocation is done in the kvm_vcpu_first_run_init() path instead. To achieve this, the SVE memory allocation is moved to kvm_arm_vcpu_finalize(), which now takes on the role of actually doing deferred setup instead of just setting a flag to indicate that the setup was done. * Add has_vhe() sanity-check into kvm_vcpu_enable_sve(), since it makes more sense here than when resetting the vcpu. * When checking for SVE finalization in kvm_reset_vcpu(), call the new SVE-specific function kvm_arm_vcpu_sve_finalized(). The new generic check kvm_arm_vcpu_is_finalized() is unnecessarily broad here: using the appropriate specific check makes the code more self-describing. * Definition of KVM_ARM_VCPU_SVE moved to KVM: arm64/sve: Add pseudo-register for the guest's vector lengths (which needs it for the KVM_ARM_VCPU_FINALIZE ioctl).
--- arch/arm64/include/asm/kvm_host.h | 3 +-- arch/arm64/kvm/reset.c| 43 ++- 2 files changed, 43 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5475cc4..9d57cf8 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -49,8 +49,7 @@ #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS -/* Will be incremented when KVM_ARM_VCPU_SVE is fully implemented: */ -#define KVM_VCPU_MAX_FEATURES 4 +#define KVM_VCPU_MAX_FEATURES 5 #define KVM_REQ_SLEEP \ KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index e7f9c06..32c5ac0 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -20,10 +20,12 @@ */ #include +#include #include #include #include #include +#include #include #include @@ -37,6 +39,7 @@ #include #include #include +#include /* Maximum phys_shift supported for any VM on this host */ static u32 kvm_ipa_limit; @@ -130,6 +133,27 @@ int kvm_arm_init_arch_resources(void) return 0; } +static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) +{ + if (!system_supports_sve()) + return -EINVAL; + + /* Verify that KVM startup enforced this when SVE was detected: */ + if (WARN_ON(!has_vhe())) + return -EINVAL; + + vcpu->arch.sve_max_vl = kvm_sve_max_vl; + + /* +* Userspace can still customize the vector lengths by writing +* KVM_REG_ARM64_SVE_VLS. Allocation is deferred until +* kvm_arm_vcpu_finalize(), which freezes the configuration. +*/ + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE; + + return 0; +} + /* * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. @@ -188,13 +212,20 @@ void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) kfree(vcpu->arch.sve_state); } +static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +{ + if (vcpu_has_sve(vcpu)) + memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); +} + /** * kvm_reset_vcpu - sets core registers and sys_regs to reset value * @vcpu: The VCPU pointer * * This function finds the right table above and sets the registers on * the virtual CPU struct to their architecturally defined reset - * values. + * values, except for registers whose reset is deferred until + *
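From the user-space side, turning the feature on looks roughly like this (a sketch; assumes the KVM_CAP_ARM_SVE check from the previous patch succeeded, and error handling is elided):

struct kvm_vcpu_init init;

ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
init.features[KVM_ARM_VCPU_SVE / 32] |= 1U << (KVM_ARM_VCPU_SVE % 32);
ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);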
[PATCH v7 21/27] KVM: arm/arm64: Add hook for arch-specific KVM initialisation
This patch adds a kvm_arm_init_arch_resources() hook to perform subarch-specific initialisation when starting up KVM. This will be used in a subsequent patch for global SVE-related setup on arm64. No functional change. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Tested-by: zhang.lei --- arch/arm/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/kvm_host.h | 2 ++ virt/kvm/arm/arm.c| 4 3 files changed, 8 insertions(+) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 770d732..a49ee01 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -53,6 +53,8 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); +static inline int kvm_arm_init_arch_resources(void) { return 0; } + u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode); int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 205438a..3e89509 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -58,6 +58,8 @@ DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); +static inline int kvm_arm_init_arch_resources(void) { return 0; } + int __attribute_const__ kvm_target_cpu(void); int kvm_reset_vcpu(struct kvm_vcpu *vcpu); int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext); diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 99c3738..c69e137 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1664,6 +1664,10 @@ int kvm_arch_init(void *opaque) if (err) return err; + err = kvm_arm_init_arch_resources(); + if (err) + return err; + if (!in_hyp_mode) { err = init_hyp_mode(); if (err) -- 2.1.4
[PATCH v7 22/27] KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl
Some aspects of vcpu configuration may be too complex to be completed inside KVM_ARM_VCPU_INIT. Thus, there may be a requirement for userspace to do some additional configuration before various other ioctls will work in a consistent way. In particular this will be the case for SVE, where userspace will need to negotiate the set of vector lengths to be made available to the guest before the vcpu becomes fully usable.

In order to provide an explicit way for userspace to confirm that it has finished setting up a particular vcpu feature, this patch adds a new ioctl KVM_ARM_VCPU_FINALIZE.

When userspace has opted into a feature that requires finalization, typically by means of a feature flag passed to KVM_ARM_VCPU_INIT, a matching call to KVM_ARM_VCPU_FINALIZE is now required before KVM_RUN or KVM_GET_REG_LIST is allowed. Individual features may impose additional restrictions where appropriate.

No existing vcpu features are affected by this, so current userspace implementations will continue to work exactly as before, with no need to issue KVM_ARM_VCPU_FINALIZE.

As implemented in this patch, KVM_ARM_VCPU_FINALIZE is currently a placeholder: no finalizable features exist yet, so the ioctl is not required and will always yield EINVAL. Subsequent patches will add the finalization logic to make use of this ioctl for SVE.

No functional change for existing userspace.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
---

Changes since v5:

 * Commit message, including subject line, rewritten.

   This patch is a rework of "KVM: arm/arm64: Add hook to finalize the
   vcpu configuration". The old subject line and commit message no
   longer accurately described what the patch does. However, the code
   is an evolution of the previous patch rather than a wholesale
   rewrite.

 * Added an explicit KVM_ARM_VCPU_FINALIZE ioctl, rather than just
   providing internal hooks in the kernel to finalize the vcpu
   configuration implicitly. This allows userspace to confirm exactly
   when it has finished configuring the vcpu and is ready to use it.

   This results in simpler (and hopefully more maintainable) ioctl
   ordering rules.
---
 arch/arm/include/asm/kvm_host.h   |  4 ++++
 arch/arm64/include/asm/kvm_host.h |  4 ++++
 include/uapi/linux/kvm.h          |  3 +++
 virt/kvm/arm/arm.c                | 18 ++++++++++++++++++
 4 files changed, 29 insertions(+)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index a49ee01..e80cfc1 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -19,6 +19,7 @@
 #ifndef __ARM_KVM_HOST_H__
 #define __ARM_KVM_HOST_H__
 
+#include
 #include
 #include
 #include
@@ -411,4 +412,7 @@ static inline int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	return 0;
 }
 
+#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
+#define kvm_arm_vcpu_is_finalized(vcpu) true
+
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3e89509..98658f7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -23,6 +23,7 @@
 #define __ARM64_KVM_HOST_H__
 
 #include
+#include
 #include
 #include
 #include
@@ -625,4 +626,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
 
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
 
+#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
+#define kvm_arm_vcpu_is_finalized(vcpu) true
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index dc77a5a..c3b8e7a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1441,6 +1441,9 @@ struct kvm_enc_region {
 /* Available with KVM_CAP_HYPERV_CPUID */
 #define KVM_GET_SUPPORTED_HV_CPUID _IOWR(KVMIO, 0xc1, struct kvm_cpuid2)
 
+/* Available with KVM_CAP_ARM_SVE */
+#define KVM_ARM_VCPU_FINALIZE	_IOW(KVMIO, 0xc2, int)
+
 /* Secure Encrypted Virtualization command */
 enum sev_cmd_id {
 	/* Guest initialization commands */
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index c69e137..9edbf0f 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -545,6 +545,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	if (likely(vcpu->arch.has_run_once))
 		return 0;
 
+	if (!kvm_arm_vcpu_is_finalized(vcpu))
+		return -EPERM;
+
 	vcpu->arch.has_run_once = true;
 
 	if (likely(irqchip_in_kernel(kvm))) {
@@ -1116,6 +1119,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		if (unlikely(!kvm_vcpu_initialized(vcpu)))
 			break;
 
+		r = -EPERM;
+		if (!kvm_arm_vcpu_is_finalized(vcpu))
+			break;
+
 		r = -EFAULT;
 		if (copy_from_user(&reg_list, user_list, sizeof(reg_list)))
 			break;
@@ -1169,6 +1176,17 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
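To make the ordering rules concrete, the sequence this ioctl enables looks roughly as follows from userspace. This is a sketch only: it assumes the KVM_ARM_VCPU_SVE feature flag and the KVM_REG_ARM64_SVE_VLS pseudo-register added later in this series, and elides error handling.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch, not reference code: configure and finalize an SVE vcpu.
 * Until KVM_ARM_VCPU_FINALIZE succeeds, KVM_RUN and KVM_GET_REG_LIST
 * fail with EPERM (see kvm_vcpu_first_run_init() above).
 */
static void vcpu_bringup_sve(int vcpu_fd, struct kvm_vcpu_init *init)
{
	int feature = KVM_ARM_VCPU_SVE;		/* feature to finalize */

	init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);

	/* optionally write KVM_REG_ARM64_SVE_VLS here, before finalizing */

	/* KVM_ARM_VCPU_FINALIZE takes a pointer to the feature to finalize */
	ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);

	/* KVM_RUN and KVM_GET_REG_LIST are permitted from this point on */
}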
[PATCH v7 01/27] KVM: Documentation: Document arm64 core registers in detail
Since the sizes of individual members of the core arm64 registers vary, the list of register encodings that make sense is not a simple linear sequence. To clarify which encodings to use, this patch adds a brief list to the documentation.

Signed-off-by: Dave Martin
Reviewed-by: Julien Grall
Reviewed-by: Peter Maydell
---
 Documentation/virtual/kvm/api.txt | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 7de9eee..2d4f7ce 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2107,6 +2107,30 @@
 contains elements ranging from 32 to 128 bits. The index is a 32bit value in
 the kvm_regs structure seen as a 32bit array.
   0x60x0 0000 0010 <index into the kvm_regs struct:16>
 
+Specifically:
+    Encoding            Register  Bits  kvm_regs member
+
+  0x6030 0000 0010 0000 X0          64  regs.regs[0]
+  0x6030 0000 0010 0002 X1          64  regs.regs[1]
+    ...
+  0x6030 0000 0010 003c X30         64  regs.regs[30]
+  0x6030 0000 0010 003e SP          64  regs.sp
+  0x6030 0000 0010 0040 PC          64  regs.pc
+  0x6030 0000 0010 0042 PSTATE      64  regs.pstate
+  0x6030 0000 0010 0044 SP_EL1      64  sp_el1
+  0x6030 0000 0010 0046 ELR_EL1     64  elr_el1
+  0x6030 0000 0010 0048 SPSR_EL1    64  spsr[KVM_SPSR_EL1] (alias SPSR_SVC)
+  0x6030 0000 0010 004a SPSR_ABT    64  spsr[KVM_SPSR_ABT]
+  0x6030 0000 0010 004c SPSR_UND    64  spsr[KVM_SPSR_UND]
+  0x6030 0000 0010 004e SPSR_IRQ    64  spsr[KVM_SPSR_IRQ]
+  0x6030 0000 0010 0050 SPSR_FIQ    64  spsr[KVM_SPSR_FIQ]
+  0x6040 0000 0010 0054 V0         128  fp_regs.vregs[0]
+  0x6040 0000 0010 0058 V1         128  fp_regs.vregs[1]
+    ...
+  0x6040 0000 0010 00d0 V31        128  fp_regs.vregs[31]
+  0x6020 0000 0010 00d4 FPSR        32  fp_regs.fpsr
+  0x6020 0000 0010 00d5 FPCR        32  fp_regs.fpcr
+
 arm64 CCSIDR registers are demultiplexed by CSSELR value:
   0x6020 0000 0011 00 <csselr:8>
-- 
2.1.4
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
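The table rows can be assembled mechanically from the existing UAPI macros. A standalone sketch (the printed constants correspond to the X1 and V31 rows above):

#include <stdio.h>
#include <linux/kvm.h>		/* KVM_REG_ARM64, KVM_REG_SIZE_U64/U128 */
#include <asm/kvm.h>		/* struct kvm_regs, KVM_REG_ARM_CORE(_REG) */

/*
 * Illustrative only: KVM_REG_ARM_CORE_REG() yields the offset into
 * struct kvm_regs in 32-bit words, which is the index field of the ID.
 */
int main(void)
{
	__u64 x1 = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE |
		   KVM_REG_ARM_CORE_REG(regs.regs[1]);
	__u64 v31 = KVM_REG_ARM64 | KVM_REG_SIZE_U128 | KVM_REG_ARM_CORE |
		    KVM_REG_ARM_CORE_REG(fp_regs.vregs[31]);

	printf("X1  = 0x%016llx\n", (unsigned long long)x1);  /* 0x6030000000100002 */
	printf("V31 = 0x%016llx\n", (unsigned long long)v31); /* 0x60400000001000d0 */
	return 0;
}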
[PATCH v7 06/27] arm64/sve: Clarify role of the VQ map maintenance functions
The roles of sve_init_vq_map(), sve_update_vq_map() and sve_verify_vq_map() are highly non-obvious to anyone who has not dug through cpufeatures.c in detail. Since the way these functions interact with each other is more important here than a full understanding of the cpufeatures code, this patch adds comments to make the functions' roles clearer. No functional change. Signed-off-by: Dave Martin Reviewed-by: Julien Thierry Reviewed-by: Julien Grall Tested-by: zhang.lei --- Changes since v5: * [Julien Thierry] This patch is useful for explaining the previous patch (as per the v5 ordering), and is anyway non-functional. Swapped it with the previous patch to provide a more logical reading order for the series. --- arch/arm64/kernel/fpsimd.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 62c37f0..f59ea67 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -647,6 +647,10 @@ static void sve_probe_vqs(DECLARE_BITMAP(map, SVE_VQ_MAX)) } } +/* + * Initialise the set of known supported VQs for the boot CPU. + * This is called during kernel boot, before secondary CPUs are brought up. + */ void __init sve_init_vq_map(void) { sve_probe_vqs(sve_vq_map); @@ -655,6 +659,7 @@ void __init sve_init_vq_map(void) /* * If we haven't committed to the set of supported VQs yet, filter out * those not supported by the current CPU. + * This function is called during the bring-up of early secondary CPUs only. */ void sve_update_vq_map(void) { @@ -662,7 +667,10 @@ void sve_update_vq_map(void) bitmap_and(sve_vq_map, sve_vq_map, sve_secondary_vq_map, SVE_VQ_MAX); } -/* Check whether the current CPU supports all VQs in the committed set */ +/* + * Check whether the current CPU supports all VQs in the committed set. + * This function is called during the bring-up of late secondary CPUs only. + */ int sve_verify_vq_map(void) { int ret = 0; -- 2.1.4 ___ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
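The cooperation between the three functions is essentially a shrinking-set protocol across CPU bring-up. The following user-space model is illustrative only, not the kernel code: probe_vqs() stands in for sve_probe_vqs(), and SVE_VQ_MAX matches the kernel's value.

#include <stdint.h>

#define SVE_VQ_MAX 512

/* One bit per supported vector quantum (VQ): the committed system set. */
static uint8_t vq_map[SVE_VQ_MAX / 8];

/* Stand-in for sve_probe_vqs(): fill in the calling CPU's supported set. */
extern void probe_vqs(uint8_t map[SVE_VQ_MAX / 8]);

/* Boot CPU, before secondaries: start from the boot CPU's set. */
void init_vq_map(void)
{
	probe_vqs(vq_map);
}

/* Early secondary: shrink the committed set to the intersection. */
void update_vq_map(void)
{
	uint8_t mine[SVE_VQ_MAX / 8];

	probe_vqs(mine);
	for (unsigned int i = 0; i < sizeof(vq_map); i++)
		vq_map[i] &= mine[i];
}

/* Late (hotplugged) secondary: the set is frozen; just check it. */
int verify_vq_map(void)
{
	uint8_t mine[SVE_VQ_MAX / 8];

	probe_vqs(mine);
	for (unsigned int i = 0; i < sizeof(vq_map); i++)
		if (vq_map[i] & ~mine[i])
			return -1;	/* CPU can't support the committed set */
	return 0;
}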
[PATCH v7 17/27] KVM: arm64: Reject ioctl access to FPSIMD V-regs on SVE vcpus
In order to avoid the pointless complexity of maintaining two ioctl register access views of the same data, this patch blocks ioctl access to the FPSIMD V-registers on vcpus that support SVE.

This will make it more straightforward to add SVE register access support.

Since SVE is an opt-in feature for userspace, this will not affect existing users.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
---

Changes since v5:

 * Refactored to cope with the removal of core_reg_size_from_offset()
   (which was added by another series which will now be handled
   independently).

   This leaves some duplication in that we still filter the V-regs out
   in two places, but this is no worse than other existing code in
   guest.c. I plan to tidy this up independently later on.
---
 arch/arm64/kvm/guest.c | 48 ++++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index a391a61..756d0d6 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -54,12 +54,19 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static bool core_reg_offset_is_vreg(u64 off)
+{
+	return off >= KVM_REG_ARM_CORE_REG(fp_regs.vregs) &&
+		off < KVM_REG_ARM_CORE_REG(fp_regs.fpsr);
+}
+
 static u64 core_reg_offset_from_id(u64 id)
 {
 	return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_one_reg *reg)
+static int validate_core_offset(const struct kvm_vcpu *vcpu,
+				const struct kvm_one_reg *reg)
 {
 	u64 off = core_reg_offset_from_id(reg->id);
 	int size;
@@ -91,11 +98,19 @@ static int validate_core_offset(const struct kvm_one_reg *reg)
 		return -EINVAL;
 	}
 
-	if (KVM_REG_SIZE(reg->id) == size &&
-	    IS_ALIGNED(off, size / sizeof(__u32)))
-		return 0;
+	if (KVM_REG_SIZE(reg->id) != size ||
+	    !IS_ALIGNED(off, size / sizeof(__u32)))
+		return -EINVAL;
 
-	return -EINVAL;
+	/*
+	 * The KVM_REG_ARM64_SVE regs must be used instead of
+	 * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
+	 * SVE-enabled vcpus:
+	 */
+	if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
+		return -EINVAL;
+
+	return 0;
 }
 
 static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
@@ -117,7 +132,7 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
 		return -ENOENT;
 
-	if (validate_core_offset(reg))
+	if (validate_core_offset(vcpu, reg))
 		return -EINVAL;
 
 	if (copy_to_user(uaddr, ((u32 *)regs) + off, KVM_REG_SIZE(reg->id)))
@@ -142,7 +157,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	    (off + (KVM_REG_SIZE(reg->id) / sizeof(__u32))) >= nr_regs)
 		return -ENOENT;
 
-	if (validate_core_offset(reg))
+	if (validate_core_offset(vcpu, reg))
 		return -EINVAL;
 
 	if (KVM_REG_SIZE(reg->id) > sizeof(tmp))
@@ -195,13 +210,22 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	return -EINVAL;
 }
 
-static int kvm_arm_copy_core_reg_indices(u64 __user *uindices)
+static int copy_core_reg_indices(const struct kvm_vcpu *vcpu,
+				 u64 __user *uindices)
 {
 	unsigned int i;
 	int n = 0;
 	const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
 
 	for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
+		/*
+		 * The KVM_REG_ARM64_SVE regs must be used instead of
+		 * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
+		 * SVE-enabled vcpus:
+		 */
+		if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i))
+			continue;
+
 		if (uindices) {
 			if (put_user(core_reg | i, uindices))
				return -EFAULT;
@@ -214,9 +238,9 @@ static int kvm_arm_copy_core_reg_indices(u64 __user *uindices)
 	return n;
 }
 
-static unsigned long num_core_regs(void)
+static unsigned long num_core_regs(const struct kvm_vcpu *vcpu)
 {
-	return kvm_arm_copy_core_reg_indices(NULL);
+	return copy_core_reg_indices(vcpu, NULL);
 }
 
 /**
@@ -281,7 +305,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
 	unsigned long res = 0;
 
-	res += num_core_regs();
+	res += num_core_regs(vcpu);
 	res += kvm_arm_num_sys_reg_descs(vcpu);
 	res += kvm_arm_get_fw_num_regs(vcpu);
 	res += NUM_TIMER_REGS;
@@ -298,7 +322,7 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu,
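From userspace, the visible effect of the patch above is that the core-reg encoding of a V-register stops working on SVE vcpus, and the shared bits must be reached through the corresponding Z-register. A hedged sketch: KVM_REG_ARM64_SVE_ZREG() is the accessor added elsewhere in this series, and buf must hold KVM_REG_SIZE(r.id) bytes.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <asm/kvm.h>

/*
 * Sketch of the userspace-visible change: on an SVE-enabled vcpu the
 * core-reg encoding of V0 is rejected with EINVAL, and the data must
 * be accessed through Z0 (whose low 128 bits alias V0).
 */
static int read_v0_or_z0(int vcpu_fd, int sve_enabled, void *buf)
{
	struct kvm_one_reg r = {
		.addr = (__u64)(unsigned long)buf,
	};

	if (sve_enabled)
		r.id = KVM_REG_ARM64_SVE_ZREG(0, 0);	/* slice 0 of Z0 */
	else
		r.id = KVM_REG_ARM64 | KVM_REG_SIZE_U128 | KVM_REG_ARM_CORE |
		       KVM_REG_ARM_CORE_REG(fp_regs.vregs[0]);

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &r);	/* 0 on success */
}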
[PATCH v7 26/27] KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG
KVM_GET_ONE_REG and KVM_SET_ONE_REG return some error codes that are not documented (but hopefully not surprising either). To give an indication of what these may mean, this patch adds brief documentation. Signed-off-by: Dave Martin --- Documentation/virtual/kvm/api.txt | 6 ++ 1 file changed, 6 insertions(+) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 2d4f7ce..cd920dd 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -1871,6 +1871,9 @@ Architectures: all Type: vcpu ioctl Parameters: struct kvm_one_reg (in) Returns: 0 on success, negative value on failure +Errors: + ENOENT: no such register + EINVAL: other errors, such as bad size encoding for a known register struct kvm_one_reg { __u64 id; @@ -2192,6 +2195,9 @@ Architectures: all Type: vcpu ioctl Parameters: struct kvm_one_reg (in and out) Returns: 0 on success, negative value on failure +Errors: + ENOENT: no such register + EINVAL: other errors, such as bad size encoding for a known register This ioctl allows to receive the value of a single register implemented in a vcpu. The register to read is indicated by the "id" field of the -- 2.1.4 ___ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
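In practice the convention being documented lets userspace distinguish probing failures. A small hedged example of how a caller might classify them:

#include <errno.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch: classify a KVM_GET_ONE_REG failure per the documentation
 * above. ENOENT means the register does not exist at all; EINVAL
 * covers other errors, such as a bad size encoding for a known
 * register.
 */
static const char *classify_get_one_reg(int vcpu_fd, struct kvm_one_reg *reg)
{
	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, reg) == 0)
		return "ok";
	if (errno == ENOENT)
		return "no such register";
	if (errno == EINVAL)
		return "known register, malformed access (e.g. bad size)";
	return "other error (e.g. EFAULT on a bad user buffer)";
}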
[PATCH v7 19/27] KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST
This patch includes the SVE register IDs in the list returned by KVM_GET_REG_LIST, as appropriate.

On a non-SVE-enabled vcpu, no new IDs are added.

On an SVE-enabled vcpu, IDs for the FPSIMD V-registers are removed from the list, since userspace is required to access the Z-registers instead in order to access the V-register content. For the variably-sized SVE registers, the appropriate set of slice IDs are enumerated, depending on the maximum vector length for the vcpu.

As it currently stands, the SVE architecture never requires more than one slice to exist per register, so this patch adds no explicit support for enumerating multiple slices. The code can be extended straightforwardly to support this in the future, if needed.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
---

Changes since v6:

 * [Julien Thierry] Add a #define to replace the magic "slices = 1",
   and add a comment explaining to maintainers what needs to happen if
   this is updated in the future.

Changes since v5:

(Dropped Julien Thierry's Reviewed-by due to non-trivial rebasing)

 * Move mis-split rework, to prevent put_user()s being accidentally the
   wrong size, from "KVM: arm64/sve: Add pseudo-register for the
   guest's vector lengths".
---
 arch/arm64/kvm/guest.c | 63 ++
 1 file changed, 63 insertions(+)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 736d8cb..2aa80a5 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -222,6 +222,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define KVM_SVE_ZREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_ZREG(0, 0))
 #define KVM_SVE_PREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0))
 
+/*
+ * number of register slices required to cover each whole SVE register on vcpu
+ * NOTE: If you are tempted to modify this, you must also rework
+ * sve_reg_to_region() to match:
+ */
+#define vcpu_sve_slices(vcpu) 1
+
 /* Bounds of a single SVE register slice within vcpu->arch.sve_state */
 struct sve_state_reg_region {
 	unsigned int koffset;	/* offset into sve_state in kernel memory */
@@ -411,6 +418,56 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
 }
 
+static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
+{
+	/* Only the first slice ever exists, for now */
+	const unsigned int slices = vcpu_sve_slices(vcpu);
+
+	if (!vcpu_has_sve(vcpu))
+		return 0;
+
+	return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */);
+}
+
+static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
+				u64 __user *uindices)
+{
+	/* Only the first slice ever exists, for now */
+	const unsigned int slices = vcpu_sve_slices(vcpu);
+	u64 reg;
+	unsigned int i, n;
+	int num_regs = 0;
+
+	if (!vcpu_has_sve(vcpu))
+		return 0;
+
+	for (i = 0; i < slices; i++) {
+		for (n = 0; n < SVE_NUM_ZREGS; n++) {
+			reg = KVM_REG_ARM64_SVE_ZREG(n, i);
+			if (put_user(reg, uindices++))
+				return -EFAULT;
+
+			num_regs++;
+		}
+
+		for (n = 0; n < SVE_NUM_PREGS; n++) {
+			reg = KVM_REG_ARM64_SVE_PREG(n, i);
+			if (put_user(reg, uindices++))
+				return -EFAULT;
+
+			num_regs++;
+		}
+
+		reg = KVM_REG_ARM64_SVE_FFR(i);
+		if (put_user(reg, uindices++))
+			return -EFAULT;
+
+		num_regs++;
+	}
+
+	return num_regs;
+}
+
 /**
  * kvm_arm_num_regs - how many registers do we present via KVM_GET_ONE_REG
  *
@@ -421,6 +478,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 	unsigned long res = 0;
 
 	res += num_core_regs(vcpu);
+	res += num_sve_regs(vcpu);
 	res += kvm_arm_num_sys_reg_descs(vcpu);
 	res += kvm_arm_get_fw_num_regs(vcpu);
 	res += NUM_TIMER_REGS;
@@ -442,6 +500,11 @@ int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 		return ret;
 	uindices += ret;
 
+	ret = copy_sve_reg_indices(vcpu, uindices);
+	if (ret)
+		return ret;
+	uindices += ret;
+
 	ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices);
 	if (ret)
 		return ret;
-- 
2.1.4
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
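For completeness, a hedged sketch of how userspace consumes this enumeration, using the standard two-call KVM_GET_REG_LIST pattern. The KVM_REG_ARM64_SVE coproc prefix comes from the arm64 UAPI header added earlier in this series; the expected count of 49 per slice (32 Z + 16 P + FFR) reflects this patch only, and later patches may enumerate more (e.g. the vector-lengths pseudo-register).

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <asm/kvm.h>

/* Sketch: count the SVE register IDs reported by KVM_GET_REG_LIST. */
static long count_sve_regs(int vcpu_fd)
{
	struct kvm_reg_list probe = { .n = 0 }, *list;
	long count = 0;
	__u64 i;

	ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);	/* fails with E2BIG, sets n */

	list = calloc(1, sizeof(*list) + probe.n * sizeof(__u64));
	if (!list)
		return -1;
	list->n = probe.n;
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list))
		return -1;

	for (i = 0; i < list->n; i++)
		if ((list->reg[i] & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM64_SVE)
			count++;

	free(list);
	return count;
}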
[PATCH v7 14/27] KVM: Allow 2048-bit register access via ioctl interface
The Arm SVE architecture defines registers that are up to 2048 bits in size (with some possibility of further future expansion).

In order to avoid the need for an excessively large number of ioctls when saving and restoring a vcpu's registers, this patch adds a #define to make support for individual 2048-bit registers through the KVM_{GET,SET}_ONE_REG ioctl interface official. This will allow each SVE register to be accessed in a single call.

There are sufficient spare bits in the register id size field for this change, so there is no ABI impact, providing that KVM_GET_REG_LIST does not enumerate any 2048-bit register unless userspace explicitly opts in to the relevant architecture-specific features.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
---
 include/uapi/linux/kvm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d4ea4b..dc77a5a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1145,6 +1145,7 @@ struct kvm_dirty_tlb {
 #define KVM_REG_SIZE_U256	0x0050000000000000ULL
 #define KVM_REG_SIZE_U512	0x0060000000000000ULL
 #define KVM_REG_SIZE_U1024	0x0070000000000000ULL
+#define KVM_REG_SIZE_U2048	0x0080000000000000ULL
 
 struct kvm_reg_list {
 	__u64 n; /* number of regs */
-- 
2.1.4
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
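The size field of a register ID encodes log2 of the transfer size in bytes, so the new value stays within the existing scheme. A small sketch using the UAPI definitions:

#include <stdio.h>
#include <linux/kvm.h>

/*
 * Sketch: KVM_REG_SIZE_U2048 decodes to a 256-byte (2048-bit)
 * transfer -- one whole SVE Z-register at the architectural maximum
 * vector length.
 */
int main(void)
{
	__u64 id_size = KVM_REG_SIZE_U2048;
	unsigned int bytes =
		1U << ((id_size & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT);

	printf("U2048 -> %u bytes (%u bits)\n", bytes, bytes * 8); /* 256, 2048 */
	return 0;
}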
[PATCH v7 12/27] KVM: arm64/sve: System register context switch and access support
This patch adds the necessary support for context switching ZCR_EL1 for each vcpu.

ZCR_EL1 is trapped alongside the FPSIMD/SVE registers, so it makes sense for it to be handled as part of the guest FPSIMD/SVE context for context switch purposes instead of handling it as a general system register. This means that it can be switched in lazily at the appropriate time. No effort is made to track host context for this register, since SVE requires VHE: thus the host's value for this register lives permanently in ZCR_EL2 and does not alias the guest's value at any time.

The Hyp switch and fpsimd context handling code is extended appropriately.

Accessors are added in sys_regs.c to expose the SVE system registers and ID register fields. Because these need to be conditionally visible based on the guest configuration, they are implemented separately for now rather than by use of the generic system register helpers. This may be abstracted better later on when/if there are more features requiring this model.

ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the guest, but for compatibility with non-SVE aware KVM implementations the register should not be enumerated at all for KVM_GET_REG_LIST in this case. For consistency we also reject ioctl access to the register. This ensures that a non-SVE-enabled guest looks the same to userspace, irrespective of whether the kernel KVM implementation supports SVE.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
---

Changes since v5:

 * Port to the renamed visibility() framework.

 * Swap visibility() helpers so that they appear by the relevant
   accessor functions.

 * [Julien Grall] With the visibility() checks, {get,set}_zcr_el1()
   degenerate to doing exactly what the common code does, so drop them.

   The ID_AA64ZFR0_EL1 handlers are still needed to provide conditional
   RAZ behaviour. This could be moved to the common code too, but since
   this is a one-off case I don't do this for now. We can address this
   later if other regs need to follow the same pattern.

 * [Julien Thierry] Reset ZCR_EL1 to a fixed value using reset_val
   instead of relying on reset_unknown() honouring set bits in val as
   RES0.

   Most of the bits in ZCR_EL1 are RES0 anyway, and many
   implementations of SVE will support larger vectors than 128 bits, so
   0 seems as good a value as any to expose to guests that forget to
   initialise this register properly.
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/sysreg.h   |  3 ++
 arch/arm64/kvm/fpsimd.c           |  9 -
 arch/arm64/kvm/hyp/switch.c       |  3 ++
 arch/arm64/kvm/sys_regs.c         | 83 ---
 5 files changed, 93 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ad4f7f0..22cf484 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -121,6 +121,7 @@ enum vcpu_sysreg {
 	SCTLR_EL1,	/* System Control Register */
 	ACTLR_EL1,	/* Auxiliary Control Register */
 	CPACR_EL1,	/* Coprocessor Access Control */
+	ZCR_EL1,	/* SVE Control */
 	TTBR0_EL1,	/* Translation Table Base Register 0 */
 	TTBR1_EL1,	/* Translation Table Base Register 1 */
 	TCR_EL1,	/* Translation Control Register */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5b267de..4d6262d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -454,6 +454,9 @@
 #define SYS_ICH_LR14_EL2	__SYS__LR8_EL2(6)
 #define SYS_ICH_LR15_EL2	__SYS__LR8_EL2(7)
 
+/* VHE encodings for architectural EL0/1 system registers */
+#define SYS_ZCR_EL12		sys_reg(3, 5, 1, 2, 0)
+
 /* Common SCTLR_ELx flags. */
 #define SCTLR_ELx_DSSBS	(_BITUL(44))
 #define SCTLR_ELx_ENIA	(_BITUL(31))
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 1cf4f02..7053bf4 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -103,14 +103,21 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
 {
 	unsigned long flags;
+	bool host_has_sve = system_supports_sve();
+	bool guest_has_sve = vcpu_has_sve(vcpu);
 
 	local_irq_save(flags);
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
+		u64 *guest_zcr = &vcpu->arch.ctxt.sys_regs[ZCR_EL1];
+
 		/* Clean guest FP state to memory and invalidate cpu view */
 		fpsimd_save();
 		fpsimd_flush_cpu_state();
-	} else if (system_supports_sve()) {
+
+		if (guest_has_sve)
+			*guest_zcr = read_sysreg_s(SYS_ZCR_EL12);
+	} else if (host_has_sve) {
 		/*
 		 * The FPSIMD/SVE state in the CPU has not been touched, and we
 		 *
[PATCH v7 13/27] KVM: arm64/sve: Context switch the SVE registers
In order to give each vcpu its own view of the SVE registers, this patch adds context storage via a new sve_state pointer in struct vcpu_arch. An additional member sve_max_vl is also added for each vcpu, to determine the maximum vector length visible to the guest and thus the value to be configured in ZCR_EL2.LEN while the vcpu is active. This also determines the layout and size of the storage in sve_state, which is read and written by the same backend functions that are used for context-switching the SVE state for host tasks.

On SVE-enabled vcpus, SVE access traps are now handled by switching in the vcpu's SVE context and disabling the trap before returning to the guest. On other vcpus, the trap is not handled and an exit back to the host occurs, where the handle_sve() fallback path reflects an undefined instruction exception back to the guest, consistently with the behaviour of non-SVE-capable hardware (as was done unconditionally prior to this patch).

No SVE handling is added on non-VHE-only paths, since VHE is an architectural and Kconfig prerequisite of SVE.

Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Tested-by: zhang.lei
---

Changes since v5:

 * [Julien Thierry, Julien Grall] Commit message typo fixes

 * [Mark Rutland] Rename trap_class to hsr_ec, for consistency with
   existing code.

 * [Mark Rutland] Simplify condition for refusing to handle an
   FPSIMD/SVE trap, using multiple if () statements for clarity. The
   previous condition was a bit tortuous, and now that the static_key
   checks have been hoisted out, it makes little difference to the
   compiler how we express the condition here.
---
 arch/arm64/include/asm/kvm_host.h |  6
 arch/arm64/kvm/fpsimd.c           |  5 +--
 arch/arm64/kvm/hyp/switch.c       | 75 +--
 3 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 22cf484..4fabfd2 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -228,6 +228,8 @@ struct vcpu_reset_state {
 struct kvm_vcpu_arch {
 	struct kvm_cpu_context ctxt;
+	void *sve_state;
+	unsigned int sve_max_vl;
 
 	/* HYP configuration */
 	u64 hcr_el2;
@@ -323,6 +325,10 @@ struct kvm_vcpu_arch {
 	bool sysregs_loaded_on_cpu;
 };
 
+/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
+#define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \
+				      sve_ffr_offset((vcpu)->arch.sve_max_vl)))
+
 /* vcpu_arch flags field values: */
 #define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
 #define KVM_ARM64_FP_ENABLED		(1 << 1) /* guest FP regs loaded */
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 7053bf4..6e3c9c8 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -87,10 +87,11 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
 		fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs,
-					 NULL, SVE_VL_MIN);
+					 vcpu->arch.sve_state,
+					 vcpu->arch.sve_max_vl);
 
 		clear_thread_flag(TIF_FOREIGN_FPSTATE);
-		clear_thread_flag(TIF_SVE);
+		update_thread_flag(TIF_SVE, vcpu_has_sve(vcpu));
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 9d46066..5444b9c 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -100,7 +100,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
 	val = read_sysreg(cpacr_el1);
 	val |= CPACR_EL1_TTA;
 	val &= ~CPACR_EL1_ZEN;
-	if (!update_fp_enabled(vcpu)) {
+	if (update_fp_enabled(vcpu)) {
+		if (vcpu_has_sve(vcpu))
+			val |= CPACR_EL1_ZEN;
+	} else {
 		val &= ~CPACR_EL1_FPEN;
 		__activate_traps_fpsimd32(vcpu);
 	}
@@ -317,16 +320,48 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	return true;
 }
 
-static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
+/* Check for an FPSIMD/SVE trap and handle as appropriate */
+static bool __hyp_text __hyp_handle_fpsimd(struct kvm_vcpu *vcpu)
 {
-	struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state;
+	bool vhe, sve_guest, sve_host;
+	u8 hsr_ec;
 
-	if (has_vhe())
-		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
-			     cpacr_el1);
-	else
+	if (!system_supports_fpsimd())
+		return false;
+
+	if (system_supports_sve()) {
+		sve_guest = vcpu_has_sve(vcpu);
+		sve_host = vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE;
+		vhe = true;
+	} else {
+
[PATCH v7 09/27] KVM: arm64: Add a vcpu flag to control SVE visibility for the guest
Since SVE will be enabled or disabled on a per-vcpu basis, a flag is needed in order to track which vcpus have it enabled.

This patch adds a suitable flag and a helper for checking it.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
---
 arch/arm64/include/asm/kvm_host.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6d10100..ad4f7f0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -328,6 +328,10 @@ struct kvm_vcpu_arch {
 #define KVM_ARM64_FP_HOST		(1 << 2) /* host FP regs loaded */
 #define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3) /* backup for host TIF_SVE */
 #define KVM_ARM64_HOST_SVE_ENABLED	(1 << 4) /* SVE enabled for EL0 */
+#define KVM_ARM64_GUEST_HAS_SVE		(1 << 5) /* SVE exposed to guest */
+
+#define vcpu_has_sve(vcpu) (system_supports_sve() && \
+			    ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
 
-- 
2.1.4
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
[PATCH v7 03/27] KVM: arm64: Delete orphaned declaration for __fpsimd_enabled()
__fpsimd_enabled() no longer exists, but a dangling declaration has survived in kvm_hyp.h. This patch gets rid of it. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Tested-by: zhang.lei --- arch/arm64/include/asm/kvm_hyp.h | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 4da765f..ef8b839 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -149,7 +149,6 @@ void __debug_switch_to_host(struct kvm_vcpu *vcpu); void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); -bool __fpsimd_enabled(void); void activate_traps_vhe_load(struct kvm_vcpu *vcpu); void deactivate_traps_vhe_put(void); -- 2.1.4 ___ kvmarm mailing list kvmarm@lists.cs.columbia.edu https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
[PATCH v7 08/27] arm64/sve: Enable SVE state tracking for non-task contexts
The current FPSIMD/SVE context handling support for non-task (i.e., KVM vcpu) contexts does not take SVE into account. This means that only task contexts can safely use SVE at present.

In preparation for enabling KVM guests to use SVE, it is necessary to keep track of SVE state for non-task contexts too.

This patch adds the necessary support, removing assumptions from the context switch code about the location of the SVE context storage.

When binding a vcpu context, its vector length is arbitrarily specified as SVE_VL_MIN for now. In any case, because TIF_SVE is presently cleared at vcpu context bind time, the specified vector length will not be used for anything yet. In later patches TIF_SVE will be set here as appropriate, and the appropriate maximum vector length for the vcpu will be passed when binding.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Reviewed-by: Julien Grall
Tested-by: zhang.lei
---
 arch/arm64/include/asm/fpsimd.h |  3 ++-
 arch/arm64/kernel/fpsimd.c      | 20 +++-
 arch/arm64/kvm/fpsimd.c         |  5 -
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 964adc9..df7a143 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -56,7 +56,8 @@ extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 
 extern void fpsimd_bind_task_to_cpu(void);
-extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state);
+extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
+				     void *sve_state, unsigned int sve_vl);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_flush_cpu_state(void);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index b219796a..8a93afa 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -121,6 +121,8 @@
  */
 struct fpsimd_last_state_struct {
 	struct user_fpsimd_state *st;
+	void *sve_state;
+	unsigned int sve_vl;
 };
 
 static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state);
@@ -241,14 +243,15 @@ static void task_fpsimd_load(void)
  */
 void fpsimd_save(void)
 {
-	struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st);
+	struct fpsimd_last_state_struct const *last =
+		this_cpu_ptr(&fpsimd_last_state);
 	/* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */
 
 	WARN_ON(!in_softirq() && !irqs_disabled());
 
 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
 		if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
-			if (WARN_ON(sve_get_vl() != current->thread.sve_vl)) {
+			if (WARN_ON(sve_get_vl() != last->sve_vl)) {
 				/*
 				 * Can't save the user regs, so current would
 				 * re-enter user with corrupt state.
@@ -258,9 +261,11 @@ void fpsimd_save(void)
 				return;
 			}
 
-			sve_save_state(sve_pffr(&current->thread), &st->fpsr);
+			sve_save_state((char *)last->sve_state +
+						sve_ffr_offset(last->sve_vl),
+				       &last->st->fpsr);
 		} else
-			fpsimd_save_state(st);
+			fpsimd_save_state(last->st);
 	}
 }
 
@@ -1034,6 +1039,8 @@ void fpsimd_bind_task_to_cpu(void)
 		this_cpu_ptr(&fpsimd_last_state);
 
 	last->st = &current->thread.uw.fpsimd_state;
+	last->sve_state = current->thread.sve_state;
+	last->sve_vl = current->thread.sve_vl;
 	current->thread.fpsimd_cpu = smp_processor_id();
 
 	if (system_supports_sve()) {
@@ -1047,7 +1054,8 @@ void fpsimd_bind_task_to_cpu(void)
 	}
 }
 
-void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st)
+void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st, void *sve_state,
+			      unsigned int sve_vl)
 {
 	struct fpsimd_last_state_struct *last =
 		this_cpu_ptr(&fpsimd_last_state);
@@ -1055,6 +1063,8 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st)
 	WARN_ON(!in_softirq() && !irqs_disabled());
 
 	last->st = st;
+	last->sve_state = sve_state;
+	last->sve_vl = sve_vl;
 }
 
 /*
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index aac7808..1cf4f02 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -85,7 +86,9 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	WARN_ON_ONCE(!irqs_disabled());
 
 	if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
-
[PATCH v7 04/27] KVM: arm64: Refactor kvm_arm_num_regs() for easier maintenance
kvm_arm_num_regs() adds together various partial register counts in a freeform sum expression, which makes it harder than necessary to read diffs that add, modify or remove a single term in the sum (which is expected to be the common case under maintenance).

This patch refactors the code to add the terms one per line, for maximum readability.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Tested-by: zhang.lei
---
 arch/arm64/kvm/guest.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd436a5..62514cb 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -258,8 +258,14 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
  */
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
 {
-	return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu)
-		+ kvm_arm_get_fw_num_regs(vcpu)	+ NUM_TIMER_REGS;
+	unsigned long res = 0;
+
+	res += num_core_regs();
+	res += kvm_arm_num_sys_reg_descs(vcpu);
+	res += kvm_arm_get_fw_num_regs(vcpu);
+	res += NUM_TIMER_REGS;
+
+	return res;
 }
 
 /**
-- 
2.1.4
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
[PATCH v7 00/27] KVM: arm64: SVE guest support
This series implements support for allowing KVM guests to use the Arm Scalable Vector Extension (SVE), superseding the previous v6 series [1].

The patches are also available on a branch for reviewer convenience. [2]

The patches are based on v5.1-rc2.

This series addresses a couple of minor review comments received on v6 and otherwise applies reviewer tags only. The code differences between v6 and this series consist of minor cosmetic fixups only.

Draft kvmtool patches were posted separately [3], [4].

For a description of minor updates, see the individual patches.

Thanks go to Julien Thierry and Julien Grall for their review efforts, and to Zhang Lei for testing the series -- many thanks for their help in getting the series to this point!

Reviewers' attention is drawn to the following patches, which have no Reviewed-by/Acked-by. Please take a look if you have a moment.

 * Patch 11 (KVM: arm64: Support runtime sysreg visibility filtering)

   Previously Reviewed-by Julien Thierry, but this version of the patch
   contains some minor rework suggested by Mark Rutland during the v5
   review [5].

 * Patch 15 (KVM: arm64: Add missing #include of in guest.c)

   (Trivial patch.)

 * Patch 26 (KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG)

   (Documentation only.)

 * Patch 27 (KVM: arm64/sve: Document KVM API extensions for SVE)

   (Documentation only.)

Known issues: none

Testing status:

 * Lightweight testing on the Arm Fast Model, primarily to exercise the
   new vcpu finalization API.

   Ran sve-stress testing for several days on v6 on the Arm Fast Model,
   with no errors observed.

 * ThunderX2: basic testing of arm64 defconfig, including booting
   guests (no SVE support on this hardware).

   Some stress testing with fpsimd-stress (to be published separately)
   and paranoia, with no problems observed over several days. This
   testing was done on v6.

 * aarch32 host testing has only been done in v5 so far. arch/arm
   changes between v5 and v7 are minimal. I plan to redo sanity-check
   testing after this posting.

[1] Previous series:
[PATCH v5 00/26] KVM: arm64: SVE guest support
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-March/thread.html
(Note, the subject line for this posting was incorrect. It should have
read: [PATCH v6 00/27] KVM: arm64: SVE guest support)

[2] This series in git:
http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm/v7/head
git://linux-arm.org/linux-dm.git sve-kvm/v7/head

[3] [PATCH kvmtool v2 0/3] arm64: Basic SVE guest support
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-March/035198.html

[4] git://linux-arm.org/kvmtool-dm.git sve-linuxv6/v2/head
http://linux-arm.org/git?p=kvmtool-dm.git;a=shortlog;h=refs/heads/sve-linuxv6/v2/head

[5] Mark Rutland
Re: [PATCH v5 12/26] KVM: arm64: Support runtime sysreg visibility filtering
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-February/034718.html

Dave Martin (27):
  KVM: Documentation: Document arm64 core registers in detail
  arm64: fpsimd: Always set TIF_FOREIGN_FPSTATE on task state flush
  KVM: arm64: Delete orphaned declaration for __fpsimd_enabled()
  KVM: arm64: Refactor kvm_arm_num_regs() for easier maintenance
  KVM: arm64: Add missing #includes to kvm_host.h
  arm64/sve: Clarify role of the VQ map maintenance functions
  arm64/sve: Check SVE virtualisability
  arm64/sve: Enable SVE state tracking for non-task contexts
  KVM: arm64: Add a vcpu flag to control SVE visibility for the guest
  KVM: arm64: Propagate vcpu into read_id_reg()
  KVM: arm64: Support runtime sysreg visibility filtering
  KVM: arm64/sve: System register context switch and access support
  KVM: arm64/sve: Context switch the SVE registers
  KVM: Allow 2048-bit register access via ioctl interface
  KVM: arm64: Add missing #include of in guest.c
  KVM: arm64: Factor out core register ID enumeration
  KVM: arm64: Reject ioctl access to FPSIMD V-regs on SVE vcpus
  KVM: arm64/sve: Add SVE support to register access ioctl interface
  KVM: arm64: Enumerate SVE register indices for KVM_GET_REG_LIST
  arm64/sve: In-kernel vector length availability query interface
  KVM: arm/arm64: Add hook for arch-specific KVM initialisation
  KVM: arm/arm64: Add KVM_ARM_VCPU_FINALIZE ioctl
  KVM: arm64/sve: Add pseudo-register for the guest's vector lengths
  KVM: arm64/sve: Allow userspace to enable SVE for vcpus
  KVM: arm64: Add a capability to advertise SVE support
  KVM: Document errors for KVM_GET_ONE_REG and KVM_SET_ONE_REG
  KVM: arm64/sve: Document KVM API extensions for SVE

 Documentation/virtual/kvm/api.txt | 156 +++
 arch/arm/include/asm/kvm_host.h   |   6 +
 arch/arm64/include/asm/fpsimd.h   |  33 +++-
 arch/arm64/include/asm/kvm_host.h |  43 -
 arch/arm64/include/asm/kvm_hyp.h  |   1 -
 arch/arm64/include/asm/sysreg.h   |   3 +
 arch/arm64/include/uapi/asm/kvm.h |  22 +++
 arch/arm64/kernel/cpufeature.c    |   2 +-
 arch/arm64/kernel/fpsimd.c        |