On Fri, May 08, 2026 at 06:20:27PM +0100, Mark Rutland wrote:
> > +static bool access_smcr_el2(struct kvm_vcpu *vcpu,
> > +			    struct sys_reg_params *p,
> > +			    const struct sys_reg_desc *r)
> > +{
> > +	vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;
> > +	vq = min(vq, vcpu_sme_max_vq(vcpu));
> > +	smcr &= ~SMCR_ELx_LEN_MASK;
> > +	smcr |= SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);
> I'm not sure this sanitization is correct or necessary, and the same
> concern applies to ZCR_ELx.LEN.
> AFAICT, none of the values for the SMCR_ELx.LEN and ZCR_ELx.LEN fields
> are reserved or unallocated. Thus all the bits of those fields should be
> stateful, and a read should observe the last value written, regardless
> of the effective value of the field.
...
> Either what we're doing is wrong, or the architecture requires a
> clarification to say that values corresponding to unimplemented vector
> lengths are reserved.
> If those bits are always stateful, then the logic to sanitize the LEN
> field shouldn't live here, and that will need to happen when consuming
> the effective value.
Your understanding of how these fields work matches mine, and writing
unimplemented values is part of the documented procedure for enumerating
the set of supported vector lengths.
As you note, this is duplicated from the handling of ZCR_ELx. It's not
clear to me either why the code does this, but I figured there must be
some good reason for doing things this way that I just wasn't seeing,
and that it was safer to fit in with the existing code. The handling of
vector lengths in general, and with NV especially, was quite unclear,
particularly prior to your fixes in 59419f10045b ("KVM: arm64: Eagerly
switch ZCR_EL{1,2}").
The changelog for b3d29a823099 ("KVM: arm64: nv: Handle ZCR_EL2 traps"),
which introduced this for ZCR_ELx, mentions mapping the requested VL,
but it's not entirely clear to me what it means by that. The
sanitization does mean that we can just load the guest's ZCR_EL2 and get
the correct behaviour for guest EL1 and EL0 when loading the guest
state, so perhaps that's all there is to it.
My expectation would have been to restrict the guest EL1/0 VL when we
load state into the registers as you allude to, something more like the
below (off the top of my head and completely untested, I'll pull this
into a proper patch later):
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 98b2976837b1..ddf8c2246139 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -501,11 +501,11 @@ static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
 		return;
 
 	if (vcpu_has_sve(vcpu)) {
+		zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+
 		/* A guest hypervisor may restrict the effective max VL. */
-		if (is_nested_ctxt(vcpu))
-			zcr_el2 = __vcpu_sys_reg(vcpu, ZCR_EL2);
-		else
-			zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
+		if (is_nested_ctxt(vcpu) && !is_hyp_ctxt(vcpu))
+			zcr_el2 = min(zcr_el2, __vcpu_sys_reg(vcpu, ZCR_EL2));
 
 		write_sysreg_el2(zcr_el2, SYS_ZCR);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 148fc3400ea8..b48f41acff82 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2874,9 +2874,7 @@ static bool access_zcr_el2(struct kvm_vcpu *vcpu,
 		return true;
 	}
 
-	vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1;
-	vq = min(vq, vcpu_sve_max_vq(vcpu));
-	__vcpu_assign_sys_reg(vcpu, ZCR_EL2, vq - 1);
+	__vcpu_assign_sys_reg(vcpu, ZCR_EL2, p->regval);
 
 	return true;
 }