On Fri, Mar 06, 2026 at 05:01:07PM +0000, Mark Brown wrote:
> SME is configured by the system registers SMCR_EL1 and SMCR_EL2, add
> definitions and userspace access for them.  These control the SME vector
> length in a manner similar to that for SVE and also have feature enable
> bits for SME2 and FA64.  A subsequent patch will add management of them
> for guests as part of the general floating point context switch, as is
> done for the equivalent SVE registers.
> 
> Signed-off-by: Mark Brown <[email protected]>
> ---
>  arch/arm64/include/asm/kvm_emulate.h  | 14 ++++++++++++
>  arch/arm64/include/asm/kvm_host.h     |  2 ++
>  arch/arm64/include/asm/vncr_mapping.h |  1 +
>  arch/arm64/kvm/sys_regs.c             | 42 ++++++++++++++++++++++++++++++++++-
>  4 files changed, 58 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 5bf3d7e1d92c..7a11dd7d554c 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -89,6 +89,14 @@ static inline void kvm_inject_nested_sve_trap(struct kvm_vcpu *vcpu)
>       kvm_inject_nested_sync(vcpu, esr);
>  }
>  
> +static inline void kvm_inject_nested_sme_trap(struct kvm_vcpu *vcpu)
> +{
> +     u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SME) |
> +               ESR_ELx_IL;
> +
> +     kvm_inject_nested_sync(vcpu, esr);
> +}

This implicitly has the SMTC field as 0b000, which is correct for traps
of SMCR_EL{1,2} due to SMEN, but wouldn't be right for other traps (e.g.
traps of ZT0).

If we only use this for traps of SMCR_EL{1,2}, that's ok, but I think
it's worth a comment, and possibly a more specific name. Perhaps
kvm_inject_nested_sme_smen_trap() for now.
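
Something like the below, keeping the body from the patch unchanged and
just renaming it and documenting the implicit SMTC value:

	static inline void kvm_inject_nested_sme_smen_trap(struct kvm_vcpu *vcpu)
	{
		/* SMTC is implicitly 0b000: trapped due to CPACR_EL1.SMEN / CPTR_EL2 */
		u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SME) |
			  ESR_ELx_IL;

		kvm_inject_nested_sync(vcpu, esr);
	}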

[...]

> +static bool access_smcr_el2(struct kvm_vcpu *vcpu,
> +                         struct sys_reg_params *p,
> +                         const struct sys_reg_desc *r)
> +{
> +     unsigned int vq;
> +     u64 smcr;
> +
> +     if (guest_hyp_sme_traps_enabled(vcpu)) {
> +             kvm_inject_nested_sme_trap(vcpu);
> +             return false;
> +     }
> +
> +     if (!p->is_write) {
> +             p->regval = __vcpu_sys_reg(vcpu, SMCR_EL2);
> +             return true;
> +     }
> +
> +     smcr = p->regval & ~SMCR_ELx_RES0;
> +     if (!vcpu_has_fa64(vcpu))
> +             smcr &= ~SMCR_ELx_FA64;
> +     if (!vcpu_has_sme2(vcpu))
> +             smcr &= ~SMCR_ELx_EZT0;
> +
> +     vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;
> +     vq = min(vq, vcpu_sme_max_vq(vcpu));
> +     smcr &= ~SMCR_ELx_LEN_MASK;
> +     smcr |= SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);

I'm not sure this sanitization is correct or necessary, and the same
concern applies to ZCR_ELx.LEN.

AFAICT, none of the values for the SMCR_ELx.LEN and ZCR_ELx.LEN fields
are reserved or unallocated. Thus all the bits of those fields should be
stateful, and a read should observe the last value written, regardless
of the effective value of the field.

That means that the following at EL2 or vEL2 shouldn't produce a
warning:
                
        int len_write, len_read;

        for (len_write = 0; len_write < 16; len_write++) {
                write_sysreg_s(len_write, SYS_SMCR_EL2);

                len_read = read_sysreg_s(SYS_SMCR_EL2) & SMCR_ELx_LEN_MASK;
                WARN_ON(len_read != len_write);
        }

Either what we're doing is wrong, or the architecture requires a
clarification to say that values corresponding to unimplemented vector
lengths are reserved.

If those bits are always stateful, then the logic to sanitize the LEN
field shouldn't live here, and that will need to happen when consuming
the effective value.
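
As a rough sketch of that shape (vcpu_effective_sme_vq() is a made-up
name here; __vcpu_sys_reg() and vcpu_sme_max_vq() are from the patch):

	static unsigned int vcpu_effective_sme_vq(struct kvm_vcpu *vcpu)
	{
		u64 smcr = __vcpu_sys_reg(vcpu, SMCR_EL2);
		unsigned int vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;

		/* Clamp only when consuming; the stored value stays as written */
		return min(vq, vcpu_sme_max_vq(vcpu));
	}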

Mark.
