On 19/01/2026 11:50, Harry Ramsey wrote:
> From: Penny Zheng <[email protected]>
> 
> For MMU system, setup_virt_paging is used to configure stage 2 address
> translation regime, like IPA bits, VMID allocator set up, etc.
> Some could be inherited in MPU system, like VMID allocator set up, etc.
The commit msg only mentions setup_virt_paging, so why do you implement other
p2m functions in the same commit? I would split the patch in two for clarity and
ease of review. I will focus on setup_virt_paging.

> 
> For MPU system, we could have the following memory translation regime:
> - PMSAv8-64 at both EL1/EL0 and EL2
> - VMSAv8-64 at EL1/EL0 and PMSAv8-64 at EL2
> The default option will be the second, unless the platform does not support
> it, which can be checked against the MSA_frac field in Memory Model Feature
> Register 0 (ID_AA64MMFR0_EL1).
What's the reasoning behind this? Why do we prefer VMSA at EL1 rather than PMSA?
AFAICT PMSA is the default and VMSA is optional, so logically PMSA should be
preferred. We should also make it configurable, I think, so that a user has a
choice.
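Something along these lines could work (option names invented for illustration,
not a proposal for the final naming):

```kconfig
choice
	prompt "Stage 1 EL1&0 memory system architecture"
	default ARM64_EL1_PMSA

config ARM64_EL1_PMSA
	bool "PMSAv8-64 at EL1&0"

config ARM64_EL1_VMSA
	bool "VMSAv8-64 at EL1&0"
	# would need a runtime check against MSA_frac as well,
	# since VMSA support at EL1 is optional on Armv8-R
endchoice
```

With PMSA as the default, the runtime MSA_frac check only needs to reject the
VMSA configuration on platforms that cannot support it.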

> 
> Signed-off-by: Penny Zheng <[email protected]>
> Signed-off-by: Wei Chen <[email protected]>
> Signed-off-by: Luca Fancellu <[email protected]>
> Signed-off-by: Hari Limaye <[email protected]>
> Signed-off-by: Harry Ramsey <[email protected]>
> ---
>  xen/arch/arm/arm64/mpu/p2m.c             | 90 +++++++++++++++++++++++-
>  xen/arch/arm/include/asm/arm32/mpu.h     |  2 +
>  xen/arch/arm/include/asm/arm64/mpu.h     |  2 +
>  xen/arch/arm/include/asm/arm64/sysregs.h |  5 ++
>  xen/arch/arm/include/asm/cpufeature.h    |  7 ++
>  xen/arch/arm/include/asm/mpu.h           |  7 +-
>  xen/arch/arm/include/asm/mpu/p2m.h       | 12 ++++
>  xen/arch/arm/include/asm/p2m.h           |  5 ++
>  xen/arch/arm/include/asm/processor.h     | 13 ++++
>  xen/arch/arm/mpu/p2m.c                   | 71 ++++++++++++++++++-
>  10 files changed, 209 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/mpu/p2m.c b/xen/arch/arm/arm64/mpu/p2m.c
> index b6d8b2777b..13b633d9fe 100644
> --- a/xen/arch/arm/arm64/mpu/p2m.c
> +++ b/xen/arch/arm/arm64/mpu/p2m.c
> @@ -2,11 +2,99 @@
>  
>  #include <xen/bug.h>
>  #include <xen/init.h>
> +#include <xen/warning.h>
>  #include <asm/p2m.h>
>  
>  void __init setup_virt_paging(void)
>  {
> -    BUG_ON("unimplemented");
> +    uint64_t vtcr_el2 = 0, vstcr_el2 = 0;
> +    bool p2m_vmsa = true;
> +
> +    /* PA size */
> +    const unsigned int pa_range_info[] = { 32, 36, 40, 42, 44, 48, 52, 0, /* Invalid */ };
80 chars exceeded.
Do we still need 0 here to denote invalid value?

> +
> +    /*
> +     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
> +     * with IPA bits == PA bits, compare against "pabits".
> +     */
> +    if ( pa_range_info[system_cpuinfo.mm64.pa_range] < p2m_ipa_bits )
> +        p2m_ipa_bits = pa_range_info[system_cpuinfo.mm64.pa_range];
> +
> +    /*
> +     * Clear VTCR_EL2.NSA bit to configure non-secure stage 2 translation output
> +     * address space to access the Secure PA space.
> +     */
> +    vtcr_el2 &= NSA_SEL2;
This has no effect: vtcr_el2 is still 0 at this point, so ANDing it with a mask
is a no-op (and clearing a bit would be `vtcr_el2 &= ~NSA_SEL2` anyway).

> +
> +    /*
> +     * The MSA and MSA_frac fields in the ID_AA64MMFR0_EL1 register identify the
> +     * memory system configurations supported at EL1. In Armv8-R AArch64, the
The spec mentions all translation regimes + specific configuration for EL1, so
it's not EL1 only.

> +     * only permitted value for ID_AA64MMFR0_EL1.MSA is 0b1111.
> +     */
> +    if ( system_cpuinfo.mm64.msa != MM64_MSA_PMSA_SUPPORT )
> +        goto fault;
> +
> +    /* Permitted values for ID_AA64MMFR0_EL1.MSA_frac are 0b0001 and 0b0010. */
> +    if ( system_cpuinfo.mm64.msa_frac == MM64_MSA_FRAC_NONE_SUPPORT )
> +        goto fault;
> +
> +    /*
> +     * When ID_AA64MMFR0_EL1.MSA_frac is 0b0010 the stage 1 EL1&0 translation
> +     * regime supports both PMSAv8-64 and VMSAv8-64.
> +     */
> +    if ( system_cpuinfo.mm64.msa_frac != MM64_MSA_FRAC_VMSA_SUPPORT )
> +    {
> +        p2m_vmsa = false;
> +        warning_add("Be aware of that there is no support for VMSAv8-64 at EL1 on this platform\n");
I think we should make PMSA the default.
If anything, the warning message could be made simpler, e.g. "No support for
VMSAv8-64 at EL1". Seeing a warning already means the user should be aware of
something, and it's clear the message is about this platform.

> +    }
> +
> +    /*
> +     * If PE supports both PMSAv8-64 and VMSAv8-64 at EL1, then VTCR_EL2.MSA
> +     * determines the memory system architecture enabled at stage 1 of the
> +     * Secure EL1&0 translation regime.
> +     *
> +     * Normally, we set the initial VTCR_EL2.MSA value VMSAv8-64 support,
> +     * unless this platform only supports PMSAv8-64.
> +     */
> +    if ( !p2m_vmsa )
> +        vtcr_el2 &= VTCR_MSA_PMSA;
> +    else
> +        vtcr_el2 |= VTCR_MSA_VMSA;
> +
> +    /*
> +     * cpuinfo sanitization makes sure we support 16bits VMID only if all cores
> +     * are supporting it.
> +     */
> +    if ( system_cpuinfo.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
> +        max_vmid = MAX_VMID_16_BIT;
> +
> +    /* Set the VS bit only if 16 bit VMID is supported. */
> +    if ( MAX_VMID == MAX_VMID_16_BIT )
Instead of the macro, use the max_vmid variable here.

> +        vtcr_el2 |= VTCR_VS;
> +
> +    p2m_vmid_allocator_init();
> +
> +    WRITE_SYSREG(vtcr_el2, VTCR_EL2);
Where do we set the NSA bit?

> +
> +    /*
> +     * VSTCR_EL2.SA defines secure stage 2 translation output address space.
> +     * To make sure that all stage 2 translations for the Secure PA space access
> +     * the Secure PA space, we keep SA bit as 0.
This says that we want to access secure PA space but not why.

> +     *
> +     * VSTCR_EL2.SC is NS check enable bit. To make sure that Stage 2 NS
> +     * configuration is checked against stage 1 NS configuration in EL1&0
> +     * translation regime for the given address, and generates a fault if they
> +     * are different, we set SC bit 1.
> +     */
> +    vstcr_el2 = 1 << VSTCR_EL2_RES1_SHIFT;
Why are we setting the RES1 bit?

> +    vstcr_el2 &= VSTCR_EL2_SA;
This has no effect: bit 30 (SA) is already 0 here, since only the RES1 bit has
been set so far, so ANDing with ~(1 << 30) changes nothing.

> +    vstcr_el2 |= VSTCR_EL2_SC;
> +    WRITE_SYSREG(vstcr_el2, VSTCR_EL2);
> +
> +    return;
> +
> + fault:
> +    panic("Hardware with no PMSAv8-64 support in any translation regime\n");
>  }
>  
>  /*
> diff --git a/xen/arch/arm/include/asm/arm32/mpu.h b/xen/arch/arm/include/asm/arm32/mpu.h
> index 2cf0f8cbac..d565230f84 100644
> --- a/xen/arch/arm/include/asm/arm32/mpu.h
> +++ b/xen/arch/arm/include/asm/arm32/mpu.h
> @@ -11,6 +11,8 @@
>   */
>  #define MPU_REGION_RES0       0x0
>  
> +#define VSCTLR_VMID_SHIFT     16
> +
>  /* Hypervisor Protection Region Base Address Register */
>  typedef union {
>      struct {
> diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
> index 4f694190a8..8b86a03fee 100644
> --- a/xen/arch/arm/include/asm/arm64/mpu.h
> +++ b/xen/arch/arm/include/asm/arm64/mpu.h
> @@ -7,6 +7,8 @@
>  
>  #define MPU_REGION_RES0        (0xFFFFULL << 48)
>  
> +#define VSCTLR_VMID_SHIFT      48
> +
>  /* Protection Region Base Address Register */
>  typedef union {
>      struct __packed {
> diff --git a/xen/arch/arm/include/asm/arm64/sysregs.h b/xen/arch/arm/include/asm/arm64/sysregs.h
> index 19d409d3eb..4ed8ac0440 100644
> --- a/xen/arch/arm/include/asm/arm64/sysregs.h
> +++ b/xen/arch/arm/include/asm/arm64/sysregs.h
> @@ -462,6 +462,11 @@
>  #define ZCR_ELx_LEN_SIZE             9
>  #define ZCR_ELx_LEN_MASK             0x1ff
>  
> +/* Virtualization Secure Translation Control Register */
> +#define VSTCR_EL2_RES1_SHIFT         31
> +#define VSTCR_EL2_SA                 ~(_AC(0x1,UL)<<30)
> +#define VSTCR_EL2_SC                 (_AC(0x1,UL)<<20)
> +
>  #ifdef CONFIG_MPU
>  /*
>   * The Armv8-R AArch64 architecture always executes code in Secure
> diff --git a/xen/arch/arm/include/asm/cpufeature.h b/xen/arch/arm/include/asm/cpufeature.h
> index 13353c8e1a..677cb2b96d 100644
> --- a/xen/arch/arm/include/asm/cpufeature.h
> +++ b/xen/arch/arm/include/asm/cpufeature.h
> @@ -248,6 +248,12 @@ struct cpuinfo_arm {
>              unsigned long tgranule_16K:4;
>              unsigned long tgranule_64K:4;
>              unsigned long tgranule_4K:4;
> +#if !defined(CONFIG_MMU)
Why not #ifdef CONFIG_MPU here and elsewhere?

> +            unsigned long __res:16;
If there are multiple RES0 fields, we usually start from __res0 and increment
the number with each occurrence of RES0.

~Michal
