On 31.07.2025 17:58, Oleksii Kurochko wrote:
> Introduce helper functions for safely querying the P2M (physical-to-machine)
> mapping:
>  - add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for managing
>    P2M lock state.
>  - Implement p2m_get_entry() to retrieve mapping details for a given GFN,
>    including MFN, page order, and validity.
>  - Add p2m_lookup() to encapsulate read-locked MFN retrieval.
>  - Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
>    pointer, acquiring a reference to the page if valid.
>  - Introduce get_page().
> 
> Implementations are based on Arm's functions with some minor modifications:
> - p2m_get_entry():
>   - Reverse traversal of page tables, as RISC-V uses the opposite level
>     numbering compared to Arm.
>   - Removed the return of p2m_access_t from p2m_get_entry() since
>     mem_access_settings is not introduced for RISC-V.
>   - Updated BUILD_BUG_ON() to check using the level 0 mask, which corresponds
>     to Arm's THIRD_MASK.
>   - Replaced open-coded bit shifts with the BIT() macro.
>   - Other minor changes, such as using RISC-V-specific functions to validate
>     P2M PTEs, and replacing Arm-specific GUEST_* macros with their RISC-V
>     equivalents.
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kuroc...@gmail.com>
> ---
> Changes in V3:
>  - Add is_p2m_foreign() macro and connected stuff.

What is this about?

> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -202,6 +202,24 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
>  
>  unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
>  
> +static inline void p2m_read_lock(struct p2m_domain *p2m)
> +{
> +    read_lock(&p2m->lock);
> +}
> +
> +static inline void p2m_read_unlock(struct p2m_domain *p2m)
> +{
> +    read_unlock(&p2m->lock);
> +}
> +
> +static inline int p2m_is_locked(struct p2m_domain *p2m)

bool return type (also for p2m_is_write_locked() in patch 11)? Also perhaps
pointer-to-const parameter?
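I.e. something along these lines (a sketch only, assuming the body mirrors
Arm's use of rw_is_locked(); whether that helper accepts a pointer-to-const
would want double checking):

static inline bool p2m_is_locked(const struct p2m_domain *p2m)
{
    return rw_is_locked(&p2m->lock);
}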

> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -852,3 +852,139 @@ int map_regions_p2mt(struct domain *d,
>  {
>      return p2m_insert_mapping(p2m_get_hostp2m(d), gfn, nr, mfn, p2mt);
>  }
> +
> +/*
> + * Get the details of a given gfn.
> + *
> + * If the entry is present, the associated MFN will be returned type filled up.

This sentence doesn't really parse, perhaps due to missing words.

> + * The page_order will correspond to the order of the mapping in the page
> + * table (i.e it could be a superpage).
> + *
> + * If the entry is not present, INVALID_MFN will be returned and the
> + * page_order will be set according to the order of the invalid range.
> + *
> + * valid will contain the value of bit[0] (e.g valid bit) of the
> + * entry.
> + */
> +static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
> +                           p2m_type_t *t,
> +                           unsigned int *page_order,
> +                           bool *valid)
> +{
> +    unsigned int level = 0;
> +    pte_t entry, *table;
> +    int rc;
> +    mfn_t mfn = INVALID_MFN;
> +    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
> +
> +    ASSERT(p2m_is_locked(p2m));
> +    BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);

What function-wide property is this check about? Even when moved ...

> +    if ( valid )
> +        *valid = false;
> +
> +    /* XXX: Check if the mapping is lower than the mapped gfn */

(Nested: What is this about?)

> +    /* This gfn is higher than the highest the p2m map currently holds */
> +    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
> +    {
> +        for ( level = P2M_ROOT_LEVEL; level; level-- )
> +            if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) >

... into the more narrow scope where another XEN_PT_LEVEL_MASK() exists I
can't really spot what the check is to guard against.

> +                 gfn_x(p2m->max_mapped_gfn) )
> +                break;
> +
> +        goto out;
> +    }
> +
> +    table = p2m_get_root_pointer(p2m, gfn);
> +
> +    /*
> +     * the table should always be non-NULL because the gfn is below
> +     * p2m->max_mapped_gfn and the root table pages are always present.
> +     */

Nit: Style (comment sentences should start with a capital letter).

> +    if ( !table )
> +    {
> +        ASSERT_UNREACHABLE();
> +        level = P2M_ROOT_LEVEL;
> +        goto out;
> +    }
> +
> +    for ( level = P2M_ROOT_LEVEL; level; level-- )
> +    {
> +        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);

Why would you blindly allocate a page table (hierarchy) here? If anything,
this may need doing upon caller request (as it's only up the call chain
where the necessary knowledge exists). For example, ...

> +static mfn_t p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t)
> +{
> +    mfn_t mfn;
> +
> +    p2m_read_lock(p2m);
> +    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);

... this (by its name) pretty likely won't want allocation, while ...

> +    p2m_read_unlock(p2m);
> +
> +    return mfn;
> +}
> +
> +struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
> +                                        p2m_type_t *t)
> +{

... this will. Yet then ...

> +    struct page_info *page;
> +    p2m_type_t p2mt = p2m_invalid;
> +    mfn_t mfn = p2m_lookup(p2m, gfn, t);

... you use the earlier one here.
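
To illustrate the idea (a hypothetical sketch only - the extra parameter,
its name, and the plumbing are made up here, not taken from the patch):

static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
                           p2m_type_t *t,
                           unsigned int *page_order,
                           bool *valid, bool alloc_tbl)
{
    /* ... as before, except the walk would honor the caller's request: */
        rc = p2m_next_level(p2m, alloc_tbl, level, &table, offsets[level]);
    /* ... */
}

p2m_lookup() would then pass false, while a caller knowing it wants the
table hierarchy populated would pass true.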

> +    if ( !mfn_valid(mfn) )
> +        return NULL;
> +
> +    if ( t )
> +        p2mt = *t;
> +
> +    page = mfn_to_page(mfn);
> +
> +    /*
> +     * get_page won't work on foreign mapping because the page doesn't
> +     * belong to the current domain.
> +     */
> +    if ( p2m_is_foreign(p2mt) )
> +    {
> +        struct domain *fdom = page_get_owner_and_reference(page);
> +        ASSERT(fdom != NULL);
> +        ASSERT(fdom != p2m->domain);
> +        return page;

In a release build (with no assertions) this will be wrong if either of the
two conditions isn't satisfied. See x86's respective code.
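
Roughly along these lines (an untested sketch; the x86 code remains the
reference, with put_page() as the usual counterpart for dropping the
reference obtained above):

    if ( p2m_is_foreign(p2mt) )
    {
        struct domain *fdom = page_get_owner_and_reference(page);

        /* No reference could be obtained, e.g. the page is going away. */
        if ( !fdom )
            return NULL;

        /* A foreign mapping must not be owned by the mapping domain. */
        if ( fdom == p2m->domain )
        {
            put_page(page);
            return NULL;
        }

        return page;
    }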

Jan
