On Wed, Dec 10, 2025 at 01:37:27PM +0000, Shameer Kolothum wrote:
> From: Nicolin Chen <[email protected]>
> 
> Tegra241 CMDQV assigns each VINTF a 128KB MMIO region split into two
> 64KB pages:
>  - Page0: guest-accessible control/status registers for all VCMDQs
>  - Page1: configuration registers (queue GPA/size) that must be trapped
>           by the VMM and translated before programming the HW queue.
> 
> This patch implements the Page0 handling in QEMU. Using the VINTF mmap
> offset returned by IOMMUFD during VIOMMU allocation, QEMU maps Page0 into
> guest physical address space and exposes it via two guest MMIO windows:
>  - 0x10000: VCMDQ registers

Let's call these the "global VCMDQ MMIO pages".

>  - 0x30000: VINTF registers

And these the "private VINTF MMIO pages".
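
So IIUC the guest-facing layout inside the cmdqv region ends up like
this (just a sketch; the macro names are mine, not from the patch):

    /* Offsets within cmdqv->mmio_cmdqv, both backed by cmdqv->vcmdq_page0 */
    #define CMDQV_VCMDQ_PAGE0_OFF   0x10000  /* global VCMDQ MMIO pages, 64KB */
    #define CMDQV_VINTF_PAGE0_OFF   0x30000  /* private VINTF MMIO pages, 64KB */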

> +static bool tegra241_cmdqv_init_vcmdq_page0(Tegra241CMDQV *cmdqv, Error **errp)
> +{
> +    SMMUv3State *smmu = cmdqv->smmu;
> +    SMMUv3AccelState *s_accel = smmu->s_accel;
> +    IOMMUFDViommu *viommu;
> +    char *name;
> +
> +    if (!s_accel) {
> +        return true;
> +    }

g_assert?

The entire thing can't work without s_accel, so returning true
doesn't seem to make sense.
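
I.e. something like this (untested sketch):

    SMMUv3State *smmu = cmdqv->smmu;
    SMMUv3AccelState *s_accel = smmu->s_accel;

    /*
     * The whole CMDQV path depends on the accel state, so a missing
     * s_accel is a programming bug, not a condition to succeed on.
     */
    g_assert(s_accel);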

> +    viommu = &s_accel->viommu;
> +    if (!iommufd_backend_viommu_mmap(viommu->iommufd, viommu->viommu_id,
> +                                     VCMDQ_REG_PAGE_SIZE,
> +                                     cmdqv->cmdqv_data.out_vintf_mmap_offset,
> +                                     &cmdqv->vcmdq_page0, errp)) {
> +        cmdqv->vcmdq_page0 = NULL;

We probably shouldn't nuke the vcmdq_page0.

And I think we should add g_assert(!cmdqv->vcmdq_page0) too. It
would be a bug if we pass in a valid page0 pointer.
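
So maybe (untested sketch, reusing the names from the patch):

    /* It would be a bug to get here with page0 already mapped */
    g_assert(!cmdqv->vcmdq_page0);

    viommu = &s_accel->viommu;
    if (!iommufd_backend_viommu_mmap(viommu->iommufd, viommu->viommu_id,
                                     VCMDQ_REG_PAGE_SIZE,
                                     cmdqv->cmdqv_data.out_vintf_mmap_offset,
                                     &cmdqv->vcmdq_page0, errp)) {
        /* Don't touch vcmdq_page0 here; errp already carries the error */
        return false;
    }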

> +    name = g_strdup_printf("%s vcmdq", 
> memory_region_name(&cmdqv->mmio_cmdqv));
> +    memory_region_init_ram_device_ptr(&cmdqv->mmio_vcmdq_page,
> +                                      memory_region_owner(&cmdqv->mmio_cmdqv),
> +                                      name, 0x10000, cmdqv->vcmdq_page0);
> +    memory_region_add_subregion_overlap(&cmdqv->mmio_cmdqv, 0x10000,
> +                                        &cmdqv->mmio_vcmdq_page, 1);
> +    g_free(name);
> +
> +    name = g_strdup_printf("%s vintf", 
> memory_region_name(&cmdqv->mmio_cmdqv));
> +    memory_region_init_ram_device_ptr(&cmdqv->mmio_vintf_page,
> +                                      memory_region_owner(&cmdqv->mmio_cmdqv),
> +                                      name, 0x10000, cmdqv->vcmdq_page0);
> +    memory_region_add_subregion_overlap(&cmdqv->mmio_cmdqv, 0x30000,
> +                                        &cmdqv->mmio_vintf_page, 1);

Let's add a comment here (and maybe something similar in the commit log too):

    /*
     * Each VM can only own one VINTF exposed by the kernel via a VIOMMU object.
     * And all available VCMDQs are already preallocated in the VINTF. Thus, the
     * global VCMDQ MMIO page0 and the private VINTF MMIO page0 are effectively
     * the same, i.e. cmdqv->vcmdq_page0.
     */

Nicolin
