On Thu, May 18, 2023 at 11:06:50AM +0200, Eric Auger wrote:
> Hi Nicolin,
> 
> On 5/18/23 05:22, Nicolin Chen wrote:
> > Hi Peter,
> > 
> > Eric previously mentioned that you might not like the idea.
> > Before we start this big effort, would it be possible for you
> > to comment a word or two on this topic?
> > 
> > Thanks!
> > 
> > On Mon, Apr 24, 2023 at 04:42:57PM -0700, Nicolin Chen wrote:
> >> Hi all,
> >>
> >> (Please feel free to include related folks in this thread.)
> >>
> >> In light of the ongoing nested-IOMMU support effort via IOMMUFD, we
> >> would likely need multi-vIOMMU support in QEMU, or more
> >> specifically multi-vSMMU support for underlying HW that has
> >> multiple physical SMMUs. This would be used in the following cases:
> >> 1) Multiple physical SMMUs may have different feature bits, so a
> >>    single vSMMU enabling a nesting configuration cannot reflect
> >>    them properly.
> >> 2) The NVIDIA Grace CPU has a VCMDQ HW extension for the SMMU CMDQ.
> >>    Every VCMDQ has an MMIO region (CONS and PROD indexes) that
> >>    should be exposed to a VM, so that a hypervisor can avoid traps
> >>    by using this HW accelerator for performance. However, a single
> >>    vSMMU cannot mmap multiple MMIO regions from multiple pSMMUs.
> >> 3) With the latest iommufd design, a single vIOMMU model shares the
> >>    same stage-2 HW pagetable across all physical SMMUs with a
> >>    shared VMID. A stage-1 pagetable invalidation (for one device)
> >>    at the vSMMU would then have to be broadcast to all the SMMU
> >>    instances, which would hurt overall performance.
> 
> Well, if there is a real production use case behind the requirement
> of having multiple vSMMUs (and more generally vIOMMUs), sure, you can
> go ahead. I just wanted to warn you that, as far as I know, multiple
> vIOMMUs are not supported even on Intel iommu or virtio-iommu. Let's
> add Peter Xu in CC. I foresee added complexity with regard to how you
> define the RID scope of each vIOMMU, ACPI table generation, the
> impact on arm-virt machine options, how you pass the features
> associated with each instance, the notifier propagation impact, etc.
> And that is not even counting the VCMDQ feature addition. We are
> still far from having a single-instance QEMU nested-stage SMMU
> implementation at the moment, but I understand you may want to feed
> the pipeline to pave the way for enhanced use cases.
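
For the RID-scope and IORT-generation concern above, here is a minimal
sketch of how ID mappings could split the RID space between two vSMMUv3
nodes. The struct mirrors the fields of the IORT "ID mapping" format;
the node offsets and RID ranges are invented for illustration and do
not correspond to anything QEMU generates today.

/* Minimal sketch only: how IORT ID mappings could scope RIDs to two
 * vSMMUv3 nodes.  Fields follow the IORT "ID mapping" format (input
 * base, number of IDs, output base, output reference, flags); the
 * node offsets and RID ranges below are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct iort_id_mapping {
    uint32_t input_base;   /* first RID covered by this mapping     */
    uint32_t num_ids;      /* number of RIDs in the range           */
    uint32_t output_base;  /* StreamID base at the referenced SMMU  */
    uint32_t output_ref;   /* offset of the SMMUv3 node in the IORT */
    uint32_t flags;
};

int main(void)
{
    /* Hypothetical offsets of two SMMUv3 nodes inside the IORT. */
    enum { VSMMU0_NODE = 0x30, VSMMU1_NODE = 0x90 };

    /* Root-complex node mappings: split the RID space between them,
     * e.g. bus 0 behind vSMMU0 and bus 1 behind vSMMU1. */
    struct iort_id_mapping rc_map[] = {
        { 0x0000, 0x100, 0x0000, VSMMU0_NODE, 0 },
        { 0x0100, 0x100, 0x0100, VSMMU1_NODE, 0 },
    };

    for (unsigned i = 0; i < sizeof(rc_map) / sizeof(rc_map[0]); i++) {
        printf("RIDs [0x%04x, 0x%04x) -> SMMUv3 node at offset 0x%x\n",
               rc_map[i].input_base,
               rc_map[i].input_base + rc_map[i].num_ids,
               rc_map[i].output_ref);
    }
    return 0;
}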

I agree with Eric that we're still lacking quite a few things for >1
vIOMMU support, AFAIK. What you mentioned above makes sense to me from
the POV that 1 vIOMMU may not suffice, but this is at least a totally
new area to me, because I have never used more than one IOMMU even on
bare metal (excluding the case where, as I'm aware, e.g. a GPU can
have its own IOMMU-like DMA translator).

What's the system layout of your multi-vIOMMU world? Is there still a
central vIOMMU, or can multiple vIOMMUs run fully in parallel, so that
e.g. we can have DEV1,DEV2 under vIOMMU1 and DEV3,DEV4 under vIOMMU2?
Can a vIOMMU get involved in plug/unplug dynamically in any form? What
else can be different in that regard? Is this a common hardware layout
or NVIDIA-specific?

Thanks,

> 
> Thanks
> 
> Eric
> >>
> >> I previously discussed this topic with Eric in a private email.
> >> Eric felt that this would be difficult to implement in the current
> >> QEMU code, as it would touch different subsystems like IORT and
> >> platform devices, since the passthrough devices would be attached
> >> to different vIOMMUs.
> >>
> >> Yet, given the situations above, it is likely best to duplicate
> >> the vIOMMU instance to match the number of physical SMMU
> >> instances.
> >>
> >> So, I am sending this email to collect opinions on this and see
> >> what a potential TODO list would be if we decide to go down this
> >> path.

-- 
Peter Xu
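
As a footnote on point 3) in the quoted use-case list, a minimal
sketch (pseudo-C, all names hypothetical, nothing here exists in QEMU
or the kernel) contrasting the invalidation fan-out of one shared
vSMMU against one vSMMU per physical SMMU, i.e. the performance
concern that motivates the DEV1,DEV2 / DEV3,DEV4 parallel layout:

/* Illustrative only: how many physical SMMUs end up seeing one guest
 * stage-1 invalidation in each model. */
#include <stddef.h>

#define NR_PSMMU 4

struct psmmu { int id; };
static struct psmmu psmmus[NR_PSMMU];

/* Pretend this issues one invalidation command on one physical SMMU. */
static void psmmu_issue_tlbi(struct psmmu *s) { (void)s; }

/* One shared vSMMU over all pSMMUs (shared stage-2 / VMID): a guest
 * stage-1 invalidation has to be replayed on every instance. */
static void shared_vsmmu_tlbi(void)
{
    for (size_t i = 0; i < NR_PSMMU; i++)
        psmmu_issue_tlbi(&psmmus[i]);   /* N commands per guest TLBI */
}

/* One vSMMU per pSMMU: each vSMMU is backed by exactly one physical
 * instance, so the same guest TLBI becomes a single command. */
static void per_psmmu_vsmmu_tlbi(struct psmmu *backing)
{
    psmmu_issue_tlbi(backing);          /* 1 command per guest TLBI */
}

int main(void)
{
    shared_vsmmu_tlbi();
    per_psmmu_vsmmu_tlbi(&psmmus[2]);
    return 0;
}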