On Tue, Oct 01, 2024 at 11:55:59AM +1000, Alexey Kardashevskiy wrote:
> On 11/9/24 17:08, Nicolin Chen wrote:
> > On Wed, Sep 11, 2024 at 06:12:21AM +0000, Tian, Kevin wrote:
> > > > From: Nicolin Chen <nicol...@nvidia.com>
> > > > Sent: Wednesday, August 28, 2024 1:00 AM
> > > > 
> > > [...]
> > > > On a multi-IOMMU system, the VIOMMU object can be instanced to the
> > > > number of vIOMMUs in a guest VM, while holding the same parent HWPT
> > > > to share the
> > > 
> > > Is there a restriction that multiple vIOMMU objects can only be
> > > created on a multi-IOMMU system?
> > 
> > I think it should be generally restricted to the number of pIOMMUs,
> > although likely (not 100% sure) we could do multiple vIOMMUs on a
> > single-pIOMMU system. Any reason for doing that?
> 
> 
> Just to clarify the terminology here - what are pIOMMU and vIOMMU exactly?
> 
> On AMD, the IOMMU is a pretend-PCIe device, one per root port, which
> manages a DT (device table) with one entry per BDFn; each entry owns a
> queue. A slice of that can be passed to a VM (== queues mapped directly
> into the VM, and such an IOMMU appears in the VM as a pretend-PCIe
> device too). So what are [pv]IOMMU here? Thanks,
 
The "p" stands for physical: the entire IOMMU unit/instance. In
the IOMMU subsystem terminology, it's a struct iommu_device. It
sounds like AMD would register one iommu device per rootport?
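
For reference, in driver terms a pIOMMU is roughly the per-instance
driver state that embeds and registers a struct iommu_device with the
core. A hypothetical sketch just to illustrate the one-object-per-
instance point (made-up names, not the real AMD driver code):

  #include <linux/iommu.h>

  /* Hypothetical per-instance driver state: each physical IOMMU
   * (pIOMMU) embeds and registers exactly one iommu_device, so a
   * system with N instances registers N of these.
   */
  struct example_piommu {
          struct iommu_device iommu; /* what the core/iommufd sees */
          void __iomem *mmio;        /* this instance's registers  */
          /* per-instance device table, command queue(s), etc. */
  };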

The "v" stands for virtual: a slice of the pIOMMU that could be
shared or passed through to a VM:
 - Intel IOMMU doesn't have passthrough queues, so it uses a
   shared queue (for invalidation). In this case, vIOMMU will
   be a pure SW structure for HW queue sharing (with the host
   machine and other VMs). That said, I think the channel (or
   the port) that Intel VT-d uses internally for a device to
   do a two-stage translation can be seen as a "passthrough"
   feature, held by a vIOMMU.
 - AMD IOMMU can assign passthrough queues to VMs, in which
   case, vIOMMU will be a structure holding all passthrough
   resource (of the pIOMMU) assisgned to a VM. If there is a
   shared resource, it can be packed into the vIOMMU struct
   too. FYI, vQUEUE (future series) on the other hand will
   represent each passthrough queue in a vIOMMU struct. The
   VM then, per that specific pIOMMU (rootport?), will have
   one vIOMMU holding a number of vQUEUEs.
 - ARM SMMU is sort of in the middle, depending on the impls.
   vIOMMU will be a structure holding both passthrough and
   shared resource. It can define vQUEUEs, if the impl has
   passthrough queues like AMD does.
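
To make the object layout above concrete, here is a purely
illustrative C sketch; every type and field name is hypothetical
(not the actual iommufd definitions), assuming one vIOMMU object
per pIOMMU per VM that references the shared parent HWPT and owns
whatever passthrough queues were assigned:

  /* Hypothetical illustration only -- not the real iommufd structs.
   * One vIOMMU exists per (VM, pIOMMU) pair; it references the
   * parent stage-2 HWPT shared across the VM and owns the
   * passthrough resources of that pIOMMU assigned to the VM.
   */
  struct example_piommu;                  /* one physical IOMMU instance */
  struct example_hwpt;                    /* shared parent (S2) HWPT     */

  struct example_vqueue {
          unsigned int index;             /* guest-visible queue index */
          void *hw_queue;                 /* pIOMMU queue mapped to VM */
  };

  struct example_viommu {
          struct example_piommu *parent;  /* slice of this pIOMMU       */
          struct example_hwpt *s2_hwpt;   /* shared parent HWPT         */
          struct example_vqueue *vqueues; /* passthrough queues, if any */
          unsigned int nr_vqueues;        /* 0 on a shared-queue design */
          /* shared (non-passthrough) per-VM state can live here too */
  };

On an Intel-like design nr_vqueues would simply stay zero and the
object would only carry the shared SW state; on an AMD-like design
it would hold one entry per queue passed through to the guest.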

Allowing a vIOMMU to hold shared resources makes it a bit of an
upgraded model for IOMMU virtualization, compared with the existing
HWPT model, which now looks like a subset of the vIOMMU model.
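
As a rough userspace-side sketch of that relationship (all helper
names are made up for illustration, not the actual iommufd uAPI):
the parent HWPT is still allocated and shared exactly as before,
while the per-pIOMMU, per-VM objects now hang off vIOMMU objects:

  /* Hypothetical helpers, for illustration -- not the iommufd uAPI. */
  int hwpt_alloc(int ioas_id);                      /* parent (S2) HWPT      */
  int viommu_alloc(int dev_id, int parent_hwpt);    /* one per pIOMMU per VM */
  int hwpt_alloc_nested(int dev_id, int viommu_id); /* guest stage-1 HWPT    */
  int vqueue_alloc(int viommu_id, int index);       /* passthrough queue     */

  /* A VM spanning two pIOMMUs: the HWPT-only model stops after the
   * first call; the vIOMMU model adds one vIOMMU per pIOMMU on top of
   * the same shared parent HWPT, and the rest hangs off those vIOMMUs.
   */
  static void setup_two_piommu_vm(int ioas, int dev_a, int dev_b)
  {
          int s2 = hwpt_alloc(ioas);             /* shared across the VM */
          int viommu0 = viommu_alloc(dev_a, s2); /* slice of pIOMMU #0   */
          int viommu1 = viommu_alloc(dev_b, s2); /* slice of pIOMMU #1   */

          hwpt_alloc_nested(dev_a, viommu0);     /* guest S1 page table  */
          vqueue_alloc(viommu1, 0);              /* AMD-like passthrough queue */
  }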

Thanks
Nicolin
