On Thu, 1 Apr 2021 10:12:27 -0300 Jason Gunthorpe <j...@nvidia.com> wrote:
> On Mon, Mar 29, 2021 at 05:10:53PM -0600, Alex Williamson wrote:
> > On Tue, 23 Mar 2021 16:32:13 -0300
> > Jason Gunthorpe <j...@nvidia.com> wrote:
> >
> > > On Mon, Mar 22, 2021 at 10:40:16AM -0600, Alex Williamson wrote:
> > >
> > > > Of course if you start looking at features like migration support,
> > > > that's more than likely not simply an additional region with
> > > > optional information, it would need to interact with the actual
> > > > state of the device.  For those, I would very much support use of
> > > > a specific id_table.  That's not these.
> > >
> > > What I don't understand is why do we need two different ways of
> > > inserting vendor code?
> >
> > Because a PCI id table only identifies the device, these drivers are
> > looking for a device in the context of firmware dependencies.
>
> The firmware dependencies only exist for a defined list of IDs, so I
> don't entirely agree with this statement.  I agree with below though,
> so let's leave it be.
>
> > > I understood he meant that NVIDIA GPUs *without* NVLINK can exist,
> > > but the ID table we have here is supposed to be the NVLINK
> > > compatible IDs.
> >
> > Those IDs are just for the SXM2 variants of the device that can
> > exist on a variety of platforms, only one of which includes the
> > firmware tables to activate the vfio support.
>
> AFAIK, SXM2 is a special physical form factor that has the nvlink
> physical connection - it is only for this specific generation of power
> servers that can accept the specific nvlink those cards have.

SXM2 is not unique to Power; various x86 systems support the interface,
everything from NVIDIA's own DGX line, through assorted OEM systems, to
VARs like Super Micro and Gigabyte.

> > I think you're looking for a significant inflection in vendors' stated
> > support for vfio use cases, beyond the "best-effort, give it a try",
> > that we currently have.
>
> I see, so they don't want to.  Let's leave it then.
>
> Though if Xe breaks everything they need to add/maintain a proper ID
> table, not more hackery.

e4eccb853664 ("vfio/pci: Bypass IGD init in case of -ENODEV") is
supposed to enable Xe, where the IGD code is expected to return -ENODEV
and we go on with the base vfio-pci support (a rough sketch of that
fallback is included below).

> > > And again, I feel this is all a big tangent, especially now that
> > > HCH wants to delete the nvlink stuff, we should just leave igd
> > > alone.
> >
> > Determining which things stay in vfio-pci-core and which things are
> > split to variant drivers, and how those variant drivers can match the
> > devices they intend to support, seems very much in line with this
> > series.
>
> IMHO, the main litmus test for core is if variant drivers will need it
> or not.
>
> No variant driver should be stacked on an igd device, or if one someday
> is, it should implement the special igd hackery internally (and have a
> proper ID table).  So when we split it up, igd goes into vfio_pci.ko as
> some special behavior vfio_pci.ko's universal driver provides for IGD.
>
> Every variant driver will still need the zdev data to be exposed to
> userspace, and every PCI device on s390 has that extra information.  So
> zdev goes to vfio_pci_core.ko.
>
> Future things going into vfio_pci.ko need a really good reason why
> they can't be variant drivers instead.

That sounds fair.  Thanks,

Alex
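
For illustration, here is a minimal, hypothetical sketch of the "proper
ID table" approach discussed above: a variant driver that matches only
the known device IDs and then verifies its platform firmware dependency
in probe, declining with -ENODEV so the generic vfio-pci driver can be
bound instead.  All demo_* names, the 0x1234 device ID, and the
"demo,nvlink2-peer" property are placeholders invented for the example,
not real kernel identifiers.

#include <linux/module.h>
#include <linux/pci.h>
#include <linux/property.h>

static const struct pci_device_id demo_vfio_pci_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, 0x1234) },	/* placeholder ID */
	{ }
};
MODULE_DEVICE_TABLE(pci, demo_vfio_pci_ids);

static int demo_vfio_pci_probe(struct pci_dev *pdev,
			       const struct pci_device_id *id)
{
	/*
	 * The ID table only says "this is the right silicon".  Whether
	 * the extra support applies also depends on platform firmware,
	 * so check for it here and decline with -ENODEV so plain
	 * vfio-pci can be used for this device instead.
	 */
	if (!device_property_present(&pdev->dev, "demo,nvlink2-peer"))
		return -ENODEV;

	/* ... register the variant-specific vfio device here ... */
	return 0;
}

static struct pci_driver demo_vfio_pci_driver = {
	.name		= "demo-vfio-pci",
	.id_table	= demo_vfio_pci_ids,
	.probe		= demo_vfio_pci_probe,
};
module_pci_driver(demo_vfio_pci_driver);

MODULE_LICENSE("GPL");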
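
And a rough sketch (paraphrased, not verbatim kernel source) of the
fallback referenced above from e4eccb853664: the Intel IGD setup is only
attempted for Intel VGA-class devices, -ENODEV from it is treated as "no
IGD quirks apply" so the device still works as a plain vfio-pci device,
and any other error still fails.  The wrapper name is invented for the
sketch; vfio_pci_is_vga(), vfio_pci_igd_init(), and CONFIG_VFIO_PCI_IGD
are the existing vfio-pci symbols, and the struct name follows the
pre-split code.

static int vfio_pci_maybe_init_igd(struct vfio_pci_device *vdev)
{
	struct pci_dev *pdev = vdev->pdev;
	int ret;

	/* IGD quirks only apply to Intel VGA-class devices. */
	if (!IS_ENABLED(CONFIG_VFIO_PCI_IGD) || !vfio_pci_is_vga(pdev) ||
	    pdev->vendor != PCI_VENDOR_ID_INTEL)
		return 0;

	ret = vfio_pci_igd_init(vdev);
	if (ret == -ENODEV)	/* e.g. Xe: IGD code declines */
		return 0;	/* carry on with base vfio-pci support */

	return ret;		/* other errors still fail device setup */
}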