> From: Neo Jia [mailto:c...@nvidia.com]
> Sent: Tuesday, February 16, 2016 3:13 PM
> 
> On Tue, Feb 16, 2016 at 06:49:30AM +0000, Tian, Kevin wrote:
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Thursday, February 04, 2016 3:33 AM
> > >
> > > On Wed, 2016-02-03 at 09:28 +0100, Gerd Hoffmann wrote:
> > > >   Hi,
> > > >
> > > > > Actually I have long been puzzled in this area. Libvirt clearly
> > > > > uses a UUID to identify a VM, yet the UUID is not recorded within
> > > > > KVM. How, then, does libvirt talk to KVM based on the UUID? The
> > > > > answer could be a good reference for this design.
> > > >
> > > > libvirt keeps track of which qemu instance belongs to which vm.
> > > > qemu also gets started with "-uuid ...", so one can query qemu via
> > > > the monitor ("info uuid") to figure out what the uuid is.  It is also
> > > > in the smbios tables, so the guest can see it in the system
> > > > information table.
> > > >
> > > > The uuid is not visible to the kernel though, the kvm kernel driver
> > > > doesn't know what the uuid is (and neither does vfio).  qemu uses file
> > > > handles to talk to both kvm and vfio.  qemu notifies both kvm and vfio
> > > > about any relevant events (guest address space changes etc) and
> > > > connects file descriptors (eventfd -> irqfd).
> > >
> > > I think the original link to using a VM UUID for the vGPU comes from
> > > NVIDIA having a userspace component which might get launched from a udev
> > > event as the vGPU is created or the set of vGPUs within that UUID is
> > > started.  Using the VM UUID then gives them a way to associate that
> > > userspace process with a VM instance.  Maybe it could register with
> > > libvirt for some sort of service provided for the VM, I don't know.
> >
> > Intel doesn't have this requirement. It should be enough as long as
> > libvirt maintains which sysfs vgpu node is associated to a VM UUID.
> >
> > >
> > > > qemu needs a sysfs node as a handle to the vfio device, something
> > > > like /sys/devices/virtual/vgpu/<name>.  <name> can be a uuid if you
> > > > want to have it that way, but it could be pretty much anything.  The
> > > > sysfs node will probably show up as-is in the libvirt xml when
> > > > assigning a vgpu to a vm.  So the name should be something stable
> > > > (i.e. when using a uuid as the name, you had better not generate a
> > > > new one on each boot).
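[Editor's note: the stability requirement above can be illustrated with a small helper that generates a uuid once and caches it on disk, so the same name survives reboots. `stable_vgpu_name` is a hypothetical helper for illustration, not a real libvirt or vfio interface.]

```python
import pathlib
import tempfile
import uuid

def stable_vgpu_name(state_file: pathlib.Path) -> str:
    """Return a persistent uuid-style name for a vgpu sysfs node.

    The uuid is generated once and cached on disk, so every boot sees
    the same name.  (Illustrative only; not a real libvirt/vfio API.)
    """
    if state_file.exists():
        return state_file.read_text().strip()
    name = str(uuid.uuid4())
    state_file.write_text(name)
    return name

# demo: two lookups against the same state file return the same name
with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d) / "vgpu0.uuid"
    print(stable_vgpu_name(p) == stable_vgpu_name(p))  # True
```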
> > >
> > > Actually I don't think there's really a persistent naming issue; that's
> > > probably where we diverge from the SR-IOV model.  SR-IOV cannot
> > > dynamically add a new VF; it needs to reset the number of VFs to zero,
> > > then re-allocate all of them up to the new desired count.  That has some
> > > obvious implications.  I think with both vendors here, we can
> > > dynamically allocate new vGPUs, so I would expect that libvirt would
> > > create each vGPU instance as it's needed.  None would be created by
> > > default without user interaction.
> > >
> > > Personally I think using a UUID makes sense, but it needs to be
> > > userspace policy whether that UUID has any implicit meaning like
> > > matching the VM UUID.  Having an index within a UUID bothers me a bit,
> > > but it doesn't seem like too much of a concession to enable the use case
> > > that NVIDIA is trying to achieve.  Thanks,
> > >
> >
> > I would prefer making the UUID an optional parameter, while not tying
> > sysfs vgpu naming to the UUID. This would be more flexible for
> > scenarios where a UUID might not be required.
> 
> Hi Kevin,
> 
> Happy Chinese New Year!
> 
> I think having the UUID as the vgpu device name will allow us to have a
> gpu-vendor-agnostic solution for the upper layers of the software stack,
> such as QEMU, which is supposed to open the device.
> 

Qemu can use whatever sysfs path is provided to open the device, regardless
of whether there is a UUID within the path...

Thanks
Kevin