On Tue, 2016-02-02 at 00:31 -0800, Neo Jia wrote:
> On Tue, Feb 02, 2016 at 08:18:44AM +0000, Tian, Kevin wrote:
> > > From: Neo Jia [mailto:c...@nvidia.com]
> > > Sent: Tuesday, February 02, 2016 4:13 PM
> > > 
> > > On Tue, Feb 02, 2016 at 09:00:43AM +0100, Gerd Hoffmann wrote:
> > > >   Hi,
> > > > 
> > > > > And for UUID, I remember Alex had a concern on using it in kernel.
> > > > > Honestly speaking I don't have a good idea here. On the Xen side
> > > > > there is a VM ID which can easily be used as the index. But for KVM,
> > > > > what would be the best identifier to associate with a VM?
> > > > 
> > > > The vgpu code doesn't need to associate the vgpu device with a vm in the
> > > > first place.  You get all guest address space information from qemu, via
> > > > vfio iommu interface.
> > > > 
> > > > When qemu doesn't use kvm (tcg mode), things should still work fine.
> > > > Using vfio-based vgpu devices with non-qemu apps (some kind of test
> > > > suite for example) should work fine too.
> > > 
> > > Hi Gerd and Kevin,
> > > 
> > > I thought Alex had agreed to the UUID as long as it is not tied to a VM;
> > > his comment probably just got lost in our previous long email thread.
> > > 
> > 
> > I think the point is... what is the value of introducing a UUID here? If
> > what Gerd describes is enough, we can simply invent a vgpu ID which
> > is returned by vgpu_create and then used as the index for other
> > interfaces.
> > 
> 
> Hi Kevin,
> 
> It can just be a plain UUID, and the meaning of the UUID is up to the
> upper-layer SW. For example, with libvirt you can create a new "vgpu group"
> object representing a list of vgpu devices, so the UUID will be the input
> to vgpu_create instead of a return value.

Jumping in at the end, but yes, this was my thinking.  A UUID is a
perfectly fine name for a vgpu, but it should be user policy whether
that UUID matches a VM definition or is simply an arbitrary grouping of
vgpus.

> For TCG mode, this should just work as long as libvirt can create the
> proper internal objects there, plus the other vfio iommu interfaces Gerd
> has called out, although the vector->kvm_interrupt part might need some
> tweaks.

Interrupts should be eventfds and whether the eventfd triggers into
userspace or into an irqfd in KVM should be completely transparent to
the vgpu code, just as is done with vfio today.  Thanks,

Alex
