> From: Tian, Kevin
> Sent: Thursday, February 04, 2016 11:02 AM
>
> >
> > Thanks for your summary, Kevin. It does seem like there are only a few
> > outstanding issues which should be manageable, and hopefully the overall
> > approach is cleaner for QEMU and management tools, and provides a more
> > consistent user interface as well. If we can translate the solution to
> > Xen, that's even better. Thanks,
> >
>
> Here is the main open issue on my mind, after thinking about the role of
> VFIO:
>
> The above seven services required by the vGPU device model fall into two
> categories:
>
> a) services to connect the vGPU with the VM, which are essentially what a
> device driver does (so VFIO fits here), including:
>    1) Selectively pass through a region to a VM
>    2) Trap-and-emulate a region
>    3) Inject a virtual interrupt
>    5) Pin/unpin guest memory
>    7) GPA->IOVA/HVA translation (as a side effect)
>
> b) services to support device emulation, which are going to be hypervisor
> specific, including:
>    4) Map/unmap guest memory
>    6) Write-protect a guest memory page
>
> VFIO can fulfill category a), but not b). A possible abstraction would be
> in the vGPU core driver: allow a specific hypervisor to register callbacks
> for category b) (which means a KVMGT-specific file, say KVM-vGPU, will be
> added to KVM to connect the two together).
>
> A likely layering of the blocks would then look like:
>
> VFIO-vGPU <---------> vGPU Core <-------------> KVMGT-vGPU
>                       ^       ^
>                       |       |
>                       |       |
>                       v       v
>                    nvidia    intel
>                     vGPU      vGPU
>
> Xen will register its own vGPU bus driver (not using VFIO today) and also
> its hypervisor services, using the same framework. With this design,
> everything is abstracted/registered through the vGPU core driver instead
> of the components talking to each other directly.
>
> Thoughts?
>
> P.S. From the description of the above requirements, the whole framework
> might also be extended to cover any device type using the same mediated
> pass-through approach. Though graphics has some special requirements, the
> majority are actually device agnostic. Maybe it's better not to limit it
> with a vGPU name at all. :-)
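To make the proposed split concrete, here is a rough sketch of what the
vGPU core driver interfaces might look like. Every name below is made up
purely for illustration, not a proposal for the final API:

/* Rough sketch only -- all names are hypothetical. */

#include <linux/types.h>

struct vgpu_device;

/*
 * Category b) services, registered by a hypervisor-specific module
 * (e.g. a KVM connector for KVMGT, or a Xen equivalent).
 */
struct vgpu_hypervisor_ops {
	/* 4) Map/unmap guest memory for the device model's own use */
	void *(*map_guest_page)(struct vgpu_device *vgpu, unsigned long gfn);
	void (*unmap_guest_page)(struct vgpu_device *vgpu, void *va);

	/* 6) Write-protect a guest page, e.g. to shadow GPU page tables */
	int (*set_wp_page)(struct vgpu_device *vgpu, unsigned long gfn);
	int (*unset_wp_page)(struct vgpu_device *vgpu, unsigned long gfn);
};

/*
 * Vendor device models (nvidia/intel) likewise register with the vGPU
 * core rather than talking to VFIO or the hypervisor directly.
 */
struct vgpu_vendor_ops {
	int	(*create)(struct vgpu_device *vgpu);
	void	(*destroy)(struct vgpu_device *vgpu);
	ssize_t	(*read)(struct vgpu_device *vgpu, char *buf,
			size_t count, loff_t pos);
	ssize_t	(*write)(struct vgpu_device *vgpu, char *buf,
			size_t count, loff_t pos);
};

/* Both sides plug into the vGPU core, matching the diagram above. */
int vgpu_register_hypervisor_ops(const struct vgpu_hypervisor_ops *ops);
int vgpu_register_vendor_ops(const struct vgpu_vendor_ops *ops);

The point is simply that category a) stays behind VFIO, while category b)
is reached only through callbacks the hypervisor module registered.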
Any feedback on the above open issue? BTW, based on the above description,
I believe the interaction between VFIO and vGPU has become very clear. The
remaining two services concern how a hypervisor provides emulation services
to a vendor-specific vGPU device model (more generally, this is not vGPU
specific; it can apply to any in-kernel emulation requirement, so KVMGT-vGPU
might not be a good name). That part is not related to VFIO at all, so we'll
start prototyping the VFIO-related changes in parallel.

Since this is related to KVM, Paolo, your comment is also welcome. :-)

Thanks,
Kevin
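P.S. For the KVM side, a first stab at backing those two callbacks could
look roughly like the sketch below. vgpu_to_kvm() is a hypothetical helper,
and as far as I know KVM today exports nothing that write-protects a single
gfn on behalf of a module -- that is precisely the new service this layer
would have to add:

/* Illustrative sketch of a KVM-side connector -- not working code. */

#include <linux/kvm_host.h>
#include <linux/highmem.h>

static void *kvm_vgpu_map_guest_page(struct vgpu_device *vgpu,
				     unsigned long gfn)
{
	struct kvm *kvm = vgpu_to_kvm(vgpu);	/* hypothetical lookup */
	kvm_pfn_t pfn;

	/* gfn_to_pfn() also takes a page reference, covering service 5) */
	pfn = gfn_to_pfn(kvm, gfn);
	if (is_error_pfn(pfn))
		return NULL;

	return kmap(pfn_to_page(pfn));
}

static void kvm_vgpu_unmap_guest_page(struct vgpu_device *vgpu, void *va)
{
	struct page *page = kmap_to_page(va);

	kunmap(page);
	kvm_release_page_clean(page);	/* drop the gfn_to_pfn() reference */
}

static int kvm_vgpu_set_wp_page(struct vgpu_device *vgpu, unsigned long gfn)
{
	/*
	 * Placeholder: per-gfn write protection for an external module is
	 * exactly the missing KVM service discussed above.
	 */
	return -EOPNOTSUPP;
}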