Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-06-01 Thread Jike Song
On 05/31/2016 10:29 PM, Alex Williamson wrote:
> On Tue, 31 May 2016 10:29:10 +0800
> Jike Song  wrote:
> 
>> On 05/28/2016 10:56 PM, Alex Williamson wrote:
>>> On Fri, 27 May 2016 22:43:54 +
>>> "Tian, Kevin"  wrote:
>>>   

 My impression was that you don't like hypervisor-specific things in VFIO,
 which makes it a bit tricky to accomplish those tasks in the kernel. If we
 could add Xen-specific logic directly in VFIO (like the vfio-iommu-xen you
 mentioned), the whole thing would be easier.
>>>
>>> If vfio is hosted in dom0, then Xen is the platform and we need to
>>> interact with the hypervisor to manage the iommu.  That said, there are
>>> aspects of vfio that do not seem to map well to a hypervisor managed
>>> iommu or a Xen-like hypervisor.  For instance, how does dom0 manage
>>> iommu groups and what's the distinction of using vfio to manage a
>>> userspace driver in dom0 versus managing a device for another domain.
>>> In the case of kvm, vfio has no dependency on kvm, there is some minor
>>> interaction, but we're not running on kvm and it's not appropriate to
>>> use vfio as a gateway to interact with a hypervisor that may or may not
>>> exist.  Thanks,  
>>
>> Hi Alex,
>>
>> Beyond the iommu, are there other aspects where vfio needs to interact
>> with Xen?  E.g. to pass through MMIO, one has to make hypercalls to
>> establish EPT mappings.
> 
> If it's part of running on a Xen platform and not trying to interact
> with a VM in ways that are out of scope for vfio, I might be open to
> it; I'd need to see a proposal.  This also goes back to my question of
> how vfio knows whether it's configuring a device for a guest driver
> or a guest VM; with kvm these are one and the same.  Thanks,


Yes, this brings us back to Kevin's suggestion:

> I'm not sure whether VFIO can support this usage today. It is somewhat
> similar to channel I/O passthrough on s390, where we also rely on Qemu to
> mediate ccw commands to ensure isolation. Maybe only a slight extension
> is required (e.g. not assuming that some API must be invoked). Of course
> the Qemu-side vfio code also needs some changes. If this can work, at
> least we can first use it as the enumeration interface for mediated
> devices in Xen. In the future it may be extended to cover normal Xen
> PCI assignment as well, instead of reading PCI resources through sysfs
> as is done today.
> 
> If the above works, then we have a sound plan to enable mediated devices 
> based on VFIO first for KVM, and then extend to Xen with reasonable 
> effort.

We'll work on the proposal, thanks!

--
Thanks,
Jike




Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-31 Thread Alex Williamson
On Tue, 31 May 2016 10:29:10 +0800
Jike Song  wrote:

> On 05/28/2016 10:56 PM, Alex Williamson wrote:
> > On Fri, 27 May 2016 22:43:54 +
> > "Tian, Kevin"  wrote:
> >   
> >>
> >> My impression was that you don't like hypervisor-specific things in VFIO,
> >> which makes it a bit tricky to accomplish those tasks in the kernel. If we
> >> could add Xen-specific logic directly in VFIO (like the vfio-iommu-xen you
> >> mentioned), the whole thing would be easier.
> > 
> > If vfio is hosted in dom0, then Xen is the platform and we need to
> > interact with the hypervisor to manage the iommu.  That said, there are
> > aspects of vfio that do not seem to map well to a hypervisor managed
> > iommu or a Xen-like hypervisor.  For instance, how does dom0 manage
> > iommu groups and what's the distinction of using vfio to manage a
> > userspace driver in dom0 versus managing a device for another domain.
> > In the case of kvm, vfio has no dependency on kvm, there is some minor
> > interaction, but we're not running on kvm and it's not appropriate to
> > use vfio as a gateway to interact with a hypervisor that may or may not
> > exist.  Thanks,  
> 
> Hi Alex,
> 
> Beyond the iommu, are there other aspects where vfio needs to interact
> with Xen?  E.g. to pass through MMIO, one has to make hypercalls to
> establish EPT mappings.

If it's part of running on a Xen platform and not trying to interact
with a VM in ways that are out of scope for vfio, I might be open to
it; I'd need to see a proposal.  This also goes back to my question of
how vfio knows whether it's configuring a device for a guest driver
or a guest VM; with kvm these are one and the same.  Thanks,

Alex



Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-30 Thread Jike Song
On 05/28/2016 10:56 PM, Alex Williamson wrote:
> On Fri, 27 May 2016 22:43:54 +
> "Tian, Kevin"  wrote:
> 
>>
> >> My impression was that you don't like hypervisor-specific things in VFIO,
> >> which makes it a bit tricky to accomplish those tasks in the kernel. If we
> >> could add Xen-specific logic directly in VFIO (like the vfio-iommu-xen you
> >> mentioned), the whole thing would be easier.
> 
> If vfio is hosted in dom0, then Xen is the platform and we need to
> interact with the hypervisor to manage the iommu.  That said, there are
> aspects of vfio that do not seem to map well to a hypervisor managed
> iommu or a Xen-like hypervisor.  For instance, how does dom0 manage
> iommu groups and what's the distinction of using vfio to manage a
> userspace driver in dom0 versus managing a device for another domain.
> In the case of kvm, vfio has no dependency on kvm, there is some minor
> interaction, but we're not running on kvm and it's not appropriate to
> use vfio as a gateway to interact with a hypervisor that may or may not
> exist.  Thanks,

Hi Alex,

Beyond the iommu, are there other aspects where vfio needs to interact
with Xen?  E.g. to pass through MMIO, one has to make hypercalls to
establish EPT mappings.


--
Thanks,
Jike



Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-28 Thread Alex Williamson
On Fri, 27 May 2016 22:43:54 +
"Tian, Kevin"  wrote:

> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, May 27, 2016 10:55 PM
> > 
> > On Fri, 27 May 2016 11:02:46 +
> > "Tian, Kevin"  wrote:
> >   
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > Sent: Wednesday, May 25, 2016 9:44 PM
> > > >
> > > > On Wed, 25 May 2016 07:13:58 +
> > > > "Tian, Kevin"  wrote:
> > > >  
> > > > > > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > > > > > Sent: Wednesday, May 25, 2016 3:58 AM
> > > > > >
> > > > > > This series adds Mediated device support to the v4.6 Linux host
> > > > > > kernel. The purpose of this series is to provide a common interface
> > > > > > for mediated device management that can be used by different
> > > > > > devices. This series introduces an Mdev core module that creates
> > > > > > and manages mediated devices, a VFIO-based driver for the mediated
> > > > > > PCI devices created by the Mdev core module, and updates to the
> > > > > > VFIO type1 IOMMU module to support mediated devices.
> > > > >
> > > > > Thanks. "Mediated device" is more generic than the previous one. :-)
> > > > >  
> > > > > >
> > > > > > What's new in v4?
> > > > > > - Renamed 'vgpu' module to 'mdev' module that represents the generic term
> > > > > >   'Mediated device'.
> > > > > > - Moved mdev directory to drivers/vfio directory as this is the 
> > > > > > extension
> > > > > >   of VFIO APIs for mediated devices.
> > > > > > - Updated mdev driver to be flexible to register multiple types of 
> > > > > > drivers
> > > > > >   to mdev_bus_type bus.
> > > > > > - Updated mdev core driver with mdev_put_device() and 
> > > > > > mdev_get_device() for
> > > > > >   mediated devices.
> > > > > >
> > > > > >  
> > > > >
> > > > > Just curious. In this version you move the whole mdev core under
> > > > > VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
> > > > > doesn't want VFIO to manage the mdev life cycle directly; instead,
> > > > > VFIO is just an mdev driver for the created mediated devices.
> > > >
> > > > I did originally suggest keeping them separate, but as we've progressed
> > > > through the implementation, it's become more clear that the mediated
> > > > device interface is very much tied to the vfio interface, acting mostly
> > > > as a passthrough.  So I thought it made sense to pull them together.
> > > > Still open to discussion of course.  Thanks,
> > > >  
> > >
> > > The main benefit of maintaining a separate mdev framework, IMHO, is
> > > to allow better support of both KVM and Xen. Xen doesn't work with VFIO
> > > today, because other VMs' memory is not allocated from Dom0, which
> > > means VFIO within Dom0 doesn't have the view/permission to control
> > > isolation for other VMs.
> > 
> > Isn't this just a matter of the vfio iommu model selected?  There could
> > be a vfio-iommu-xen that knows how to do the grant calls.
> >   
> > > However, after some thinking I think it might not be a big problem to
> > > combine VFIO/mdev together, if we extend Xen to just use VFIO for
> > > resource enumeration. In such a model, VFIO still behaves as a single
> > > kernel portal to enumerate mediated devices to user space, but gives up
> > > permission control to Qemu, which will request a secure agent - the Xen
> > > hypervisor - to ensure isolation of VM usage of the mediated device
> > > (including EPT/IOMMU configuration).
> > 
> > The whole point here is to use the vfio user api, and we seem to be
> > progressing towards using vfio-core as a conduit where the mediated
> > driver api is also fairly vfio-ish.  So it seems we're really headed
> > towards a vfio-mediated device rather than some sort of generic mediated
> > driver interface.  I would object to leaving permission control to
> > QEMU; QEMU is just a vfio user, there are others like DPDK.  The kernel
> > needs to be in charge of protecting itself and users from each other,
> > QEMU can't do this, which is part of the reason that KVM has moved to
> > vfio rather than the pci-sysfs resource interface.
> >   
> > > I'm not sure whether VFIO can support this usage today. It is somewhat
> > > similar to channel I/O passthrough on s390, where we also rely on Qemu to
> > > mediate ccw commands to ensure isolation. Maybe only a slight extension
> > > is required (e.g. not assuming that some API must be invoked). Of course
> > > the Qemu-side vfio code also needs some changes. If this can work, at
> > > least we can first use it as the enumeration interface for mediated
> > > devices in Xen. In the future it may be extended to cover normal Xen
> > > PCI assignment as well, instead of reading PCI resources through sysfs
> > > as is done today.
> > 
> > The channel I/O proposal doesn't rely on QEMU for security either; the
> > mediation occurs in the host kernel, parsing the ccw command program
> > and doing translations to replace the guest physical addresses with
> > verified and pinned host physical addresses before submitting the
> > program to be run.  A mediated device is policed by the mediated
> > vendor driver in the host kernel; QEMU is untrusted, just like any
> > other user.

Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-27 Thread Tian, Kevin
> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Friday, May 27, 2016 10:55 PM
> 
> On Fri, 27 May 2016 11:02:46 +
> "Tian, Kevin"  wrote:
> 
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Wednesday, May 25, 2016 9:44 PM
> > >
> > > On Wed, 25 May 2016 07:13:58 +
> > > "Tian, Kevin"  wrote:
> > >
> > > > > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > > > > Sent: Wednesday, May 25, 2016 3:58 AM
> > > > >
> > > > > This series adds Mediated device support to the v4.6 Linux host
> > > > > kernel. The purpose of this series is to provide a common interface
> > > > > for mediated device management that can be used by different
> > > > > devices. This series introduces an Mdev core module that creates
> > > > > and manages mediated devices, a VFIO-based driver for the mediated
> > > > > PCI devices created by the Mdev core module, and updates to the
> > > > > VFIO type1 IOMMU module to support mediated devices.
> > > >
> > > > Thanks. "Mediated device" is more generic than the previous one. :-)
> > > >
> > > > >
> > > > > What's new in v4?
> > > > > - Renamed 'vgpu' module to 'mdev' module that represents the generic term
> > > > >   'Mediated device'.
> > > > > - Moved mdev directory to drivers/vfio directory as this is the 
> > > > > extension
> > > > >   of VFIO APIs for mediated devices.
> > > > > - Updated mdev driver to be flexible to register multiple types of 
> > > > > drivers
> > > > >   to mdev_bus_type bus.
> > > > > - Updated mdev core driver with mdev_put_device() and 
> > > > > mdev_get_device() for
> > > > >   mediated devices.
> > > > >
> > > > >
> > > >
> > > > Just curious. In this version you move the whole mdev core under
> > > > VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
> > > > doesn't want VFIO to manage the mdev life cycle directly; instead,
> > > > VFIO is just an mdev driver for the created mediated devices.
> > >
> > > I did originally suggest keeping them separate, but as we've progressed
> > > through the implementation, it's become more clear that the mediated
> > > device interface is very much tied to the vfio interface, acting mostly
> > > as a passthrough.  So I thought it made sense to pull them together.
> > > Still open to discussion of course.  Thanks,
> > >
> >
> > The main benefit of maintaining a separate mdev framework, IMHO, is
> > to allow better support of both KVM and Xen. Xen doesn't work with VFIO
> > today, because other VMs' memory is not allocated from Dom0, which
> > means VFIO within Dom0 doesn't have the view/permission to control
> > isolation for other VMs.
> 
> Isn't this just a matter of the vfio iommu model selected?  There could
> be a vfio-iommu-xen that knows how to do the grant calls.
> 
> > However, after some thinking I think it might not be a big problem to
> > combine VFIO/mdev together, if we extend Xen to just use VFIO for
> > resource enumeration. In such a model, VFIO still behaves as a single
> > kernel portal to enumerate mediated devices to user space, but gives up
> > permission control to Qemu, which will request a secure agent - the Xen
> > hypervisor - to ensure isolation of VM usage of the mediated device
> > (including EPT/IOMMU configuration).
> 
> The whole point here is to use the vfio user api, and we seem to be
> progressing towards using vfio-core as a conduit where the mediated
> driver api is also fairly vfio-ish.  So it seems we're really headed
> towards a vfio-mediated device rather than some sort of generic mediated
> driver interface.  I would object to leaving permission control to
> QEMU; QEMU is just a vfio user, there are others like DPDK.  The kernel
> needs to be in charge of protecting itself and users from each other,
> QEMU can't do this, which is part of the reason that KVM has moved to
> vfio rather than the pci-sysfs resource interface.
> 
> > I'm not sure whether VFIO can support this usage today. It is somewhat
> > similar to channel I/O passthrough on s390, where we also rely on Qemu to
> > mediate ccw commands to ensure isolation. Maybe only a slight extension
> > is required (e.g. not assuming that some API must be invoked). Of course
> > the Qemu-side vfio code also needs some changes. If this can work, at
> > least we can first use it as the enumeration interface for mediated
> > devices in Xen. In the future it may be extended to cover normal Xen
> > PCI assignment as well, instead of reading PCI resources through sysfs
> > as is done today.
> 
> The channel I/O proposal doesn't rely on QEMU for security either; the
> mediation occurs in the host kernel, parsing the ccw command program
> and doing translations to replace the guest physical addresses with
> verified and pinned host physical addresses before submitting the
> program to be run.  A mediated device is policed by the mediated
> vendor driver in the host kernel; QEMU is untrusted, just like any
> other user.
> 
> If xen is currently using pci-sysfs for mapping device resources, then
> vfio should be directly usable, which leaves the IOMMU interfaces, such
> as pinning and mapping user memory and making use of the IOMMU API;
> that part of vfio is fairly modular, though IOMMU groups are a fairly
> fundamental concept within the core.  Thanks,

Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-27 Thread Alex Williamson
On Fri, 27 May 2016 11:02:46 +
"Tian, Kevin"  wrote:

> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Wednesday, May 25, 2016 9:44 PM
> > 
> > On Wed, 25 May 2016 07:13:58 +
> > "Tian, Kevin"  wrote:
> >   
> > > > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > > > Sent: Wednesday, May 25, 2016 3:58 AM
> > > >
> > > > This series adds Mediated device support to the v4.6 Linux host
> > > > kernel. The purpose of this series is to provide a common interface
> > > > for mediated device management that can be used by different
> > > > devices. This series introduces an Mdev core module that creates
> > > > and manages mediated devices, a VFIO-based driver for the mediated
> > > > PCI devices created by the Mdev core module, and updates to the
> > > > VFIO type1 IOMMU module to support mediated devices.
> > >
> > > Thanks. "Mediated device" is more generic than the previous one. :-)
> > >  
> > > >
> > > > What's new in v4?
> > > > - Renamed 'vgpu' module to 'mdev' module that represents the generic term
> > > >   'Mediated device'.
> > > > - Moved mdev directory to drivers/vfio directory as this is the 
> > > > extension
> > > >   of VFIO APIs for mediated devices.
> > > > - Updated mdev driver to be flexible to register multiple types of 
> > > > drivers
> > > >   to mdev_bus_type bus.
> > > > - Updated mdev core driver with mdev_put_device() and mdev_get_device() 
> > > > for
> > > >   mediated devices.
> > > >
> > > >  
> > >
> > > Just curious. In this version you move the whole mdev core under
> > > VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
> > > doesn't want VFIO to manage the mdev life cycle directly; instead,
> > > VFIO is just an mdev driver for the created mediated devices.
> > 
> > I did originally suggest keeping them separate, but as we've progressed
> > through the implementation, it's become more clear that the mediated
> > device interface is very much tied to the vfio interface, acting mostly
> > as a passthrough.  So I thought it made sense to pull them together.
> > Still open to discussion of course.  Thanks,
> >   
> 
> The main benefit of maintaining a separate mdev framework, IMHO, is
> to allow better support of both KVM and Xen. Xen doesn't work with VFIO
> today, because other VMs' memory is not allocated from Dom0, which
> means VFIO within Dom0 doesn't have the view/permission to control
> isolation for other VMs.

Isn't this just a matter of the vfio iommu model selected?  There could
be a vfio-iommu-xen that knows how to do the grant calls.

> However, after some thinking I think it might not be a big problem to
> combine VFIO/mdev together, if we extend Xen to just use VFIO for
> resource enumeration. In such a model, VFIO still behaves as a single
> kernel portal to enumerate mediated devices to user space, but gives up
> permission control to Qemu, which will request a secure agent - the Xen
> hypervisor - to ensure isolation of VM usage of the mediated device
> (including EPT/IOMMU configuration).

The whole point here is to use the vfio user api, and we seem to be
progressing towards using vfio-core as a conduit where the mediated
driver api is also fairly vfio-ish.  So it seems we're really headed
towards a vfio-mediated device rather than some sort of generic mediated
driver interface.  I would object to leaving permission control to
QEMU; QEMU is just a vfio user, there are others like DPDK.  The kernel
needs to be in charge of protecting itself and users from each other,
QEMU can't do this, which is part of the reason that KVM has moved to
vfio rather than the pci-sysfs resource interface.
 
> I'm not sure whether VFIO can support this usage today. It is somewhat
> similar to channel I/O passthrough on s390, where we also rely on Qemu to
> mediate ccw commands to ensure isolation. Maybe only a slight extension
> is required (e.g. not assuming that some API must be invoked). Of course
> the Qemu-side vfio code also needs some changes. If this can work, at
> least we can first use it as the enumeration interface for mediated
> devices in Xen. In the future it may be extended to cover normal Xen
> PCI assignment as well, instead of reading PCI resources through sysfs
> as is done today.

The channel I/O proposal doesn't rely on QEMU for security either; the
mediation occurs in the host kernel, parsing the ccw command program
and doing translations to replace the guest physical addresses with
verified and pinned host physical addresses before submitting the
program to be run.  A mediated device is policed by the mediated
vendor driver in the host kernel; QEMU is untrusted, just like any
other user.

If xen is currently using pci-sysfs for mapping device resources, then
vfio should be directly usable, which leaves the IOMMU interfaces, such
as pinning and mapping user memory and making use of the IOMMU API;
that part of vfio is fairly modular, though IOMMU groups are a fairly
fundamental concept within the core.  Thanks,

Alex



Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-27 Thread Tian, Kevin
> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Wednesday, May 25, 2016 9:44 PM
> 
> On Wed, 25 May 2016 07:13:58 +
> "Tian, Kevin"  wrote:
> 
> > > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > > Sent: Wednesday, May 25, 2016 3:58 AM
> > >
> > > This series adds Mediated device support to the v4.6 Linux host
> > > kernel. The purpose of this series is to provide a common interface
> > > for mediated device management that can be used by different
> > > devices. This series introduces an Mdev core module that creates
> > > and manages mediated devices, a VFIO-based driver for the mediated
> > > PCI devices created by the Mdev core module, and updates to the
> > > VFIO type1 IOMMU module to support mediated devices.
> >
> > Thanks. "Mediated device" is more generic than the previous one. :-)
> >
> > >
> > > What's new in v4?
> > > - Renamed 'vgpu' module to 'mdev' module that represents the generic term
> > >   'Mediated device'.
> > > - Moved mdev directory to drivers/vfio directory as this is the extension
> > >   of VFIO APIs for mediated devices.
> > > - Updated mdev driver to be flexible to register multiple types of drivers
> > >   to mdev_bus_type bus.
> > > - Updated mdev core driver with mdev_put_device() and mdev_get_device() 
> > > for
> > >   mediated devices.
> > >
> > >
> >
> > Just curious. In this version you move the whole mdev core under
> > VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
> > doesn't want VFIO to manage the mdev life cycle directly; instead,
> > VFIO is just an mdev driver for the created mediated devices.
> 
> I did originally suggest keeping them separate, but as we've progressed
> through the implementation, it's become more clear that the mediated
> device interface is very much tied to the vfio interface, acting mostly
> as a passthrough.  So I thought it made sense to pull them together.
> Still open to discussion of course.  Thanks,
> 

The main benefit of maintaining a separate mdev framework, IMHO, is
to allow better support of both KVM and Xen. Xen doesn't work with VFIO
today, because other VMs' memory is not allocated from Dom0, which
means VFIO within Dom0 doesn't have the view/permission to control
isolation for other VMs.

However, after some thinking I think it might not be a big problem to
combine VFIO/mdev together, if we extend Xen to just use VFIO for
resource enumeration. In such a model, VFIO still behaves as a single
kernel portal to enumerate mediated devices to user space, but gives up
permission control to Qemu, which will request a secure agent - the Xen
hypervisor - to ensure isolation of VM usage of the mediated device
(including EPT/IOMMU configuration).

I'm not sure whether VFIO can support this usage today. It is somewhat
similar to channel I/O passthrough on s390, where we also rely on Qemu to
mediate ccw commands to ensure isolation. Maybe only a slight extension
is required (e.g. not assuming that some API must be invoked). Of course
the Qemu-side vfio code also needs some changes. If this can work, at
least we can first use it as the enumeration interface for mediated
devices in Xen. In the future it may be extended to cover normal Xen
PCI assignment as well, instead of reading PCI resources through sysfs
as is done today.

If the above works, then we have a sound plan to enable mediated devices 
based on VFIO first for KVM, and then extend to Xen with reasonable 
effort.
 
What do you think about it?

Thanks
Kevin



Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-25 Thread Alex Williamson
On Wed, 25 May 2016 07:13:58 +
"Tian, Kevin"  wrote:

> > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > Sent: Wednesday, May 25, 2016 3:58 AM
> > 
> > This series adds Mediated device support to the v4.6 Linux host kernel.
> > The purpose of this series is to provide a common interface for mediated
> > device management that can be used by different devices. This series
> > introduces an Mdev core module that creates and manages mediated devices,
> > a VFIO-based driver for the mediated PCI devices created by the Mdev core
> > module, and updates to the VFIO type1 IOMMU module to support mediated
> > devices.
> 
> Thanks. "Mediated device" is more generic than the previous one. :-)
> 
> > 
> > What's new in v4?
> > - Renamed 'vgpu' module to 'mdev' module that represents the generic term
> >   'Mediated device'.
> > - Moved mdev directory to drivers/vfio directory as this is the extension
> >   of VFIO APIs for mediated devices.
> > - Updated mdev driver to be flexible to register multiple types of drivers
> >   to mdev_bus_type bus.
> > - Updated mdev core driver with mdev_put_device() and mdev_get_device() for
> >   mediated devices.
> > 
> >   
> 
> Just curious. In this version you move the whole mdev core under
> VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
> doesn't want VFIO to manage the mdev life cycle directly; instead,
> VFIO is just an mdev driver for the created mediated devices.

I did originally suggest keeping them separate, but as we've progressed
through the implementation, it's become more clear that the mediated
device interface is very much tied to the vfio interface, acting mostly
as a passthrough.  So I thought it made sense to pull them together.
Still open to discussion of course.  Thanks,

Alex



Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-25 Thread Tian, Kevin
> From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> Sent: Wednesday, May 25, 2016 3:58 AM
> 
> This series adds Mediated device support to the v4.6 Linux host kernel.
> The purpose of this series is to provide a common interface for mediated
> device management that can be used by different devices. This series
> introduces an Mdev core module that creates and manages mediated devices,
> a VFIO-based driver for the mediated PCI devices created by the Mdev core
> module, and updates to the VFIO type1 IOMMU module to support mediated
> devices.

Thanks. "Mediated device" is more generic than the previous one. :-)

> 
> What's new in v4?
> - Renamed 'vgpu' module to 'mdev' module that represents the generic term
>   'Mediated device'.
> - Moved mdev directory to drivers/vfio directory as this is the extension
>   of VFIO APIs for mediated devices.
> - Updated mdev driver to be flexible to register multiple types of drivers
>   to mdev_bus_type bus.
> - Updated mdev core driver with mdev_put_device() and mdev_get_device() for
>   mediated devices.
> 
> 

Just curious. In this version you move the whole mdev core under
VFIO now. Sorry if I missed any agreement on this change. IIRC Alex
doesn't want VFIO to manage the mdev life cycle directly; instead,
VFIO is just an mdev driver for the created mediated devices.

Thanks
Kevin



[Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]

2016-05-24 Thread Kirti Wankhede
This series adds Mediated device support to the v4.6 Linux host kernel.
The purpose of this series is to provide a common interface for mediated
device management that can be used by different devices. This series
introduces an Mdev core module that creates and manages mediated devices,
a VFIO-based driver for the mediated PCI devices created by the Mdev core
module, and updates to the VFIO type1 IOMMU module to support mediated
devices.

What's new in v4?
- Renamed 'vgpu' module to 'mdev' module that represents the generic term
  'Mediated device'.
- Moved mdev directory to drivers/vfio directory as this is the extension
  of VFIO APIs for mediated devices.
- Updated mdev driver to be flexible to register multiple types of drivers
  to mdev_bus_type bus.
- Updated mdev core driver with mdev_put_device() and mdev_get_device() for
  mediated devices.


What's left to do?
The VFIO driver for vGPU devices doesn't support devices with MSI-X enabled.

Please review.

Kirti Wankhede (3):
  Mediated device Core driver
  VFIO driver for mediated PCI device
  VFIO Type1 IOMMU: Add support for mediated devices

 drivers/vfio/Kconfig|   1 +
 drivers/vfio/Makefile   |   1 +
 drivers/vfio/mdev/Kconfig   |  18 +
 drivers/vfio/mdev/Makefile  |   6 +
 drivers/vfio/mdev/mdev-core.c   | 462 +
 drivers/vfio/mdev/mdev-driver.c | 139 
 drivers/vfio/mdev/mdev-sysfs.c  | 312 +
 drivers/vfio/mdev/mdev_private.h|  33 ++
 drivers/vfio/mdev/vfio_mpci.c   | 648 
 drivers/vfio/pci/vfio_pci_private.h |   6 -
 drivers/vfio/pci/vfio_pci_rdwr.c|   1 +
 drivers/vfio/vfio_iommu_type1.c | 433 ++--
 include/linux/mdev.h| 224 +
 include/linux/vfio.h|  13 +
 14 files changed, 2259 insertions(+), 38 deletions(-)
 create mode 100644 drivers/vfio/mdev/Kconfig
 create mode 100644 drivers/vfio/mdev/Makefile
 create mode 100644 drivers/vfio/mdev/mdev-core.c
 create mode 100644 drivers/vfio/mdev/mdev-driver.c
 create mode 100644 drivers/vfio/mdev/mdev-sysfs.c
 create mode 100644 drivers/vfio/mdev/mdev_private.h
 create mode 100644 drivers/vfio/mdev/vfio_mpci.c
 create mode 100644 include/linux/mdev.h

-- 
2.7.0