RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-26 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Saturday, July 20, 2019 4:58 AM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Fri, 12 Jul 2019 12:55:27 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Friday, July 12, 2019 3:08 AM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > >
> > > On Thu, 11 Jul 2019 12:27:26 +
> > > "Liu, Yi L"  wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > > From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
> > > Behalf
> > > > > Of Alex Williamson
> > > > > Sent: Friday, July 5, 2019 11:55 PM
> > > > > To: Liu, Yi L 
> > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > >
> > > > > On Thu, 4 Jul 2019 09:11:02 +0000
> > > > > "Liu, Yi L"  wrote:
> > > > >
> > > > > > Hi Alex,
> > > > > >
> > > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > > Sent: Thursday, July 4, 2019 1:22 AM
> > > > > > > To: Liu, Yi L 
> > > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > [...]
[...]
> > > Maybe what you're getting at is that vfio needs to understand
> > > that the mdev is a child of the endpoint device in its determination of
> > > whether the group is viable.
> >
> > Is the group here the group of the iommu_device or the group of a mdev
> > device? :-) Actually, I think the group of a mdev device is always viable,
> > since it has only one device and mdev_driver adds the mdev device to the
> > vfio controlled scope, which makes the mdev group viable. Per my
> > understanding, VFIO guarantees the isolation by two major parts. First is
> > checking whether a group is viable before adding it to a container; second
> > is preventing multiple opens of /dev/vfio/group_id via the
> > vfio_group->opened field maintained in vfio.c.
> 
> Yes, minor nit, an mdev needs to be bound to vfio-mdev for the group to
> be vfio "viable", we expect that there will eventually be non-vfio
> drivers for mdev devices.

Then I guess such an mdev group would be non-viable in vfio's view. :-)
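For reference, the viability check under discussion lives in drivers/vfio/vfio.c; a simplified sketch of the 2019-era logic (approximate, not a verbatim copy):

/* Simplified sketch of vfio.c's group-viability test (2019-era kernels).
 * A group is "viable" only if every device in it is driverless, bound to
 * a whitelisted stub driver, or already known to vfio.
 */
static int vfio_dev_viable(struct device *dev, void *data)
{
	struct vfio_group *group = data;
	struct vfio_device *device;
	struct device_driver *drv = READ_ONCE(dev->driver);

	if (!drv || vfio_dev_whitelisted(dev, drv))
		return 0;

	/* Bound to a vfio driver?  Then vfio already tracks this device. */
	device = vfio_group_get_device(group, dev);
	if (device) {
		vfio_device_put(device);
		return 0;
	}

	return -EINVAL;	/* bound to some other host driver: not viable */
}

/* Invoked for every device in the group via iommu_group_for_each_dev(). */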

> > Back to the configuration we are talking about here (for example, a group
> > where one device is bound to a native host driver and the other device
> > bound to a vfio driver[1]), we have two groups (the iommu backed one and
> > the mdev group). I think for an iommu_device which wants to "donate" its
> > iommu_group, the host driver should explicitly call vfio_add_group_dev()
> > to add itself to the vfio controlled scope, and thus make its iommu backed
> > group viable. That way we can have two viable iommu groups: the iommu
> > backed group is viable through the host driver's vfio_add_group_dev()
> > call, and the mdev group is naturally viable. Up to this point, we can
> > pass the devices (a vfio-pci device and a mdev device) through to a VM
> > just fine under this configuration. But we cannot prevent the user from
> > passing the devices through to different VMs, since the two iommu groups
> > are both viable. If my understanding of vfio is still correct up to this
> > line, I think we need to fail the attempt to pass through to multiple VMs
> > in vfio_iommu_type1_attach_group() by checking the vfio_group->opened
> > field maintained in vfio.c. e.g. say we have vfio_group#1 for the iommu
> > backed group and vfio_group#2 for the mdev group in vfio.c; then opening
> > vfio_group#1 requires incrementing vfio_group#2->opened, and vice versa.
> >
> > [1] the example from your previous reply.
> 
> I think there's a problem with incrementing the group, the user still
> needs to be able to open the group for devices within the group that
> may be bound to vfio-pci, so I don't think this plan really works.

Perhaps I failed to make it clear. Let me explain. By incrementing the
group open count, vfio can prevent a single iommu group from being passed
through to different QEMU instances (VMs). Once a QEMU process opens a
group, it cannot be opened again. e.g. the current QEMU implementation
checks the vfio_group_list in vfio_get_group() before opening the group
for an assigned device, and thus avoids opening it multiple times within
one QEMU process. This makes sense since kernel VFIO will attach all the
devices within a given iommu group to the all
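The QEMU-side check referred to above is in hw/vfio/common.c. A condensed sketch, simplified from QEMU's vfio_get_group() (details approximate):

/* Condensed sketch of QEMU's vfio_get_group() (hw/vfio/common.c).
 * A group that is already open for a different AddressSpace is refused,
 * which is what keeps one kernel iommu group from being split across VMs.
 */
VFIOGroup *vfio_get_group(int groupid, AddressSpace *as, Error **errp)
{
    VFIOGroup *group;

    QLIST_FOREACH(group, &vfio_group_list, next) {
        if (group->groupid == groupid) {
            /* Found it.  Is it already used in this address space? */
            if (group->container->space->as == as)
                return group;
            error_setg(errp, "group %d used in multiple address spaces",
                       groupid);
            return NULL;
        }
    }

    /* ... otherwise open /dev/vfio/<groupid> and set up a new VFIOGroup ... */
}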

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-19 Thread Alex Williamson
On Fri, 12 Jul 2019 12:55:27 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, July 12, 2019 3:08 AM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Thu, 11 Jul 2019 12:27:26 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >  
> > > > From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On  
> > Behalf  
> > > > Of Alex Williamson
> > > > Sent: Friday, July 5, 2019 11:55 PM
> > > > To: Liu, Yi L 
> > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > >
> > > > On Thu, 4 Jul 2019 09:11:02 +
> > > > "Liu, Yi L"  wrote:
> > > >  
> > > > > Hi Alex,
> > > > >  
> > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > Sent: Thursday, July 4, 2019 1:22 AM
> > > > > > To: Liu, Yi L 
> > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver  
> > > [...]  
> > > > >  
> > > > > > It's really unfortunate that we don't have the mdev inheriting the
> > > > > > iommu group of the iommu_device so that userspace can really 
> > > > > > understand
> > > > > > this relationship.  A separate group makes sense for the aux-domain
> > > > > > case, and is (I guess) not a significant issue in the case of a
> > > > > > singleton iommu_device group, but it's pretty awkward here.  Perhaps
> > > > > > this is something we should correct in design of iommu backed 
> > > > > > mdevs.  
> > > > >
> > > > > Yeah, for the aux-domain case, it is not a significant issue, as
> > > > > aux-domain essentially means a singleton iommu_device group. And in
> > > > > early time, when designing the support for wrapping a PCI device as a
> > > > > mdev, we also considered letting vfio-mdev-pci reuse the iommu_device
> > > > > group. But this results in an iommu backed group that includes mdev
> > > > > and physical devices, which might also be strange. Do you think it is
> > > > > valuable to reconsider it?
> > > >
> > > > From a group perspective, the cleanest solution would seem to be that
> > > > IOMMU backed mdevs w/o aux domain support should inherit the IOMMU
> > > > group of the iommu_device,  
> > >
> > > To confirm: regarding inheriting the IOMMU group of the iommu_device, do
> > > you mean the mdev device should be added to the IOMMU group of the
> > > iommu_device, or that a parent/inheritor relationship should be
> > > maintained within vfio? I guess you mean the latter one? :-)
> > 
> > I was thinking the former, I'm not sure what the latter implies.  There
> > is no hierarchy within or between IOMMU groups, it's simply a set of
> > devices.  
> 
> I have a concern with adding the mdev device to the iommu_group of the
> iommu_device. In such a configuration, an iommu backed group includes both
> mdev devices and physical devices. Then it might be necessary to advertise
> the mdev info to in-kernel software that wants to loop over all devices
> within such an iommu_group. An example I can see is the virtual SVA threads
> in the community. e.g. for a guest pasid bind, the changes below loop over
> all the devices within an iommu_group, and each iteration calls into the
> vendor iommu driver with a device structure passed in. It is quite possible
> that the vendor iommu driver needs to get something behind a physical
> device (e.g. the intel_iommu structure). For a physical device, that is
> fine, while for a mdev device it would be a problem if no mdev info is
> advertised to the iommu driver. :-( Although we have agreement that PASID
> support should be disabled for devices from non-singleton groups, I don't
> feel comfortable relying on such assumptions when designing software flows.
> Also, it's just an example; we have no idea whether there will be more
> similar flows that need to loop over all devices in an iommu group in the
> future. Maybe we want to avoid adding a mdev to an iommu backed group. :-)
> More replies to your response below.
> 
> +static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
> +					  void __user *arg,
> +					  struct vfio_iommu_type1_bind *bind)
> +{
> +	...
> +	list_for_each_entry(domain, &iommu->domain_list, next) {
> +		list_for_each_entry(group, &domain->group_list, next) {
> +			ret = iommu_group_for_each_dev(group->iommu_group,
> +						       &guest_bind,
> +						       vfio_bind_gpasid_fn);
> +			if (ret)
> +				goto out_unbind;
> +		}
> +	}
> +	...
> +}

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-12 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Friday, July 12, 2019 3:08 AM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Thu, 11 Jul 2019 12:27:26 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
> Behalf
> > > Of Alex Williamson
> > > Sent: Friday, July 5, 2019 11:55 PM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > >
> > > On Thu, 4 Jul 2019 09:11:02 +
> > > "Liu, Yi L"  wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > Sent: Thursday, July 4, 2019 1:22 AM
> > > > > To: Liu, Yi L 
> > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > [...]
> > > >
> > > > > It's really unfortunate that we don't have the mdev inheriting the
> > > > > iommu group of the iommu_device so that userspace can really 
> > > > > understand
> > > > > this relationship.  A separate group makes sense for the aux-domain
> > > > > case, and is (I guess) not a significant issue in the case of a
> > > > > singleton iommu_device group, but it's pretty awkward here.  Perhaps
> > > > > this is something we should correct in design of iommu backed mdevs.
> > > >
> > > > Yeah, for the aux-domain case, it is not a significant issue, as
> > > > aux-domain essentially means a singleton iommu_device group. And in
> > > > early time, when designing the support for wrapping a PCI device as a
> > > > mdev, we also considered letting vfio-mdev-pci reuse the iommu_device
> > > > group. But this results in an iommu backed group that includes mdev and
> > > > physical devices, which might also be strange. Do you think it is
> > > > valuable to reconsider it?
> > >
> > > From a group perspective, the cleanest solution would seem to be that
> > > IOMMU backed mdevs w/o aux domain support should inherit the IOMMU
> > > group of the iommu_device,
> >
> > To confirm: regarding inheriting the IOMMU group of the iommu_device, do
> > you mean the mdev device should be added to the IOMMU group of the
> > iommu_device, or that a parent/inheritor relationship should be maintained
> > within vfio? I guess you mean the latter one? :-)
> 
> I was thinking the former, I'm not sure what the latter implies.  There
> is no hierarchy within or between IOMMU groups, it's simply a set of
> devices.

I have a concern with adding the mdev device to the iommu_group of the
iommu_device. In such a configuration, an iommu backed group includes both
mdev devices and physical devices. Then it might be necessary to advertise
the mdev info to in-kernel software that wants to loop over all devices
within such an iommu_group. An example I can see is the virtual SVA threads
in the community. e.g. for a guest pasid bind, the changes below loop over
all the devices within an iommu_group, and each iteration calls into the
vendor iommu driver with a device structure passed in. It is quite possible
that the vendor iommu driver needs to get something behind a physical device
(e.g. the intel_iommu structure). For a physical device, that is fine, while
for a mdev device it would be a problem if no mdev info is advertised to the
iommu driver. :-( Although we have agreement that PASID support should be
disabled for devices from non-singleton groups, I don't feel comfortable
relying on such assumptions when designing software flows. Also, it's just
an example; we have no idea whether there will be more similar flows that
need to loop over all devices in an iommu group in the future. Maybe we want
to avoid adding a mdev to an iommu backed group. :-) More replies to your
response below.

+static long vfio_iommu_type1_bind_gpasid(struct vfio_iommu *iommu,
+					  void __user *arg,
+					  struct vfio_iommu_type1_bind *bind)
+{
+	...
+	list_for_each_entry(domain, &iommu->domain_list, next) {
+		list_for_each_entry(group, &domain->group_list, next) {
+			ret = iommu_group_for_each_dev(group->iommu_group,
+						       &guest_bind,
+						       vfio_bind_gpasid_fn);
+			if (ret)
+				goto out_unbind;
+		}
+	}
+	...
+}
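To make the concern concrete, the per-device callback in such a loop would look something like the sketch below. The names vfio_bind_gpasid_fn, guest_bind_data and dev_to_iommu_domain() are illustrative placeholders, not code from the posted series:

/* Illustrative sketch only: a per-device callback like this is where an
 * mdev sitting in an iommu backed group becomes a problem.  The helper
 * and struct names here are hypothetical.
 */
static int vfio_bind_gpasid_fn(struct device *dev, void *data)
{
	struct guest_bind_data *guest_bind = data;

	/*
	 * The vendor IOMMU driver dereferences driver-private data behind
	 * the physical device (e.g. the intel_iommu structure).  An mdev
	 * has no such data, so the call below would fail or misbehave
	 * unless the mdev is recognized and its parent device substituted
	 * first.
	 */
	return iommu_sva_bind_gpasid(dev_to_iommu_domain(dev), dev,
				     guest_bind->bind_data);
}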

> Maybe what you're getting at is that vfio needs to understand
> that the mdev is a child of the endpoint device in its determination of
> whether the group is viable.

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-11 Thread Alex Williamson
On Thu, 11 Jul 2019 12:27:26 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf
> > Of Alex Williamson
> > Sent: Friday, July 5, 2019 11:55 PM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Thu, 4 Jul 2019 09:11:02 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >  
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > Sent: Thursday, July 4, 2019 1:22 AM
> > > > To: Liu, Yi L 
> > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver  
> [...]
> > >  
> > > > It's really unfortunate that we don't have the mdev inheriting the
> > > > iommu group of the iommu_device so that userspace can really understand
> > > > this relationship.  A separate group makes sense for the aux-domain
> > > > case, and is (I guess) not a significant issue in the case of a
> > > > singleton iommu_device group, but it's pretty awkward here.  Perhaps
> > > > this is something we should correct in design of iommu backed mdevs.  
> > >
> > > Yeah, for the aux-domain case, it is not a significant issue, as
> > > aux-domain essentially means a singleton iommu_device group. And in
> > > early time, when designing the support for wrapping a PCI device as a
> > > mdev, we also considered letting vfio-mdev-pci reuse the iommu_device
> > > group. But this results in an iommu backed group that includes mdev and
> > > physical devices, which might also be strange. Do you think it is
> > > valuable to reconsider it?
> > 
> > From a group perspective, the cleanest solution would seem to be that
> > IOMMU backed mdevs w/o aux domain support should inherit the IOMMU
> > group of the iommu_device,  
> 
> To confirm: regarding inheriting the IOMMU group of the iommu_device, do
> you mean the mdev device should be added to the IOMMU group of the
> iommu_device, or that a parent/inheritor relationship should be maintained
> within vfio? I guess you mean the latter one? :-)

I was thinking the former, I'm not sure what the latter implies.  There
is no hierarchy within or between IOMMU groups, it's simply a set of
devices.  Maybe what you're getting at is that vfio needs to understand
that the mdev is a child of the endpoint device in its determination of
whether the group is viable.  That's true, but we can also have IOMMU
groups composed of SR-IOV VFs along with their parent PF if the root of
the IOMMU group is (for example) a downstream switch port above the PF.
So we can't simply look at the parent/child relationship within the
group, we somehow need to know that the parent device sharing the IOMMU
group is operating in host kernel space on behalf of the mdev.
 
> > but I think the barrier here is that we have
> > a difficult time determining if the group is "viable" in that case.
> > For example a group where one device is bound to a native host driver
> > and the other device bound to a vfio driver would typically be
> > considered non-viable as it breaks the isolation guarantees.  However  
> 
> yes, this is how vfio guarantees the isolation before allowing the user to
> further add a group to a vfio container and so on.
> 
> > I think in this configuration, the parent device is effectively
> > participating in the isolation and "donating" its iommu group on behalf
> > of the mdev device.  I don't think we can simultaneously use that iommu
> > group for any other purpose.   
> 
> Agree. At least the host cannot make use of the iommu group any more in
> such a configuration.
> 
> > I'm sure we could come up with a way for
> > vfio-core to understand this relationship and add it to the white list,
> 
> The configuration here is that the host driver still exists while we want
> to let the mdev device somehow "own" the iommu backed DMA isolation
> capability. So one possible way may be calling vfio_add_group_dev(), which
> creates a vfio_device instance for the iommu_device in vfio.c, when
> creating an iommu backed mdev. Then the iommu group is fairly viable.

"fairly viable" ;)  It's a correct use of the term, it's a little funny
though as "fairly" can also mean reasonably/sufficiently/adequately as
well as I think the intended use here equivalent to justly. 

That's an interesting idea, to do an implicit vfio_add_group_dev() on
the iommu_device in this case. If you've worked through how that could
play out, it'd be interesting.

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-11 Thread Liu, Yi L
Hi Alex,

> From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf
> Of Alex Williamson
> Sent: Friday, July 5, 2019 11:55 PM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Thu, 4 Jul 2019 09:11:02 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Thursday, July 4, 2019 1:22 AM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
[...]
> >
> > > It's really unfortunate that we don't have the mdev inheriting the
> > > iommu group of the iommu_device so that userspace can really understand
> > > this relationship.  A separate group makes sense for the aux-domain
> > > case, and is (I guess) not a significant issue in the case of a
> > > singleton iommu_device group, but it's pretty awkward here.  Perhaps
> > > this is something we should correct in design of iommu backed mdevs.
> >
> > Yeah, for the aux-domain case, it is not a significant issue, as
> > aux-domain essentially means a singleton iommu_device group. And in early
> > time, when designing the support for wrapping a PCI device as a mdev, we
> > also considered letting vfio-mdev-pci reuse the iommu_device group. But
> > this results in an iommu backed group that includes mdev and physical
> > devices, which might also be strange. Do you think it is valuable to
> > reconsider it?
> 
> From a group perspective, the cleanest solution would seem to be that
> IOMMU backed mdevs w/o aux domain support should inherit the IOMMU
> group of the iommu_device,

To confirm: regarding inheriting the IOMMU group of the iommu_device, do
you mean the mdev device should be added to the IOMMU group of the
iommu_device, or that a parent/inheritor relationship should be maintained
within vfio? I guess you mean the latter one? :-)

> but I think the barrier here is that we have
> a difficult time determining if the group is "viable" in that case.
> For example a group where one device is bound to a native host driver
> and the other device bound to a vfio driver would typically be
> considered non-viable as it breaks the isolation guarantees.  However

yes, this is how vfio guarantees the isolation before allowing the user to
further add a group to a vfio container and so on.

> I think in this configuration, the parent device is effectively
> participating in the isolation and "donating" its iommu group on behalf
> of the mdev device.  I don't think we can simultaneously use that iommu
> group for any other purpose. 

Agree. At least the host cannot make use of the iommu group any more in such
a configuration.

> I'm sure we could come up with a way for
> vfio-core to understand this relationship and add it to the white list,

The configuration here is that the host driver still exists while we want to
let the mdev device somehow "own" the iommu backed DMA isolation capability.
So one possible way may be calling vfio_add_group_dev(), which creates a
vfio_device instance for the iommu_device in vfio.c, when creating an iommu
backed mdev. Then the iommu group is fairly viable.
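A minimal sketch of what that implicit registration might look like, assuming the 2019-era vfio_add_group_dev() interface (the ops and function names below are placeholders):

/* Sketch: a parent driver "donating" its iommu group to vfio when an
 * iommu backed mdev is created on it.  The vfio_device_ops here are
 * placeholders; a real implementation must decide how direct opens of
 * the parent device are handled.
 */
static const struct vfio_device_ops vfio_mdev_parent_ops = {
	.name = "vfio-mdev-parent",
	/* .open/.release/.ioctl intentionally left as stubs in this sketch */
};

static int vfio_mdev_donate_group(struct device *parent)
{
	/* Creates a vfio_device for the parent, which makes its iommu
	 * group known to, and thus "viable" in the eyes of, vfio core.
	 */
	return vfio_add_group_dev(parent, &vfio_mdev_parent_ops, NULL);
}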

> I wonder though how confusing this might be to users who now understand
> the group/driver requirement to be "all endpoints bound to vfio
> drivers".  This might still be the best approach regardless of this.

Yes, another thing I'm considering is how to prevent such a host driver from
issuing DMA. If we finally get a device bound to vfio-pci and another device
wrapped as a mdev, and pass them through to a VM, the host driver is still
capable of issuing DMA. The IOMMU can block some DMAs, but not all of them.
If a DMA issued by the host driver happens to have a mapping on the IOMMU
side, then the host is, in effect, doing things on behalf of the VM. We may
trust the host driver, but it looks a little bit awkward to me. :-(

> Thanks,
> 
> Alex

Regards,
Yi Liu


Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-05 Thread Alex Williamson
On Thu, 4 Jul 2019 09:11:02 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Thursday, July 4, 2019 1:22 AM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Wed, 3 Jul 2019 08:25:25 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >
> > > Thanks for the comments. Have four inline responses below. And one
> > > of them needs your further help. :-)
> 
> [...]
> 
> > > > > >  
> > > > > > > > > > used iommu_attach_device() rather than iommu_attach_group()
> > > > > > > > > > for non-aux mdev iommu_device.  Is there a requirement that
> > > > > > > > > > the mdev parent device is in a singleton iommu group?  
> > > > > > > > >
> > > > > > > > > I don't think there should be such a limitation. Per my
> > > > > > > > > understanding, vfio-mdev-pci should also be able to bind to
> > > > > > > > > devices which share an iommu group with other devices. vfio-pci
> > > > > > > > > works  
> > well  
> > > > for such devices.  
> > > > > > > > > And since the two drivers share most of the codes, I think
> > > > > > > > > vfio-mdev-pci should naturally support it as well.  
> > > > > > > >
> > > > > > > > Yes, the difference though is that vfio.c knows when devices are
> > > > > > > > in the same group, whereas with mdev, vfio.c only knows about the
> > > > > > > > non-iommu backed group, not the group that is actually used for
> > > > > > > > the iommu backing.  So we either need to enlighten vfio.c or
> > > > > > > > further abstract those details in vfio_iommu_type1.c.  
> > > > > > >
> > > > > > > Not sure if it is necessary to introduce more changes to vfio.c or
> > > > > > > vfio_iommu_type1.c. If it's only for the scenario which two
> > > > > > > devices share an iommu_group, I guess it could be supported by
> > > > > > > using __iommu_attach_device() which has no device counting for the
> > > > > > > group. But maybe I missed something here. It would be great if you
> > > > > > > can elaborate a bit for it. :-)  
> > > > > >
> > > > > > We need to use the group semantics, there's a reason
> > > > > > __iommu_attach_device() is not exposed, it's an internal helper.  I
> > > > > > think there's no way around that we need to somewhere track the
> > > > > > actual group we're attaching to and have the smarts to re-use it for
> > > > > > other devices in the same group.  
> > > > >
> > > > > Hmmm, exposing __iommu_attach_device() is not good, let's forget it.
> > > > > :-)
> > > > >  
> > > > > > > > > > If this is a simplification, then vfio-mdev-pci should not
> > > > > > > > > > bind to devices where this is violated since there's no way
> > > > > > > > > > to use the device.  Can we support it though?  
> > > > > > > > >
> > > > > > > > > yeah, I think we need to support it.  
> > >
> > > I've already made the vfio-mdev-pci driver work for non-singleton iommu
> > > groups. e.g. for devices in a single iommu group, I can bind the devices
> > > to either vfio-pci or vfio-mdev-pci and then pass them through to a VM.
> > > And it will fail if the user tries to pass a vfio-mdev-pci device through
> > > in the vfio-pci manner "-device vfio-pci,host=01:00.1". In other words, a
> > > vfio-mdev-pci device can only be passed through via
> > > "-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/UUID". This is what
> > > we expect.
> > >
> > > However, I encountered a problem when trying to prevent the user from
> > > passing these devices through to different VMs. I've tried on my side,
> > > and I can pass a vfio-pci device and a vfio-mdev-pci device through to
> > > different VMs. But actually this operation should fail. If all the
> > > devices are bound to vfio-pci, Qemu will open the iommu backed group, so
> > > Qemu can check if a given group has already been used by an
> > > AddressSpace (a.k.a. VM) in vfio_get_group(), thus preventing the user
> > > from passing these devices through to different VMs if the devices are
> > > in the same iommu backed group. However, here for a vfio-mdev-pci
> > > device, it has a new group and group ID; Qemu will not be able to detect
> > > whether the other devices (sharing the iommu group with the
> > > vfio-mdev-pci device) have been passed through to an existing VM.

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-04 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Thursday, July 4, 2019 1:22 AM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Wed, 3 Jul 2019 08:25:25 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > Thanks for the comments. Have four inline responses below. And one
> > of them needs your further help. :-)

[...]

> > > > >
> > > > > > > > > used iommu_attach_device() rather than iommu_attach_group()
> > > > > > > > > for non-aux mdev iommu_device.  Is there a requirement that
> > > > > > > > > the mdev parent device is in a singleton iommu group?
> > > > > > > >
> > > > > > > > I don't think there should be such a limitation. Per my
> > > > > > > > understanding, vfio-mdev-pci should also be able to bind to
> > > > > > > > devices which share an iommu group with other devices. vfio-pci
> > > > > > > > works
> well
> > > for such devices.
> > > > > > > > And since the two drivers share most of the codes, I think
> > > > > > > > vfio-mdev-pci should naturally support it as well.
> > > > > > >
> > > > > > > Yes, the difference though is that vfio.c knows when devices are
> > > > > > > in the same group, whereas with mdev, vfio.c only knows about the
> > > > > > > non-iommu backed group, not the group that is actually used for
> > > > > > > the iommu backing.  So we either need to enlighten vfio.c or
> > > > > > > further abstract those details in vfio_iommu_type1.c.
> > > > > >
> > > > > > Not sure if it is necessary to introduce more changes to vfio.c or
> > > > > > vfio_iommu_type1.c. If it's only for the scenario which two
> > > > > > devices share an iommu_group, I guess it could be supported by
> > > > > > using __iommu_attach_device() which has no device counting for the
> > > > > > group. But maybe I missed something here. It would be great if you
> > > > > > can elaborate a bit for it. :-)
> > > > >
> > > > > We need to use the group semantics, there's a reason
> > > > > __iommu_attach_device() is not exposed, it's an internal helper.  I
> > > > > think there's no way around that we need to somewhere track the
> > > > > actual group we're attaching to and have the smarts to re-use it for
> > > > > other devices in the same group.
> > > >
> > > > Hmmm, exposing __iommu_attach_device() is not good, let's forget it.
> > > > :-)
> > > >
> > > > > > > > > If this is a simplification, then vfio-mdev-pci should not
> > > > > > > > > bind to devices where this is violated since there's no way
> > > > > > > > > to use the device.  Can we support it though?
> > > > > > > >
> > > > > > > > yeah, I think we need to support it.
> >
> > I've already made the vfio-mdev-pci driver work for non-singleton iommu
> > groups. e.g. for devices in a single iommu group, I can bind the devices
> > to either vfio-pci or vfio-mdev-pci and then pass them through to a VM.
> > And it will fail if the user tries to pass a vfio-mdev-pci device through
> > in the vfio-pci manner "-device vfio-pci,host=01:00.1". In other words, a
> > vfio-mdev-pci device can only be passed through via
> > "-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/UUID". This is what
> > we expect.
> >
> > However, I encountered a problem when trying to prevent the user from
> > passing these devices through to different VMs. I've tried on my side,
> > and I can pass a vfio-pci device and a vfio-mdev-pci device through to
> > different VMs. But actually this operation should fail. If all the
> > devices are bound to vfio-pci, Qemu will open the iommu backed group, so
> > Qemu can check if a given group has already been used by an
> > AddressSpace (a.k.a. VM) in vfio_get_group(), thus preventing the
> > user from passing these devices through to different VMs if the devices
> > are in the same iommu backed group. However, here for a
> > vfio-mdev-pci device, it has a new group and group ID; Qemu
> > will not be able to detect whether the other devices (sharing the iommu
> > group with the vfio-mdev-pci device) have been passed through to an
> > existing VM.

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-03 Thread Alex Williamson
On Wed, 3 Jul 2019 08:25:25 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> Thanks for the comments. Have four inline responses below. And one
> of them needs your further help. :-)
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, June 28, 2019 11:08 PM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Mon, 24 Jun 2019 08:20:38 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >  
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > Sent: Friday, June 21, 2019 11:58 PM
> > > > To: Liu, Yi L 
> > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > >
> > > > On Fri, 21 Jun 2019 10:23:10 +
> > > > "Liu, Yi L"  wrote:
> > > >  
> > > > > Hi Alex,
> > > > >  
> > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > Sent: Friday, June 21, 2019 5:08 AM
> > > > > > To: Liu, Yi L 
> > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > > >
> > > > > > On Thu, 20 Jun 2019 13:00:34 + "Liu, Yi L"
> > > > > >  wrote:
> > > > > >  
> > > > > > > Hi Alex,
> > > > > > >  
> > > > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > > > > > To: Liu, Yi L 
> > > > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci
> > > > > > > > driver
> > > > > > > >
> > > > > > > > On Sat,  8 Jun 2019 21:21:11 +0800 Liu Yi L
> > > > > > > >  wrote:
> > > > > > > >  
> > > > > > > > > This patch adds sample driver named vfio-mdev-pci. It is
> > > > > > > > > to wrap a PCI device as a mediated device. For a pci
> > > > > > > > > device, once bound to vfio-mdev-pci driver, user space
> > > > > > > > > access of this device will go through vfio mdev framework.
> > > > > > > > > The usage of the device follows mdev management method.
> > > > > > > > > e.g. user should create a mdev before exposing the device to 
> > > > > > > > > user-space.  
> > > > > [...]  
> > > > > > >  
> > > > > > > > However, the patch below just makes the mdev interface
> > > > > > > > behave correctly, I can't make it work on my system because
> > > > > > > > commit 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching
> > > > > > > > group helpers")  
> > > > > > >
> > > > > > > What error did you encounter? I tested the patch with a device
> > > > > > > in a singleton iommu group. I'm also searching a proper
> > > > > > > machine with multiple devices in an iommu group and test it.  
> > > > > >
> > > > > > In vfio_iommu_type1, iommu backed mdev devices use the
> > > > > > iommu_attach_device() interface, which includes:
> > > > > >
> > > > > > if (iommu_group_device_count(group) != 1)
> > > > > > goto out_unlock;
> > > > > >
> > > > > > So it's impossible to use with non-singleton groups currently.  
> > > > >
> > > > > Hmmm, I think it is no longer good to use iommu_attach_device()
> > > > > for iommu backed mdev devices now. In this flow, the purpose here
> > > > > is to attach a device to a domain and no need to check whether the
> > > > > device is in a singleton iommu group. I think it would be better
> > > > > to use __iommu_attach_device() instead of iommu_attach_device().  
> > > >
> > > > That's a static and unexported, it's intentionally not an exposed
> > > > interface.  We can't attach devices in the same group to separate
> > > > domains allocated through iommu_domain_alloc(), this would violate
> > > > the iommu group isolation principles.  
> > >
> > > Got it. :-) Then it's not good to expose such an interface. But to
> > > support devices in non-singleton iommu groups, we need to have a new
> > > interface which doesn't count the devices but attaches all the devices.

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-07-03 Thread Liu, Yi L
Hi Alex,

Thanks for the comments. Have four inline responses below. And one
of them needs your further help. :-)
> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Friday, June 28, 2019 11:08 PM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Mon, 24 Jun 2019 08:20:38 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Friday, June 21, 2019 11:58 PM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > >
> > > On Fri, 21 Jun 2019 10:23:10 +
> > > "Liu, Yi L"  wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > Sent: Friday, June 21, 2019 5:08 AM
> > > > > To: Liu, Yi L 
> > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > >
> > > > > On Thu, 20 Jun 2019 13:00:34 + "Liu, Yi L"
> > > > >  wrote:
> > > > >
> > > > > > Hi Alex,
> > > > > >
> > > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > > > > To: Liu, Yi L 
> > > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci
> > > > > > > driver
> > > > > > >
> > > > > > > On Sat,  8 Jun 2019 21:21:11 +0800 Liu Yi L
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > This patch adds sample driver named vfio-mdev-pci. It is
> > > > > > > > to wrap a PCI device as a mediated device. For a pci
> > > > > > > > device, once bound to vfio-mdev-pci driver, user space
> > > > > > > > access of this device will go through vfio mdev framework.
> > > > > > > > The usage of the device follows mdev management method.
> > > > > > > > e.g. user should create a mdev before exposing the device to 
> > > > > > > > user-space.
> > > > [...]
> > > > > >
> > > > > > > However, the patch below just makes the mdev interface
> > > > > > > behave correctly, I can't make it work on my system because
> > > > > > > commit 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching
> > > > > > > group helpers")
> > > > > >
> > > > > > What error did you encounter? I tested the patch with a device
> > > > > > in a singleton iommu group. I'm also searching a proper
> > > > > > machine with multiple devices in an iommu group and test it.
> > > > >
> > > > > In vfio_iommu_type1, iommu backed mdev devices use the
> > > > > iommu_attach_device() interface, which includes:
> > > > >
> > > > > if (iommu_group_device_count(group) != 1)
> > > > > goto out_unlock;
> > > > >
> > > > > So it's impossible to use with non-singleton groups currently.
> > > >
> > > > Hmmm, I think it is no longer good to use iommu_attach_device()
> > > > for iommu backed mdev devices now. In this flow, the purpose here
> > > > is to attach a device to a domain and no need to check whether the
> > > > device is in a singleton iommu group. I think it would be better
> > > > to use __iommu_attach_device() instead of iommu_attach_device().
> > >
> > > That's a static and unexported, it's intentionally not an exposed
> > > interface.  We can't attach devices in the same group to separate
> > > domains allocated through iommu_domain_alloc(), this would violate
> > > the iommu group isolation principles.
> >
> > Got it. :-) Then it's not good to expose such an interface. But to support
> > devices in non-singleton iommu groups, we need to have a new interface
> > which doesn't count the devices but attaches all the devices.
> 
> We have iommu_attach_group(), we just need to track which groups are attached.

yep.

> > > > Also I found a potential mutex lock issue if using 
> > > > iommu_attach_device().
> > > > In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev()
> > > > to loop all the devices in the group. It holds group->mutex. And then
> > > > vfio_mdev_attach_domain() calls iommu_attach_device(), which also
> > > > tries to get group->mutex. This would be an issue.

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-28 Thread Alex Williamson
On Mon, 24 Jun 2019 08:20:38 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, June 21, 2019 11:58 PM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Fri, 21 Jun 2019 10:23:10 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >  
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > Sent: Friday, June 21, 2019 5:08 AM
> > > > To: Liu, Yi L 
> > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > >
> > > > On Thu, 20 Jun 2019 13:00:34 +
> > > > "Liu, Yi L"  wrote:
> > > >  
> > > > > Hi Alex,
> > > > >  
> > > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > > > To: Liu, Yi L 
> > > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > > >
> > > > > > On Sat,  8 Jun 2019 21:21:11 +0800
> > > > > > Liu Yi L  wrote:
> > > > > >  
> > > > > > > This patch adds sample driver named vfio-mdev-pci. It is to wrap
> > > > > > > a PCI device as a mediated device. For a pci device, once bound
> > > > > > > to vfio-mdev-pci driver, user space access of this device will
> > > > > > > go through vfio mdev framework. The usage of the device follows
> > > > > > > mdev management method. e.g. user should create a mdev before
> > > > > > > exposing the device to user-space.  
> > > [...]  
> > > > >  
> > > > > > However, the patch below just makes the mdev interface behave
> > > > > > correctly, I can't make it work on my system because commit
> > > > > > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers") 
> > > > > >  
> > > > >
> > > > > What error did you encounter? I tested the patch with a device in a
> > > > > singleton iommu group. I'm also searching a proper machine with
> > > > > multiple devices in an iommu group and test it.  
> > > >
> > > > In vfio_iommu_type1, iommu backed mdev devices use the
> > > > iommu_attach_device() interface, which includes:
> > > >
> > > > if (iommu_group_device_count(group) != 1)
> > > > goto out_unlock;
> > > >
> > > > So it's impossible to use with non-singleton groups currently.  
> > >
> > > Hmmm, I think it is no longer good to use iommu_attach_device() for iommu
> > > backed mdev devices now. In this flow, the purpose here is to attach a 
> > > device
> > > to a domain and no need to check whether the device is in a singleton 
> > > iommu
> > > group. I think it would be better to use __iommu_attach_device() instead 
> > > of
> > > iommu_attach_device().  
> > 
> > That's a static and unexported, it's intentionally not an exposed
> > interface.  We can't attach devices in the same group to separate
> > domains allocated through iommu_domain_alloc(), this would violate the
> > iommu group isolation principles.  
> 
> Got it. :-) Then it's not good to expose such an interface. But to support
> devices in non-singleton iommu groups, we need to have a new interface
> which doesn't count the devices but attaches all the devices.

We have iommu_attach_group(), we just need to track which groups are
attached.
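A sketch of the kind of tracking that implies inside vfio_iommu_type1 (structure and helper names below are illustrative, not from the posted series):

/* Illustrative sketch: refcount the real iommu_group that iommu backed
 * mdevs attach through, so a second device in the same group reuses the
 * existing attachment instead of a second iommu_attach_group().
 */
struct vfio_backed_group {
	struct iommu_group *iommu_group;	/* the physical group */
	int attach_count;			/* devices using this attachment */
	struct list_head next;
};

static int vfio_attach_backed_group(struct vfio_domain *domain,
				    struct iommu_group *grp)
{
	struct vfio_backed_group *bg = vfio_find_backed_group(domain, grp);

	if (bg) {
		bg->attach_count++;		/* already attached: reuse it */
		return 0;
	}

	/* First user: do the real attach, then start tracking the group. */
	return vfio_track_and_attach_group(domain, grp);
}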
 
> > > Also I found a potential mutex lock issue if using iommu_attach_device().
> > > In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev() to loop
> > > all the devices in the group. It holds group->mutex. And then
> > > vfio_mdev_attach_domain() calls iommu_attach_device() which also tries
> > > to get group->mutex. This would be an issue. If you are fine with it, I
> > > may post another patch for it. :-)
> > 
> > Gack, yes, please send a patch.  
> 
> Will do, maybe together with the support of vfio-mdev-pci for devices in
> non-singleton iommu groups.
> 
> >   
> > > > > > used iommu_attach_device() rather than iommu_attach_group() for 
> > > > > > non-aux
> > > > > > mdev iommu_device.  Is there a requirement that the mdev parent
> > > > > > device is in a singleton iommu group?

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-24 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Friday, June 21, 2019 11:58 PM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Fri, 21 Jun 2019 10:23:10 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Friday, June 21, 2019 5:08 AM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > >
> > > On Thu, 20 Jun 2019 13:00:34 +
> > > "Liu, Yi L"  wrote:
> > >
> > > > Hi Alex,
> > > >
> > > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > > To: Liu, Yi L 
> > > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > > >
> > > > > On Sat,  8 Jun 2019 21:21:11 +0800
> > > > > Liu Yi L  wrote:
> > > > >
> > > > > > This patch adds sample driver named vfio-mdev-pci. It is to wrap
> > > > > > a PCI device as a mediated device. For a pci device, once bound
> > > > > > to vfio-mdev-pci driver, user space access of this device will
> > > > > > go through vfio mdev framework. The usage of the device follows
> > > > > > mdev management method. e.g. user should create a mdev before
> > > > > > exposing the device to user-space.
> > [...]
> > > >
> > > > > However, the patch below just makes the mdev interface behave
> > > > > correctly, I can't make it work on my system because commit
> > > > > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")
> > > >
> > > > What error did you encounter? I tested the patch with a device in a
> > > > singleton iommu group. I'm also searching a proper machine with
> > > > multiple devices in an iommu group and test it.
> > >
> > > In vfio_iommu_type1, iommu backed mdev devices use the
> > > iommu_attach_device() interface, which includes:
> > >
> > > if (iommu_group_device_count(group) != 1)
> > > goto out_unlock;
> > >
> > > So it's impossible to use with non-singleton groups currently.
> >
> > Hmmm, I think it is no longer good to use iommu_attach_device() for iommu
> > backed mdev devices now. In this flow, the purpose here is to attach a 
> > device
> > to a domain and no need to check whether the device is in a singleton iommu
> > group. I think it would be better to use __iommu_attach_device() instead of
> > iommu_attach_device().
> 
> That's a static and unexported, it's intentionally not an exposed
> interface.  We can't attach devices in the same group to separate
> domains allocated through iommu_domain_alloc(), this would violate the
> iommu group isolation principles.

Got it. :-) Then it's not good to expose such an interface. But to support
devices in non-singleton iommu groups, we need to have a new interface which
doesn't count the devices but attaches all the devices.

> > Also I found a potential mutex lock issue if using iommu_attach_device().
> > In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev() to loop
> > all the devices in the group. It holds group->mutex. And then
> > vfio_mdev_attach_domain() calls iommu_attach_device() which also tries
> > to get group->mutex. This would be an issue. If you are fine with it, I
> > may post another patch for it. :-)
> 
> Gack, yes, please send a patch.

Will do, maybe together with the support of vfio-mdev-pci for devices in
non-singleton iommu groups.

> 
> > > > > used iommu_attach_device() rather than iommu_attach_group() for 
> > > > > non-aux
> > > > > mdev iommu_device.  Is there a requirement that the mdev parent device
> > > > > is in a singleton iommu group?
> > > >
> > > > I don't think there should be such a limitation. Per my understanding,
> > > > vfio-mdev-pci should also be able to bind to devices which share an
> > > > iommu group with other devices. vfio-pci works well for such devices.
> > > > And since the two drivers share most of the codes, I think vfio-mdev-pci
> > > > should naturally support it as well.
> > >
> > > Yes, the difference though is that vfio.c knows when devices are in the
> > > same group, whereas with mdev, vfio.c only knows about the non-iommu
> > > backed group, not the group that is actually used for the iommu backing.

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-21 Thread Alex Williamson
On Fri, 21 Jun 2019 10:23:10 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, June 21, 2019 5:08 AM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Thu, 20 Jun 2019 13:00:34 +
> > "Liu, Yi L"  wrote:
> >   
> > > Hi Alex,
> > >  
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > > Sent: Thursday, June 20, 2019 12:27 PM
> > > > To: Liu, Yi L 
> > > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > > >
> > > > On Sat,  8 Jun 2019 21:21:11 +0800
> > > > Liu Yi L  wrote:
> > > >  
> > > > > This patch adds sample driver named vfio-mdev-pci. It is to wrap
> > > > > a PCI device as a mediated device. For a pci device, once bound
> > > > > to vfio-mdev-pci driver, user space access of this device will
> > > > > go through vfio mdev framework. The usage of the device follows
> > > > > mdev management method. e.g. user should create a mdev before
> > > > > exposing the device to user-space.  
> [...]
> > >  
> > > > However, the patch below just makes the mdev interface behave
> > > > correctly, I can't make it work on my system because commit
> > > > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")  
> > >
> > > What error did you encounter? I tested the patch with a device in a
> > > singleton iommu group. I'm also searching a proper machine with
> > > multiple devices in an iommu group and test it.  
> > 
> > In vfio_iommu_type1, iommu backed mdev devices use the
> > iommu_attach_device() interface, which includes:
> > 
> > if (iommu_group_device_count(group) != 1)
> > goto out_unlock;
> > 
> > So it's impossible to use with non-singleton groups currently.  
> 
> Hmmm, I think it is no longer good to use iommu_attach_device() for iommu
> backed mdev devices now. In this flow, the purpose here is to attach a device
> to a domain and no need to check whether the device is in a singleton iommu
> group. I think it would be better to use __iommu_attach_device() instead of
> iommu_attach_device().

That's a static and unexported, it's intentionally not an exposed
interface.  We can't attach devices in the same group to separate
domains allocated through iommu_domain_alloc(), this would violate the
iommu group isolation principles.

> Also I found a potential mutex lock issue if using iommu_attach_device().
> In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev() to loop
> all the devices in the group. It holds group->mutex. And then
> vfio_mdev_attach_domain() calls iommu_attach_device() which also tries to
> get group->mutex. This would be an issue. If you are fine with it, I may
> post another patch for it. :-)

Gack, yes, please send a patch.

> > > > used iommu_attach_device() rather than iommu_attach_group() for non-aux
> > > > mdev iommu_device.  Is there a requirement that the mdev parent device
> > > > is in a singleton iommu group?  
> > >
> > > I don't think there should be such a limitation. Per my understanding,
> > > vfio-mdev-pci should also be able to bind to devices which share an
> > > iommu group with other devices. vfio-pci works well for such devices.
> > > And since the two drivers share most of the codes, I think vfio-mdev-pci
> > > should naturally support it as well.  
> > 
> > Yes, the difference though is that vfio.c knows when devices are in the
> > same group, whereas with mdev, vfio.c only knows about the non-iommu backed
> > group, not the group that is actually used for the iommu backing.  So
> > we either need to enlighten vfio.c or further abstract those details in
> > vfio_iommu_type1.c.  
> 
> Not sure if it is necessary to introduce more changes to vfio.c or
> vfio_iommu_type1.c. If it's only for the scenario which two devices share an
> iommu_group, I guess it could be supported by using __iommu_attach_device()
> which has no device counting for the group. But maybe I missed something
> here. It would be great if you can elaborate a bit for it. :-)

We need to use the group semantics, there's a reason
__iommu_attach_device() is not exposed, it's an internal helper.  I
think there's no way around that we need to somewhere track the actual
group we're attaching to and have the smarts to re-use it for other
devices in the same group.

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-21 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Friday, June 21, 2019 5:08 AM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Thu, 20 Jun 2019 13:00:34 +
> "Liu, Yi L"  wrote:
> 
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > > Sent: Thursday, June 20, 2019 12:27 PM
> > > To: Liu, Yi L 
> > > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > >
> > > On Sat,  8 Jun 2019 21:21:11 +0800
> > > Liu Yi L  wrote:
> > >
> > > > This patch adds sample driver named vfio-mdev-pci. It is to wrap
> > > > a PCI device as a mediated device. For a pci device, once bound
> > > > to vfio-mdev-pci driver, user space access of this device will
> > > > go through vfio mdev framework. The usage of the device follows
> > > > mdev management method. e.g. user should create a mdev before
> > > > exposing the device to user-space.
[...]
> >
> > > However, the patch below just makes the mdev interface behave
> > > correctly, I can't make it work on my system because commit
> > > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")
> >
> > What error did you encounter? I tested the patch with a device in a
> > singleton iommu group. I'm also searching a proper machine with
> > multiple devices in an iommu group and test it.
> 
> In vfio_iommu_type1, iommu backed mdev devices use the
> iommu_attach_device() interface, which includes:
> 
> if (iommu_group_device_count(group) != 1)
> goto out_unlock;
> 
> So it's impossible to use with non-singleton groups currently.

Hmmm, I think it is no longer good to use iommu_attach_device() for iommu
backed mdev devices now. In this flow, the purpose is to attach a device
to a domain, and there is no need to check whether the device is in a
singleton iommu group. I think it would be better to use
__iommu_attach_device() instead of iommu_attach_device().

Also I found a potential mutex lock issue when using iommu_attach_device().
In vfio_iommu_attach_group(), it uses iommu_group_for_each_dev() to loop over
all the devices in the group, which holds group->mutex. And then
vfio_mdev_attach_domain() calls iommu_attach_device(), which also tries to
take group->mutex. This would be an issue. If you are fine with it, I may
post another patch for it. :-)
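Written out, the recursive-lock path looks like this (a sketch of the call chain, not a patch):

/*
 * vfio_iommu_attach_group()
 *   -> iommu_group_for_each_dev(group, ...)      takes group->mutex
 *        -> vfio_mdev_attach_domain(...)
 *             -> iommu_attach_device(domain, parent_dev)
 *                  -> mutex_lock(&group->mutex)  same mutex: deadlock
 *
 * Kernel mutexes are not recursive, so the inner attach must use an
 * interface that does not retake the group lock, or the outer loop
 * must not hold it across the attach.
 */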

> > > used iommu_attach_device() rather than iommu_attach_group() for non-aux
> > > mdev iommu_device.  Is there a requirement that the mdev parent device
> > > is in a singleton iommu group?
> >
> > I don't think there should be such a limitation. Per my understanding,
> > vfio-mdev-pci should also be able to bind to devices which share an
> > iommu group with other devices. vfio-pci works well for such devices.
> > And since the two drivers share most of the codes, I think vfio-mdev-pci
> > should naturally support it as well.
> 
> Yes, the difference though is that vfio.c knows when devices are in the
> same group, whereas with mdev, vfio.c only knows about the non-iommu backed
> group, not the group that is actually used for the iommu backing.  So
> we either need to enlighten vfio.c or further abstract those details in
> vfio_iommu_type1.c.

Not sure if it is necessary to introduce more changes to vfio.c or
vfio_iommu_type1.c. If it's only for the scenario where two devices share an
iommu_group, I guess it could be supported by using __iommu_attach_device(),
which does no device counting for the group. But maybe I missed something
here. It would be great if you could elaborate a bit on it. :-)

> 
> > > If this is a simplification, then
> > > vfio-mdev-pci should not bind to devices where this is violated since
> > > there's no way to use the device.  Can we support it though?
> >
> > yeah, I think we need to support it.
> >
> > > If I have two devices in the same group and bind them both to
> > > vfio-mdev-pci, I end up with three groups, one for each mdev device and
> > > the original physical device group.  vfio.c works with the mdev groups
> > > and will try to match both groups to the container.  vfio_iommu_type1.c
> > > also works with the mdev groups, except for the point where we actually
> > > try to attach a group to a domain, which is the only window where we use
> > > the iommu_device rather than the provided group, but we don't record
> > > that anywhere.  Should struct vfio_group have a pointer to a reference
> > > counted object that tracks the actual iommu_group attached, s

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-20 Thread Alex Williamson
On Thu, 20 Jun 2019 13:00:34 +
"Liu, Yi L"  wrote:

> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Thursday, June 20, 2019 12:27 PM
> > To: Liu, Yi L 
> > Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> > 
> > On Sat,  8 Jun 2019 21:21:11 +0800
> > Liu Yi L  wrote:
> >   
> > > This patch adds sample driver named vfio-mdev-pci. It is to wrap
> > > a PCI device as a mediated device. For a pci device, once bound
> > > to vfio-mdev-pci driver, user space access of this device will
> > > go through vfio mdev framework. The usage of the device follows
> > > mdev management method. e.g. user should create a mdev before
> > > exposing the device to user-space.
> > >
> > > Benefit of this new driver would be acting as a sample driver
> > > for recent changes from "vfio/mdev: IOMMU aware mediated device"
> > > patchset. Also it could be a good experiment driver for future
> > > device specific mdev migration support.
> > >
> > > To use this driver:
> > > a) build and load vfio-mdev-pci.ko module
> > >execute "make menuconfig" and config CONFIG_SAMPLE_VFIO_MDEV_PCI
> > >then load it with following command  
> > >> sudo modprobe vfio
> > >> sudo modprobe vfio-pci
> > >> sudo insmod drivers/vfio/pci/vfio-mdev-pci.ko  
> > >
> > > b) unbind original device driver
> > >e.g. use following command to unbind its original driver  
> > >> echo $dev_bdf > /sys/bus/pci/devices/$dev_bdf/driver/unbind  
> > >
> > > c) bind vfio-mdev-pci driver to the physical device  
> > >> echo $vend_id $dev_id > /sys/bus/pci/drivers/vfio-mdev-pci/new_id  
> > >
> > > d) check the supported mdev instances  
> > >> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/  
> > >  vfio-mdev-pci-type1  
> > >> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\  
> > >  vfio-mdev-pci-type1/
> > >  available_instances  create  device_api  devices  name  
> > 
> > 
> > I think the static type name here is a problem (and why does it
> > include "type1"?).  We generally consider that a type defines a
> > software compatible mdev, but in this case any PCI device wrapped in
> > vfio-mdev-pci gets the same mdev type.  This is only a sample driver,
> > but that's a bad precedent.  I've taken a stab at fixing this in the
> > patch below, using the PCI vendor ID, device ID, subsystem vendor ID,
> > subsystem device ID, class code, and revision to try to make the type
> > as specific to the physical device assigned as we can through PCI.  
> 
> Thanks, it is much better than what I proposed.
> 
> >   
> > >
> > > e)  create mdev on this physical device (only 1 instance)  
> > >> echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1003" > \  
> > >  /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
> > >  vfio-mdev-pci-type1/create  
> > 
> > Whoops, available_instances always reports 1 and it doesn't appear that
> > the create function prevents additional mdevs.  Also addressed in the
> > patch below.  
> 
> yep, thanks.
> 
> >   
> > > f) passthru the mdev to guest
> > >add the following line in Qemu boot command
> > >-device vfio-pci,\
> > > sysfsdev=/sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003
> > >
> > > g) destroy mdev  
> > >> echo 1 > 
> > > /sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003/\  
> > >  remove
> > >  
> > 
> > I also found that unbinding the parent device doesn't unregister with
> > mdev, so it cannot be bound again, also fixed below.  
> 
> Oops, good catch. :-)
> 
> > However, the patch below just makes the mdev interface behave
> > correctly; I can't make it work on my system because commit
> > 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")  
> 
> What error did you encounter? I tested the patch with a device in a
> singleton iommu group. I'm also looking for a suitable machine with
> multiple devices in an iommu group so that I can test that case.

In vfio_iommu_type1, iommu backed mdev devices use the
iommu_attach_device() interface, which includes:

	if (iommu_group_device_count(group) != 1)
		goto out_unlock;

So it's impossible to use with non-singleton groups currently.
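
For reference, a minimal probe-time guard under that constraint could look
like the sketch below. The helper names are hypothetical, and
iommu_group_for_each_dev() is used to count group members because
iommu_group_device_count() is a private helper of the iommu core:

/*
 * Hypothetical sketch: refuse to bind when the parent device is not in
 * a singleton iommu group, since iommu_attach_device() cannot handle
 * multi-device groups.
 */
static int vfio_mdev_pci_count_one(struct device *dev, void *data)
{
	(*(int *)data)++;	/* count every device in the group */
	return 0;
}

static bool vfio_mdev_pci_group_singleton(struct pci_dev *pdev)
{
	struct iommu_group *group = iommu_group_get(&pdev->dev);
	int count = 0;

	if (!group)
		return false;
	iommu_group_for_each_dev(group, &count, vfio_mdev_pci_count_one);
	iommu_group_put(group);
	return count == 1;
}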

RE: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-20 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Thursday, June 20, 2019 12:27 PM
> To: Liu, Yi L 
> Subject: Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver
> 
> On Sat,  8 Jun 2019 21:21:11 +0800
> Liu Yi L  wrote:
> 
> > This patch adds a sample driver named vfio-mdev-pci, which wraps
> > a PCI device as a mediated device. Once a PCI device is bound to
> > the vfio-mdev-pci driver, user-space access to it goes through the
> > vfio mdev framework. The device then follows the usual mdev
> > management method, e.g. the user should create an mdev before
> > exposing the device to user space.
> >
> > The benefit of this new driver is that it acts as a sample driver
> > for the recent "vfio/mdev: IOMMU aware mediated device" patchset.
> > It could also be a good experimental driver for future
> > device-specific mdev migration support.
> >
> > To use this driver:
> > a) build and load vfio-mdev-pci.ko module
> >execute "make menuconfig" and config CONFIG_SAMPLE_VFIO_MDEV_PCI
> >then load it with following command
> >> sudo modprobe vfio
> >> sudo modprobe vfio-pci
> >> sudo insmod drivers/vfio/pci/vfio-mdev-pci.ko
> >
> > b) unbind original device driver
> >e.g. use following command to unbind its original driver
> >> echo $dev_bdf > /sys/bus/pci/devices/$dev_bdf/driver/unbind
> >
> > c) bind vfio-mdev-pci driver to the physical device
> >> echo $vend_id $dev_id > /sys/bus/pci/drivers/vfio-mdev-pci/new_id
> >
> > d) check the supported mdev instances
> >> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/
> >  vfio-mdev-pci-type1
> >> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
> >  vfio-mdev-pci-type1/
> >  available_instances  create  device_api  devices  name
> 
> 
> I think the static type name here is a problem (and why does it
> include "type1"?).  We generally consider that a type defines a
> software compatible mdev, but in this case any PCI device wrapped in
> vfio-mdev-pci gets the same mdev type.  This is only a sample driver,
> but that's a bad precedent.  I've taken a stab at fixing this in the
> patch below, using the PCI vendor ID, device ID, subsystem vendor ID,
> subsystem device ID, class code, and revision to try to make the type
> as specific to the physical device assigned as we can through PCI.

Thanks, it is much better than what I proposed.

> 
> >
> > e)  create mdev on this physical device (only 1 instance)
> >> echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1003" > \
> >  /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
> >  vfio-mdev-pci-type1/create
> 
> Whoops, available_instances always reports 1 and it doesn't appear that
> the create function prevents additional mdevs.  Also addressed in the
> patch below.

yep, thanks.

> 
> > f) passthru the mdev to guest
> >add the following line in Qemu boot command
> >-device vfio-pci,\
> > sysfsdev=/sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003
> >
> > g) destroy mdev
> >> echo 1 > /sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003/\
> >  remove
> >
> 
> I also found that unbinding the parent device doesn't unregister with
> mdev, so it cannot be bound again, also fixed below.

Oops, good catch. :-)

> However, the patch below just makes the mdev interface behave
> correctly; I can't make it work on my system because commit
> 7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")

What error did you encounter? I tested the patch with a device in a
singleton iommu group. I'm also looking for a suitable machine with
multiple devices in an iommu group so that I can test that case.

> used iommu_attach_device() rather than iommu_attach_group() for non-aux
> mdev iommu_device.  Is there a requirement that the mdev parent device
> is in a singleton iommu group?

I don't think there should be such a limitation. Per my understanding,
vfio-mdev-pci should also be able to bind to devices which share an
iommu group with other devices; vfio-pci works well for such devices,
and since the two drivers share most of their code, I think
vfio-mdev-pci should naturally support them as well.

> If this is a simplification, then
> vfio-mdev-pci should not bind to devices where this is violated since
> there's no way to use the device.  Can we support it though?

yeah, I think we need to support it.

> If I have two devices in the same group and bind them both to
> vfio-mdev-pci, I end up with three groups, one for each mdev device and
> the original physical device group.

Re: [PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-19 Thread Alex Williamson
On Sat,  8 Jun 2019 21:21:11 +0800
Liu Yi L  wrote:

> This patch adds a sample driver named vfio-mdev-pci, which wraps
> a PCI device as a mediated device. Once a PCI device is bound to
> the vfio-mdev-pci driver, user-space access to it goes through the
> vfio mdev framework. The device then follows the usual mdev
> management method, e.g. the user should create an mdev before
> exposing the device to user space.
> 
> The benefit of this new driver is that it acts as a sample driver
> for the recent "vfio/mdev: IOMMU aware mediated device" patchset.
> It could also be a good experimental driver for future
> device-specific mdev migration support.
> 
> To use this driver:
> a) build and load vfio-mdev-pci.ko module
>execute "make menuconfig" and config CONFIG_SAMPLE_VFIO_MDEV_PCI
>then load it with following command
>> sudo modprobe vfio
>> sudo modprobe vfio-pci
>> sudo insmod drivers/vfio/pci/vfio-mdev-pci.ko  
> 
> b) unbind original device driver
>e.g. use following command to unbind its original driver
>> echo $dev_bdf > /sys/bus/pci/devices/$dev_bdf/driver/unbind  
> 
> c) bind vfio-mdev-pci driver to the physical device
>> echo $vend_id $dev_id > /sys/bus/pci/drivers/vfio-mdev-pci/new_id  
> 
> d) check the supported mdev instances
>> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/  
>  vfio-mdev-pci-type1
>> ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\  
>  vfio-mdev-pci-type1/
>  available_instances  create  device_api  devices  name


I think the static type name here is a problem (and why does it
include "type1"?).  We generally consider that a type defines a
software compatible mdev, but in this case any PCI device wrapped in
vfio-mdev-pci gets the same mdev type.  This is only a sample driver,
but that's a bad precedent.  I've taken a stab at fixing this in the
patch below, using the PCI vendor ID, device ID, subsystem vendor ID,
subsystem device ID, class code, and revision to try to make the type
as specific to the physical device assigned as we can through PCI.
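
A rough sketch of composing such a type name from the PCI IDs is below; the
field order and format string are assumptions rather than the exact output
of the patch:

/*
 * Sketch: build the mdev type name from the IDs that determine software
 * compatibility.  pdev->class carries class code and prog-if (24 bits).
 */
static int vfio_mdev_pci_fill_type_name(struct pci_dev *pdev,
					char *buf, size_t len)
{
	return snprintf(buf, len, "%04x:%04x:%04x:%04x:%06x:%02x",
			pdev->vendor, pdev->device,
			pdev->subsystem_vendor, pdev->subsystem_device,
			pdev->class, pdev->revision);
}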

> 
> e)  create mdev on this physical device (only 1 instance)
>> echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1003" > \  
>  /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
>  vfio-mdev-pci-type1/create

Whoops, available_instances always reports 1 and it doesn't appear that
the create function prevents additional mdevs.  Also addressed in the
patch below.
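
A minimal sketch of enforcing the single instance, assuming the parent's
drvdata carries an instance budget (the avail field and the drvdata lookup
are assumptions about the driver structure, not the patch itself):

struct vfio_mdev_pci_device {
	struct vfio_pci_device vdev;
	atomic_t avail;		/* assumed budget, initialized to 1 */
};

static ssize_t
available_instances_show(struct kobject *kobj, struct device *dev, char *buf)
{
	struct vfio_mdev_pci_device *vmpdev = dev_get_drvdata(dev);

	return sprintf(buf, "%d\n", atomic_read(&vmpdev->avail));
}

static int vfio_mdev_pci_create(struct kobject *kobj, struct mdev_device *mdev)
{
	struct vfio_mdev_pci_device *vmpdev =
		dev_get_drvdata(mdev_parent_dev(mdev));

	/* allow exactly one mdev per parent device */
	if (atomic_dec_if_positive(&vmpdev->avail) < 0)
		return -EINVAL;
	/* ... existing per-mdev setup ... */
	return 0;
}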
 
> f) passthru the mdev to guest
>add the following line in Qemu boot command
>-device vfio-pci,\
> sysfsdev=/sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003
> 
> g) destroy mdev
>> echo 1 > /sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003/\  
>  remove
> 

I also found that unbinding the parent device doesn't unregister with
mdev, so it cannot be bound again, also fixed below.
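
A sketch of the corresponding remove-path fix, assuming the probe path
registered the parent via mdev_register_device():

/*
 * Sketch: drop the mdev parent registration when the PCI driver is
 * unbound, so the device can be bound again later.
 */
static void vfio_mdev_pci_remove(struct pci_dev *pdev)
{
	mdev_unregister_device(&pdev->dev);	/* undo mdev_register_device() */
	/* ... vfio-pci style teardown of the wrapped device ... */
}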

However, the patch below just makes the mdev interface behave
correctly; I can't make it work on my system because commit
7bd50f0cd2fd ("vfio/type1: Add domain at(de)taching group helpers")
used iommu_attach_device() rather than iommu_attach_group() for non-aux
mdev iommu_device.  Is there a requirement that the mdev parent device
is in a singleton iommu group?  If this is a simplification, then
vfio-mdev-pci should not bind to devices where this is violated since
there's no way to use the device.  Can we support it though?

If I have two devices in the same group and bind them both to
vfio-mdev-pci, I end up with three groups, one for each mdev device and
the original physical device group.  vfio.c works with the mdev groups
and will try to match both groups to the container.  vfio_iommu_type1.c
also works with the mdev groups, except for the point where we actually
try to attach a group to a domain, which is the only window where we use
the iommu_device rather than the provided group, but we don't record
that anywhere.  Should struct vfio_group have a pointer to a reference
counted object that tracks the actual iommu_group attached, such that
we can determine that the group is already attached to the domain and
not try to attach again?  Ideally I'd be able to bind one device to
vfio-pci, the other to vfio-mdev-pci, and be able to use them both
within the same container.  It seems like this should be possible; it's
the same effective iommu configuration as if they were both bound to
vfio-pci.  Thanks,

Alex
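
The reference-counted object suggested above might have roughly this shape;
all names are hypothetical, the idea being that vfio_iommu_type1.c looks the
object up by iommu_group before deciding whether another attach is needed:

/*
 * Hypothetical tracking object: one per iommu_group actually attached
 * to a domain, shared by every vfio_group (mdev or physical) that
 * resolves to that iommu_group.
 */
struct vfio_attached_group {
	struct iommu_group *iommu_group;	/* group really attached */
	struct kref kref;			/* one ref per vfio_group user */
	struct list_head next;			/* entry on the domain's list */
};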

diff --git a/drivers/vfio/pci/vfio_mdev_pci.c b/drivers/vfio/pci/vfio_mdev_pci.c
index 07c8067b3f73..09143d3e5473 100644
--- a/drivers/vfio/pci/vfio_mdev_pci.c
+++ b/drivers/vfio/pci/vfio_mdev_pci.c
@@ -65,18 +65,22 @@ MODULE_PARM_DESC(disable_idle_d3,
 
 static struct pci_driver vfio_mdev_pci_driver;
 
-static ssize_t
-name_show(struct kobject *kobj, struct device *dev, char *buf)
-{
-   return sprintf(buf, "%s-type1\n", dev_name(dev));
-}
-
-MDEV_TYPE_ATTR_RO(name);
+struct vfio_mdev_pci_device {
+   struct vfio_pci_device vdev;
+   struct mdev_parent_ops ops;
+   

[PATCH v1 9/9] smaples: add vfio-mdev-pci driver

2019-06-09 Thread Liu Yi L
This patch adds a sample driver named vfio-mdev-pci, which wraps
a PCI device as a mediated device. Once a PCI device is bound to
the vfio-mdev-pci driver, user-space access to it goes through the
vfio mdev framework. The device then follows the usual mdev
management method, e.g. the user should create an mdev before
exposing the device to user space.

The benefit of this new driver is that it acts as a sample driver
for the recent "vfio/mdev: IOMMU aware mediated device" patchset.
It could also be a good experimental driver for future
device-specific mdev migration support.

To use this driver:
a) build and load vfio-mdev-pci.ko module
   execute "make menuconfig" and config CONFIG_SAMPLE_VFIO_MDEV_PCI
   then load it with following command
   > sudo modprobe vfio
   > sudo modprobe vfio-pci
   > sudo insmod drivers/vfio/pci/vfio-mdev-pci.ko

b) unbind original device driver
   e.g. use following command to unbind its original driver
   > echo $dev_bdf > /sys/bus/pci/devices/$dev_bdf/driver/unbind

c) bind vfio-mdev-pci driver to the physical device
   > echo $vend_id $dev_id > /sys/bus/pci/drivers/vfio-mdev-pci/new_id

d) check the supported mdev instances
   > ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/
 vfio-mdev-pci-type1
   > ls /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
 vfio-mdev-pci-type1/
 available_instances  create  device_api  devices  name

e)  create mdev on this physical device (only 1 instance)
   > echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1003" > \
 /sys/bus/pci/devices/$dev_bdf/mdev_supported_types/\
 vfio-mdev-pci-type1/create

f) passthru the mdev to guest
   add the following line in Qemu boot command
   -device vfio-pci,\
sysfsdev=/sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003

g) destroy mdev
   > echo 1 > /sys/bus/mdev/devices/83b8f4f2-509f-382f-3c1e-e6bfe0fa1003/\
 remove

Cc: Kevin Tian 
Cc: Lu Baolu 
Cc: Masahiro Yamada 
Suggested-by: Alex Williamson 
Signed-off-by: Liu Yi L 
---
 drivers/vfio/pci/Makefile        |   6 +
 drivers/vfio/pci/vfio_mdev_pci.c | 403 +++
 samples/Kconfig                  |  11 ++
 3 files changed, 420 insertions(+)
 create mode 100644 drivers/vfio/pci/vfio_mdev_pci.c

diff --git a/drivers/vfio/pci/Makefile b/drivers/vfio/pci/Makefile
index d94317a..ac118ef 100644
--- a/drivers/vfio/pci/Makefile
+++ b/drivers/vfio/pci/Makefile
@@ -5,4 +5,10 @@ vfio-pci-y := vfio_pci.o vfio_pci_common.o vfio_pci_intrs.o \
 vfio-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
 vfio-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
 
+vfio-mdev-pci-y := vfio_mdev_pci.o vfio_pci_common.o vfio_pci_intrs.o \
+   vfio_pci_rdwr.o vfio_pci_config.o
+vfio-mdev-pci-$(CONFIG_VFIO_PCI_IGD) += vfio_pci_igd.o
+vfio-mdev-pci-$(CONFIG_VFIO_PCI_NVLINK2) += vfio_pci_nvlink2.o
+
 obj-$(CONFIG_VFIO_PCI) += vfio-pci.o
+obj-$(CONFIG_SAMPLE_VFIO_MDEV_PCI) += vfio-mdev-pci.o
diff --git a/drivers/vfio/pci/vfio_mdev_pci.c b/drivers/vfio/pci/vfio_mdev_pci.c
new file mode 100644
index 0000000..07c8067
--- /dev/null
+++ b/drivers/vfio/pci/vfio_mdev_pci.c
@@ -0,0 +1,403 @@
+/*
+ * Copyright © 2019 Intel Corporation.
+ * Author: Liu, Yi L 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Derived from original vfio_pci.c:
+ * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
+ * Author: Alex Williamson 
+ *
+ * Derived from original vfio:
+ * Copyright 2010 Cisco Systems, Inc.  All rights reserved.
+ * Author: Tom Lyon, p...@cisco.com
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/device.h>
+#include <linux/eventfd.h>
+#include <linux/file.h>
+#include <linux/interrupt.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/notifier.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <linux/vfio.h>
+#include <linux/vgaarb.h>
+#include <linux/nospec.h>
+#include <linux/mdev.h>
+
+#include "vfio_pci_private.h"
+
+#define DRIVER_VERSION  "0.1"
+#define DRIVER_AUTHOR   "Liu, Yi L "
+#define DRIVER_DESC "VFIO Mdev PCI - Sample driver for PCI device as a mdev"
+
+#define VFIO_MDEV_PCI_NAME  "vfio-mdev-pci"
+
+static char ids[1024] __initdata;
+module_param_string(ids, ids, sizeof(ids), 0);
+MODULE_PARM_DESC(ids, "Initial PCI IDs to add to the vfio-mdev-pci driver, format is \"vendor:device[:subvendor[:subdevice[:class[:class_mask]]]]\" and multiple comma separated entries can be specified");
+
+static bool nointxmask;
+module_param_named(nointxmask, nointxmask, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(nointxmask,
+		  "Disable support for PCI 2.3 style INTx masking.  If this resolves problems for specific devices, report lspci -vvvxxx to linux-...@vger.kernel.org so the device can be fixed automatically via the broken_intx_masking flag.");
+
+#ifdef CONFIG_VFIO_PCI_VGA
+static bool disable_vga;
+module_param(disable_vga, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_vga, "Disable VGA resource access through 
vfio-m