> From: Tian, Kevin
> Sent: Friday, September 28, 2018 9:27 AM
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Friday, September 28, 2018 4:01 AM
> >
> > On Fri, 28 Sep 2018 00:53:00 +0530
> > Kirti Wankhede <kwankh...@nvidia.com> wrote:
> >
> > > On 9/27/2018 9:29 PM, Alex Williamson wrote:
> > > > On Thu, 27 Sep 2018 06:48:27 +0000
> > > > "Tian, Kevin" <kevin.t...@intel.com> wrote:
> > > >
> > > >>> From: Kirti Wankhede
> > > >>> Sent: Thursday, September 27, 2018 2:22 PM
> > > >>>
> > > >>> Generally a single instance of an mdev device, a share of a
> > > >>> physical device, is assigned to a user space application or a VM.
> > > >>> There are cases when multiple instances of mdev devices of the
> > > >>> same or different types are required by a user space application
> > > >>> or VM. For example, in the case of vGPU, multiple mdev devices of
> > > >>> the type which represents the whole GPU can be assigned to one
> > > >>> instance of an application or VM.
> > > >>>
> > > >>> Not all mdev device types support assigning multiple mdev devices
> > > >>> to a user space application. In that case the vendor driver can
> > > >>> fail the open() call of the mdev device, but there is no way for
> > > >>> the user space application to know which configuration the vendor
> > > >>> driver supports.
> > > >>>
> > > >>> To expose the supported configuration, the vendor driver should
> > > >>> add a 'multiple_mdev_support' attribute to the type-id directory
> > > >>> if the vendor driver supports assigning multiple mdev devices of a
> > > >>> particular type-id to one instance of a user space application or
> > > >>> a VM.
> > > >>>
> > > >>> The user space application should check whether the
> > > >>> 'multiple_mdev_support' attribute is present in the type-id
> > > >>> directory of all mdev devices which are going to be used. If all
> > > >>> read 1, then the user space application can proceed with multiple
> > > >>> mdev devices.
> > > >>>
> > > >>> This is an optional and read-only attribute.
> > > >>
> > > >> I didn't get what the exact problem is from the above description.
> > > >> Isn't it the basic point behind the mdev concept that the parent
> > > >> driver can create multiple mdev instances on a physical device?
> > > >> VFIO usually doesn't care whether multiple devices (pci or mdev)
> > > >> are assigned to the same process/VM or not. Can you give some
> > > >> example of the actual problem behind this change?
> > > >
> > > > The situation here is that mdev imposes no restrictions regarding
> > > > how mdev devices can be used by the user.  Multiple mdevs,
> > > > regardless of type, can be combined and used in any way the user
> > > > sees fit, at least that's the theory.  However, in practice,
> > > > certain vendor drivers impose a limitation that prevents multiple
> > > > mdev devices from being used in the same VM.  This is done by
> > > > returning an error when the user attempts to open those additional
> > > > devices.  Now we're in the very predictable situation that we want
> > > > to consider that some of the vendor's mdev types might be combined
> > > > in the same userspace instance, while others still impose a
> > > > restriction, and how can we optionally expose such per mdev type
> > > > restrictions to the userspace so we have something better than
> > > > "try it and see if it works".
> > > >
> > > > The below proposal assumes that a single instance per VM is the
> > > > norm and that we add something to the API to indicate multiple
> > > > instances are supported.  However, that's really never been the
> > > > position of the mdev core, where there's no concept of limiting
> > > > devices per user instance; it's a vendor driver restriction.  So I
> > > > think the question is then why should we burden vendor drivers who
> > > > do not impose a restriction with an additional field?  Does the
> > > > below extension provide a better backwards compatibility scenario?
> > >
> > > A vendor driver that does not want to impose a restriction doesn't
> > > need to provide this attribute. In that case, the behavior remains
> > > the same as it is now, i.e. multiple mdev instances can be used by
> > > one instance of an application.
> > >
> > >
> > > > With the current code, I think it's reasonable for userspace to
> > > > assume there are no mdev limits per userspace instance, those that
> > > > exist are vendor specific.  The below proposal is for an optional
> > > > field, what does the management tool assume when it's not supplied?
> > > > We have both cases currently, mdev devices that allow multiple
> > > > instances per VM and those that do not, so I don't see that the
> > > > user is more informed with this addition and no attempt is made
> > > > here to synchronously update all drivers to expose this new
> > > > attribute.
> > > >
> > >
> > > When this attribute is not provided, the behavior should remain the
> > > same as it is now. But if this attribute is provided, then the
> > > management tool should read it to verify that such a combination is
> > > supported by the vendor driver.
> > >
> > > > Does it make more sense then to add an attribute to expose the
> > > > restriction rather than the lack of restriction?  Perhaps something
> > > > like "singleton_usage_restriction".
> > >
> > > I would prefer to expose what is supported rather than what is
> > > restricted.
> >
> > Above, it's stated that vendors who do not impose a restriction do not
> > need to implement this.  So it's designed to expose a restriction.
> > We're stating that exposing multiple_mdev_support=1/Y is the equivalent
> > of not reporting the attribute at all, so the only reason to implement
> > it would be because there is a restriction.  So why masquerade as a
> > feature?  Thanks,
> >
> 
> Agree it looks more like a restriction than a feature. In particular,
> the natural assumption is that a missing feature knob means the feature
> is not supported, which doesn't fit the intention here. Also, what
> default behavior should the high level mgmt stack assume if the
> attribute is missing? Some vendor drivers allow multiple instances
> today while others don't. Then do we ask the mgmt stack to include
> vendor-specific knowledge in order to "remain the same behavior"?

Forgot the latter part. Just realized Alex already explained the current
behavior - "try it and see if it works". :-)
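To make the "attribute missing" case concrete, below is a rough
userspace sketch of the logic a mgmt tool would end up with. This is
not against any merged ABI - the attribute name and its semantics are
only what this thread proposes, and the parent device / type-id values
are placeholders:

#include <stdio.h>
#include <string.h>

/*
 * Probe the proposed per-type attribute.
 * Returns 1 = multiple mdevs per VM supported, 0 = restricted,
 * -1 = attribute absent, so the caller can only "try it and see".
 */
static int multiple_mdev_supported(const char *parent, const char *type_id)
{
    char path[512];
    char buf[8] = "";
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/class/mdev_bus/%s/mdev_supported_types/%s"
             "/multiple_mdev_support", parent, type_id);

    f = fopen(path, "r");
    if (!f)
        return -1;      /* optional attribute not provided by the vendor */

    if (!fgets(buf, sizeof(buf), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);

    return (buf[0] == '1' || buf[0] == 'Y' || buf[0] == 'y') ? 1 : 0;
}

int main(void)
{
    /* placeholder parent/type names, just for illustration */
    int r = multiple_mdev_supported("0000:00:02.0", "nvidia-63");

    if (r == 1)
        printf("type advertises multiple mdevs per VM\n");
    else if (r == 0)
        printf("type restricted to a single mdev per VM\n");
    else
        printf("attribute missing: unknown, try it and see\n");
    return 0;
}

The -1 branch is exactly the ambiguity being discussed: an absent
optional attribute tells the mgmt stack nothing.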

Thanks
Kevin

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
