On 1/6/21 3:05 PM, Daniel P. Berrangé wrote:
On Wed, Jan 06, 2021 at 02:40:15PM -0300, Daniel Henrique Barboza wrote:


On 1/6/21 2:30 PM, Daniel P. Berrangé wrote:
On Wed, Jan 06, 2021 at 02:24:35PM -0300, Daniel Henrique Barboza wrote:


On 1/6/21 8:13 AM, Erik Skultety wrote:
On Wed, Jan 06, 2021 at 08:00:52AM -0300, Daniel Henrique Barboza wrote:


On 1/6/21 7:09 AM, Daniel P. Berrangé wrote:
On Tue, Jan 05, 2021 at 05:18:13PM -0300, Daniel Henrique Barboza wrote:

[...]



This is similar to what we do for the nwfilter-binding and net-port XML
where we have an <owner> element present.

The complication here is that right now we don't ever touch the nodedev
driver when doing host device assignment, and so don't especially want
to introduce a dependency.

One possible alternative would be a new API that operates on hostdevs
instead of nodedevs. "hostdev-list" would list the devices assigned to any
domain, as opposed to "nodedev-list", which lists all nodedevs of the host.
I'm not sure whether this differentiation between hostdev and nodedev
(i.e. a hostdev is a nodedev that is assigned to a domain) would be clear
enough to users though. We would need to document it more clearly in the
docs.
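
A purely hypothetical sketch of the difference (the second command does
not exist, it is just the proposal):

  $ virsh nodedev-list --cap pci    # every PCI nodedev on the host
  $ virsh hostdev-list              # only nodedevs assigned to a domain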

Wasn't this about the connection to the nodedev though? E.g. with mdevs we
only have a UUID in the domain XML, which doesn't tell you anything about
the device nor its parent, and you also can't take the UUID and try to find
the corresponding nodedev entry for it (well, you can hack it so that you
construct the resulting nodedev name). Maybe I'm just misunderstanding the
use case though.
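
For reference, the hack in question amounts to something like the sketch
below, assuming the "mdev_" prefix plus UUID-with-dashes-replaced naming
convention (an implementation detail that could change):

/* Sketch: derive the nodedev name from an mdev UUID, assuming the
 * "mdev_" + UUID-with-underscores naming convention. */
#include <stdio.h>
#include <string.h>

static void mdevUUIDToNodedevName(const char *uuid, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "mdev_%s", uuid);
    for (char *p = buf; *p; p++) {
        if (*p == '-')
            *p = '_';    /* nodedev names use '_' where the UUID has '-' */
    }
}

int main(void)
{
    char name[64];

    mdevUUIDToNodedevName("b1ae8bf6-38b0-4c81-9d44-78ce3f520496",
                          name, sizeof(name));
    printf("%s\n", name);    /* mdev_b1ae8bf6_38b0_4c81_9d44_78ce3f520496 */
    return 0;
}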

The particular case I'm asking for comments on is related to PCI hostdevs
(namely, SR-IOV virtual functions) that might get removed from the host
while being assigned to a running domain. We don't support that (although I
posted patches that try to alleviate the issue in Libvirt), and at the same
time we don't provide easy tools for the user to check whether a specific
hostdev is assigned to a domain. The user must query the running domains to
find out.
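
Today that query amounts to something like the sketch below, using the
public API (error handling omitted; the PCI address and the naive substring
match are purely illustrative, a real tool would parse the XML):

/* Sketch: find which running domain, if any, has a given PCI hostdev
 * assigned, by scanning each active domain's live XML. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Example VF address as it appears in the domain XML. */
    const char *needle =
        "domain='0x0000' bus='0x3b' slot='0x02' function='0x1'";
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainPtr *doms = NULL;
    int ndoms = virConnectListAllDomains(conn, &doms,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);

    for (int i = 0; i < ndoms; i++) {
        char *xml = virDomainGetXMLDesc(doms[i], 0);
        if (xml && strstr(xml, needle))
            printf("VF assigned to domain '%s'\n",
                   virDomainGetName(doms[i]));
        free(xml);
        virDomainFree(doms[i]);
    }
    free(doms);
    virConnectClose(conn);
    return 0;
}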

This isn't all that much different to other host resources that are given
to guests. eg if pinning vCPUs 1:1 to pCPUs, the admin/mgmt app has to
keep track of which pCPUs are used. If assigning host block devices to a
guest, the admin/mgmt app has to keep track of block devices in use.
If assigning NICs for dedicated guest use the admin/mgmt app has to keep
track. etc, etc.

Apps like oVirt, OpenStack, KubeVirt will all do this tracking themselves
generally. This is especially important when they need to have this usage
information kept on a separate host so that the scheduler can use it
when deciding which host to place a new guest on.

So, I'm not entirely convinced libvirt has a critical need to do anything
for PCI devices in this respect.

I agree that whether we implement this or not, this is a 'good to have'
feature at best, one that only the average admin who has access to an
SR-IOV card and doesn't have oVirt-like apps to manage the VMs will end up
using. Not sure how many people out there fit this profile TBH.

Definitely nothing that warrants breaking things to implement.

For the ad hoc use case we don't especially need to know which VM is using
a PCI device. We just need to know whether the device is in use or not.

We know if a PCI device is in use because it will be bound to a specific
kernel driver whenever assigned. Could we perhaps use this as a way to
filter the list of node devs to only show those which are not assigned?
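
The check itself is cheap; a minimal sketch assuming the standard sysfs
layout (the device address is just an example):

/* Sketch: report which kernel driver a PCI device is bound to, via the
 * /sys/bus/pci/devices/<addr>/driver symlink. A device bound to vfio-pci
 * (or the legacy pci-stub) can be treated as assigned for passthrough. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <limits.h>
#include <libgen.h>

int main(void)
{
    const char *addr = "0000:3b:02.1";    /* example VF address */
    char link[PATH_MAX], target[PATH_MAX];
    ssize_t len;

    snprintf(link, sizeof(link), "/sys/bus/pci/devices/%s/driver", addr);
    len = readlink(link, target, sizeof(target) - 1);
    if (len < 0) {
        printf("%s: not bound to any driver\n", addr);
        return 0;
    }
    target[len] = '\0';

    const char *drv = basename(target);
    printf("%s: bound to %s%s\n", addr, drv,
           (strcmp(drv, "vfio-pci") == 0 || strcmp(drv, "pci-stub") == 0)
               ? " (assigned for passthrough)" : "");
    return 0;
}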


Interesting. Making 'nodedev-list' show which PCI nodedevs are
assigned/unassigned via sysfs is already good info to have, and we
don't create any new dependency in the nodedev driver.

I'll investigate it.


Thanks,


DHB



Regards,
Daniel

