Some hypervisors have the ability to hot-plug VCPUs exposed to the guest. Right now, libvirt XML can only describe the total number of vcpus assigned to a domain (the <vcpu> element under <domain>). Libvirt has the following APIs:

virConnectGetMaxVcpus - provides the maximum number of vcpus the host can assign to guests
virDomainGetMaxVcpus - if the domain is active, the maximum it was booted with; if inactive, the same as virConnectGetMaxVcpus
virDomainSetVcpus - change the current number of vcpus assigned to a domain; active domains only
virDomainPinVcpu - control how vcpus are pinned; active domains only
virDomainGetVcpus - detailed map of how vcpus are mapped to host cpus
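
For illustration, here's a minimal sketch of how the existing calls fit together (error handling omitted; the connection URI and the domain name "example" are placeholders):

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///system");
      virDomainPtr dom = virDomainLookupByName(conn, "example");

      /* host-wide and per-domain vcpu limits */
      printf("host max:   %d\n", virConnectGetMaxVcpus(conn, NULL));
      printf("domain max: %d\n", virDomainGetMaxVcpus(dom));

      /* detailed per-vcpu state of an active domain */
      virVcpuInfo info[16];
      int n = virDomainGetVcpus(dom, info, 16, NULL, 0);
      for (int i = 0; i < n; i++)
          printf("vcpu %u is on host cpu %d\n", info[i].number, info[i].cpu);

      virDomainFree(dom);
      virConnectClose(conn);
      return 0;
  }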

And virsh has these commands:

setvcpus - maps to virDomainSetVcpus
vcpuinfo - maps to virDomainGetVcpus
vcpupin - maps to virDomainPinVcpu
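
For example, against a running domain named "example":

  virsh setvcpus example 2     # hot-plug/unplug to 2 vcpus
  virsh vcpuinfo example       # per-vcpu state, cpu, and affinity
  virsh vcpupin example 0 0,1  # pin vcpu 0 to host cpus 0 and 1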



https://bugzilla.redhat.com/show_bug.cgi?id=545118 describes the use case of booting a Xen HV with one value for the maximum vcpu count but a smaller value for the current count. Technically, this can already be approximated by calling virDomainSetVcpus immediately after the guest boots, but that is resource-intensive compared to using xen's command line options to boot with a smaller current value than the maximum, and only hot-plugging additional vcpus later when needed (up to the maximum set at boot time). The workaround is also not persistent, so the extra vcpus must be manually unplugged on every boot.
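
For reference, that workaround amounts to something like the following against today's API (error handling omitted; the helper name and "xen:///" URI are just for illustration):

  #include <libvirt/libvirt.h>

  /* approximate a reduced current count: boot at the full <vcpu>
   * count, then immediately hot-unplug down to the target */
  int boot_with_reduced_vcpus(const char *name, unsigned int current)
  {
      virConnectPtr conn = virConnectOpen("xen:///");
      virDomainPtr dom = virDomainLookupByName(conn, name);
      virDomainCreate(dom);  /* starts with the full <vcpu> count */
      int ret = virDomainSetVcpus(dom, current);  /* redo on every boot */
      virDomainFree(dom);
      virConnectClose(conn);
      return ret;
  }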


At the XML layer, I'm proposing the addition of a new element <currentVcpu>:

<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>1</currentVcpu>
...

If absent, we keep the status quo of starting the domain with the same number of vcpus as the maximum. If present, it must be between 1 and <vcpu> inclusive (where supported, and exactly <vcpu> for hypervisors that lack vcpu hot-plug support), and dumping the xml of a domain will update the element to match the last virDomainSetVcpus call. This provides the persistence aspect, and allows domain startup to take advantage of any command line options to start with a reduced current vcpu count rather than having to unplug vcpus after the fact.
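
For example, with the XML above, calling virDomainSetVcpus(dom, 2) on the running domain would leave the next XML dump showing:

<domain ...>
  <vcpu>2</vcpu>
  <currentVcpu>2</currentVcpu>
...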


At the library API layer, I plan on adding:

virDomainSetMaxVcpus - alter the <vcpu> xml aspect of a domain for next boot; only affects persistent state

virDomainSetVcpusFlags - alter the <currentVcpu> xml aspect of a domain, with a flag to state whether the change is persistent (inactive domains, or affecting the next boot of an active domain) or live (active domains only).
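
One possible shape for those entry points (the signatures and flag names here are just a strawman):

  /* strawman flag names mirroring the persistent/live split */
  typedef enum {
      VIR_DOMAIN_VCPU_LIVE   = (1 << 0), /* affect the running domain */
      VIR_DOMAIN_VCPU_CONFIG = (1 << 1), /* affect persistent config */
  } virDomainVcpuFlags;

  int virDomainSetMaxVcpus(virDomainPtr domain, unsigned int maxvcpus);
  int virDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
                             unsigned int flags);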

and altering:

virDomainSetVcpus - can additionally be used on inactive domains to affect the next boot; no change to semantics on active domains; basically now a wrapper for virDomainSetVcpusFlags(dom, n, 0)
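
In other words, given the strawman above, the old entry point would boil down to roughly:

  int virDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
  {
      /* flags of 0: unchanged semantics on active domains, and now
       * also usable on inactive domains to affect the next boot */
      return virDomainSetVcpusFlags(domain, nvcpus, 0);
  }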

virDomainGetMaxVcpus - on inactive domains, this value now matches the <vcpu> setting rather than blindly matching virConnectGetMaxVcpus

I think the existing virDomainGetVcpus is adequate for determining the number of current vcpus in an active domain. Are there any other API changes that you think might be necessary?


Finally, at the virsh layer, I plan on:

vcpuinfo: add a --count flag; if the flag is present, inactive domains show current and max vcpus rather than erroring out, and active domains add current and max vcpu information to the overall output

setvcpus: add --max and --persistent flags; without flags, this still maps to virDomainSetVcpus and only affects active domains; with --max, it maps to virDomainSetMaxVcpus; with --persistent, it maps to virDomainSetVcpusFlags
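
So, under this proposal, usage would look something like this (domain name and counts are purely illustrative):

  virsh vcpuinfo --count example         # current and max, even if inactive
  virsh setvcpus example 3               # live hot-plug; active domains only
  virsh setvcpus --max example 4         # <vcpu> for the next boot
  virsh setvcpus --persistent example 2  # persistent <currentVcpu>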


Any thoughts on this plan of attack before I start submitting code patches?

--
Eric Blake   ebl...@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org
