On Fri, Jul 12, 2024 at 09:44:02AM -0300, Fabiano Rosas wrote:
> Do you have a reference for that kubevirt issue I could look at? It
> may be interesting to investigate further. Where's the throttling coming
> from? And doesn't less vcpu time imply less dirtying and therefore
> faster convergence?

Sorry, I don't have a link on hand.  Sometimes it's not about convergence;
it's about impacting the guest workload more than intended, which is not
wanted, especially on a public cloud.

It's understandable to me, since they're under the same cgroup, with
throttled cpu resources applied to the QEMU+Libvirt processes as a whole,
probably sized based on N_VCPUS with some tiny extra room for other stuff.
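Just to illustrate (numbers made up; assuming plain cgroup v2 cpu.max
semantics rather than whatever kubevirt actually writes), a 4-vCPU guest
could end up with something like:

    # /sys/fs/cgroup/<pod-slice>/cpu.max
    410000 100000

i.e. roughly 4.1 CPUs' worth of quota per 100ms period, so any burst from
migration or other mgmt threads directly eats into the runtime available
to the vcpu threads in the same slice.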

For example, I remember they also hit other threads contending with the
vcpu threads, like the block layer thread pools.

It's a separate issue when talking about locked_vm.  Kubevirt probably
needs to figure out a way to say "these are mgmt threads, and those are
vcpu threads", because mgmt threads can take quite some cpu resources
sometimes and that's not avoidable.  Page pinning will be another story,
as in many cases pinning should not be required, except for VFIO, zerocopy
and other special stuff.
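For whoever wants to poke at the pinning side in such an environment, a
minimal sketch (mine, not something from kubevirt) that just reads the
RLIMIT_MEMLOCK which pinned/locked pages are accounted against for an
unprivileged process:

    /* cc -o memlock memlock.c */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* The limit that VFIO / zerocopy page pinning is charged to */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit(RLIMIT_MEMLOCK)");
            return 1;
        }
        printf("RLIMIT_MEMLOCK: soft=%llu hard=%llu bytes\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }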

-- 
Peter Xu

