Anthony Liguori wrote:
Andre Przywara wrote:
Hi,

this patch series introduces support for multiple NUMA nodes within KVM guests. This improves the performance of guests that are bigger than one node (in number of VCPUs and/or amount of memory) and also allows better balancing by making better use of each node's memory.
It also improves the single-node case by pinning a guest to that node and
avoiding remote memory accesses by its VCPUs.

Could you please post this to qemu-devel? There's really nothing KVM-specific here.


It's almost useless to qemu until it can run vcpus on host threads. I agree it should be posted there though.


I think the dependency on libnuma is a bad idea. It's mixing a mechanism (emulating NUMA layout) with a policy (how to do memory/VCPU placement).

If you split the NUMA emulation bits into a separate patch series that has no dependency on the host NUMA topology, I think we can look at the existing mechanisms we have and see whether they're sufficient to do static placement on NUMA boundaries. VCPU pinning is easy enough; I think the only place we're lacking is memory layout. Note that this is totally independent of the guest's NUMA characteristics, though. You may still want half of memory pinned to each of two nodes even if the guest has no SRAT tables.
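
For illustration, static placement of that kind is possible with existing kernel interfaces alone, independent of any guest SRAT. The following is only a minimal sketch, not code from the patch series; the helper name, CPU list and region size are hypothetical:

/*
 * Pin a VCPU thread to the CPUs of one host node and bind its share of
 * guest RAM to the same node. mbind() is a plain kernel syscall; the
 * <numaif.h> header (and -lnuma at link time) only supplies the thin
 * wrapper, not any placement policy.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <numaif.h>

#define GUEST_RAM_PER_NODE (512UL << 20)   /* hypothetical: 512 MB per node */

static void *place_vcpu_and_ram(int host_node, const int *node_cpus, int ncpus)
{
    cpu_set_t set;
    int i;

    /* Pin the calling (VCPU) thread to the CPUs of the chosen host node. */
    CPU_ZERO(&set);
    for (i = 0; i < ncpus; i++)
        CPU_SET(node_cpus[i], &set);
    if (sched_setaffinity(0, sizeof(set), &set) < 0) {
        perror("sched_setaffinity");
        return NULL;
    }

    /* Allocate this node's slice of guest RAM ... */
    void *ram = mmap(NULL, GUEST_RAM_PER_NODE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }

    /* ... and bind it to the same host node before it is first touched. */
    unsigned long nodemask = 1UL << host_node;
    if (mbind(ram, GUEST_RAM_PER_NODE, MPOL_BIND,
              &nodemask, sizeof(nodemask) * 8, 0) < 0) {
        perror("mbind");
        munmap(ram, GUEST_RAM_PER_NODE);
        return NULL;
    }
    return ram;
}

int main(void)
{
    int node0_cpus[] = { 0, 1 };    /* hypothetical: CPUs of host node 0 */

    return place_vcpu_and_ram(0, node0_cpus, 2) ? EXIT_SUCCESS : EXIT_FAILURE;
}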

You can do that easily with numactl. Fine-grained control of the host NUMA layout and guest NUMA emulation are only useful together (one could argue that guest NUMA emulation is useful by itself, for debugging the guest OS's NUMA algorithms).
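
As a hedged example of the numactl route: the whole guest process can be started under something like "numactl --cpunodebind=0 --membind=0 <qemu command>", or the equivalent can be done from inside the process with libnuma. The sketch below uses the looser preferred-node policy rather than strict binding, and the node number is only an example:

/*
 * Whole-process placement via libnuma (link with -lnuma), roughly what
 * numactl does before exec'ing the real command.
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int node = 0;                   /* hypothetical target host node */

    if (numa_available() < 0) {
        fprintf(stderr, "host has no NUMA support\n");
        return EXIT_FAILURE;
    }

    /* Run this process only on the CPUs of the chosen node ... */
    if (numa_run_on_node(node) < 0) {
        perror("numa_run_on_node");
        return EXIT_FAILURE;
    }

    /* ... and prefer that node's memory for all future allocations. */
    numa_set_preferred(node);

    /* At this point the guest would be started (e.g. via exec). */
    return EXIT_SUCCESS;
}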

--
error compiling committee.c: too many arguments to function
