> On 05.05.2014 at 16:35, "Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com> wrote:
>
> Alexander Graf <ag...@suse.de> writes:
>
>>> On 05/04/2014 07:25 PM, Aneesh Kumar K.V wrote:
>>> We reserve 5% of total RAM for CMA allocation, and not using it can
>>> result in us running out of NUMA node memory with specific
>>> configurations. One caveat is that we may not get a node-local HPT
>>> with a pinned-vcpu configuration. But currently libvirt also pins
>>> the vcpus to a cpuset after creating the hash page table.
>>
>> I don't understand the problem. Can you please elaborate?
>
> Let's take a system with 100GB RAM. We reserve around 5GB for htab
> allocation. Now if we use the rest of the available memory for
> hugetlbfs (because we want all the guests to be backed by huge pages),
> we end up in a situation where we have a few GB of free RAM and a 5GB
> CMA reserve area. If we then allow hash page table allocations to
> consume the free space, we end up hitting page allocation failures for
> other non-movable kernel allocations even though we still have 5GB of
> CMA reserve space free.
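Just to spell out the arithmetic above (a rough userspace sketch; the
sizes, the guest count and the two "policies" are illustrative
assumptions, not the actual kvm-ppc code):

/*
 * Rough model of the accounting in the quoted example: 100GB RAM,
 * ~5% reserved as a CMA area for hash page tables (HPTs), most of
 * the rest handed to hugetlbfs for guest backing, and a few guests
 * each needing an (assumed) 1GB HPT.
 */
#include <stdio.h>

#define GB(x) ((long long)(x) << 30)

int main(void)
{
        long long total_ram   = GB(100);
        long long cma_reserve = total_ram / 20;   /* ~5GB reserved for HPTs */
        long long hugetlbfs   = GB(93);           /* huge page pool for guest RAM */
        long long free_normal = total_ram - cma_reserve - hugetlbfs;
        long long hpt_size    = GB(1);            /* assumed per-guest HPT size */
        long long guests      = 4;

        /* Policy A: satisfy HPT allocations from the CMA reserve. */
        long long cma_a    = cma_reserve - guests * hpt_size;
        long long normal_a = free_normal;

        /* Policy B: satisfy HPT allocations from ordinary free memory. */
        long long cma_b    = cma_reserve;
        long long normal_b = free_normal - guests * hpt_size;

        printf("Policy A (CMA first):   normal free %lld GB, CMA free %lld GB\n",
               normal_a / GB(1), cma_a / GB(1));
        printf("Policy B (normal zone): normal free %lld GB, CMA free %lld GB%s\n",
               normal_b / GB(1), cma_b / GB(1),
               normal_b < 0 ? "  <- non-movable allocations fail" : "");
        return 0;
}

With those numbers, taking the HPTs from ordinary free memory runs the
normal zone dry after a handful of guests while the 5GB CMA area sits
untouched, which is the failure mode described above.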
Isn't this a greater problem? We should start swapping before we hit
the point where non-movable kernel allocations fail, no? The fact that
KVM uses a good number of normal kernel pages is maybe suboptimal, but
it shouldn't be a critical problem.


Alex

>
> -aneesh