On Mon, Dec 12, 2011 at 07:26:47AM +0900, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa <yoshikawa.tak...@oss.ntt.co.jp>
> 
> Currently, mmu_shrink() tries to free a shadow page from one kvm and
> does not use nr_to_scan correctly.
> 
> This patch fixes this by making it try to free some shadow pages from
> each kvm.  The number of shadow pages each kvm frees becomes
> proportional to the number of shadow pages it is using.
> 
> Note: an easy way to see how this code works is to do
>   echo 3 > /proc/sys/vm/drop_caches
> while some virtual machines are running.  Shadow pages will be zapped
> as expected by this patch.

I'm not sure this is a meaningful test to verify the change is
worthwhile: even though the shrinker frees a shadow page from only one
VM per call, that VM's position in the kvm list is changed afterwards,
so over time the shrinker cycles through all VMs anyway.

Can you measure whether there is a significant difference in a synthetic
workload, and what that change is? Perhaps apply {moderate, high} memory
pressure load with {2, 4, 8, 16} VMs or something like that.
