On Sun, Apr 22, 2012 at 2:16 AM, Avi Kivity <a...@redhat.com> wrote:
> On 04/21/2012 05:15 AM, Mike Waychison wrote:
[...]
> There is no mmu_list_lock.  Do you mean kvm_lock or kvm->mmu_lock?
>
> If the former, then we could easily fix this by dropping kvm_lock while
> the work is being done.  If the latter, then it's more difficult.
>
> (kvm_lock being contended implies that mmu_shrink is called concurrently?)

On a 32-core system experiencing memory pressure, mmu_shrink was often
being called concurrently (before we turned it off).

With just one VM, or a small number of VMs, on a host, contention in
the mmu_shrinker on the kvm_lock is really just a proxy for
contention on kvm->mmu_lock.  kvm_lock is the one that gets reported,
though, since it is acquired first.

The contention on mmu_lock would indeed be difficult to remove.  Our
case was perhaps unusual because of the use of memory containers:
some cgroups were under memory pressure (and thus calling the
shrinker), but the various VCPU threads (whose guest page tables were
being evicted by the shrinker) could immediately turn around and
successfully re-allocate them.  That made kvm->mmu_lock really hot.
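
To make Avi's "drop kvm_lock while the work is being done" idea
concrete, here is a minimal user-space sketch of that pattern
(compile with -pthread).  None of the names below come from kvm.ko;
struct vm, vm_list_lock, evict_some_pages and so on are hypothetical
stand-ins.  The point is just that the global list lock is held only
long enough to pick and pin a victim VM, and the expensive eviction
then runs under the per-VM mmu_lock with the global lock dropped.

#include <pthread.h>

/* Hypothetical stand-in for struct kvm. */
struct vm {
	struct vm *next;          /* linkage on the global vm_list */
	int refcount;             /* keeps the vm alive after unlock */
	pthread_mutex_t mmu_lock; /* per-VM, like kvm->mmu_lock */
	int nr_shadow_pages;
};

static pthread_mutex_t vm_list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct vm *vm_list;

/* Per-VM work, done *without* the global lock held. */
static void evict_some_pages(struct vm *vm, int nr)
{
	pthread_mutex_lock(&vm->mmu_lock);
	while (nr-- > 0 && vm->nr_shadow_pages > 0)
		vm->nr_shadow_pages--;  /* stand-in for zapping one page */
	pthread_mutex_unlock(&vm->mmu_lock);
}

/* Shrinker: hold the global lock only to choose and pin a victim. */
static void shrink(int nr)
{
	struct vm *victim;

	pthread_mutex_lock(&vm_list_lock);
	victim = vm_list;               /* policy here: just the head */
	if (victim)
		victim->refcount++;     /* pin so it cannot go away */
	pthread_mutex_unlock(&vm_list_lock);

	if (!victim)
		return;

	evict_some_pages(victim, nr);   /* slow part, global lock dropped */

	pthread_mutex_lock(&vm_list_lock);
	victim->refcount--;             /* real code would free at zero */
	pthread_mutex_unlock(&vm_list_lock);
}

int main(void)
{
	struct vm vm = { .refcount = 1, .nr_shadow_pages = 64 };

	pthread_mutex_init(&vm.mmu_lock, NULL);
	vm_list = &vm;
	shrink(16);                     /* evicts 16 of the 64 pages */
	return 0;
}

Of course, this only helps with the kvm_lock half; the mmu_lock
contention described above would still be there.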
