Re: [PATCH] kvm: don't call mmu_shrinker w/o used_mmu_pages
On Sun, Apr 22, 2012 at 2:35 AM, Avi Kivity wrote:
> On 04/21/2012 01:11 AM, Andrew Morton wrote:
>> On Fri, 13 Apr 2012 15:38:41 -0700 Ying Han wrote:
>>
>>> The mmu_shrink() is heavy by itself by iterating all kvms and holding
>>> the kvm_lock. spotted the code w/ Rik during LSF, and it turns out we
>>> don't need to call the shrinker if nothing to shrink.
>>
>> We should probably tell the kvm maintainers about this ;)
>
> Andrew, I see you added this to -mm. First, it should go through the
> kvm tree. Second, unless we misunderstand something, the patch does
> nothing, so I don't think it should be added at all.

Avi, does this patch help the case you mentioned above, where the kvm
module is loaded but no virtual machines are present? Why do we have to
walk the empty list while holding the spinlock?

--Ying

> --
> error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] kvm: don't call mmu_shrinker w/o used_mmu_pages
On Fri, Apr 20, 2012 at 3:53 PM, Rik van Riel wrote:
> On 04/20/2012 06:11 PM, Andrew Morton wrote:
>>
>> On Fri, 13 Apr 2012 15:38:41 -0700 Ying Han wrote:
>>
>>> The mmu_shrink() is heavy by itself by iterating all kvms and holding
>>> the kvm_lock. spotted the code w/ Rik during LSF, and it turns out we
>>> don't need to call the shrinker if nothing to shrink.
>>>
>>> @@ -3900,6 +3905,9 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>>>     if (nr_to_scan == 0)
>>>             goto out;
>>>
>>> +   if (!get_kvm_total_used_mmu_pages())
>>> +           return 0;
>>> +
>>
>> Do we actually know that this patch helps anything? Any measurements?
>> Is kvm_total_used_mmu_pages==0 at all common?
>
> On re-reading mmu.c, it looks like even with EPT or NPT,
> we end up creating mmu pages for the nested page tables.

I think you are right here, so the patch doesn't address the real pain.
My understanding of the real pain is the poor implementation of the
mmu_shrinker: it walks all the registered kvms, doing only a little
work at a time while holding two big locks. I learned from mikew@
(also ++cc-ed) that this is causing latency spikes and unfairness
among kvm instances in some of the experiments we've seen. Mike can
say more on that.

--Ying

> I have not had the time to look into it more, but it would
> be nice to know if the patch has any effect at all.
>
> --
> All rights reversed