On Mon, May 03, 2010 at 09:38:54PM +0800, Gui Jianfeng wrote:
> Hi Marcelo,
>
> Actually, it doesn't only affect kvm_mmu_change_mmu_pages() but also
> affects kvm_mmu_remove_some_alloc_mmu_pages(), which is called by the
> mmu shrink routine. This will cause the upper layer to get a wrong
> number, so I think
>
> Marcelo Tosatti wrote:
>> On Fri, Apr 23, 2010 at 01:58:22PM +0800, Gui Jianfeng wrote:
>>> Currently, in kvm_mmu_change_mmu_pages(kvm, page), used_pages-- is
>>> performed after calling kvm_mmu_zap_page(), regardless of whether the
>>> page is actually reclaimed. Because a root sp won't be reclaimed by
>>> kvm_mmu_zap_page(), making kvm_mmu_zap_page() return the total
>>> number of reclaimed sp makes