On Fri, Mar 04, 2011 at 06:57:27PM +0800, Xiao Guangrong wrote:
> Do not walk to the next level if the spte is mapping a large page
> 
> Signed-off-by: Xiao Guangrong <xiaoguangr...@cn.fujitsu.com>
> ---
>  arch/x86/kvm/mmu.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index b9bf016..10e0982 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3819,7 +3819,8 @@ int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, 
> u64 addr, u64 sptes[4])
>       for_each_shadow_entry(vcpu, addr, iterator) {
>               sptes[iterator.level-1] = *iterator.sptep;
>               nr_sptes++;
> -             if (!is_shadow_present_pte(*iterator.sptep))
> +             if (!is_shadow_present_pte(*iterator.sptep) ||
> +                   is_last_spte(*iterator.sptep, iterator.level))
>                       break;
>       }
>       spin_unlock(&vcpu->kvm->mmu_lock);

shadow_walk_okay already covers that case, so the extra check is unnecessary.
