On 13/10/2017 23:48, Peng Hao wrote:
> When powering off an L1 guest that is running an L2 guest on it, there is a
> path that triggers a bad_page BUG_ON.

How easy is it to reproduce?  CCing Junaid and Guangrong too.

> A !page_count(pfn_to_page(pfn)) warning in mmu_spte_clear_track_bits appears
> first; the code may then set the A/D bit on the freed page and trigger a
> bad_page BUG_ON.
> 
> Signed-off-by: Peng Hao <[email protected]>
> ---
>  arch/x86/kvm/mmu.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index eca30c1..398de96 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -711,12 +711,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
>  
>       pfn = spte_to_pfn(old_spte);
>  
> -     /*
> -      * KVM does not hold the refcount of the page used by
> -      * kvm mmu, before reclaiming the page, we should
> -      * unmap it from mmu first.
> -      */
> -     WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
> +     if (!page_count(pfn_to_page(pfn)))
> +             return 1;

If the page count is zero, KVM should not have accessed this page at
all.  The bug is elsewhere. :(

Paolo

>       if (is_accessed_spte(old_spte))
>               kvm_set_pfn_accessed(pfn);
> 

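For reference, the invariant the removed WARN_ON documents, reconstructed
only from the hunk quoted above (the rest of mmu_spte_clear_track_bits is
elided): the host is expected to zap the spte before it frees the page, so a
non-reserved pfn reaching this point with a zero refcount indicates a missed
unmap elsewhere, not a condition to silently skip.

	/* Sketch of the original tail of mmu_spte_clear_track_bits(). */
	pfn = spte_to_pfn(old_spte);

	/*
	 * KVM does not hold the refcount of the page used by
	 * kvm mmu, before reclaiming the page, we should
	 * unmap it from mmu first.
	 */
	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));

	if (is_accessed_spte(old_spte))
		kvm_set_pfn_accessed(pfn);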