On 03/05/17 15:17, Suzuki K Poulose wrote:
> We yield the kvm->mmu_lock occasionally while performing an operation
> (e.g, unmap or permission changes) on a large area of stage2 mappings.
> However, this could cause another thread to clear and free up the
> stage2 page tables while we are waiting to regain the lock, and the
> original thread could then end up accessing memory that has been
> freed. This patch fixes the problem by making sure that the stage2
> page table is still valid after we regain the lock. The fact that
> mmu_notifier->release() could be called twice (via __mmu_notifier_release
> and mmu_notifier_unregister) increases the likelihood of hitting
> this race, where two threads are trying to unmap all of the guest
> shadow pages.
> 
> While at it, clean up the redundant checks around cond_resched_lock in
> stage2_wp_range(), as cond_resched_lock already does the same checks.
> 
> Cc: Mark Rutland <mark.rutl...@arm.com>
> Cc: Radim Krčmář <rkrc...@redhat.com>
> Cc: andreyk...@google.com
> Cc: Christoffer Dall <christoffer.d...@linaro.org>
> Cc: Marc Zyngier <marc.zyng...@arm.com>
> Cc: Paolo Bonzini <pbonz...@redhat.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
> ---
>  arch/arm/kvm/mmu.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 909a1a7..5b3e0db 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -301,9 +301,14 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
>               /*
>                * If the range is too large, release the kvm->mmu_lock
>                * to prevent starvation and lockup detector warnings.
> +              * Make sure the page table is still active when we regain
> +              * the lock.
>                */
> -             if (next != end)
> +             if (next != end) {
>                       cond_resched_lock(&kvm->mmu_lock);
> +                     if (!READ_ONCE(kvm->arch.pgd))
> +                             break;
> +             }
>       } while (pgd++, addr = next, addr != end);
>  }
>  
> @@ -1170,11 +1175,13 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
>                * large. Otherwise, we may see kernel panics with
>                * CONFIG_DETECT_HUNG_TASK, CONFIG_LOCKUP_DETECTOR,
>                * CONFIG_LOCKDEP. Additionally, holding the lock too long
> -              * will also starve other vCPUs.
> +              * will also starve other vCPUs. We also have to make sure
> +              * that the page tables are not freed while we release
> +              * the lock.
>                */
> -             if (need_resched() || spin_needbreak(&kvm->mmu_lock))
> -                     cond_resched_lock(&kvm->mmu_lock);
> -
> +             cond_resched_lock(&kvm->mmu_lock);
> +             if (!READ_ONCE(kvm->arch.pgd))
> +                     break;
>               next = stage2_pgd_addr_end(addr, end);
>               if (stage2_pgd_present(*pgd))
>                       stage2_wp_puds(pgd, addr, next);
> 

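The pattern being fixed here (drop the lock to yield, then revalidate the
root pointer before touching the tables again) generalises beyond KVM.
Below is a minimal userspace sketch of it; the names table_root,
table_lock, walker() and teardown() are made up for illustration and
stand in for kvm->arch.pgd, kvm->mmu_lock and the two racing paths. They
are not kernel APIs.

/*
 * Minimal userspace sketch of the yield-and-revalidate pattern.
 * walker() plays the role of unmap_stage2_range(), teardown() the
 * thread that frees the page tables. Illustrative only, not kernel code.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static long *table_root;                /* NULL once torn down */

static void *walker(void *unused)
{
        pthread_mutex_lock(&table_lock);
        for (int i = 0; i < 1000000; i++) {
                /*
                 * Revalidate the root every time we (re)hold the lock:
                 * teardown() may have freed the tables while we yielded.
                 * This mirrors the patch's READ_ONCE(kvm->arch.pgd) check.
                 */
                if (!table_root)
                        break;

                table_root[i % 512] = i;        /* work under the lock */

                /* Yield periodically, as unmap_stage2_range() does. */
                pthread_mutex_unlock(&table_lock);
                pthread_mutex_lock(&table_lock);
        }
        pthread_mutex_unlock(&table_lock);
        return NULL;
}

static void *teardown(void *unused)
{
        pthread_mutex_lock(&table_lock);
        free(table_root);
        table_root = NULL;              /* tell walkers the tables are gone */
        pthread_mutex_unlock(&table_lock);
        return NULL;
}

int main(void)
{
        pthread_t w, t;

        table_root = calloc(512, sizeof(*table_root));
        pthread_create(&w, NULL, walker, NULL);
        pthread_create(&t, NULL, teardown, NULL);
        pthread_join(w, NULL);
        pthread_join(t, NULL);
        printf("walker exited cleanly after revalidating the root\n");
        return 0;
}

Without the revalidation after relocking, the walker could dereference a
pointer that teardown() has already freed, which is exactly the
use-after-free the patch closes.
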
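On the cond_resched_lock() cleanup: the dropped need_resched() ||
spin_needbreak() test is indeed redundant, because cond_resched_lock()
performs the equivalent test internally and only drops the lock when one
of those conditions holds. Roughly, as a simplified sketch of that
behaviour (not the actual kernel/sched implementation):

/* Simplified sketch of what cond_resched_lock() does for us; the real
 * implementation also handles preemption accounting. */
static int cond_resched_lock_sketch(spinlock_t *lock)
{
        /* Only drop the lock if a reschedule is due or the lock
         * is contended; otherwise keep it held. */
        if (need_resched() || spin_needbreak(lock)) {
                spin_unlock(lock);
                cond_resched();         /* give the CPU away if needed */
                spin_lock(lock);
                return 1;               /* we dropped the lock */
        }
        return 0;                       /* lock was held throughout */
}

Because any call may drop the lock, the kvm->arch.pgd recheck has to
follow every call unconditionally, as the patch does.
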
Reviewed-by: Marc Zyngier <marc.zyng...@arm.com>

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...
