On Tue, Mar 14, 2017 at 02:52:34PM +0000, Suzuki K Poulose wrote:
> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
> unmap_stage2_range() on the entire memory range for the guest. This could
> cause problems with other callers (e.g, munmap on a memslot) trying to
> unmap a range.
> 
> Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
> Cc: sta...@vger.kernel.org # v3.10+
> Cc: Marc Zyngier <marc.zyng...@arm.com>
> Cc: Christoffer Dall <christoffer.d...@linaro.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
> ---
>  arch/arm/kvm/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 13b9c1f..b361f71 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -831,7 +831,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
>       if (kvm->arch.pgd == NULL)
>               return;
>  
> +     spin_lock(&kvm->mmu_lock);
>       unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> +     spin_unlock(&kvm->mmu_lock);
> +

This ends up holding the spin lock for potentially quite a while, during
which we can do things like __flush_dcache_area(), which I think can fault.

Is that valid?

Thanks,
-Christoffer

>       /* Free the HW pgd, one page at a time */
>       free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
>       kvm->arch.pgd = NULL;
> -- 
> 2.7.4
> 