On 15/03/17 09:21, Christoffer Dall wrote:
> On Tue, Mar 14, 2017 at 02:52:34PM +0000, Suzuki K Poulose wrote:
>> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
>> unmap_stage2_range() on the entire memory range for the guest. This could
>> race with other callers (e.g., munmap on a memslot) trying to unmap a
>> range.
>>
>> Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
>> Cc: sta...@vger.kernel.org # v3.10+
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
>> ---
>>  arch/arm/kvm/mmu.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index 13b9c1f..b361f71 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -831,7 +831,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
>>      if (kvm->arch.pgd == NULL)
>>              return;
>>  
>> +    spin_lock(&kvm->mmu_lock);
>>      unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
>> +    spin_unlock(&kvm->mmu_lock);
>> +
> 
> This ends up holding the spinlock for potentially quite a while, during
> which we can call things like __flush_dcache_area(), which I think can fault.
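
As an aside, the hold time itself could be bounded by breaking the lock
between top-level entries. A minimal sketch, not the code in this patch:
unmap_one_pgd_entry() is a hypothetical stand-in for tearing down the
mappings under one PGD entry, stage2_pgd_addr_end() is the stage-2
walker's boundary helper, and cond_resched_lock() is the stock kernel
lock-break primitive.

#include <linux/kvm_host.h>
#include <linux/sched.h>	/* cond_resched_lock() */

/* Hypothetical sketch: unmap a stage-2 range with periodic lock breaks. */
static void unmap_range_with_lock_break(struct kvm *kvm,
					phys_addr_t start, u64 size)
{
	phys_addr_t addr = start, end = start + size;
	phys_addr_t next;

	assert_spin_locked(&kvm->mmu_lock);

	do {
		next = stage2_pgd_addr_end(addr, end);
		/* Hypothetical helper: clear one PGD entry's mappings. */
		unmap_one_pgd_entry(kvm, addr, next);

		/* Let mmu_lock waiters (and the scheduler) in between entries. */
		if (next != end)
			cond_resched_lock(&kvm->mmu_lock);
	} while (addr = next, addr != end);
}

Whether such a lock break is safe depends on the caller tolerating the
range disappearing piecemeal; for the teardown path in this patch the
whole IPA space is being dropped anyway.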

I believe we're always using the linear mapping (or kmap on 32-bit) for
these accesses, precisely so that they cannot fault.
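
A sketch of the kind of mapping meant here; kvm_flush_dcache_to_poc()
is assumed to be the flush-to-PoC wrapper from
arch/arm/include/asm/kvm_mmu.h, and the helper name is made up:

#include <linux/highmem.h>	/* kmap_atomic()/kunmap_atomic() */

/* Sketch: flush a guest page through an always-present kernel mapping. */
static void flush_guest_page_sketch(struct page *page)
{
	/*
	 * Linear map, or a fixmap slot for 32-bit highmem: both are
	 * kernel mappings that are always present, so the access cannot
	 * fault, and kmap_atomic() is fine to use with mmu_lock held.
	 */
	void *va = kmap_atomic(page);

	/* Assumed wrapper: flush the data cache to Point of Coherency. */
	kvm_flush_dcache_to_poc(va, PAGE_SIZE);

	kunmap_atomic(va);
}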

Thanks,

        M.
-- 
Jazz is not dead. It just smells funny...