On 04/15/2020 09:42 AM, Jiang Yi wrote:
Do cond_resched_lock() in stage2_flush_memslot(), as is already done in
unmap_stage2_range() and other places that hold mmu_lock while processing
a possibly large range of memory.

Signed-off-by: Jiang Yi <[email protected]>
---
  virt/kvm/arm/mmu.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..7315af2c52f8 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -417,16 +417,19 @@ static void stage2_flush_memslot(struct kvm *kvm,
        phys_addr_t next;
        pgd_t *pgd;

        pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
        do {
                next = stage2_pgd_addr_end(kvm, addr, end);
                if (!stage2_pgd_none(kvm, *pgd))
                        stage2_flush_puds(kvm, pgd, addr, next);
+
+               if (next != end)
+                       cond_resched_lock(&kvm->mmu_lock);
        } while (pgd++, addr = next, addr != end);
  }
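
For readers less familiar with the primitive: cond_resched_lock() only
breaks the lock when there is actually a reason to (a pending reschedule,
or another CPU contending for the lock), so the common case adds just a
cheap check per PGD entry, and the `if (next != end)` guard avoids a
pointless lock-break after the final entry has been processed. Below is a
minimal sketch of the behaviour; it is an approximation for illustration,
not the actual helper from kernel/sched/core.c:

	/*
	 * Sketch only: roughly what cond_resched_lock(lock) does.
	 * need_resched(), spin_needbreak(), cond_resched() and the
	 * spinlock calls are real kernel APIs; the real helper's body
	 * also depends on the preemption model.
	 */
	static int cond_resched_lock_sketch(spinlock_t *lock)
	{
		int rescheduled = 0;

		if (need_resched() || spin_needbreak(lock)) {
			spin_unlock(lock);
			cond_resched();		/* let other tasks run */
			spin_lock(lock);
			rescheduled = 1;
		}
		return rescheduled;
	}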

Given that this is called under the srcu_lock, this looks
good to me:

Reviewed-by: Suzuki K Poulose <[email protected]>
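
For context, the caller here is stage2_flush_vm(), which takes the srcu
read lock before mmu_lock; srcu keeps the memslots stable even while
cond_resched_lock() temporarily drops mmu_lock in the walk. Roughly what
the caller looks like in this tree (reproduced from memory of
virt/kvm/arm/mmu.c, so treat it as a sketch rather than the exact code):

	static void stage2_flush_vm(struct kvm *kvm)
	{
		struct kvm_memslots *slots;
		struct kvm_memory_slot *memslot;
		int idx;

		/* srcu read lock: the memslots can't go away under us,
		 * even across a cond_resched_lock() in the walk. */
		idx = srcu_read_lock(&kvm->srcu);
		spin_lock(&kvm->mmu_lock);

		slots = kvm_memslots(kvm);
		kvm_for_each_memslot(memslot, slots)
			stage2_flush_memslot(kvm, memslot);

		spin_unlock(&kvm->mmu_lock);
		srcu_read_unlock(&kvm->srcu, idx);
	}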

