Mark pages/folios accessed+dirty prior to dropping mmu_lock, as marking a
page/folio dirty after it has been written back can make some filesystems
unhappy (backing KVM guests with such filesystem files is uncommon, and
the race is minuscule, hence the lack of complaints).  See the link below
for details.

This will also allow converting arm64 to kvm_release_faultin_page(), which
requires that mmu_lock be held (for the aforementioned reason).

Link: https://lore.kernel.org/all/cover.1683044162.git.lstoa...@gmail.com
Signed-off-by: Sean Christopherson <sea...@google.com>
---
 arch/arm64/kvm/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 22ee37360c4e..ce13c3d884d5 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1685,15 +1685,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
        }
 
 out_unlock:
+       if (writable && !ret)
+               kvm_set_pfn_dirty(pfn);
+       else
+               kvm_release_pfn_clean(pfn);
+
        read_unlock(&kvm->mmu_lock);
 
        /* Mark the page dirty only if the fault is handled successfully */
-       if (writable && !ret) {
-               kvm_set_pfn_dirty(pfn);
+       if (writable && !ret)
                mark_page_dirty_in_slot(kvm, memslot, gfn);
-       }
 
-       kvm_release_pfn_clean(pfn);
        return ret != -EAGAIN ? ret : 0;
 }
 
-- 
2.46.0.rc1.232.g9752f9e123-goog
