Because do_swap_page() may drop mmap_lock, abort fault handling under
the VMA lock and retry with mmap_lock held. This can be handled more
gracefully in the future.
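The retry protocol the patch relies on can be modeled in userspace. This is a minimal sketch, not kernel code: the `sketch_*` helpers and the flag/return values are hypothetical stand-ins for `do_swap_page()`, `FAULT_FLAG_VMA_LOCK`, and `VM_FAULT_RETRY`, showing only the bail-out-and-retry shape.

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's fault flags and return codes. */
#define FAULT_FLAG_VMA_LOCK 0x1u
#define VM_FAULT_RETRY      0x2u
#define VM_FAULT_DONE       0x0u

/*
 * Sketch of the do_swap_page() bail-out: swap-in may need to drop
 * mmap_lock, which is unsafe while holding only a per-VMA lock, so
 * tell the caller to retry the fault under mmap_lock instead.
 */
static unsigned int sketch_do_swap_page(unsigned int flags)
{
	if (flags & FAULT_FLAG_VMA_LOCK)
		return VM_FAULT_RETRY;	/* retry with mmap_lock held */
	return VM_FAULT_DONE;		/* normal swap-in path runs here */
}

/* Caller side: try the VMA-locked fast path, fall back on RETRY. */
static unsigned int sketch_handle_fault(void)
{
	unsigned int ret = sketch_do_swap_page(FAULT_FLAG_VMA_LOCK);

	if (ret == VM_FAULT_RETRY)
		ret = sketch_do_swap_page(0);	/* retried under mmap_lock */
	return ret;
}
```

The fast path always gives up on swap faults here; a later refinement could complete the swap-in without mmap_lock, which is what "handled more gracefully" alludes to.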

Signed-off-by: Suren Baghdasaryan <sur...@google.com>
Reviewed-by: Laurent Dufour <laurent.duf...@fr.ibm.com>
---
 mm/memory.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 593548f24007..33ecc850d3cb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3690,6 +3690,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
        if (!pte_unmap_same(vmf))
                goto out;
 
+       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+               ret = VM_FAULT_RETRY;
+               goto out;
+       }
+
        entry = pte_to_swp_entry(vmf->orig_pte);
        if (unlikely(non_swap_entry(entry))) {
                if (is_migration_entry(entry)) {
-- 
2.39.1
