handle_userfault() can drop mmap_lock, which the per-VMA lock fault path
cannot tolerate. For now, refuse to handle faults on userfaultfd-armed
VMAs under the VMA lock and let the fault be retried with mmap_lock
held. This can be handled more gracefully in the future.

Suggested-by: Peter Xu <pet...@redhat.com>
Signed-off-by: Suren Baghdasaryan <sur...@google.com>
---
 mm/memory.c | 9 +++++++++
 1 file changed, 9 insertions(+)
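For reviewers unfamiliar with the helpers involved, here is a minimal
sketch (not part of the patch) of both sides of the check. It assumes
the userfaultfd_armed() definition in include/linux/userfaultfd_k.h and
the rough shape of the x86 fault handler from this series; treat it as
illustrative only:

	/*
	 * Sketch only: userfaultfd_armed() tests whether any userfaultfd
	 * mode flag is set on the VMA, per include/linux/userfaultfd_k.h.
	 */
	static inline bool userfaultfd_armed(struct vm_area_struct *vma)
	{
		return vma->vm_flags &
		       (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR);
	}

	/*
	 * Caller side, roughly as in the x86 handler from this series: a
	 * NULL return from lock_vma_under_rcu(), which this patch now
	 * forces for userfaultfd-armed VMAs, sends the fault down the
	 * existing mmap_lock slow path.
	 */
	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		goto lock_mmap;		/* retry with mmap_lock held */
	fault = handle_mm_fault(vma, address,
				flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_end_read(vma);
	...
lock_mmap:
	mmap_read_lock(mm);		/* classic fault handling path */
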

diff --git a/mm/memory.c b/mm/memory.c
index 33ecc850d3cb..55582c6fa2fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5256,6 +5256,15 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
        if (!vma_start_read(vma))
                goto inval;
 
+       /*
+        * Due to the possibility of userfault handler dropping mmap_lock, avoid
+        * it for now and fall back to page fault handling under mmap_lock.
+        */
+       if (userfaultfd_armed(vma)) {
+               vma_end_read(vma);
+               goto inval;
+       }
+
        /* Check since vm_start/vm_end might change before we lock the VMA */
        if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
                vma_end_read(vma);
-- 
2.39.1
