On 9/21/20 2:20 PM, Peter Xu wrote:
...
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7ff29cc3d55c..c40aac0ad87e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1074,6 +1074,23 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
        src_page = pmd_page(pmd);
        VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+
+       /*
+        * If this page is a potentially pinned page, split and retry the fault
+        * with smaller page size.  Normally this should not happen because the
+        * userspace should use MADV_DONTFORK upon pinned regions.  This is a
+        * best effort that the pinned pages won't be replaced by another
+        * random page during the coming copy-on-write.
+        */
+       if (unlikely(READ_ONCE(src_mm->has_pinned) &&
+                    page_maybe_dma_pinned(src_page))) {

This condition would make a good static inline function. It's used in 3 places,
and the condition is quite special and worth documenting; having a separate
function helps with that, because the function name adds to the story. I'd
suggest approximately:

    page_likely_dma_pinned()

for the name.
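
For illustration only, here is a minimal sketch of what I have in mind; it just
wraps the exact condition quoted above, and the name and argument list are only
my suggestion, nothing from the patch itself:

/*
 * Untested sketch: wrap the "forking with a maybe-pinned page" test so
 * that the function name documents the intent at each call site.
 */
static inline bool page_likely_dma_pinned(struct mm_struct *src_mm,
					  struct page *page)
{
	return unlikely(READ_ONCE(src_mm->has_pinned) &&
			page_maybe_dma_pinned(page));
}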

+               pte_free(dst_mm, pgtable);
+               spin_unlock(src_ptl);
+               spin_unlock(dst_ptl);
+               __split_huge_pmd(vma, src_pmd, addr, false, NULL);
+               return -EAGAIN;
+       }


Why wait until we are so deep into this routine to detect this and unwind?
It seems like you could do the check near the beginning of this routine and
handle it there, with less unwinding. In fact, the check could be made right
after taking only the src_ptl, right?
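
Just to illustrate the question, a very rough and untested sketch (it glosses
over the sleeping pgtable allocation, the lock ordering, and the swap/devmap
cases that the real copy_huge_pmd() handles), reusing the hypothetical helper
suggested above:

	src_ptl = pmd_lockptr(src_mm, src_pmd);
	spin_lock(src_ptl);
	pmd = *src_pmd;
	if (pmd_trans_huge(pmd) &&
	    page_likely_dma_pinned(src_mm, pmd_page(pmd))) {
		/* Nothing allocated or locked yet on the dst side to unwind. */
		spin_unlock(src_ptl);
		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
		return -EAGAIN;
	}
	spin_unlock(src_ptl);
	/* ...then pte_alloc_one(), pmd_lock(dst_mm, dst_pmd), and the copy as before... */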


thanks,
--
John Hubbard
NVIDIA