On Wed, Oct 22, 2025 at 12:37:04PM -0600, Nico Pache wrote:
>The khugepaged daemon and madvise_collapse have two different
>implementations that do almost the same thing.
>
>Create collapse_single_pmd to increase code reuse and create an entry
>point to these two users.
>
>Refactor madvise_collapse and collapse_scan_mm_slot to use the new
>collapse_single_pmd function. This introduces a minor behavioral change
>that most likely fixes an undiscovered bug. The current implementation
>of khugepaged tests collapse_test_exit_or_disable before calling
>collapse_pte_mapped_thp, but we weren't doing it in the madvise_collapse
>case. By unifying these two callers, madvise_collapse now also performs
>this check. We also modify the return value to SCAN_ANY_PROCESS, which
>properly indicates that this process is no longer valid to operate on.
>
>We also guard the khugepaged_pages_collapsed variable to ensure it is
>only incremented for khugepaged.
>
>Reviewed-by: Baolin Wang <[email protected]>
>Acked-by: David Hildenbrand <[email protected]>
>Signed-off-by: Nico Pache <[email protected]>

Reviewed-by: Wei Yang <[email protected]>

One nit below.

>---
> mm/khugepaged.c | 97 ++++++++++++++++++++++++++-----------------------
> 1 file changed, 52 insertions(+), 45 deletions(-)
>
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index 6c4abc7f45cf..36e31d99e507 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c
>@@ -2370,6 +2370,53 @@ static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> 	return result;
> }
> 
>+/*
>+ * Try to collapse a single PMD starting at a PMD aligned addr, and return
>+ * the results.
>+ */
>+static int collapse_single_pmd(unsigned long addr,
>+				struct vm_area_struct *vma, bool *mmap_locked,
>+				struct collapse_control *cc)
>+{
>+	struct mm_struct *mm = vma->vm_mm;
>+	int result;
>+	struct file *file;
>+	pgoff_t pgoff;
>+
>+	if (vma_is_anonymous(vma)) {
>+		result = collapse_scan_pmd(mm, vma, addr, mmap_locked, cc);
>+		goto end;
>+	}
>+
>+	file = get_file(vma->vm_file);
>+	pgoff = linear_page_index(vma, addr);
>+
>+	mmap_read_unlock(mm);
>+	*mmap_locked = false;
>+	result = collapse_scan_file(mm, addr, file, pgoff, cc);
>+	fput(file);
>+	if (result != SCAN_PTE_MAPPED_HUGEPAGE)
>+		goto end;
>+
>+	mmap_read_lock(mm);
>+	*mmap_locked = true;
>+	if (collapse_test_exit_or_disable(mm)) {
>+		mmap_read_unlock(mm);
>+		*mmap_locked = false;
>+		return SCAN_ANY_PROCESS;
>+	}
>+	result = collapse_pte_mapped_thp(mm, addr, !cc->is_khugepaged);
>+	if (result == SCAN_PMD_MAPPED)
>+		result = SCAN_SUCCEED;
>+	mmap_read_unlock(mm);
>+	*mmap_locked = false;

For all of these cases we end up setting *mmap_locked to false. Not sure
it is worth adjusting it at each exit separately (one possible shape is
sketched below the sig).

-- 
Wei Yang
Help you, Help me
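
P.S. Rough, untested sketch of one way to consolidate the *mmap_locked
writes in the file-backed tail. Names are taken from the patch above; it
assumes the trimmed "end:" part of collapse_single_pmd is also fine with
seeing SCAN_ANY_PROCESS fall through instead of returning early:

	mmap_read_lock(mm);
	*mmap_locked = true;
	if (collapse_test_exit_or_disable(mm))
		result = SCAN_ANY_PROCESS;
	else
		result = collapse_pte_mapped_thp(mm, addr, !cc->is_khugepaged);
	if (result == SCAN_PMD_MAPPED)
		result = SCAN_SUCCEED;
	/* every exit of the file-backed path drops the lock */
	mmap_read_unlock(mm);
	*mmap_locked = false;

Only an illustration, not a request to restructure the function.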
