On 26.02.24 21:55, Zi Yan wrote:
From: Zi Yan <[email protected]>

Now that multi-size THP support has been added, not all THPs are PMD-mapped;
thus, during a huge page split, there is no need to always split the PMD
mapping in unmap_folio(). Make it conditional.

Signed-off-by: Zi Yan <[email protected]>
---
  mm/huge_memory.c | 7 +++++--
  1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 28341a5067fb..b20e535e874c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2727,11 +2727,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
  static void unmap_folio(struct folio *folio)
  {
-       enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-               TTU_SYNC | TTU_BATCH_FLUSH;
+       enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC |
+               TTU_BATCH_FLUSH;
 
         VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
+       if (folio_test_pmd_mappable(folio))
+               ttu_flags |= TTU_SPLIT_HUGE_PMD;
+
        /*
         * Anon pages need migration entries to preserve them, but file
         * pages can simply be left unmapped, then faulted back on demand.

Reviewed-by: David Hildenbrand <[email protected]>

--
Cheers,

David / dhildenb


Reply via email to