On Tue, Jun 17, 2025 at 05:43:37PM +0200, David Hildenbrand wrote:
> Just like we do for vmf_insert_page_mkwrite() -> ... ->
> insert_page_into_pte_locked(), support the huge zero folio.
> 
> Signed-off-by: David Hildenbrand <da...@redhat.com>

insert_page_into_pte_locked() creates a special pte when it finds the
zero folio, while insert_pmd() does not. I know that we didn't want to
create special mappings for folios with a normal refcount, but this
still seems inconsistent. I'm pretty sure there's a reason, but could
you elaborate on that?
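Roughly the contrast I have in mind (the pte side is paraphrased from
memory rather than copied verbatim from the tree, so names like
is_zero_folio() / folio_add_file_rmap_pte() / inc_mm_counter() are how
I remember that path; the pmd side is just the hunk below):

	/*
	 * PTE side, insert_page_into_pte_locked(): the zero folio is
	 * mapped through a special pte and skips the refcount/rmap/
	 * counter updates.
	 */
	pteval = mk_pte(page, prot);
	if (is_zero_folio(folio)) {
		pteval = pte_mkspecial(pteval);
	} else {
		folio_get(folio);
		folio_add_file_rmap_pte(folio, page, vma);
		inc_mm_counter(mm, mm_counter_file(folio));
	}

	/*
	 * PMD side with this patch, insert_pmd(): the huge zero folio
	 * also skips the refcount/rmap/counter updates, but the entry
	 * still comes from folio_mk_pmd() and is not marked special.
	 */
	entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
	if (!is_huge_zero_folio(fop.folio)) {
		folio_get(fop.folio);
		folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
	}

So after this patch the huge zero folio ends up behind a non-special
pmd, whereas the small zero folio ends up behind a special pte.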
> ---
>  mm/huge_memory.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1ea23900b5adb..92400f3baa9ff 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1418,9 +1418,11 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	if (fop.is_folio) {
>  		entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
>  
> -		folio_get(fop.folio);
> -		folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> -		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> +		if (!is_huge_zero_folio(fop.folio)) {
> +			folio_get(fop.folio);
> +			folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> +			add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> +		}
>  	} else {
>  		entry = pmd_mkhuge(pfn_pmd(fop.pfn, prot));
>  		entry = pmd_mkspecial(entry);
> -- 
> 2.49.0
> 
> 

-- 
Oscar Salvador
SUSE Labs