On 12.06.25 18:10, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().

Fortunately, there are not that many pmd_special() checks that can be
misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
would get this wrong right now are rather harmless: e.g., none so far
bases the decision whether to grab a folio reference on it.

Well, and GUP-fast will fall back to GUP-slow. All in all, no big
implications so far, it seems.

Getting this right will become more important as we use
folio_normal_page_pmd() in more places.

Fix it by teaching insert_pfn_pmd() to properly handle folios and
pfns -- moving refcount/mapcount/etc handling in there, renaming it to
insert_pmd(), and distinguishing between both cases using a new simple
"struct folio_or_pfn" structure.

Use folio_mk_pmd() to create a pmd for a folio cleanly.
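
For reference, folio_mk_pmd() boils down to roughly the following helper
(sketch for illustration only, not part of this diff):

        static inline pmd_t folio_mk_pmd(struct folio *folio, pgprot_t pgprot)
        {
                return pmd_mkhuge(pfn_pmd(folio_pfn(folio), pgprot));
        }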

Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
Signed-off-by: David Hildenbrand <da...@redhat.com>

Looks good to me, checked that the logic remains the same. Some micro
nits/thoughts below. So:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>

Thanks!


---
  mm/huge_memory.c | 58 ++++++++++++++++++++++++++++++++----------------
  1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 49b98082c5401..7e3e9028873e5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
        return __do_huge_pmd_anonymous_page(vmf);
  }

-static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-               pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
-               pgtable_t pgtable)
+struct folio_or_pfn {
+       union {
+               struct folio *folio;
+               pfn_t pfn;
+       };
+       bool is_folio;
+};

Interesting... I guess a memdesc world will make this easy... maybe? :)

But this is a neat way of passing this.

Another mega nit is mayyybe we could have a macro for making these like:


#define DECLARE_FOP_PFN(name_, pfn_)            \
        struct folio_or_pfn name_ = {           \
                .pfn = pfn_,                    \
                .is_folio = false,              \
        }

#define DECLARE_FOP_FOLIO(name_, folio_)        \
        struct folio_or_pfn name_ = {           \
                .folio = folio_,                \
                .is_folio = true,               \
        }

But yeah maybe overkill for this small usage in this file.
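
A call site would then read something like this (hypothetical sketch,
assuming the fop-based insert_pmd() from the diff; the surrounding
locals are whatever the caller already has):

        /* hypothetical caller using the suggested macro */
        DECLARE_FOP_PFN(fop, pfn);

        error = insert_pmd(vma, addr, vmf->pmd, fop, pgprot, write, pgtable);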

Yeah. I suspect at some point we will convert this into a folio+idx ("page")
or "pfn" approach, at which point we could also use this for ordinary
insert_pfn().

(hopefully, then we can also do pfn_t -> unsigned long)

So let's defer adding that for now.
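
The call sites in this patch just open-code the initializers, along these
lines (those hunks are not quoted here, so take this as a sketch):

        /* pfn-based caller, e.g. vmf_insert_pfn_pmd() */
        struct folio_or_pfn fop = {
                .pfn = pfn,
        };

        /* folio-based caller, e.g. vmf_insert_folio_pmd() */
        struct folio_or_pfn fop = {
                .folio = folio,
                .is_folio = true,
        };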


+
+static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
+               pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
+               bool write, pgtable_t pgtable)
  {
        struct mm_struct *mm = vma->vm_mm;
        pmd_t entry;
@@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
        lockdep_assert_held(pmd_lockptr(mm, pmd));

        if (!pmd_none(*pmd)) {
+               const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
+                                         pfn_t_to_pfn(fop.pfn);
+
                if (write) {
-                       if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
+                       if (pmd_pfn(*pmd) != pfn) {
                                WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
                                return -EEXIST;
                        }
@@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
                return -EEXIST;
        }

-       entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-       if (pfn_t_devmap(pfn))
-               entry = pmd_mkdevmap(entry);
-       else
-               entry = pmd_mkspecial(entry);
+       if (fop.is_folio) {
+               entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
+
+               folio_get(fop.folio);
+               folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
+               add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+       } else {
+               entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));

Mega micro annoying nit - in the above branch you have a blank line after
the entry = assignment, here you don't. Maybe add one here also?

Well, the intent was to combine all the "entry" setup in one block. But I
don't particularly care, so I'll just do it :)

--
Cheers,

David / dhildenb

