Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-13 Thread David Hildenbrand

On 12.06.25 18:10, Lorenzo Stoakes wrote:

On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:

Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().

Fortunately, there are not that many pmd_special() checks that can be
misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
would get this wrong right now are rather harmless: e.g., none so far
base the decision of whether to grab a folio reference on it.

Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
to be no big implications so far.

Getting this right will become more important as we use
folio_normal_page_pmd() in more places.

Fix it by teaching insert_pfn_pmd() to properly handle folios and
pfns -- moving refcount/mapcount/etc handling in there, renaming it to
insert_pmd(), and distinguishing between both cases using a new simple
"struct folio_or_pfn" structure.

Use folio_mk_pmd() to create a pmd for a folio cleanly.

Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
Signed-off-by: David Hildenbrand 


Looks good to me, checked that the logic remains the same. Some micro
nits/thoughts below. So:

Reviewed-by: Lorenzo Stoakes 


Thanks!




---
  mm/huge_memory.c | 58 
  1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 49b98082c5401..7e3e9028873e5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf);
  }

-static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-   pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
-   pgtable_t pgtable)
+struct folio_or_pfn {
+   union {
+   struct folio *folio;
+   pfn_t pfn;
+   };
+   bool is_folio;
+};


Interesting... I guess a memdesc world will make this easy... maybe? :)

But this is a neat way of passing this.

Another mega nit is mayyybe we could have a macro for making these like:


#define DECLARE_FOP_PFN(name_, pfn_)		\
	struct folio_or_pfn name_ = {		\
		.pfn = pfn_,			\
		.is_folio = false,		\
	}

#define DECLARE_FOP_FOLIO(name_, folio_)	\
	struct folio_or_pfn name_ = {		\
		.folio = folio_,		\
		.is_folio = true,		\
	}

But yeah maybe overkill for this small usage in this file.


Yeah. I suspect at some point we will convert this into a folio+idx 
("page") or "pfn" approach, at which point we could also use this for 
ordinary insert_pfn().


(hopefully, then we can also do pfn_t -> unsigned long)

So let's defer adding that for now.




+
+static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
+   pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
+   bool write, pgtable_t pgtable)
  {
struct mm_struct *mm = vma->vm_mm;
pmd_t entry;
@@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
lockdep_assert_held(pmd_lockptr(mm, pmd));

if (!pmd_none(*pmd)) {
+   const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
+ pfn_t_to_pfn(fop.pfn);
+
if (write) {
-   if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
+   if (pmd_pfn(*pmd) != pfn) {
WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
return -EEXIST;
}
@@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
return -EEXIST;
}

-   entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-   if (pfn_t_devmap(pfn))
-   entry = pmd_mkdevmap(entry);
-   else
-   entry = pmd_mkspecial(entry);
+   if (fop.is_folio) {
+   entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
+
+   folio_get(fop.folio);
+   folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
+   add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+   } else {
+   entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));


Mega micro annoying nit - in above branch you have a newline after entry =, here
you don't. Maybe should add here also?


Well, it's combining all the "entry" setup in one block. But I don't 
particularly care, so I'll just do it :)


--
Cheers,

David / dhildenb




Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-12 Thread Jason Gunthorpe
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
>
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
> would get this wrong right now are rather harmless: e.g., none so far
> base the decision of whether to grab a folio reference on it.
>
> Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
> to be no big implications so far.
>
> Getting this right will become more important as we use
> folio_normal_page_pmd() in more places.
> 
> Fix it by teaching insert_pfn_pmd() to properly handle folios and
> pfns -- moving refcount/mapcount/etc handling in there, renaming it to
> insert_pmd(), and distinguishing between both cases using a new simple
> "struct folio_or_pfn" structure.
> 
> Use folio_mk_pmd() to create a pmd for a folio cleanly.
> 
> Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
> Signed-off-by: David Hildenbrand 
> ---
>  mm/huge_memory.c | 58 
>  1 file changed, 39 insertions(+), 19 deletions(-)

Reviewed-by: Jason Gunthorpe 

Jason



Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-12 Thread Lorenzo Stoakes
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
>
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
> would get this wrong right now are rather harmless: e.g., none so far
> base the decision of whether to grab a folio reference on it.
>
> Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
> to be no big implications so far.
>
> Getting this right will become more important as we use
> folio_normal_page_pmd() in more places.
>
> Fix it by teaching insert_pfn_pmd() to properly handle folios and
> pfns -- moving refcount/mapcount/etc handling in there, renaming it to
> insert_pmd(), and distinguishing between both cases using a new simple
> "struct folio_or_pfn" structure.
>
> Use folio_mk_pmd() to create a pmd for a folio cleanly.
>
> Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
> Signed-off-by: David Hildenbrand 

Looks good to me, checked that the logic remains the same. Some micro
nits/thoughts below. So:

Reviewed-by: Lorenzo Stoakes 

> ---
>  mm/huge_memory.c | 58 
>  1 file changed, 39 insertions(+), 19 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 49b98082c5401..7e3e9028873e5 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>   return __do_huge_pmd_anonymous_page(vmf);
>  }
>
> -static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
> - pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
> - pgtable_t pgtable)
> +struct folio_or_pfn {
> + union {
> + struct folio *folio;
> + pfn_t pfn;
> + };
> + bool is_folio;
> +};

Interesting... I guess a memdesc world will make this easy... maybe? :)

But this is a neat way of passing this.

Another mega nit is mayyybe we could have a macro for making these like:


#define DECLARE_FOP_PFN(name_, pfn_)		\
	struct folio_or_pfn name_ = {		\
		.pfn = pfn_,			\
		.is_folio = false,		\
	}

#define DECLARE_FOP_FOLIO(name_, folio_)	\
	struct folio_or_pfn name_ = {		\
		.folio = folio_,		\
		.is_folio = true,		\
	}

But yeah maybe overkill for this small usage in this file.

> +
> +static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
> + pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
> + bool write, pgtable_t pgtable)
>  {
>   struct mm_struct *mm = vma->vm_mm;
>   pmd_t entry;
> @@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, 
> unsigned long addr,
>   lockdep_assert_held(pmd_lockptr(mm, pmd));
>
>   if (!pmd_none(*pmd)) {
> + const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
> +   pfn_t_to_pfn(fop.pfn);
> +
>   if (write) {
> - if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
> + if (pmd_pfn(*pmd) != pfn) {
>   WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
>   return -EEXIST;
>   }
> @@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, 
> unsigned long addr,
>   return -EEXIST;
>   }
>
> - entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
> - if (pfn_t_devmap(pfn))
> - entry = pmd_mkdevmap(entry);
> - else
> - entry = pmd_mkspecial(entry);
> + if (fop.is_folio) {
> + entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
> +
> + folio_get(fop.folio);
> + folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> + add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> + } else {
> + entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));

Mega micro annoying nit - in above branch you have a newline after entry =, here
you don't. Maybe should add here also?

> + if (pfn_t_devmap(fop.pfn))
> + entry = pmd_mkdevmap(entry);
> + else
> + entry = pmd_mkspecial(entry);
> + }
>   if (write) {
>   entry = pmd_mkyoung(pmd_mkdirty(entry));
>   entry = maybe_pmd_mkwrite(entry, vma);
> @@ -1431,6 +1450,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>   unsigned long addr = vmf->address & PMD_MASK;
>   struct vm_area_struct *vma = vmf->vma;
>   pgprot_t pgprot = vma->vm_page_prot;
> + struct folio_or_pfn fop = {
> + .pfn = pfn,
> + };
>

Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-12 Thread David Hildenbrand

On 12.06.25 04:17, Alistair Popple wrote:

On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:

Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().

Fortunately, there are not that many pmd_special() checks that can be
misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
would get this wrong right now are rather harmless: e.g., none so far
base the decision of whether to grab a folio reference on it.

Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
to be no big implications so far.

Getting this right will become more important as we use
folio_normal_page_pmd() in more places.

Fix it by teaching insert_pfn_pmd() to properly handle folios and
pfns -- moving refcount/mapcount/etc handling in there, renaming it to
insert_pmd(), and distinguishing between both cases using a new simple
"struct folio_or_pfn" structure.

Use folio_mk_pmd() to create a pmd for a folio cleanly.

Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
Signed-off-by: David Hildenbrand 
---
  mm/huge_memory.c | 58 
  1 file changed, 39 insertions(+), 19 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 49b98082c5401..7e3e9028873e5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf);
  }
  
-static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-   pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
-   pgtable_t pgtable)
+struct folio_or_pfn {
+   union {
+   struct folio *folio;
+   pfn_t pfn;
+   };
+   bool is_folio;
+};


I know it's simple, but I'm still not a fan particularly as these types of
patterns tend to proliferate once introduced. See below for a suggestion.


It's much better than abusing pfn_t for folios -- and I don't 
particularly see a problem with this pattern here as long as it stays in 
this file.





+static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
+   pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
+   bool write, pgtable_t pgtable)
  {
struct mm_struct *mm = vma->vm_mm;
pmd_t entry;
@@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
lockdep_assert_held(pmd_lockptr(mm, pmd));
  
  	if (!pmd_none(*pmd)) {

+   const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
+ pfn_t_to_pfn(fop.pfn);
+
if (write) {
-   if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
+   if (pmd_pfn(*pmd) != pfn) {
WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
return -EEXIST;
}
@@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
return -EEXIST;
}
  
-	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
-   if (pfn_t_devmap(pfn))
-   entry = pmd_mkdevmap(entry);
-   else
-   entry = pmd_mkspecial(entry);
+   if (fop.is_folio) {
+   entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
+
+   folio_get(fop.folio);
+   folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
+   add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+   } else {
+   entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));
+   if (pfn_t_devmap(fop.pfn))
+   entry = pmd_mkdevmap(entry);
+   else
+   entry = pmd_mkspecial(entry);
+   }


Could we change insert_pfn_pmd() to insert_pmd_entry() and have callers call
something like pfn_to_pmd_entry() or folio_to_pmd_entry() to create the pmd_t
entry as appropriate, which is then passed to insert_pmd_entry() to do the bits
common to both?


Yeah, I had that idea as well but discarded it, because the 
refcounting+mapcounting handling is better placed where we are actually 
inserting the pmd (not possibly only upgrading permissions of an 
existing mapping). Avoid 4-line comments as the one we are removing in 
patch #3 ...


--
Cheers,

David / dhildenb




Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-11 Thread Dan Williams
David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
>
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
> would get this wrong right now are rather harmless: e.g., none so far
> base the decision of whether to grab a folio reference on it.
>
> Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
> to be no big implications so far.
>
> Getting this right will become more important as we use
> folio_normal_page_pmd() in more places.
> 
> Fix it by teaching insert_pfn_pmd() to properly handle folios and
> pfns -- moving refcount/mapcount/etc handling in there, renaming it to
> insert_pmd(), and distinguishing between both cases using a new simple
> "struct folio_or_pfn" structure.
> 
> Use folio_mk_pmd() to create a pmd for a folio cleanly.

Looks good, I like copying the sockptr_t approach for this, and agree that this
seems to not cause any problems in practice today, but definitely will be a
trip hazard going forward.

Reviewed-by: Dan Williams 



Re: [PATCH v2 2/3] mm/huge_memory: don't mark refcounted folios special in vmf_insert_folio_pmd()

2025-06-11 Thread Alistair Popple
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
> Marking PMDs that map "normal" refcounted folios as special is
> against our rules documented for vm_normal_page().
>
> Fortunately, there are not that many pmd_special() checks that can be
> misled, and most vm_normal_page_pmd()/vm_normal_folio_pmd() users that
> would get this wrong right now are rather harmless: e.g., none so far
> base the decision of whether to grab a folio reference on it.
>
> Well, and GUP-fast will fall back to GUP-slow. All in all, there seem
> to be no big implications so far.
>
> Getting this right will become more important as we use
> folio_normal_page_pmd() in more places.
> 
> Fix it by teaching insert_pfn_pmd() to properly handle folios and
> pfns -- moving refcount/mapcount/etc handling in there, renaming it to
> insert_pmd(), and distinguishing between both cases using a new simple
> "struct folio_or_pfn" structure.
> 
> Use folio_mk_pmd() to create a pmd for a folio cleanly.
> 
> Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
> Signed-off-by: David Hildenbrand 
> ---
>  mm/huge_memory.c | 58 
>  1 file changed, 39 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 49b98082c5401..7e3e9028873e5 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1372,9 +1372,17 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>   return __do_huge_pmd_anonymous_page(vmf);
>  }
>  
> -static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
> - pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
> - pgtable_t pgtable)
> +struct folio_or_pfn {
> + union {
> + struct folio *folio;
> + pfn_t pfn;
> + };
> + bool is_folio;
> +};

I know it's simple, but I'm still not a fan particularly as these types of
patterns tend to proliferate once introduced. See below for a suggestion.

> +static int insert_pmd(struct vm_area_struct *vma, unsigned long addr,
> + pmd_t *pmd, struct folio_or_pfn fop, pgprot_t prot,
> + bool write, pgtable_t pgtable)
>  {
>   struct mm_struct *mm = vma->vm_mm;
>   pmd_t entry;
> @@ -1382,8 +1390,11 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>   lockdep_assert_held(pmd_lockptr(mm, pmd));
>  
>   if (!pmd_none(*pmd)) {
> + const unsigned long pfn = fop.is_folio ? folio_pfn(fop.folio) :
> +   pfn_t_to_pfn(fop.pfn);
> +
>   if (write) {
> - if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
> + if (pmd_pfn(*pmd) != pfn) {
>   WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
>   return -EEXIST;
>   }
> @@ -1396,11 +1407,19 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>   return -EEXIST;
>   }
>  
> - entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
> - if (pfn_t_devmap(pfn))
> - entry = pmd_mkdevmap(entry);
> - else
> - entry = pmd_mkspecial(entry);
> + if (fop.is_folio) {
> + entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
> +
> + folio_get(fop.folio);
> + folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> + add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> + } else {
> + entry = pmd_mkhuge(pfn_t_pmd(fop.pfn, prot));
> + if (pfn_t_devmap(fop.pfn))
> + entry = pmd_mkdevmap(entry);
> + else
> + entry = pmd_mkspecial(entry);
> + }

Could we change insert_pfn_pmd() to insert_pmd_entry() and have callers call
something like pfn_to_pmd_entry() or folio_to_pmd_entry() to create the pmd_t
entry as appropriate, which is then passed to insert_pmd_entry() to do the bits
common to both?

>   if (write) {
>   entry = pmd_mkyoung(pmd_mkdirty(entry));
>   entry = maybe_pmd_mkwrite(entry, vma);
> @@ -1431,6 +1450,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>   unsigned long addr = vmf->address & PMD_MASK;
>   struct vm_area_struct *vma = vmf->vma;
>   pgprot_t pgprot = vma->vm_page_prot;
> + struct folio_or_pfn fop = {
> + .pfn = pfn,
> + };
>   pgtable_t pgtable = NULL;
>   spinlock_t *ptl;
>   int error;
> @@ -1458,8 +1480,8 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>   pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);
>  
>   ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> - error = insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write,
> - pgtable);
> + error = insert_pmd(vma, addr, vmf->pmd, fop, pgprot, write,
> +pgtable);
>   s