Re: [PATCH 1/2] mm/memory_hotplug: Export shrink span functions for zone and node

2022-02-02 Thread Jonghyeon Kim
On Fri, Jan 28, 2022 at 09:10:21AM +0100, David Hildenbrand wrote:
> On 28.01.22 05:19, Jonghyeon Kim wrote:
> > On Thu, Jan 27, 2022 at 10:54:23AM +0100, David Hildenbrand wrote:
> >> On 27.01.22 10:41, Jonghyeon Kim wrote:
> >>> On Wed, Jan 26, 2022 at 06:04:50PM +0100, David Hildenbrand wrote:
>  On 26.01.22 18:00, Jonghyeon Kim wrote:
> > Export the shrink_zone_span() and update_pgdat_span() functions in a header
> > file. We need to update the real number of spanned pages for NUMA nodes and
> > zones when we add a memory device node such as device-dax memory.
> >
> 
>  Can you elaborate a bit more on what you intend to fix?
> 
>  Memory onlining/offlining is responsible for updating the node/zone span,
>  and that's triggered when the dax/kmem memory gets onlined/offlined.
> 
> >>> Sure, sorry for the lack of explanation of the intended fix.
> >>>
> >>> Before onlining nvdimm memory using DAX (devdax or fsdax), this memory
> >>> belongs to the CPU NUMA nodes, which extends the spanned pages of the
> >>> node/zone as ZONE_DEVICE. So there is no problem, because the node/zone
> >>> accounts for these additional memory devices that are not visible to the
> >>> system.
> >>> But if we online the dax memory, zone[ZONE_DEVICE] of the CPU NUMA node
> >>> is hot-plugged to a new (but CPU-less) NUMA node. I think there is no
> >>> need to keep the zone[ZONE_DEVICE] pages on the original node.
> >>>
> >>> Additionally, spanned pages are also used to calculate the end pfn of a
> >>> node. Thus, we need to maintain accurate page stats for each node/zone.
> >>>
> >>> My machine contains two CPU sockets, each with DRAM and Intel DCPMM
> >>> (DC persistent memory modules) in App Direct mode.
> >>>
> >>> Below are my test results.
> >>>
> >>> Before memory onlining:
> >>>
> >>>   # ndctl create-namespace --mode=devdax
> >>>   # ndctl create-namespace --mode=devdax
> >>>   # cat /proc/zoneinfo | grep -E "Node|spanned" | paste - -
> >>>   Node 0, zone  DMA   spanned  4095
> >>>   Node 0, zoneDMA32   spanned  1044480
> >>>   Node 0, zone   Normal   spanned  7864320
> >>>   Node 0, zone  Movable   spanned  0
> >>>   Node 0, zone   Device   spanned  66060288
> >>>   Node 1, zone  DMA   spanned  0
> >>>   Node 1, zoneDMA32   spanned  0
> >>>   Node 1, zone   Normal   spanned  8388608
> >>>   Node 1, zone  Movable   spanned  0
> >>>   Node 1, zone   Device   spanned  66060288
> >>>
> >>> After memory onlining:
> >>>
> >>>   # daxctl reconfigure-device --mode=system-ram --no-online dax0.0
> >>>   # daxctl reconfigure-device --mode=system-ram --no-online dax1.0
> >>>
> >>>   # cat /proc/zoneinfo | grep -E "Node|spanned" | paste - -
> >>>   Node 0, zone  DMA   spanned  4095
> >>>   Node 0, zoneDMA32   spanned  1044480
> >>>   Node 0, zone   Normal   spanned  7864320
> >>>   Node 0, zone  Movable   spanned  0
> >>>   Node 0, zone   Device   spanned  66060288
> >>>   Node 1, zone  DMA   spanned  0
> >>>   Node 1, zoneDMA32   spanned  0
> >>>   Node 1, zone   Normal   spanned  8388608
> >>>   Node 1, zone  Movable   spanned  0
> >>>   Node 1, zone   Device   spanned  66060288
> >>>   Node 2, zone  DMA   spanned  0
> >>>   Node 2, zoneDMA32   spanned  0
> >>>   Node 2, zone   Normal   spanned  65011712
> >>>   Node 2, zone  Movable   spanned  0
> >>>   Node 2, zone   Device   spanned  0
> >>>   Node 3, zone  DMA   spanned  0
> >>>   Node 3, zoneDMA32   spanned  0
> >>>   Node 3, zone   Normal   spanned  65011712
> >>>   Node 3, zone  Movable   spanned  0
> >>>   Node 3, zone   Device   spanned  0
> >>>
> >>> As we can see, Node 0 and Node 1 still have ZONE_DEVICE pages after
> >>> memory onlining. This causes a problem: Node 0 and Node 2 have the same
> >>> end pfn value, and Node 1 and Node 3 have the same problem.
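
The collision described above follows directly from how a node's end pfn is derived from its span; a minimal sketch mirroring pgdat_end_pfn() in include/linux/mmzone.h (the helper name here is illustrative):

#include <linux/mmzone.h>

/* Sketch: the node end pfn is the start pfn plus the spanned page count
 * (this mirrors pgdat_end_pfn()), so a stale ZONE_DEVICE span on Node 0/1
 * makes their end pfn collide with that of Node 2/3 in the output above. */
static inline unsigned long node_end_pfn_sketch(struct pglist_data *pgdat)
{
	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
}
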
> >>
> >> Thanks for the information, that makes it clearer.
> >>
> >> While this is unfortunate, the node/zone span is something fairly
> >> unreliable/unusable for user space. Nodes and zones can easily overlap.
> >>
> >> What counts are present/managed pages in the node/zone.
> >>
> >> So at least I don't count this as something that "needs fixing",
> >> it's more something that's nice to handle better if easily possible.
> >>
> >> See below.
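
The distinction between the counters can be read straight off struct zone; a minimal sketch using the real field names and zone_managed_pages() (the dump helper itself is illustrative):

#include <linux/mmzone.h>
#include <linux/printk.h>

/* Sketch: the per-zone counters referred to above. "spanned" includes
 * holes, "present" excludes them, and "managed" additionally excludes
 * memmap and other reserved pages -- so consumers should rely on
 * present/managed rather than the span. */
static void dump_zone_counters(struct zone *zone)
{
	pr_info("%s: spanned %lu present %lu managed %lu\n",
		zone->name, zone->spanned_pages, zone->present_pages,
		zone_managed_pages(zone));
}
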
> >>
> >>>
> > Signed-off-by: Jonghyeon Kim 
> > ---
> >  include/linux/memory_hotplug.h | 3 +++
> >  mm/memory_hotplug.c            | 6 ++++--
> >  2 files changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> > index be48e003a518..25c7f60c317e 100644
> > --- a/include/linux/memory_hotplug.h
> > +++ b/include/linux/memory_hotplug.h
> > @@ -3

Re: [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma

2022-02-02 Thread Matthew Wilcox
On Wed, Feb 02, 2022 at 10:33:04PM +0800, Muchun Song wrote:
> page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
> However, not all page frames (e.g. PFN_DEV) have an associated struct page.
> There is going to be duplicate code similar to this function if someone
> wants to check if a pfn (without a struct page) is mapped into a vma. So add
> support for checking if a pfn is mapped into a vma. In the next patch, DAX
> will use this new feature.

I'm coming to more or less the same solution for fixing the bug in
page_mapped_in_vma().  If you call it with a head page, it will look
for any page in the THP instead of the precise page.  I think we can do
a fairly significant simplification though, so I'm going to go off
and work on that next ...




[PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte()

2022-02-02 Thread Muchun Song
The only user (DAX) of the range parameter of follow_invalidate_pte() is
gone, so it is safe to remove the range parameter and make the function
static to simplify the code.
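
For context, a minimal sketch of how a caller uses follow_pte() after this cleanup; the helper name read_pfn_at() is illustrative, but the lock/unmap contract shown is the existing one and is unchanged by this patch:

#include <linux/mm.h>

/* Sketch: on success follow_pte() returns with the pte mapped and the
 * page-table lock held, so the caller must drop both, exactly as before
 * this series -- only the notifier-range plumbing goes away. */
static int read_pfn_at(struct mm_struct *mm, unsigned long addr,
		       unsigned long *pfn)
{
	pte_t *ptep;
	spinlock_t *ptl;

	if (follow_pte(mm, addr, &ptep, &ptl))
		return -EINVAL;

	*pfn = pte_pfn(*ptep);
	pte_unmap_unlock(ptep, ptl);
	return 0;
}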

Signed-off-by: Muchun Song 
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..7895b17f6847 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1814,9 +1814,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
- struct mmu_notifier_range *range, pte_t **ptepp,
- pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
   pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 514a81cdd1ae..e8ce066be5f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4869,9 +4869,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
- struct mmu_notifier_range *range, pte_t **ptepp,
- pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
pgd_t *pgd;
p4d_t *p4d;
@@ -4898,31 +4897,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
if (!pmdpp)
goto out;
 
-   if (range) {
-   mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-   NULL, mm, address & PMD_MASK,
-   (address & PMD_MASK) + PMD_SIZE);
-   mmu_notifier_invalidate_range_start(range);
-   }
*ptlp = pmd_lock(mm, pmd);
if (pmd_huge(*pmd)) {
*pmdpp = pmd;
return 0;
}
spin_unlock(*ptlp);
-   if (range)
-   mmu_notifier_invalidate_range_end(range);
}
 
if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
goto out;
 
-   if (range) {
-   mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-   address & PAGE_MASK,
-   (address & PAGE_MASK) + PAGE_SIZE);
-   mmu_notifier_invalidate_range_start(range);
-   }
ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
if (!pte_present(*ptep))
goto unlock;
@@ -4930,8 +4915,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
return 0;
 unlock:
pte_unmap_unlock(ptep, *ptlp);
-   if (range)
-   mmu_notifier_invalidate_range_end(range);
 out:
return -EINVAL;
 }
@@ -4960,7 +4943,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
   pte_t **ptepp, spinlock_t **ptlp)
 {
-   return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+   return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
 
-- 
2.11.0




[PATCH v2 5/6] dax: fix missing writeprotect the pte entry

2022-02-02 Thread Muchun Song
Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

  1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
 making the pmd entry dirty and writeable.
  2) process B mmaps the same file at @offset (e.g. 4K) with @length (e.g.
 4K), writing to it and dirtying the PMD radix tree entry (already
 done in 1)) and making the pte entry dirty and writeable.
  3) fsync, flushing out PMD data and cleaning the radix tree entry. We
 currently fail to mark the pte entry as clean and write protected
 since the vma of process B is not covered in dax_entry_mkclean().
  4) process B writes to the pte. These don't cause any page faults since
 the pte entry is dirty and writeable. The radix tree entry remains
 clean.
  5) fsync, which fails to flush the dirty PMD data because the radix tree
 entry was clean.
  6) crash - dirty data that should have been fsync'd as part of 5) could
 still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns in order to fix this issue.
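
A minimal sketch of the call shape the reworked helper expects for a PMD-sized entry; the wrapper and the hard-coded count are illustrative, and the real caller would derive npfn from the entry's size:

#include <linux/pgtable.h>

/* Sketch (inside fs/dax.c, where dax_entry_mkclean() is static): clean
 * and write-protect every PTE/PMD mapping any of the npfn pfns backing
 * one PMD-sized DAX entry at page offset @index. */
static void mkclean_pmd_entry(struct address_space *mapping,
			      unsigned long pfn, pgoff_t index)
{
	unsigned long npfn = PMD_SIZE >> PAGE_SHIFT;	/* e.g. 512 on x86-64 */

	dax_entry_mkclean(mapping, pfn, npfn, index);
}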

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song 
---
 fs/dax.c | 83 ++--
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index e031e4b6c13c..b64ac02d55d7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #define CREATE_TRACE_POINTS
@@ -801,87 +802,17 @@ static void *dax_insert_entry(struct xa_state *xas,
return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-   unsigned long address;
-
-   address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-   VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-   return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-   unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+ unsigned long npfn, pgoff_t start)
 {
struct vm_area_struct *vma;
-   pte_t pte, *ptep = NULL;
-   pmd_t *pmdp = NULL;
-   spinlock_t *ptl;
+   pgoff_t end = start + npfn - 1;
 
i_mmap_lock_read(mapping);
-   vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-   struct mmu_notifier_range range;
-   unsigned long address;
-
+   vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+   pfn_mkclean_range(pfn, npfn, start, vma);
cond_resched();
-
-   if (!(vma->vm_flags & VM_SHARED))
-   continue;
-
-   address = pgoff_address(index, vma);
-
-   /*
-* follow_invalidate_pte() will use the range to call
-* mmu_notifier_invalidate_range_start() on our behalf before
-* taking any lock.
-*/
-   if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
- &pmdp, &ptl))
-   continue;
-
-   /*
-* No need to call mmu_notifier_invalidate_range() as we are
-* downgrading page table protection not changing it to point
-* to a new page.
-*
-* See Documentation/vm/mmu_notifier.rst
-*/
-   if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-   pmd_t pmd;
-
-   if (pfn != pmd_pfn(*pmdp))
-   goto unlock_pmd;
-   if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-   goto unlock_pmd;
-
-   flush_cache_range(vma, address,
- address + HPAGE_PMD_SIZE);
-   pmd = pmdp_invalidate(vma, address, pmdp);
-   pmd = pmd_wrprotect(pmd);
-   pmd = pmd_mkclean(pmd);
-   set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-   spin_unlock(ptl);
-   } else {
-   if (pfn != pte_pfn(*ptep))
-   goto unlock_pte;
-   if (!pte_dirty(*ptep) && !pte_write(*ptep))
-   goto unlock_pte;
-
-   flush_cache_page(vma, address, pfn);
-   pte = ptep_clear_flush(vma, address, ptep);
-   pte = pte_wrprotect(pte);
-   pte = pte_mkclean(pte);
-   set_pte_at(vma->vm_mm, address, ptep, pte);
-unl

[PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to cleans PTEs

2022-02-02 Thread Muchun Song
page_mkclean_one() is supposed to be used with a pfn that has an associated
struct page, but not all pfns (e.g. DAX) have one. Introduce a new function,
pfn_mkclean_range(), to clean the PTEs (including PMDs) mapped with a range
of pfns that have no struct page associated with them. This helper will be
used by the DAX device in the next patch to make pfns clean.
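
A minimal sketch of how a caller might use the new helper for a single pfn; the wrapper name is illustrative, and the assumption that the return value counts the cleaned mappings follows from the "cleaned" counter visible in the diff below:

#include <linux/rmap.h>
#include <linux/printk.h>

/* Sketch: clean and write-protect every mapping of one pfn inside @vma.
 * The return value is assumed to be the number of cleaned entries. */
static void wrprotect_one_pfn(struct vm_area_struct *vma,
			      unsigned long pfn, pgoff_t pgoff)
{
	int cleaned = pfn_mkclean_range(pfn, 1, pgoff, vma);

	if (cleaned)
		pr_debug("cleaned %d mapping(s) of pfn %#lx\n", cleaned, pfn);
}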

Signed-off-by: Muchun Song 
---
 include/linux/rmap.h |  3 ++
 mm/internal.h| 26 ++--
 mm/rmap.c| 84 +---
 3 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 78373935ad49..668a1e81b442 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -241,6 +241,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
  */
 unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
 
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff,
+ struct vm_area_struct *vma);
+
 /*
  * Cleans the PTEs of shared mappings.
  * (and since clean PTEs should also be readonly, write protects them too)
diff --git a/mm/internal.h b/mm/internal.h
index 5458cd08df33..dc71256e568f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -449,26 +449,22 @@ extern void clear_page_mlock(struct page *page);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+ struct vm_area_struct *vma)
 {
-   pgoff_t pgoff;
unsigned long address;
 
-   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
-   pgoff = page_to_pgoff(page);
if (pgoff >= vma->vm_pgoff) {
address = vma->vm_start +
((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
/* Check for address beyond vma (or wrapped through 0?) */
if (address < vma->vm_start || address >= vma->vm_end)
address = -EFAULT;
-   } else if (PageHead(page) &&
-  pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+   } else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
/* Test above avoids possibility of wrap to 0 on 32-bit */
address = vma->vm_start;
} else {
@@ -478,6 +474,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
+   return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Return the end of user virtual address at the specific offset within
  * a vma.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 0ba12dc9fae3..8f1860dc22bc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,34 +928,33 @@ int page_referenced(struct page *page,
return pra.referenced;
 }
 
-static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
-   unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-   struct page_vma_mapped_walk pvmw = {
-   .page = page,
-   .vma = vma,
-   .address = address,
-   .flags = PVMW_SYNC,
-   };
+   int cleaned = 0;
+   struct vm_area_struct *vma = pvmw->vma;
struct mmu_notifier_range range;
-   int *cleaned = arg;
+   unsigned long end;
+
+   if (pvmw->flags & PVMW_PFN_WALK)
+   end = vma_pgoff_address_end(pvmw->index, pvmw->nr, vma);
+   else
+   end = vma_address_end(pvmw->page, vma);
 
/*
 * We have to assume the worse case ie pmd for invalidation. Note that
 * the page can not be free from this function.
 */
-   mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-   0, vma, vma->vm_mm, address,
-   vma_address_end(page, vma));
+   mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
+   vma->vm_mm, pvmw->address, end);
mmu_notifier_invalidate_range_start(&range);
 
-   while (page_vma_mapped_walk(&pvmw)) {
+   while (page_vma_mapped_walk(pvmw)) {
int r

[PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma

2022-02-02 Thread Muchun Song
page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
However, not all page frames (e.g. PFN_DEV) have an associated struct page.
There is going to be duplicate code similar to this function if someone
wants to check if a pfn (without a struct page) is mapped into a vma. So add
support for checking if a pfn is mapped into a vma. In the next patch, DAX
will use this new feature.
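
A minimal sketch of the intended calling convention with the new flag, based on the pfn/nr/index union members added to struct page_vma_mapped_walk below; the walker body is left empty and the setup values are the caller's responsibility:

#include <linux/rmap.h>

/* Sketch: walk @vma for a raw pfn range (no struct page) using the new
 * PVMW_PFN_WALK flag. */
static void walk_pfns_in_vma(struct vm_area_struct *vma, unsigned long pfn,
			     unsigned int nr, pgoff_t index,
			     unsigned long address)
{
	struct page_vma_mapped_walk pvmw = {
		.pfn	 = pfn,
		.nr	 = nr,
		.index	 = index,
		.vma	 = vma,
		.address = address,
		.flags	 = PVMW_SYNC | PVMW_PFN_WALK,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		/* pvmw.pte or pvmw.pmd now points at a mapping of the range,
		 * with pvmw.ptl held; clean/write-protect it here. */
	}
}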

Signed-off-by: Muchun Song 
---
 include/linux/rmap.h| 14 --
 include/linux/swapops.h | 13 +++---
 mm/internal.h   | 28 +---
 mm/page_vma_mapped.c| 68 +++--
 4 files changed, 83 insertions(+), 40 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 221c3c6438a7..78373935ad49 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -204,9 +204,18 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 #define PVMW_SYNC  (1 << 0)
 /* Look for migarion entries rather than present PTEs */
 #define PVMW_MIGRATION (1 << 1)
+/* Walk the page table by checking the pfn instead of a struct page */
+#define PVMW_PFN_WALK  (1 << 2)
 
 struct page_vma_mapped_walk {
-   struct page *page;
+   union {
+   struct page *page;
+   struct {
+   unsigned long pfn;
+   unsigned int nr;
+   pgoff_t index;
+   };
+   };
struct vm_area_struct *vma;
unsigned long address;
pmd_t *pmd;
@@ -218,7 +227,8 @@ struct page_vma_mapped_walk {
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-   if (pvmw->pte && !PageHuge(pvmw->page))
+   if (pvmw->pte && (pvmw->flags & PVMW_PFN_WALK ||
+ !PageHuge(pvmw->page)))
pte_unmap(pvmw->pte);
if (pvmw->ptl)
spin_unlock(pvmw->ptl);
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d356ab4047f7..d28bf65fd6a5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -247,17 +247,22 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
 
 #endif
 
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline unsigned long pfn_swap_entry_to_pfn(swp_entry_t entry)
 {
-   struct page *p = pfn_to_page(swp_offset(entry));
+   unsigned long pfn = swp_offset(entry);
 
/*
 * Any use of migration entries may only occur while the
 * corresponding page is locked
 */
-   BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+   BUG_ON(is_migration_entry(entry) && !PageLocked(pfn_to_page(pfn)));
+
+   return pfn;
+}
 
-   return p;
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+   return pfn_to_page(pfn_swap_entry_to_pfn(entry));
 }
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..5458cd08df33 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -478,25 +478,35 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
- * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
+ * Return the end of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address_end(pgoff_t pgoff, unsigned long nr_pages,
+ struct vm_area_struct *vma)
 {
-   pgoff_t pgoff;
-   unsigned long address;
+   unsigned long address = vma->vm_start;
 
-   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
-   pgoff = page_to_pgoff(page) + compound_nr(page);
-   address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+   address += (pgoff + nr_pages - vma->vm_pgoff) << PAGE_SHIFT;
/* Check for address beyond vma (or wrapped through 0?) */
if (address < vma->vm_start || address > vma->vm_end)
address = vma->vm_end;
return address;
 }
 
+/*
+ * Return the end of user virtual address of a page within a vma. Assumes that
+ * vma_address() already returned a good starting address. If page is a compound
+ * head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address_end(struct page *page, struct vm_area_struct *vma)
+{
+   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
+   return vma_pgoff_address_end(page_to_pgoff(page), compound_nr(page),
+vma);
+}
+
 static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
st

[PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages

2022-02-02 Thread Muchun Song
flush_cache_page() only removes a PAGE_SIZE sized range from the cache, so
for a PMD-mapped THP it covers only the head page rather than the full
range. Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song 
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 88be1c02a151..e031e4b6c13c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -857,7 +857,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
goto unlock_pmd;
 
-   flush_cache_page(vma, address, pfn);
+   flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
pmd = pmdp_invalidate(vma, address, pmdp);
pmd = pmd_wrprotect(pmd);
pmd = pmd_mkclean(pmd);
-- 
2.11.0




[PATCH v2 1/6] mm: rmap: fix cache flush on THP pages

2022-02-02 Thread Muchun Song
flush_cache_page() only removes a PAGE_SIZE sized range from the cache, so
for a THP it covers only the head page rather than the full range. Replace
it with flush_cache_range() to fix this issue. At least, no problems have
been found due to this so far, perhaps because few architectures have
virtually indexed caches.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song 
Reviewed-by: Yang Shi 
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..0ba12dc9fae3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,8 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
continue;
 
-   flush_cache_page(vma, address, page_to_pfn(page));
+   flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
entry = pmdp_invalidate(vma, address, pmd);
entry = pmd_wrprotect(entry);
entry = pmd_mkclean(entry);
-- 
2.11.0




[PATCH v2 0/6] Fix some bugs related to rmap and dax

2022-02-02 Thread Muchun Song
Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are placed in this series.  Patches 3-4 are preparation
for fixing a DAX bug in patch 5.  Patch 6 is a code cleanup, since the
previous patch removes the usage of follow_invalidate_pte().

Changes in v2:
  - Avoid overly long lines in lots of places, as suggested by Christoph.
  - Fix a compiler warning reported by the kernel test robot, since pmd_pfn()
    is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on the powerpc
    architecture.
  - Split out a new patch 4 in preparation for fixing the dax bug.

Muchun Song (6):
  mm: rmap: fix cache flush on THP pages
  dax: fix cache flush on PMD-mapped pages
  mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
  dax: fix missing writeprotect the pte entry
  mm: remove range parameter from follow_invalidate_pte()

 fs/dax.c| 82 --
 include/linux/mm.h  |  3 --
 include/linux/rmap.h| 17 --
 include/linux/swapops.h | 13 +---
 mm/internal.h   | 52 +++--
 mm/memory.c | 23 ++---
 mm/page_vma_mapped.c| 68 --
 mm/rmap.c   | 87 ++---
 8 files changed, 180 insertions(+), 165 deletions(-)

-- 
2.11.0




Re: [PATCH v10 4/9] fsdax: fix function description

2022-02-02 Thread Christoph Hellwig
Dan, can you send this to Linus for 5.17 to get it out of the queue?



Re: [PATCH v10 1/9] dax: Introduce holder for dax_device

2022-02-02 Thread Christoph Hellwig
On Thu, Jan 27, 2022 at 08:40:50PM +0800, Shiyang Ruan wrote:
> +void dax_register_holder(struct dax_device *dax_dev, void *holder,
> + const struct dax_holder_operations *ops)
> +{
> + if (!dax_alive(dax_dev))
> + return;
> +
> + dax_dev->holder_data = holder;
> + dax_dev->holder_ops = ops;

This needs to return an error if there is another holder already.  And
some kind of locking to prevent concurrent registrations.

Also please add kerneldoc comments for the new exported functions.
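
For illustration, one way dax_register_holder() could reject a second holder and serialize registration, roughly along the lines requested above; the holder_rwsem member, the int return type and the error codes are assumptions, not part of the posted patch:

/* Sketch (in drivers/dax/super.c): fail if a holder is already set and
 * serialize concurrent registrations. holder_rwsem is a hypothetical
 * member; the posted patch returns void and takes no lock. */
int dax_register_holder(struct dax_device *dax_dev, void *holder,
			const struct dax_holder_operations *ops)
{
	int ret = 0;

	down_write(&dax_dev->holder_rwsem);
	if (!dax_alive(dax_dev)) {
		ret = -ENXIO;
	} else if (dax_dev->holder_data) {
		ret = -EBUSY;		/* someone already claimed the device */
	} else {
		dax_dev->holder_data = holder;
		dax_dev->holder_ops = ops;
	}
	up_write(&dax_dev->holder_rwsem);

	return ret;
}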

> +void *dax_get_holder(struct dax_device *dax_dev)
> +{
> + if (!dax_alive(dax_dev))
> + return NULL;
> +
> + return dax_dev->holder_data;
> +}
> +EXPORT_SYMBOL_GPL(dax_get_holder);

get tends to imply getting a reference.  Maybe just dax_holder()?
That being said I can't see where we'd even want to use the holder
outside of this file.