[PATCH v2] device-dax: Add a match parameter to select which driver matches dax devices

2022-03-02 Thread Zhenguo Yao
The device_dax driver always matches dax devices by default, while the
other drivers only match devices by dax_id. There are situations that
need the kmem driver to match all dax devices at boot time, so add a
module parameter to support this.

Signed-off-by: Zhenguo Yao 
---

Changes:
- v1->v2: fix build errors reported by the kernel test robot
---

 drivers/dax/device.c | 3 +++
 drivers/dax/kmem.c   | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index dd8222a..3d228b2 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -452,6 +452,7 @@ int dev_dax_probe(struct dev_dax *dev_dax)
 }
 EXPORT_SYMBOL_GPL(dev_dax_probe);
 
+unsigned int dax_match = 1;
 static struct dax_device_driver device_dax_driver = {
.probe = dev_dax_probe,
/* all probe actions are unwound by devm, so .remove isn't necessary */
@@ -460,6 +461,7 @@ int dev_dax_probe(struct dev_dax *dev_dax)
 
 static int __init dax_init(void)
 {
+   device_dax_driver.match_always = dax_match;
	return dax_driver_register(&device_dax_driver);
 }
 
@@ -468,6 +470,7 @@ static void __exit dax_exit(void)
	dax_driver_unregister(&device_dax_driver);
 }
 
+module_param(dax_match, uint, 0644);
 MODULE_AUTHOR("Intel Corporation");
 MODULE_LICENSE("GPL v2");
 module_init(dax_init);
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index a376220..2f1fb98 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -214,9 +214,11 @@ static void dev_dax_kmem_remove(struct dev_dax *dev_dax)
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
+unsigned int kmem_match;
 static struct dax_device_driver device_dax_kmem_driver = {
.probe = dev_dax_kmem_probe,
.remove = dev_dax_kmem_remove,
+   .match_always = 0,
 };
 
 static int __init dax_kmem_init(void)
@@ -228,6 +230,7 @@ static int __init dax_kmem_init(void)
if (!kmem_name)
return -ENOMEM;
 
+   device_dax_kmem_driver.match_always = kmem_match;
	rc = dax_driver_register(&device_dax_kmem_driver);
if (rc)
kfree_const(kmem_name);
@@ -241,6 +244,7 @@ static void __exit dax_kmem_exit(void)
kfree_const(kmem_name);
 }
 
+module_param(kmem_match, uint, 0644);
 MODULE_AUTHOR("Intel Corporation");
 MODULE_LICENSE("GPL v2");
 module_init(dax_kmem_init);
-- 
1.8.3.1
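
For context (not part of this patch): the match_always flag being toggled
here is consumed by the dax bus match callback in drivers/dax/bus.c, which
in this era of the code looks roughly like the sketch below.

	static int dax_bus_match(struct device *dev, struct device_driver *drv)
	{
		struct dax_device_driver *dax_drv = to_dax_drv(drv);

		/*
		 * A driver with 'match_always' set claims every dax device;
		 * otherwise an exact dax id match is required.
		 */
		if (dax_drv->match_always)
			return 1;

		return dax_match_id(dax_drv, dev) != 0;
	}

With both new parameters in place, one would presumably boot with something
like device_dax.dax_match=0 kmem.kmem_match=1 (the exact parameter prefixes
depend on how the drivers are built) so that the kmem driver, rather than
device_dax, claims all dax devices at boot.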




Re: [PATCH v1 1/1] ACPI: Switch to use list_entry_is_head() helper

2022-03-02 Thread Andy Shevchenko
On Wed, Mar 02, 2022 at 05:36:20PM +0100, Rafael J. Wysocki wrote:
> On Wed, Mar 2, 2022 at 4:50 PM Andy Shevchenko
>  wrote:
> > On Fri, Feb 11, 2022 at 01:04:23PM +0200, Andy Shevchenko wrote:
> > > Since we got the list_entry_is_head() helper in the generic header,
> > > we may switch the ACPI modules to use it. This eliminates the need
> > > for an additional variable. In some cases it reduces critical
> > > sections as well.
> >
> > Besides the work required in a couple of cases (LKP) there is an
> > ongoing discussion about list loops (and this particular API).
> >
> > Rafael, what do you think is the best course of action here?
> 
> I think the current approach is to do the opposite of what this patch
> is attempting to do: avoid using the list iterator outside of the
> loop.

OK, let's drop this change.

-- 
With Best Regards,
Andy Shevchenko





Re: [PATCH v1 1/1] ACPI: Switch to use list_entry_is_head() helper

2022-03-02 Thread Rafael J. Wysocki
On Wed, Mar 2, 2022 at 4:50 PM Andy Shevchenko
 wrote:
>
> On Fri, Feb 11, 2022 at 01:04:23PM +0200, Andy Shevchenko wrote:
> > Since we got the list_entry_is_head() helper in the generic header,
> > we may switch the ACPI modules to use it. This eliminates the need
> > for an additional variable. In some cases it reduces critical
> > sections as well.
>
> Besides the work required in a couple of cases (LKP) there is an
> ongoing discussion about list loops (and this particular API).
>
> Rafael, what do you think is the best course of action here?

I think the current approach is to do the opposite of what this patch
is attempting to do: avoid using the list iterator outside of the
loop.
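
For context, "avoid using the list iterator outside of the loop" refers to
the pattern being discussed on LKML around this time: rather than comparing
the iterator against the list head after the loop (which is exactly what
list_entry_is_head() does), keep a separate pointer that is only set when a
match is found. A minimal sketch, not taken from the ACPI code:

	struct acpi_foo *entry, *found = NULL;	/* hypothetical type and list */

	list_for_each_entry(entry, &foo_list, node) {
		if (entry->id == id) {
			found = entry;
			break;
		}
	}
	if (!found)
		return -ENODEV;
	/* only 'found' is used past this point; 'entry' stays inside the loop */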



Re: [PATCH v1 1/1] ACPI: Switch to use list_entry_is_head() helper

2022-03-02 Thread Andy Shevchenko
On Fri, Feb 11, 2022 at 01:04:23PM +0200, Andy Shevchenko wrote:
> Since we got the list_entry_is_head() helper in the generic header,
> we may switch the ACPI modules to use it. This eliminates the need
> for an additional variable. In some cases it reduces critical
> sections as well.

Besides the work required in a couple of cases (LKP) there is an
ongoing discussion about list loops (and this particular API).

Rafael, what do you think is the best course of action here?

-- 
With Best Regards,
Andy Shevchenko





[PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte()

2022-03-02 Thread Muchun Song
The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.

Signed-off-by: Muchun Song 
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
- struct mmu_notifier_range *range, pte_t **ptepp,
- pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
   pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
- struct mmu_notifier_range *range, pte_t **ptepp,
- pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
pgd_t *pgd;
p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
if (!pmdpp)
goto out;
 
-   if (range) {
-   mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-   NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-   mmu_notifier_invalidate_range_start(range);
-   }
*ptlp = pmd_lock(mm, pmd);
if (pmd_huge(*pmd)) {
*pmdpp = pmd;
return 0;
}
spin_unlock(*ptlp);
-   if (range)
-   mmu_notifier_invalidate_range_end(range);
}
 
if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
goto out;
 
-   if (range) {
-   mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-   address & PAGE_MASK,
-   (address & PAGE_MASK) + PAGE_SIZE);
-   mmu_notifier_invalidate_range_start(range);
-   }
ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
if (!pte_present(*ptep))
goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
return 0;
 unlock:
pte_unmap_unlock(ptep, *ptlp);
-   if (range)
-   mmu_notifier_invalidate_range_end(range);
 out:
return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
   pte_t **ptepp, spinlock_t **ptlp)
 {
-   return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+   return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
 
-- 
2.11.0
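
After this cleanup, follow_pte() remains the only exported way to look up a
pte, and the caller is responsible for dropping the page table lock. A
minimal usage sketch for a hypothetical caller, based on the prototype kept
in include/linux/mm.h above:

	pte_t *ptep;
	spinlock_t *ptl;
	unsigned long pfn;

	if (follow_pte(vma->vm_mm, address, &ptep, &ptl))
		return -EINVAL;		/* no pte mapped at this address */
	pfn = pte_pfn(*ptep);		/* inspect the entry under the lock */
	pte_unmap_unlock(ptep, ptl);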




[PATCH v4 5/6] dax: fix missing writeprotect the pte entry

2022-03-02 Thread Muchun Song
Currently dax_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

  1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
 making the pmd entry dirty and writeable.
  2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
 write to the same file, dirtying PMD radix tree entry (already
 done in 1)) and making the pte entry dirty and writeable.
  3) fsync, flushing out PMD data and cleaning the radix tree entry. We
 currently fail to mark the pte entry as clean and write protected
 since the vma of process B is not covered in dax_entry_mkclean().
  4) process B writes to the pte. These don't cause any page faults since
 the pte entry is dirty and writeable. The radix tree entry remains
 clean.
  5) fsync, which fails to flush the dirty PMD data because the radix tree
 entry was clean.
  6) crash - dirty data that should have been fsync'd as part of 5) could
 still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song 
---
 fs/dax.c | 83 ++--
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-   unsigned long address;
-
-   address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-   VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-   return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-   unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+ unsigned long npfn, pgoff_t start)
 {
struct vm_area_struct *vma;
-   pte_t pte, *ptep = NULL;
-   pmd_t *pmdp = NULL;
-   spinlock_t *ptl;
+   pgoff_t end = start + npfn - 1;
 
i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-   struct mmu_notifier_range range;
-   unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+   pfn_mkclean_range(pfn, npfn, start, vma);
cond_resched();
-
-   if (!(vma->vm_flags & VM_SHARED))
-   continue;
-
-   address = pgoff_address(index, vma);
-
-   /*
-* follow_invalidate_pte() will use the range to call
-* mmu_notifier_invalidate_range_start() on our behalf before
-* taking any lock.
-*/
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-   continue;
-
-   /*
-* No need to call mmu_notifier_invalidate_range() as we are
-* downgrading page table protection not changing it to point
-* to a new page.
-*
-* See Documentation/vm/mmu_notifier.rst
-*/
-   if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-   pmd_t pmd;
-
-   if (pfn != pmd_pfn(*pmdp))
-   goto unlock_pmd;
-   if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-   goto unlock_pmd;
-
-   flush_cache_range(vma, address,
- address + HPAGE_PMD_SIZE);
-   pmd = pmdp_invalidate(vma, address, pmdp);
-   pmd = pmd_wrprotect(pmd);
-   pmd = pmd_mkclean(pmd);
-   set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-   spin_unlock(ptl);
-   } else {
-   if (pfn != pte_pfn(*ptep))
-   goto unlock_pte;
-   if (!pte_dirty(*ptep) && !pte_write(*ptep))
-   goto unlock_pte;
-
-   flush_cache_page(vma, address, pfn);
-   pte = ptep_clear_flush(vma, address, ptep);
-   pte = pte_wrprotect(pte);
-   pte = pte_mkclean(pte);
-   set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-   

[PATCH v4 4/6] mm: pvmw: add support for walking devmap pages

2022-03-02 Thread Muchun Song
Currently page_vma_mapped_walk() cannot be used to check whether a huge
devmap page is mapped into a vma.  Add support for walking huge devmap
pages so that DAX can use it in the next patch.

Signed-off-by: Muchun Song 
---
 mm/page_vma_mapped.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..f9ffa84adf4d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 */
pmde = READ_ONCE(*pvmw->pmd);
 
-   if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+   if (pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
+   is_pmd_migration_entry(pmde)) {
pvmw->ptl = pmd_lock(mm, pvmw->pmd);
pmde = *pvmw->pmd;
-   if (likely(pmd_trans_huge(pmde))) {
+   if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
if (pvmw->flags & PVMW_MIGRATION)
return not_found(pvmw);
if (!check_pmd(pmd_pfn(pmde), pvmw))
-- 
2.11.0
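
For context, the separate pmd_devmap() test is needed because
pmd_trans_huge() deliberately excludes devmap PMDs. On x86, for example,
it is defined roughly as follows:

	static inline int pmd_trans_huge(pmd_t pmd)
	{
		/* a devmap PMD also has _PAGE_DEVMAP set, so it is excluded here */
		return (pmd_val(pmd) & (_PAGE_PSE | _PAGE_DEVMAP)) == _PAGE_PSE;
	}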




[PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to cleans PTEs

2022-03-02 Thread Muchun Song
page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one. Introduce
a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs)
mapped with a range of pfns that have no struct page associated with
them. This helper will be used by the DAX device in the next patch to
make pfns clean.

Signed-off-by: Muchun Song 
---
 include/linux/rmap.h |  3 +++
 mm/internal.h| 26 +
 mm/rmap.c| 65 +++-
 3 files changed, 74 insertions(+), 20 deletions(-)
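
To illustrate the intended calling convention (this is how the DAX code in
the next patch uses it), a minimal sketch based on the prototype added to
include/linux/rmap.h below:

	/*
	 * Clean and write-protect every PTE/PMD in 'vma' that maps a pfn in
	 * [pfn, pfn + nr_pages) at file offset 'pgoff'; the return value is
	 * the number of entries cleaned.
	 */
	int cleaned = pfn_mkclean_range(pfn, nr_pages, pgoff, vma);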

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+ struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+ struct vm_area_struct *vma)
 {
-   pgoff_t pgoff;
unsigned long address;
 
-   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
-   pgoff = page_to_pgoff(page);
if (pgoff >= vma->vm_pgoff) {
address = vma->vm_start +
((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
/* Check for address beyond vma (or wrapped through 0?) */
if (address < vma->vm_start || address >= vma->vm_end)
address = -EFAULT;
-   } else if (PageHead(page) &&
-  pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+   } else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
/* Test above avoids possibility of wrap to 0 on 32-bit */
address = vma->vm_start;
} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+   VM_BUG_ON_PAGE(PageKsm(page), page);/* KSM page->index unusable */
+   return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-   unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-   DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+   int cleaned = 0;
+   struct vm_area_struct *vma = pvmw->vma;
struct mmu_notifier_range range;
-   int *cleaned = arg;
+   unsigned long address = pvmw->address;
 
/*
 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 */
	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
				0, vma, vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+   while (page_vma_mapped_walk(pvmw)) {
int ret = 0;
 
-   address = pvmw.address;
-   if (pvmw.pte) {
+   address = pvmw->address;
+   if (pvmw->pte) {
pte_t entry;
-   pte_t *pte = pvmw.pte;
+   pte_t *pte = pvmw->pte;
 
if 

[PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages

2022-03-02 Thread Muchun Song
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, it does not cover the full pages of a THP, only the head page.
Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song 
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
goto unlock_pmd;
 
-   flush_cache_page(vma, address, pfn);
+   flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
pmd = pmdp_invalidate(vma, address, pmdp);
pmd = pmd_wrprotect(pmd);
pmd = pmd_mkclean(pmd);
-- 
2.11.0




[PATCH v4 1/6] mm: rmap: fix cache flush on THP pages

2022-03-02 Thread Muchun Song
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, it does not cover the full pages of a THP, only the head page.
Replace it with flush_cache_range() to fix this issue. At least, no
problems have been observed due to this, perhaps because architectures
with virtually indexed caches are rare.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song 
Reviewed-by: Yang Shi 
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
continue;
 
-   flush_cache_page(vma, address, folio_pfn(folio));
+   flush_cache_range(vma, address,
+ address + HPAGE_PMD_SIZE);
entry = pmdp_invalidate(vma, address, pmd);
entry = pmd_wrprotect(entry);
entry = pmd_mkclean(entry);
-- 
2.11.0
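
To make the size mismatch concrete: flush_cache_page() flushes a single
PAGE_SIZE page, while a PMD-mapped THP spans HPAGE_PMD_SIZE (2 MiB with
4 KiB base pages, i.e. 512 pages), so the tail pages were previously left
unflushed on architectures with virtually indexed caches. Both cache-flush
fixes in this series boil down to:

	/* before: only the single page at 'address' is flushed */
	flush_cache_page(vma, address, pfn);

	/* after: the whole PMD-sized mapping is flushed */
	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);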




[PATCH v4 0/6] Fix some bugs related to rmap and dax

2022-03-02 Thread Muchun Song
This series is based on next-20220225.

Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are included in this series.  Patches 3-4 are
preparation for fixing a dax bug in patch 5.  Patch 6 is a code cleanup
since the previous patch removes the usage of follow_invalidate_pte().

v4:
- Fix compilation error on riscv.

v3:
- Based on next-20220225.

v2:
- Avoid the overly long lines in lots of places, as suggested by Christoph.
- Fix a compiler warning reported by the kernel test robot, since pmd_pfn()
  is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on the powerpc architecture.
- Split a new patch 4 for preparation of fixing the dax bug.

Muchun Song (6):
  mm: rmap: fix cache flush on THP pages
  dax: fix cache flush on PMD-mapped pages
  mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
  mm: pvmw: add support for walking devmap pages
  dax: fix missing writeprotect the pte entry
  mm: remove range parameter from follow_invalidate_pte()

 fs/dax.c | 82 +---
 include/linux/mm.h   |  3 --
 include/linux/rmap.h |  3 ++
 mm/internal.h| 26 +++--
 mm/memory.c  | 23 ++-
 mm/page_vma_mapped.c |  5 ++--
 mm/rmap.c| 68 +++
 7 files changed, 89 insertions(+), 121 deletions(-)

-- 
2.11.0