From: Thomas Hellstrom <thellst...@vmware.com>

Without the page table lock, anybody modifying a pte from within a
walk_pte_range() callback risks having it concurrently modified by someone
else. Take the pte lock around the pte_entry() callbacks so that they can
safely modify the entries they are handed.
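
As an illustration (example only, not part of this patch; the callback and
its names are hypothetical), a pte_entry() handler like the one below can
now modify the entry it is passed without racing against a concurrent fault
or zap, because walk_pte_range() invokes it with the pte lock held:

  /* Hypothetical callback, for illustration only. */
  static int clear_young_pte(pte_t *pte, unsigned long addr,
                             unsigned long next, struct mm_walk *walk)
  {
          /*
           * No further locking is needed here: walk_pte_range()
           * holds the page table lock across this call.
           */
          if (pte_present(*pte))
                  ptep_test_and_clear_young(walk->vma, addr, pte);
          return 0;
  }

  static const struct mm_walk_ops clear_young_ops = {
          .pte_entry = clear_young_pte,
  };

A walker would then run it via
walk_page_range(mm, start, end, &clear_young_ops, NULL), with mmap_sem held
for read as before.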

Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rik van Riel <r...@surriel.com>
Cc: Minchan Kim <minc...@kernel.org>
Cc: Michal Hocko <mho...@suse.com>
Cc: Huang Ying <ying.hu...@intel.com>
Cc: Jérôme Glisse <jgli...@redhat.com>
Cc: Kirill A. Shutemov <kir...@shutemov.name>
Suggested-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Thomas Hellstrom <thellst...@vmware.com>
---
 mm/pagewalk.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d48c2a986ea3..83c0b78363b4 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -10,8 +10,9 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
        pte_t *pte;
        int err = 0;
        const struct mm_walk_ops *ops = walk->ops;
+       spinlock_t *ptl;
 
-       pte = pte_offset_map(pmd, addr);
+       pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
        for (;;) {
                err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
                if (err)
@@ -22,7 +23,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
                pte++;
        }
 
-       pte_unmap(pte);
+       pte_unmap_unlock(pte - 1, ptl);
        return err;
 }
 
-- 
2.21.0
