In adjust_pte(), use the recently added pte_offset_map_nolock() instead
of pte_lockptr(): it returns the not-yet-locked ptl for precisely that
pte, which the caller can then safely lock; whereas pte_lockptr() is not
so tightly coupled, because it dereferences the pmd pointer again.
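
To illustrate the difference, a simplified sketch of the two patterns
(not the actual adjust_pte() code: it omits the ARM nested-locking
detail, so plain spin_lock() stands in for do_pte_lock(), and it assumes
vma, pmd, address, pte and ptl are in scope as they are in adjust_pte()):

	/* Old pattern: map the pte, then look up the ptl from the pmd again */
	pte = pte_offset_map(pmd, address);
	if (!pte)
		return 0;
	ptl = pte_lockptr(vma->vm_mm, pmd);
	spin_lock(ptl);

	/*
	 * New pattern: one call returns the pte together with the ptl for
	 * exactly that page table, still unlocked, so the caller then
	 * locks precisely the right lock.
	 */
	pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
	if (!pte)
		return 0;
	spin_lock(ptl);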

Signed-off-by: Hugh Dickins <hu...@google.com>
---
 arch/arm/mm/fault-armv.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index ca5302b0b7ee..7cb125497976 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -117,11 +117,10 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
         * must use the nested version.  This also means we need to
         * open-code the spin-locking.
         */
-       pte = pte_offset_map(pmd, address);
+       pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
        if (!pte)
                return 0;
 
-       ptl = pte_lockptr(vma->vm_mm, pmd);
        do_pte_lock(ptl);
 
        ret = do_adjust_pte(vma, address, pfn, pte);
-- 
2.35.3
