Re: [PATCH v4 1/4] mm/migrate_device.c: Flush TLB while holding PTL

2022-09-02 Thread Peter Xu
On Fri, Sep 02, 2022 at 10:35:51AM +1000, Alistair Popple wrote:
> When clearing a PTE the TLB should be flushed whilst still holding the
> PTL to avoid a potential race with madvise/munmap/etc. For example
> consider the following sequence:
> 
>   CPU0                          CPU1
>   ----                          ----
> 
>   migrate_vma_collect_pmd()
>   pte_unmap_unlock()
>                                 madvise(MADV_DONTNEED)
>                                 -> zap_pte_range()
>                                 pte_offset_map_lock()
>                                 [ PTE not present, TLB not flushed ]
>                                 pte_unmap_unlock()
>                                 [ page is still accessible via stale TLB ]
>   flush_tlb_range()
> 
> In this case the page may still be accessed via the stale TLB entry
> after madvise returns. Fix this by flushing the TLB while holding the
> PTL.
> 
> Signed-off-by: Alistair Popple 
> Reported-by: Nadav Amit 
> Reviewed-by: "Huang, Ying" 
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Cc: sta...@vger.kernel.org

Acked-by: Peter Xu 

-- 
Peter Xu



Re: [PATCH v4 1/4] mm/migrate_device.c: Flush TLB while holding PTL

2022-09-01 Thread David Hildenbrand
On 02.09.22 02:35, Alistair Popple wrote:
> When clearing a PTE the TLB should be flushed whilst still holding the
> PTL to avoid a potential race with madvise/munmap/etc. For example
> consider the following sequence:
> 
>   CPU0                          CPU1
>   ----                          ----
> 
>   migrate_vma_collect_pmd()
>   pte_unmap_unlock()
>                                 madvise(MADV_DONTNEED)
>                                 -> zap_pte_range()
>                                 pte_offset_map_lock()
>                                 [ PTE not present, TLB not flushed ]
>                                 pte_unmap_unlock()
>                                 [ page is still accessible via stale TLB ]
>   flush_tlb_range()
> 
> In this case the page may still be accessed via the stale TLB entry
> after madvise returns. Fix this by flushing the TLB while holding the
> PTL.
> 
> Signed-off-by: Alistair Popple 
> Reported-by: Nadav Amit 
> Reviewed-by: "Huang, Ying" 
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Cc: sta...@vger.kernel.org
> 
> ---
> 
> Changes for v4:
> 
>  - Added Reviewed-by
> 
> Changes for v3:
> 
>  - New for v3
> ---
>  mm/migrate_device.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 27fb37d..6a5ef9f 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -254,13 +254,14 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>                  migrate->dst[migrate->npages] = 0;
>                  migrate->src[migrate->npages++] = mpfn;
>          }
> -        arch_leave_lazy_mmu_mode();
> -        pte_unmap_unlock(ptep - 1, ptl);
>  
>          /* Only flush the TLB if we actually modified any entries */
>          if (unmapped)
>                  flush_tlb_range(walk->vma, start, end);
>  
> +        arch_leave_lazy_mmu_mode();
> +        pte_unmap_unlock(ptep - 1, ptl);
> +
>          return 0;
>  }
>  
> 
> base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136


Acked-by: David Hildenbrand 

-- 
Thanks,

David / dhildenb



[PATCH v4 1/4] mm/migrate_device.c: Flush TLB while holding PTL

2022-09-01 Thread Alistair Popple
When clearing a PTE the TLB should be flushed whilst still holding the
PTL to avoid a potential race with madvise/munmap/etc. For example
consider the following sequence:

  CPU0                          CPU1
  ----                          ----

  migrate_vma_collect_pmd()
  pte_unmap_unlock()
                                madvise(MADV_DONTNEED)
                                -> zap_pte_range()
                                pte_offset_map_lock()
                                [ PTE not present, TLB not flushed ]
                                pte_unmap_unlock()
                                [ page is still accessible via stale TLB ]
  flush_tlb_range()

In this case the page may still be accessed via the stale TLB entry
after madvise returns. Fix this by flushing the TLB while holding the
PTL.

Signed-off-by: Alistair Popple 
Reported-by: Nadav Amit 
Reviewed-by: "Huang, Ying" 
Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
Cc: sta...@vger.kernel.org

---

Changes for v4:

 - Added Reviewed-by

Changes for v3:

 - New for v3
---
 mm/migrate_device.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 27fb37d..6a5ef9f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -254,13 +254,14 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
                 migrate->dst[migrate->npages] = 0;
                 migrate->src[migrate->npages++] = mpfn;
         }
-        arch_leave_lazy_mmu_mode();
-        pte_unmap_unlock(ptep - 1, ptl);
 
         /* Only flush the TLB if we actually modified any entries */
         if (unmapped)
                 flush_tlb_range(walk->vma, start, end);
 
+        arch_leave_lazy_mmu_mode();
+        pte_unmap_unlock(ptep - 1, ptl);
+
         return 0;
 }
 

base-commit: ffcf9c5700e49c0aee42dcba9a12ba21338e8136
-- 
git-series 0.9.1
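

For reference, this is how the tail of migrate_vma_collect_pmd() reads once
the patch above is applied, reconstructed from the hunk in this mail. The
"Note" comment is not part of the patch; it is added here only to spell out
why the new ordering closes the race described in the commit message.

                /* ... per-PTE collection loop body ... */
        }

        /* Only flush the TLB if we actually modified any entries */
        if (unmapped)
                flush_tlb_range(walk->vma, start, end);

        /*
         * Note: the flush above now happens before pte_unmap_unlock(), so a
         * racing unmapper such as zap_pte_range() (e.g. via MADV_DONTNEED)
         * blocks in pte_offset_map_lock() and cannot observe a cleared PTE
         * until the stale TLB entries for it are gone.
         */
        arch_leave_lazy_mmu_mode();
        pte_unmap_unlock(ptep - 1, ptl);

        return 0;
}

With the old ordering, the window between pte_unmap_unlock() and
flush_tlb_range() is exactly where CPU1 in the sequence above could complete
MADV_DONTNEED while the page was still reachable through the stale TLB entry.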