Basically the same race as with NUMA balancing in change_huge_pmd(): clearing
the pmd with pmdp_huge_get_and_clear_full() and re-installing it later leaves
a window where a parallel MADV_DONTNEED sees an empty pmd and silently skips
the THP. It is a bit simpler to mitigate here, though: we don't need to
preserve the dirty/young flags, since MADV_FREE clears them anyway, so we can
simply drop the get-and-clear.
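
For reference, the window before this patch looks roughly like this (a
paraphrased sketch of madvise_free_huge_pmd(); the lines after the removed
hunk are reproduced from memory to illustrate the race, not quoted exactly):

	if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
		/* *pmd is temporarily pmd_none() after this... */
		orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
			tlb->fullmm);
		/*
		 * ...so a parallel MADV_DONTNEED, whose zap_pmd_range()
		 * checks pmd_trans_huge(*pmd) without ptl, can treat the
		 * pmd as empty, skip the range and leave the THP mapped...
		 */
		orig_pmd = pmd_mkold(orig_pmd);
		orig_pmd = pmd_mkclean(orig_pmd);

		/* ...once we re-install the (now old and clean) entry. */
		set_pmd_at(tlb->mm, addr, pmd, orig_pmd);
		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
	}

Without the get-and-clear the pmd stays populated the whole time; the only
downside is that a dirty/young bit set by hardware in the meantime may be
lost, which is acceptable for MADV_FREE.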

Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Cc: Minchan Kim <minc...@kernel.org>
---
 mm/huge_memory.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bb2b3646bd78..324217c31ec9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1566,8 +1566,6 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
                deactivate_page(page);
 
        if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
-               orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
-                       tlb->fullmm);
                orig_pmd = pmd_mkold(orig_pmd);
                orig_pmd = pmd_mkclean(orig_pmd);
 
-- 
2.11.0
