3.8.13.16 -stable review patch.  If anyone has any objections, please let me 
know.

------------------

From: Mel Gorman <[email protected]>

commit b0943d61b8fa420180f92f64ef67662b4f6cc493 upstream.

THP migration can fail for a variety of reasons.  Avoid flushing the TLB
to deal with THP migration races until the copy is ready to start.

Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Kamal Mostafa <[email protected]>
---
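In short, the mm_tlb_flush_pending() check moves out of the do_huge_pmd_numa_page()
fault path and into migrate_misplaced_transhuge_page(), so the flush is only issued
once a migration copy target is actually about to be prepared. For quick reference,
the relocated check (as it appears in the mm/migrate.c hunk below) ends up reading:

	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, mmun_start, mmun_end);

	/* Prepare a page as a migration target */
	__set_page_locked(new_page);
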
 mm/huge_memory.c | 7 -------
 mm/migrate.c     | 3 +++
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b43f35a..21ca328 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1379,13 +1379,6 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
        }
 
        /*
-        * The page_table_lock above provides a memory barrier
-        * with change_protection_range.
-        */
-       if (mm_tlb_flush_pending(mm))
-               flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-
-       /*
         * Migrate the THP to the requested node, returns with page unlocked
         * and pmd_numa cleared.
         */
diff --git a/mm/migrate.c b/mm/migrate.c
index 8e117b7..8edfa00 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1715,6 +1715,9 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
                goto out_fail;
        }
 
+       if (mm_tlb_flush_pending(mm))
+               flush_tlb_range(vma, mmun_start, mmun_end);
+
        /* Prepare a page as a migration target */
        __set_page_locked(new_page);
        SetPageSwapBacked(new_page);
-- 
1.8.3.2

