4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Minchan Kim <minc...@kernel.org>

commit 4855e4a7f29d6d10b0b9c84e189c770c9a94e91e upstream.

There is a race between page freeing and unreserving a highatomic
pageblock.

 CPU 0                              CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
                                    unreserve_highatomic_pageblock
                                    spin_lock_irqsave(&zone->lock)
                                    move_freepages_block
                                    set_pageblock_migratetype(page)
                                    spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

Because of this race, a page on CPU 0 can end up on a non-highatomic
free list, since the pageblock's type has already been changed.  As a
result, the highatomic unreserve logic can decrease the reserved count
for the same pageblock several times, causing a mismatch between
nr_reserved_highatomic and the actual number of reserved pageblocks.
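
For illustration, the stale read can be modeled with a minimal
userspace sketch; the names are hypothetical and a pthread mutex stands
in for zone->lock (this is not the kernel code):

/*
 * Minimal userspace sketch of the stale-read race above.
 * Compile with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int block_type;                  /* stands in for the pageblock type */

static void *freeing_path(void *arg)    /* CPU 0 */
{
        int cached = block_type;        /* read without the lock: may go stale */

        usleep(1000);                   /* widen the race window */
        pthread_mutex_lock(&lock);
        /* like __free_one_page(mt), this uses the possibly stale value */
        printf("cached type %d, actual type %d\n", cached, block_type);
        pthread_mutex_unlock(&lock);
        return NULL;
}

static void *unreserve_path(void *arg)  /* CPU 1 */
{
        pthread_mutex_lock(&lock);
        block_type = 1;                 /* set_pageblock_migratetype() analogue */
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, freeing_path, NULL);
        pthread_create(&b, NULL, unreserve_path, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Depending on scheduling, the cached and actual types may disagree,
which is exactly the staleness the patch has to tolerate.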

So, this patch verifies whether the pageblock is highatomic and
decreases the count only if it is.
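
The fix is an instance of the guarded-decrement idiom: re-check the
authoritative state under the lock before touching the counter.  A
hedged sketch with hypothetical names (not the actual kernel code):

#include <pthread.h>

enum blk_type { TYPE_MOVABLE, TYPE_HIGHATOMIC };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static enum blk_type block_type = TYPE_HIGHATOMIC;
static unsigned long nr_reserved = 512;         /* nr_reserved_highatomic analogue */
static const unsigned long pageblock_nr_pages = 512;

static void unreserve_block(void)
{
        pthread_mutex_lock(&lock);
        /* Only adjust the count if the block is still highatomic. */
        if (block_type == TYPE_HIGHATOMIC) {
                /* min() against nr_reserved guards the underflow case. */
                nr_reserved -= (pageblock_nr_pages < nr_reserved) ?
                                pageblock_nr_pages : nr_reserved;
                block_type = TYPE_MOVABLE;
        }
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        unreserve_block();
        unreserve_block();      /* no-op: the count is adjusted only once */
        return 0;
}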

Link: http://lkml.kernel.org/r/1476259429-18279-3-git-send-email-minc...@kernel.org
Signed-off-by: Minchan Kim <minc...@kernel.org>
Acked-by: Vlastimil Babka <vba...@suse.cz>
Acked-by: Mel Gorman <mgor...@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Cc: Sangseok Lee <sangseok....@lge.com>
Cc: Michal Hocko <mho...@suse.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Cc: Miles Chen <miles.c...@mediatek.com>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 mm/page_alloc.c |   24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1748,13 +1748,25 @@ static void unreserve_highatomic_pageblo
                                                struct page, lru);
 
                        /*
-                        * It should never happen but changes to locking could
-                        * inadvertently allow a per-cpu drain to add pages
-                        * to MIGRATE_HIGHATOMIC while unreserving so be safe
-                        * and watch for underflows.
+                        * In the page freeing path, the migratetype change is
+                        * racy, so we can encounter several free pages in a
+                        * pageblock in this loop although we changed the
+                        * pageblock type from highatomic to ac->migratetype.
+                        * So we should adjust the count only once.
                         */
-                       zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-                               zone->nr_reserved_highatomic);
+                       if (get_pageblock_migratetype(page) ==
+                                                       MIGRATE_HIGHATOMIC) {
+                               /*
+                                * It should never happen but changes to
+                                * locking could inadvertently allow a per-cpu
+                                * drain to add pages to MIGRATE_HIGHATOMIC
+                                * while unreserving so be safe and watch for
+                                * underflows.
+                                */
+                               zone->nr_reserved_highatomic -= min(
+                                               pageblock_nr_pages,
+                                               zone->nr_reserved_highatomic);
+                       }
 
                        /*
                         * Convert to ac->migratetype and avoid the normal

