4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Kirill A. Shutemov <kirill.shute...@linux.intel.com>

commit 655548bf6271b212cd1e4c259da9dbe616348d38 upstream.

The following program triggers BUG() in munlock_vma_pages_range():

        // autogenerated by syzkaller (http://github.com/google/syzkaller)
        #include <sys/mman.h>

        int main()
        {
          mmap((void*)0x20105000ul, 0xc00000ul, 0x2ul, 0x2172ul, -1, 0);
          mremap((void*)0x201fd000ul, 0x4000ul, 0xc00000ul, 0x3ul, 0x203f0000ul);
          return 0;
        }

The test case constructs a situation where munlock_vma_pages_range()
finds a PTE-mapped THP head in the middle of the page table and, by
mistake, skips HPAGE_PMD_NR pages after it.

As a result, on the next iteration it hits the middle of a PMD-mapped
THP and gets upset seeing an mlocked tail page.

The solution is to only skip HPAGE_PMD_NR pages if the THP was mlocked
during munlock_vma_page().  That guarantees that the page is
PMD-mapped, as we never mlock PTE-mapped THPs.

Fixes: e90309c9f772 ("thp: allow mlocked THP again")
Link: http://lkml.kernel.org/r/20161115132703.7s7rrgmwttegc...@black.fi.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Reported-by: Dmitry Vyukov <dvyu...@google.com>
Cc: Konstantin Khlebnikov <koc...@gmail.com>
Cc: Andrey Ryabinin <aryabi...@virtuozzo.com>
Cc: syzkaller <syzkal...@googlegroups.com>
Cc: Andrea Arcangeli <aarca...@redhat.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 mm/mlock.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -190,10 +190,13 @@ unsigned int munlock_vma_page(struct pag
         */
        spin_lock_irq(zone_lru_lock(zone));
 
-       nr_pages = hpage_nr_pages(page);
-       if (!TestClearPageMlocked(page))
+       if (!TestClearPageMlocked(page)) {
+               /* Potentially, PTE-mapped THP: do not skip the rest PTEs */
+               nr_pages = 1;
                goto unlock_out;
+       }
 
+       nr_pages = hpage_nr_pages(page);
        __mod_zone_page_state(zone, NR_MLOCK, -nr_pages);
 
        if (__munlock_isolate_lru_page(page, true)) {
