The memory error handler calls try_to_unmap() for error pages in various
states. If the error page is an mlocked page, error handling can fail
with a "still referenced by 1 users" message. This is because the page
is linked into the lru cache and stays there after the following call
chain:

  try_to_unmap_one
    page_remove_rmap
      clear_page_mlock
        putback_lru_page
          lru_cache_add

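For illustration, the extra reference can be modeled in userspace C.
This is a toy sketch, not the real kernel code, and every name in it is
a stand-in: lru_cache_add() pins a reference and parks the page in a
per-CPU pagevec, so the page keeps looking referenced until the pagevec
is drained (which is what shake_page() arranges via
lru_add_drain_all()).

  #include <stdio.h>

  /* Toy model: a "pagevec" caches pages and pins a reference on each. */
  struct page { int refcount; };

  #define PAGEVEC_SIZE 14
  struct pagevec {
          int nr;
          struct page *pages[PAGEVEC_SIZE];
  };

  static struct pagevec lru_pvec;   /* stands in for the per-CPU cache */

  static void get_page(struct page *p) { p->refcount++; }
  static void put_page(struct page *p) { p->refcount--; }

  /* Like lru_cache_add(): take a reference, park the page in the cache. */
  static void lru_cache_add(struct page *p)
  {
          get_page(p);
          lru_pvec.pages[lru_pvec.nr++] = p;
  }

  /* Like lru_add_drain_all(), called from shake_page(): flush the cache
   * and drop its references. */
  static void lru_add_drain_all(void)
  {
          for (int i = 0; i < lru_pvec.nr; i++)
                  put_page(lru_pvec.pages[i]);
          lru_pvec.nr = 0;
  }

  int main(void)
  {
          struct page p = { .refcount = 1 };  /* caller's reference */

          lru_cache_add(&p);
          /* The error handler sees one reference too many and bails. */
          printf("before drain: refcount = %d\n", p.refcount);  /* 2 */

          lru_add_drain_all();
          printf("after drain:  refcount = %d\n", p.refcount);  /* 1 */
          return 0;
  }
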
memory_failure() calls shake_page() to handle a similar issue, but the
current code doesn't cover this case because shake_page() is called
only before try_to_unmap(). So this patch adds another shake_page()
call after try_to_unmap() for mlocked pages.

Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Naoya Horiguchi <[email protected]>
---
 mm/memory-failure.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git v4.11-rc6-mmotm-2017-04-13-14-50/mm/memory-failure.c v4.11-rc6-mmotm-2017-04-13-14-50_patched/mm/memory-failure.c
index 77cf9c3..57f07ec 100644
--- v4.11-rc6-mmotm-2017-04-13-14-50/mm/memory-failure.c
+++ v4.11-rc6-mmotm-2017-04-13-14-50_patched/mm/memory-failure.c
@@ -919,6 +919,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
        bool unmap_success;
        int kill = 1, forcekill;
        struct page *hpage = *hpagep;
+       bool mlocked = PageMlocked(hpage);
 
        /*
         * Here we are interested only in user-mapped pages, so skip any
@@ -983,6 +984,13 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
                       pfn, page_mapcount(hpage));
 
        /*
+        * try_to_unmap() might put mlocked page in lru cache, so call
+        * shake_page() again to ensure that it's flushed.
+        */
+       if (mlocked)
+               shake_page(hpage, 0);
+
+       /*
         * Now that the dirty bit has been propagated to the
         * struct page and all unmaps done we can decide if
         * killing is needed or not.  Only kill when the page
-- 
2.7.0
