In collapse_file(), in the !is_shmem case, the current checks cannot
guarantee that the locked page is up-to-date. Specifically,
xas_unlock_irq() should not be called before lock_page() and get_page(),
and PageUptodate() must be rechecked after the page is locked.
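
For clarity, the corrected ordering in the !is_shmem path looks roughly
like the sketch below (simplified; the exact error handling in
collapse_file() is in the diff that follows):

	/* with the xarray locked */
	if (trylock_page(page)) {
		get_page(page);
		xas_unlock_irq(&xas);
	} else {
		/* page lock unavailable: give up on this collapse attempt */
		result = SCAN_PAGE_LOCK;
		goto xa_locked;
	}

	/* only with the page lock held can PageUptodate() be trusted */
	if (unlikely(!PageUptodate(page))) {
		result = SCAN_FAIL;
		goto out_unlock;
	}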

With this bug and CONFIG_READ_ONLY_THP_FOR_FS=y, madvise(MADV_HUGEPAGE)'d
.text may contain corrupted data. This is because khugepaged mistakenly
collapses subpages that are not up-to-date into a huge page and then
assumes the whole huge page is up-to-date. This does NOT corrupt data on
disk, because the page is read-only and never written back. Fix this by
properly checking PageUptodate() after locking the page. This check
replaces "VM_BUG_ON_PAGE(!PageUptodate(page), page);".

Also, move the PageDirty() check after locking the page. khugepaged
currently only collapses read-only .text, so the page can only be dirty
if it has not been flushed since the first write. In that case, call
filemap_flush() and defer the collapse.

Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Cc: Kirill A. Shutemov <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: William Kucharski <[email protected]>
Cc: Andrew Morton <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Signed-off-by: Song Liu <[email protected]>

---
Changes v1 => v2:
Rearrange the checks per feedback from Johannes, Rik, and Kirill.

Changes v2 => v3:
Remove redundant checks before trylock_page().

Changes v3 => v4:
Rewrite commit log.
Trigger filemap_flush() for the PageDirty() case. This covers the one-off
situation where the file hasn't been flushed since the first write.
---
 mm/khugepaged.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0a1b4b484ac5..cd480dce92c6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1601,17 +1601,6 @@ static void collapse_file(struct mm_struct *mm,
                                        result = SCAN_FAIL;
                                        goto xa_unlocked;
                                }
-                       } else if (!PageUptodate(page)) {
-                               xas_unlock_irq(&xas);
-                               wait_on_page_locked(page);
-                               if (!trylock_page(page)) {
-                                       result = SCAN_PAGE_LOCK;
-                                       goto xa_unlocked;
-                               }
-                               get_page(page);
-                       } else if (PageDirty(page)) {
-                               result = SCAN_FAIL;
-                               goto xa_locked;
                        } else if (trylock_page(page)) {
                                get_page(page);
                                xas_unlock_irq(&xas);
@@ -1626,7 +1615,12 @@ static void collapse_file(struct mm_struct *mm,
                 * without racing with truncate.
                 */
                VM_BUG_ON_PAGE(!PageLocked(page), page);
-               VM_BUG_ON_PAGE(!PageUptodate(page), page);
+
+               /* double check the page is up to date */
+               if (unlikely(!PageUptodate(page))) {
+                       result = SCAN_FAIL;
+                       goto out_unlock;
+               }
 
                /*
                 * If file was truncated then extended, or hole-punched, before
@@ -1642,6 +1636,24 @@ static void collapse_file(struct mm_struct *mm,
                        goto out_unlock;
                }
 
+               if (!is_shmem && PageDirty(page)) {
+                       /*
+                        * khugepaged only works on read-only fd, so this
+                        * page is dirty because it hasn't been flushed
+                        * since first write. There won't be new dirty
+                        * pages.
+                        *
+                        * Trigger async flush here and hope the writeback
+                        * is done when khugepaged revisits this page.
+                        *
+                        * This is a one-off situation. We are not forcing
+                        * writeback in loop.
+                        */
+                       filemap_flush(mapping);
+                       result = SCAN_FAIL;
+                       goto out_unlock;
+               }
+
                if (isolate_lru_page(page)) {
                        result = SCAN_DEL_PAGE_LRU;
                        goto out_unlock;
-- 
2.17.1
