On 02.09.22 02:35, Alistair Popple wrote:
> migrate_vma_setup() has a fast path in migrate_vma_collect_pmd() that
> installs migration entries directly if it can lock the migrating page.
> When removing a dirty pte, the dirty bit is supposed to be carried over
> to the underlying page to prevent it from being lost.
> 
> Currently migrate_vma_*() can only be used for private anonymous
> mappings. That means loss of the dirty bit usually doesn't result in
> data loss because these pages are typically not file-backed. However,
> pages may be backed by swap storage, which can result in data loss if
> an attempt is made to migrate a dirty page that doesn't yet have the
> PageDirty flag set.
> 
> In this case migration will fail due to unexpected references, but the
> dirty pte bit will be lost. If the page is subsequently reclaimed, data
> won't be written back to swap storage as the page is considered
> uptodate, resulting in data loss when it is later accessed.
> 
> Prevent this by copying the dirty bit to the page when removing the pte
> to match what try_to_migrate_one() does.
> 
> Signed-off-by: Alistair Popple <apop...@nvidia.com>
> Acked-by: Peter Xu <pet...@redhat.com>
> Reviewed-by: "Huang, Ying" <ying.hu...@intel.com>
> Reported-by: "Huang, Ying" <ying.hu...@intel.com>
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Cc: sta...@vger.kernel.org
> 
> ---
> 
> Changes for v4:
> 
>  - Added Reviewed-by
> 
> Changes for v3:
> 
>  - Defer TLB flushing
>  - Split a TLB flushing fix into a separate change.
> 
> Changes for v2:
> 
>  - Fixed up Reported-by tag.
>  - Added Peter's Acked-by.
>  - Atomically read and clear the pte to prevent the dirty bit getting
>    set after reading it.
>  - Added Fixes tag.
> ---
>  mm/migrate_device.c |  9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 4cc849c..dbf6c7a 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -7,6 +7,7 @@
>  #include <linux/export.h>
>  #include <linux/memremap.h>
>  #include <linux/migrate.h>
> +#include <linux/mm.h>
>  #include <linux/mm_inline.h>
>  #include <linux/mmu_notifier.h>
>  #include <linux/oom.h>
> @@ -196,7 +197,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>                       flush_cache_page(vma, addr, pte_pfn(*ptep));
>                       anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>                       if (anon_exclusive) {
> -                             ptep_clear_flush(vma, addr, ptep);
> +                             pte = ptep_clear_flush(vma, addr, ptep);
>  
>                               if (page_try_share_anon_rmap(page)) {
>                                       set_pte_at(mm, addr, ptep, pte);
> @@ -206,11 +207,15 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>                                       goto next;
>                               }
>                       } else {
> -                             ptep_get_and_clear(mm, addr, ptep);
> +                             pte = ptep_get_and_clear(mm, addr, ptep);
>                       }
>  
>                       migrate->cpages++;
>  
> +                     /* Set the dirty flag on the folio now the pte is gone. */
> +                     if (pte_dirty(pte))
> +                             folio_mark_dirty(page_folio(page));
> +
>                       /* Setup special migration page table entry */
>                       if (mpfn & MIGRATE_PFN_WRITE)
>                               entry = make_writable_migration_entry(


This matches what we do in try_to_unmap_one().
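
For reference, the sequence in try_to_unmap_one() (and in
try_to_migrate_one(), which the commit message cites) looks roughly
like the following. This is a paraphrased sketch of the mm/rmap.c
pattern, not a verbatim upstream hunk:

	/* Paraphrased from mm/rmap.c; identifier names follow the unmap path. */
	pteval = ptep_clear_flush(vma, address, pvmw.pte);

	/* Set the dirty flag on the folio now the pte is gone. */
	if (pte_dirty(pteval))
		folio_mark_dirty(folio);

The key property is that the pte is read and cleared atomically before
the dirty bit is transferred, so a concurrent write can't dirty the pte
after it has been sampled.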

Acked-by: David Hildenbrand <da...@redhat.com>

-- 
Thanks,

David / dhildenb
