On 02.09.22 02:35, Alistair Popple wrote:
> Currently we only call flush_cache_page() for the anon_exclusive case.
> However, in both cases we clear the pte, so we should flush the cache.
> 
> Signed-off-by: Alistair Popple <apop...@nvidia.com>
> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
> Cc: sta...@vger.kernel.org
> 
> ---
> 
> New for v4
> ---
>  mm/migrate_device.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 6a5ef9f..4cc849c 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -193,9 +193,9 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>                       bool anon_exclusive;
>                       pte_t swp_pte;
>  
> +                     flush_cache_page(vma, addr, pte_pfn(*ptep));
>                      anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>                       if (anon_exclusive) {
> -                             flush_cache_page(vma, addr, pte_pfn(*ptep));
>                               ptep_clear_flush(vma, addr, ptep);
>  
>                               if (page_try_share_anon_rmap(page)) {

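For anyone reading along, with this applied the relevant part of
migrate_vma_collect_pmd() ends up looking roughly like the sketch below.
This is a simplified excerpt, not the full function; the else branch is
taken from the surrounding context that the hunk does not quote, so treat
it as approximate. It shows why the flush belongs before the branch: both
paths clear the pte.

	flush_cache_page(vma, addr, pte_pfn(*ptep));
	anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
	if (anon_exclusive) {
		/* clears the pte and flushes the TLB entry */
		ptep_clear_flush(vma, addr, ptep);

		if (page_try_share_anon_rmap(page)) {
			/* sharing failed: restore the pte and skip this page */
			...
		}
	} else {
		/* also clears the pte, so the cache flush above is needed here too */
		ptep_get_and_clear(mm, addr, ptep);
	}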
Reviewed-by: David Hildenbrand <da...@redhat.com>

-- 
Thanks,

David / dhildenb
