On Sat, Feb 28, 2015 at 02:45:38PM -0800, [email protected] wrote:
> 
> The patch below does not apply to the 3.14-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <[email protected]>.
> 
> thanks,

This patch depends on the following one:

  commit 81d0fa623c5b8dbd5279d9713094b0f9b0a00fb4
  Author: Peter Feiner <[email protected]>
  Date:   Thu Oct 9 15:28:32 2014 -0700
  
      mm: softdirty: unmapped addresses between VMAs are clean

Commit 81d0fa623c5b applies cleanly to v3.14.34, so we can backport
05fbf357d941 on top of it.

As for the change in 81d0fa623c5b, the fix looks clear and simple (it is
contained in a single function), so it's worth backporting as a stable fix too.
Or is there a reason that 81d0fa623c5b was not tagged for stable trees?
Any idea, Peter?

Thanks,
Naoya Horiguchi


> 
> greg k-h
> 
> ------------------ original commit in Linus's tree ------------------
> 
> From 05fbf357d94152171bc50f8a369390f1f16efd89 Mon Sep 17 00:00:00 2001
> From: Konstantin Khlebnikov <[email protected]>
> Date: Wed, 11 Feb 2015 15:27:31 -0800
> Subject: [PATCH] proc/pagemap: walk page tables under pte lock
> 
> Lockless access to pte in pagemap_pte_range() might race with page
> migration and trigger BUG_ON(!PageLocked()) in migration_entry_to_page():
> 
> CPU A (pagemap)                           CPU B (migration)
>                                           lock_page()
>                                           try_to_unmap(page, TTU_MIGRATION...)
>                                                make_migration_entry()
>                                                set_pte_at()
> <read *pte>
> pte_to_pagemap_entry()
>                                           remove_migration_ptes()
>                                           unlock_page()
>     if(is_migration_entry())
>         migration_entry_to_page()
>             BUG_ON(!PageLocked(page))
> 
> Also lockless read might be non-atomic if pte is larger than wordsize.
> Other pte walkers (smaps, numa_maps, clear_refs) already lock ptes.
> 
> Fixes: 052fb0d635df ("proc: report file/anon bit in /proc/pid/pagemap")
> Signed-off-by: Konstantin Khlebnikov <[email protected]>
> Reported-by: Andrey Ryabinin <[email protected]>
> Reviewed-by: Cyrill Gorcunov <[email protected]>
> Acked-by: Naoya Horiguchi <[email protected]>
> Acked-by: Kirill A. Shutemov <[email protected]>
> Cc: <[email protected]>  [3.5+]
> Signed-off-by: Andrew Morton <[email protected]>
> Signed-off-by: Linus Torvalds <[email protected]>
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e6e0abeb5d12..eeab30fcffcc 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1056,7 +1056,7 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>       struct vm_area_struct *vma;
>       struct pagemapread *pm = walk->private;
>       spinlock_t *ptl;
> -     pte_t *pte;
> +     pte_t *pte, *orig_pte;
>       int err = 0;
>  
>       /* find the first VMA at or above 'addr' */
> @@ -1117,15 +1117,19 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>               BUG_ON(is_vm_hugetlb_page(vma));
>  
>               /* Addresses in the VMA. */
> -             for (; addr < min(end, vma->vm_end); addr += PAGE_SIZE) {
> +             orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> +             for (; addr < min(end, vma->vm_end); pte++, addr += PAGE_SIZE) {
>                       pagemap_entry_t pme;
> -                     pte = pte_offset_map(pmd, addr);
> +
>                       pte_to_pagemap_entry(&pme, pm, vma, addr, *pte);
> -                     pte_unmap(pte);
>                       err = add_to_pagemap(addr, &pme, pm);
>                       if (err)
> -                             return err;
> +                             break;
>               }
> +             pte_unmap_unlock(orig_pte, ptl);
> +
> +             if (err)
> +                     return err;
>  
>               if (addr == end)
>                       break;
> --
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html