On 12/16/2013 06:14 PM, Vlastimil Babka wrote:
> Since commit ff6a6da60 ("mm: accelerate munlock() treatment of THP pages")
> munlock skips tail pages of a munlocked THP page. However, when the head page
> already has PageMlocked unset, it will not skip the tail pages.
> 
> Commit 7225522bb ("mm: munlock: batch non-THP page isolation and
> munlock+putback using pagevec") has added a PageTransHuge() check which
> contains VM_BUG_ON(PageTail(page)). Sasha Levin found this triggered using
> trinity, on the first tail page of a THP page without PageMlocked flag.
> 
> This patch fixes the issue by skipping tail pages also in the case when the
> PageMlocked flag is unset. There is still a possibility of a race with a THP
> page split between clearing PageMlocked and determining how many pages to
> skip. The race might result in former tail pages not being skipped, which is
> however no longer a bug, as their PageTail flags are cleared during the skip.
> 
> However this race also affects correctness of NR_MLOCK accounting, which is to
> be fixed in a separate patch.
> 
> Cc: sta...@kernel.org
> Reported-by: Sasha Levin <sasha.le...@oracle.com>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> ---
>  mm/mlock.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/mlock.c b/mm/mlock.c
> index d480cd6..3847b13 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -148,21 +148,30 @@ static void __munlock_isolation_failed(struct page *page)
>   */
>  unsigned int munlock_vma_page(struct page *page)
>  {
> -     unsigned int page_mask = 0;
> +     unsigned int nr_pages;
>  
>       BUG_ON(!PageLocked(page));
>  
>       if (TestClearPageMlocked(page)) {
> -             unsigned int nr_pages = hpage_nr_pages(page);
> +             nr_pages = hpage_nr_pages(page);

This assignment can be hoisted above the if, since both branches compute the
same value.

>               mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> -             page_mask = nr_pages - 1;
>               if (!isolate_lru_page(page))
>                       __munlock_isolated_page(page);
>               else
>                       __munlock_isolation_failed(page);
> +     } else {
> +             nr_pages = hpage_nr_pages(page);
>       }
>  
> -     return page_mask;
> +     /*
> +      * Regardless of the original PageMlocked flag, we determine nr_pages
> +      * after touching the flag. This leaves a possible race with a THP page
> +      * split, such that a whole THP page was munlocked, but nr_pages == 1.
> +      * Returning a smaller mask due to that is OK, the worst that can
> +      * happen is subsequent useless scanning of the former tail pages.
> +      * The NR_MLOCK accounting can however become broken.
> +      */
> +     return nr_pages - 1;
>  }

Personally, I'd prefer to make munlock_vma_page() return void.
If not, please also document the return value in the function's description.

>  
>  /**
> @@ -440,7 +449,8 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>  
>       while (start < end) {
>               struct page *page = NULL;
> -             unsigned int page_mask, page_increm;
> +             unsigned int page_mask;
> +             unsigned long page_increm;
>               struct pagevec pvec;
>               struct zone *zone;
>               int zoneid;
> @@ -490,7 +500,9 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
>                               goto next;
>                       }
>               }
> -             page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);
> +             /* It's a bug to munlock in the middle of a THP page */
> +             VM_BUG_ON((start >> PAGE_SHIFT) & page_mask);
> +             page_increm = 1 + page_mask;
>               start += page_increm * PAGE_SIZE;
>  next:
>               cond_resched();
> 

-- 
Regards,
-Bob