On Tue, 31 Mar 2015 08:50:46 +0000 Naoya Horiguchi <[email protected]> 
wrote:

> We are not safe from calling isolate_huge_page() on a hugepage concurrently,
> which can put the victim hugepage in an invalid state and trigger a BUG_ON().
> 
> The root cause is that struct page carries no (easily accessible) information
> about a hugepage's activeness. Note that a hugepage's activeness means just
> being linked to hstate->hugepage_activelist, which is not the same as a
> normal page's activeness represented by the PageActive flag.
> 
> Normal pages are isolated by isolate_lru_page(), which prechecks PageLRU
> before isolation, so let's do similarly for hugetlb with a new PageHugeActive
> flag.
> 
> Set/ClearPageHugeActive should be called within hugetlb_lock. But
> hugetlb_cow() and hugetlb_no_page() don't do this, which is justified
> because in these functions SetPageHugeActive is called right after the
> hugepage is allocated and no other thread tries to isolate it.
> 
> ...
>
> +/*
> + * Page flag to show that the hugepage is "active/in-use" (i.e. being linked
> + * to hstate->hugepage_activelist.)
> + *
> + * This function can be called for tail pages, but never returns true for
> + * them.
> + */
> +int PageHugeActive(struct page *page)
> +{
> +     VM_BUG_ON_PAGE(!PageHuge(page), page);
> +     return PageHead(page) && PagePrivate(&page[1]);
> +}

This is not a "page flag".  It is a regular old C function which tests
for a certain page state.  Yes, it kind of pretends to act like a
separate page flag but its use of the peculiar naming convention is
rather misleading.

I mean, if you see

        SetPageHugeActive(page);

then you expect that to be doing set_bit(PG_hugeactive, &page->flags)
but that isn't what it does, so the name is misleading.

Agree?

From: Andrew Morton <[email protected]>
Subject: mm-hugetlb-introduce-pagehugeactive-flag-fix

s/PageHugeActive/page_huge_active/, make it return bool

Cc: Naoya Horiguchi <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

 mm/hugetlb.c |   22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff -puN mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix mm/hugetlb.c
--- a/mm/hugetlb.c~mm-hugetlb-introduce-pagehugeactive-flag-fix
+++ a/mm/hugetlb.c
@@ -925,25 +925,25 @@ struct hstate *size_to_hstate(unsigned l
 }
 
 /*
- * Page flag to show that the hugepage is "active/in-use" (i.e. being linked to
- * hstate->hugepage_activelist.)
+ * Test to determine whether the hugepage is "active/in-use" (i.e. being linked
+ * to hstate->hugepage_activelist.)
  *
  * This function can be called for tail pages, but never returns true for them.
  */
-int PageHugeActive(struct page *page)
+bool page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHuge(page), page);
        return PageHead(page) && PagePrivate(&page[1]);
 }
 
 /* never called for tail page */
-void SetPageHugeActive(struct page *page)
+void set_page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
        SetPagePrivate(&page[1]);
 }
 
-void ClearPageHugeActive(struct page *page)
+void clear_page_huge_active(struct page *page)
 {
        VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
        ClearPagePrivate(&page[1]);
@@ -977,7 +977,7 @@ void free_huge_page(struct page *page)
                restore_reserve = true;
 
        spin_lock(&hugetlb_lock);
-       ClearPageHugeActive(page);
+       clear_page_huge_active(page);
        hugetlb_cgroup_uncharge_page(hstate_index(h),
                                     pages_per_huge_page(h), page);
        if (restore_reserve)
@@ -2998,7 +2998,7 @@ retry_avoidcopy:
        copy_user_huge_page(new_page, old_page, address, vma,
                            pages_per_huge_page(h));
        __SetPageUptodate(new_page);
-       SetPageHugeActive(new_page);
+       set_page_huge_active(new_page);
 
        mmun_start = address & huge_page_mask(h);
        mmun_end = mmun_start + huge_page_size(h);
@@ -3111,7 +3111,7 @@ retry:
                }
                clear_huge_page(page, address, pages_per_huge_page(h));
                __SetPageUptodate(page);
-               SetPageHugeActive(page);
+               set_page_huge_active(page);
 
                if (vma->vm_flags & VM_MAYSHARE) {
                        int err;
@@ -3946,11 +3946,11 @@ bool isolate_huge_page(struct page *page
 
        VM_BUG_ON_PAGE(!PageHead(page), page);
        spin_lock(&hugetlb_lock);
-       if (!PageHugeActive(page) || !get_page_unless_zero(page)) {
+       if (!page_huge_active(page) || !get_page_unless_zero(page)) {
                ret = false;
                goto unlock;
        }
-       ClearPageHugeActive(page);
+       clear_page_huge_active(page);
        list_move_tail(&page->lru, list);
 unlock:
        spin_unlock(&hugetlb_lock);
@@ -3961,7 +3961,7 @@ void putback_active_hugepage(struct page
 {
        VM_BUG_ON_PAGE(!PageHead(page), page);
        spin_lock(&hugetlb_lock);
-       SetPageHugeActive(page);
+       set_page_huge_active(page);
        list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
        spin_unlock(&hugetlb_lock);
        put_page(page);

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/