On Thu, Feb 28, 2019 at 10:21:54AM +0100, Michal Hocko wrote:
> On Thu 21-02-19 10:42:12, Oscar Salvador wrote:
> [...]
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index d5f7afda67db..04f6695b648c 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -1337,8 +1337,7 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> >             if (!PageHuge(page))
> >                     continue;
> >             head = compound_head(page);
> > -           if (hugepage_migration_supported(page_hstate(head)) &&
> > -               page_huge_active(head))
> > +           if (page_huge_active(head))
> >                     return pfn;
> >             skip = (1 << compound_order(head)) - (page - head);
> >             pfn += skip - 1;
> 
> Is this part correct? Say we have a gigantic page which is migrateable.
> Now scan_movable_pages would skip it and we will not migrate it, no?

All non-migratable hugepages should have been caught earlier in has_unmovable_pages():

<--
                if (PageHuge(page)) {
                        struct page *head = compound_head(page);
                        unsigned int skip_pages;

                        if (!hugepage_migration_supported(page_hstate(head)))
                                goto unmovable;
-->

So there is no need to check migratability again here, as that property does not change.
To put it another way, all hugepages found in scan_movable_pages() are already known to be
migratable.
In scan_movable_pages() we only need to check whether the hugepage, gigantic or not, is
in use (aka active) in order to migrate it.
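
For reference, after this patch the hugetlb handling in scan_movable_pages() boils down to
something like the sketch below (loop body only, variable declarations omitted, taken from
the hunk quoted above):

<--
		if (!PageHuge(page))
			continue;
		head = compound_head(page);
		/* migratability was already verified by has_unmovable_pages() */
		if (page_huge_active(head))
			return pfn;
		/* not active: skip the remaining tail pages of this hugepage */
		skip = (1 << compound_order(head)) - (page - head);
		pfn += skip - 1;
-->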

> 
> > @@ -1378,10 +1377,6 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> >  
> >             if (PageHuge(page)) {
> >                     struct page *head = compound_head(page);
> > -                   if (compound_order(head) > PFN_SECTION_SHIFT) {
> > -                           ret = -EBUSY;
> > -                           break;
> > -                   }
> >                     pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
> >                     isolate_huge_page(head, &source);
> >                     continue;
> 
> I think it would be much easier to have only this check removed in this
> patch. Because it is obviously bogus and wrong as well. The other check
> might be considered in a separate patch.

I do not have an issue sending both changes separately.
To be clear, this is the check we need to remove in order to let 1GB-hugetlb
offlining proceed.
The check removed from scan_movable_pages() is dropped only because it is redundant:
we already checked for that condition in has_unmovable_pages()
(when isolating the range).
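
For clarity, once that check is gone the hugetlb branch in do_migrate_range() reduces to
roughly the following (a sketch based on the hunk quoted above), so a gigantic page is simply
isolated for migration like any other hugepage instead of failing with -EBUSY:

<--
		if (PageHuge(page)) {
			struct page *head = compound_head(page);

			/* advance pfn past the whole hugepage, gigantic or not */
			pfn = page_to_pfn(head) + (1 << compound_order(head)) - 1;
			isolate_huge_page(head, &source);
			continue;
		}
-->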

-- 
Oscar Salvador
SUSE L3
