On Fri 28-07-17 13:48:28, Mike Kravetz wrote:
> On 07/26/2017 03:50 AM, Michal Hocko wrote:
> > Hi,
> > I've just noticed that alloc_gigantic_page ignores the movability of
> > the gigantic page and scans any existing zone. Since
> > hugepage_migration_supported only supports 2MB and pgd level hugepages,
> > 1GB pages are not migratable, and allocating them from a movable zone
> > breaks the basic expectation of this zone. Standard hugetlb
> > allocations try to avoid that by using htlb_alloc_mask and I believe
> > we should do the same for gigantic pages as well.
> > 
> > I suspect this behavior is not intentional. What do you think about the
> > following untested patch?
> > ---
> > From 542d32c1eca7dcf38afca1a91bca4a472f6e8651 Mon Sep 17 00:00:00 2001
> > From: Michal Hocko <mho...@suse.com>
> > Date: Wed, 26 Jul 2017 12:43:43 +0200
> > Subject: [PATCH] mm, hugetlb: do not allocate non-migratable gigantic pages
> >  from movable zones
> > 
> > alloc_gigantic_page doesn't consider the movability of the gigantic
> > hugetlb page when scanning eligible ranges for the allocation. As 1GB
> > hugetlb pages are currently not movable, this can break the movable
> > zone assumption that all allocations are migratable, and as such break
> > memory hotplug.
> > 
> > Reorganize the code to use the standard zonelist allocation scheme
> > that we use for standard hugetlb pages. htlb_alloc_mask will ensure
> > that only migratable hugetlb pages will ever see a movable zone.
> > 
> > Fixes: 944d9fec8d7a ("hugetlb: add support for gigantic page allocation at runtime")
> > Signed-off-by: Michal Hocko <mho...@suse.com>
> 
> This seems reasonable to me, and I like the fact that the code is more
> like the default huge page case.  I don't see any issues with the code.
> I did some simple smoke testing of allocating 1G pages with the new code
> and ensuring they ended up as expected.
>
> Reviewed-by: Mike Kravetz <mike.krav...@oracle.com>

Thanks a lot Mike! I will play with this some more today and tomorrow
and send the final patch later this week.
-- 
Michal Hocko
SUSE Labs
