On Thu, 5 Sep 2019, Mike Kravetz wrote:

> I don't have a specific test for this.  It is somewhat common for people
> to want to allocate "as many hugetlb pages as possible".  Therefore, they
> will try to allocate more pages than reasonable for their environment and
> take what they can get.  I 'tested' by simply creating some background
> activity and then seeing how many hugetlb pages could be allocated; of
> course, this involved many tries over time in a loop.
> 
> This patch did not cause premature allocation failures in my limited testing.
> The number of pages which could be allocated with and without patch were
> pretty much the same.
> 
> Do note that I tested on top of Andrew's tree which contains this series:
> http://lkml.kernel.org/r/20190806014744.15446-1-mike.krav...@oracle.com
> Patch 3 in that series causes allocations to fail sooner in the case of
> COMPACT_DEFERRED:
> http://lkml.kernel.org/r/20190806014744.15446-4-mike.krav...@oracle.com
> 
> hugetlb allocations have the __GFP_RETRY_MAYFAIL flag set.  They are willing
> to retry and wait and callers are aware of this.  Even though my limited
> testing did not show regressions caused by this patch, I would prefer if the
> quick exit did not apply to __GFP_RETRY_MAYFAIL requests.

Good!  I think that is the ideal way of handling it: by testing for 
__GFP_RETRY_MAYFAIL in this patch, we can preserve the caller's stated 
preference to keep looping and retrying (while still eventually failing) 
specifically for hugetlb allocations.

I can add that to the formal proposal of patches 3 and 4 in this series, 
assuming we get 5.3 settled by applying the reverts in patches 1 and 2, so 
that we don't end up with different Linux versions having different default 
and madvise allocation policies wrt NUMA.
