On Wed, 7 Jun 2017, Mike Kravetz wrote:
> > @@ -2364,6 +2366,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> > 		ret = alloc_fresh_gigantic_page(h, nodes_allowed);
> > 	else
> > 		ret =
On 06/07/2017 09:03 PM, David Rientjes wrote:
> A few hugetlb allocators loop while calling the page allocator and can
> potentially prevent rescheduling if the page allocator slowpath is not
> utilized.
>
> Conditionally schedule when large numbers of hugepages can be allocated.
>
>
A few hugetlb allocators loop while calling the page allocator and can
potentially prevent rescheduling if the page allocator slowpath is not
utilized.
Conditionally schedule when large numbers of hugepages can be allocated.
Signed-off-by: David Rientjes
---
Based on -mm only to prevent merge