On Tue, Jul 07, 2020 at 01:46:14PM +0200, Michal Hocko wrote:
> On Tue 07-07-20 16:44:45, Joonsoo Kim wrote:
> [...]
> > @@ -1551,9 +1552,12 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
> >
> > 	gfp_mask |= htlb_alloc_mask(h);
> > 	return alloc_huge_page_nodemask(h, nid, mtc->nmask,
> > -
From: Joonsoo Kim
There is a well-defined migration target allocation callback. It is mostly
similar to new_non_cma_page(), except for the handling of CMA pages.
This patch adds the CMA consideration to the standard migration target
allocation callback and uses it in gup.c.
Acked-by: Vlastimil Babka