On 6/23/20 8:13 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo....@lge.com>
> 
> There is a well-defined standard migration target callback.
> Use it directly.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>

Acked-by: Vlastimil Babka <vba...@suse.cz>

But you could move this to patch 5/8 to reduce churn. And really do the same there
with new_page() in mm/memory-failure.c, to drop the simple wrappers. Only
new_node_page() is complex enough to keep.
Hm wait, new_node_page() is only called by do_migrate_range(), which in turn is
only called by __offline_pages() with an explicit check that all pages come from a
single zone, so the nmask could also be set up just once instead of per page,
making it possible to remove that wrapper too.
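For do_migrate_range() that could look roughly like this, inside the existing
!list_empty(&source) branch (untested sketch; it just reuses new_node_page()'s
current nmask logic with the migration_target_control fields from this series):

	nodemask_t nmask = node_states[N_MEMORY];
	struct migration_target_control mtc = {
		.nmask = &nmask,
		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
	};

	/*
	 * __offline_pages() has checked that the whole range is on a
	 * single zone, so the nid of the first page works for all of
	 * them.
	 */
	mtc.nid = page_to_nid(list_first_entry(&source, struct page, lru));

	/*
	 * Try to allocate from a different node, but reuse this node
	 * if there are no other online nodes to be used (e.g. we are
	 * offlining a part of the only existing node).
	 */
	node_clear(mtc.nid, nmask);
	if (nodes_empty(nmask))
		node_set(mtc.nid, nmask);

	ret = migrate_pages(&source, alloc_migration_target, NULL,
			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);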

But for new_page() you would have to define that mtc->nid == NUMA_NO_NODE means
alloc_migration_target() does page_to_nid(page) by itself.
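I.e. alloc_migration_target() would start with something like (again untested):

	struct migration_target_control *mtc;
	int nid;

	mtc = (struct migration_target_control *)private;
	nid = mtc->nid;
	if (nid == NUMA_NO_NODE)
		nid = page_to_nid(page);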

> ---
>  mm/page_alloc.c     |  9 +++++++--
>  mm/page_isolation.c | 11 -----------
>  2 files changed, 7 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9808339..884dfb5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8359,6 +8359,11 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>       unsigned long pfn = start;
>       unsigned int tries = 0;
>       int ret = 0;
> +     struct migration_target_control mtc = {
> +             .nid = zone_to_nid(cc->zone),
> +             .nmask = &node_states[N_MEMORY],
> +             .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> +     };
>  
>       migrate_prep();
>  
> @@ -8385,8 +8390,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>                                                       &cc->migratepages);
>               cc->nr_migratepages -= nr_reclaimed;
>  
> -             ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
> -                                 NULL, 0, cc->mode, MR_CONTIG_RANGE);
> +             ret = migrate_pages(&cc->migratepages, alloc_migration_target,
> +                             NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
>       }
>       if (ret < 0) {
>               putback_movable_pages(&cc->migratepages);
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index adba031..242c031 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -306,14 +306,3 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
>  
>       return pfn < end_pfn ? -EBUSY : 0;
>  }
> -
> -struct page *alloc_migrate_target(struct page *page, unsigned long private)
> -{
> -     struct migration_target_control mtc = {
> -             .nid = page_to_nid(page),
> -             .nmask = &node_states[N_MEMORY],
> -             .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> -     };
> -
> -     return alloc_migration_target(page, (unsigned long)&mtc);
> -}
> 
