On Mon, Jan 25, 2021 at 04:34:27PM -0800, Dave Hansen wrote:
> 
> From: Dave Hansen <[email protected]>
> 
> This is mostly derived from a patch from Yang Shi:
> 
>       
> https://lore.kernel.org/linux-mm/[email protected]/
> 
> Add code to the reclaim path (shrink_page_list()) to "demote" data
> to another NUMA node instead of discarding the data.  This always
> avoids the cost of I/O needed to read the page back in and sometimes
> avoids the writeout cost when the page is dirty.
> 
> A second pass through shrink_page_list() will be made if any demotions
> fail.  This essentially falls back to normal reclaim behavior in the
> case that demotions fail.  Previous versions of this patch may have
> simply failed to reclaim pages which were eligible for demotion but
> were unable to be demoted in practice.
> 
> Note: This just adds the start of infrastructure for migration. It is
> actually disabled next to the FIXME in migrate_demote_page_ok().
> 
> Signed-off-by: Dave Hansen <[email protected]>
> Cc: Yang Shi <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Dan Williams <[email protected]>
> Cc: osalvador <[email protected]>
> 
> --
> 
> changes from 202010:
>  * add MR_NUMA_MISPLACED to trace MIGRATE_REASON define
>  * make migrate_demote_page_ok() static, remove 'sc' arg until
>    later patch
>  * remove unnecessary alloc_demote_page() hugetlb warning
>  * Simplify alloc_demote_page() gfp mask.  Depend on
>    __GFP_NORETRY to make it lightweight instead of fancier
>    stuff like leaving out __GFP_IO/FS.
>  * Allocate migration page with alloc_migration_target()
>    instead of allocating directly.
> changes from 20200730:
>  * Add another pass through shrink_page_list() when demotion
>    fails.
> ---

[...]
  
> +static struct page *alloc_demote_page(struct page *page, unsigned long node)
> +{
> +        struct migration_target_control mtc = {
> +             /*
> +              * Fail quickly and quietly.  Page will likely
> +              * just be discarded instead of migrated.
> +              */
> +             .gfp_mask = GFP_HIGHUSER | __GFP_NORETRY | __GFP_NOWARN,
> +             .nid = node
> +     };
> +
> +        return alloc_migration_target(page, (unsigned long)&mtc);
> +}

Migration for THP pages will set direct reclaim. I guess that is fine, right?
AFAIK, direct reclaim will only be tried once with __GFP_NORETRY.

> +
> +/*
> + * Take pages on @demote_list and attempt to demote them to
> + * another node.  Pages which are not demoted are left on
> + * @demote_pages.
> + */
> +static unsigned int demote_page_list(struct list_head *demote_pages,
> +                                  struct pglist_data *pgdat,
> +                                  struct scan_control *sc)
> +{
> +     int target_nid = next_demotion_node(pgdat->node_id);
> +     unsigned int nr_succeeded = 0;
> +     int err;
> +
> +     if (list_empty(demote_pages))
> +             return 0;
> +
> +     /* Demotion ignores all cpuset and mempolicy settings */
> +     err = migrate_pages(demote_pages, alloc_demote_page, NULL,
> +                         target_nid, MIGRATE_ASYNC, MR_DEMOTION,
> +                         &nr_succeeded);
> +
> +     return nr_succeeded;
> +}
> +
>  /*
>   * shrink_page_list() returns the number of reclaimed pages
>   */
> @@ -1078,12 +1135,15 @@ static unsigned int shrink_page_list(str
>  {
>       LIST_HEAD(ret_pages);
>       LIST_HEAD(free_pages);
> +     LIST_HEAD(demote_pages);
>       unsigned int nr_reclaimed = 0;
>       unsigned int pgactivate = 0;
> +     bool do_demote_pass = true;
>  
>       memset(stat, 0, sizeof(*stat));
>       cond_resched();
>  
> +retry:
>       while (!list_empty(page_list)) {
>               struct address_space *mapping;
>               struct page *page;
> @@ -1233,6 +1293,16 @@ static unsigned int shrink_page_list(str
>               }
>  
>               /*
> +              * Before reclaiming the page, try to relocate
> +              * its contents to another node.
> +              */
> +             if (do_demote_pass && migrate_demote_page_ok(page)) {
> +                     list_add(&page->lru, &demote_pages);
> +                     unlock_page(page);
> +                     continue;
> +             }

Should we keep it simple for now and only try to demote those pages that are
free of cpuset and mempolicy constraints?
Demoting a page to a NUMA node that does not fall into its allowed set would
violate those constraints, right?
So I think we should leave those pages alone for now.

> +
> +             /*
>                * Anonymous process memory has backing store?
>                * Try to allocate it some swap space here.
>                * Lazyfree page could be freed directly
> @@ -1479,6 +1549,17 @@ keep:
>               list_add(&page->lru, &ret_pages);
>               VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
>       }
> +     /* 'page_list' is always empty here */
> +
> +     /* Migrate pages selected for demotion */
> +     nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
> +     /* Pages that could not be demoted are still in @demote_pages */
> +     if (!list_empty(&demote_pages)) {
> +             /* Pages which failed to demote go back on @page_list for retry: */
> +             list_splice_init(&demote_pages, page_list);
> +             do_demote_pass = false;
> +             goto retry;
> +     }
>  
>       pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
>  
> _
> 

-- 
Oscar Salvador
SUSE L3
