On Fri, Feb 03, 2017 at 03:59:30PM +0800, Yisheng Xie wrote:
> We had considered all non-lru pages unmovable before commit bda807d44454
> ("mm: migrate: support non-lru movable page migration"). But now some
> non-lru pages, such as zsmalloc and virtio-balloon pages, have become
> movable. So we can offline such memory blocks by using non-lru page
> migration.
>
> This patch straightforwardly adds non-lru migration support: it adds
> non-lru handling to the functions that scan over pfns, collect the pages
> to be migrated, and isolate them before migration.
>
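(For context on what "non-lru movable" means here: a driver has to opt its
pages into migration via the interface added by bda807d44454. The sketch
below is only illustrative; the foo_* names are made up and the callback
bodies are placeholders, but the three hooks and __SetPageMovable() are the
interface that commit introduced.)

	#include <linux/fs.h>
	#include <linux/migrate.h>
	#include <linux/mm.h>

	/* Driver-defined migration hooks; real bodies elided. */
	static bool foo_isolate_page(struct page *page, isolate_mode_t mode)
	{
		/* pin driver-private state so the page can be moved */
		return true;
	}

	static int foo_migratepage(struct address_space *mapping,
				   struct page *newpage, struct page *page,
				   enum migrate_mode mode)
	{
		/* copy contents/metadata from page to newpage */
		return MIGRATEPAGE_SUCCESS;
	}

	static void foo_putback_page(struct page *page)
	{
		/* undo foo_isolate_page() when migration fails */
	}

	static const struct address_space_operations foo_aops = {
		.isolate_page	= foo_isolate_page,
		.migratepage	= foo_migratepage,
		.putback_page	= foo_putback_page,
	};

	/* called by the driver when it sets up such a page */
	static void foo_make_page_movable(struct page *page,
					  struct address_space *mapping)
	{
		mapping->a_ops = &foo_aops;
		__SetPageMovable(page, mapping);
	}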
> Signed-off-by: Yisheng Xie <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Minchan Kim <[email protected]>
> Cc: Naoya Horiguchi <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Andi Kleen <[email protected]>
> Cc: Hanjun Guo <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Reza Arbab <[email protected]>
> Cc: Taku Izumi <[email protected]>
> Cc: Vitaly Kuznetsov <[email protected]>
> Cc: Xishi Qiu <[email protected]>
> ---
> mm/memory_hotplug.c | 28 +++++++++++++++++-----------
> mm/page_alloc.c | 8 ++++++--
> 2 files changed, 23 insertions(+), 13 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index ca2723d..ea1be08 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1516,10 +1516,10 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
> }
>
> /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages
> - * and hugepages). We scan pfn because it's much easier than scanning over
> - * linked list. This function returns the pfn of the first found movable
> - * page if it's found, otherwise 0.
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> + * non-lru movable pages and hugepages). We scan pfn because it's much
> + * easier than scanning over linked list. This function returns the pfn
> + * of the first found movable page if it's found, otherwise 0.
> */
> static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> {
> @@ -1530,6 +1530,8 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
> page = pfn_to_page(pfn);
> if (PageLRU(page))
> return pfn;
> + if (__PageMovable(page))
> + return pfn;
> if (PageHuge(page)) {
> if (page_huge_active(page))
> return pfn;
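(Side note on the new test: __PageMovable() is a cheap, lockless check on the
type bits encoded in page->mapping, which is fine for a scan like this that
only needs a hint and re-checks later. Around this kernel version it boils
down to roughly the following, from include/linux/page-flags.h:)

	static __always_inline int __PageMovable(struct page *page)
	{
		return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
					PAGE_MAPPING_MOVABLE;
	}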
> @@ -1606,21 +1608,25 @@ static struct page *new_node_page(struct page *page, unsigned long private,
> if (!get_page_unless_zero(page))
> continue;
> /*
> - * We can skip free pages. And we can only deal with pages on
> - * LRU.
> + * We can skip free pages. And we can deal with pages on
> + * LRU and non-lru movable pages.
> */
> - ret = isolate_lru_page(page);
> + if (PageLRU(page))
> + ret = isolate_lru_page(page);
> + else
> + ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> if (!ret) { /* Success */
> put_page(page);
> list_add_tail(&page->lru, &source);
> move_pages--;
> - inc_node_page_state(page, NR_ISOLATED_ANON +
> - page_is_file_cache(page));
> + if (!__PageMovable(page))

If this check is identical to "if (PageLRU(page))" in this context,
PageLRU(page) looks better because you already add the same "if" above.
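Just to illustrate, an untested sketch that reuses the existing PageLRU
branch for the NR_ISOLATED accounting (whether that is equivalent to the
!__PageMovable() test above is exactly the question):

		if (PageLRU(page)) {
			ret = isolate_lru_page(page);
			if (!ret)
				inc_node_page_state(page, NR_ISOLATED_ANON +
						    page_is_file_cache(page));
		} else {
			ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
		}

with the "if (!ret)" success path below then left untouched.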
Otherwise, looks good to me.
Thanks,
Naoya Horiguchi