On Tue 01-09-20 08:46:15, Pavel Tatashin wrote:
> There is a race during page offline that can lead to an infinite loop:
> a page never ends up on a buddy list, so __offline_pages() keeps
> retrying until a termination signal is received.
> 
> Thread#1 - a new process:
> 
> load_elf_binary
>  begin_new_exec
>   exec_mmap
>    mmput
>     exit_mmap
>      tlb_finish_mmu
>       tlb_flush_mmu
>        release_pages
>         free_unref_page_list
>          free_unref_page_prepare
>           set_pcppage_migratetype(page, migratetype);
>              // cache migration type in page->index (below MIGRATE_PCPTYPES)
> 
> Thread#2 - hot-removes memory
> __offline_pages
>   start_isolate_page_range
>     set_migratetype_isolate
>       set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>         // set pageblock migration type to MIGRATE_ISOLATE
>         drain_all_pages(zone);
>              // drain per-cpu page lists to buddy allocator.

It is not really clear to me how we could have passed
has_unmovable_pages at this stage when the page is not PageBuddy. Is
this because you are using Movable Zones?

> 
> Thread#1 - continue
>          free_unref_page_commit
>            migratetype = get_pcppage_migratetype(page);
>               // get old migration type
>            list_add(&page->lru, &pcp->lists[migratetype]);
>               // add new page to already drained pcp list
> 
> Thread#2
> Never drains the pcp lists again, so the page never reaches a buddy
> list and __offline_pages() is stuck in its retry loop.
> 
> The fix is to try to drain per-cpu lists again after
> check_pages_isolated_cb() fails.

But this means that the page is not isolated and so it could be reused
for something else in the meantime. No?
 
> Signed-off-by: Pavel Tatashin <pasha.tatas...@soleen.com>
> Cc: sta...@vger.kernel.org
> ---
>  mm/memory_hotplug.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e9d5ab5d3ca0..d6d54922bfce 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1575,6 +1575,15 @@ static int __ref __offline_pages(unsigned long start_pfn,
>               /* check again */
>               ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
>                                           NULL, check_pages_isolated_cb);
> +             /*
> +              * Per-cpu pages are drained in start_isolate_page_range, but
> +              * if there are still pages that are not free, make sure we
> +              * drain again, because when we isolated the range we might
> +              * have raced with another thread that was adding pages to
> +              * a pcp list.
> +              */
> +             if (ret)
> +                     drain_all_pages(zone);
>       } while (ret);
>  
>       /* Ok, all of our target is isolated.
> -- 
> 2.25.1
> 

-- 
Michal Hocko
SUSE Labs
