On Mon 21-10-19 16:19:25, David Hildenbrand wrote:
> We call __offline_isolated_pages() from __offline_pages() after all
> pages have been isolated and are either free (PageBuddy()) or
> PageHWPoison(). Nothing can stop us from offlining memory at this point.
> 
> In __offline_isolated_pages() we first set all affected memory sections
> offline (offline_mem_sections(pfn, end_pfn)), to mark the memmap as
> invalid (pfn_to_online_page() will no longer succeed), and then walk over
> all pages to pull the free pages from the free lists (to the isolated
> free lists, to be precise).
> 
> Note that re-onlining a memory block will result in the whole memmap
> getting reinitialized, overwriting any old state. We already poison the
> memmap when offlining is complete to catch any access to
> stale/uninitialized memmaps.
> 
> So, setting the pages PageReserved() is not helpful. The memmap is
> marked offline and all pageblocks are isolated. Once offline, the
> memmap is stale either way.
> 
> This looks like a leftover from ancient times when we initialized the
> memmap when adding memory and not when onlining it (the pages were set
> PageReserved so re-onlining would work as expected).
> 
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Michal Hocko <mho...@suse.com>
> Cc: Vlastimil Babka <vba...@suse.cz>
> Cc: Oscar Salvador <osalva...@suse.de>
> Cc: Mel Gorman <mgor...@techsingularity.net>
> Cc: Mike Rapoport <r...@linux.ibm.com>
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: Wei Yang <richard.weiy...@gmail.com>
> Cc: Alexander Duyck <alexander.h.du...@linux.intel.com>
> Cc: Anshuman Khandual <anshuman.khand...@arm.com>
> Cc: Pavel Tatashin <pavel.tatas...@microsoft.com>
> Signed-off-by: David Hildenbrand <da...@redhat.com>

Acked-by: Michal Hocko <mho...@suse.com>

We still set PageReserved before onlining pages and that one should be
good to go as well (memmap_init_zone).
Thanks!

There is a comment above offline_isolated_pages_cb that should be
removed as well.

> ---
>  mm/page_alloc.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ed8884dc0c47..bf6b21f02154 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8667,7 +8667,7 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
>  {
>       struct page *page;
>       struct zone *zone;
> -     unsigned int order, i;
> +     unsigned int order;
>       unsigned long pfn;
>       unsigned long flags;
>       unsigned long offlined_pages = 0;
> @@ -8695,7 +8695,6 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
>                */
>               if (unlikely(!PageBuddy(page) && PageHWPoison(page))) {
>                       pfn++;
> -                     SetPageReserved(page);
>                       offlined_pages++;
>                       continue;
>               }
> @@ -8709,8 +8708,6 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
>                       pfn, 1 << order, end_pfn);
>  #endif
>               del_page_from_free_area(page, &zone->free_area[order]);
> -             for (i = 0; i < (1 << order); i++)
> -                     SetPageReserved((page+i));
>               pfn += (1 << order);
>       }
>       spin_unlock_irqrestore(&zone->lock, flags);
> -- 
> 2.21.0
> 

-- 
Michal Hocko
SUSE Labs
