No need to recompute if the zone is already marked contiguous. We will soon exploit this on the memory removal path, where we will only clear zone->contiguous for zones that intersect with the memory to be removed.
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Michal Hocko <mho...@suse.com>
Cc: Vlastimil Babka <vba...@suse.cz>
Cc: Oscar Salvador <osalva...@suse.de>
Cc: Pavel Tatashin <pavel.tatas...@microsoft.com>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Mike Rapoport <r...@linux.ibm.com>
Cc: Dan Williams <dan.j.willi...@intel.com>
Cc: Alexander Duyck <alexander.h.du...@linux.intel.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b799e11fba3..995708e05cde 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1546,6 +1546,9 @@ void set_zone_contiguous(struct zone *zone)
 	unsigned long block_start_pfn = zone->zone_start_pfn;
 	unsigned long block_end_pfn;
 
+	if (zone->contiguous)
+		return;
+
 	block_end_pfn = ALIGN(block_start_pfn + 1, pageblock_nr_pages);
 	for (; block_start_pfn < zone_end_pfn(zone);
 			block_start_pfn = block_end_pfn,
-- 
2.21.0