On 18.09.20 04:16, Wei Yang wrote:
> On Wed, Sep 16, 2020 at 08:34:09PM +0200, David Hildenbrand wrote:
>> __putback_isolated_page() already documents that pages will be placed to
>> the tail of the freelist - this is, however, not the case for
>> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
>> the case for all existing users.
>>
>> This change affects two users:
>> - free page reporting
>> - page isolation, when undoing the isolation.
>>
>> This behavior is desirable for pages that haven't really been touched
>> lately, so exactly the two users that don't actually read/write page
>> content, but rather move untouched pages.
>>
>> The new behavior is especially desirable for memory onlining, where we
>> allow allocation of newly onlined pages via undo_isolate_page_range()
>> in online_pages(). Right now, we always place them to the head of the
>> free list, resulting in undesirable behavior: Assume we add
>> individual memory chunks via add_memory() and online them right away to
>> the NORMAL zone. We create a dependency chain of unmovable allocations,
>> e.g., via the memmap. The memmap of the next chunk will be placed onto
>> previous chunks - if the last block cannot get offlined+removed, all
>> dependent ones cannot get offlined+removed. While this can already be
>> observed with individual DIMMs, it's more of an issue for virtio-mem
>> (and I suspect also ppc DLPAR).
>>
>> Note: If we observe a degradation due to the changed page isolation
>> behavior (which I doubt), we can always make this configurable by the
>> instance triggering the undo of isolation (e.g., alloc_contig_range(),
>> memory onlining, memory offlining).
>>
>> Cc: Andrew Morton <a...@linux-foundation.org>
>> Cc: Alexander Duyck <alexander.h.du...@linux.intel.com>
>> Cc: Mel Gorman <mgor...@techsingularity.net>
>> Cc: Michal Hocko <mho...@kernel.org>
>> Cc: Dave Hansen <dave.han...@intel.com>
>> Cc: Vlastimil Babka <vba...@suse.cz>
>> Cc: Wei Yang <richard.weiy...@linux.alibaba.com>
>> Cc: Oscar Salvador <osalva...@suse.de>
>> Cc: Mike Rapoport <r...@kernel.org>
>> Cc: Scott Cheloha <chel...@linux.ibm.com>
>> Cc: Michael Ellerman <m...@ellerman.id.au>
>> Signed-off-by: David Hildenbrand <da...@redhat.com>
>> ---
>>  mm/page_alloc.c | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 91cefb8157dd..bba9a0f60c70 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
>>   */
>>  #define FOP_SKIP_REPORT_NOTIFY		((__force fop_t)BIT(0))
>>
>> +/*
>> + * Place the freed page to the tail of the freelist after buddy merging. Will
>> + * get ignored with page shuffling enabled.
>> + */
>> +#define FOP_TO_TAIL			((__force fop_t)BIT(1))
>> +
>>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>>  static DEFINE_MUTEX(pcp_batch_high_lock);
>>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
>> @@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>>
>>  	if (is_shuffle_order(order))
>>  		to_tail = shuffle_pick_tail();
>> +	else if (fop_flags & FOP_TO_TAIL)
>> +		to_tail = true;
>
> Taking another look at this part: maybe we can move this check to the top?
>
> For the online_page case, we currently have the following call flow:
>
>    online_page
>      online_pages_range
>        shuffle_zone
>
> This means we would always shuffle the newly added pages. Maybe we don't need
> to do the shuffle when adding them to the free_list?
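To make the check order concrete, here is a minimal stand-alone sketch of the
to_tail decision with the quoted hunk applied. This is plain C that compiles
outside the kernel; the helper bodies are crude stand-ins for
is_shuffle_order(), shuffle_pick_tail() and buddy_merge_likely(), and only the
precedence of the checks is intended to match __free_one_page():

/* Sketch only: models the check order, not the real kernel logic. */
#include <stdbool.h>
#include <stdio.h>

#define FOP_TO_TAIL	(1u << 1)	/* mirrors the new fop_t flag */

/* Stand-in: the real check also requires page shuffling to be enabled. */
static bool is_shuffle_order(unsigned int order)
{
	return order >= 10;
}

/* Stand-in: randomized in the kernel. */
static bool shuffle_pick_tail(void)
{
	return false;
}

/*
 * Crude stand-in: the real helper inspects the next-order buddy and, per the
 * commit message, never reports "likely" for order >= MAX_ORDER - 2.
 */
static bool buddy_merge_likely(unsigned int order)
{
	(void)order;		/* the real helper also looks at pfn/page state */
	return false;		/* keep the sketch simple: never "likely" here */
}

static bool pick_to_tail(unsigned int order, unsigned int fop_flags)
{
	if (is_shuffle_order(order))
		return shuffle_pick_tail();	/* shuffling overrides the hint */
	if (fop_flags & FOP_TO_TAIL)
		return true;			/* e.g., undoing page isolation */
	return buddy_merge_likely(order);
}

int main(void)
{
	printf("order 3,  FOP_TO_TAIL: to_tail=%d\n", pick_to_tail(3, FOP_TO_TAIL));
	printf("order 3,  no flag:     to_tail=%d\n", pick_to_tail(3, 0));
	printf("order 10, FOP_TO_TAIL: to_tail=%d\n", pick_to_tail(10, FOP_TO_TAIL));
	return 0;
}

So when shuffling is active for an order, shuffle_pick_tail() still decides,
and the FOP_TO_TAIL hint only takes effect otherwise, which matches the flag's
comment about being ignored with page shuffling enabled.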
Yeah, we don't, but it doesn't really buy us much, as the call paths I am
touching are also used by other mechanisms that do need it (especially
undoing page isolation).

--
Thanks,

David / dhildenb