Re: [PATCH RFC 2/4] mm/page_alloc: place pages to tail in __putback_isolated_page()

2020-09-16 Thread Alexander Duyck
On Wed, Sep 16, 2020 at 11:34 AM David Hildenbrand  wrote:
>
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
>
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation.
>
> This behavior is desireable for pages that haven't really been touched

I think "desirable" is misspelled here.

> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.

So in reality we were already getting this effect for page reporting,
just not in the most direct way. If I recall correctly, we were adding
the pages to the head of the list, and when we went back to pull more
pages the report function rotated the list - so pages were technically
added at the head but usually ended up back on the tail anyway. If
anything, the benefit for page reporting is that the new approach is
more direct: we only have to rescan the pages once we have consumed all
of the reported ones on the list.
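
To make that concrete, here is a userspace toy (not the real
mm/page_alloc.c or mm/page_reporting.c code - the list is a simplified
stand-in for the kernel's list_head, and all names are made up) showing
what order a scan from the head of the freelist sees under the two
placement policies:

#include <stdio.h>

/* Simplified circular list with a sentinel, list_head style. */
struct node { int id; struct node *prev, *next; };

static void list_init(struct node *h) { h->prev = h->next = h; }

static void add_head(struct node *h, struct node *n)
{
	n->prev = h;
	n->next = h->next;
	h->next->prev = n;
	h->next = n;
}

static void add_tail(struct node *h, struct node *n)
{
	n->next = h;
	n->prev = h->prev;
	h->prev->next = n;
	h->prev = n;
}

static void dump(const char *tag, struct node *h)
{
	struct node *n;

	printf("%s:", tag);
	for (n = h->next; n != h; n = n->next)
		printf(" %d", n->id);
	printf("\n");
}

int main(void)
{
	struct node a[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
	struct node b[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
	struct node head_list, tail_list;
	int i;

	list_init(&head_list);
	list_init(&tail_list);

	/* "Put back" already-reported pages 0..2, in order. */
	for (i = 0; i < 3; i++) {
		add_head(&head_list, &a[i]);	/* old placement */
		add_tail(&tail_list, &b[i]);	/* FOP_TO_TAIL */
	}

	dump("head placement", &head_list);	/* 2 1 0 */
	dump("tail placement", &tail_list);	/* 0 1 2 */
	return 0;
}

With tail placement, a scan from the head reaches not-yet-reported pages
first and can stop as soon as it hits the block of already-reported
pages parked at the tail - no rotation needed.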

> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> free list, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
>
> Note: If we observe a degradation due to the changed page isolation
> behavior (which I doubt), we can always make this configurable by the
> instance triggering undo of isolation (e.g., alloc_contig_range(),
> memory onlining, memory offlining).
>
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Cc: Scott Cheloha 
> Cc: Michael Ellerman 
> Signed-off-by: David Hildenbrand 
> ---
>  mm/page_alloc.c | 10 +-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 91cefb8157dd..bba9a0f60c70 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
>
> +/*
> + * Place the freed page to the tail of the freelist after buddy merging. Will
> + * get ignored with page shuffling enabled.
> + */
> +#define FOP_TO_TAIL ((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>
> if (is_shuffle_order(order))
> to_tail = shuffle_pick_tail();
> +   else if (fop_flags & FOP_TO_TAIL)
> +   to_tail = true;
> else
> to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
>
> @@ -3289,7 +3297,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>
> /* Return isolated page to tail of freelist. */
> __free_one_page(page, page_to_pfn(page), zone, order, mt,
> -   FOP_SKIP_REPORT_NOTIFY);
> +   FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }
>
>  /*

The code looks good to me.

Reviewed-by: Alexander Duyck 



Re: [PATCH RFC 1/4] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag

2020-09-16 Thread Alexander Duyck
On Wed, Sep 16, 2020 at 11:34 AM David Hildenbrand  wrote:
>
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok().
> However, I wasn't able to come up with a better name for the type - it
> should be good enough for internal purposes.
>
> Cc: Andrew Morton 
> Cc: Alexander Duyck 
> Cc: Mel Gorman 
> Cc: Michal Hocko 
> Cc: Dave Hansen 
> Cc: Vlastimil Babka 
> Cc: Wei Yang 
> Cc: Oscar Salvador 
> Cc: Mike Rapoport 
> Signed-off-by: David Hildenbrand 
> ---
>  mm/page_alloc.c | 28 
>  1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6b699d273d6e..91cefb8157dd 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -77,6 +77,18 @@
>  #include "shuffle.h"
>  #include "page_reporting.h"
>
> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;
> +
> +/* No special request */
> +#define FOP_NONE   ((__force fop_t)0)
> +
> +/*
> + * Skip free page reporting notification after buddy merging (will *not* mark
> + * the page reported, only skip the notification).
> + */
> +#define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>   * -- nyc
>   */
>
> -static inline void __free_one_page(struct page *page,
> -   unsigned long pfn,
> -   struct zone *zone, unsigned int order,
> -   int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +  struct zone *zone, unsigned int order,
> +  int migratetype, fop_t fop_flags)
>  {
> struct capture_control *capc = task_capc(zone);
> unsigned long buddy_pfn;
> @@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
> add_to_free_list(page, zone, order, migratetype);
>
> /* Notify page reporting subsystem of freed page */
> -   if (report)
> +   if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
> page_reporting_notify_free(order);
>  }
>
> @@ -1368,7 +1379,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> if (unlikely(isolated_pageblocks))
> mt = get_pageblock_migratetype(page);
>
> -   __free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
> +   __free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
> trace_mm_page_pcpu_drain(page, 0, mt);
> }
> spin_unlock(&zone->lock);
> @@ -1384,7 +1395,7 @@ static void free_one_page(struct zone *zone,
> is_migrate_isolate(migratetype))) {
> migratetype = get_pfnblock_migratetype(page, pfn);
> }
> -   __free_one_page(page, pfn, zone, order, migratetype, true);
> +   __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> spin_unlock(&zone->lock);
>  }
>
> @@ -3277,7 +3288,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
> lockdep_assert_held(&zone->lock);
>
> /* Return isolated page to tail of freelist. */
> -   __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> +   __free_one_page(page, page_to_pfn(page), zone, order, mt,
> +   FOP_SKIP_REPORT_NOTIFY);
>  }
>
>  /*

Seems pretty straightforward. So we are basically flipping the logic
and replacing !report with FOP_SKIP_REPORT_NOTIFY.
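
Side note for anyone unfamiliar with the idiom: fop_t follows the same
sparse __bitwise pattern as gfp_t. A minimal userspace sketch (roughly
what the kernel headers do - the annotations only mean something when
you run sparse, e.g. make C=1; a plain compiler ignores them):

#include <stdio.h>

#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef int __bitwise fop_t;

#define FOP_NONE		((__force fop_t)0)
#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)(1 << 0))	/* BIT(0) */

/* Sketch of the new calling convention, not the real __free_one_page(). */
static void free_one_page_sketch(fop_t fop_flags)
{
	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
		printf("notify page reporting\n");
}

int main(void)
{
	free_one_page_sketch(FOP_NONE);			/* notifies */
	free_one_page_sketch(FOP_SKIP_REPORT_NOTIFY);	/* stays quiet */
	/* free_one_page_sketch(1); - sparse would warn here about mixing
	 * a plain int with the __bitwise type, which is the whole point
	 * of not just passing bools or bare ints around. */
	return 0;
}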

Reviewed-by: Alexander Duyck 



Re: [Xen-devel] [PATCH] mm: fix regression with deferred struct page init

2019-06-20 Thread Alexander Duyck
On Thu, 2019-06-20 at 18:08 +0200, Juergen Gross wrote:
> Commit 0e56acae4b4dd4a9 ("mm: initialize MAX_ORDER_NR_PAGES at a time
> instead of doing larger sections") is causing a regression on some
> systems when the kernel is booted as Xen dom0.
> 
> The system will just hang in early boot.
> 
> The reason is an endless loop in get_page_from_freelist() in case the
> first zone looked at has no free memory. deferred_grow_zone() always
> returns true due to the following code snippet:
> 
>   /* If the zone is empty somebody else may have cleared out the zone */
>   if (!deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
>first_deferred_pfn)) {
>   pgdat->first_deferred_pfn = ULONG_MAX;
>   pgdat_resize_unlock(pgdat, &flags);
>   return true;
>   }
> 
> This in turn results in the loop, as get_page_from_freelist() assumes
> forward progress can be made by doing some more struct page
> initialization.
> 
> Cc: Alexander Duyck 
> Fixes: 0e56acae4b4dd4a9 ("mm: initialize MAX_ORDER_NR_PAGES at a time instead of doing larger sections")
> Suggested-by: Alexander Duyck 
> Signed-off-by: Juergen Gross 

Acked-by: Alexander Duyck 

> ---
>  mm/page_alloc.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d66bc8abe0af..8e3bc949ebcc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1826,7 +1826,8 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
>first_deferred_pfn)) {
>   pgdat->first_deferred_pfn = ULONG_MAX;
>   pgdat_resize_unlock(pgdat, &flags);
> - return true;
> + /* Retry only once. */
> + return first_deferred_pfn != ULONG_MAX;
>   }
>  
>   /*
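
For anyone trying to follow the failure mode: the allocator retries as
long as deferred_grow_zone() claims it made progress. A standalone
sketch of that interaction (simplified, hypothetical names - this is
not the real get_page_from_freelist(), and first_deferred_pfn here
stands in for pgdat->first_deferred_pfn):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned long first_deferred_pfn = 0x1000;

/* The fixed path for "the deferred range turned out to be empty":
 * the old code did "return true;" unconditionally here. */
static bool deferred_grow_zone_sketch(void)
{
	unsigned long prev = first_deferred_pfn;

	/* deferred_init_mem_pfn_range_in_zone() found nothing to do. */
	first_deferred_pfn = ULONG_MAX;
	return prev != ULONG_MAX;	/* retry only once */
}

int main(void)
{
	int attempts = 0;

	/* Rough shape of the caller's retry loop: with a hardcoded
	 * "return true" above, this loop never terminates when the
	 * zone has no free memory. */
	for (;;) {
		attempts++;
		/* ... pretend the allocation attempt failed ... */
		if (!deferred_grow_zone_sketch())
			break;
	}
	printf("gave up after %d attempts\n", attempts);	/* 2 */
	return 0;
}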




Re: [Xen-devel] [PATCH RFC] mm: fix regression with deferred struct page init

2019-06-20 Thread Alexander Duyck
On Thu, 2019-06-20 at 11:40 +0200, Juergen Gross wrote:
> Commit 0e56acae4b4dd4a9 ("mm: initialize MAX_ORDER_NR_PAGES at a time
> instead of doing larger sections") is causing a regression on some
> systems when the kernel is booted as Xen dom0.
> 
> The system will just hang in early boot.
> 
> The reason is an endless loop in get_page_from_freelist() in case the
> first zone looked at has no free memory. deferred_grow_zone() always
> returns true due to the following code snippet:
> 
>   /* If the zone is empty somebody else may have cleared out the zone */
>   if (!deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
>first_deferred_pfn)) {
>   pgdat->first_deferred_pfn = ULONG_MAX;
>   pgdat_resize_unlock(pgdat, &flags);
>   return true;
>   }
> 
> This in turn results in the loop, as get_page_from_freelist() assumes
> forward progress can be made by doing some more struct page
> initialization.
> 
> Fixes: 0e56acae4b4dd4a9 ("mm: initialize MAX_ORDER_NR_PAGES at a time instead of doing larger sections")
> ---
> This patch makes my system boot again as Xen dom0, but I'm not really
> sure it is the correct way to do it, hence the RFC.
> Signed-off-by: Juergen Gross 
> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d66bc8abe0af..6ee754b5cd92 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1826,7 +1826,7 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
>first_deferred_pfn)) {
>   pgdat->first_deferred_pfn = ULONG_MAX;
>   pgdat_resize_unlock(pgdat, &flags);
> - return true;
> + return false;
>   }
>  
>   /*

The one change I might make to this would be to do:
return first_deferred_pfn != ULONG_MAX;

That way, in the event the previous caller freed up the last of the
pages and emptied the zone just before we got here, we will try one more
time. Otherwise, if initialization was already done before we got here,
we exit.
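
A tiny standalone sketch of that distinction (hypothetical names;
first_deferred_pfn stands in for pgdat->first_deferred_pfn):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

static unsigned long first_deferred_pfn = 0x1000;

/* RFC version: never retry, even if init just freed usable pages. */
static bool grow_return_false(void)
{
	first_deferred_pfn = ULONG_MAX;
	return false;
}

/* Suggested version: retry exactly once after the range empties. */
static bool grow_retry_once(void)
{
	unsigned long prev = first_deferred_pfn;

	first_deferred_pfn = ULONG_MAX;
	return prev != ULONG_MAX;
}

int main(void)
{
	/* Init just finished under us: pfn was not yet ULONG_MAX. */
	printf("return false: retry=%d\n", grow_return_false());	/* 0 */

	first_deferred_pfn = 0x1000;
	printf("retry once:   retry=%d\n", grow_retry_once());		/* 1 */

	/* Init was already complete before we entered: no endless loop. */
	printf("retry once:   retry=%d\n", grow_retry_once());		/* 0 */
	return 0;
}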

