On 2026/5/13 23:39, Breno Leitao wrote:
> get_any_page() collapses three different failure modes into a single
> -EIO return:
> 
>   * the put_page race in the !count_increased path;
>   * the HWPoisonHandlable() rejection that bounces out of
>     __get_hwpoison_page() with -EBUSY and exhausts shake_page() retries;
>   * the HWPoisonHandlable() rejection that goes through the
>     count_increased / put_page / shake_page retry loop.
> 
> The first is transient (the page is racing with the allocator).  The
> second can be either transient (a userspace folio briefly off LRU
> during migration/compaction) or stable (slab/vmalloc/page-table/
> kernel-stack pages).  The third describes a stable kernel-owned page
> that the count_increased=true caller already held a reference on.
> 
> Distinguish them on the return path: keep -EIO for both the put_page
> race and the -EBUSY-after-retries branch (shake_page() cannot drag a
> folio back from active migration, so we cannot prove the page is
> permanently kernel-owned from there), keep -EBUSY for the allocation
> race (unchanged), and return -ENOTRECOVERABLE only from the
> count_increased-true HWPoisonHandlable() rejection that exhausts its
> retries -- the caller's reference is structural evidence that the
> page is owned by the kernel.
> 
> Extend the unhandlable-page pr_err() to fire for either errno and
> update the get_hwpoison_page() kerneldoc.
> 
> memory_failure() still folds every negative return into
> MF_MSG_GET_HWPOISON via its existing "else if (res < 0)" branch, so
> this patch is a no-op for users of memory_failure() and only changes
> the errno that soft_offline_page() can propagate to its callers.  A
> follow-up wires the new return code through memory_failure() and
> reports MF_MSG_KERNEL for the unrecoverable cases.
> 
> Suggested-by: David Hildenbrand <[email protected]>
> Signed-off-by: Breno Leitao <[email protected]>
> ---
>  mm/memory-failure.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 49bcfbd04d213..bae883df3ccb2 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1408,6 +1408,15 @@ static int get_any_page(struct page *p, unsigned long flags)
>                               shake_page(p);
>                               goto try_again;
>                       }
> +                     /*
> +                      * Return -EIO rather than -ENOTRECOVERABLE: this
> +                      * branch is also reached for pages that are merely
> +                      * off-LRU transiently (e.g. a folio in the middle
> +                      * of migration or compaction), which shake_page()
> +                      * cannot drag back.  The caller cannot prove the
> +                      * page is permanently kernel-owned from here, so
> +                      * keep it on the recoverable errno.
> +                      */
>                       ret = -EIO;
>                       goto out;
>               }
> @@ -1427,10 +1436,10 @@ static int get_any_page(struct page *p, unsigned long flags)
>                       goto try_again;
>               }
>               put_page(p);
> -             ret = -EIO;
> +             ret = -ENOTRECOVERABLE;

Theoretically, pages that are merely off-LRU transiently, as you noted in the
comment above, could reach here too? Or am I missing something?

Thanks.

>       }
>  out:
> -     if (ret == -EIO)
> +     if (ret == -EIO || ret == -ENOTRECOVERABLE)
>               pr_err("%#lx: unhandlable page.\n", page_to_pfn(p));
>  
>       return ret;
> @@ -1487,7 +1496,10 @@ static int __get_unpoison_page(struct page *page)
>   *         -EIO for pages on which we can not handle memory errors,
>   *         -EBUSY when get_hwpoison_page() has raced with page lifecycle
>   *         operations like allocation and free,
> - *         -EHWPOISON when the page is hwpoisoned and taken off from buddy.
> + *         -EHWPOISON when the page is hwpoisoned and taken off from buddy,
> + *         -ENOTRECOVERABLE for stable kernel-owned pages the handler
> + *         cannot recover (PG_reserved, slab, vmalloc, page tables,
> + *         kernel stacks, and similar non-LRU/non-buddy pages).
>   */
>  static int get_hwpoison_page(struct page *p, unsigned long flags)
>  {
> 
