On Fri, May 15, 2026 at 03:03:53PM +0800, Lance Yang wrote:
> 
> On Thu, May 14, 2026 at 07:37:14AM -0700, Breno Leitao wrote:
> >On Thu, May 14, 2026 at 09:28:30PM +0800, Lance Yang wrote:
> >> 
> >> On Wed, May 13, 2026 at 08:39:33AM -0700, Breno Leitao wrote:
> >> >get_any_page() collapses three different failure modes into a single
> >> >-EIO return:
> >> >
> >> >  * the put_page race in the !count_increased path;
> >> >  * the HWPoisonHandlable() rejection that bounces out of
> >> >    __get_hwpoison_page() with -EBUSY and exhausts shake_page() retries;
> >> >  * the HWPoisonHandlable() rejection that goes through the
> >> >    count_increased / put_page / shake_page retry loop.
> >> >
> >> >The first is transient (the page is racing with the allocator).  The
> >> >second can be either transient (a userspace folio briefly off LRU
> >> >during migration/compaction) or stable (slab/vmalloc/page-table/
> >> >kernel-stack pages).  The third describes a stable kernel-owned page
> >> >that the count_increased=true caller already held a reference on.
> >> >
> >> >Distinguish them on the return path: keep -EIO for both the put_page
> >> >race and the -EBUSY-after-retries branch (shake_page() cannot drag a
> >> >folio back from active migration, so we cannot prove the page is
> >> >permanently kernel-owned from there), keep -EBUSY for the allocation
> >> >race (unchanged), and return -ENOTRECOVERABLE only from the
> >> >count_increased-true HWPoisonHandlable() rejection that exhausts its
> >> >retries -- the caller's reference is structural evidence that the
> >> >page is owned by the kernel.
> >> >
> >> >Extend the unhandlable-page pr_err() to fire for either errno and
> >> >update the get_hwpoison_page() kerneldoc.
> >> >
> >> >memory_failure() still folds every negative return into
> >> >MF_MSG_GET_HWPOISON via its existing "else if (res < 0)" branch, so
> >> >this patch is a no-op for users of memory_failure() and only changes
> >> >the errno that soft_offline_page() can propagate to its callers.  A
> >> >follow-up wires the new return code through memory_failure() and
> >> >reports MF_MSG_KERNEL for the unrecoverable cases.
> >> >
> >> >Suggested-by: David Hildenbrand <[email protected]>
> >> >Signed-off-by: Breno Leitao <[email protected]>
> >> >---
> >> > mm/memory-failure.c | 18 +++++++++++++++---
> >> > 1 file changed, 15 insertions(+), 3 deletions(-)
> >> >
> >> >diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> >> >index 49bcfbd04d213..bae883df3ccb2 100644
> >> >--- a/mm/memory-failure.c
> >> >+++ b/mm/memory-failure.c
> >> >@@ -1408,6 +1408,15 @@ static int get_any_page(struct page *p, unsigned long flags)
> >> >                          shake_page(p);
> >> >                          goto try_again;
> >> >                  }
> >> >+                 /*
> >> >+                  * Return -EIO rather than -ENOTRECOVERABLE: this
> >> >+                  * branch is also reached for pages that are merely
> >> >+                  * off-LRU transiently (e.g. a folio in the middle
> >> >+                  * of migration or compaction), which shake_page()
> >> >+                  * cannot drag back.  The caller cannot prove the
> >> >+                  * page is permanently kernel-owned from here, so
> >> >+                  * keep it on the recoverable errno.
> >> >+                  */
> >> >                  ret = -EIO;
> >> >                  goto out;
> >> >          }
> >> >@@ -1427,10 +1436,10 @@ static int get_any_page(struct page *p, unsigned long flags)
> >> >                  goto try_again;
> >> >          }
> >> >          put_page(p);
> >> >-         ret = -EIO;
> >> >+         ret = -ENOTRECOVERABLE;
> >> >  }
> >> > out:
> >> >- if (ret == -EIO)
> >> >+ if (ret == -EIO || ret == -ENOTRECOVERABLE)
> >> >          pr_err("%#lx: unhandlable page.\n", page_to_pfn(p));
> >> > 
> >> >  return ret;
> >> >@@ -1487,7 +1496,10 @@ static int __get_unpoison_page(struct page *page)
> >> >  *         -EIO for pages on which we can not handle memory errors,
> >> >  *         -EBUSY when get_hwpoison_page() has raced with page lifecycle
> >> >  *         operations like allocation and free,
> >> >- *         -EHWPOISON when the page is hwpoisoned and taken off from buddy.
> >> >+ *         -EHWPOISON when the page is hwpoisoned and taken off from buddy,
> >> >+ *         -ENOTRECOVERABLE for stable kernel-owned pages the handler
> >> >+ *         cannot recover (PG_reserved, slab, vmalloc, page tables,
> >> >+ *         kernel stacks, and similar non-LRU/non-buddy pages).
> >> 
> >> Did you test this patch series? I don't see how we ever get to
> >> -ENOTRECOVERABLE there ...
> >
> >Yes, I did. I am using the following test case:
> 
> Okay.
> 
> >https://github.com/leitao/linux/commit/cfebe84ddeab5ac34ed456331db980d57e7025dc
> >
> >     # RUN_DESTRUCTIVE=1 tools/testing/selftests/mm/hwpoison-panic.sh
> >     # enabling /proc/sys/vm/panic_on_unrecoverable_memory_failure
> >     # injecting hwpoison at phys 0x2a00000 (Kernel rodata)
> >     # expecting kernel panic: 'Memory failure: <pfn>: unrecoverable page'
> >     [  501.113256] Memory failure: 0x2a00: recovery action for reserved kernel page: Ignored
> >     [  501.113956] Kernel panic - not syncing: Memory failure: 0x2a00: unrecoverable page
> >
> >
> >> Even with MF_COUNT_INCREASED, the first pass does:
> >> 
> >>    if (flags & MF_COUNT_INCREASED)
> >>            count_increased = true;
> >> 
> >>    [...]
> >> 
> >>    if (PageHuge(p) || HWPoisonHandlable(p, flags)) {
> >>            ret = 1;
> >>    } else {
> >>            if (pass++ < GET_PAGE_MAX_RETRY_NUM) { <-
> >>                    put_page(p);
> >>                    shake_page(p);
> >>                    count_increased = false;
> >>                    goto try_again; <-
> >>            }
> >>            put_page(p);
> >>            ret = -ENOTRECOVERABLE;
> >>    }
> >> 
> >> Then we come back with count_increased=false:
> >> 
> >> try_again:
> >>    if (!count_increased) {
> >>            ret = __get_hwpoison_page(p, flags); <-
> >>            if (!ret) {
> >>            [...]
> >>            } else if (ret == -EBUSY) { <-
> >>            [...]
> >>                    ret = -EIO;
> >>                    goto out; <-
> >>            }
> >>    }
> >> 
> >> For slab/vmalloc/page-table pages, __get_hwpoison_page() returns -EBUSY:
> >> 
> >>    if (!HWPoisonHandlable(&folio->page, flags))
> >>            return -EBUSY;
> >> 
> >> so they still seem to end up as -EIO ... Am I missing something?
> >
> >You are not, and thanks for catching this. I traced it again and the
> >-ENOTRECOVERABLE branch is unreachable for slab/vmalloc/page-table pages
> >exactly as you described. The __get_hwpoison_page() → -EBUSY → shake → retry
> >loop catches them first and they exit as -EIO.
> 
> Wonder if it would be simpler to just do a positive check near the top
> of get_any_page() instead. Something like:
> 
> static bool hwpoison_unrecoverable_kernel_page(struct page *page,
>                                               unsigned long flags)

Ack. We probably want to call it something like HWPoisonKernelOwned() to
follow the naming semantics of the existing helpers, such as
HWPoisonHandlable().

By the way, I will re-include the selftests in this patch series; if
they turn out not to be useful, we simply don't merge them.

Thanks for the review,
--breno
