On Wed, May 13, 2026 at 10:10:27PM +0200, David Hildenbrand (Arm) wrote:
> On 5/13/26 17:39, Breno Leitao wrote:
> > * memory_failure() reaches identify_page_state() only after
> > get_hwpoison_page() returned 1. get_any_page() reaches that
> > return only via __get_hwpoison_page(), which gates the refcount
> > on HWPoisonHandlable(). HWPoisonHandlable() rejects PG_reserved
> > pages, so they fail with -EBUSY/-EIO long before
> > identify_page_state() runs.
>
> You should clarify why they are rejected. There is no explicit check for
> PG_reserved in there!
True, I meant that PG_reserved pages do not fit any of the criteria of
HWPoisonHandlable().
I will rewrite it more explicitly:
__get_hwpoison_page() only takes a refcount when HWPoisonHandlable()
accepts the page, and HWPoisonHandlable() is an allowlist: LRU pages,
free buddy pages, and (for soft offline only) movable_ops pages.
Is it any better?
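For reference, HWPoisonHandlable() is essentially the following allowlist
(paraphrased from mm/memory-failure.c; take it as a sketch, as the exact
helper names may differ between releases):

static bool HWPoisonHandlable(struct page *page, unsigned long flags)
{
	/* In-use LRU pages and free buddy pages are handlable. */
	if (PageLRU(page) || is_free_buddy_page(page))
		return true;

	/* Soft offline can additionally migrate movable_ops pages. */
	if ((flags & MF_SOFT_OFFLINE) && page_has_movable_ops(page))
		return true;

	/* Everything else, PG_reserved pages included, is rejected. */
	return false;
}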
> > * try_memory_failure_hugetlb() reaches identify_page_state() on
> > the MF_HUGETLB_IN_USED branch, but the page is necessarily a
> > hugetlb folio there. The first table entry that matches a
> > hugetlb folio is { head, head, MF_MSG_HUGE, me_huge_page }, so
> > they dispatch to me_huge_page() before the (now-removed)
> > reserved entry would have matched, regardless of whether
> > PG_reserved happens to be set on the head page.
>
> See hugetlb_folio_init_vmemmap(): we always clear PG_reserved for hugetlb
> folios allocated from memblock.
Thanks. I clearly see the call to __folio_clear_reserved(folio), so hugetlb
folios are never PG_reserved.
A better summary would be:
try_memory_failure_hugetlb() reaches identify_page_state() only via the
MF_HUGETLB_IN_USED branch, as hugetlb folios don't carry PG_reserved at
that point (hugetlb_folio_init_vmemmap() clears it during init).
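For the record, the relevant part of hugetlb_folio_init_vmemmap() in
mm/hugetlb.c looks roughly like this (trimmed to the lines that matter
here; signature and surrounding code paraphrased from memory):

static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
					      struct hstate *h,
					      unsigned long nr_pages)
{
	/*
	 * Memblock-allocated pages come up PG_reserved; clear it before
	 * the folio is exposed as a hugetlb folio.
	 */
	__folio_clear_reserved(folio);
	__folio_set_head(folio);
	...
}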
> Yes, I think this should work.
>
> Acked-by: David Hildenbrand (Arm) <[email protected]>
Thanks for the review,
--breno