The previous patch already classifies PG_reserved pages as
MF_MSG_KERNEL through the long path: get_hwpoison_page() calls
__get_hwpoison_page() which fails HWPoisonHandlable(), get_any_page()
exhausts its shake_page() retry budget, and the resulting
-ENOTRECOVERABLE is mapped to MF_MSG_KERNEL by the switch.  The
outcome is correct but the work in between is wasted: shake_page()
cannot turn a reserved page into a handlable one.

Detect PG_reserved up front in memory_failure() and report
MF_MSG_KERNEL directly.  put_ref_page() releases the caller's
reference when MF_COUNT_INCREASED is set, which is important on the
MADV_HWPOISON path where get_user_pages_fast() holds a reference
across the call.

Suggested-by: Lance Yang <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
 mm/memory-failure.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4b3a5d4190a07..8ba3df21d1270 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2398,6 +2398,19 @@ int memory_failure(unsigned long pfn, int flags)
                goto unlock_mutex;
        }
 
+       /*
+        * PG_reserved pages are kernel-owned (memblock reservations,
+        * driver reservations, ...) and cannot be recovered.  Skip the
+        * get_hwpoison_page() lifecycle dance and report MF_MSG_KERNEL
+        * straight away; HWPoisonHandlable() would just keep rejecting
+        * the page through the retry budget anyway.
+        */
+       if (PageReserved(p)) {
+               put_ref_page(pfn, flags);
+               res = action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);
+               goto unlock_mutex;
+       }
+
        /*
         * We need/can do nothing about count=0 pages.
         * 1) it's a free page, and therefore in safe hand:

-- 
2.53.0-Meta
