The first entry of error_states[],
{ reserved, reserved, MF_MSG_KERNEL, me_kernel },
is unreachable. identify_page_state() has two callers, and neither
one can dispatch a PG_reserved page to me_kernel():
* memory_failure() reaches identify_page_state() only after
get_hwpoison_page() returned 1. get_any_page() reaches that
return only via __get_hwpoison_page(), which gates the refcount
on HWPoisonHandlable(). HWPoisonHandlable() rejects PG_reserved
pages, so they fail with -EBUSY/-EIO long before
identify_page_state() runs.
* try_memory_failure_hugetlb() reaches identify_page_state() on
  the MF_HUGETLB_IN_USED branch, but the page there is necessarily
  a hugetlb folio, and hugetlb folios do not carry PG_reserved.
  The first table entry that matches them is
  { head, head, MF_MSG_HUGE, me_huge_page }, so they dispatch to
  me_huge_page() and the (now-removed) reserved entry could never
  have matched.
me_kernel() never executes; the entry survives only as dead weight
that every table lookup walks past.
Drop the entry, the me_kernel() helper, and the now-unused
"reserved" macro. Leave the MF_MSG_KERNEL enum value in place: it
remains part of the tracepoint and pr_err() string tables, and
follow-on work to classify unrecoverable kernel pages can reuse it
without churning the user-visible enum.
No functional change.
Suggested-by: David Hildenbrand <[email protected]>
Signed-off-by: Breno Leitao <[email protected]>
---
mm/memory-failure.c | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 866c4428ac7ef..49bcfbd04d213 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -992,17 +992,6 @@ static bool has_extra_refcount(struct page_state *ps, struct page *p,
return false;
}
-/*
- * Error hit kernel page.
- * Do nothing, try to be lucky and not touch this instead. For a few cases we
- * could be more sophisticated.
- */
-static int me_kernel(struct page_state *ps, struct page *p)
-{
- unlock_page(p);
- return MF_IGNORED;
-}
-
/*
* Page in unknown state. Do nothing.
* This is a catch-all in case we fail to make sense of the page state.
@@ -1211,10 +1200,8 @@ static int me_huge_page(struct page_state *ps, struct page *p)
#define mlock (1UL << PG_mlocked)
#define lru (1UL << PG_lru)
#define head (1UL << PG_head)
-#define reserved (1UL << PG_reserved)
static struct page_state error_states[] = {
- { reserved, reserved, MF_MSG_KERNEL, me_kernel },
/*
* free pages are specially detected outside this table:
* PG_buddy pages only make a small fraction of all free pages.
@@ -1246,7 +1233,6 @@ static struct page_state error_states[] = {
#undef mlock
#undef lru
#undef head
-#undef reserved
static void update_per_node_mf_stats(unsigned long pfn,
enum mf_result result)
--
2.53.0-Meta