Add a sysctl panic_on_unrecoverable_memory_failure that triggers a kernel panic when memory_failure() encounters pages that cannot be recovered. This provides a clean crash with useful debug information rather than allowing silent data corruption or a delayed crash at an unrelated code path.
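The knob sits in the same ctl_table as the existing memory-failure sysctls, so with this patch applied it can be toggled at runtime. A usage sketch (the /proc path assumes the table is registered under "vm", as memory_failure_recovery and friends are):

  # opt in to panicking on unrecoverable kernel-page failures
  echo 1 > /proc/sys/vm/panic_on_unrecoverable_memory_failure

  # or persistently, e.g. in /etc/sysctl.conf
  vm.panic_on_unrecoverable_memory_failure = 1

The default remains 0, preserving today's behavior of logging and continuing.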
Panic eligibility is intentionally narrow: only MF_MSG_KERNEL with
result == MF_IGNORED panics. After the previous patch, MF_MSG_KERNEL
covers PG_reserved pages and the kernel-owned pages promoted from
get_hwpoison_page() via -ENOTRECOVERABLE (slab, vmalloc, page tables,
kernel stacks, ...). All other action types are excluded:

- MF_MSG_GET_HWPOISON and MF_MSG_KERNEL_HIGH_ORDER can be reached by
  transient refcount races with the page allocator (an in-flight buddy
  allocation briefly has refcount 0 while no longer being on the buddy
  free list), and panicking on them would risk killing the box for what
  is actually a recoverable userspace page.

- MF_MSG_UNKNOWN means identify_page_state() could not classify the
  page; that is precisely the wrong basis for a panic decision.

Signed-off-by: Breno Leitao <[email protected]>
---
 mm/memory-failure.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8ba3df21d1270..cb2965c0ec0b4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -74,6 +74,8 @@ static int sysctl_memory_failure_recovery __read_mostly = 1;
 
 static int sysctl_enable_soft_offline __read_mostly = 1;
 
+static int sysctl_panic_on_unrecoverable_mf __read_mostly;
+
 atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
 
 static bool hw_memory_failure __read_mostly = false;
@@ -155,6 +157,15 @@ static const struct ctl_table memory_failure_table[] = {
 		.proc_handler = proc_dointvec_minmax,
 		.extra1 = SYSCTL_ZERO,
 		.extra2 = SYSCTL_ONE,
+	},
+	{
+		.procname = "panic_on_unrecoverable_memory_failure",
+		.data = &sysctl_panic_on_unrecoverable_mf,
+		.maxlen = sizeof(sysctl_panic_on_unrecoverable_mf),
+		.mode = 0644,
+		.proc_handler = proc_dointvec_minmax,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	}
 };
 
@@ -1267,6 +1278,15 @@ static void update_per_node_mf_stats(unsigned long pfn,
 	++mf_stats->total;
 }
 
+static bool panic_on_unrecoverable_mf(enum mf_action_page_type type,
+				      enum mf_result result)
+{
+	if (!sysctl_panic_on_unrecoverable_mf || result != MF_IGNORED)
+		return false;
+
+	return type == MF_MSG_KERNEL;
+}
+
 /*
  * "Dirty/Clean" indication is not 100% accurate due to the possibility of
  * setting PG_dirty outside page lock. See also comment above set_page_dirty().
@@ -1284,6 +1304,9 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
 	pr_err("%#lx: recovery action for %s: %s\n",
 		pfn, action_page_types[type], action_name[result]);
 
+	if (panic_on_unrecoverable_mf(type, result))
+		panic("Memory failure: %#lx: unrecoverable page", pfn);
+
 	return (result == MF_RECOVERED || result == MF_DELAYED) ?
 		0 : -EBUSY;
 }
-- 
2.53.0-Meta
