On 27/04/2026 17:34, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <[email protected]>
> 
> The linear aliases of the kernel text and rodata are mapped read-only in
> the linear map as well. Given that the contents of these regions are
> mostly identical to the version in the loadable image, mapping them
> read-only and leaving their contents visible is a reasonable hardening
> measure.
> 
> Data and bss, however, are now also mapped read-only, but the contents
> of these regions are more likely to contain data that we'd rather not
> leak.

That sounds like a good rationale, but I wonder: is there anything
stopping us from unmapping text/rodata as well?

> So let's unmap these entirely in the linear map when the kernel is
> running normally.
> 
> When going into hibernation or waking up from it, these regions need to
> be mapped, so map the region initially, and toggle the valid bit to
> map/unmap the region as needed.

Doesn't safe_copy_page() already handle that? I suppose this is an
optimisation to avoid modifying the linear map for every page, but if so
it would be good to spell it out.

> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
>  arch/arm64/mm/mmu.c | 44 ++++++++++++++++----
>  1 file changed, 37 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9361b7efb848..a464f3d2d2df 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -24,6 +24,7 @@
>  #include <linux/mm.h>
>  #include <linux/vmalloc.h>
>  #include <linux/set_memory.h>
> +#include <linux/suspend.h>
>  #include <linux/kfence.h>
>  #include <linux/pkeys.h>
>  #include <linux/mm_inline.h>
> @@ -1040,6 +1041,31 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end,
>  			 end - start, prot, early_pgtable_alloc, flags);
>  }
> 
> +static void remap_linear_data_alias(bool unmap)
> +{
> +	set_memory_valid((unsigned long)lm_alias(__init_end),
> +			 (unsigned long)(__fixmap_pgdir_start - __init_end) / PAGE_SIZE,
> +			 !unmap);
> +}
> +
> +static int arm64_hibernate_pm_notify(struct notifier_block *nb,
> +				     unsigned long mode, void *unused)
> +{
> +	switch (mode) {
> +	default:
> +		break;
> +	case PM_POST_HIBERNATION:
> +	case PM_POST_RESTORE:
> +		remap_linear_data_alias(true);
> +		break;
> +	case PM_HIBERNATION_PREPARE:
> +	case PM_RESTORE_PREPARE:
> +		remap_linear_data_alias(false);
> +		break;
> +	}
> +	return 0;
> +}
> +
>  void __init mark_linear_text_alias_ro(void)
>  {
>  	/*
> @@ -1048,6 +1074,16 @@ void __init mark_linear_text_alias_ro(void)
>  	update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
>  			    (unsigned long)__init_begin - (unsigned long)_text,
>  			    pgprot_tagged(PAGE_KERNEL_RO));
> +
> +	remap_linear_data_alias(true);

It's really hard to know what this does without looking at the function.
How about mark_linear_data_alias_valid(false)?

> +
> +	if (IS_ENABLED(CONFIG_HIBERNATION)) {
> +		static struct notifier_block nb = {
> +			.notifier_call = arm64_hibernate_pm_notify
> +		};
> +
> +		register_pm_notifier(&nb);
> +	}
>  }
> 
>  #ifdef CONFIG_KFENCE
> @@ -1162,7 +1198,7 @@ static void __init map_mem(void)
> 
>  	/* Map the kernel data/bss so it can be remapped later */
>  	__map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL),
> -		       flags);
> +		       flags | NO_BLOCK_MAPPINGS);

Might be an obvious question but why do we need this?

- Kevin

> 
>  	/* map all the memory banks */
>  	for_each_mem_range(i, &start, &end) {
> @@ -1174,12 +1210,6 @@ static void __init map_mem(void)
>  		__map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
>  			       flags);
>  	}
> -
> -	/* Map the kernel data/bss read-only in the linear map */
> -	__map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL_RO),
> -		       flags);
> -	flush_tlb_kernel_range((unsigned long)lm_alias(__init_end),
> -			       (unsigned long)lm_alias(__fixmap_pgdir_start));
>  }
> 
>  void mark_rodata_ro(void)

