On Wed, 29 Apr 2026, at 15:55, Kevin Brodsky wrote:
> On 27/04/2026 17:34, Ard Biesheuvel wrote:
>> From: Ard Biesheuvel <[email protected]>
>>
>> The linear aliases of the kernel text and rodata are mapped read-only in
>> the linear map as well. Given that the contents of these regions are
>> mostly identical to the version in the loadable image, mapping them
>> read-only and leaving their contents visible is a reasonable hardening
>> measure.
>>
>> Data and bss, however, are now also mapped read-only but the contents of
>> these regions are more likely to contain data that we'd rather not leak.
>
> That sounds like a good rationale but I wonder, is there anything
> stopping us from unmapping text/rodata as well?
>

There is the zero page now, which may be accessed via
'page_address(ZERO_PAGE(0))'. Also, anything that dereferences page tables
(like /sys/kernel/debug/kernel_page_tables) will expect to have read-only
access to swapper_pg_dir.


>> So let's unmap these entirely in the linear map when the kernel is
>> running normally.
>>
>> When going into hibernation or waking up from it, these regions need to
>> be mapped, so map the region initially, and toggle the valid bit to
>> map/unmap the region as needed.
>
> Doesn't safe_copy_page() already handle that? I suppose this is an
> optimisation to avoid modifying the linear map for every page, but if so
> it would be good to spell it out.
>

Uhm, good question.

When hibernate was first implemented for arm64, we had to bring back the
linear alias of the kernel image. When I started working on this, I
hadn't realised that we now have safe_copy_page(), which should take care
of this even if the linear alias is invalid.

However, if I remove this handling, things break mysteriously, and it
is a bit tricky to debug, so it may take me some time to answer this
question. In any case, I will address this in the next revision and
put you on cc.

>> Signed-off-by: Ard Biesheuvel <[email protected]>
>> ---
>>  arch/arm64/mm/mmu.c | 44 ++++++++++++++++----
>>  1 file changed, 37 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 9361b7efb848..a464f3d2d2df 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -24,6 +24,7 @@
>>  #include <linux/mm.h>
>>  #include <linux/vmalloc.h>
>>  #include <linux/set_memory.h>
>> +#include <linux/suspend.h>
>>  #include <linux/kfence.h>
>>  #include <linux/pkeys.h>
>>  #include <linux/mm_inline.h>
>> @@ -1040,6 +1041,31 @@ static void __init __map_memblock(phys_addr_t start, phys_addr_t end,
>>                               end - start, prot, early_pgtable_alloc, flags);
>>  }
>>  
>> +static void remap_linear_data_alias(bool unmap)
>> +{
>> +    set_memory_valid((unsigned long)lm_alias(__init_end),
>> +                     (unsigned long)(__fixmap_pgdir_start - __init_end) / PAGE_SIZE,
>> +                     !unmap);
>> +}
>> +
>> +static int arm64_hibernate_pm_notify(struct notifier_block *nb,
>> +                                 unsigned long mode, void *unused)
>> +{
>> +    switch (mode) {
>> +    default:
>> +            break;
>> +    case PM_POST_HIBERNATION:
>> +    case PM_POST_RESTORE:
>> +            remap_linear_data_alias(true);
>> +            break;
>> +    case PM_HIBERNATION_PREPARE:
>> +    case PM_RESTORE_PREPARE:
>> +            remap_linear_data_alias(false);
>> +            break;
>> +    }
>> +    return 0;
>> +}
>> +
>>  void __init mark_linear_text_alias_ro(void)
>>  {
>>      /*
>> @@ -1048,6 +1074,16 @@ void __init mark_linear_text_alias_ro(void)
>>      update_mapping_prot(__pa_symbol(_text), (unsigned long)lm_alias(_text),
>>                          (unsigned long)__init_begin - (unsigned long)_text,
>>                          pgprot_tagged(PAGE_KERNEL_RO));
>> +
>> +    remap_linear_data_alias(true);
>
> It's really hard to know what this does without looking at the function.
> How about mark_linear_data_alias_valid(false)?
>

Sure.

>> +
>> +    if (IS_ENABLED(CONFIG_HIBERNATION)) {
>> +            static struct notifier_block nb = {
>> +                    .notifier_call = arm64_hibernate_pm_notify
>> +            };
>> +
>> +            register_pm_notifier(&nb);
>> +    }
>>  }
>>  
>>  #ifdef CONFIG_KFENCE
>> @@ -1162,7 +1198,7 @@ static void __init map_mem(void)
>>  
>>      /* Map the kernel data/bss so it can be remapped later */
>>      __map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL),
>> -                   flags);
>> +                   flags | NO_BLOCK_MAPPINGS);
>
> Might be an obvious question but why do we need this?
>

set_memory_valid() only works on regions that are mapped down to page
granularity, hence the NO_BLOCK_MAPPINGS flag.


