On 27/04/2026 17:34, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <[email protected]>
>
> On systems where the bootloader adheres to the original arm64 boot
> protocol, the placement of the kernel in the physical address space is
> highly predictable, and this makes the placement of its linear alias in
> the kernel virtual address space equally predictable, given the lack of
> randomization of the linear map.
>
> The linear aliases of the kernel text and rodata regions are already
> mapped read-only, but the kernel data and bss are mapped read-write in
> this region. This is not needed, so map them read-only as well.
>
> Note that the statically allocated kernel page tables do need to be
> modifiable via the linear map, so leave these mapped read-write.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
>  arch/arm64/include/asm/sections.h |  1 +
>  arch/arm64/mm/mmu.c               | 16 ++++++++++++++--
>  2 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
> index 51b0d594239e..32ec21af0823 100644
> --- a/arch/arm64/include/asm/sections.h
> +++ b/arch/arm64/include/asm/sections.h
> @@ -23,6 +23,7 @@ extern char __irqentry_text_start[], __irqentry_text_end[];
>  extern char __mmuoff_data_start[], __mmuoff_data_end[];
>  extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
>  extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];
> +extern char __fixmap_pgdir_start[];
>  
>  static inline size_t entry_tramp_text_size(void)
>  {
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 1a4b4337d29a..9361b7efb848 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1122,7 +1122,9 @@ static void __init map_mem(void)
>  {
>       static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
>       phys_addr_t kernel_start = __pa_symbol(_text);
> -     phys_addr_t kernel_end = __pa_symbol(__init_begin);
> +     phys_addr_t init_begin = __pa_symbol(__init_begin);
> +     phys_addr_t init_end = __pa_symbol(__init_end);
> +     phys_addr_t kernel_end = __pa_symbol(__fixmap_pgdir_start);

Using __fixmap_pgdir_start as an anchor seems a bit arbitrary... Couldn't
we use __bss_end instead?

It could also be helpful to add comments in vmlinux.lds.S clarifying
which sections end up RO vs RW in the linear map; it's getting pretty
difficult to follow.
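
Something along these lines, maybe (a sketch only -- the exact section
boundaries, and whether the statically allocated page tables really
start at __fixmap_pgdir_start, would need checking against this series):

```c
/*
 * Linear-map protections after boot (sketch):
 *
 *   _text .. __init_begin            RO after mark_rodata_ro()
 *                                    (text + rodata)
 *   __init_begin .. __init_end       RW, freed after boot
 *   __init_end .. __fixmap_pgdir_start
 *                                    RO (data + bss, after this series)
 *   __fixmap_pgdir_start .. _end     RW (static page tables, must stay
 *                                    writable via the linear map)
 */
```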

>       phys_addr_t start, end;
>       int flags = NO_EXEC_MAPPINGS;
>       u64 i;
> @@ -1155,7 +1157,11 @@ static void __init map_mem(void)
>        * of the region accessible to subsystems such as hibernate,
>        * but protects it from inadvertent modification or execution.
>        */
> -     __map_memblock(kernel_start, kernel_end, pgprot_tagged(PAGE_KERNEL),
> +     __map_memblock(kernel_start, init_begin, pgprot_tagged(PAGE_KERNEL),
> +                    flags);
> +
> +     /* Map the kernel data/bss so it can be remapped later */
> +     __map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL),

Maybe I'm missing something obvious, but considering patch 3/4, couldn't
we map the range RO directly here?
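
I.e., something like this (untested, and assuming nothing in map_mem()
or the code that runs before it needs to write to data/bss through the
linear alias):

```diff
-	/* Map the kernel data/bss so it can be remapped later */
-	__map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL),
+	/* Map the kernel data/bss read-only in the linear map */
+	__map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL_RO),
 		       flags);
```

which would also let the later remap and flush_tlb_kernel_range() call
go away.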

- Kevin

>                      flags);
>  
>       /* map all the memory banks */
> @@ -1168,6 +1174,12 @@ static void __init map_mem(void)
>               __map_memblock(start, end, pgprot_tagged(PAGE_KERNEL),
>                              flags);
>       }
> +
> +     /* Map the kernel data/bss read-only in the linear map */
> +     __map_memblock(init_end, kernel_end, pgprot_tagged(PAGE_KERNEL_RO),
> +                    flags);
> +     flush_tlb_kernel_range((unsigned long)lm_alias(__init_end),
> +                            (unsigned long)lm_alias(__fixmap_pgdir_start));
>  }
>  
>  void mark_rodata_ro(void)
