On Mon, Apr 27, 2026 at 5:44 PM Ard Biesheuvel <[email protected]> wrote:
> The empty zero page is used to back any kernel or user space mapping
> that is supposed to remain cleared, and so the page itself is never
> supposed to be modified.
>
> So make it __ro_after_init rather than __page_aligned_bss: on most
> architectures, this ensures that both the kernel's mapping of it and any
> aliases that are accessible via the kernel direct (linear) map are
> mapped read-only, and cannot be used (inadvertently or maliciously) to
> corrupt the contents of the zero page.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>

Reviewed-by: Jann Horn <[email protected]>

Sorry, I should have looked at this properly earlier instead of ending
up duplicating this patch with
<https://lore.kernel.org/all/[email protected]/>.

> ---
>  mm/mm_init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index f9f8e1af921c..6ca01ed2a5a4 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -57,7 +57,7 @@ unsigned long zero_page_pfn __ro_after_init;
>  EXPORT_SYMBOL(zero_page_pfn);
>
>  #ifndef __HAVE_COLOR_ZERO_PAGE
> -uint8_t empty_zero_page[PAGE_SIZE] __page_aligned_bss;
> +uint8_t empty_zero_page[PAGE_SIZE] __ro_after_init __aligned(PAGE_SIZE);

I think this is fine as-is; but FWIW:
"__ro_after_init __aligned(PAGE_SIZE)" means that this will land
in the middle of the .data..ro_after_init section, with padding in
front of it to create 4K alignment. So this probably wastes some
RAM on padding.
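
For context, roughly what the two annotations boil down to (a sketch from
memory of include/linux/cache.h and include/linux/linkage.h, not verbatim;
check your tree):
```
/* Sketch, not verbatim kernel code: */

/* __ro_after_init only picks a section; it adds no alignment of its own: */
#define __ro_after_init		__section(".data..ro_after_init")

/* __page_aligned_bss picks a section dedicated to page-aligned objects: */
#define __page_aligned_bss	__section(".bss..page_aligned") __aligned(PAGE_SIZE)
```
So with "__ro_after_init __aligned(PAGE_SIZE)", empty_zero_page is just one
object among many in .data..ro_after_init, and the only way the linker can
honor the 4K alignment is by padding up to the next page boundary in front
of it.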

Looking at "nm ../linux-out/vmlinux | sort" with this patch applied
(from a build without any LTO or such), I see this:
```
[...]
ffffffff8473d378 d shmem_inode_cachep
ffffffff8473d380 d user_buckets
ffffffff8473e000 D zero_page_pfn
ffffffff8473f000 D empty_zero_page
ffffffff84740000 D __zero_page
ffffffff84740008 D pcpu_reserved_chunk
[...]
```
So I think there are almost 4K of padding between zero_page_pfn and
empty_zero_page for alignment. And when the linker linked mm_init.o
with the rest of the kernel, I think it also had to align that object
file's entire .data..ro_after_init input section to 4K, which is why
there are also ~3K of padding before zero_page_pfn, for a total of
~7K of padding.
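
(Rough arithmetic from the addresses above, assuming zero_page_pfn is an
8-byte unsigned long: 0x8473e000 - 0x8473d380 = 3200 bytes of gap before
zero_page_pfn, minus whatever user_buckets itself occupies, plus
0x8473f000 - 0x8473e008 = 4088 bytes between zero_page_pfn and
empty_zero_page; so a bit over 7K in total.)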

If you want to change this:
I searched through the arch-specific linker scripts, and I think they
all rely on the generic RO_DATA() macro for emitting the rodata
section; so creating an analogous page-aligned rodata section should
be as simple as adding "*(.rodata..page_aligned)" directly after
"__start_rodata = .;", as I did in my duplicate patch.
