On 24/02/2023 2:48 pm, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kuroc...@gmail.com>
> ---
>  xen/arch/riscv/setup.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
> index 154bf3a0bc..593bb471a4 100644
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -24,6 +24,18 @@ static void test_macros_from_bug_h(void)
>      early_printk("WARN is most likely working\n");
>  }
>  
> +static void __init init_bss(void)
> +{
> +    extern char __bss_start;
> +    extern char __bss_end;
> +    char *bss = &__bss_start;
> +
> +    while ( bss < &__bss_end ) {
> +        *bss = 0;
> +        bss++;
> +    }
> +}
> +
>  void __init noreturn start_xen(void)
>  {
>      /*
> @@ -38,6 +50,8 @@ void __init noreturn start_xen(void)
>  
>      asm volatile( "mv %0, a1" : "=r" (dtb_base) );
>  
> +    init_bss();
> +
>      early_printk("Hello from C env\n");
>  
>      trap_init();

Zeroing the BSS needs to be one of the earliest things you do.  It
really does need to happen before entering C, and it needs to be as
close to the start of head.S as you can reasonably make it.

I'd put it even before loading sp in start.
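Something along these lines, very early in head.S (a sketch only — the
symbol names match the linker script, but the label layout and register
choices are illustrative, and the store/increment widths assume RV64):

```asm
        /*
         * Zero [__bss_start, __bss_end) before loading sp or calling
         * into C.  Relies on both symbols being POINTER_ALIGN-aligned,
         * so register-width stores are safe.
         */
        la      t0, __bss_start
        la      t1, __bss_end
        bgeu    t0, t1, 2f          /* skip if the BSS is empty */
1:
        sd      zero, 0(t0)         /* sw on RV32 */
        addi    t0, t0, 8           /* 4 on RV32 */
        bltu    t0, t1, 1b
2:
```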

Even like this, there are various things the compiler might do behind
your back which expect a) the BSS to already be zeroed, and b) its
contents not to change unexpectedly.


Also, note:

arch/riscv/xen.lds.S-143-        . = ALIGN(POINTER_ALIGN);
arch/riscv/xen.lds.S:144:        __bss_end = .;

The POINTER_ALIGN there is specifically so you can depend on
__bss_{start,end} being suitably aligned to use a register-width store,
rather than using byte stores, which in 64bit means you've got 8x fewer
iterations.

~Andrew
