Re: [PATCH v5 2/2] xen/riscv: introduce identity mapping

2023-08-01 Thread Jan Beulich
On 01.08.2023 10:50, Oleksii wrote:
> On Mon, 2023-07-31 at 16:07 +0200, Jan Beulich wrote:
>> Everything else looks good to me now, but will of course want a
>> maintainer looking over.
> Would it be better to send a new version now or wait for a response
> from other maintainers?

Hard to tell. Maybe send a new version in the hope that that's what
can then go in.

Jan



Re: [PATCH v5 2/2] xen/riscv: introduce identity mapping

2023-08-01 Thread Oleksii
On Mon, 2023-07-31 at 16:07 +0200, Jan Beulich wrote:
> On 27.07.2023 15:38, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/riscv64/head.S
> > +++ b/xen/arch/riscv/riscv64/head.S
> > @@ -39,6 +39,23 @@ ENTRY(start)
> >  jal calc_phys_offset
> >  mv  s2, a0
> >  
> > +    jal setup_initial_pagetables
> > +
> > +    /* Calculate proper VA after jump from 1:1 mapping */
> > +    la  t0, .L_primary_switched
> > +    sub t0, t0, s2
> > +
> > +    mv  a0, t0
> > +    jal turn_on_mmu
> 
> Any reason you don't do the calculation right in a0?
It probably mattered in an earlier version. But you are right, there
is no reason to use t0 in the current code.
I'll update that. Thanks.

> 
> > @@ -54,3 +71,18 @@ ENTRY(reset_stack)
> >  
> >  ret
> >  
> > +    .section .text.ident, "ax", %progbits
> > +
> > +ENTRY(turn_on_mmu)
> > +    sfence.vma
> > +
> > +    li  t0, RV_STAGE1_MODE
> > +    li  t1, SATP_MODE_SHIFT
> > +    sll t0, t0, t1
> 
> Can't the last two be folded to
> 
>     slli t0, t0, SATP_MODE_SHIFT
> 
> (I don't recall what li's valid value range is, so I'm not sure if
> 
>     li  t0, RV_STAGE1_MODE << SATP_MODE_SHIFT
> 
> would be an option.)
Both options will work, but I prefer to use SLLI, as LI can expand into
a potentially long and inefficient shift-and-add sequence ( although in
this case I think that is not so important ).
> 
> Everything else looks good to me now, but will of course want a
> maintainer looking over.
Would it be better to send a new version now or wait for a response
from other maintainers?

~ Oleksii


Re: [PATCH v5 2/2] xen/riscv: introduce identity mapping

2023-07-31 Thread Jan Beulich
On 27.07.2023 15:38, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -39,6 +39,23 @@ ENTRY(start)
>  jal calc_phys_offset
>  mv  s2, a0
>  
> +jal setup_initial_pagetables
> +
> +/* Calculate proper VA after jump from 1:1 mapping */
> +la  t0, .L_primary_switched
> +sub t0, t0, s2
> +
> +mv  a0, t0
> +jal turn_on_mmu

Any reason you don't do the calculation right in a0?

> @@ -54,3 +71,18 @@ ENTRY(reset_stack)
>  
>  ret
>  
> +.section .text.ident, "ax", %progbits
> +
> +ENTRY(turn_on_mmu)
> +sfence.vma
> +
> +li  t0, RV_STAGE1_MODE
> +li  t1, SATP_MODE_SHIFT
> +sll t0, t0, t1

Can't the last two be folded to

slli t0, t0, SATP_MODE_SHIFT

(I don't recall what li's valid value range is, so I'm not sure if

li  t0, RV_STAGE1_MODE << SATP_MODE_SHIFT

would be an option.)

Everything else looks good to me now, but will of course want a
maintainer looking over.

Jan



[PATCH v5 2/2] xen/riscv: introduce identity mapping

2023-07-27 Thread Oleksii Kurochko
The way the switch to virtual addresses was implemented in
commit e66003e7be ("xen/riscv: introduce setup_initial_pages")
isn't safe enough because:
* enable_mmu() depends on hooking all exceptions
  and the page fault handler.
* Any exception other than a page fault, or a failure to take the
  expected page fault, causes it to malfunction, meaning the boot
  will fail depending on where Xen was loaded into memory.

Instead of the previously proposed way of switching to virtual
addresses, it was decided to use an identity mapping for the area
that contains the code needed to switch away from the identity
mapping. After the switch to virtual addresses, the identity
mapping is removed from the page tables as follows: find the
top-most page table entry covering it and remove that entry.

Fixes: e66003e7be ("xen/riscv: introduce setup_initial_pages")
Signed-off-by: Oleksii Kurochko 
Suggested-by: Andrew Cooper 
---
Changes in V5:
 - update the algorithm for removing the identity mapping.
 - introduce IDENT_AREA_SIZE.
 - introduce turn_on_mmu() function to enable and switch from 1:1 mapping.
 - fix typo in PGTBL_INITIAL_COUNT define.
 - update the comment above PGTBL_INITIAL_COUNT.
 - update the commit message.
---
Changes in V4:
 - remove the definitions of ARRAY_SIZE and ROUNDUP, as the header where
these macros now live was introduced.
 - update definition of PGTBL_INITIAL_COUNT
 - update the commit message
 - update the algorithm for removing the identity mapping
---
Changes in V3:
 - remove changes unrelated to the patch ( SPDX tags in config.h ).
 - update the definition of PGTBL_INITIAL_COUNT, taking the identity
mapping into account.
 - refactor remove_identity_mapping() function.
 - add explanatory comments in xen.lds.S and mm.c.
 - update commit message.
 - move save/restore of a0/a1 registers to [PATCH v2 2/3] xen/riscv: introduce
   function for physical offset calculation.
---
Changes in V2:
  - update definition of PGTBL_INITIAL_COUNT and the comment above.
  - code style fixes.
  - 1:1 mapping for the entire Xen image.
  - remove the id_addrs array because the entire Xen image is mapped.
  - reverse the loop condition inside remove_identity_mapping().
  - fix the page table walk in remove_identity_mapping().
  - update the commit message.
  - add Suggested-by: Andrew Cooper 
  - save hart_id and dtb_addr before calling MMU-related C functions.
  - use the phys_offset variable instead of recalculating the physical offset
in head.S ( this is easy to do since the entire Xen image is 1:1 mapped ).
  - declare enable_mmu() as __init.
---
 xen/arch/riscv/include/asm/config.h |  2 +
 xen/arch/riscv/include/asm/mm.h |  5 +-
 xen/arch/riscv/mm.c | 90 +++--
 xen/arch/riscv/riscv64/head.S   | 32 ++
 xen/arch/riscv/setup.c  | 14 +
 xen/arch/riscv/xen.lds.S| 11 
 6 files changed, 99 insertions(+), 55 deletions(-)

diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index fa90ae0898..f0544c6a20 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -95,6 +95,8 @@
 #define RV_STAGE1_MODE SATP_MODE_SV32
 #endif
 
+#define IDENT_AREA_SIZE 64
+
 #endif /* __RISCV_CONFIG_H__ */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 7b94cbadd7..07c7a0abba 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -13,8 +13,11 @@ extern unsigned char cpu0_boot_stack[];
 void setup_initial_pagetables(void);
 
 void enable_mmu(void);
-void cont_after_mmu_is_enabled(void);
+
+void remove_identity_mapping(void);
 
 unsigned long calc_phys_offset(void);
 
+void turn_on_mmu(unsigned long ra);
+
 #endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 1df39ddf1b..d19fdb7878 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #include 
@@ -35,8 +36,11 @@ static unsigned long __ro_after_init phys_offset;
  *
  * It might be needed one more page table in case when Xen load address
  * isn't 2 MB aligned.
+ *
+ * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
+ * except that the root page table is shared with the initial mapping
  */
-#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
+#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)
 
 pte_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
 stage1_pgtbl_root[PAGETABLE_ENTRIES];
@@ -75,6 +79,7 @@ static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
 unsigned int index;
 pte_t *pgtbl;
 unsigned long page_addr;
+bool is_identity_mapping = map_start == pa_start;
 
 if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
 {
@@ -108,16 +113,18 @@ static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
 {
 unsigned long paddr = (page_addr - map_start) + pa_start;
 unsigned int permissions = PTE_LEAF_DEFAULT;
+