On 11/3/26 12:59, Djordje Todorovic wrote:
The page table walker reads PTEs using address_space_ldl/ldq, which
assume the target's compile-time endianness (always little-endian for
RISC-V). However, when a big-endian kernel writes PTEs via normal store
instructions, they are stored in big-endian byte order. The walker then
misinterprets the PTE values, causing page faults and a hang when the
kernel enables the MMU.

The RISC-V privileged specification states that implicit data memory
accesses to supervisor-level memory management data structures follow
the hart's endianness setting (MSTATUS SBE/MBE bits).

Fix both PTE reads and atomic A/D bit updates to use the explicit _le
or _be memory access variants based on the hart's runtime endianness.
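The conversion the walker needs can be sketched as follows. This is an illustration only, not the actual cpu_helper.c code: bswap32() and pte_to_host() are hypothetical stand-ins for QEMU's bswap.h helpers and the riscv_cpu_data_is_big_endian() check, and a little-endian host is assumed.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-in for QEMU's bswap32(): reverse the byte
 * order of a 32-bit value. */
static uint32_t bswap32(uint32_t x)
{
    return ((x & 0x000000ffu) << 24) |
           ((x & 0x0000ff00u) << 8)  |
           ((x & 0x00ff0000u) >> 8)  |
           ((x & 0xff000000u) >> 24);
}

/* Convert a raw 32-bit PTE as stored in guest memory to host byte
 * order, based on the hart's runtime data endianness. Assumes a
 * little-endian host: big-endian guest data must be swapped,
 * little-endian guest data is used as-is. */
static uint32_t pte_to_host(uint32_t raw, bool guest_big_endian)
{
    return guest_big_endian ? bswap32(raw) : raw;
}
```

With a fixed-endianness load, a big-endian guest's PTE bytes arrive reversed, so the valid/permission bits land in the wrong positions; selecting the swap at runtime restores the intended value.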
---
  target/riscv/cpu_helper.c | 26 ++++++++++++++++++++------
  1 file changed, 20 insertions(+), 6 deletions(-)


@@ -1567,11 +1571,21 @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
              target_ulong *pte_pa = qemu_map_ram_ptr(mr->ram_block, addr1);
              target_ulong old_pte;
              if (riscv_cpu_sxl(env) == MXL_RV32) {
-                old_pte = qatomic_cmpxchg((uint32_t *)pte_pa, cpu_to_le32(pte), cpu_to_le32(updated_pte));
-                old_pte = le32_to_cpu(old_pte);
+                if (riscv_cpu_data_is_big_endian(env)) {
+                    old_pte = qatomic_cmpxchg((uint32_t *)pte_pa, cpu_to_be32(pte), cpu_to_be32(updated_pte));
+                    old_pte = be32_to_cpu(old_pte);
+                } else {
+                    old_pte = qatomic_cmpxchg((uint32_t *)pte_pa, cpu_to_le32(pte), cpu_to_le32(updated_pte));
+                    old_pte = le32_to_cpu(old_pte);
+                }

I expect the checkpatch.pl script to complain about long lines, otherwise:

Reviewed-by: Philippe Mathieu-Daudé <[email protected]>

