Recent debugging of misaligned access handling on RISC-V revealed that we
always call `tlb_fill` with `memop_size == 0`. This effectively disables
natural alignment checks in `riscv_tlb_fill_align()`, because the code has
to fall back from `memop_size` to `size` when computing the alignment bits.

With `memop_size == 0`, misaligned cross-page stores are reported as a
`store access fault` (AF, cause=7) instead of the expected
`store page fault` (PF, cause=15), since the "misalign" path triggers
before the translation of the second page can fault. This breaks
misaligned accesses at page boundaries.

After switching to pass the real `l->memop` into `tlb_fill`, the
cross-page faults are no longer misclassified as AF.

Fixes: ec03dd972378 ("accel/tcg: Hoist first page lookup above pointer_wrap")

Signed-off-by: Nikita Novikov <[email protected]>
---
 accel/tcg/cputlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 631f1fe135..271c061be1 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1782,7 +1782,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
          * If the lookup potentially resized the table, refresh the
          * first CPUTLBEntryFull pointer.
          */
-        if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
+        if (mmu_lookup1(cpu, &l->page[1], l->memop, l->mmu_idx, type, ra)) {
             uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
             l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
         }
-- 
2.51.0

