On 30/06/2020 13:27, Mark Cave-Ayland wrote:
> Since all callers to get_physical_address() now apply the same page offset to
> the translation result, move the logic into get_physical_address() itself to
> avoid duplication.
>
> Suggested-by: Philippe Mathieu-Daudé <f4...@amsat.org>
> Signed-off-by: Mark Cave-Ayland <mark.cave-ayl...@ilande.co.uk>
> ---
>  target/m68k/helper.c | 18 +++++++-----------
>  1 file changed, 7 insertions(+), 11 deletions(-)
>
> diff --git a/target/m68k/helper.c b/target/m68k/helper.c
> index 631eab7774..71c2376910 100644
> --- a/target/m68k/helper.c
> +++ b/target/m68k/helper.c
> @@ -643,7 +643,7 @@ static int get_physical_address(CPUM68KState *env, hwaddr *physical,
>          /* Transparent Translation Register bit */
>          env->mmu.mmusr = M68K_MMU_T_040 | M68K_MMU_R_040;
>      }
> -    *physical = address & TARGET_PAGE_MASK;
> +    *physical = address;
>      *page_size = TARGET_PAGE_SIZE;
>      return 0;
>  }
> @@ -771,7 +771,8 @@ static int get_physical_address(CPUM68KState *env, hwaddr *physical,
>      }
>      *page_size = 1 << page_bits;
>      page_mask = ~(*page_size - 1);
> -    *physical = next & page_mask;
> +    address &= TARGET_PAGE_MASK;
I don't think you need TARGET_PAGE_MASK here:
- TARGET_PAGE_SIZE is 4096
- page_size is either 4096 or 8192

> +    *physical = (next & page_mask) + (address & (*page_size - 1));
>
>      if (access_type & ACCESS_PTEST) {
>          env->mmu.mmusr |= next & M68K_MMU_SR_MASK_040;
> @@ -826,8 +827,6 @@ hwaddr m68k_cpu_get_phys_page_debug(CPUState *cs, vaddr addr)
>          return -1;
>      }
>
> -    addr &= TARGET_PAGE_MASK;
> -    phys_addr += addr & (page_size - 1);
>      return phys_addr;
>  }
>
> @@ -891,10 +890,8 @@ bool m68k_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
>      ret = get_physical_address(&cpu->env, &physical, &prot,
>                                 address, access_type, &page_size);
>      if (likely(ret == 0)) {
> -        address &= TARGET_PAGE_MASK;
> -        physical += address & (page_size - 1);
> -        tlb_set_page(cs, address, physical,
> -                     prot, mmu_idx, TARGET_PAGE_SIZE);
> +        tlb_set_page(cs, address & TARGET_PAGE_MASK,
> +                     physical & TARGET_PAGE_MASK, prot, mmu_idx, page_size);

I had a look at tlb_set_page() to see how it manages the entry when the
addresses are not aligned to page_size, and it calls
tlb_set_page_with_attrs(), where we have this comment:

/* Add a new TLB entry. At most one entry for a given virtual address
 * is permitted. Only a single TARGET_PAGE_SIZE region is mapped, the
 * supplied size is only used by tlb_flush_page.
...

So I think it's correct to use TARGET_PAGE_MASK and page_size.

Thanks,
Laurent