On 7/16/19 11:08 PM, tony.ngu...@bt.com wrote:
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index baa61719ad..11debb7dda 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -731,7 +731,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
>                vaddr, paddr, prot, mmu_idx);
>  
>      address = vaddr_page;
> -    if (size < TARGET_PAGE_SIZE) {
> +    if (size < TARGET_PAGE_SIZE || attrs.byte_swap) {
I don't think you want to re-use TLB_RECHECK. This operation requires the slow path, yes, but not another call into cpu->cc->tlb_fill.

r~