On 4 April 2016 at 17:28, Richard Henderson <r...@twiddle.net> wrote:
> On 04/04/2016 08:51 AM, Peter Maydell wrote:
>> In particular I think if you just do the relevant handling of the tag
>> bits in target-arm's get_phys_addr() and its subroutines then this
>> should work ok, with the exceptions that:
>>  * the QEMU TLB code will think that [tag A + address X] and
>>    [tag B + address X] are different virtual addresses and they will
>>    miss each other in the TLB
>
> Yep. Not only miss, but actively contend with each other.
Yes. Can we avoid that, or do we just have to live with it? I guess
if the TCG fast path is doing a compare on the full vaddr+tag then we
pretty much have to live with it.

>>  * tlb invalidate by address becomes nasty because we need to invalidate
>>    [every tag + address X]
>
> Hmm. We should require only one flush for X. But the common code doesn't
> know that... I suppose a new tlb_flush_page_mask would do the trick.

Yes, I think we would need that.

>> Can we fix those just by having arm_tlb_fill() call
>> tlb_set_page_with_attrs() with the vaddr with the tag masked out?
>
> No, that misses when we perform the full vaddr+tag comparison on the TCG
> fast path.

Rats, you're right.

thanks
-- PMM