On 06/20/2013 07:58 AM, Claudio Fontana wrote:
>> > +    tcg_out_ldst(s, TARGET_LONG_BITS == 64 ? LDST_64 : LDST_32,
>> > +                  LDST_LD, TCG_REG_X0, TCG_REG_X2, tlb_offset & 0xfff);
>> > +    tcg_out_ldst(s, LDST_64, LDST_LD, TCG_REG_X1, TCG_REG_X2,
>> > +        (tlb_offset & 0xfff) + (offsetof(CPUTLBEntry, addend) -
>> > +             (is_read ? offsetof(CPUTLBEntry, addr_read) :
>> > +                   offsetof(CPUTLBEntry, addr_write))));
>> > +
>> > +    tcg_out_cmp(s, 1, TCG_REG_X0, TCG_REG_X3, 0);
>> > +    *label_ptr = s->code_ptr;
>> > +    tcg_out_goto_cond_noaddr(s, TCG_COND_NE);
>> > +}
> Hmm, shouldn't the compare and branch actually come before the load of the
> addend?
> If we jump to the slow path we don't need to load the addend, do we?
> 

No, but that's the slow path, and we don't care if we do extra work there.
What's more important is minimizing the memory load delay on the fast path:
issuing the addend load before the cmp/b.ne lets its latency overlap with
the compare, so the addend is already available once we know the TLB hit.
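
At the C level the ordering looks roughly like this.  Just a sketch to
illustrate the scheduling argument; the struct and function names below are
made up, only the field names mirror CPUTLBEntry:

    #include <stdint.h>

    typedef struct {
        uint64_t addr_read;
        uint64_t addr_write;
        uintptr_t addend;
    } tlb_entry_sketch;

    /* Returns the host address on a TLB hit, 0 to signal "take the slow
       path".  Illustrative only. */
    uintptr_t tlb_fast_path_sketch(const tlb_entry_sketch *e,
                                   uint64_t guest_addr, uint64_t page_mask)
    {
        uint64_t tag = e->addr_read;   /* load #1: comparator (X0 above)    */
        uintptr_t addend = e->addend;  /* load #2: issued before the compare,
                                          so its latency overlaps cmp/b.ne  */

        if (tag != (guest_addr & page_mask)) {
            return 0;                  /* miss: take the slow path; the
                                          addend load was wasted work, but
                                          the slow path is expensive anyway */
        }
        return (uintptr_t)guest_addr + addend;  /* hit: addend already here */
    }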


r~
