On 03/10/2018 11:19, Alex Bennée wrote:
>> Fix it by using tlb_lock, a per-vCPU lock. All updaters of tlb_table
>> and the corresponding victim cache now hold the lock.
>> The readers that do not hold tlb_lock must use atomic reads when
>> reading .addr_write, since this field can be updated by other threads;
>> the conversion to atomic reads is done in the next patch.
>
> What about the inline TLB lookup code? The original purpose of the
> cmpxchg was to ensure the inline code would either see a valid entry or
> an invalid one, not a potentially torn read.
atomic_set also ensures that there are no torn reads.  However, here:

static void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
{
#if TCG_OVERSIZED_GUEST
    *d = *s;
#else
    if (atomic_set) {
        d->addr_read = s->addr_read;
        d->addr_code = s->addr_code;
        atomic_set(&d->addend, atomic_read(&s->addend));
        /* Pairs with flag setting in tlb_reset_dirty_range */
        atomic_mb_set(&d->addr_write, atomic_read(&s->addr_write));
    } else {
        d->addr_read = s->addr_read;
        d->addr_write = atomic_read(&s->addr_write);
        d->addr_code = s->addr_code;
        d->addend = atomic_read(&s->addend);
    }
#endif
}

it's probably best to do all atomic_set instead of just the memberwise copy.

Paolo
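
For illustration only, an "all atomic_set" version of the helper might look
like the sketch below. This is not the patch under review; it assumes the
qemu/atomic.h macros (atomic_set, atomic_read, atomic_mb_set), the
CPUTLBEntry layout from include/exec/cpu-defs.h, and that the caller holds
tlb_lock as the commit message requires:

static void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
{
#if TCG_OVERSIZED_GUEST
    /* The target word is wider than the host can store atomically,
     * so fall back to a plain struct copy.
     */
    *d = *s;
#else
    /* Publish every field with atomic_set so a lockless reader sees
     * either the old or the new value of each word, never a torn one.
     */
    atomic_set(&d->addr_read, s->addr_read);
    atomic_set(&d->addr_code, s->addr_code);
    atomic_set(&d->addend, atomic_read(&s->addend));
    /* Pairs with flag setting in tlb_reset_dirty_range */
    atomic_mb_set(&d->addr_write, atomic_read(&s->addr_write));
#endif
}

The readers that race with this copy without taking tlb_lock would then
pair it with atomic_read(&entry->addr_write), which is the conversion the
commit message defers to the next patch.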