On Mon, Jul 17, 2017 at 12:55:03 -1000, Richard Henderson wrote:
> On 07/16/2017 10:03 AM, Emilio G. Cota wrote:
> >@@ -1073,13 +1073,17 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
> >      assert_tb_locked();
> >-    atomic_set(&tb->invalid, true);
> >-
> >      /* remove the TB from the hash list */
> >      phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
> >      h = tb_hash_func(phys_pc, tb->pc, tb->flags, tb->trace_vcpu_dstate);
> >      qht_remove(&tcg_ctx.tb_ctx.htable, tb, h);
> >+    /*
> >+     * Mark the TB as invalid *after* it's been removed from tb_hash, which
> >+     * eliminates the need to check this bit on lookups.
> >+     */
> >+    tb->invalid = true;
> 
> I believe you need atomic_store_release here.  Previously we were relying on
> the lock acquisition in qht_remove to provide the required memory barrier.
> 
> We definitely need to make sure this reaches memory before we zap the TB in
> the CPU_FOREACH loop.

After this patch, tb->invalid is only read and set with tb_lock held, so there is
no need for atomics (or an explicit release store) when accessing it.

                E.
