On 21/11/2017 11:59, Stefan Hajnoczi wrote:
> On Mon, Nov 20, 2017 at 09:23:24PM -0500, Jeff Cody wrote:
>> @@ -438,6 +439,16 @@ fail:
>>  void aio_co_schedule(AioContext *ctx, Coroutine *co)
>>  {
>>      trace_aio_co_schedule(ctx, co);
>> +    const char *scheduled = atomic_read(&co->scheduled);
>> +
>> +    if (scheduled) {
>> +        fprintf(stderr,
>> +                "%s: Co-routine was already scheduled in '%s'\n",
>> +                __func__, scheduled);
>> +        abort();
>> +    }
>> +    atomic_set(&co->scheduled, __func__);
> 
> According to docs/devel/atomics.txt atomic_set()/atomic_read() are
> weakly ordered.  They require memory barriers to provide guarantees
> about ordering.  Your patch doesn't include barriers or comments about
> where the implicit barriers are.
> 
> The docs recommend using the following instead of
> atomic_read()/atomic_set() to get ordering:
> 
>   typeof(*ptr) atomic_mb_read(ptr)
>   void         atomic_mb_set(ptr, val)

Even with atomic_mb_read/atomic_mb_set we should document what memory
ordering is required for correctness (i.e. what should be ordered
against what, or equivalently which operation has release/acquire
semantics).

My reasoning is that the write of NULL to co->scheduled must be ordered
before any write that publishes the co pointer; the same requirement
already holds for co->ctx, so the NULL write is covered by the smp_wmb
in qemu_aio_coroutine_enter.  That part is fine, but it needs a comment.
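
For concreteness, something along these lines in
qemu_aio_coroutine_enter (only a sketch from memory, the surrounding
code may differ):

    /* Reset co->scheduled before the smp_wmb below: the barrier that
     * already orders the co->ctx store against any later store of the
     * co pointer then covers this store as well, so no extra barrier
     * is needed, but the dependency should be spelled out.
     */
    atomic_set(&co->scheduled, NULL);
    co->ctx = ctx;
    smp_wmb();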

Dually, the read of co->scheduled should be ordered after the read of
the co pointer, but that is already handled by the callers via
atomic_rcu_read when they need the ordering.  No comment needed there,
I think.
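
If an example helps: a caller that obtains co through atomic_rcu_read
gets the ordering for free; the names below are made up, only the
pattern matters.

    Coroutine *co = atomic_rcu_read(&head);   /* hypothetical publication point */
    if (co) {
        /* cannot be reordered before the load of the pointer above */
        const char *scheduled = atomic_read(&co->scheduled);
        ...
    }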

What's left is the __func__ write, where atomic_mb_set should indeed
be used.  Or, perhaps better:

    const char *scheduled = atomic_cmpxchg(&co->scheduled,
                                           NULL, __func__);
    if (scheduled) {
        ...
    }

... which also removes the atomic_read.
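
Put together, the beginning of aio_co_schedule would then look
something like this (untested sketch):

    void aio_co_schedule(AioContext *ctx, Coroutine *co)
    {
        const char *scheduled = atomic_cmpxchg(&co->scheduled,
                                               NULL, __func__);

        trace_aio_co_schedule(ctx, co);
        if (scheduled) {
            fprintf(stderr,
                    "%s: Co-routine was already scheduled in '%s'\n",
                    __func__, scheduled);
            abort();
        }
        /* ... the existing scheduling code follows unchanged ... */
    }

The cmpxchg is a single sequentially-consistent read-modify-write, so
it both detects a concurrent double schedule atomically and provides
the ordering that the separate atomic_read/atomic_set pair lacks.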

Thanks,

Paolo
