On 28/04/2022 13:09, Stefan Hajnoczi wrote:
> On Tue, Apr 26, 2022 at 04:51:07AM -0400, Emanuele Giuseppe Esposito wrote:
>> It seems that aio_wait_kick always required a memory barrier
>> or atomic operation in the caller, but almost nobody actually
>> took care of doing it.
>>
>> Let's put the barrier in the function instead.
>>
>> Signed-off-by: Emanuele Giuseppe Esposito <eespo...@redhat.com>
>> ---
>> util/aio-wait.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/util/aio-wait.c b/util/aio-wait.c
>> index bdb3d3af22..c0a343ac87 100644
>> --- a/util/aio-wait.c
>> +++ b/util/aio-wait.c
>> @@ -35,7 +35,8 @@ static void dummy_bh_cb(void *opaque)
>>
>> void aio_wait_kick(void)
>> {
>> - /* The barrier (or an atomic op) is in the caller. */
>> + smp_mb();
>
> What is the purpose of the barrier and what does it pair with?
>
> I guess we want to make sure that all stores before aio_wait_kick() are
> visible to the other thread's AIO_WAIT_WHILE() cond expression. that
> would require smp_wmb(). I'm not sure why it's a full smp_mb() barrier.
I think we need the full smp_mb() because aio_wait_kick() also performs a
read afterwards (num_waiters), and we want that read ordered against the
caller's earlier stores too; smp_wmb() would only order store-store.
Regarding pairing, yes, you are right. I also need to add an smp_mb()
in AIO_WAIT_WHILE, between the write of num_waiters and the read of the
condition, otherwise it won't work properly.
So we basically have:

    Caller:                        AIO_WAIT_WHILE:
    write(condition)               write(num_waiters)
    aio_wait_kick():               smp_mb()
        smp_mb()                   read(condition)
        read(num_waiters)
Emanuele
>
>> +
>> if (qatomic_read(&global_aio_wait.num_waiters)) {
>> aio_bh_schedule_oneshot(qemu_get_aio_context(), dummy_bh_cb, NULL);
>> }
>> --
>> 2.31.1
>>