On Mon, Jul 1, 2024 at 1:52 PM Wolfgang Bumiller <w.bumil...@proxmox.com> wrote:
>
> I spotted the weird-looking pattern of:
>     atomic_set(atomic_load() <some operation> N)
> in a few places and one variable in the graph-lock code which was used with
> atomics except for a single case, which also seemed suspicious.
>
> I'm not sure if there are any known compiler-optimizations or ordering
> semantics already ensuring that these operations are indeed working correctly
> atomically, so I thought I'd point them out and ask about it by sending
> patches.

Hi Wolfgang,

Indeed, the usage is intended - but thanks a lot for the patches; it's
certainly a good way to proceed!

Here's a quick explanation:

- bdrv_graph_co_rdlock()/bdrv_graph_co_rdunlock(): only one coroutine
can run in each AioContext at a time, so the writes to
bdrv_graph->reader_count are effectively thread-local. It's just reads
that need to be synchronized.

- aio_poll(): similar, as the writer is the thread for the AioContext,
while the readers are the other threads that call aio_notify() for
that AioContext.

- aio_ctx_prepare(): same idea as aio_poll(), in this case the writer
is the thread running the GLib event loop (almost always the main
thread, except with the Cocoa UI because macOS wants the main thread
for itself).


So why is this written in full instead of using fetch-add-store? From
most to least important:

- to make you think about who reads and who writes, i.e. to show that
no atomicity is needed

- because it's faster

The disadvantage of course is the verbosity.

Thanks,

Paolo

> In patch 2 the ordering is changed (see the note in its mail)
>
> Wolfgang Bumiller (2):
>   graph-lock: make sure reader_count access is atomic
>   atomics: replace fetch-use-store with direct atomic operations
>
>  block/graph-lock.c | 8 +++-----
>  util/aio-posix.c   | 3 +--
>  util/aio-win32.c   | 3 +--
>  util/async.c       | 2 +-
>  4 files changed, 6 insertions(+), 10 deletions(-)
>
> --
> 2.39.2
>
