On Fri, Oct 27, 2023 at 05:53:26PM +0200, Kevin Wolf wrote:
> Almost all functions that access bs->backing already take the graph
> lock now. Add locking to the remaining users and finally annotate the
> struct field itself as protected by the graph lock.
>
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> ---
>  include/block/block_int-common.h |  2 +-
>  block.c                          | 27 +++++++++++++++++----------
>  block/commit.c                   |  5 ++++-
>  block/mirror.c                   | 17 ++++++++++++-----
>  block/qed.c                      |  2 +-
>  block/replication.c              |  7 ++++++-
>  block/vmdk.c                     |  7 ++++---
>  tests/unit/test-bdrv-drain.c     | 22 ++++++++++++++++++----
>  tests/unit/test-bdrv-graph-mod.c |  5 ++++-
>  9 files changed, 67 insertions(+), 27 deletions(-)
>
> +++ b/block.c
...
> @@ -5204,10 +5208,11 @@ static void bdrv_close(BlockDriverState *bs)
>      QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
>          bdrv_unref_child(bs, child);
>      }
> -    bdrv_graph_wrunlock();
>
>      assert(!bs->backing);
>      assert(!bs->file);
> +    bdrv_graph_wrunlock();
> +
>      g_free(bs->opaque);

At first I wondered why you needed code motion to pull the asserts inside
the lock. But now I get it - direct access to bs->backing itself now
requires a slightly larger lock scope.
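For anyone else following along: I assume the block_int-common.h hunk (not
quoted here) just tags the field with the GRAPH_RDLOCK_PTR-style annotation,
which is what lets clang's -Wthread-safety flag any unlocked reader, asserts
included. A rough self-contained sketch of the pattern those macros boil down
to - all names below are mine, not QEMU's - looks like:

    /*
     * Compile-check only:
     *   clang -fsyntax-only -Wthread-safety tsa-sketch.c
     */
    #include <assert.h>
    #include <stddef.h>

    /* stand-in for QEMU's graph lock capability */
    struct __attribute__((capability("mutex"))) GraphLock { int unused; };
    extern struct GraphLock graph_lock;

    extern void graph_wrlock(void)
        __attribute__((acquire_capability(graph_lock)));
    extern void graph_wrunlock(void)
        __attribute__((release_capability(graph_lock)));

    struct BDS {
        /* like bs->backing after this patch: any access needs the lock */
        struct BDS * __attribute__((guarded_by(graph_lock))) backing;
    };

    void close_bs(struct BDS *bs)
    {
        graph_wrlock();
        bs->backing = NULL;
        assert(!bs->backing);   /* ok: still inside the locked section */
        graph_wrunlock();
        /*
         * assert(!bs->backing) down here would warn: reading variable
         * 'backing' requires holding mutex 'graph_lock'.
         */
    }

With the field guarded like that, leaving the asserts below
bdrv_graph_wrunlock() would trip the analysis, hence the code motion.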
>  static int coroutine_fn mirror_run(Job *job, Error **errp)
>  {
>      MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
> -    BlockDriverState *bs = s->mirror_top_bs->backing->bs;
> +    BlockDriverState *bs;
>      MirrorBDSOpaque *mirror_top_opaque = s->mirror_top_bs->opaque;
>      BlockDriverState *target_bs = blk_bs(s->target);
>      bool need_drain = true;
> @@ -935,6 +939,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
>      }
>
>      bdrv_graph_co_rdlock();
> +    bs = bdrv_filter_bs(s->mirror_top_bs);

Interesting change to prefer bdrv_filter_bs() instead of direct access to
->backing->bs; but I think it's okay.

> +++ b/tests/unit/test-bdrv-drain.c
> @@ -218,8 +218,14 @@ static void do_drain_end_unlocked(enum drain_type drain_type, BlockDriverState *
>      }
>  }
>
> -static void test_drv_cb_common(BlockBackend *blk, enum drain_type drain_type,
> -                               bool recursive)
> +/*
> + * Locking the block graph would be a bit cumbersome here because this function
> + * is called both in coroutine and non-coroutine context. We know this is a test
> + * and nothing else is running, so don't bother with TSA.
> + */
> +static void coroutine_mixed_fn TSA_NO_TSA
> +test_drv_cb_common(BlockBackend *blk, enum drain_type drain_type,
> +                   bool recursive)

Fair enough.

Reviewed-by: Eric Blake <ebl...@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org