On 10/23/19 6:26 PM, Kevin Wolf wrote:
> qcow2_cache_do_get() requires that s->lock is locked because it can
> yield between picking a cache entry and actually taking ownership of it
> by setting offset and increasing the reference count.
>
> Add an assertion to make sure the caller really holds the lock. The
> function can be called outside of coroutine context, where bdrv_pread
> and flushes become synchronous operations. The lock cannot and need not
> be taken in this case.
>
> Signed-off-by: Kevin Wolf <kw...@redhat.com>
> ---
>  block/qcow2-cache.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/block/qcow2-cache.c b/block/qcow2-cache.c
> index d29b038a67..75b13dad99 100644
> --- a/block/qcow2-cache.c
> +++ b/block/qcow2-cache.c
> @@ -327,6 +327,9 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
>      int min_lru_index = -1;
>
>      assert(offset != 0);
> +    if (qemu_in_coroutine()) {
> +        qemu_co_mutex_assert_locked(&s->lock);
> +    }
This does not look good to me. If this really requires the lock, we should
check for it unconditionally. Otherwise, we could still miss a lock that is
forgotten outside of coroutine context.

Den

>
>      trace_qcow2_cache_get(qemu_coroutine_self(), c == s->l2_table_cache,
>                            offset, read_from_disk);
> @@ -386,6 +389,8 @@ static int qcow2_cache_do_get(BlockDriverState *bs, Qcow2Cache *c,
>          }
>      }
>
> +    assert(c->entries[i].ref == 0);
> +    assert(c->entries[i].offset == 0);
>      c->entries[i].offset = offset;
>
>      /* And return the right table */