18.01.2019 12:51, Alberto Garcia wrote:
> On Tue 08 Jan 2019 06:06:54 PM CET, Vladimir Sementsov-Ogievskiy wrote:
>> Encryption will be done in threads, to take benefit of it, we should
>> move it out of the lock first.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>> ---
>>  block/qcow2.c | 35 +++++++++++++++++++++--------------
>>  1 file changed, 21 insertions(+), 14 deletions(-)
>>
>> diff --git a/block/qcow2.c b/block/qcow2.c
>> index d6ef606d89..76d3715350 100644
>> --- a/block/qcow2.c
>> +++ b/block/qcow2.c
>> @@ -2077,11 +2077,20 @@ static coroutine_fn int qcow2_co_pwritev(BlockDriverState *bs, uint64_t offset,
>>          ret = qcow2_alloc_cluster_offset(bs, offset, &cur_bytes,
>>                                           &cluster_offset, &l2meta);
>>          if (ret < 0) {
>> -            goto fail;
>> +            goto out_locked;
>>          }
>>
>>          assert((cluster_offset & 511) == 0);
>>
>> +        ret = qcow2_pre_write_overlap_check(bs, 0,
>> +                                            cluster_offset + offset_in_cluster,
>> +                                            cur_bytes);
>> +        if (ret < 0) {
>> +            goto out_locked;
>> +        }
>> +
>> +        qemu_co_mutex_unlock(&s->lock);
>
> The usage of lock() and unlock() functions inside and outside of the
> loop plus the two 'locked' and 'unlocked' exit paths is starting to make
> things a bit more messy.
>
> Having a look at the code it seems that there's only these three parts
> in the whole function that need to have the lock held:
>
> one:
>     ret = qcow2_alloc_cluster_offset(bs, offset, &cur_bytes,
>                                      &cluster_offset, &l2meta);
>     /* ... */
>     ret = qcow2_pre_write_overlap_check(bs, 0,
>                                         cluster_offset + offset_in_cluster,
>                                         cur_bytes);
>
> two:
>     ret = qcow2_handle_l2meta(bs, &l2meta, true);
>
> three:
>     qcow2_handle_l2meta(bs, &l2meta, false);
>
> So I wonder if it's perhaps simpler to add lock/unlock calls around
> those blocks?
This means that we would unlock after qcow2_handle_l2meta and then immediately lock again on the next iteration, right before qcow2_alloc_cluster_offset. The current code avoids that extra unlock/lock pair.

> 
> Berto
> 

-- 
Best regards,
Vladimir