28.08.2019 17:22, Max Reitz wrote:
> On 26.08.19 18:13, Vladimir Sementsov-Ogievskiy wrote:
>> Split copying logic which will be shared with backup-top filter.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>> ---
>>  block/backup.c | 47 ++++++++++++++++++++++++++++++++---------------
>>  1 file changed, 32 insertions(+), 15 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index 33b144305f..13a1d80157 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -248,26 +248,18 @@ static int64_t backup_bitmap_reset_unallocated(BackupBlockJob *s,
>>      return ret;
>>  }
>>
>> -static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>> -                                      int64_t offset, uint64_t bytes,
>> -                                      bool *error_is_read,
>> -                                      bool is_write_notifier)
>> +static int coroutine_fn backup_do_copy(BackupBlockJob *job,
>> +                                       int64_t offset, uint64_t bytes,
>> +                                       bool *error_is_read,
>> +                                       bool is_write_notifier)
>>  {
>> -    CowRequest cow_request;
>>      int ret = 0;
>> -    int64_t start, end; /* bytes */
>> +    int64_t start = offset, end = bytes + offset; /* bytes */
>
> Maybe just rename the “offset” parameter to “start”, replace the “bytes”
> parameter by an “end” parameter, and drop this line?
>
I really want the final block_copy to use the more common block-layer
offset+bytes interface, so it would be better to refactor the function
itself; but I'd prefer to do that as a follow-up and keep this patch
simpler.

>
>>      void *bounce_buffer = NULL;
>>      int64_t status_bytes;
>>
>> -    qemu_co_rwlock_rdlock(&job->flush_rwlock);
>> -
>> -    start = QEMU_ALIGN_DOWN(offset, job->cluster_size);
>> -    end = QEMU_ALIGN_UP(bytes + offset, job->cluster_size);
>> -
>> -    trace_backup_do_cow_enter(job, start, offset, bytes);
>> -
>> -    wait_for_overlapping_requests(job, start, end);
>> -    cow_request_begin(&cow_request, job, start, end);
>> +    assert(QEMU_IS_ALIGNED(start, job->cluster_size));
>> +    assert(QEMU_IS_ALIGNED(end, job->cluster_size));
>>
>>      while (start < end) {
>>          int64_t dirty_end;
>

-- 
Best regards,
Vladimir