On 08.02.20 13:25, Vladimir Sementsov-Ogievskiy wrote:
> 07.02.2020 20:46, Max Reitz wrote:
>> On 27.11.19 19:08, Vladimir Sementsov-Ogievskiy wrote:
>>> Use bdrv_block_status_above to choose an effective chunk size and to
>>> handle zeroes effectively.
>>>
>>> This substitutes checking for just being allocated or not, and drops
>>> the old code path for it. Assistance by the backup job is dropped too,
>>> as caching block-status information is more difficult than just caching
>>> is-allocated information in our dirty bitmap, and the backup job is not
>>> a good place for this caching anyway.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>>> ---
>>>   block/block-copy.c | 67 +++++++++++++++++++++++++++++++++++++---------
>>>   block/trace-events |  1 +
>>>   2 files changed, 55 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 8602e2cae7..74295d93d5 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>
>> [...]
>>
>>> @@ -336,23 +375,25 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>               chunk_end = next_zero;
>>>           }
>>>
>>> -        if (s->skip_unallocated) {
>>> -            ret = block_copy_reset_unallocated(s, start, &status_bytes);
>>> -            if (ret == 0) {
>>> -                trace_block_copy_skip_range(s, start, status_bytes);
>>> -                start += status_bytes;
>>> -                continue;
>>> -            }
>>> -            /* Clamp to known allocated region */
>>> -            chunk_end = MIN(chunk_end, start + status_bytes);
>>> +        ret = block_copy_block_status(s, start, chunk_end - start,
>>> +                                      &status_bytes);
>>> +        if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
>>> +            bdrv_reset_dirty_bitmap(s->copy_bitmap, start, status_bytes);
>>> +            s->progress_reset_callback(s->progress_opaque);
>>> +            trace_block_copy_skip_range(s, start, status_bytes);
>>> +            start += status_bytes;
>>> +            continue;
>>>           }
>>>
>>> +        chunk_end = MIN(chunk_end, start + status_bytes);
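
For anyone skimming the hunk above, the new flow is: query block status
once per candidate chunk, skip (and clear from the copy bitmap) whatever
turns out to be unallocated when skip_unallocated is set, and otherwise
shrink the chunk to the extent the status answer covers.  Here is a
minimal standalone sketch of that control flow; nothing in it is QEMU
code, the Status struct, toy_block_status() and the block layout are
invented for illustration, and only the shape of the loop mirrors the
diff:

/* Toy copy loop: query status, skip unallocated ranges when requested,
 * clamp chunk_end to the extent the status answer covers. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CHUNK 4   /* requested chunk size per iteration, in "blocks" */

typedef struct Status {
    bool allocated;    /* stands in for BDRV_BLOCK_ALLOCATED */
    bool zero;         /* stands in for BDRV_BLOCK_ZERO */
    int64_t bytes;     /* how far this uniform status extends */
} Status;

/* Invented layout: blocks 0-3 unallocated, 4-9 data, 10-15 allocated zeroes */
static Status toy_block_status(int64_t start, int64_t bytes)
{
    Status st;

    if (start < 4) {
        st = (Status){ .allocated = false, .zero = false, .bytes = 4 - start };
    } else if (start < 10) {
        st = (Status){ .allocated = true, .zero = false, .bytes = 10 - start };
    } else {
        st = (Status){ .allocated = true, .zero = true, .bytes = 16 - start };
    }
    if (st.bytes > bytes) {
        st.bytes = bytes;   /* never report more than was asked about */
    }
    return st;
}

int main(void)
{
    bool skip_unallocated = true;
    int64_t start = 0, end = 16;

    while (start < end) {
        int64_t chunk_end = start + CHUNK < end ? start + CHUNK : end;
        Status st = toy_block_status(start, chunk_end - start);

        if (skip_unallocated && !st.allocated) {
            /* mirrors: reset the dirty bitmap, reset progress, skip */
            printf("skip %2" PRId64 "..%2" PRId64 " (unallocated)\n",
                   start, start + st.bytes);
            start += st.bytes;
            continue;
        }

        /* clamp to the extent the status answer actually covers */
        if (chunk_end > start + st.bytes) {
            chunk_end = start + st.bytes;
        }

        printf("%s %2" PRId64 "..%2" PRId64 "\n",
               st.zero ? "zero" : "copy", start, chunk_end);
        start = chunk_end;
    }
    return 0;
}

Compiled on its own it just prints the skip/copy/zero ranges; the real
code of course acts on that information instead of printing it.
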
>>
>> I’m not sure how much the old “Clamp to known allocated region” actually
>> helps, but I wouldn’t drop it anyway.
> 
> But it may not be allocated now. We just clamp to status_bytes.
> It's "known allocated" only if s->skip_unallocated is true.
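
To make that concrete, here is a tiny standalone example (the variable
names come from the hunk, but the numbers are made up and none of this
is QEMU code) in which the clamped extent is itself a hole because
skip_unallocated is off, so "clamp to known allocated region" would no
longer describe it:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    int64_t start = 0;
    int64_t chunk_end = 1024 * 1024;   /* caller wanted a 1 MiB chunk */
    int64_t status_bytes = 64 * 1024;  /* uniform status only for 64 KiB... */
    bool allocated = false;            /* ...and that 64 KiB is a hole */
    bool skip_unallocated = false;     /* caller wants holes copied as well */

    if (skip_unallocated && !allocated) {
        return 0;   /* not taken: with skip_unallocated off, holes are copied */
    }

    /* clamp to the extent of uniform status, allocated or not */
    chunk_end = MIN(chunk_end, start + status_bytes);
    printf("copy [%" PRId64 ", %" PRId64 ") even though it is unallocated\n",
           start, chunk_end);
    return 0;
}

So the clamp now means "clamp to the extent block status reported",
whatever that status was.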

Ah, yes, I suppose I was just thinking about that case when trying to
understand how the code changes.  So:

Reviewed-by: Max Reitz <mre...@redhat.com>

Max
