Hi all!
We have the following problem: when we need to write_zeroes or trim the
whole disk, we have to do it iteratively, because of the 32-bit
restriction on request length.
For example, the current implementation of mirror (see
mirror_dirty_init()) does this in chunks of 2147418112 bytes (with the
default granularity of 65536). So, to zero a 16 TB disk we make ~8192
requests instead of one.
Incremental zeroing of a 1 TB qcow2 takes > 80 seconds for me (see
below). That means ~20 minutes for copying an empty 16 TB qcow2 disk,
which is obviously a waste of time.
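For illustration, the request count can be reproduced with a small
sketch (names here are mine, not QEMU's; the 2147418112-byte chunk is
the 32-bit maximum rounded down to the 65536 granularity):

```c
#include <stdint.h>

/* Largest request length that fits in a signed 32-bit field, rounded
 * down to the default 65536-byte granularity: 0x7fffffff & ~0xffff.
 * This matches the 2147418112-byte chunk size mirror uses. */
#define MAX_CHUNK 2147418112ULL

/* Number of write_zeroes/trim requests needed to cover a disk when
 * each request length is capped at MAX_CHUNK (ceiling division). */
static uint64_t requests_needed(uint64_t disk_bytes)
{
    return (disk_bytes + MAX_CHUNK - 1) / MAX_CHUNK;
}
```

Plugging in 16 TiB (16ULL << 40) gives 8193 requests, i.e. the ~8192
mentioned above; a 64-bit length field would reduce this to a single
request.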
We see the following possible solutions for NBD:
1. Add a command NBD_MAKE_EMPTY, with a flag saying what should be done:
trim or write_zeroes.
2. Add a flag NBD_CMD_FLAG_WHOLE for the NBD_CMD_TRIM and
NBD_CMD_WRITE_ZEROES commands, which would say (with zeroed offset and
length in the request) that the whole disk should be discarded/zeroed.
3. Increase the length field of the request to 64 bits.
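As a rough sketch of how proposal 2 could look on the wire (the
NBD_CMD_FLAG_WHOLE value is hypothetical, and the bit is chosen
arbitrarily; the header layout and the other constants follow the NBD
protocol):

```c
#include <stdint.h>

#define NBD_REQUEST_MAGIC   0x25609513  /* per the NBD protocol */
#define NBD_CMD_TRIM        4           /* existing command number */
#define NBD_CMD_FLAG_WHOLE  (1 << 4)    /* HYPOTHETICAL: proposed flag */

/* NBD request header as sent over the wire (all fields big-endian). */
struct nbd_request {
    uint32_t magic;
    uint16_t flags;   /* command flags, e.g. the proposed FLAG_WHOLE */
    uint16_t type;    /* command type, e.g. NBD_CMD_TRIM */
    uint64_t handle;
    uint64_t offset;  /* proposal: must be 0 when FLAG_WHOLE is set */
    uint32_t length;  /* proposal: must be 0 when FLAG_WHOLE is set */
} __attribute__((packed));

/* Build a "trim the whole disk" request under proposal 2. */
static struct nbd_request make_whole_trim(uint64_t handle)
{
    struct nbd_request req = {
        .magic  = NBD_REQUEST_MAGIC,
        .flags  = NBD_CMD_FLAG_WHOLE,
        .type   = NBD_CMD_TRIM,
        .handle = handle,
        .offset = 0,
        .length = 0,
    };
    return req;
}
```

The appeal of this variant is that it needs no new command number and no
change to the 28-byte request layout, only a new flag bit.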
As soon as we have some way to empty a disk in NBD, we can use
qcow2_make_empty() to trim the whole disk (and something similar could
be done for zeroing).
What do you think about all this, and which way has a chance of getting
into the NBD protocol?
== test incremental qcow2 zeroing in mirror ==
1. Enable incremental zeroing for this test (with an NBD target it
would be enabled anyway):
diff --git a/block/mirror.c b/block/mirror.c
index f9d1fec..4ac0c39 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -556,7 +556,7 @@ static int coroutine_fn
mirror_dirty_init(MirrorBlockJob *s)
end = s->bdev_length / BDRV_SECTOR_SIZE;
- if (base == NULL && !bdrv_has_zero_init(target_bs)) {
+ if (base == NULL) {
if (!bdrv_can_write_zeroes_with_unmap(target_bs)) {
bdrv_set_dirty_bitmap(s->dirty_bitmap, 0, end);
return 0;
==== test ====
qemu-img create -f qcow2 /tmp/1tb.qcow2 1T
virsh start backup-vm --paused
Domain backup-vm started
virsh qemu-monitor-command backup-vm
{"execute":"blockdev-add","arguments":{"options": {"aio": "native",
"file": {"driver": "file", "filename": "/tmp/1tb.qcow2"}, "discard":
"unmap", "cache": {"direct": true}, "driver": "qcow2", "id": "disk"}}}
{"return":{},"id":"libvirt-32"}
/usr/bin/time -f '%e seconds' sh -c 'virsh qemu-monitor-event' &
virsh qemu-monitor-command backup-vm
{"execute":"drive-mirror","arguments":{"device": "disk", "sync": "full",
"target": "/tmp/targ"}}
{"return":{},"id":"libvirt-33"}
[root@kvm qemu]# event BLOCK_JOB_READY at 1474652677.668624 for domain
backup-vm:
{"device":"disk","len":1099511627776,"offset":1099511627776,"speed":0,"type":"mirror"}
events received: 1
86.39 seconds
- the same for a 2 TB empty disk: 180.19 seconds
- and without the patch, it takes < 1 second, of course.
--
Best regards,
Vladimir