On 09/03/2015 04:41 AM, Eric Blake wrote:
> On 09/02/2015 02:51 AM, Wen Congyang wrote:
>> Signed-off-by: Wen Congyang <we...@cn.fujitsu.com>
>> Signed-off-by: Yang Hongyang <yan...@cn.fujitsu.com>
>> Signed-off-by: zhanghailiang <zhang.zhanghaili...@huawei.com>
>> Signed-off-by: Gonglei <arei.gong...@huawei.com>
>> ---
>>  docs/block-replication.txt | 183 +++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 183 insertions(+)
>>  create mode 100644 docs/block-replication.txt
>>
>
>> +
>> +    1) Primary write requests will be copied and forwarded to Secondary
>> +       QEMU.
>> +    2) Before Primary write requests are written to Secondary disk, the
>> +       original sector content will be read from Secondary disk and
>> +       buffered in the Disk buffer, but it will not overwrite the existing
>> +       sector content(it could be from either "Secondary Write Requests" or
>
> space before '(' in English sentences.
>
>> +       previous COW of "Primary Write Requests") in the Disk buffer.
>> +    3) Primary write requests will be written to Secondary disk.
>> +    4) Secondary write requests will be buffered in the Disk buffer and it
>> +       will overwrite the existing sector content in the buffer.
>> +
>> +== Architecture ==
>
>> +                3 NBD  ------->  3 NBD                                                 |
>> +                client    ||    server                                              2 filter
>> +                          ||       ^                                                   ^
>> +--------.                 ||       |                                                   |
>> +Primary |                 ||  Secondary disk <--------- hidden-disk 5 <--------- active-disk 4
>> +--------'                 ||       |          backing       ^       backing
>> +                          ||       |                        |
>> +                          ||       |                        |
>> +                          ||       '------------------------'
>> +                          ||          drive-backup sync=none
>> +
>
>> +
>> +4) The disk on the secondary is represented by a custom block device
>> +(called active-disk). It should be an empty disk, and the format should
>> +support bdrv_make_empty() and backing file.
>
> s/be an empty disk/start as an empty disk/
>
>> +
>> +5) The hidden-disk is created automatically. It buffers the original content
>> +that is modified by the primary VM. It should also be an empty disk, and
>
> s/be/start as/
>
>> +the driver supports bdrv_make_empty() and backing file.
>
> Missing mention that a drive-backup job is run to allow hidden-disk to
> buffer any state that would otherwise be lost by the speculative
> write-through of the NBD server into the secondary disk.
>
>> +
>> +== Failure Handling ==
>> +There are 6 internal errors when block replication is running:
>> +1. I/O error on primary disk
>> +2. Forwarding primary write requests failed
>> +3. Backup failed
>> +4. I/O error on secondary disk
>> +5. I/O error on active disk
>> +6. Making active disk or hidden disk empty failed
>> +In case 1 and 5, we just report the error to the disk layer. In case 2, 3,
>> +4 and 6, we just report block replication's error to FT/HA manager(which
>
> space before '('
>
>> +decides when to do a new checkpoint, when to do failover).
>> +There is one internal error when doing failover:
>> +1. Commiting the data in active disk/hidden disk to secondary disk failed
>
> s/Commiting/Committing/
>
>> +We just to report this error to FT/HA manager.
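
A note on case 3 above: "Backup failed" refers to the internal
drive-backup job you mention, which copies the original secondary disk
content into the hidden disk. For reference, that job is roughly
equivalent to the following QMP command (only a sketch -- the job is in
fact started automatically by the replication driver, and the device id
and file name here are the ones from the usage example below):

  { "execute": "drive-backup",
    "arguments": { "device": "colo1",
                   "sync": "none",
                   "mode": "existing",
                   "target": "hidden_disk.qcow2" } }

With sync=none, the job copies only the original content of sectors
that are about to be overwritten, which is exactly the buffering
behavior described in the workflow section.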
>> +
>> +== New block driver interface ==
>
>> +
>> +== Usage ==
>> +Primary:
>> +  -drive if=xxx,driver=quorum,read-pattern=fifo,id=colo1,vote-threshold=1\
>> +         children.0.file.filename=1.raw,\
>> +         children.0.driver=raw,\
>> +
>> +  Run qmp command in primary qemu:
>> +    child_add disk1 child.driver=replication,child.mode=primary,\
>> +              child.file.host=xxx,child.file.port=xxx,\
>> +              child.file.driver=nbd,child.ignore-errors=on
>
> My comments earlier in this series mean this step should be two QMP
> commands: the first is blockdev-add to create an unassociated BDS, the
> second to then add that BDS into the quorum.
>
>> +  Note:
>> +  1. There should be only one NBD Client for each primary disk.
>> +  2. host is the secondary physical machine's hostname or IP
>> +  3. Each disk must have its own export name.
>> +  4. It is all a single argument to -drive and child_add, and you should
>> +     ignore the leading whitespace.
>> +  5. The qmp command line must be run after running qmp command line in
>> +     secondary qemu.
>> +
>> +Secondary:
>> +  -drive if=none,driver=raw,file=1.raw,id=colo1 \
>> +  -drive if=xxx,driver=replication,mode=secondary,\
>> +         file.file.filename=active_disk.qcow2,\
>> +         file.driver=qcow2,\
>> +         file.backing.file.filename=hidden_disk.qcow2,\
>> +         file.backing.driver=qcow2,\
>> +         file.backing.allow-write-backing-file=on,\
>> +         file.backing.backing.backing_reference=colo1\
>> +
>> +  Then run qmp command in secondary qemu:
>> +    nbd-server-start host:port
>> +    nbd-server-add -w colo1
>> +
>> +  Note:
>> +  1. The export name in secondary QEMU command line is the secondary
>> +     disk's id.
>> +  2. The export name for the same disk must be the same
>> +  3. The qmp command nbd-server-start and nbd-server-add must be run
>> +     before running the qmp command migrate on primary QEMU
>> +  4. Don't use nbd-server-start's other options
>> +  5. Active disk, hidden disk and nbd target's length should be the
>> +     same.
>> +  6. It is better to put active disk and hidden disk in ramdisk.
>> +  7. It is all a single argument to -drive, and you should ignore
>> +     the leading whitespace.
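
One more note on the secondary setup: in QMP syntax, the two commands
above would look roughly like this (a sketch only, keeping the
host/port placeholders from the example):

  { "execute": "nbd-server-start",
    "arguments": { "addr": { "type": "inet",
                             "data": { "host": "xxx", "port": "xxx" } } } }

  { "execute": "nbd-server-add",
    "arguments": { "device": "colo1", "writable": true } }

The -w flag maps to "writable": true; without it the export is
read-only and the primary's forwarded writes would fail.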
> Missing: document the steps taken during failover (that is, how do I
> promote a Secondary into a new Primary, and then attach a new Secondary
> to that point).

Continuous block replication is in the TODO list. But I think it will be
very easy to implement once the quorum's children can be hot-added and
hot-removed.

> In particular, I suspect there may be differences
> between whether you want to roll back to the state of the last
> checkpoint (in hidden_disk) or just go with the current state of the
> Secondary (in Active); either way, it probably involves doing an active
> commit of the state you want into Secondary, then the formation of a new
> quorum to start handing replication data off through a new NBD client
> connection.

For a periodic checkpoint, the secondary VM is not running, so we just
commit the hidden disk to the secondary disk.
For COLO, the secondary VM is running and we need its current state, so
we just commit the active disk to the secondary disk (the hidden disk is
committed as well).

In which case would we need to drop the secondary disk's current state
and commit the hidden disk instead?

Thanks
Wen Congyang
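
P.S. To make the commit step concrete: committing the secondary's
current state down into the secondary disk could look roughly like the
following block-commit command (a sketch only -- the file names are the
ones from the usage example, "xxx" stands for the secondary replication
drive's id, and whether block-commit can be issued directly on this
chain is part of the open question above):

  { "execute": "block-commit",
    "arguments": { "device": "xxx",
                   "top": "active_disk.qcow2",
                   "base": "1.raw" } }

For the roll-back-to-last-checkpoint case, top would be
hidden_disk.qcow2 instead, committing only the state as of the last
checkpoint. Because the first form commits the active layer, the job
emits BLOCK_JOB_READY and must be finished with block-job-complete.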