Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-02 Thread Michael R. Hines

Is this up to date:

On 06/29/2015 10:34 PM, Wen Congyang wrote:

Block replication is a very important feature, used for
continuous checkpoints (for example, COLO).

Usage:
Please refer to docs/block-replication.txt

You can get the patch here:
https://github.com/wencongyang/qemu-colo/commits/block-replication-v7

You can get this patch with the framework here:
https://github.com/wencongyang/qemu-colo/commits/colo_framework_v7.2

TODO:
1. Continuous block replication. It will be started after basic functions
are accepted.

Change Log:
V7:
1. Implement adding/removing a quorum child. Remove the option non-connect.
2. Simplify the backing reference option according to Stefan Hajnoczi's
suggestion.
V6:
1. Rebase to the newest qemu.
V5:
1. Address the comments from Gong Lei.
2. Speed up failover. The secondary VM can take over very quickly even
if there are many outstanding I/O requests.
V4:
1. Introduce a new driver, replication, to avoid touching nbd and qcow2.
V3:
1. Use error_setg() instead of error_set().
2. Add a new block job API.
3. Active disk, hidden disk and nbd target use the same AioContext.
4. Add a testcase to test the new hbitmap API.
V2:
1. Redesign the secondary qemu (use image fleecing).
2. Use Error objects to return error messages.
3. Address the comments from Max Reitz and Eric Blake.


Wen Congyang (17):
   Add new block driver interface to add/delete a BDS's child
   quorum: implement block driver interfaces add/delete a BDS's child
   hmp: add monitor command to add/remove a child
   introduce a new API qemu_opts_absorb_qdict_by_index()
   quorum: allow ignoring child errors
   introduce a new API to enable/disable attach device model
   introduce a new API to check if blk is attached
   block: make bdrv_put_ref_bh_schedule() as a public API
   Backup: clear all bitmap when doing block checkpoint
   allow writing to the backing file
   Allow creating backup jobs when opening BDS
   block: Allow references for backing files
   docs: block replication's description
   Add new block driver interfaces to control block replication
   skip nbd_target when starting block replication
   quorum: implement block driver interfaces for block replication
   Implement new driver for block replication

  block.c| 198 +-
  block/Makefile.objs|   3 +-
  block/backup.c |  13 ++
  block/block-backend.c  |  33 +++
  block/quorum.c | 244 ++-
  block/replication.c| 443 +
  blockdev.c |  90 ++---
  blockjob.c |  10 +
  docs/block-replication.txt | 179 +
  hmp-commands.hx|  28 +++
  include/block/block.h  |  11 +
  include/block/block_int.h  |  19 ++
  include/block/blockjob.h   |  12 ++
  include/qemu/option.h  |   2 +
  include/sysemu/block-backend.h |   3 +
  include/sysemu/blockdev.h  |   2 +
  qapi/block.json|  16 ++
  util/qemu-option.c |  44 
  18 files changed, 1303 insertions(+), 47 deletions(-)
  create mode 100644 block/replication.c
  create mode 100644 docs/block-replication.txt



Is this up to date with the wiki documentation?

https://github.com/wencongyang/qemu-colo/commits/block-replication-v7

I just want to test this patchset, rather than the whole COLO patchset.

- Michael




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-02 Thread Michael R. Hines

On 06/29/2015 10:34 PM, Wen Congyang wrote:

[...]



Can you move the interfaces for
blk_start_replication()/blk_stop_replication()/blk_do_checkpoint() to this
patch series instead of the COLO patch series?

I don't think those functions are specific to COLO --- they should be
available to all users.
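To make the request concrete, here is a hypothetical sketch of what a generic (non-COLO-specific) replication control API could look like. The types and signatures below are illustrative stand-ins, not QEMU's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for QEMU's BlockBackend; illustrative only. */
typedef struct BlockBackend {
    bool replication_started;
} BlockBackend;

typedef enum {
    REPLICATION_MODE_PRIMARY,
    REPLICATION_MODE_SECONDARY,
} ReplicationMode;

/* Start block replication on blk; returns 0 on success, -1 on error. */
static int blk_start_replication(BlockBackend *blk, ReplicationMode mode)
{
    (void)mode;              /* mode-specific setup elided in this sketch */
    if (blk->replication_started) {
        return -1;           /* already started */
    }
    blk->replication_started = true;
    return 0;
}

/* Take a block checkpoint; only valid while replication is running. */
static int blk_do_checkpoint(BlockBackend *blk)
{
    return blk->replication_started ? 0 : -1;
}

/* Stop block replication on blk. */
static void blk_stop_replication(BlockBackend *blk)
{
    blk->replication_started = false;
}
```

Any user that wants periodic disk checkpoints, not just COLO, could then drive the start/checkpoint/stop cycle through such an interface.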


- Michael




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-02 Thread Wen Congyang
On 07/02/2015 10:59 PM, Michael R. Hines wrote:
> On 06/29/2015 10:34 PM, Wen Congyang wrote:
>> [...]
> 
> Can you move the interfaces for
> blk_start_replication()/blk_stop_replication()/blk_do_checkpoint() to this
> patch series instead of the COLO patch series?
> 
> I don't think those functions are specific to COLO --- they should be 
> available to all users.

OK, I will do it in the next version.

Thanks
Wen Congyang

> 
> - Michael
> 
> .
> 




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-02 Thread Wen Congyang
On 07/02/2015 10:47 PM, Michael R. Hines wrote:
> Is this up to date:
> 
> On 06/29/2015 10:34 PM, Wen Congyang wrote:
>> [...]
>>
> 
> Is this up to date with the wiki documentation?
> 
> https://github.com/wencongyang/qemu-colo/commits/block-replication-v7
> 
> I just want to test this patchset, rather than the whole COLO patchset.

I forgot to update patch 13, so please set up the environment according to
the wiki.

Thanks
Wen Congyang

> 
> - Michael
> 
> .
> 




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-04 Thread Wen Congyang

At 2015/7/3 23:30, Dr. David Alan Gilbert wrote:

* Wen Congyang (we...@cn.fujitsu.com) wrote:

[...]


Hi,
   I seem to be having problems with the new listed syntax on the wiki;
on the secondary I'm getting the error

  Block format 'replication' used by device 'virtio0' doesn't support the 
option 'export'

./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
  -boot c -m 4096 -smp 4 -S \
  -name debug-threads=on -trace events=trace-file \
  -netdev tap,id=hn0,script=$PWD/ifup-slave,\
downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
 \
  -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
  -device virtio-rng-pci \
  -drive 
if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native
 \
  -drive 
if=virtio,driver=replication,mode=secondary,export=colo1,throttling.bps-total-max=7000,\
file.file.filename=$TMPDISKS/colo-active-disk.qcow2,\
file.driver=qcow2,\
file.backing.file.filename=$TMPDISKS/colo-hidden-disk.qcow2,\
file.backing.driver=qcow2,\
file.backing.backing.backing_reference=colo1,\
file.backing.allow-write-backing-file=on \
  -incoming tcp:0:


Sorry, the export option was removed, because we use the QMP command
nbd-server-add to export a BB (BlockBackend) over NBD.
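For reference, the QMP sequence on the secondary would look something like this (the port number and export name are just the examples used elsewhere in this thread):

```json
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "0.0.0.0", "port": "8889" } } } }

{ "execute": "nbd-server-add",
  "arguments": { "device": "colo1", "writable": true } }
```

The HMP equivalents are nbd_server_start and nbd_server_add -w.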




This is using 6fd6ce32 from the colo_framework_v7.2  tag.

What have I missed?

Dave
P.S. the name 'file.backing.backing.backing_reference' is not nice!


Which name would be better? My native language is not English, so any
suggestion is welcome.


Thanks
Wen Congyang





[...]


--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK






Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-06 Thread Dr. David Alan Gilbert
* Wen Congyang (ghost...@gmail.com) wrote:
> At 2015/7/3 23:30, Dr. David Alan Gilbert Wrote:
> >* Wen Congyang (we...@cn.fujitsu.com) wrote:
> >>[...]
> >
> >Hi,
> >   I seem to be having problems with the new listed syntax on the wiki;
> >on the secondary I'm getting the error
> >
> >  Block format 'replication' used by device 'virtio0' doesn't support the 
> > option 'export'
> >
> >[...]
> 
> Sorry, the export option was removed, because we use the QMP command
> nbd-server-add to export a BB (BlockBackend) over NBD.

If I remove the export= option, the secondary starts, but then when I issue
the child_add on the primary I get the error:

Server requires an export name

on the secondary I had done:

(qemu) nbd_server_start :8889
(qemu) nbd_server_add -w colo1

Can you please confirm the exact command line and sequence needed with this
7.2 set?

> 
> >This is using 6fd6ce32 from the colo_framework_v7.2  tag.
> >
> >What have I missed?
> >
> >Dave
> >P.S. the name 'file.backing.backing.backing_reference' is not nice!
> 
> Which name is better? My native language is not English, so any such
> suggestion is welcome.

My only problem is the way that 'backing' is repeated 3 times; that seems
odd!

Dave

> 
> Thanks
> Wen Congyang
> 
> >
> >>[...]

Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-06 Thread Wen Congyang
On 07/06/2015 05:42 PM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (ghost...@gmail.com) wrote:
>> At 2015/7/3 23:30, Dr. David Alan Gilbert Wrote:
>>> * Wen Congyang (we...@cn.fujitsu.com) wrote:
[...]
>>>
>>> Hi,
>>>   I seem to be having problems with the new listed syntax on the wiki;
>>> on the secondary I'm getting the error
>>>
>>>  Block format 'replication' used by device 'virtio0' doesn't support the 
>>> option 'export'
>>>
>>> [...]
>>
>> Sorry, the export option was removed, because we use the QMP command
>> nbd-server-add to export a BB (BlockBackend) over NBD.
> 
> If I remove the export=  the secondary starts, but then when I issue the 
> child_add
> on the primary I get the error:
> 
> Server requires an export name
> 
> on the secondary I had done:
> 
> (qemu) nbd_server_start :8889
> (qemu) nbd_server_add -w colo1
> 
> Can you please confirm the exact command line and sequence needed with this
> 7.2 set?

This is the child_add command that I used:

child_add disk1 
child.driver=replication,child.mode=primary,child.file.host=$nbd_host,child.file.port=$nbd_port,child.file.export=colo1,child.file.driver=nbd,child.ignore-errors=on

I guess you removed export= from this command too.
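Pieced together from the commands quoted in this thread, the full sequence would then be roughly as follows (hostnames, port, and IDs are the examples used above; the child_add is a single line, wrapped here only for readability):

```text
# On the secondary, export the NBD target:
(qemu) nbd_server_start :8889
(qemu) nbd_server_add -w colo1

# On the primary, attach the NBD client as a new quorum child:
(qemu) child_add disk1 child.driver=replication,child.mode=primary,
           child.file.host=<secondary-host>,child.file.port=8889,
           child.file.export=colo1,child.file.driver=nbd,
           child.ignore-errors=on
```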

Thanks
Wen Congyang

> [...]

Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-06 Thread Michael R. Hines

On 07/04/2015 07:46 AM, Wen Congyang wrote:

At 2015/7/3 23:30, Dr. David Alan Gilbert Wrote:

* Wen Congyang (we...@cn.fujitsu.com) wrote:

[...]


Hi,
   I seem to be having problems with the new listed syntax on the wiki;
on the secondary I'm getting the error

  Block format 'replication' used by device 'virtio0' doesn't support 
the option 'export'


[...]


Sorry, the export option was removed, because we use the QMP command
nbd-server-add to export a BB (BlockBackend) over NBD.




Still doesn't work. The server says:

nbd.c:nbd_receive_options():L447: read failed
nbd.c:nbd_send_negotiate():L562: option negotiation failed

- Michael




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-06 Thread Wen Congyang
On 07/07/2015 08:25 AM, Michael R. Hines wrote:
> On 07/04/2015 07:46 AM, Wen Congyang wrote:
>> At 2015/7/3 23:30, Dr. David Alan Gilbert Wrote:
>>> * Wen Congyang (we...@cn.fujitsu.com) wrote:
[...]
>>>
>>> Hi,
>>>I seem to be having problems with the new listed syntax on the wiki;
>>> on the secondary I'm getting the error
>>>
>>>   Block format 'replication' used by device 'virtio0' doesn't support the 
>>> option 'export'
>>>
>>> [...]
>>
>> Sorry, the export option was removed, because we use the QMP command
>> nbd-server-add to export a BB (BlockBackend) over NBD.
>>
> 
> Still doesn't work. The server says:
> 
> nbd.c:nbd_receive_options():L447: read failed

This log is very strange. The NBD client connects to the NBD server, and the 
NBD server then tries to read data from the NBD client, but the read fails. It 
seems that the connection is closed unexpectedly. Can you give me a more 
complete log and describe how you are using it?

Thanks
Wen Congyang

> nbd.c:nbd_send_negotiate():L562: option negotiation failed
> 
> - Michael
> 
> .
> 




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-07 Thread Dr. David Alan Gilbert
* Wen Congyang (we...@cn.fujitsu.com) wrote:
> On 07/07/2015 08:25 AM, Michael R. Hines wrote:
> > On 07/04/2015 07:46 AM, Wen Congyang wrote:
> >> At 2015/7/3 23:30, Dr. David Alan Gilbert Wrote:
> >>> * Wen Congyang (we...@cn.fujitsu.com) wrote:
>  Block replication is a very important feature which is used for
>  continuous checkpoints(for example: COLO).
> 
>  Usage:
>  Please refer to docs/block-replication.txt
> 
>  You can get the patch here:
>  https://github.com/wencongyang/qemu-colo/commits/block-replication-v7
> 
>  You can get ths patch with framework here:
>  https://github.com/wencongyang/qemu-colo/commits/colo_framework_v7.2
> >>>
> >>> Hi,
> >>> I seem to be having problems with the new listed syntax on the wiki;
> >>> on the secondary I'm getting the error
> >>>
> >>>   Block format 'replication' used by device 'virtio0' doesn't support the 
> >>> option 'export'
> >>>
> >>> ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
> >>>   -boot c -m 4096 -smp 4 -S \
> >>>   -name debug-threads=on -trace events=trace-file \
> >>>   -netdev tap,id=hn0,script=$PWD/ifup-slave,\
> >>> downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
> >>>  \
> >>>   -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
> >>>   -device virtio-rng-pci \
> >>>   -drive 
> >>> if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native
> >>>  \
> >>>   -drive 
> >>> if=virtio,driver=replication,mode=secondary,export=colo1,throttling.bps-total-max=7000,\
> >>> file.file.filename=$TMPDISKS/colo-active-disk.qcow2,\
> >>> file.driver=qcow2,\
> >>> file.backing.file.filename=$TMPDISKS/colo-hidden-disk.qcow2,\
> >>> file.backing.driver=qcow2,\
> >>> file.backing.backing.backing_reference=colo1,\
> >>> file.backing.allow-write-backing-file=on \
> >>>   -incoming tcp:0:
> >>
> >> Sorry, the option export is removed, because we use the qmp command 
> >> nbd-server-add to let a BB be NBD server.
> >>
> > 
> > Still doesn't work. The server says:
> > 
> > nbd.c:nbd_receive_options():L447: read failed
> 
> This log is very strange. The NBD client connects to NBD server, and NBD 
> server wants to read data
> from NBD client, but reading fails. It seems that the connection is closed 
> unexpectedly. Can you
> give me more log and how do you use it?

That was the same failure I was getting.   I think it's that the NBD server and 
client are in different
modes, with one of them expecting the export.

Dave

> Thanks
> Wen Congyang
> 
> > nbd.c:nbd_send_negotiate():L562: option negotiation failed
> > 
> > - Michael
> > 
> > .
> > 
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-07 Thread Paolo Bonzini


On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:
>> > This log is very strange. The NBD client connects to NBD server, and NBD 
>> > server wants to read data
>> > from NBD client, but reading fails. It seems that the connection is closed 
>> > unexpectedly. Can you
>> > give me more log and how do you use it?
> That was the same failure I was getting.   I think it's that the NBD server 
> and client are in different
> modes, with one of them expecting the export.

nbd_server_add always expects the export.

Paolo
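
For reference, the HMP nbd_server_start/nbd_server_add pair corresponds roughly
to the following QMP commands; this is a sketch from the QMP schema of that era,
and the exact field layout should be checked against qapi/block.json before
relying on it:

```json
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "0", "port": "6262" } } } }
{ "execute": "nbd-server-add",
  "arguments": { "device": "mc1", "writable": true } }
```

The export name is taken from the device id, which is why the primary's NBD
client must pass export=mc1 to find it.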



Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-07 Thread Michael R. Hines

On 07/07/2015 04:23 AM, Paolo Bonzini wrote:


On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:

This log is very strange. The NBD client connects to NBD server, and NBD server 
wants to read data
from NBD client, but reading fails. It seems that the connection is closed 
unexpectedly. Can you
give me more log and how do you use it?

That was the same failure I was getting.   I think it's that the NBD server and 
client are in different
modes, with one of them expecting the export.

nbd_server_add always expects the export.

Paolo



OK, Wen, so your wiki finally does reflect this, but now we're back to 
the "export not found error".


Again, here's the exact command line:

1. First on the secondary VM:

qemu-system-x86_64 .snip...  -drive 
if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1


2. Then, the HMP commands:

nbd_server_start 0:6262
nbd_server_add -w mc1

3. Then the primary VM:

qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on


With the error: Server requires an export name

*but*, your wiki has no export name on the primary VM side, so I added back 
the export name that is on your old wiki:


qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on: 
Failed to read export length


And server now says:

nbd.c:nbd_handle_export_name():L416: export not found
nbd.c:nbd_send_negotiate():L562: option negotiation failed
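
The long dotted -drive strings traded back and forth in this thread are just
QEMU's flat encoding of a nested option tree. A small sketch makes the
correspondence visible (flatten_opts is a hypothetical helper for illustration,
not part of QEMU):

```python
def flatten_opts(tree, prefix=""):
    """Flatten a nested dict into QEMU's dotted key=value option syntax."""
    parts = []
    for key, value in tree.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested dicts become another level of dotted keys.
            parts.extend(flatten_opts(value, dotted + "."))
        else:
            parts.append(f"{dotted}={value}")
    return parts

# The NBD child of the quorum, as a nested structure:
nbd_child = {
    "driver": "replication",
    "mode": "primary",
    "file": {"driver": "nbd", "export": "mc1",
             "host": "127.0.0.1", "port": 6262},
}
print(",".join(flatten_opts(nbd_child, "children.1.")))
```

This prints the same shape as the option strings above:
children.1.driver=replication,children.1.mode=primary,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262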




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-07 Thread Wen Congyang
On 07/08/2015 12:56 AM, Michael R. Hines wrote:
> On 07/07/2015 04:23 AM, Paolo Bonzini wrote:
>>
>> On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:
> This log is very strange. The NBD client connects to NBD server, and NBD 
> server wants to read data
> from NBD client, but reading fails. It seems that the connection is 
> closed unexpectedly. Can you
> give me more log and how do you use it?
>>> That was the same failure I was getting.   I think it's that the NBD server 
>>> and client are in different
>>> modes, with one of them expecting the export.
>> nbd_server_add always expects the export.
>>
>> Paolo
>>
> 
> OK, Wen, so your wiki finally does reflect this, but now we're back to the 
> "export not found error".
> 
> Again, here's the exact command line:
> 
> 1. First on the secondary VM:
> 
> qemu-system-x86_64 .snip...  -drive 
> if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
> if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1
> 
> 2. Then, the HMP commands:
> 
> nbd_server_start 0:6262
> nbd_server_add -w mc1
> 
> 3. Then the primary VM:
> 
> qemu-system-x86_64 .snip...  -drive 
> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on
> 
> With the error: Server requires an export name
> 
> *but*, your wiki has no export name on the primary VM side, so I added the 
> export name back which is on your old wiki:
> 
> qemu-system-x86_64 .snip...  -drive 
> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on:
>  Failed to read export length
> 

Hmm, I think you are using the v7 version. Is that right?
In this version, the correct usage for the primary qemu is:
1. qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,id=disk1,
children.0.file.filename=bar.qcow2,children.0.driver=qcow2,

Then run the HMP monitor command (it must be run after nbd_server_add has been 
run in the secondary qemu):
child_add disk1 
child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on

Thanks
Wen Congyang

> And server now says:
> 
> nbd.c:nbd_handle_export_name():L416: export not found
> nbd.c:nbd_send_negotiate():L562: option negotiation failed
> 
> .
> 
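
Put together, the primary-side sequence described above amounts to the
following; this is a sketch only, where the binary path, image name, and port
are taken from the examples in this thread and are illustrative:

```shell
# Start the primary with a quorum drive that has only the local child;
# the NBD child is attached later, once the secondary's NBD server is up.
qemu-system-x86_64 ... \
  -drive if=virtio,id=disk1,driver=quorum,read-pattern=fifo,\
children.0.file.filename=bar.qcow2,children.0.driver=qcow2

# Then, in the HMP monitor (only after nbd_server_add has run on the
# secondary), attach the NBD client as the second quorum child:
# child_add disk1 child.driver=replication,child.mode=primary,\
# child.file.host=127.0.0.1,child.file.port=6262,\
# child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
```

Attaching the NBD child at runtime is what avoids the connection failure at
startup: the client only dials out once the server side is listening.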




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-08 Thread Michael R. Hines

On 07/07/2015 08:38 PM, Wen Congyang wrote:

On 07/08/2015 12:56 AM, Michael R. Hines wrote:

On 07/07/2015 04:23 AM, Paolo Bonzini wrote:

On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:

This log is very strange. The NBD client connects to NBD server, and NBD server 
wants to read data
from NBD client, but reading fails. It seems that the connection is closed 
unexpectedly. Can you
give me more log and how do you use it?

That was the same failure I was getting.   I think it's that the NBD server and 
client are in different
modes, with one of them expecting the export.

nbd_server_add always expects the export.

Paolo


OK, Wen, so your wiki finally does reflect this, but now we're back to the "export 
not found error".

Again, here's the exact command line:

1. First on the secondary VM:

qemu-system-x86_64 .snip...  -drive 
if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1

2. Then, the HMP commands:

nbd_server_start 0:6262
nbd_server_add -w mc1

3. Then the primary VM:

qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on

With the error: Server requires an export name

*but*, your wiki has no export name on the primary VM side, so I added the 
export name back which is on your old wiki:

qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on:
 Failed to read export length


Hmm, I think you are using the v7 version. Is that right?
In this version, the correct usage for the primary qemu is:
1. qemu-system-x86_64 .snip...  -drive 
if=virtio,driver=quorum,read-pattern=fifo,id=disk1,
   
children.0.file.filename=bar.qcow2,children.0.driver=qcow2,

Then run hmp monitor command(It should be run after you run nbd_server_add in 
the secondary qemu):
child_add disk1 
child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on

Thanks
Wen Congyang


And server now says:

nbd.c:nbd_handle_export_name():L416: export not found
nbd.c:nbd_send_negotiate():L562: option negotiation failed

.





OK, I'm totally confused at this point. =)

Maybe it would be easier to just wait for your next clean patchset and 
documentation, once they are all consistent with each other.


- Michael




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-08 Thread Wen Congyang
On 07/08/2015 11:49 PM, Michael R. Hines wrote:
> On 07/07/2015 08:38 PM, Wen Congyang wrote:
>> On 07/08/2015 12:56 AM, Michael R. Hines wrote:
>>> On 07/07/2015 04:23 AM, Paolo Bonzini wrote:
 On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:
>>> This log is very strange. The NBD client connects to NBD server, and NBD 
>>> server wants to read data
>>> from NBD client, but reading fails. It seems that the connection is 
>>> closed unexpectedly. Can you
>>> give me more log and how do you use it?
> That was the same failure I was getting.   I think it's that the NBD 
> server and client are in different
> modes, with one of them expecting the export.
 nbd_server_add always expects the export.

 Paolo

>>> OK, Wen, so your wiki finally does reflect this, but now we're back to the 
>>> "export not found error".
>>>
>>> Again, here's the exact command line:
>>>
>>> 1. First on the secondary VM:
>>>
>>> qemu-system-x86_64 .snip...  -drive 
>>> if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
>>> if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1
>>>
>>> 2. Then, the HMP commands:
>>>
>>> nbd_server_start 0:6262
>>> nbd_server_add -w mc1
>>>
>>> 3. Then the primary VM:
>>>
>>> qemu-system-x86_64 .snip...  -drive 
>>> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on
>>>
>>> With the error: Server requires an export name
>>>
>>> *but*, your wiki has no export name on the primary VM side, so I added the 
>>> export name back which is on your old wiki:
>>>
>>> qemu-system-x86_64 .snip...  -drive 
>>> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on:
>>>  Failed to read export length
>>>
>> Hmm, I think you are using the v7 version. Is that right?
>> In this version, the correct usage for the primary qemu is:
>> 1. qemu-system-x86_64 .snip...  -drive 
>> if=virtio,driver=quorum,read-pattern=fifo,id=disk1,
>>
>> children.0.file.filename=bar.qcow2,children.0.driver=qcow2,
>>
>> Then run hmp monitor command(It should be run after you run nbd_server_add 
>> in the secondary qemu):
>> child_add disk1 
>> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
>>
>> Thanks
>> Wen Congyang
>>
>>> And server now says:
>>>
>>> nbd.c:nbd_handle_export_name():L416: export not found
>>> nbd.c:nbd_send_negotiate():L562: option negotiation failed
>>>
>>> .
>>>
>>
> 
> OK, I'm totally confused at this point. =)
> 
> Maybe it would be easier to just wait for your next clean patchset + 
> documentation which is all consistent with each other.

I have sent v8, but the usage is unchanged. You can set up the environment 
according to the wiki.
When we open the NBD client, it needs to connect to the NBD server, so I 
introduced a new command, child_add, to add the NBD client as a quorum child 
once the NBD server is ready.

The NBD server is ready after you run the following commands:
nbd_server_start 0:6262 # the secondary qemu will listen on host:port
nbd_server_add -w mc1   # export this disk over NBD; the export name is its 
                        # id, mc1. -w means writes to this disk are allowed.

Then you can run the following command in the primary qemu:
child_add disk1 
child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on

After this monitor command, the NBD client is connected to the NBD server.

Thanks
Wen Congyang

> 
> - Michael
> 
> .
> 
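
For completeness, the secondary-side steps described in this thread can be
sketched as below; the paths, port, and drive layout are taken from the
examples above and are illustrative, not authoritative:

```shell
# Secondary: a replication drive stacked on active/hidden disks, with a
# backing reference to the disk that will be exported (id=mc1).
qemu-system-x86_64 ... \
  -drive if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native \
  -drive if=virtio,driver=replication,mode=secondary,\
file.driver=qcow2,file.file.filename=active_disk.qcow2,\
file.backing.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,\
file.backing.allow-write-backing-file=on,\
file.backing.backing.backing_reference=mc1

# HMP monitor, on the secondary, before the primary runs child_add:
# nbd_server_start 0:6262   # listen for the primary's NBD client
# nbd_server_add -w mc1     # export mc1 writable; export name = drive id
```

The ordering matters: the export must exist before the primary's child_add,
otherwise the client sees the "export not found" negotiation failure quoted
earlier in the thread.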




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-08 Thread Dr. David Alan Gilbert
* Wen Congyang (we...@cn.fujitsu.com) wrote:
> On 07/08/2015 11:49 PM, Michael R. Hines wrote:
> > On 07/07/2015 08:38 PM, Wen Congyang wrote:
> >> On 07/08/2015 12:56 AM, Michael R. Hines wrote:
> >>> On 07/07/2015 04:23 AM, Paolo Bonzini wrote:
>  On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:
> >>> This log is very strange. The NBD client connects to NBD server, and 
> >>> NBD server wants to read data
> >>> from NBD client, but reading fails. It seems that the connection is 
> >>> closed unexpectedly. Can you
> >>> give me more log and how do you use it?
> > That was the same failure I was getting.   I think it's that the NBD 
> > server and client are in different
> > modes, with one of them expecting the export.
>  nbd_server_add always expects the export.
> 
>  Paolo
> 
> >>> OK, Wen, so your wiki finally does reflect this, but now we're back to 
> >>> the "export not found error".
> >>>
> >>> Again, here's the exact command line:
> >>>
> >>> 1. First on the secondary VM:
> >>>
> >>> qemu-system-x86_64 .snip...  -drive 
> >>> if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
> >>> if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1
> >>>
> >>> 2. Then, the HMP commands:
> >>>
> >>> nbd_server_start 0:6262
> >>> nbd_server_add -w mc1
> >>>
> >>> 3. Then the primary VM:
> >>>
> >>> qemu-system-x86_64 .snip...  -drive 
> >>> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on
> >>>
> >>> With the error: Server requires an export name
> >>>
> >>> *but*, your wiki has no export name on the primary VM side, so I added 
> >>> the export name back which is on your old wiki:
> >>>
> >>> qemu-system-x86_64 .snip...  -drive 
> >>> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on:
> >>>  Failed to read export length
> >>>
> >> Hmm, I think you are using the v7 version. Is that right?
> >> In this version, the correct usage for the primary qemu is:
> >> 1. qemu-system-x86_64 .snip...  -drive 
> >> if=virtio,driver=quorum,read-pattern=fifo,id=disk1,
> >>
> >> children.0.file.filename=bar.qcow2,children.0.driver=qcow2,
> >>
> >> Then run hmp monitor command(It should be run after you run nbd_server_add 
> >> in the secondary qemu):
> >> child_add disk1 
> >> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
> >>
> >> Thanks
> >> Wen Congyang
> >>
> >>> And server now says:
> >>>
> >>> nbd.c:nbd_handle_export_name():L416: export not found
> >>> nbd.c:nbd_send_negotiate():L562: option negotiation failed
> >>>
> >>> .
> >>>
> >>
> > 
> > OK, I'm totally confused at this point. =)
> > 
> > Maybe it would be easier to just wait for your next clean patchset + 
> > documentation which is all consistent with each other.
> 
> I have sent the v8. But the usage is not changed. You can setup the 
> environment according to the wiki.
> When we open nbd client, we need to connect to the nbd server. So I introduce 
> a new command child_add to add NBD client
> as a quorum child when the nbd server is ready.
> 
> The nbd server is ready after you run the following command:
> nbd_server_start 0:6262 # the secondary qemu will listen to host:port
> nbd_server_add -w mc1   # the NBD server will know this disk is used as NBD 
> server. The export name is its id mc1.
> # -w means we allow to write to this disk.
> 
> Then you can run the following command in the primary qemu:
> child_add disk1 
> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
> 
> After this monitor command, nbd client has connected to the nbd server.

Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added that 
to the wiki yesterday;
that probably explains the problem that we've been having.

Dave

> 
> Thanks
> Wen Congyang
> 
> > 
> > - Michael
> > 
> > .
> > 
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-08 Thread Wen Congyang
On 07/09/2015 09:55 AM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (we...@cn.fujitsu.com) wrote:
>> On 07/08/2015 11:49 PM, Michael R. Hines wrote:
>>> On 07/07/2015 08:38 PM, Wen Congyang wrote:
 On 07/08/2015 12:56 AM, Michael R. Hines wrote:
> On 07/07/2015 04:23 AM, Paolo Bonzini wrote:
>> On 07/07/2015 11:13, Dr. David Alan Gilbert wrote:
> This log is very strange. The NBD client connects to NBD server, and 
> NBD server wants to read data
> from NBD client, but reading fails. It seems that the connection is 
> closed unexpectedly. Can you
> give me more log and how do you use it?
>>> That was the same failure I was getting.   I think it's that the NBD 
>>> server and client are in different
>>> modes, with one of them expecting the export.
>> nbd_server_add always expects the export.
>>
>> Paolo
>>
> OK, Wen, so your wiki finally does reflect this, but now we're back to 
> the "export not found error".
>
> Again, here's the exact command line:
>
> 1. First on the secondary VM:
>
> qemu-system-x86_64 .snip...  -drive 
> if=none,driver=qcow2,file=foo.qcow2,id=mc1,cache=none,aio=native -drive 
> if=virtio,driver=replication,mode=secondary,throttling.bps-total-max=7000,file.file.filename=active_disk.qcow2,file.driver=qcow2,file.backing.file.filename=hidden_disk.qcow2,file.backing.driver=qcow2,file.backing.allow-write-backing-file=on,file.backing.backing.backing_reference=mc1
>
> 2. Then, the HMP commands:
>
> nbd_server_start 0:6262
> nbd_server_add -w mc1
>
> 3. Then the primary VM:
>
> qemu-system-x86_64 .snip...  -drive 
> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on
>
> With the error: Server requires an export name
>
> *but*, your wiki has no export name on the primary VM side, so I added 
> the export name back which is on your old wiki:
>
> qemu-system-x86_64 .snip...  -drive 
> if=virtio,driver=quorum,read-pattern=fifo,no-connect=on,children.0.file.filename=bar.qcow2,children.0.driver=qcow2,children.1.file.driver=nbd,children.1.file.export=mc1,children.1.file.host=127.0.0.1,children.1.file.port=6262,children.1.driver=replication,children.1.mode=primary,children.1.ignore-errors=on:
>  Failed to read export length
>
 Hmm, I think you are using the v7 version. Is that right?
 In this version, the correct usage for the primary qemu is:
 1. qemu-system-x86_64 .snip...  -drive 
 if=virtio,driver=quorum,read-pattern=fifo,id=disk1,

 children.0.file.filename=bar.qcow2,children.0.driver=qcow2,

 Then run hmp monitor command(It should be run after you run nbd_server_add 
 in the secondary qemu):
 child_add disk1 
 child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on

 Thanks
 Wen Congyang

> And server now says:
>
> nbd.c:nbd_handle_export_name():L416: export not found
> nbd.c:nbd_send_negotiate():L562: option negotiation failed
>
> .
>

>>>
>>> OK, I'm totally confused at this point. =)
>>>
>>> Maybe it would be easier to just wait for your next clean patchset + 
>>> documentation which is all consistent with each other.
>>
>> I have sent the v8. But the usage is not changed. You can setup the 
>> environment according to the wiki.
>> When we open nbd client, we need to connect to the nbd server. So I 
>> introduce a new command child_add to add NBD client
>> as a quorum child when the nbd server is ready.
>>
>> The nbd server is ready after you run the following command:
>> nbd_server_start 0:6262 # the secondary qemu will listen to host:port
>> nbd_server_add -w mc1   # the NBD server will know this disk is used as NBD 
>> server. The export name is its id mc1.
>> # -w means we allow to write to this disk.
>>
>> Then you can run the following command in the primary qemu:
>> child_add disk1 
>> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
>>
>> After this monitor command, nbd client has connected to the nbd server.
> 
> Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added 
> that to the wiki yesterday;
> that probably explains the problem that we've been having.

Sorry for this mistake.

Thanks
Wen Congyang

> 
> Dave
> 
>>
>> Thanks
>> Wen Congyang
>>
>>>
>>> - Michael
>>>
>>> .
>>>
>>
>>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manch

Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Dr. David Alan Gilbert
* Wen Congyang (we...@cn.fujitsu.com) wrote:

> >> I have sent the v8. But the usage is not changed. You can setup the 
> >> environment according to the wiki.
> >> When we open nbd client, we need to connect to the nbd server. So I 
> >> introduce a new command child_add to add NBD client
> >> as a quorum child when the nbd server is ready.
> >>
> >> The nbd server is ready after you run the following command:
> >> nbd_server_start 0:6262 # the secondary qemu will listen to host:port
> >> nbd_server_add -w mc1   # the NBD server will know this disk is used as 
> >> NBD server. The export name is its id mc1.
> >> # -w means we allow to write to this disk.
> >>
> >> Then you can run the following command in the primary qemu:
> >> child_add disk1 
> >> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
> >>
> >> After this monitor command, nbd client has connected to the nbd server.
> > 
> > Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added 
> > that to the wiki yesterday;
> > that probably explains the problem that we've been having.
> 
> Sorry for this mistake.

OK, so this is working for me (with the 7.2 world). What isn't working in this 
setup is migrate -b; I get:

(qemu) Receiving block device images
Error unknown block device disk1
qemu-system-x86_64: error while loading state section id 1(block)
qemu-system-x86_64: load of migration failed: Invalid argument

Can you explain the id=disk1 on the master side?

Dave

> 
> Thanks
> Wen Congyang
> 
> > 
> > Dave
> > 
> >>
> >> Thanks
> >> Wen Congyang
> >>
> >>>
> >>> - Michael
> >>>
> >>> .
> >>>
> >>
> >>
> > --
> > Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> > .
> > 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Wen Congyang
On 07/09/2015 05:16 PM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (we...@cn.fujitsu.com) wrote:
> 
 I have sent the v8. But the usage is not changed. You can setup the 
 environment according to the wiki.
 When we open nbd client, we need to connect to the nbd server. So I 
 introduce a new command child_add to add NBD client
 as a quorum child when the nbd server is ready.

 The nbd server is ready after you run the following command:
 nbd_server_start 0:6262 # the secondary qemu will listen to host:port
 nbd_server_add -w mc1   # the NBD server will know this disk is used as 
 NBD server. The export name is its id mc1.
 # -w means we allow to write to this disk.

 Then you can run the following command in the primary qemu:
 child_add disk1 
 child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on

 After this monitor command, nbd client has connected to the nbd server.
>>>
>>> Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added 
>>> that to the wiki yesterday;
>>> that probably explains the problem that we've been having.
>>
>> Sorry for this mistake.
> 
> OK, so this is working for me (with the 7.2 world).   What isn't working in 
> this setup is migrate -b, I get:
> 
> (qemu) Receiving block device images
> Error unknown block device disk1
> qemu-system-x86_64: error while loading state section id 1(block)
> qemu-system-x86_64: load of migration failed: Invalid argument
> 
> Can you explain the id=disk1 on the master side?

Can you give me the command line? I suspect the ids in the primary and 
secondary qemu are not the same.

Thanks
Wen Congyang

> 
> Dave
> 
>>
>> Thanks
>> Wen Congyang
>>
>>>
>>> Dave
>>>

 Thanks
 Wen Congyang

>
> - Michael
>
> .
>


>>> --
>>> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
>>> .
>>>
>>
> --
> Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
> .
> 




Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Dr. David Alan Gilbert
* Wen Congyang (we...@cn.fujitsu.com) wrote:
> On 07/09/2015 05:16 PM, Dr. David Alan Gilbert wrote:
> > * Wen Congyang (we...@cn.fujitsu.com) wrote:
> > 
>  I have sent the v8. But the usage is not changed. You can setup the 
>  environment according to the wiki.
>  When we open nbd client, we need to connect to the nbd server. So I 
>  introduce a new command child_add to add NBD client
>  as a quorum child when the nbd server is ready.
> 
>  The nbd server is ready after you run the following command:
>  nbd_server_start 0:6262 # the secondary qemu will listen to host:port
>  nbd_server_add -w mc1   # the NBD server will know this disk is used as 
> NBD server. The export name is its id mc1.
>  # -w means we allow to write to this disk.
> 
>  Then you can run the following command in the primary qemu:
>  child_add disk1 
>  child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
> 
>  After this monitor command, nbd client has connected to the nbd server.
> >>>
> >>> Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added 
> >>> that to the wiki yesterday;
> >>> that probably explains the problem that we've been having.
> >>
> >> Sorry for this mistake.
> > 
> > OK, so this is working for me (with the 7.2 world).   What isn't working in 
> > this setup is migrate -b, I get:
> > 
> > (qemu) Receiving block device images
> > Error unknown block device disk1
> > qemu-system-x86_64: error while loading state section id 1(block)
> > qemu-system-x86_64: load of migration failed: Invalid argument
> > 
> > Can you explain the id=disk1 on the master side?
> 
> Can you give me the command line? I think the id in primary and secondary 
> qemu is not same.

Sure, primary:

./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
 -boot c -m 4096 -smp 4 -S \
 -name debug-threads=on -trace events=trace-file \
 -netdev tap,id=hn0,script=$PWD/ifup-prim,\
downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
 \
 -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
 -device virtio-rng-pci \
 -drive if=virtio,id=disk1,driver=quorum,read-pattern=fifo,\
cache=none,aio=native,\
children.0.file.filename=./bugzilla.raw,\
children.0.driver=raw

And the secondary:

TMPDISKS=/run
qemu-img create -f qcow2 $TMPDISKS/colo-hidden-disk.qcow2 40G
qemu-img create -f qcow2 $TMPDISKS/colo-active-disk.qcow2 40G

./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
 -boot c -m 4096 -smp 4 -S \
 -name debug-threads=on -trace events=trace-file \
 -netdev tap,id=hn0,script=$PWD/ifup-slave,\
downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
 \
 -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
 -device virtio-rng-pci \
 -drive 
if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native
 \
 -drive 
if=virtio,driver=replication,mode=secondary,export=colo1,throttling.bps-total-max=7000,\
file.file.filename=$TMPDISKS/colo-active-disk.qcow2,\
file.driver=qcow2,\
file.backing.file.filename=$TMPDISKS/colo-hidden-disk.qcow2,\
file.backing.driver=qcow2,\
file.backing.backing.backing_reference=colo1,\
file.backing.allow-write-backing-file=on \
 -incoming tcp:0:


Secondary:
   nbd_server_start :8889
   nbd_server_add -w colo1

primary:

child_add disk1 
child.driver=replication,child.mode=primary,child.file.host=ibpair,child.file.port=8889,child.file.export=colo1,child.file.driver=nbd,child.ignore-errors=on
(qemu) migrate_set_capability colo on
(qemu) migrate -d -b tcp:ibpair:


Note: the 'id=disk1' on the primary is what is shown in your wiki.

Dave
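[Editor's note: as the "Error unknown block device disk1" above suggests, migrate -b looks up block devices by id on the destination, so the primary's -drive id must also exist on the secondary. A tiny illustrative parser (drive_id is a hypothetical helper) makes the mismatch in these command lines visible:

```python
def drive_id(drive_opts):
    """Extract the id= value from a qemu -drive option string.
    Naive comma split; good enough for a sketch, not for nested options."""
    for opt in drive_opts.split(","):
        if opt.startswith("id="):
            return opt[len("id="):]
    return None

primary = "if=virtio,id=disk1,driver=quorum,read-pattern=fifo"
secondary = "if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native"

# The two ids differ, which is exactly what breaks migrate -b here.
print(drive_id(primary), drive_id(secondary))  # → disk1 colo1
```
]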

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Wen Congyang
On 07/09/2015 06:37 PM, Dr. David Alan Gilbert wrote:
> * Wen Congyang (we...@cn.fujitsu.com) wrote:
>> On 07/09/2015 05:16 PM, Dr. David Alan Gilbert wrote:
>>> * Wen Congyang (we...@cn.fujitsu.com) wrote:
>>>
>> I have sent the v8. But the usage is not changed. You can setup the 
>> environment according to the wiki.
>> When we open nbd client, we need to connect to the nbd server. So I 
>> introduce a new command child_add to add NBD client
>> as a quorum child when the nbd server is ready.
>>
>> The nbd server is ready after you run the following command:
>> nbd_server_start 0:6262 # the secondary qemu will listen to host:port
>> nbd_server_add -w mc1   # the NBD server will know this disk is used as 
>> NBD server. The export name is its id wc1.
>> # -w means we allow to write to this disk.
>>
>> Then you can run the following command in the primary qemu:
>> child_add disk1 
>> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
>>
>> After this monitor command, nbd client has connected to the nbd server.
>
> Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang added 
> that to the wiki yesterday;
> that probably explains the problem that we've been having.

 Sorry for this mistake.
>>>
>>> OK, so this is working for me (with the 7.2 world).   What isn't working in 
>>> this setup is migrate -b, I get:
>>>
>>> (qemu) Receiving block device images
>>> Error unknown block device disk1
>>> qemu-system-x86_64: error while loading state section id 1(block)
>>> qemu-system-x86_64: load of migration failed: Invalid argument
>>>
>>> Can you explain the id=disk1 on the master side?
>>
>> Can you give me the command line? I think the id in primary and secondary 
>> qemu is not same.
> 
> Sure, primary:
> 
> ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
>  -boot c -m 4096 -smp 4 -S \
>  -name debug-threads=on -trace events=trace-file \
>  -netdev tap,id=hn0,script=$PWD/ifup-prim,\
> downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
>  \
>  -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
>  -device virtio-rng-pci \
>  -drive if=virtio,id=disk1,driver=quorum,read-pattern=fifo,\

If you want to use block migration, set the id to colo1.

> cache=none,aio=native,\
> children.0.file.filename=./bugzilla.raw,\
> children.0.driver=raw
> 
> Sure, secondary:
> 
> TMPDISKS=/run
> qemu-img create -f qcow2 $TMPDISKS/colo-hidden-disk.qcow2 40G
> qemu-img create -f qcow2 $TMPDISKS/colo-active-disk.qcow2 40G
> 
> ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
>  -boot c -m 4096 -smp 4 -S \
>  -name debug-threads=on -trace events=trace-file \
>  -netdev tap,id=hn0,script=$PWD/ifup-slave,\
> downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
>  \
>  -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
>  -device virtio-rng-pci \
>  -drive 
> if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native
>  \
>  -drive 
> if=virtio,driver=replication,mode=secondary,export=colo1,throttling.bps-total-max=7000,\
> file.file.filename=$TMPDISKS/colo-active-disk.qcow2,\
> file.driver=qcow2,\
> file.backing.file.filename=$TMPDISKS/colo-hidden-disk.qcow2,\
> file.backing.driver=qcow2,\
> file.backing.backing.backing_reference=colo1,\
> file.backing.allow-write-backing-file=on \
>  -incoming tcp:0:
> 
> 
> Secondary:
>nbd_server_start :8889
>nbd_server_add -w colo1
> 
> primary:
> 
> child_add disk1 
> child.driver=replication,child.mode=primary,child.file.host=ibpair,child.file.port=8889,child.file.export=colo1,child.file.driver=nbd,child.ignore-errors=on

Here, it should be child_add colo1 

> (qemu) migrate_set_capability colo on
> (qemu) migrate -d -b tcp:ibpair:
> 
> 
> note the 'id=disk1' in the primary is shown in your wiki.

IIRC, the default id is diskX, where X is 1, 2, 3...

Thanks
Wen Congyang





Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Dr. David Alan Gilbert
* Wen Congyang (we...@cn.fujitsu.com) wrote:
> On 07/09/2015 06:37 PM, Dr. David Alan Gilbert wrote:
> > * Wen Congyang (we...@cn.fujitsu.com) wrote:
> >> On 07/09/2015 05:16 PM, Dr. David Alan Gilbert wrote:
> >>> * Wen Congyang (we...@cn.fujitsu.com) wrote:
> >>>
> >> I have sent the v8. But the usage is not changed. You can setup the 
> >> environment according to the wiki.
> >> When we open nbd client, we need to connect to the nbd server. So I 
> >> introduce a new command child_add to add NBD client
> >> as a quorum child when the nbd server is ready.
> >>
> >> The nbd server is ready after you run the following command:
> >> nbd_server_start 0:6262 # the secondary qemu will listen to host:port
> >> nbd_server_add -w mc1   # the NBD server will know this disk is used 
> >> as NBD server. The export name is its id wc1.
> >> # -w means we allow to write to this disk.
> >>
> >> Then you can run the following command in the primary qemu:
> >> child_add disk1 
> >> child.driver=replication,child.mode=primary,child.file.host=127.0.0.1,child.file.port=6262,child.file.export=mc1,child.file.driver=nbd,child.ignore-errors=on
> >>
> >> After this monitor command, nbd client has connected to the nbd server.
> >
> > Ah! The 'child.file.export=mc1' wasn't there previously; I see Yang 
> > added that to the wiki yesterday;
> > that probably explains the problem that we've been having.
> 
>  Sorry for this mistake.
> >>>
> >>> OK, so this is working for me (with the 7.2 world).   What isn't working 
> >>> in this setup is migrate -b, I get:
> >>>
> >>> (qemu) Receiving block device images
> >>> Error unknown block device disk1
> >>> qemu-system-x86_64: error while loading state section id 1(block)
> >>> qemu-system-x86_64: load of migration failed: Invalid argument
> >>>
> >>> Can you explain the id=disk1 on the master side?
> >>
> >> Can you give me the command line? I think the id in primary and secondary 
> >> qemu is not same.
> > 
> > Sure, primary:
> > 
> > ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
> >  -boot c -m 4096 -smp 4 -S \
> >  -name debug-threads=on -trace events=trace-file \
> >  -netdev tap,id=hn0,script=$PWD/ifup-prim,\
> > downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
> >  \
> >  -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
> >  -device virtio-rng-pci \
> >  -drive if=virtio,id=disk1,driver=quorum,read-pattern=fifo,\
> 
> If you want to use block migration, set the id to colo1

OK, does it make sense to always make it colo1 rather than having the 'disk1' 
and 'colo1' names?

> > cache=none,aio=native,\
> > children.0.file.filename=./bugzilla.raw,\
> > children.0.driver=raw
> > 
> > Sure, secondary:
> > 
> > TMPDISKS=/run
> > qemu-img create -f qcow2 $TMPDISKS/colo-hidden-disk.qcow2 40G
> > qemu-img create -f qcow2 $TMPDISKS/colo-active-disk.qcow2 40G
> > 
> > ./try/bin/qemu-system-x86_64 -enable-kvm -nographic \
> >  -boot c -m 4096 -smp 4 -S \
> >  -name debug-threads=on -trace events=trace-file \
> >  -netdev tap,id=hn0,script=$PWD/ifup-slave,\
> > downscript=no,colo_script=$PWD/qemu/scripts/colo-proxy-script.sh,colo_nicname=em4
> >  \
> >  -device e1000,mac=9c:da:4d:1c:b5:89,id=net-pci0,netdev=hn0 \
> >  -device virtio-rng-pci \
> >  -drive 
> > if=none,driver=raw,file=/home/localvms/bugzilla.raw,id=colo1,cache=none,aio=native
> >  \
> >  -drive 
> > if=virtio,driver=replication,mode=secondary,export=colo1,throttling.bps-total-max=7000,\
> > file.file.filename=$TMPDISKS/colo-active-disk.qcow2,\
> > file.driver=qcow2,\
> > file.backing.file.filename=$TMPDISKS/colo-hidden-disk.qcow2,\
> > file.backing.driver=qcow2,\
> > file.backing.backing.backing_reference=colo1,\
> > file.backing.allow-write-backing-file=on \
> >  -incoming tcp:0:
> > 
> > 
> > Secondary:
> >nbd_server_start :8889
> >nbd_server_add -w colo1
> > 
> > primary:
> > 
> > child_add disk1 
> > child.driver=replication,child.mode=primary,child.file.host=ibpair,child.file.port=8889,child.file.export=colo1,child.file.driver=nbd,child.ignore-errors=on
> 
> here, it is child_add colo1 
> 
> > (qemu) migrate_set_capability colo on
> > (qemu) migrate -d -b tcp:ibpair:
> > 
> > 
> > note the 'id=disk1' in the primary is shown in your wiki.
> 
> IIRC, the default id is diskx, x is 1, 2, 3...

OK, using colo1 in both those places works (this is in the 7.2 world),
and the RAM disk size no longer grows during the -b block migrate.

Dave


Re: [Qemu-block] [Qemu-devel] [PATCH COLO-BLOCK v7 00/17] Block replication for continuous checkpoints

2015-07-09 Thread Wen Congyang

At 2015/7/9 21:40, Dr. David Alan Gilbert Wrote:


OK, does it make sense to always make it colo1 rather than having the 'disk1' 
and 'colo1' names?


You can use either name. The export name must be the secondary disk's id.

Thanks
Wen Congyang
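[Editor's note: to restate the naming rule settled in this exchange — nbd_server_add exports a drive under its id, and the primary's child.file.export must name that same export. A trivial sketch of the consistency check (names_consistent is a hypothetical helper):

```python
def names_consistent(secondary_disk_id, nbd_export, primary_child_export):
    """nbd_server_add exports the secondary drive under its id, so all
    three names must agree for the primary's NBD client to connect."""
    return secondary_disk_id == nbd_export == primary_child_export

print(names_consistent("colo1", "colo1", "colo1"))  # → True
print(names_consistent("colo1", "colo1", "mc1"))    # → False
```
]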



