Re: [Xen-devel] Fwd: VM Live Migration with Local Storage

2017-06-21 Thread Paul Durrant
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
> Konrad Rzeszutek Wilk
> Sent: 20 June 2017 18:57
> To: Bruno Alvisio 
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Fwd: VM Live Migration with Local Storage
> 
> On Sun, Jun 11, 2017 at 08:16:04PM -0700, Bruno Alvisio wrote:
> > Hello,
> >
> > I think it would be beneficial to add a local disk migration feature for
> > the ‘blkback’ backend since it is one of the most widely used backends. I
> > would like to start a discussion about the design of the machinery needed
> > to achieve this feature.
> >
> > ===
> > Objective
> > Add a feature to migrate VMs that have local storage and use the blkback
> > interface.
> > ===
> >
> > ===
> > User Interface
> > Add a command-line option to the “xl migrate” command to specify whether
> > local disks need to be copied to the destination node.
> > ===
> >
> > ===
> > Design
> >
> >    1. As part of libxl_domain_suspend, the “disk mirroring machinery”
> >    starts an asynchronous job that copies the disk blocks from the source
> >    to the destination.
> >2. The protocol to copy the disks should resemble the one used for
> >memory copy:
> >
> >
> >    - Do a first full copy of the disk.
> >    - Track the sectors that have been written since the copy started. For
> >    this, the blkback driver should be aware that a disk migration is in
> >    progress and, in that case, forward the write request to the “migration
> >    machinery” so that a record of dirty blocks is logged.
> >    - The migration machinery copies “dirty” blocks until convergence.
> >    - Duplicate all disk writes/reads to the disks on both the source and
> >    destination nodes while the VM is being suspended.
> >
> >
> > Block Diagram
> >
> >+—--+
> >|  VM   |
> >+---+
> >   |
> >   | I/O Write
> >   |
> >   V
> > +--+   +---+   +-+
> > |  blkback | > |  Source   |  sectors Stream   | Destination |
> > +--+   |  mirror   |-->|   mirror|
> >   || machinery |   I/O Writes  |  machinery  |
> >   |+---+   +-+
> >   |  |
> >   |  |
> >   | To I/O block layer   |
> >   |  |
> >   V  V
> > +--+   +-+
> > |   disk   |   |   Mirrored  |
> > +--+   | Disk|
> >+-+
> >
> >
> > ==
> > Initial Questions
> >
> >1. Is it possible to leverage the current design of QEMU for drive
> >mirroring for Xen?
> 
Yes. It has qdisk, which implements the blkback interface.
> 
> >2. What is the best place to implement this protocol? As part of Xen or
> >the kernel?
> 
> QEMU

Moreover, QEMU can already export disk images via NBD, and can even layer qcow2
images on top of an NBD socket. It also has comprehensive support for mirroring
block devices, with copy-on-read and background copy threads. I'm sure all of
these capabilities could be used by a libxl toolstack, so I don't see any need
to re-invent the wheel with blkback.

  Paul
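[Editorial note: as a concrete illustration of the QEMU route described above,
QEMU's QMP monitor exposes a `drive-mirror` command that starts exactly this
kind of background mirror job, typically pointed at an NBD export on the
destination. Below is a minimal sketch of the command sequence a toolstack
might issue; `qmp_capabilities` and `drive-mirror` are real QMP commands, but
the device name, NBD URL, and helper functions are illustrative assumptions,
not real libxl code.]

```python
import json


def qmp_mirror_commands(device, target_uri):
    """Build the QMP command sequence for mirroring one drive to a remote
    NBD server. The device name and NBD URL passed in are illustrative."""
    return [
        # QMP requires capability negotiation before any other command.
        {"execute": "qmp_capabilities"},
        # Start a background mirror job: 'sync': 'full' copies the whole
        # disk first, then keeps replicating guest writes until the job
        # is completed or cancelled.
        {"execute": "drive-mirror",
         "arguments": {"device": device,
                       "target": target_uri,
                       "sync": "full",
                       "mode": "existing",
                       "format": "raw"}},
    ]


def to_wire(commands):
    """QMP is newline-delimited JSON over the monitor socket."""
    return b"".join(json.dumps(c).encode() + b"\n" for c in commands)


wire = to_wire(qmp_mirror_commands("xvda", "nbd://dest-host:10809/vm-disk"))
```

In a real toolstack these bytes would be written to the domain's QMP unix
socket, and the job's `BLOCK_JOB_READY` event would gate the final switchover;
both details are omitted here.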

> >3. Is it possible to use the same stream currently used for migrating
> >the memory to also migrate the disk blocks?
> 
> Probably.
> >
> >
> > Any guidance/feedback for a more specific design is greatly appreciated.
> >
> > Thanks,
> >
> > Bruno
> >
> > On Wed, Feb 22, 2017 at 5:00 AM, Wei Liu  wrote:
> >
> > > Hi Bruno
> > >
> > > Thanks for your interest.
> > >
> > > On Tue, Feb 21, 2017 at 10:34:45AM -0800, Bruno Alvisio wrote:
> > > > Hello,
> > > >
> > > I have been doing some research and, as far as I know, Xen supports
> > > live migration only of VMs that have shared storage (e.g. iSCSI). If
> > > the VM has been b

Re: [Xen-devel] Fwd: VM Live Migration with Local Storage

2017-06-20 Thread Igor Druzhinin
On 12/06/17 04:16, Bruno Alvisio wrote:
> Hello,
> 
> I think it would be beneficial to add a local disk migration feature for
> the ‘blkback’ backend since it is one of the most widely used backends. I
> would like to start a discussion about the design of the machinery needed
> to achieve this feature.
> 
> ===
> Objective
> Add a feature to migrate VMs that have local storage and use the blkback
> interface.
> ===
> 
> ===
> User Interface
> Add a command-line option to the “xl migrate” command to specify whether
> local disks need to be copied to the destination node.
> ===
> 
> ===
> Design
> 
>  1. As part of libxl_domain_suspend, the “disk mirroring machinery”
> starts an asynchronous job that copies the disk blocks from the source
> to the destination.
>  2. The protocol to copy the disks should resemble the one used for
> memory copy:
> 
>   * Do a first full copy of the disk.
>   * Track the sectors that have been written since the copy started. For
> this, the blkback driver should be aware that a disk migration is in
> progress and, in that case, forward the write request to the “migration
> machinery” so that a record of dirty blocks is logged.
>   * The migration machinery copies “dirty” blocks until convergence.

Be careful with that. You don't really want to merge block and memory
live migrations. They should be linked but proceed independently since
we don't want to have the last iteration of memory transfer stalled
waiting for disk convergence. Some mix of pre-copy and post-copy
approach might be suitable.

Igor
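[Editorial note: the point above, keeping the block and memory pre-copy loops
linked but independent, can be sketched as two copier tasks that each iterate
on their own schedule, with the stop-and-copy phase gated on both having
converged. This is a pure-Python toy model; all names and numbers are made up
and nothing here reflects real Xen or QEMU interfaces.]

```python
import threading

# Toy model: memory and disk each run their own pre-copy loop, so a slow
# disk iteration never stalls a memory iteration. Only when BOTH loops
# have driven their dirty backlog under a threshold does the final
# stop-and-copy (under VM suspension) begin.

class PrecopyTask:
    def __init__(self, name, dirty, per_round):
        self.name = name
        self.dirty = dirty          # units (pages or sectors) left to copy
        self.per_round = per_round  # copy bandwidth per iteration

    def run(self, threshold):
        # Iterate until the remaining backlog is small enough to move
        # during the suspend window.
        while self.dirty > threshold:
            self.dirty -= min(self.per_round, self.dirty)


def migrate(mem_dirty, disk_dirty, threshold=50):
    mem = PrecopyTask("memory", mem_dirty, per_round=1000)
    disk = PrecopyTask("disk", disk_dirty, per_round=200)
    threads = [threading.Thread(target=t.run, args=(threshold,))
               for t in (mem, disk)]
    for t in threads:
        t.start()
    for t in threads:   # each side converges independently
        t.join()
    # Both converged: suspend the VM and move the combined remainder.
    return mem.dirty + disk.dirty
```

The return value is the residue that must move during downtime; a real design
would also mix in post-copy for the disk, as suggested above.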

>   * Duplicate all disk writes/reads to the disks on both the source and
> destination nodes while the VM is being suspended.
> 
> 
> Block Diagram
> 
>+—--+
>|  VM   |
>+---+
>   |
>   | I/O Write
>   |
>   V
> +--+   +---+   +-+
> |  blkback | > |  Source   |  sectors Stream   | Destination |
> +--+   |  mirror   |-->|   mirror|
>   || machinery |   I/O Writes  |  machinery  |
>   |+---+   +-+
>   |  |
>   |  |
>   | To I/O block layer   |
>   |  |
>   V  V
> +--+   +-+
> |   disk   |   |   Mirrored  |
> +--+   | Disk|
>+-+
> 
> 
> ==
> Initial Questions
> 
>  1. Is it possible to leverage the current design of QEMU for drive
> mirroring for Xen?
>  2. What is the best place to implement this protocol? As part of Xen or
> the kernel?
>  3. Is it possible to use the same stream currently used for migrating
> the memory to also migrate the disk blocks?
> 
> 
> Any guidance/feedback for a more specific design is greatly appreciated.
> 
> Thanks,
> 
> Bruno
> 
> On Wed, Feb 22, 2017 at 5:00 AM, Wei Liu  wrote:
> 
> Hi Bruno
> 
> Thanks for your interest.
> 
> On Tue, Feb 21, 2017 at 10:34:45AM -0800, Bruno Alvisio wrote:
> > Hello,
> >
> > I have been doing some research and, as far as I know, Xen supports
> > live migration only of VMs that have shared storage (e.g. iSCSI). If the
> > VM has been booted with local storage, it cannot be live migrated.
> > QEMU seems to support live migration with local storage (I have tested
> > using 'virsh migrate' with the '--copy-storage-all' option).
> >
> > I am wondering if this is still true in the latest Xen release. Are there
> > plans to add this functionality in future releases? I would be interested
> > in contributing to the Xen Project by adding this functionality.
> >
> 
> No plan at the moment.
> 
> Xen supports a wide variety of disk backends. QEMU is one of them. The
> others are blktap (not upstreamed yet) and in-kernel blkback. The latter
> two don't have the capability to copy local storage to the remote end.
> 
> That said, I think it would be valuable to have such capability for QEMU
> backed disks. We also need to design the machinery so that other
> backends can be made to do the same thing in the future.
> 
> If you want to undertake this project, I suggest you set up a Xen system,
> read xl / libxl source code under tools directory and understand how
> everything is put together. Reading source code could be daunting at
> times, so don't hesitate to ask for pointers. After you have the big
> picture in mind, we can then discuss how to implement the functionality
> on xen-devel.

Re: [Xen-devel] Fwd: VM Live Migration with Local Storage

2017-06-20 Thread Konrad Rzeszutek Wilk
On Sun, Jun 11, 2017 at 08:16:04PM -0700, Bruno Alvisio wrote:
> Hello,
> 
> I think it would be beneficial to add a local disk migration feature for
> the ‘blkback’ backend since it is one of the most widely used backends. I
> would like to start a discussion about the design of the machinery needed
> to achieve this feature.
> 
> ===
> Objective
> Add a feature to migrate VMs that have local storage and use the blkback
> interface.
> ===
> 
> ===
> User Interface
> Add a command-line option to the “xl migrate” command to specify whether
> local disks need to be copied to the destination node.
> ===
> 
> ===
> Design
> 
>    1. As part of libxl_domain_suspend, the “disk mirroring machinery”
>    starts an asynchronous job that copies the disk blocks from the source
>    to the destination.
>2. The protocol to copy the disks should resemble the one used for
>memory copy:
> 
> 
>    - Do a first full copy of the disk.
>    - Track the sectors that have been written since the copy started. For
>    this, the blkback driver should be aware that a disk migration is in
>    progress and, in that case, forward the write request to the “migration
>    machinery” so that a record of dirty blocks is logged.
>    - The migration machinery copies “dirty” blocks until convergence.
>    - Duplicate all disk writes/reads to the disks on both the source and
>    destination nodes while the VM is being suspended.
> 
> 
> Block Diagram
> 
>+—--+
>|  VM   |
>+---+
>   |
>   | I/O Write
>   |
>   V
> +--+   +---+   +-+
> |  blkback | > |  Source   |  sectors Stream   | Destination |
> +--+   |  mirror   |-->|   mirror|
>   || machinery |   I/O Writes  |  machinery  |
>   |+---+   +-+
>   |  |
>   |  |
>   | To I/O block layer   |
>   |  |
>   V  V
> +--+   +-+
> |   disk   |   |   Mirrored  |
> +--+   | Disk|
>+-+
> 
> 
> ==
> Initial Questions
> 
>1. Is it possible to leverage the current design of QEMU for drive
>mirroring for Xen?

Yes. It has qdisk, which implements the blkback interface.

>2. What is the best place to implement this protocol? As part of Xen or
>the kernel?

QEMU
>3. Is it possible to use the same stream currently used for migrating
>the memory to also migrate the disk blocks?

Probably.
> 
> 
> Any guidance/feedback for a more specific design is greatly appreciated.
> 
> Thanks,
> 
> Bruno
> 
> On Wed, Feb 22, 2017 at 5:00 AM, Wei Liu  wrote:
> 
> > Hi Bruno
> >
> > Thanks for your interest.
> >
> > On Tue, Feb 21, 2017 at 10:34:45AM -0800, Bruno Alvisio wrote:
> > > Hello,
> > >
> > > I have been doing some research and, as far as I know, Xen supports
> > > live migration only of VMs that have shared storage (e.g. iSCSI). If the
> > > VM has been booted with local storage, it cannot be live migrated.
> > > QEMU seems to support live migration with local storage (I have tested
> > > using 'virsh migrate' with the '--copy-storage-all' option).
> > >
> > > I am wondering if this is still true in the latest Xen release. Are there
> > > plans to add this functionality in future releases? I would be interested
> > > in contributing to the Xen Project by adding this functionality.
> > >
> >
> > No plan at the moment.
> >
> > Xen supports a wide variety of disk backends. QEMU is one of them. The
> > others are blktap (not upstreamed yet) and in-kernel blkback. The latter
> > two don't have the capability to copy local storage to the remote end.
> >
> > That said, I think it would be valuable to have such capability for QEMU
> > backed disks. We also need to design the machinery so that other
> > backends can be made to do the same thing in the future.
> >
> > If you want to undertake this project, I suggest you set up a Xen system,
> > read xl / libxl source code under tools directory and understand how
> > everything is put together. Reading source code could be daunting at
> > times, so don't hesitate to ask for pointers. After you have the big
> > picture in mind, we can then discuss how to implement the functionality
> > on xen-devel.
> >
> > Does this sound good to you?
> >
> > Wei.
> >
> > > Thanks,
> > >
> > > Bruno
> >
> > > ___
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > https://list

[Xen-devel] Fwd: VM Live Migration with Local Storage

2017-06-11 Thread Bruno Alvisio
Hello,

I think it would be beneficial to add a local disk migration feature for
the ‘blkback’ backend since it is one of the most widely used backends. I would
like to start a discussion about the design of the machinery needed to achieve
this feature.

===
Objective
Add a feature to migrate VMs that have local storage and use the blkback
interface.
===

===
User Interface
Add a command-line option to the “xl migrate” command to specify whether
local disks need to be copied to the destination node.
===

===
Design

   1. As part of libxl_domain_suspend, the “disk mirroring machinery”
   starts an asynchronous job that copies the disk blocks from the source
   to the destination.
   2. The protocol to copy the disks should resemble the one used for
   memory copy:


   - Do a first full copy of the disk.
   - Track the sectors that have been written since the copy started. For this,
   the blkback driver should be aware that a disk migration is in progress and,
   in that case, forward the write request to the “migration machinery” so that
   a record of dirty blocks is logged.
   - The migration machinery copies “dirty” blocks until convergence.
   - Duplicate all disk writes/reads to the disks on both the source and
   destination nodes while the VM is being suspended.
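The pre-copy loop in the steps above can be written down as a small simulation.
This is a pure-Python toy (dicts standing in for block devices, lists of writes
standing in for blkback forwarding guest writes to the migration machinery),
not a proposal for real code:

```python
def precopy_disk(disk, rounds_of_writes, threshold=2):
    """disk: mutable dict sector -> data on the source node.
    rounds_of_writes: per-round lists of (sector, data) guest writes that
    arrive while the previous copy pass is in flight."""
    dest = dict(disk)                  # step 1: full initial copy
    dirty = set()
    for writes in rounds_of_writes:
        # blkback forwards each write, so the dirtied sector gets logged
        for sector, data in writes:
            disk[sector] = data
            dirty.add(sector)
        if len(dirty) <= threshold:
            break                      # converged: suspend the VM
        for sector in dirty:           # re-copy sectors dirtied meanwhile
            dest[sector] = disk[sector]
        dirty = set()
    for sector in dirty:               # final copy under suspension
        dest[sector] = disk[sector]
    return dest
```

The `threshold` plays the same role as the remaining-dirty-pages bound in
memory pre-copy: once the dirty set is small enough to flush during the
suspend window, the loop stops iterating.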


Block Diagram

   +----------+
   |    VM    |
   +----------+
        |
        | I/O Write
        |
        V
+----------+     +-----------+  sector stream  +-------------+
|  blkback |---->|   Source  |---------------->| Destination |
+----------+     |   mirror  |    I/O writes   |    mirror   |
     |           | machinery |                 |  machinery  |
     |           +-----------+                 +-------------+
     |                                                |
     | To I/O block layer                             |
     V                                                V
+----------+                                   +-------------+
|   disk   |                                   |   Mirrored  |
+----------+                                   |     disk    |
                                               +-------------+


==
Initial Questions

   1. Is it possible to leverage the current design of QEMU for drive
   mirroring for Xen?
   2. What is the best place to implement this protocol? As part of Xen or
   the kernel?
   3. Is it possible to use the same stream currently used for migrating
   the memory to also migrate the disk blocks?


Any guidance/feedback for a more specific design is greatly appreciated.

Thanks,

Bruno

On Wed, Feb 22, 2017 at 5:00 AM, Wei Liu  wrote:

> Hi Bruno
>
> Thanks for your interest.
>
> On Tue, Feb 21, 2017 at 10:34:45AM -0800, Bruno Alvisio wrote:
> > Hello,
> >
> > I have been doing some research and, as far as I know, Xen supports
> > live migration only of VMs that have shared storage (e.g. iSCSI). If the
> > VM has been booted with local storage, it cannot be live migrated.
> > QEMU seems to support live migration with local storage (I have tested
> > using 'virsh migrate' with the '--copy-storage-all' option).
> >
> > I am wondering if this is still true in the latest Xen release. Are there
> > plans to add this functionality in future releases? I would be interested
> > in contributing to the Xen Project by adding this functionality.
> >
>
> No plan at the moment.
>
> Xen supports a wide variety of disk backends. QEMU is one of them. The
> others are blktap (not upstreamed yet) and in-kernel blkback. The latter
> two don't have the capability to copy local storage to the remote end.
>
> That said, I think it would be valuable to have such capability for QEMU
> backed disks. We also need to design the machinery so that other
> backends can be made to do the same thing in the future.
>
> If you want to undertake this project, I suggest you set up a Xen system,
> read xl / libxl source code under tools directory and understand how
> everything is put together. Reading source code could be daunting at
> times, so don't hesitate to ask for pointers. After you have the big
> picture in mind, we can then discuss how to implement the functionality
> on xen-devel.
>
> Does this sound good to you?
>
> Wei.
>
> > Thanks,
> >
> > Bruno
>
> > ___
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > https://lists.xen.org/xen-devel
>
>
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel