On Mon, Apr 25, 2011 at 9:16 AM, Jagane Sundar <jag...@sundar.org> wrote:
> The direction that I chose to go is slightly different. In both of the
> proposals you pointed me at, the original virtual disk is made
> read-only and the VM writes to a different COW file. After backup
> of the original virtual disk file is complete, the COW file is merged
> with the original vdisk file.
>
> Instead, I create an Original-Blocks-COW-file to store the original
> blocks that are overwritten by the VM every time the VM performs
> a write while the backup is in progress. Livebackup copies these
> underlying blocks from the original virtual disk file before the VM's
> write to the original virtual disk file is scheduled. The advantage of
> this is that there is no merge necessary at the end of the backup, we
> can simply delete the Original-Blocks-COW-file.

The advantage of the approach that redirects writes to a new file
instead is that the heavy work of copying data happens asynchronously
during the merge operation rather than in the write path, where it
would hurt guest performance.

Here's what I understand:

1. User takes a snapshot of the disk, QEMU creates old-disk.img backed
by the current-disk.img.
2. Guest issues a write A.
3. QEMU reads the original data B from current-disk.img at the write offset.
4. QEMU writes B to old-disk.img.
5. QEMU writes A to current-disk.img.
6. Guest receives write completion A.

The tricky thing is what happens if there is a failure after Step 5.
If writes A and B were unstable writes (no fsync()), then no ordering
is guaranteed and write A may have reached current-disk.img while
write B did not reach old-disk.img.  In that case we no longer have a
consistent old-disk.img snapshot: current-disk.img has been updated
and old-disk.img holds no copy of the old data.

The solution is to fsync() after Step 4 and before Step 5, but this
will hurt performance.  We now have an extra read, write, and fsync()
on every guest write.
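
Just to make the ordering concrete, here is a minimal sketch of that
copy-before-write path with the barrier between Steps 4 and 5.  It uses
plain pread()/pwrite() on file descriptors and a made-up helper name;
real code would go through QEMU's asynchronous block layer instead:

#include <sys/types.h>
#include <unistd.h>

/*
 * Sketch only: copy the original data B out of current-disk.img into
 * old-disk.img before letting guest write A overwrite it.
 */
static int copy_before_write(int current_fd, int old_fd,
                             const void *new_data, void *old_buf,
                             off_t offset, size_t len)
{
    /* Step 3: read the original data B from current-disk.img */
    if (pread(current_fd, old_buf, len, offset) != (ssize_t)len) {
        return -1;
    }

    /* Step 4: preserve B in old-disk.img */
    if (pwrite(old_fd, old_buf, len, offset) != (ssize_t)len) {
        return -1;
    }

    /*
     * Barrier between Steps 4 and 5: B must be stable on disk before A
     * overwrites it, otherwise a crash can leave current-disk.img updated
     * while old-disk.img has lost the only copy of the old data.  This is
     * the extra sync that hurts guest write latency.
     */
    if (fdatasync(old_fd) != 0) {
        return -1;
    }

    /* Step 5: now it is safe to let write A reach current-disk.img */
    if (pwrite(current_fd, new_data, len, offset) != (ssize_t)len) {
        return -1;
    }

    return 0;
}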

> I have some reasons to believe that the Original-Blocks-COW-file
> design that I am putting forth might work better. I have listed them
> below. (It's past midnight here, so pardon me if it sounds garbled -- I
> will try to clarify more in a writeup on wiki.qemu.org).
> Let me know what your thoughts are..
>
> I feel that the livebackup mechanism will impact the running VM
> less. For example, if something goes wrong with the backup process,
> then we can simply delete the Original-Blocks-COW-file and force
> the backup client to do a full backup the next time around. The
> running VM or its virtual disks are not impacted at all.

Abandoning snapshots is not okay.  Snapshots will be used in scenarios
beyond backup and I don't think we can make them
unreliable/throw-away.

> Livebackup includes a rudimentary network protocol to transfer
> the modified blocks to a livebackup_client. It supports incremental
> backups. Also, livebackup treats a backup as containing all the virtual
> disks of a VM. Hence a snapshot in livebackup terms refers to a
> snapshot of all the virtual disks.
>
> The approximate sequence of operation is as follows:
> 1. VM boots up. When bdrv_open_common opens any file-backed
>    virtual disk, it checks for a file called <base_file>.livebackupconf.
>    If such a file exists, then the virtual disk is part of the backup set,
>    and a chunk of memory is allocated to keep track of dirty blocks.
> 2. qemu starts up a livebackup thread that listens on a specified port
>    (e.g. port 7900) for connections from the livebackup client.
> 3. The livebackup_client connects to qemu at port 7900.
> 4. livebackup_client sends a 'do snapshot' command.
> 5. qemu waits 30 seconds for outstanding asynchronous I/O to complete.
> 6. When there are no more outstanding async I/O requests, qemu
>    copies the dirty_bitmap to its snapshot structure and starts a new dirty
>    bitmap.
> 7. livebackup_client starts iterating through the list of dirty blocks, and
>    starts saving these blocks to the backup image
> 8. When all blocks have been backed up, then the backup_client sends a
>    destroy snapshot command; the server simply deletes the
>    Original-Blocks-COW-files for each of the virtual disks and frees the
>    calloc'd memory holding the dirty blocks list.
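
To make sure I follow the dirty block tracking in steps 1 and 6, here is
a rough sketch of how I imagine the bitmap swap working -- all of the
names and the layout below are mine for illustration, not taken from
your code:

#include <stdlib.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

typedef struct LiveBackupDisk {
    unsigned long *dirty_bitmap;     /* blocks written since the last snapshot */
    unsigned long *snapshot_bitmap;  /* frozen set the backup client iterates over */
    size_t bitmap_longs;             /* length of each bitmap, in longs */
} LiveBackupDisk;

/* Step 1: allocate tracking state when a file-backed virtual disk is opened */
static int livebackup_init(LiveBackupDisk *d, size_t nb_blocks)
{
    d->bitmap_longs = (nb_blocks + BITS_PER_LONG - 1) / BITS_PER_LONG;
    d->dirty_bitmap = calloc(d->bitmap_longs, sizeof(unsigned long));
    d->snapshot_bitmap = NULL;
    return d->dirty_bitmap ? 0 : -1;
}

/* Called on every guest write while the disk is part of the backup set */
static void livebackup_mark_dirty(LiveBackupDisk *d, size_t block)
{
    d->dirty_bitmap[block / BITS_PER_LONG] |= 1UL << (block % BITS_PER_LONG);
}

/* Step 6: freeze the current bitmap into the snapshot and start a new one */
static int livebackup_do_snapshot(LiveBackupDisk *d)
{
    free(d->snapshot_bitmap);
    d->snapshot_bitmap = d->dirty_bitmap;
    d->dirty_bitmap = calloc(d->bitmap_longs, sizeof(unsigned long));
    return d->dirty_bitmap ? 0 : -1;
}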

I think there's a benefit to just pointing at the
Original-Blocks-COW-files and letting the client access them directly.
This even works with shared storage, where the actual backup work is
performed on another host via access to a shared network filesystem or
LUN.  It may not be desirable to send everything over the network.

Perhaps you made a custom network client because you are writing a
full-blown backup solution for KVM?  In that case it's your job to
move the data around and get it backed up.  But from QEMU's point of
view we just need to provide the data and it's up to the backup
software to send it over the network and do its magic.

> I have pushed my code to the following git tree.
> git://github.com/jagane/qemu-kvm-livebackup.git
>
> It started as a clone of the linux kvm tree at:
>
> git clone git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git
>
> If you want to look at the code, see livebackup.[ch] and livebackup_client.c

In terms of submitting patches it's best to target qemu.git instead of
qemu-kvm.git since this feature really isn't Linux KVM-specific.
There are also efforts to merge qemu-kvm.git into qemu.git, so we
shouldn't increase the patch delta.

> This is very much a work in progress, and I expect to do a lot of
> testing/debugging over the next few weeks. I will also create a
> detailed proposal on wiki.qemu.org, with much more information.

Excellent.  Let's get Jes to join the discussion since he's been most
hands-on with block device snapshots.  It's Easter holiday time so
perhaps later this week or next week more people will be around.

Stefan