Howdy,

I've been working on migration for QEMU and have run into a snag. I've got a non-live migration patch that works quite happily[1]. I modified the save/restore code to not seek at all, so a save can be streamed over a pipe to a subprocess (usually, ssh).

Conceptually, adding support for live migration is really easy. All I think I need to do is extend the current code to have a pre-save hook that is activated before the VM is stopped. This hook will be called repeatedly until it reports that it's done, and then the rest of the save/load handlers are invoked. At first, I'm just going to do a pre-save handler for RAM, which should significantly reduce the amount of down time. I think the only other device we'll have to handle specially is the VGA memory, but I'm happy to ignore that for now.
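To make the idea concrete, here's a minimal sketch of the iterative pre-save phase I have in mind. Every name in it (ram_presave_pass, run_live_migration, PRESAVE_*) is invented for illustration; this isn't existing QEMU code:

```c
#include <assert.h>

/* Hypothetical iterative pre-save loop: the pre-save handler is
 * called repeatedly while the VM keeps running; once it reports
 * completion, the VM is stopped and the ordinary save handlers run. */

enum { PRESAVE_MORE = 0, PRESAVE_DONE = 1 };

/* Toy RAM pre-save handler: pretend we need a fixed number of
 * passes to converge (in reality: until the remaining dirty page
 * set is small enough to send during the final stop). */
static int ram_presave_pass(int *passes_left)
{
    if (--(*passes_left) > 0)
        return PRESAVE_MORE;
    return PRESAVE_DONE;
}

static int run_live_migration(void)
{
    int passes = 3;
    int iterations = 0;

    /* Phase 1: VM keeps running, pre-save handler loops. */
    while (ram_presave_pass(&passes) == PRESAVE_MORE)
        iterations++;

    /* Phase 2: stop the VM, invoke the rest of the save/load
     * handlers as in the non-live case (elided here). */
    return iterations;
}
```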

So, all I really need is to be able to track which pages are dirtied. I also need a way to reset the dirty map.

I started looking at adding another map like phys_ram_dirty. That seems to work for some of the IO_MEM_RAM pages, but not all. My initial thought was that all memory operations should go through one of the st[bwl]_phys functions, but that doesn't seem to be the case.
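For reference, what I mean by "another map like phys_ram_dirty" is roughly the following: one dirty byte per target page, set on write and cleared when a migration pass copies the page out. The names and sizes below are illustrative, not the actual QEMU structures:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define TARGET_PAGE_BITS 12   /* 4 KiB pages, as on most targets */
#define RAM_PAGES        16   /* toy guest with 16 pages of RAM */

/* One dirty flag per page, in the spirit of phys_ram_dirty
 * (which the VGA code already uses for its framebuffer). */
static uint8_t migration_dirty[RAM_PAGES];

/* Would be called from the write paths (st[bwl]_phys and
 * whatever other paths touch IO_MEM_RAM pages). */
static void mark_dirty(unsigned long addr)
{
    migration_dirty[addr >> TARGET_PAGE_BITS] = 1;
}

/* One migration pass: count the pages that would be resent,
 * then reset the map so the next pass only sees new writes. */
static int count_and_reset_dirty(void)
{
    int n = 0;
    for (int i = 0; i < RAM_PAGES; i++)
        n += migration_dirty[i];
    memset(migration_dirty, 0, sizeof(migration_dirty));
    return n;
}
```

The sticking point, of course, is hooking mark_dirty() into every path that can write guest RAM, which is exactly the question below.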

Can anyone provide me with some advice on how to do this? Am I right in assuming that all IO will go through some function?

[1] http://hg.codemonkey.ws/qemu-pq/?f=758c26c82f52;file=qemu-migration.diff

Thanks,

Anthony Liguori


_______________________________________________
Qemu-devel mailing list
Qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel
