Hi,

This patch set creates the infrastructure to invalidate buffers on the
migration target machine.  The best way to see the problem is:
  # create a new qcow2 image
  qemu-img create -f qcow2 foo.img
  # start the destination host
  qemu .... path=foo.img....
  # start the source host with one installation
  qemu .... path=foo.img....
  # migrate after lots of disk writes

The destination will have "read" the beginning of the blocks of the file
(where the headers are).  There are two bugs here:

a- we need to re-read the image after migration, to pick up the new
   values (reopening the file fixes it).
b- we need to be sure that we read the new blocks that are on the
   server, not the ones buffered locally since the start of the run.

NFS: a flush on the source plus a close + open on the target
invalidates the cache.

Block devices: on Linux, the BLKFLSBUF ioctl invalidates all the
buffers for that device.  This fixes iSCSI & Fibre Channel.

I tested iSCSI & NFS.  The NFS patch has been in RHEL5 kvm forever
(I just forgot to send the patch upstream).  Our NFS gurus & cluster
gurus told us that this is enough on Linux to ensure consistency.

While there, I fixed a couple of minor bugs (the first 3 patches):
- migration should exit with error code 1, like everything else.
- memory leak on drive_uninit().
- fix cleanup on error in drive_init().

Later, Juan.

Juan Quintela (5):
  migration: exit with error code
  blockdev: don't leak id on removal
  blockdev: release resources in the error case
  Reopen files after migration
  drive_open: Add invalidate option for block devices

 block.h           |    2 ++
 block/raw-posix.c |   24 ++++++++++++++++++++++++
 blockdev.c        |   53 +++++++++++++++++++++++++++++++++++++++++++++--------
 blockdev.h        |    6 ++++++
 migration.c       |    8 +++++++-
 vl.c              |    2 +-
 6 files changed, 85 insertions(+), 10 deletions(-)

-- 
1.7.3.4