From: Dimitris Aragiorgis <dim...@arrikto.com>

During migration, QEMU uses fsync()/fdatasync() on the open file
descriptor for read-write block devices to flush data just before
stopping the VM.

However, fsync() on a scsi-generic device returns -EINVAL, which causes
the migration to fail. This patch skips flushing data in the case of an
SG device, since submitting SCSI commands directly via an SG character
device (e.g. /dev/sg0) bypasses the page cache entirely anyway.
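
(For illustration only, not part of this patch: a minimal standalone
sketch of the failure mode described above, assuming an SG node such as
/dev/sg0 exists and is accessible.)

    /* Hypothetical demo: fsync() on an SG character device fails with EINVAL. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/sg0", O_RDWR);      /* example SG device path */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (fsync(fd) < 0) {
            /* On SG devices this reports EINVAL, which made migration fail. */
            printf("fsync: %s\n", strerror(errno));
        }
        close(fd);
        return 0;
    }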

Note that fsync() not only flushes the page cache but also the disk
cache. The scsi-generic device never sends flushes; for migration it
assumes that the destination host uses the same SCSI device, so it
does not issue any SCSI SYNCHRONIZE CACHE (10) command.
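
(Also for reference only: if one did want to flush the disk cache of an
SG device by hand, it would take an explicit SYNCHRONIZE CACHE (10)
command through the SG_IO ioctl, roughly as sketched below; the
sync_cache10 helper and the fd it takes are hypothetical, not part of
QEMU.)

    /* Illustrative sketch: issuing SCSI SYNCHRONIZE CACHE (10) via SG_IO. */
    #include <scsi/sg.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int sync_cache10(int fd)
    {
        unsigned char cdb[10] = { 0x35 };        /* 0x35 = SYNCHRONIZE CACHE (10) */
        unsigned char sense[32];
        struct sg_io_hdr hdr;

        memset(&hdr, 0, sizeof(hdr));
        hdr.interface_id = 'S';
        hdr.cmdp = cdb;
        hdr.cmd_len = sizeof(cdb);
        hdr.dxfer_direction = SG_DXFER_NONE;     /* no data transfer */
        hdr.sbp = sense;
        hdr.mx_sb_len = sizeof(sense);
        hdr.timeout = 30000;                     /* ms */

        /* A negative return or a non-zero hdr.status indicates failure. */
        return ioctl(fd, SG_IO, &hdr);
    }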

Finally, remove the bdrv_is_sg() test from iscsi_co_flush() since
this is now redundant (we flush the underlying protocol at the end
of bdrv_co_flush() which, with this patch, we never reach).

Signed-off-by: Dimitris Aragiorgis <dim...@arrikto.com>
Reviewed-by: Stefan Hajnoczi <stefa...@redhat.com>
Message-id: 1435056300-14924-3-git-send-email-dim...@arrikto.com
Signed-off-by: Stefan Hajnoczi <stefa...@redhat.com>
---
 block/io.c    | 3 ++-
 block/iscsi.c | 4 ----
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index 43f85ab..e295992 100644
--- a/block/io.c
+++ b/block/io.c
@@ -2265,7 +2265,8 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
 {
     int ret;
 
-    if (!bs || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
+    if (!bs || !bdrv_is_inserted(bs) || bdrv_is_read_only(bs) ||
+        bdrv_is_sg(bs)) {
         return 0;
     }
 
diff --git a/block/iscsi.c b/block/iscsi.c
index aff8198..49cee4d 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -628,10 +628,6 @@ static int coroutine_fn iscsi_co_flush(BlockDriverState *bs)
     IscsiLun *iscsilun = bs->opaque;
     struct IscsiTask iTask;
 
-    if (bdrv_is_sg(bs)) {
-        return 0;
-    }
-
     if (!iscsilun->force_next_flush) {
         return 0;
     }
-- 
2.4.3

