Re: [libvirt] [PATCH] Check for --live flag for postcopy-after-precopy migration

2016-08-26 Thread Jiri Denemark
On Fri, Aug 26, 2016 at 21:41:31 +0200, Michal Privoznik wrote:
> On 26.08.2016 11:25, Kothapally Madhu Pavan wrote:
> > Unlike postcopy migration there is no --live flag check for
> > postcopy-after-precopy.
> > 
> > Signed-off-by: Kothapally Madhu Pavan 
> > ---
> >  tools/virsh-domain.c |6 ++
> >  1 file changed, 6 insertions(+)
> > 
> 
> ACKed and pushed.

This doesn't make any sense. First, post-copy migration is enabled with
the --postcopy option to the migrate command, and --postcopy-after-precopy
is just an additional flag for post-copy migration. So if virsh were to
report such an error, it should check for the --postcopy option. But such
a check doesn't belong in libvirt at all; the appropriate libvirt driver
is supposed to check the flags and report invalid combinations.
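
For illustration only (this is not the patch that was pushed): a minimal sketch of
such a virsh-side gate, keyed on --postcopy rather than --live, as it might look
inside cmdMigrate() in tools/virsh-domain.c:

    if (vshCommandOptBool(cmd, "postcopy-after-precopy") &&
        !vshCommandOptBool(cmd, "postcopy")) {
        vshError(ctl, "%s",
                 _("--postcopy-after-precopy requires --postcopy"));
        goto cleanup;
    }

Even with such a gate in virsh, the driver still has to validate the flag
combination itself, since virsh is only one of many clients.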

Jirka



Re: [libvirt] [PATCH] Check for --live flag for postcopy-after-precopy migration

2016-08-26 Thread Michal Privoznik
On 26.08.2016 11:25, Kothapally Madhu Pavan wrote:
> Unlike postcopy migration there is no --live flag check for
> postcopy-after-precopy.
> 
> Signed-off-by: Kothapally Madhu Pavan 
> ---
>  tools/virsh-domain.c |6 ++
>  1 file changed, 6 insertions(+)
> 

ACKed and pushed.

Michal



Re: [libvirt] [PATCH v2 00/10] Introduce NVDIMM support

2016-08-26 Thread Michal Privoznik
On 11.08.2016 15:26, Michal Privoznik wrote:
>

Ping.

Michal



Re: [libvirt] [PATCH 0/9] Couple of vhost-user fixes and cleanups

2016-08-26 Thread Michal Privoznik
On 16.08.2016 17:41, Michal Privoznik wrote:
>

Ping.

Michal



Re: [libvirt] [PATCH 0/3] Introduce support for rx_queue_size

2016-08-26 Thread Michal Privoznik
On 19.08.2016 13:54, Michal Privoznik wrote:
>

Ping.

Michal



Re: [libvirt] [PATCH 5/6] qemu: driver: Validate configuration when setting maximum vcpu count

2016-08-26 Thread Jiri Denemark
On Thu, Aug 25, 2016 at 18:42:49 -0400, Peter Krempa wrote:
> Setting the vcpu count when a cpu topology is specified may result in an
> invalid configuration. Since the topology can't be modified, reject the
> setting if it doesn't match the requested topology. This will allow
> fixing the topology in case it was broken.
> 
> Partially fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1370066
> ---
>  src/qemu/qemu_driver.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
> index 671d1ff..5f8c11c 100644
> --- a/src/qemu/qemu_driver.c
> +++ b/src/qemu/qemu_driver.c
> @@ -4730,6 +4730,17 @@ qemuDomainSetVcpusMax(virQEMUDriverPtr driver,
>          goto cleanup;
>      }
> 
> +    if (persistentDef && persistentDef->cpu && persistentDef->cpu->sockets) {
> +        /* explicitly allow correcting invalid vcpu count */

Hmm, this is more confusing than helpful :-)

> +        if (nvcpus != persistentDef->cpu->sockets *
> +                      persistentDef->cpu->cores *
> +                      persistentDef->cpu->threads) {
> +            virReportError(VIR_ERR_INVALID_ARG, "%s",
> +                           _("CPU topology doesn't match the desired vcpu count"));
> +            goto cleanup;
> +        }
> +    }
> +
>      if (virDomainDefSetVcpusMax(persistentDef, nvcpus, driver->xmlopt) < 0)
>          goto cleanup;
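
For a concrete example of what the check enforces: with
<topology sockets='2' cores='2' threads='2'/> in the persistent definition, the
only acceptable maximum is 2 * 2 * 2 = 8 vcpus. Assuming a guest named "dom",
"virsh setvcpus dom 6 --maximum --config" is rejected with the error above,
while asking for 8 succeeds, which is also how a previously broken vcpu count
gets corrected.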

ACK

Jirka



Re: [libvirt] [PATCH] vz: update domain cache after device updates

2016-08-26 Thread Maxim Nestratov

On 25-Aug-16 12:25, Mikhail Feoktistov wrote:



On 25.08.2016 11:33, Nikolay Shirokovskiy wrote:

---
  src/vz/vz_driver.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index b34fe33..f223794 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1694,6 +1694,9 @@ static int vzDomainUpdateDeviceFlags(virDomainPtr domain,
     if (prlsdkUpdateDevice(driver, dom, dev) < 0)
         goto cleanup;
 
+    if (prlsdkUpdateDomain(driver, dom) < 0)
+        goto cleanup;
+
     ret = 0;
 
  cleanup:


Ack



Pushed now. Thanks.

Maxim


Re: [libvirt] [PATCH 4/6] conf: Don't validate vcpu count in XML parser

2016-08-26 Thread Jiri Denemark
On Thu, Aug 25, 2016 at 18:42:48 -0400, Peter Krempa wrote:
> Validating the vcpu count is more intricate and doing it in the XML
> parser will make previously valid configs (with older qemus) vanish.
> 
> Now that we have the validation callbacks we can do it in a more
> appropriate place.

This comment does not make it immediately clear that the existing
callback actually contains the check already.
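
For readers not following the whole series: the effect is that the
sockets * cores * threads consistency check runs from the driver's validation
callback (invoked when a domain is defined or started) instead of from
virDomainDefParseXML(), so previously saved configs still load. Schematically,
with illustrative names rather than the exact hooks used by the series:

static int
qemuDomainDefValidate(const virDomainDef *def)
{
    /* reject only new or changed configs; existing XML still parses */
    if (def->cpu && def->cpu->sockets &&
        virDomainDefGetVcpusMax(def) != def->cpu->sockets *
                                        def->cpu->cores *
                                        def->cpu->threads) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("vcpu count must equal sockets * cores * threads"));
        return -1;
    }
    return 0;
}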

ACK

Jirka



Re: [libvirt] [PATCH] vz: fixed race in vzDomainAttach/DettachDevice

2016-08-26 Thread Maxim Nestratov

On 18-Aug-16 14:57, Olga Krishtal wrote:


While detaching/attaching a device in OpenStack, nova
calls vzDomainDettachDevice twice, because the update of the internal
configuration of the CT comes a bit later than the update event.
As a result, we suffer from a second call to detach the same device.

Signed-off-by: Olga Krishtal 
---
  src/vz/vz_sdk.c | 12 
  1 file changed, 12 insertions(+)


ACKed and pushed. Thanks!

Maxim


Re: [libvirt] [PATCH] vz: getting bus type for containers

2016-08-26 Thread Maxim Nestratov

On 25-Aug-16 11:13, Nikolay Shirokovskiy wrote:



On 15.08.2016 19:02, Mikhail Feoktistov wrote:

We should query the bus type for containers too, like we do for VMs.
In OpenStack we add volume disks as SCSI, so we can't
hardcode the SATA bus.
---
  src/vz/vz_sdk.c | 32 +---
  1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index f81b320..c4a1c3d 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c

ACK


Pushed now.

Maxim


Re: [libvirt] [PATCH 1/2] vz: implicitly support additional migration flags

2016-08-26 Thread Maxim Nestratov

On 25-Aug-16 17:00, Pavel Glushchak wrote:

* Added VIR_MIGRATE_LIVE, VIR_MIGRATE_UNDEFINE_SOURCE and
   VIR_MIGRATE_PERSIST_DEST to supported migration flags

Signed-off-by: Pavel Glushchak 
---
  src/vz/vz_driver.c | 7 +--
  1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index b34fe33..7a12632 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -2887,8 +2887,11 @@ vzEatCookie(const char *cookiein, int cookieinlen, unsigned int flags)
  goto cleanup;
  }
  
-#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED |\
-                            VIR_MIGRATE_PEER2PEER)
+#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED |      \
+                            VIR_MIGRATE_PEER2PEER |   \
+                            VIR_MIGRATE_LIVE |        \
+                            VIR_MIGRATE_UNDEFINE_SOURCE | \
+                            VIR_MIGRATE_PERSIST_DEST)
 
 #define VZ_MIGRATION_PARAMETERS \
     VIR_MIGRATE_PARAM_DEST_XML, VIR_TYPED_PARAM_STRING, \
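
In virsh terms, the newly accepted flags map onto familiar options. For example
(domain name and destination URI are placeholders),
"virsh migrate dom vz+ssh://dst.example.com/system --live --p2p --persistent --undefine-source"
exercises VIR_MIGRATE_LIVE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_PERSIST_DEST and
VIR_MIGRATE_UNDEFINE_SOURCE against the vz driver.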


Pushed both now.
Congratulations on your first contribution!

Maxim


Re: [libvirt] [PATCH 2/2] vz: added VIR_MIGRATE_PARAM_BANDWIDTH param handling

2016-08-26 Thread Maxim Nestratov

On 25-Aug-16 17:00, Pavel Glushchak wrote:


libvirt-python passes the parameter bandwidth = 0
by default, which means that bandwidth is unlimited.
The VZ driver doesn't support bandwidth rate limiting,
but we still need to handle the parameter and fail if bandwidth > 0.

Signed-off-by: Pavel Glushchak 
---
  src/vz/vz_driver.c | 12 
  1 file changed, 12 insertions(+)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 7a12632..4a0068c 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -2897,6 +2897,7 @@ vzEatCookie(const char *cookiein, int cookieinlen, unsigned int flags)
     VIR_MIGRATE_PARAM_DEST_XML, VIR_TYPED_PARAM_STRING, \
     VIR_MIGRATE_PARAM_URI,      VIR_TYPED_PARAM_STRING, \
     VIR_MIGRATE_PARAM_DEST_NAME,VIR_TYPED_PARAM_STRING, \
+    VIR_MIGRATE_PARAM_BANDWIDTH,VIR_TYPED_PARAM_ULLONG, \
     NULL
  
  static char *

@@ -2938,12 +2939,23 @@ vzDomainMigrateBegin3Params(virDomainPtr domain,
     char *xml = NULL;
     virDomainObjPtr dom = NULL;
     vzConnPtr privconn = domain->conn->privateData;
+    unsigned long long bandwidth = 0;
 
     virCheckFlags(VZ_MIGRATION_FLAGS, NULL);
 
     if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS) < 0)
         goto cleanup;
 
+    if (virTypedParamsGetULLong(params, nparams, VIR_MIGRATE_PARAM_BANDWIDTH,
+                                &bandwidth) < 0)
+        goto cleanup;
+
+    if (bandwidth > 0) {
+        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+                       _("Bandwidth rate limiting is not supported"));
+        goto cleanup;
+    }
+
     if (!(dom = vzDomObjFromDomain(domain)))
         goto cleanup;
  


ACK
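
As a caller-side illustration of how this parameter reaches
vzDomainMigrateBegin3Params(), here is a hedged sketch against the public
libvirt API; the destination URI and function name are placeholders, and a
bandwidth of 0 is what libvirt-python sends by default:

#include <libvirt/libvirt.h>

/* Sketch: peer-to-peer live migration of a vz domain with the bandwidth
 * parameter explicitly set to 0 (unlimited), which the new check accepts;
 * any value > 0 now fails with "Bandwidth rate limiting is not supported". */
static int
migrate_unlimited(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret = -1;

    if (virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                VIR_MIGRATE_PARAM_BANDWIDTH, 0) < 0)
        goto cleanup;

    if (virDomainMigrateToURI3(dom, "vz+ssh://dst.example.com/system",
                               params, nparams,
                               VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    virTypedParamsFree(params, nparams);
    return ret;
}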


Re: [libvirt] [PATCH 1/2] vz: implicitly support additional migration flags

2016-08-26 Thread Maxim Nestratov

On 25-Aug-16 17:00, Pavel Glushchak wrote:


* Added VIR_MIGRATE_LIVE, VIR_MIGRATE_UNDEFINE_SOURCE and
   VIR_MIGRATE_PERSIST_DEST to supported migration flags

Signed-off-by: Pavel Glushchak 
---
  src/vz/vz_driver.c | 7 +--
  1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index b34fe33..7a12632 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -2887,8 +2887,11 @@ vzEatCookie(const char *cookiein, int cookieinlen, unsigned int flags)
  goto cleanup;
  }
  
-#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED |\
-                            VIR_MIGRATE_PEER2PEER)
+#define VZ_MIGRATION_FLAGS (VIR_MIGRATE_PAUSED |      \
+                            VIR_MIGRATE_PEER2PEER |   \
+                            VIR_MIGRATE_LIVE |        \
+                            VIR_MIGRATE_UNDEFINE_SOURCE | \
+                            VIR_MIGRATE_PERSIST_DEST)
 
 #define VZ_MIGRATION_PARAMETERS \
     VIR_MIGRATE_PARAM_DEST_XML, VIR_TYPED_PARAM_STRING, \

ACK


Re: [libvirt] [PATCH v2] qemu: enable user-defined cache bypassing while invoking

2016-08-26 Thread Rudy Zhang



On 16/8/26 2:56 PM, fuweiwei wrote:

In the previous version, I mentioned the scenario of a long-term pause while
writing external memory checkpoints:

v1: https://www.redhat.com/archives/libvir-list/2016-August/msg01194.html

Daniel suggested not to hardcode the flag, but to wire this up to the API. When
the user invokes the snapshot they can request the
VIR_DOMAIN_SAVE_BYPASS_CACHE flag explicitly. So in this version I
introduce the --bypass-cache option in the libvirt snapshot API. When invoking
an external VM snapshot, we may use a command like this:

virsh snapshot-create-as VM snap --memspec /path/to/memsnap --live
 --bypass-cache

The VM snapshot can now be done with a barely noticeable VM suspend. Without
the "--bypass-cache" flag, one may experience a long VM suspend (so that the
benefit of the "--live" option is not significant), provided that the VM
has a large amount of dirty pages to save.

Signed-off-by: fuweiwei 
---
 include/libvirt/libvirt-domain-snapshot.h |  3 +++
 src/qemu/qemu_driver.c| 20 ++--
 tools/virsh-snapshot.c| 12 
 3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/libvirt/libvirt-domain-snapshot.h b/include/libvirt/libvirt-domain-snapshot.h
index 0f73f24..aeff665 100644
--- a/include/libvirt/libvirt-domain-snapshot.h
+++ b/include/libvirt/libvirt-domain-snapshot.h
@@ -70,6 +70,9 @@ typedef enum {
 VIR_DOMAIN_SNAPSHOT_CREATE_LIVE= (1 << 8), /* create the snapshot
   while the guest is
   running */
+VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE   = (1 << 9), i/* Bypass cache


Delete the 'i', otherwise it can't be compiled successfully.


+                                                          while writing external
+                                                          checkpoint files. */
 } virDomainSnapshotCreateFlags;

 /* Take a snapshot of the current VM state */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2089359..d5f441f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14036,6 +14036,7 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
 bool pmsuspended = false;
 virQEMUDriverConfigPtr cfg = NULL;
 int compressed = QEMU_SAVE_FORMAT_RAW;
+unsigned int save_memory_flag = 0;

 /* If quiesce was requested, then issue a freeze command, and a
  * counterpart thaw command when it is actually sent to agent.
@@ -14116,8 +14117,12 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
 if (!(xml = qemuDomainDefFormatLive(driver, vm->def, true, true)))
 goto cleanup;

+if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE)
+save_memory_flag |= VIR_DOMAIN_SAVE_BYPASS_CACHE;
+
 if ((ret = qemuDomainSaveMemory(driver, vm, snap->def->file,
-xml, compressed, resume, 0,
+xml, compressed, resume,
+save_memory_flag,
 QEMU_ASYNC_JOB_SNAPSHOT)) < 0)
 goto cleanup;

@@ -14224,7 +14229,8 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
   VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT |
   VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE |
   VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC |
-  VIR_DOMAIN_SNAPSHOT_CREATE_LIVE, NULL);
+  VIR_DOMAIN_SNAPSHOT_CREATE_LIVE |
+  VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE, NULL);

 VIR_REQUIRE_FLAG_RET(VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE,
  VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY,
@@ -14297,6 +14303,16 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
 goto cleanup;
 }

+/* the option of bypass cache is only supported for external checkpoints */
+if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE &&
+ (def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
+ redefine)) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("only external memory snapshots support "
+ "cache bypass."));
+goto cleanup;
+}
+
 /* allow snapshots only in certain states */
 switch ((virDomainState) vm->state.state) {
 /* valid states */
diff --git a/tools/virsh-snapshot.c b/tools/virsh-snapshot.c
index f879e7a..0627baf 100644
--- a/tools/virsh-snapshot.c
+++ b/tools/virsh-snapshot.c
@@ -160,6 +160,10 @@ static const vshCmdOptDef opts_snapshot_create[] = {
  .type = VSH_OT_BOOL,
  .help = N_("require atomic operation")
 },
+{.name = "bypass-cache",
+ .type = VSH_OT_BOOL,
+ .help = N_("bypass system cache while writing external checkpoints")
+},
 VIRSH_COMMON_OPT_LIVE(N_("take a live snapshot")),
 {.name = NULL}
 };

[libvirt] [PATCH] Check for --live flag for postcopy-after-precopy migration

2016-08-26 Thread Kothapally Madhu Pavan
Unlike postcopy migration there is no --live flag check for
postcopy-after-precopy.

Signed-off-by: Kothapally Madhu Pavan 
---
 tools/virsh-domain.c |6 ++
 1 file changed, 6 insertions(+)

diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index de2a22c..798a1ff 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10317,6 +10317,12 @@ cmdMigrate(vshControl *ctl, const vshCmd *cmd)
 }
 
 if (vshCommandOptBool(cmd, "postcopy-after-precopy")) {
+if (!live_flag) {
+virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+  _("post-copy migration is not supported with "
+"non-live or paused migration"));
+goto cleanup;
+}
 iterEvent = virConnectDomainEventRegisterAny(
 priv->conn, dom,
 VIR_DOMAIN_EVENT_ID_MIGRATION_ITERATION,
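
For reference, with this check in place
"virsh migrate dom qemu+ssh://dst.example.com/system --p2p --postcopy --postcopy-after-precopy"
(domain name and URI are placeholders) is rejected by virsh unless --live is also
given. As Jiri points out in his reply earlier in this digest, the more natural
virsh-side condition would be --postcopy, and invalid flag combinations are
ultimately for the driver to reject.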



[libvirt] [PATCH v2] qemu: enable user-defined cache bypassing while invoking

2016-08-26 Thread fuweiwei
In the previous version, I mentioned the scenario of a long-term pause while
writing external memory checkpoints:

v1: https://www.redhat.com/archives/libvir-list/2016-August/msg01194.html

Daniel suggested not to hardcode the flag, but to wire this up to the API. When
the user invokes the snapshot they can request the
VIR_DOMAIN_SAVE_BYPASS_CACHE flag explicitly. So in this version I
introduce the --bypass-cache option in the libvirt snapshot API. When invoking
an external VM snapshot, we may use a command like this:

virsh snapshot-create-as VM snap --memspec /path/to/memsnap --live
 --bypass-cache
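
Equivalently through the C API (a sketch; the domain handle, snapshot name and
memory file path are placeholders, and VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE
is the constant this patch introduces):

#include <libvirt/libvirt.h>

/* Sketch: external live memory snapshot with cache bypass. */
static virDomainSnapshotPtr
snap_bypass_cache(virDomainPtr dom)
{
    const char *xml =
        "<domainsnapshot>"
        "  <name>snap</name>"
        "  <memory snapshot='external' file='/path/to/memsnap'/>"
        "</domainsnapshot>";

    return virDomainSnapshotCreateXML(dom, xml,
                                      VIR_DOMAIN_SNAPSHOT_CREATE_LIVE |
                                      VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE);
}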
 
The VM snapshot can now be done with a barely noticeable VM suspend. Without
the "--bypass-cache" flag, one may experience a long VM suspend (so that the
benefit of the "--live" option is not significant), provided that the VM
has a large amount of dirty pages to save.

Signed-off-by: fuweiwei 
---
 include/libvirt/libvirt-domain-snapshot.h |  3 +++
 src/qemu/qemu_driver.c| 20 ++--
 tools/virsh-snapshot.c| 12 
 3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/include/libvirt/libvirt-domain-snapshot.h b/include/libvirt/libvirt-domain-snapshot.h
index 0f73f24..aeff665 100644
--- a/include/libvirt/libvirt-domain-snapshot.h
+++ b/include/libvirt/libvirt-domain-snapshot.h
@@ -70,6 +70,9 @@ typedef enum {
 VIR_DOMAIN_SNAPSHOT_CREATE_LIVE= (1 << 8), /* create the snapshot
   while the guest is
   running */
+VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE   = (1 << 9), i/* Bypass cache
+                                                          while writing external
+                                                          checkpoint files. */
 } virDomainSnapshotCreateFlags;
 
 /* Take a snapshot of the current VM state */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2089359..d5f441f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14036,6 +14036,7 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
 bool pmsuspended = false;
 virQEMUDriverConfigPtr cfg = NULL;
 int compressed = QEMU_SAVE_FORMAT_RAW;
+unsigned int save_memory_flag = 0;
 
 /* If quiesce was requested, then issue a freeze command, and a
  * counterpart thaw command when it is actually sent to agent.
@@ -14116,8 +14117,12 @@ qemuDomainSnapshotCreateActiveExternal(virConnectPtr conn,
 if (!(xml = qemuDomainDefFormatLive(driver, vm->def, true, true)))
 goto cleanup;
 
+if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE)
+save_memory_flag |= VIR_DOMAIN_SAVE_BYPASS_CACHE;
+
 if ((ret = qemuDomainSaveMemory(driver, vm, snap->def->file,
-xml, compressed, resume, 0,
+xml, compressed, resume,
+save_memory_flag,
 QEMU_ASYNC_JOB_SNAPSHOT)) < 0)
 goto cleanup;
 
@@ -14224,7 +14229,8 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
   VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT |
   VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE |
   VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC |
-  VIR_DOMAIN_SNAPSHOT_CREATE_LIVE, NULL);
+  VIR_DOMAIN_SNAPSHOT_CREATE_LIVE |
+  VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE, NULL);
 
 VIR_REQUIRE_FLAG_RET(VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE,
  VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY,
@@ -14297,6 +14303,16 @@ qemuDomainSnapshotCreateXML(virDomainPtr domain,
 goto cleanup;
 }
 
+/* the option of bypass cache is only supported for external checkpoints */
+if (flags & VIR_DOMAIN_SNAPSHOT_CREATE_BYPASS_CACHE &&
+ (def->memory != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
+ redefine)) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
+   _("only external memory snapshots support "
+ "cache bypass."));
+goto cleanup;
+}
+
 /* allow snapshots only in certain states */
 switch ((virDomainState) vm->state.state) {
 /* valid states */
diff --git a/tools/virsh-snapshot.c b/tools/virsh-snapshot.c
index f879e7a..0627baf 100644
--- a/tools/virsh-snapshot.c
+++ b/tools/virsh-snapshot.c
@@ -160,6 +160,10 @@ static const vshCmdOptDef opts_snapshot_create[] = {
  .type = VSH_OT_BOOL,
  .help = N_("require atomic operation")
 },
+{.name = "bypass-cache",
+ .type = VSH_OT_BOOL,
+ .help = N_("bypass system cache while writing external checkpoints")
+},
 VIRSH_COMMON_OPT_LIVE(N_("take a live snapshot")),
 {.name = NULL}
 };
@@ -191,6 +195,8 @@ cmdSnapshotCreate(vshControl *ctl, const vshCmd *cmd)
 flags