Provide a new set of command-line flags named after varstore,
mirroring the ones that already exist for NVRAM. Users don't
need to keep track of which of the two the guest is configured
with: either spelling of a flag applies to both scenarios,
which among other things means that existing scripts will keep
working as expected.
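
For example, assuming an existing domain called "efi-guest"
(the name is purely illustrative), the following pairs of
invocations are expected to behave identically:

  virsh start efi-guest --reset-nvram
  virsh start efi-guest --reset-varstore

  virsh undefine efi-guest --keep-nvram
  virsh undefine efi-guest --keep-varstore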

Signed-off-by: Andrea Bolognani <[email protected]>
---
 docs/manpages/virsh.rst | 44 ++++++++++++++++++++-------------
 tools/virsh-domain.c    | 55 ++++++++++++++++++++++++++++++++---------
 tools/virsh-snapshot.c  |  9 +++++--
 3 files changed, 78 insertions(+), 30 deletions(-)

diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst
index a9d691824e..c6858a7af9 100644
--- a/docs/manpages/virsh.rst
+++ b/docs/manpages/virsh.rst
@@ -1699,7 +1699,7 @@ create
 ::
 
    create FILE [--console] [--paused] [--autodestroy]
-      [--pass-fds N,M,...] [--validate] [--reset-nvram]
+      [--pass-fds N,M,...] [--validate] [--reset-nvram] [--reset-varstore]
 
 Create a domain from an XML <file>. Optionally, *--validate* option can be
 passed to validate the format of the input XML file against an internal RNG
@@ -1722,8 +1722,9 @@ of open file descriptors which should be pass on into the guest. The
 file descriptors will be re-numbered in the guest, starting from 3. This
 is only supported with container based virtualization.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 **Example:**
 
@@ -4255,7 +4256,8 @@ restore
 ::
 
    restore state-file [--bypass-cache] [--xml file]
-      [{--running | --paused}] [--reset-nvram] [--parallel-channels]
+      [{--running | --paused}] [--reset-nvram] [--reset-varstore]
+      [--parallel-channels]
 
 Restores a domain from a ``virsh save`` state file. See *save* for more info.
 
@@ -4274,8 +4276,9 @@ save image to decide between running or paused; passing either the
 *--running* or *--paused* flag will allow overriding which state the
 domain should be started in.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 *--parallel-channels* option can specify number of parallel IO channels
 to be used when loading memory from file. Parallel save may significantly
@@ -4899,7 +4902,7 @@ start
 
    start domain-name-or-uuid [--console] [--paused]
       [--autodestroy] [--bypass-cache] [--force-boot]
-      [--pass-fds N,M,...] [--reset-nvram]
+      [--pass-fds N,M,...] [--reset-nvram] [--reset-varstore]
 
 Start a (previously defined) inactive domain, either from the last
 ``managedsave`` state, or via a fresh boot if no managedsave state is
@@ -4918,8 +4921,9 @@ of open file descriptors which should be pass on into the guest. The
 file descriptors will be re-numbered in the guest, starting from 3. This
 is only supported with container based virtualization.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 
 suspend
@@ -4955,8 +4959,9 @@ undefine
 
 ::
 
-   undefine domain [--managed-save] [--snapshots-metadata]
-      [--checkpoints-metadata] [--nvram] [--keep-nvram]
+   undefine domain [--managed-save]
+      [--snapshots-metadata] [--checkpoints-metadata]
+      [--nvram] [--keep-nvram] [--varstore] [--keep-varstore]
       [ {--storage volumes | --remove-all-storage
          [--delete-storage-volume-snapshots]} --wipe-storage]
       [--tpm] [--keep-tpm]
@@ -4981,9 +4986,13 @@ domain.  Without the flag, attempts to undefine an inactive domain with
 checkpoint metadata will fail.  If the domain is active, this flag is
 ignored.
 
-*--nvram* and *--keep-nvram* specify accordingly to delete or keep nvram
-(/domain/os/nvram/) file. If the domain has an nvram file and the flags are
-omitted, the undefine will fail.
+The *--nvram* / *--varstore* and *--keep-nvram* / *--keep-varstore* flags
+specify whether to delete or keep the NVRAM (/domain/os/nvram/) or
+varstore (/domain/os/varstore) file respectively. The two sets of names are
+provided for convenience and consistency, but they're effectively aliases:
+that is, *--nvram* will work on a domain configured to use varstore and
+vice versa. If the domain has an NVRAM/varstore file and the flags are
+omitted, the undefine operation will fail.
 
 The *--storage* flag takes a parameter ``volumes``, which is a comma separated
 list of volume target names or source paths of storage volumes to be removed
@@ -8121,7 +8130,7 @@ snapshot-revert
 ::
 
    snapshot-revert domain {snapshot | --current} [{--running | --paused}]
-      [--force] [--reset-nvram]
+      [--force] [--reset-nvram] [--reset-varstore]
 
 Revert the given domain to the snapshot specified by *snapshot*, or to
 the current snapshot with *--current*.  Be aware
@@ -8167,8 +8176,9 @@ requires the use of *--force* to proceed:
     likely cause extensive filesystem corruption or crashes due to swap content
     mismatches when run.
 
-If *--reset-nvram* is specified, any existing NVRAM file will be deleted
-and re-initialized from its pristine template.
+If *--reset-nvram* or *--reset-varstore* is specified, any existing
+NVRAM/varstore file will be deleted and re-initialized from its pristine
+template.
 
 
 snapshot-delete
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index cb9dd069b6..8f06238875 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -3981,11 +3981,19 @@ static const vshCmdOptDef opts_undefine[] = {
     },
     {.name = "nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("remove nvram file")
+     .help = N_("remove NVRAM/varstore file")
     },
     {.name = "keep-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("keep nvram file")
+     .help = N_("keep NVRAM/varstore file")
+    },
+    {.name = "varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("remove NVRAM/varstore file")
+    },
+    {.name = "keep-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("keep NVRAM/varstore file")
     },
     {.name = "tpm",
      .type = VSH_OT_BOOL,
@@ -4020,10 +4028,10 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     bool wipe_storage = vshCommandOptBool(cmd, "wipe-storage");
     bool remove_all_storage = vshCommandOptBool(cmd, "remove-all-storage");
     bool delete_snapshots = vshCommandOptBool(cmd, "delete-storage-volume-snapshots");
-    bool nvram = vshCommandOptBool(cmd, "nvram");
-    bool keep_nvram = vshCommandOptBool(cmd, "keep-nvram");
     bool tpm = vshCommandOptBool(cmd, "tpm");
     bool keep_tpm = vshCommandOptBool(cmd, "keep-tpm");
+    bool nvram = false;
+    bool keep_nvram = false;
     /* Positive if these items exist.  */
     int has_managed_save = 0;
     int has_snapshots_metadata = 0;
@@ -4048,8 +4056,18 @@ cmdUndefine(vshControl *ctl, const vshCmd *cmd)
     virshControl *priv = ctl->privData;
 
     VSH_REQUIRE_OPTION("delete-storage-volume-snapshots", "remove-all-storage");
-    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
     VSH_EXCLUSIVE_OPTIONS("tpm", "keep-tpm");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-nvram");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("nvram", "keep-varstore");
+    VSH_EXCLUSIVE_OPTIONS("varstore", "keep-nvram");
+
+    if (vshCommandOptBool(cmd, "nvram") ||
+        vshCommandOptBool(cmd, "varstore"))
+        nvram = true;
+    if (vshCommandOptBool(cmd, "keep-nvram") ||
+        vshCommandOptBool(cmd, "keep-varstore"))
+        keep_nvram = true;
 
     ignore_value(vshCommandOptStringQuiet(ctl, cmd, "storage", &vol_string));
 
@@ -4401,7 +4419,11 @@ static const vshCmdOptDef opts_start[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -4461,7 +4483,8 @@ cmdStart(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_BYPASS_CACHE;
     if (vshCommandOptBool(cmd, "force-boot"))
         flags |= VIR_DOMAIN_START_FORCE_BOOT;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;
 
     /* We can emulate force boot, even for older servers that reject it.  */
@@ -5728,7 +5751,11 @@ static const vshCmdOptDef opts_restore[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -5753,7 +5780,8 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SAVE_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SAVE_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SAVE_RESET_NVRAM;
 
     if (vshCommandOptString(ctl, cmd, "file", &from) < 0)
@@ -8520,7 +8548,11 @@ static const vshCmdOptDef opts_create[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -8575,7 +8607,8 @@ cmdCreate(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_START_AUTODESTROY;
     if (vshCommandOptBool(cmd, "validate"))
         flags |= VIR_DOMAIN_START_VALIDATE;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_START_RESET_NVRAM;
 
     dom = virshDomainCreateXMLHelper(priv->conn, buffer, nfds, fds, flags);
diff --git a/tools/virsh-snapshot.c b/tools/virsh-snapshot.c
index 8e5b9d635c..3d5880b1d5 100644
--- a/tools/virsh-snapshot.c
+++ b/tools/virsh-snapshot.c
@@ -1714,7 +1714,11 @@ static const vshCmdOptDef opts_snapshot_revert[] = {
     },
     {.name = "reset-nvram",
      .type = VSH_OT_BOOL,
-     .help = N_("re-initialize NVRAM from its pristine template")
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
+    },
+    {.name = "reset-varstore",
+     .type = VSH_OT_BOOL,
+     .help = N_("re-initialize NVRAM/varstore from its pristine template")
     },
     {.name = NULL}
 };
@@ -1733,7 +1737,8 @@ cmdDomainSnapshotRevert(vshControl *ctl, const vshCmd *cmd)
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING;
     if (vshCommandOptBool(cmd, "paused"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED;
-    if (vshCommandOptBool(cmd, "reset-nvram"))
+    if (vshCommandOptBool(cmd, "reset-nvram") ||
+        vshCommandOptBool(cmd, "reset-varstore"))
         flags |= VIR_DOMAIN_SNAPSHOT_REVERT_RESET_NVRAM;
     /* We want virsh snapshot-revert --force to work even when talking
      * to older servers that did the unsafe revert by default but
-- 
2.53.0
