[libvirt] [PATCH v4 33/37] qemu_driver: Identify using libvirt as a distinct way to compute baseline

2018-11-02 Thread Chris Venteicher
Hypervisor baseline cpu can be computed locally using libvirt utility
functions or remotely using QEMU QMP commands.

Likewise, cpu feature expansion can be computed locally using libvirt
utility functions or remotely using QEMU QMP commands.

This patch identifies using libvirt as a distinct case in the hypervisor
baseline logic and triggers that case when the X86 architecture is being
baselined.

There is one functional change introduced by this patch.

The local libvirt functions are only used for feature expansion when
they were also used to compute the CPU baseline, and an error is now
reported when no method is available to expand the CPU features.

A useQemu option will be introduced in a future patch to compute the
baseline via QEMU rather than via libvirt.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 54f7b8b26d..632b756c89 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13477,6 +13477,7 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 bool migratable;
 virCPUDefPtr cpu = NULL;
 char *cpustr = NULL;
+bool useLibvirt = false;
 
 virCheckFlags(VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES |
   VIR_CONNECT_BASELINE_CPU_MIGRATABLE, NULL);
@@ -13496,7 +13497,9 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 if (!qemuCaps)
 goto cleanup;
 
-if (ARCH_IS_X86(arch)) {
+useLibvirt = ARCH_IS_X86(arch);
+
+if (useLibvirt) {
 migratable = !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
 
 if (qemuConnectBaselineHypervisorCPUViaLibvirt(qemuCaps, migratable,
@@ -13512,9 +13515,17 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 
 cpu->fallback = VIR_CPU_FALLBACK_FORBID;
 
-if ((flags & VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES) &&
-virCPUExpandFeatures(arch, cpu) < 0)
-goto cleanup;
+if (flags & VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES) {
+if (useLibvirt && virCPUExpandFeatures(arch, cpu) < 0) {
+goto cleanup;
+} else {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+   _("expand features while "
+ "computing baseline hypervisor CPU is not supported "
+ "for arch %s"), virArchToString(arch));
+goto cleanup;
+}
+}
 
 cpustr = virCPUDefFormat(cpu, NULL);
 
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 29/37] qemu_capabilities: Introduce virCPUDef to CPUModelInfo function

2018-11-02 Thread Chris Venteicher
Create public function to convert virCPUDef data structure into
qemuMonitorCPUModelInfoPtr data structure.

There was no existing code to reuse, so the new
virQEMUCapsCPUModelInfoFromCPUDef function was written by reversing the
action of the existing virQEMUCapsCPUModelInfoToCPUDef function.
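
For illustration, the mapping performed by the new function looks roughly
like this (model and feature names are made up for the example):

    virCPUDef                           qemuMonitorCPUModelInfo
      model = "IvyBridge"          =>     name = "IvyBridge"
      feature "pcid" (require)     =>     boolean prop "pcid" = true
      feature "vme"  (disable)     =>     boolean prop "vme"  = false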

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 46 
 src/qemu/qemu_capabilities.h |  1 +
 2 files changed, 47 insertions(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index dcd6ffb876..54b94c4dda 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3620,6 +3620,51 @@ virQEMUCapsLoadCache(virArch hostArch,
 }
 
 
+/* virCPUDef model=> qemuMonitorCPUModelInfo name
+ * virCPUDef features => qemuMonitorCPUModelInfo boolean properties */
+qemuMonitorCPUModelInfoPtr
+virQEMUCapsCPUModelInfoFromCPUDef(const virCPUDef *cpuDef)
+{
+size_t i;
+qemuMonitorCPUModelInfoPtr cpuModel = NULL;
+qemuMonitorCPUModelInfoPtr ret = NULL;
+
+if (!cpuDef || (VIR_ALLOC(cpuModel) < 0))
+goto cleanup;
+
+VIR_DEBUG("cpuDef->model = %s", NULLSTR(cpuDef->model));
+
+if (VIR_STRDUP(cpuModel->name, cpuDef->model) < 0 ||
+VIR_ALLOC_N(cpuModel->props, cpuDef->nfeatures) < 0)
+goto cleanup;
+
+cpuModel->nprops = 0;
+
+for (i = 0; i < cpuDef->nfeatures; i++) {
+qemuMonitorCPUPropertyPtr prop = &(cpuModel->props[cpuModel->nprops]);
+virCPUFeatureDefPtr feature = &(cpuDef->features[i]);
+
+if (!(feature->name) ||
+VIR_STRDUP(prop->name, feature->name) < 0)
+goto cleanup;
+
+prop->type = QEMU_MONITOR_CPU_PROPERTY_BOOLEAN;
+
+prop->value.boolean = feature->policy == -1 || /* policy undefined */
+  feature->policy == VIR_CPU_FEATURE_FORCE ||
+  feature->policy == VIR_CPU_FEATURE_REQUIRE;
+
+cpuModel->nprops++;
+}
+
+VIR_STEAL_PTR(ret, cpuModel);
+
+ cleanup:
+qemuMonitorCPUModelInfoFree(cpuModel);
+return ret;
+}
+
+
 /* qemuMonitorCPUModelInfo name   => virCPUDef model
  * qemuMonitorCPUModelInfo boolean properties => virCPUDef features
  *
@@ -3671,6 +3716,7 @@ virQEMUCapsCPUModelInfoToCPUDef(bool migratable, 
qemuMonitorCPUModelInfoPtr mode
 return ret;
 }
 
+
 static void
 virQEMUCapsFormatHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
   virBufferPtr buf,
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index a377d7b912..ea5e021ca6 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -572,6 +572,7 @@ int virQEMUCapsGetMachineTypesCaps(virQEMUCapsPtr qemuCaps,
 void virQEMUCapsFilterByMachineType(virQEMUCapsPtr qemuCaps,
 const char *machineType);
 
+qemuMonitorCPUModelInfoPtr virQEMUCapsCPUModelInfoFromCPUDef(const virCPUDef *cpuDef);
 virCPUDefPtr virQEMUCapsCPUModelInfoToCPUDef(bool migratable,
  qemuMonitorCPUModelInfoPtr model);
 
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 27/37] qemu_process: Stop locking QMP process monitor immediately

2018-11-02 Thread Chris Venteicher
Locking the monitor object immediately after call to qemuMonitorOpen
doesn't make sense now that we have expanded the QEMU process code to
cover more than the original capabilities use case.

Removing the monitor lock makes the qemuConnectMonitorQmp code
consistent with the qemuConnectMonitor code used to establish the
monitor when a QEMU process is started for domains.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 08522a1580..6bd3da97ca 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8297,8 +8297,6 @@ qemuConnectMonitorQmp(qemuProcessPtr proc)
 
 VIR_STEAL_PTR(proc->mon, mon);
 
-virObjectLock(proc->mon);
-
 /* Exit capabilities negotiation mode and enter QEMU command mode
  * by issuing qmp_capabilities command to QEMU */
 if (qemuMonitorSetCapabilities(proc->mon) < 0) {
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 30/37] qemu_monitor: Support query-cpu-model-baseline QMP command

2018-11-02 Thread Chris Venteicher
Introduce monitor functions to use QEMU to compute baseline cpu
from an input of two cpu models.
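
For reference, the QMP exchange driven by the new functions looks roughly
like this (model names and props are illustrative):

    -> { "execute": "query-cpu-model-baseline",
         "arguments": { "modela": { "name": "z14", "props": {} },
                        "modelb": { "name": "z13", "props": {} } } }
    <- { "return": { "model": { "name": "z13-base", "props": { ... } } } }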

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_monitor.c  | 12 
 src/qemu/qemu_monitor.h  |  6 
 src/qemu/qemu_monitor_json.c | 60 
 src/qemu/qemu_monitor_json.h |  6 
 4 files changed, 84 insertions(+)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 2071c6e846..edbaee1a53 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -4495,3 +4495,15 @@ qemuMonitorGetPRManagerInfo(qemuMonitorPtr mon,
 virHashFree(info);
 return ret;
 }
+
+
+int
+qemuMonitorGetCPUModelBaseline(qemuMonitorPtr mon,
+   qemuMonitorCPUModelInfoPtr model_a,
+   qemuMonitorCPUModelInfoPtr model_b,
+   qemuMonitorCPUModelInfoPtr *model_baseline)
+{
+QEMU_CHECK_MONITOR(mon);
+
+return qemuMonitorJSONGetCPUModelBaseline(mon, model_a, model_b, model_baseline);
+}
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index d3efd37099..d46e9378ea 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1196,4 +1196,10 @@ struct _qemuMonitorPRManagerInfo {
 int qemuMonitorGetPRManagerInfo(qemuMonitorPtr mon,
 virHashTablePtr *retinfo);
 
+int qemuMonitorGetCPUModelBaseline(qemuMonitorPtr mon,
+   qemuMonitorCPUModelInfoPtr model_a,
+   qemuMonitorCPUModelInfoPtr model_b,
+   qemuMonitorCPUModelInfoPtr *model_baseline)
+ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4);
+
 #endif /* QEMU_MONITOR_H */
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 2eabfdac70..84da721f8d 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -8464,3 +8464,63 @@ qemuMonitorJSONGetPRManagerInfo(qemuMonitorPtr mon,
 return ret;
 
 }
+
+
+int
+qemuMonitorJSONGetCPUModelBaseline(qemuMonitorPtr mon,
+   qemuMonitorCPUModelInfoPtr model_a,
+   qemuMonitorCPUModelInfoPtr model_b,
+   qemuMonitorCPUModelInfoPtr *model_baseline)
+{
+int ret = -1;
+virJSONValuePtr cmd = NULL;
+virJSONValuePtr reply = NULL;
+virJSONValuePtr data = NULL;
+virJSONValuePtr modela = NULL;
+virJSONValuePtr modelb = NULL;
+virJSONValuePtr cpu_model = NULL;
+
+*model_baseline = NULL;
+
+if (!(modela = qemuMonitorJSONBuildCPUModelInfoToJSON(model_a)) ||
+!(modelb = qemuMonitorJSONBuildCPUModelInfoToJSON(model_b)))
+goto cleanup;
+
+if (!(cmd = qemuMonitorJSONMakeCommand("query-cpu-model-baseline",
+   "a:modela", &modela,
+   "a:modelb", &modelb,
+   NULL)))
+goto cleanup;
+
+if (qemuMonitorJSONCommand(mon, cmd, &reply) < 0)
+goto cleanup;
+
+if (qemuMonitorJSONCheckReply(cmd, reply, VIR_JSON_TYPE_OBJECT) < 0) {
+virReportError(VIR_ERR_INVALID_ARG,
+   _("QEMU doesn't support baseline or recognize model %s or %s"),
+   model_a->name,
+   model_b->name);
+goto cleanup;
+}
+
+data = virJSONValueObjectGetObject(reply, "return");
+
+if (!(cpu_model = virJSONValueObjectGetObject(data, "model"))) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("query-cpu-model-baseline reply data was missing 'model'"));
+goto cleanup;
+}
+
+if (!(*model_baseline = qemuMonitorJSONBuildCPUModelInfoFromJSON(cpu_model)))
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+virJSONValueFree(cmd);
+virJSONValueFree(reply);
+virJSONValueFree(modela);
+virJSONValueFree(modelb);
+
+return ret;
+}
diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h
index 2d5679094e..117a4a6cd0 100644
--- a/src/qemu/qemu_monitor_json.h
+++ b/src/qemu/qemu_monitor_json.h
@@ -577,4 +577,10 @@ int qemuMonitorJSONGetPRManagerInfo(qemuMonitorPtr mon,
 virHashTablePtr info)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
 
+int qemuMonitorJSONGetCPUModelBaseline(qemuMonitorPtr mon,
+   qemuMonitorCPUModelInfoPtr model_a,
+   qemuMonitorCPUModelInfoPtr model_b,
+   qemuMonitorCPUModelInfoPtr *model_baseline)
+ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4);
+
 #endif /* QEMU_MONITOR_JSON_H */
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 18/37] qemu_process: Setup paths within qemuProcessInitQmp

2018-11-02 Thread Chris Venteicher
Move code for setting paths and prepping file system from qemuProcessNew
to qemuProcessInitQmp.

This keeps qemuProcessNew limited to structure initialization and makes
qemuProcessInitQmp most closely mirror the pattern established in
qemuProcessInit.

The patch is a non-functional cut-and-paste change, except that the
goto target is now "cleanup" rather than "error".

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 42 +
 1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 804ec998af..70b0b61aef 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8115,26 +8115,6 @@ qemuProcessNew(const char *binary,
 proc->runUid = runUid;
 proc->runGid = runGid;
 
-/* the ".sock" sufix is important to avoid a possible clash with a qemu
- * domain called "capabilities"
- */
-if (virAsprintf(&proc->monpath, "%s/%s", proc->libDir,
-"capabilities.monitor.sock") < 0)
-goto error;
-if (virAsprintf(&proc->monarg, "unix:%s,server,nowait", proc->monpath) < 0)
-goto error;
-
-/* ".pidfile" suffix is used rather than ".pid" to avoid a possible clash
- * with a qemu domain called "capabilities"
- * Normally we'd use runDir for pid files, but because we're using
- * -daemonize we need QEMU to be allowed to create them, rather
- * than libvirtd. So we're using libDir which QEMU can write to
- */
-if (virAsprintf(&proc->pidfile, "%s/%s", proc->libDir, "capabilities.pidfile") < 0)
-goto error;
-
-virPidFileForceCleanupPath(proc->pidfile);
-
 return proc;
 
  error:
@@ -8154,8 +8134,30 @@ qemuProcessInitQmp(qemuProcessPtr proc)
   "emulator=%s",
   proc->binary);
 
+/* the ".sock" sufix is important to avoid a possible clash with a qemu
+ * domain called "capabilities"
+ */
+if (virAsprintf(&proc->monpath, "%s/%s", proc->libDir,
+"capabilities.monitor.sock") < 0)
+goto cleanup;
+
+if (virAsprintf(&proc->monarg, "unix:%s,server,nowait", proc->monpath) < 0)
+goto cleanup;
+
+/* ".pidfile" suffix is used rather than ".pid" to avoid a possible clash
+ * with a qemu domain called "capabilities"
+ * Normally we'd use runDir for pid files, but because we're using
+ * -daemonize we need QEMU to be allowed to create them, rather
+ * than libvirtd. So we're using libDir which QEMU can write to
+ */
+if (virAsprintf(&proc->pidfile, "%s/%s", proc->libDir, "capabilities.pidfile") < 0)
+goto cleanup;
+
+virPidFileForceCleanupPath(proc->pidfile);
+
 ret = 0;
 
+ cleanup:
 VIR_DEBUG("ret=%i", ret);
 return ret;
 }
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 26/37] qemu_process: Use unique directories for QMP processes

2018-11-02 Thread Chris Venteicher
Multiple QEMU processes for QMP commands can operate concurrently.

Use a unique directory under libDir for each QEMU process
to avoid pidfile and unix socket collisions between processes.

The pid file name is changed from "capabilities.pidfile" to "qmp.pid"
because we no longer need to avoid a possible clash with a qemu domain
called "capabilities" now that each process's artifacts are stored in
its own unique temporary directory.

"Capabilities" was changed to "qmp" in the pid file name because these
processes are no longer specific to the capabilities use case; they are
generic helpers for any QMP message exchange with a QEMU process that
is not associated with a domain.
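
Note the posted hunk assigns the mkdtemp() result directly; a defensive
variant (not part of this patch) would check the return value, roughly:

    if (virAsprintf(&template, "%s/qemu.XXXXXX", proc->libDir) < 0)
        goto cleanup;

    if (!(proc->uniqDir = mkdtemp(template))) {
        virReportSystemError(errno, _("Failed to create unique directory "
                                      "from template '%s'"), template);
        goto cleanup;
    }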

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 26 --
 src/qemu/qemu_process.h |  1 +
 2 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1bb11b2089..08522a1580 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8087,6 +8087,7 @@ qemuProcessFree(qemuProcessPtr proc)
 
 VIR_FREE(proc->binary);
 VIR_FREE(proc->libDir);
+VIR_FREE(proc->uniqDir);
 VIR_FREE(proc->monpath);
 VIR_FREE(proc->monarg);
 VIR_FREE(proc->pidfile);
@@ -8145,33 +8146,33 @@ qemuProcessNew(const char *binary,
 static int
 qemuProcessInitQmp(qemuProcessPtr proc)
 {
+char *template = NULL;
 int ret = -1;
 
 VIR_DEBUG("Beginning VM startup process"
   "emulator=%s",
   proc->binary);
 
-/* the ".sock" sufix is important to avoid a possible clash with a qemu
- * domain called "capabilities"
- */
-if (virAsprintf(&proc->monpath, "%s/%s", proc->libDir,
-"capabilities.monitor.sock") < 0)
+if (virAsprintf(&template, "%s/qemu.XXXXXX", proc->libDir) < 0)
+goto cleanup;
+
+proc->uniqDir = mkdtemp(template);
+
+if (virAsprintf(&proc->monpath, "%s/%s", proc->uniqDir,
+"qmp.monitor") < 0)
 goto cleanup;
 
if (virAsprintf(&proc->monarg, "unix:%s,server,nowait", proc->monpath) < 0)
 goto cleanup;
 
-/* ".pidfile" suffix is used rather than ".pid" to avoid a possible clash
- * with a qemu domain called "capabilities"
+/*
  * Normally we'd use runDir for pid files, but because we're using
  * -daemonize we need QEMU to be allowed to create them, rather
  * than libvirtd. So we're using libDir which QEMU can write to
  */
-if (virAsprintf(&proc->pidfile, "%s/%s", proc->libDir, "capabilities.pidfile") < 0)
+if (virAsprintf(&proc->pidfile, "%s/%s", proc->uniqDir, "qmp.pid") < 0)
 goto cleanup;
 
-virPidFileForceCleanupPath(proc->pidfile);
-
 ret = 0;
 
  cleanup:
@@ -8415,4 +8416,9 @@ void qemuProcessStopQmp(qemuProcessPtr proc)
 unlink(proc->pidfile);
 
 proc->pid = 0;
+
+
+if (proc->uniqDir)
+rmdir(proc->uniqDir);
+
 }
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index d1541d5407..f66fc0c82c 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -225,6 +225,7 @@ struct _qemuProcess {
 char *monarg;
 char *monpath;
 char *pidfile;
+char *uniqDir;
 virCommandPtr cmd;
 qemuMonitorPtr mon;
 pid_t pid;
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 22/37] qemu_process: Cleanup qemuProcessStopQmp function

2018-11-02 Thread Chris Venteicher
qemuProcessStopQmp is one of the 4 public functions used to create and
manage a QEMU process for QMP command exchanges.

Add comment header and debug message.

Other minor code formatting cleanup.

No change in functionality is intended.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 29 ++---
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 104ed58cea..cb2902be02 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8360,14 +8360,27 @@ qemuProcessStartQmp(qemuProcessPtr proc)
 goto cleanup;
 }
 
-
-void
-qemuProcessStopQmp(qemuProcessPtr proc)
+/**
+ * qemuProcessStop:
+ * @proc: Stores Process and Connection State
+ *
+ * Stop Monitor Connection and QEMU Process
+ */
+void qemuProcessStopQmp(qemuProcessPtr proc)
 {
-if (proc->mon)
-virObjectUnlock(proc->mon);
-qemuMonitorClose(proc->mon);
-proc->mon = NULL;
+qemuMonitorPtr mon = NULL;
+
+if (!proc)
+return;
+
+VIR_DEBUG("Shutting down proc=%p emulator=%s mon=%p pid=%lld",
+  proc, proc->binary, proc->mon, (long long)proc->pid);
+
+if ((mon = QEMU_PROCESS_MONITOR(proc))) {
+virObjectUnlock(mon);
+qemuMonitorClose(mon);
+proc->mon = NULL;
+}
 
 virCommandAbort(proc->cmd);
 virCommandFree(proc->cmd);
@@ -8388,7 +8401,9 @@ qemuProcessStopQmp(qemuProcessPtr proc)
   virStrerror(errno, ebuf, sizeof(ebuf)));
 
 }
+
 if (proc->pidfile)
 unlink(proc->pidfile);
+
 proc->pid = 0;
 }
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 13/37] qemu_capabilities: Detect caps probe failure by checking monitor ptr

2018-11-02 Thread Chris Venteicher
Failure to connect to QEMU to probe capabilities is not considered an error
case and does not result in a negative return value.

Check for a NULL monitor connection pointer before trying to send capabilities
commands to QEMU rather than depending on a special positive return value.

This simplifies logic and is more consistent with the operation of existing
qemu_process functions.

A macro is introduced to easily obtain the monitor pointer from the
qemuProcess structure.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 28 ++--
 src/qemu/qemu_process.c  |  9 +
 src/qemu/qemu_process.h  |  3 +++
 3 files changed, 22 insertions(+), 18 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 9a86838d8a..48b58ca02a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4275,43 +4275,51 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 {
 qemuProcessPtr proc = NULL;
 qemuProcessPtr proc_kvm = NULL;
+qemuMonitorPtr mon = NULL;
+qemuMonitorPtr mon_kvm = NULL;
 int ret = -1;
-int rc;
 bool forceTCG = false;
 
 if (!(proc = qemuProcessNew(qemuCaps->binary, libDir,
 runUid, runGid, forceTCG)))
 goto cleanup;
 
-if ((rc = qemuProcessRun(proc)) != 0) {
-if (rc == 1)
-ret = 0;
+
+if (qemuProcessRun(proc) < 0)
+goto cleanup;
+
+if (!(mon = QEMU_PROCESS_MONITOR(proc))) {
+ret = 0; /* Failure probing QEMU not considered error case */
 goto cleanup;
 }
 
-if (virQEMUCapsInitQMPMonitor(qemuCaps, proc->mon) < 0)
+/* Pull capabilities from QEMU */
+if (virQEMUCapsInitQMPMonitor(qemuCaps, mon) < 0)
 goto cleanup;
 
+/* Pull capabilities again if KVM supported */
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
 qemuProcessStopQmp(proc);
 
 forceTCG = true;
proc_kvm = qemuProcessNew(qemuCaps->binary, libDir, runUid, runGid, forceTCG);
 
-if ((rc = qemuProcessRun(proc_kvm)) != 0) {
-if (rc == 1)
-ret = 0;
+if (qemuProcessRun(proc_kvm) < 0)
+goto cleanup;
+
+if (!(mon_kvm = QEMU_PROCESS_MONITOR(proc_kvm))) {
+ret = 0; /* Failure probing QEMU not considered error case */
 goto cleanup;
 }
 
-if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, proc_kvm->mon) < 0)
+if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, mon_kvm) < 0)
 goto cleanup;
 }
 
 ret = 0;
 
  cleanup:
-if (proc && !proc->mon) {
+if (!mon) {
char *err = QEMU_PROCESS_ERROR(proc) ? QEMU_PROCESS_ERROR(proc) : _("unknown error");
 
 virReportError(VIR_ERR_INTERNAL_ERROR,
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 99b720b380..140c657087 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8145,10 +8145,6 @@ qemuProcessNew(const char *binary,
 }
 
 
-/* Returns -1 on fatal error,
- *  0 on success,
- *  1 when probing QEMU failed
- */
 int
 qemuProcessRun(qemuProcessPtr proc){
 virDomainXMLOptionPtr xmlopt = NULL;
@@ -8215,6 +8211,7 @@ qemuProcessRun(qemuProcessPtr proc){
 
 virObjectLock(proc->mon);
 
+ ignore:
 ret = 0;
 
  cleanup:
@@ -8223,10 +8220,6 @@ qemuProcessRun(qemuProcessPtr proc){
 virObjectUnref(xmlopt);
 
 return ret;
-
- ignore:
-ret = 1;
-goto cleanup;
 }
 
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index abd80baf5c..37194e2501 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -235,6 +235,9 @@ struct _qemuProcess {
 #define QEMU_PROCESS_ERROR(proc) \
((char *) (proc ? proc->qmperr: NULL))
 
+#define QEMU_PROCESS_MONITOR(proc) \
+   ((qemuMonitorPtr) (proc ? proc->mon : NULL))
+
 qemuProcessPtr qemuProcessNew(const char *binary,
   const char *libDir,
   uid_t runUid,
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 15/37] qemu_process: Add debug message in qemuProcessLaunchQmp

2018-11-02 Thread Chris Venteicher
Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1d96a43206..758c8bed05 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8173,6 +8173,9 @@ qemuProcessLaunchQmp(qemuProcessPtr proc)
 int status = 0;
 int ret = -1;
 
+VIR_DEBUG("proc=%p, emulator=%s",
+  proc, NULLSTR(proc->binary));
+
 if (proc->forceTCG)
 machine = "none,accel=tcg";
 else
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 05/37] qemu_process: Move process code from qemu_capabilities to qemu_process

2018-11-02 Thread Chris Venteicher
Qemu process code in qemu_capabilities.c is moved to qemu_process.c in
order to make the code usable outside the original capabilities
usecases.

This process code activates and manages Qemu processes without
establishing a guest domain.

This patch is a straight cut/paste move between files.

Following patches modify the process code
making it more generic and consistent with qemu_process.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 218 +--
 src/qemu/qemu_process.c  | 201 
 src/qemu/qemu_process.h  |  29 +
 3 files changed, 231 insertions(+), 217 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index b2bb2347fd..d5e63aca5c 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -47,6 +47,7 @@
 #define __QEMU_CAPSPRIV_H_ALLOW__
 #include "qemu_capspriv.h"
 #include "qemu_qapi.h"
+#include "qemu_process.h"
 
 #include 
 #include 
@@ -3972,18 +3973,6 @@ virQEMUCapsIsValid(void *data,
 }
 
 
-static void virQEMUCapsMonitorNotify(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
- virDomainObjPtr vm ATTRIBUTE_UNUSED,
- void *opaque ATTRIBUTE_UNUSED)
-{
-}
-
-static qemuMonitorCallbacks callbacks = {
-.eofNotify = virQEMUCapsMonitorNotify,
-.errorNotify = virQEMUCapsMonitorNotify,
-};
-
-
 /**
  * virQEMUCapsInitQMPArch:
  * @qemuCaps: QEMU capabilities
@@ -4278,211 +4267,6 @@ virQEMUCapsInitQMPMonitorTCG(virQEMUCapsPtr qemuCaps ATTRIBUTE_UNUSED,
 }
 
 
-typedef struct _virQEMUCapsInitQMPCommand virQEMUCapsInitQMPCommand;
-typedef virQEMUCapsInitQMPCommand *virQEMUCapsInitQMPCommandPtr;
-struct _virQEMUCapsInitQMPCommand {
-char *binary;
-uid_t runUid;
-gid_t runGid;
-char **qmperr;
-char *monarg;
-char *monpath;
-char *pidfile;
-virCommandPtr cmd;
-qemuMonitorPtr mon;
-virDomainChrSourceDef config;
-pid_t pid;
-virDomainObjPtr vm;
-};
-
-
-static void
-virQEMUCapsInitQMPCommandAbort(virQEMUCapsInitQMPCommandPtr cmd)
-{
-if (cmd->mon)
-virObjectUnlock(cmd->mon);
-qemuMonitorClose(cmd->mon);
-cmd->mon = NULL;
-
-virCommandAbort(cmd->cmd);
-virCommandFree(cmd->cmd);
-cmd->cmd = NULL;
-
-if (cmd->monpath)
-unlink(cmd->monpath);
-
-virDomainObjEndAPI(&cmd->vm);
-
-if (cmd->pid != 0) {
-char ebuf[1024];
-
-VIR_DEBUG("Killing QMP caps process %lld", (long long)cmd->pid);
-if (virProcessKill(cmd->pid, SIGKILL) < 0 && errno != ESRCH)
-VIR_ERROR(_("Failed to kill process %lld: %s"),
-  (long long)cmd->pid,
-  virStrerror(errno, ebuf, sizeof(ebuf)));
-
-VIR_FREE(*cmd->qmperr);
-}
-if (cmd->pidfile)
-unlink(cmd->pidfile);
-cmd->pid = 0;
-}
-
-
-static void
-virQEMUCapsInitQMPCommandFree(virQEMUCapsInitQMPCommandPtr cmd)
-{
-if (!cmd)
-return;
-
-virQEMUCapsInitQMPCommandAbort(cmd);
-VIR_FREE(cmd->binary);
-VIR_FREE(cmd->monpath);
-VIR_FREE(cmd->monarg);
-VIR_FREE(cmd->pidfile);
-VIR_FREE(cmd);
-}
-
-
-static virQEMUCapsInitQMPCommandPtr
-virQEMUCapsInitQMPCommandNew(char *binary,
- const char *libDir,
- uid_t runUid,
- gid_t runGid,
- char **qmperr)
-{
-virQEMUCapsInitQMPCommandPtr cmd = NULL;
-
-if (VIR_ALLOC(cmd) < 0)
-goto error;
-
-if (VIR_STRDUP(cmd->binary, binary) < 0)
-goto error;
-
-cmd->runUid = runUid;
-cmd->runGid = runGid;
-cmd->qmperr = qmperr;
-
-/* the ".sock" sufix is important to avoid a possible clash with a qemu
- * domain called "capabilities"
- */
-if (virAsprintf(&cmd->monpath, "%s/%s", libDir,
-"capabilities.monitor.sock") < 0)
-goto error;
-if (virAsprintf(&cmd->monarg, "unix:%s,server,nowait", cmd->monpath) < 0)
-goto error;
-
-/* ".pidfile" suffix is used rather than ".pid" to avoid a possible clash
- * with a qemu domain called "capabilities"
- * Normally we'd use runDir for pid files, but because we're using
- * -daemonize we need QEMU to be allowed to create them, rather
- * than libvirtd. So we're using libDir which QEMU can write to
- */
-if (virAsprintf(&cmd->pidfile, "%s/%s", libDir, "capabilities.pidfile") < 0)
-goto error;
-
-virPidFileForceCleanupPath(cmd->pidfile);
-
-cmd->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
-cmd->config.data.nix.path = cmd->monpath;
-cmd->config.data.nix.listen = false;
-
-return cmd;
-
- error:
-virQEMUCapsInitQMPCommandFree(cmd);
-return NULL;
-}
-
-
-/* Returns -1 on fatal error,
- *  0 on success,
- *  1 when probing QEMU failed
- */
-static int
-virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr cmd,
-   

[libvirt] [PATCH v4 02/37] qemu_monitor: Introduce qemuMonitorCPUModelInfo / JSON conversion

2018-11-02 Thread Chris Venteicher
Conversion functions are used to convert CPUModelInfo structs into
QMP JSON and the reverse.

QMP JSON is of form:
{"model": {"name": "IvyBridge", "props": {}}}
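
For instance, a model with boolean properties filled in might look like
(property names are illustrative):

{"model": {"name": "IvyBridge", "props": {"pcid": true, "vme": false}}}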

qemuMonitorCPUModelInfoBoolPropAdd is used to add boolean properties to
CPUModelInfo struct.

qemuMonitorJSONGetCPUModelExpansion makes full use of conversions and
propAdd in prep to support input of full cpu model in future.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_monitor.c  |  24 ++
 src/qemu/qemu_monitor.h  |   5 ++
 src/qemu/qemu_monitor_json.c | 154 +--
 3 files changed, 138 insertions(+), 45 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 45a4568fcc..801c072eff 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3764,6 +3764,30 @@ qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig)
 }
 
 
+int
+qemuMonitorCPUModelInfoBoolPropAdd(qemuMonitorCPUModelInfoPtr model,
+   const char *prop_name,
+   bool prop_value)
+{
+int ret = -1;
+qemuMonitorCPUProperty prop;
+prop.type = QEMU_MONITOR_CPU_PROPERTY_BOOLEAN;
+prop.value.boolean = prop_value;
+
+if (VIR_STRDUP(prop.name, prop_name) < 0)
+goto cleanup;
+
+if (VIR_APPEND_ELEMENT(model->props, model->nprops, prop) < 0)
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+VIR_FREE(prop.name);
+return ret;
+}
+
+
 int
 qemuMonitorGetCommands(qemuMonitorPtr mon,
char ***commands)
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index d87b5a4ec0..0a09590ed1 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1047,6 +1047,11 @@ qemuMonitorCPUModelInfoPtr qemuMonitorCPUModelInfoNew(const char *name);
 qemuMonitorCPUModelInfoPtr
 qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig);
 
+int qemuMonitorCPUModelInfoBoolPropAdd(qemuMonitorCPUModelInfoPtr model,
+   const char *prop_name,
+   bool prop_value)
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
+
 int qemuMonitorGetCommands(qemuMonitorPtr mon,
char ***commands);
 int qemuMonitorGetEvents(qemuMonitorPtr mon,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index 3de298c9e2..f20e9e9379 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -5505,6 +5505,101 @@ qemuMonitorJSONParseCPUModelProperty(const char *key,
 return 0;
 }
 
+
+/* model_json: {"name": "z13-base", "props": {}}
+ */
+static virJSONValuePtr
+qemuMonitorJSONBuildCPUModelInfoToJSON(qemuMonitorCPUModelInfoPtr model)
+{
+virJSONValuePtr cpu_props = NULL;
+virJSONValuePtr model_json = NULL;
+size_t i;
+
+if (!model)
+goto cleanup;
+
+if (model->nprops > 0 && !(cpu_props = virJSONValueNewObject()))
+goto cleanup;
+
+for (i = 0; i < model->nprops; i++) {
+qemuMonitorCPUPropertyPtr prop = &(model->props[i]);
+
+switch (prop->type) {
+case QEMU_MONITOR_CPU_PROPERTY_BOOLEAN:
+if (virJSONValueObjectAppendBoolean(cpu_props, prop->name,
+prop->value.boolean) < 0)
+goto cleanup;
+break;
+
+case QEMU_MONITOR_CPU_PROPERTY_STRING:
+if (virJSONValueObjectAppendString(cpu_props, prop->name,
+   prop->value.string) < 0)
+goto cleanup;
+break;
+
+case QEMU_MONITOR_CPU_PROPERTY_NUMBER:
+if (virJSONValueObjectAppendNumberLong(cpu_props, prop->name,
+   prop->value.number) < 0)
+goto cleanup;
+break;
+
+case QEMU_MONITOR_CPU_PROPERTY_LAST:
+default:
+virReportEnumRangeError(qemuMonitorCPUPropertyPtr, prop->type);
+goto cleanup;
+}
+}
+
+ignore_value(virJSONValueObjectCreate(&model_json, "s:name", model->name,
+  "A:props", &cpu_props, NULL));
+
+ cleanup:
+virJSONValueFree(cpu_props);
+return model_json;
+}
+
+
+/* model_json: {"name": "IvyBridge", "props": {}}
+ */
+static qemuMonitorCPUModelInfoPtr
+qemuMonitorJSONBuildCPUModelInfoFromJSON(virJSONValuePtr cpu_model)
+{
+virJSONValuePtr cpu_props;
+qemuMonitorCPUModelInfoPtr model = NULL;
+qemuMonitorCPUModelInfoPtr ret = NULL;
+char const *cpu_name;
+
+if (!(cpu_name = virJSONValueObjectGetString(cpu_model, "name"))) {
+virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+   _("Parsed JSON reply missing 'name'"));
+goto cleanup;
+}
+
+if (VIR_ALLOC(model) < 0)
+goto cleanup;
+
+if (VIR_STRDUP(model->name, cpu_name) < 0)
+goto cleanup;
+
+if ((cpu_props = virJSONValueObjectGetObject(cpu_model, "props"))) {
+

[libvirt] [PATCH v4 32/37] qemu_driver: Decouple code for baseline using libvirt

2018-11-02 Thread Chris Venteicher
Create utility function encapsulating code to calculate
hypervisor baseline cpu using the local libvirt utility functions.

A similar function encapsulating code to calculate the hypervisor baseline
using QEMU QMP messages will be introduced in a later commit.

Patch is a cut and paste of existing code into a utility function
wrapper.

s/cpu/*baseline/ (changing the output variable name) and initializing
the variable "rc" are the only code changes.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c | 78 --
 1 file changed, 52 insertions(+), 26 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 83f5fe0db0..54f7b8b26d 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13410,6 +13410,55 @@ qemuConnectBaselineCPU(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 }
 
 
+static int
+qemuConnectBaselineHypervisorCPUViaLibvirt(
+virQEMUCapsPtr qemuCaps,
+bool migratable,
+virDomainVirtType virttype,
+virArch arch,
+virCPUDefPtr *cpus,
+unsigned int ncpus,
+virCPUDefPtr *baseline)
+{
+char **features = NULL;
+int ret = -1;
+int rc = -1;
+virDomainCapsCPUModelsPtr cpuModels;
+
+*baseline = NULL;
+
+if (!(cpuModels = virQEMUCapsGetCPUDefinitions(qemuCaps, virttype)) ||
+cpuModels->nmodels == 0) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+   _("QEMU '%s' does not support any CPU models for "
+ "virttype '%s'"),
+   virQEMUCapsGetBinary(qemuCaps),
+   virDomainVirtTypeToString(virttype));
+goto cleanup;
+}
+
+rc = virQEMUCapsGetCPUFeatures(qemuCaps, virttype,
+   migratable, &features);
+if (rc < 0)
+goto cleanup;
+if (features && rc == 0) {
+/* We got only migratable features from QEMU if we asked for them,
+ * no further filtering in virCPUBaseline is desired. */
+migratable = false;
+}
+
+if (!(*baseline = virCPUBaseline(arch, cpus, ncpus, cpuModels,
+ (const char **)features, migratable)))
+goto cleanup;
+
+ret = 0;
+
+ cleanup:
+virStringListFree(features);
+return ret;
+}
+
+
 static char *
 qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
  const char *emulator,
@@ -13428,7 +13477,6 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 bool migratable;
 virCPUDefPtr cpu = NULL;
 char *cpustr = NULL;
-char **features = NULL;
 
 virCheckFlags(VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES |
   VIR_CONNECT_BASELINE_CPU_MIGRATABLE, NULL);
@@ -13451,30 +13499,9 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 if (ARCH_IS_X86(arch)) {
 migratable = !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
 
-virDomainCapsCPUModelsPtr cpuModels;
-
-if (!(cpuModels = virQEMUCapsGetCPUDefinitions(qemuCaps, virttype)) ||
-cpuModels->nmodels == 0) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-   _("QEMU '%s' does not support any CPU models for "
- "virttype '%s'"),
-   virQEMUCapsGetBinary(qemuCaps),
-   virDomainVirtTypeToString(virttype));
-goto cleanup;
-}
-
-int rc = virQEMUCapsGetCPUFeatures(qemuCaps, virttype,
-   migratable, &features);
-if (rc < 0)
-goto cleanup;
-if (features && rc == 0) {
-/* We got only migratable features from QEMU if we asked for them,
- * no further filtering in virCPUBaseline is desired. */
-migratable = false;
-}
-
-if (!(cpu = virCPUBaseline(arch, cpus, ncpus, cpuModels,
-   (const char **)features, migratable)))
+if (qemuConnectBaselineHypervisorCPUViaLibvirt(qemuCaps, migratable,
+   virttype, arch,
+   cpus, ncpus, &cpu) < 0)
 goto cleanup;
 } else {
 virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
@@ -13495,7 +13522,6 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 virCPUDefListFree(cpus);
 virCPUDefFree(cpu);
 virObjectUnref(qemuCaps);
-virStringListFree(features);
 
 return cpustr;
 }
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 36/37] qemu_driver: Remove unsupported props in expanded hypervisor baseline output

2018-11-02 Thread Chris Venteicher
Hypervisor baseline has an expansion mode where the cpu property /
feature list is fully expanded to list all supported (true) features.

The output (expanded) CPU model should not list properties that are
false (unsupported).

QEMU expansion output enumerates both true (supported) and false
(unsupported) properties.  This patch removes the false (unsupported)
entries from the output cpu property list.

This change makes the expanded output consistent when both the internal
libvirt utility functions and QEMU QMP commands are used to expand the
property list.
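
As an illustrative example (feature names made up), an expansion such as

    {"name": "z13-base", "props": {"cmpsc": true, "etoken": false}}

is squashed to

    {"name": "z13-base", "props": {"cmpsc": true}}

before being converted back into a virCPUDef for output.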

qemu_monitor: Introduce removal of true/false CPUModelInfo props

A utility function is created to squash CPU properties list in
qemuMonitorCPUmodelInfo structure by removing boolean properties
of matching value.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c  |  4 
 src/qemu/qemu_monitor.c | 30 ++
 src/qemu/qemu_monitor.h |  4 
 3 files changed, 38 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 73707cce46..604c21d311 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13632,6 +13632,10 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
migratable, modelInfo, &expansion) < 0)
 goto cleanup;
 
+/* Expansion enumerates all features
+ * Baselines output enumerates only in-model (true) features */
+qemuMonitorCPUModelInfoRemovePropByBoolValue(expansion, false);
+
+if (!(cpu = virQEMUCapsCPUModelInfoToCPUDef(migratable, expansion)))
 goto cleanup;
 
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index edbaee1a53..12feb034fd 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3785,6 +3785,36 @@ qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig)
 }
 
 
+/* Squash CPU Model Info property list
+ * removing props of type boolean matching value */
+void
+qemuMonitorCPUModelInfoRemovePropByBoolValue(qemuMonitorCPUModelInfoPtr model,
+ bool value)
+{
+qemuMonitorCPUPropertyPtr src;
+qemuMonitorCPUPropertyPtr dst;
+size_t i;
+size_t dst_nprops = 0;
+
+for (i = 0; i < model->nprops; i++) {
+src = &(model->props[i]);
+dst = &(model->props[dst_nprops]);
+
+if (src->type == QEMU_MONITOR_CPU_PROPERTY_BOOLEAN &&
+src->value.boolean == value)
+continue;
+
+*dst = *src;
+
+dst_nprops++;
+}
+
+model->nprops = dst_nprops;
+
+ignore_value(VIR_REALLOC_N_QUIET(model->props, dst_nprops));
+}
+
+
 int
 qemuMonitorCPUModelInfoBoolPropAdd(qemuMonitorCPUModelInfoPtr model,
const char *prop_name,
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index d46e9378ea..f6052ab852 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1053,6 +1053,10 @@ int qemuMonitorCPUModelInfoBoolPropAdd(qemuMonitorCPUModelInfoPtr model,
bool prop_value)
 ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2);
 
+void qemuMonitorCPUModelInfoRemovePropByBoolValue(qemuMonitorCPUModelInfoPtr model,
+  bool value)
+ATTRIBUTE_NONNULL(1);
+
 int qemuMonitorGetCommands(qemuMonitorPtr mon,
char ***commands);
 int qemuMonitorGetEvents(qemuMonitorPtr mon,
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 25/37] qemu_process: Enter QMP command mode when starting QEMU Process

2018-11-02 Thread Chris Venteicher
qemuProcessStartQmp starts a QEMU process and monitor connection that
can be used by multiple functions possibly for multiple QMP commands.

The QMP exchange to exit capabilities negotiation mode and enter command mode
can only be performed once after the monitor connection is established.

Move responsibility for entering QMP command mode into the qemuProcess
code so multiple functions can issue QMP commands in arbitrary orders.

This also simplifies the functions using the connection provided by
qemuProcessStartQmp to issue QMP commands.

Test code now needs to call qemuMonitorSetCapabilities to send the
message to switch to command mode, because the test code does not use
the qemuProcess code that calls qemuMonitorSetCapabilities internally.
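
For reference, entering command mode is the standard QMP handshake:

    -> { "execute": "qmp_capabilities" }
    <- { "return": {} }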

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 14 --
 src/qemu/qemu_process.c  |  8 
 tests/qemucapabilitiestest.c |  7 +++
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 5cbc726bb9..dd7c07f6e2 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4068,13 +4068,6 @@ virQEMUCapsInitQMPMonitor(virQEMUCapsPtr qemuCaps,
 
 /* @mon is supposed to be locked by callee */
 
-if (qemuMonitorSetCapabilities(mon) < 0) {
-VIR_DEBUG("Failed to set monitor capabilities %s",
-  virGetLastErrorMessage());
-ret = 0;
-goto cleanup;
-}
-
if (qemuMonitorGetVersion(mon,
  &major, &minor, &micro,
  &package) < 0) {
@@ -4248,13 +4241,6 @@ virQEMUCapsInitQMPMonitorTCG(virQEMUCapsPtr qemuCaps ATTRIBUTE_UNUSED,
 {
 int ret = -1;
 
-if (qemuMonitorSetCapabilities(mon) < 0) {
-VIR_DEBUG("Failed to set monitor capabilities %s",
-  virGetLastErrorMessage());
-ret = 0;
-goto cleanup;
-}
-
 if (virQEMUCapsProbeQMPCPUDefinitions(qemuCaps, mon, true) < 0)
 goto cleanup;
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f545fc3b33..1bb11b2089 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8298,6 +8298,14 @@ qemuConnectMonitorQmp(qemuProcessPtr proc)
 
 virObjectLock(proc->mon);
 
+/* Exit capabilities negotiation mode and enter QEMU command mode
+ * by issuing qmp_capabilities command to QEMU */
+if (qemuMonitorSetCapabilities(proc->mon) < 0) {
+VIR_DEBUG("Failed to set monitor capabilities %s",
+  virGetLastErrorMessage());
+goto ignore;
+}
+
  ignore:
 ret = 0;
 
diff --git a/tests/qemucapabilitiestest.c b/tests/qemucapabilitiestest.c
index 8fe5a55e1d..ea4671739b 100644
--- a/tests/qemucapabilitiestest.c
+++ b/tests/qemucapabilitiestest.c
@@ -58,6 +58,9 @@ testQemuCaps(const void *opaque)
if (!(mon = qemuMonitorTestNewFromFileFull(repliesFile, &data->driver, NULL)))
 goto cleanup;
 
+if (qemuMonitorSetCapabilities(qemuMonitorTestGetMonitor(mon)) < 0)
+goto cleanup;
+
 if (!(capsActual = virQEMUCapsNew()) ||
 virQEMUCapsInitQMPMonitor(capsActual,
   qemuMonitorTestGetMonitor(mon)) < 0)
@@ -65,6 +68,10 @@ testQemuCaps(const void *opaque)
 
 if (virQEMUCapsGet(capsActual, QEMU_CAPS_KVM)) {
 qemuMonitorResetCommandID(qemuMonitorTestGetMonitor(mon));
+
+if (qemuMonitorSetCapabilities(qemuMonitorTestGetMonitor(mon)) < 0)
+goto cleanup;
+
 if (virQEMUCapsInitQMPMonitorTCG(capsActual,
  qemuMonitorTestGetMonitor(mon)) < 0)
 goto cleanup;
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 35/37] qemu_driver: Support feature expansion via QEMU when baselining cpu

2018-11-02 Thread Chris Venteicher
Support using QEMU to do feature expansion when also using QEMU to
compute hypervisor baseline.

A QEMU process is already created to send the QMP messages to baseline
using QEMU.
The same QEMU process is used for the CPU feature expansion.

QEMU only returns migratable features when expanding a CPU model on the
architectures where QEMU is used for the baseline, so no attempt is made
to request non-migratable features in the expansion when QEMU is used
for the baseline.
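
The expansion itself is the existing query-cpu-model-expansion exchange,
roughly as follows (model name and props are illustrative):

    -> { "execute": "query-cpu-model-expansion",
         "arguments": { "type": "full",
                        "model": { "name": "z13-base", "props": {} } } }
    <- { "return": { "model": { "name": "z13-base",
                                "props": { "cmpsc": true, ... } } } }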

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c | 21 +
 1 file changed, 21 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 8b5c3e1fb5..73707cce46 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13614,6 +13614,27 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 if (flags & VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES) {
 if (useLibvirt && virCPUExpandFeatures(arch, cpu) < 0) {
 goto cleanup;
+} else if (useQemu &&
+   virQEMUCapsGet(qemuCaps, QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION)) {
+
+if (!(modelInfo = virQEMUCapsCPUModelInfoFromCPUDef(cpu)))
+goto cleanup;
+
+virCPUDefFree(cpu);
+cpu = NULL;
+
+/* QEMU can only include migratable features
+   for all archs that use QEMU for baseline calculation */
+migratable = true;
+
+if (qemuMonitorGetCPUModelExpansion(mon,
+    QEMU_MONITOR_CPU_MODEL_EXPANSION_FULL,
+    migratable, modelInfo, &expansion) < 0)
+goto cleanup;
+
+if (!(cpu = virQEMUCapsCPUModelInfoToCPUDef(migratable, expansion)))
+goto cleanup;
+
 } else {
 virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+   _("expand features while "
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 37/37] qemu_monitor: Default props to migratable when expanding cpu model

2018-11-02 Thread Chris Venteicher
QEMU only returns migratable props when expanding model unless
explicitly told to also include non-migratable props.

Props are marked migratable when we are certain QEMU returned only
migratable props, which yields consistent information and makes the
expansion output for s390 consistent with x86.

After this change, prop->migratable immediately defaults to _YES for all
props when we know QEMU only included migratable props in the CPU model.

Set model->migratability = true when we have set prop->migratable.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_monitor.c   | 53 ++-
 src/qemu/qemu_monitor.h   |  7 +-
 .../caps_2.10.0.s390x.xml | 60 -
 .../caps_2.11.0.s390x.xml | 58 -
 .../caps_2.12.0.s390x.xml | 56 
 .../qemucapabilitiesdata/caps_2.8.0.s390x.xml | 32 +-
 .../qemucapabilitiesdata/caps_2.9.0.s390x.xml | 34 +-
 .../qemucapabilitiesdata/caps_3.0.0.s390x.xml | 64 +--
 8 files changed, 210 insertions(+), 154 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 12feb034fd..845cb929a6 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3682,12 +3682,31 @@ qemuMonitorGetCPUModelExpansion(qemuMonitorPtr mon,
qemuMonitorCPUModelInfoPtr *expansion)
 {
+int ret = -1;
+qemuMonitorCPUModelInfoPtr tmp;
+
 VIR_DEBUG("type=%d model_name=%s migratable=%d",
   type, input->name, migratable);
 
+*expansion = NULL;
+
 QEMU_CHECK_MONITOR(mon);
 
-return qemuMonitorJSONGetCPUModelExpansion(mon, type, migratable, input, expansion);
+if ((ret = qemuMonitorJSONGetCPUModelExpansion(mon, type, migratable, input, &tmp)) < 0)
+goto cleanup;
+
+if (migratable) {
+/* Only migratable props were included in expanded CPU model */
+*expansion = qemuMonitorCPUModelInfoCopyDefaultMigratable(tmp);
+} else {
+VIR_STEAL_PTR(*expansion, tmp);
+}
+
+ret = 0;
+
+ cleanup:
+qemuMonitorCPUModelInfoFree(tmp);
+return ret;
 }
 
 
@@ -3785,6 +3804,38 @@ qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig)
 }
 
 
+qemuMonitorCPUModelInfoPtr
+qemuMonitorCPUModelInfoCopyDefaultMigratable(const qemuMonitorCPUModelInfo *orig)
+{
+qemuMonitorCPUModelInfoPtr ret = NULL;
+qemuMonitorCPUModelInfoPtr tmp = NULL;
+qemuMonitorCPUPropertyPtr prop = NULL;
+size_t i;
+
+if (!(tmp = qemuMonitorCPUModelInfoCopy(orig)))
+goto cleanup;
+
+for (i = 0; i < tmp->nprops; i++) {
+prop = tmp->props + i;
+
+/* Default prop thats in cpu model (true) to migratable (_YES)
+ * unless prop already explicitly set not migratable (_NO)
+ */
+if (prop->type == QEMU_MONITOR_CPU_PROPERTY_BOOLEAN &&
+prop->value.boolean &&
+prop->migratable != VIR_TRISTATE_BOOL_NO)
+prop->migratable = VIR_TRISTATE_BOOL_YES;
+}
+
+tmp->migratability = true; /* prop->migratable = YES/NO for all CPU props */
+
+VIR_STEAL_PTR(ret, tmp);
+
+ cleanup:
+return ret;
+}
+
+
 /* Squash CPU Model Info property list
  * removing props of type boolean matching value */
 void
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index f6052ab852..9216f45f59 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1025,7 +1025,7 @@ struct _qemuMonitorCPUModelInfo {
 char *name;
 size_t nprops;
 qemuMonitorCPUPropertyPtr props;
-bool migratability;
+bool migratability; /* true if prop->migratable is YES/NO for all CPU props */
 };
 
 typedef enum {
@@ -1048,6 +1048,11 @@ qemuMonitorCPUModelInfoPtr qemuMonitorCPUModelInfoNew(const char *name);
 qemuMonitorCPUModelInfoPtr
 qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig);
 
+qemuMonitorCPUModelInfoPtr
+qemuMonitorCPUModelInfoCopyDefaultMigratable(const qemuMonitorCPUModelInfo *orig)
+ATTRIBUTE_NONNULL(1);
+
+
 int qemuMonitorCPUModelInfoBoolPropAdd(qemuMonitorCPUModelInfoPtr model,
const char *prop_name,
bool prop_value)
diff --git a/tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml 
b/tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml
index e000aac384..391bee4f06 100644
--- a/tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml
+++ b/tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml
@@ -118,36 +118,36 @@
   306247
   
   s390x
  [hostCPU <property> element changes omitted: the XML markup in this hunk was stripped by the list archive and is not recoverable]
   
   
   
diff --git 

[libvirt] [PATCH v4 28/37] qemu_capabilities: Introduce CPUModelInfo to virCPUDef function

2018-11-02 Thread Chris Venteicher
Move the existing code that converts between CPU model info structures
(qemuMonitorCPUModelInfoPtr into virCPUDef) into a reusable function.

The new function is used in this and future patches.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 84 ++--
 src/qemu/qemu_capabilities.h |  3 ++
 2 files changed, 64 insertions(+), 23 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index dd7c07f6e2..dcd6ffb876 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2797,7 +2797,8 @@ virQEMUCapsInitCPUModelS390(virQEMUCapsPtr qemuCaps,
 virCPUDefPtr cpu,
 bool migratable)
 {
-size_t i;
+virCPUDefPtr tmp = NULL;
+int ret = -1;
 
 if (!modelInfo) {
 if (type == VIR_DOMAIN_VIRT_KVM) {
@@ -2810,32 +2811,18 @@ virQEMUCapsInitCPUModelS390(virQEMUCapsPtr qemuCaps,
 return 2;
 }
 
-if (VIR_STRDUP(cpu->model, modelInfo->name) < 0 ||
-VIR_ALLOC_N(cpu->features, modelInfo->nprops) < 0)
-return -1;
-
-cpu->nfeatures_max = modelInfo->nprops;
-cpu->nfeatures = 0;
-
-for (i = 0; i < modelInfo->nprops; i++) {
-virCPUFeatureDefPtr feature = cpu->features + cpu->nfeatures;
-qemuMonitorCPUPropertyPtr prop = modelInfo->props + i;
+if (!(tmp = virQEMUCapsCPUModelInfoToCPUDef(migratable, modelInfo)))
+goto cleanup;
 
-if (prop->type != QEMU_MONITOR_CPU_PROPERTY_BOOLEAN)
-continue;
+/* Free original then copy over model, vendor, vendor_id and features */
+virCPUDefStealModel(cpu, tmp, true);
 
-if (VIR_STRDUP(feature->name, prop->name) < 0)
-return -1;
+ret = 0;
 
-if (!prop->value.boolean ||
-(migratable && prop->migratable == VIR_TRISTATE_BOOL_NO))
-feature->policy = VIR_CPU_FEATURE_DISABLE;
-else
-feature->policy = VIR_CPU_FEATURE_REQUIRE;
-cpu->nfeatures++;
-}
+ cleanup:
+virCPUDefFree(tmp);
 
-return 0;
+return ret;
 }
 
 
@@ -3633,6 +3620,57 @@ virQEMUCapsLoadCache(virArch hostArch,
 }
 
 
+/* qemuMonitorCPUModelInfo name   => virCPUDef model
+ * qemuMonitorCPUModelInfo boolean properties => virCPUDef features
+ *
+ * migratable true: mark non-migratable features as disabled
+ *false: allow all features as required
+ */
+virCPUDefPtr
+virQEMUCapsCPUModelInfoToCPUDef(bool migratable, qemuMonitorCPUModelInfoPtr model)
+{
+virCPUDefPtr cpu = NULL;
+virCPUDefPtr ret = NULL;
+size_t i;
+
+if (!model || VIR_ALLOC(cpu) < 0)
+goto cleanup;
+
+VIR_DEBUG("model->name= %s", NULLSTR(model->name));
+
+if (VIR_STRDUP(cpu->model, model->name) < 0 ||
+VIR_ALLOC_N(cpu->features, model->nprops) < 0)
+goto cleanup;
+
+cpu->nfeatures_max = model->nprops;
+cpu->nfeatures = 0;
+
+for (i = 0; i < model->nprops; i++) {
+virCPUFeatureDefPtr feature = cpu->features + cpu->nfeatures;
+qemuMonitorCPUPropertyPtr prop = model->props + i;
+
+if (prop->type != QEMU_MONITOR_CPU_PROPERTY_BOOLEAN)
+continue;
+
+if (VIR_STRDUP(feature->name, prop->name) < 0)
+goto cleanup;
+
+if (!prop->value.boolean ||
+(migratable && prop->migratable == VIR_TRISTATE_BOOL_NO))
+feature->policy = VIR_CPU_FEATURE_DISABLE;
+else
+feature->policy = VIR_CPU_FEATURE_REQUIRE;
+
+cpu->nfeatures++;
+}
+
+VIR_STEAL_PTR(ret, cpu);
+
+ cleanup:
+virCPUDefFree(cpu);
+return ret;
+}
+
 static void
 virQEMUCapsFormatHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
   virBufferPtr buf,
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 6bb9a2c8f0..a377d7b912 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -572,6 +572,9 @@ int virQEMUCapsGetMachineTypesCaps(virQEMUCapsPtr qemuCaps,
 void virQEMUCapsFilterByMachineType(virQEMUCapsPtr qemuCaps,
 const char *machineType);
 
+virCPUDefPtr virQEMUCapsCPUModelInfoToCPUDef(bool migratable,
+ qemuMonitorCPUModelInfoPtr model);
+
 virFileCachePtr virQEMUCapsCacheNew(const char *libDir,
 const char *cacheDir,
 uid_t uid,
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 31/37] qemu_driver: Consolidate code to baseline using libvirt

2018-11-02 Thread Chris Venteicher
Simple cut/paste operations within function
qemuConnectBaselineHypervisorCPU to group together code specific to
computing baseline using only libvirt utility functions.

This is done in anticipation of creating new utility functions for
1) baseline using libvirt utilities (this code)
2) baseline using qemu qmp command (future)

Future patches are easier to follow if the code for using libvirt is
consolidated.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c | 27 ++-
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index a52e2495d5..83f5fe0db0 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13425,7 +13425,6 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 virQEMUCapsPtr qemuCaps = NULL;
 virArch arch;
 virDomainVirtType virttype;
-virDomainCapsCPUModelsPtr cpuModels;
 bool migratable;
 virCPUDefPtr cpu = NULL;
 char *cpustr = NULL;
@@ -13437,8 +13436,6 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 if (virConnectBaselineHypervisorCPUEnsureACL(conn) < 0)
 goto cleanup;
 
-migratable = !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
-
 if (!(cpus = virCPUDefListParse(xmlCPUs, ncpus, VIR_CPU_TYPE_AUTO)))
 goto cleanup;
 
@@ -13451,17 +13448,21 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 if (!qemuCaps)
 goto cleanup;
 
-if (!(cpuModels = virQEMUCapsGetCPUDefinitions(qemuCaps, virttype)) ||
-cpuModels->nmodels == 0) {
-virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
-   _("QEMU '%s' does not support any CPU models for "
- "virttype '%s'"),
-   virQEMUCapsGetBinary(qemuCaps),
-   virDomainVirtTypeToString(virttype));
-goto cleanup;
-}
-
 if (ARCH_IS_X86(arch)) {
+migratable = !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
+
+virDomainCapsCPUModelsPtr cpuModels;
+
+if (!(cpuModels = virQEMUCapsGetCPUDefinitions(qemuCaps, virttype)) ||
+cpuModels->nmodels == 0) {
+virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
+   _("QEMU '%s' does not support any CPU models for "
+ "virttype '%s'"),
+   virQEMUCapsGetBinary(qemuCaps),
+   virDomainVirtTypeToString(virttype));
+goto cleanup;
+}
+
+int rc = virQEMUCapsGetCPUFeatures(qemuCaps, virttype,
+   migratable, &features);
 if (rc < 0)
-- 
2.17.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH v4 20/37] qemu_process: Don't open monitor if process failed

2018-11-02 Thread Chris Venteicher
Gracefully handle the case where process activation failed prior to calling this function.

Consistent with the existing code for qemuConnectMonitor (for domains)
in qemu_process.c...

- Handle qemuMonitorOpen failure with INFO message and NULL ptr
- Identify parameters passed to qemuMonitorOpen

Monitor callbacks will be removed in a future patch, so we prep for passing
NULL for the callback pointer.

Set proc->mon to NULL then use VIR_STEAL_PTR if successful to be
consistent with other functions.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 27 +--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 70ca82dd55..af6b20713a 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8236,25 +8236,48 @@ static int
 qemuConnectMonitorQmp(qemuProcessPtr proc)
 {
 int ret = -1;
+qemuMonitorPtr mon = NULL;
+unsigned long long timeout = 0;
+bool retry = true;
+bool enableJson = true;
+virQEMUDriverPtr driver = NULL;
+qemuMonitorCallbacksPtr monCallbacks = &callbacks;
 virDomainXMLOptionPtr xmlopt = NULL;
 virDomainChrSourceDef monConfig;
 
 VIR_DEBUG("proc=%p, emulator=%s, proc->pid=%lld",
   proc, NULLSTR(proc->binary), (long long) proc->pid);
 
+if (!proc || !proc->pid)
+goto ignore;
+
+proc->mon = NULL;
+
 monConfig.type = VIR_DOMAIN_CHR_TYPE_UNIX;
 monConfig.data.nix.path = proc->monpath;
 monConfig.data.nix.listen = false;
 
+/* Create a NULL Domain object for qemuMonitor */
 if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) ||
 !(proc->vm = virDomainObjNew(xmlopt)))
 goto cleanup;
 
 proc->vm->pid = proc->pid;
 
-if (!(proc->mon = qemuMonitorOpen(proc->vm, &monConfig, true, true,
-  0, &callbacks, NULL)))
+mon = qemuMonitorOpen(proc->vm,
+  &monConfig,
+  enableJson,
+  retry,
+  timeout,
+  monCallbacks,
+  driver);
+
+if (!mon) {
+VIR_INFO("Failed to connect monitor to emulator %s", proc->binary);
 goto ignore;
+}
+
+VIR_STEAL_PTR(proc->mon, mon);
 
 virObjectLock(proc->mon);
 
-- 
2.17.1



[libvirt] [PATCH v4 17/37] qemu_process: Store libDir in qemuProcess struct

2018-11-02 Thread Chris Venteicher
Store libDir path in the qemuProcess struct in anticipation of moving
path construction code into qemuProcessInitQmp function.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 8 +---
 src/qemu/qemu_process.h | 1 +
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f109e6e19b..804ec998af 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8085,6 +8085,7 @@ qemuProcessFree(qemuProcessPtr proc)
 
 qemuProcessStopQmp(proc);
 VIR_FREE(proc->binary);
+VIR_FREE(proc->libDir);
 VIR_FREE(proc->monpath);
 VIR_FREE(proc->monarg);
 VIR_FREE(proc->pidfile);
@@ -8105,7 +8106,8 @@ qemuProcessNew(const char *binary,
 if (VIR_ALLOC(proc) < 0)
 goto error;
 
-if (VIR_STRDUP(proc->binary, binary) < 0)
+if (VIR_STRDUP(proc->binary, binary) < 0 ||
+VIR_STRDUP(proc->libDir, libDir) < 0)
 goto error;
 
 proc->forceTCG = forceTCG;
@@ -8116,7 +8118,7 @@ qemuProcessNew(const char *binary,
 /* the ".sock" sufix is important to avoid a possible clash with a qemu
  * domain called "capabilities"
  */
-if (virAsprintf(&proc->monpath, "%s/%s", libDir,
+if (virAsprintf(&proc->monpath, "%s/%s", proc->libDir,
 "capabilities.monitor.sock") < 0)
 goto error;
 if (virAsprintf(&proc->monarg, "unix:%s,server,nowait", proc->monpath) < 0)
@@ -8128,7 +8130,7 @@ qemuProcessNew(const char *binary,
  * -daemonize we need QEMU to be allowed to create them, rather
  * than libvirtd. So we're using libDir which QEMU can write to
  */
-if (virAsprintf(&proc->pidfile, "%s/%s", libDir, "capabilities.pidfile") < 0)
+if (virAsprintf(&proc->pidfile, "%s/%s", proc->libDir, "capabilities.pidfile") < 0)
 goto error;
 
 virPidFileForceCleanupPath(proc->pidfile);
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index c34ca52ef5..1b8f743861 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -218,6 +218,7 @@ typedef struct _qemuProcess qemuProcess;
 typedef qemuProcess *qemuProcessPtr;
 struct _qemuProcess {
 char *binary;
+char *libDir;
 uid_t runUid;
 gid_t runGid;
 char *qmperr;  /* qemu process stderr */
-- 
2.17.1



[libvirt] [PATCH v4 24/37] qemu_monitor: Make monitor callbacks optional

2018-11-02 Thread Chris Venteicher
QEMU process code for capabilities doesn't use monitor callbacks and
defines empty callback functions.

Allow NULL to be passed to qemuMonitorOpen for callbacks and remove the
empty functions from the QMP process code.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_monitor.c |  4 ++--
 src/qemu/qemu_process.c | 14 +-
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 4c0c2decf7..2071c6e846 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -813,12 +813,12 @@ qemuMonitorOpenInternal(virDomainObjPtr vm,
 {
 qemuMonitorPtr mon;
 
-if (!cb->eofNotify) {
+if (cb && !cb->eofNotify) {
 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("EOF notify callback must be supplied"));
 return NULL;
 }
-if (!cb->errorNotify) {
+if (cb && !cb->errorNotify) {
 virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Error notify callback must be supplied"));
 return NULL;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 9d3ce4eb6d..f545fc3b33 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8063,18 +8063,6 @@ qemuProcessReconnectAll(virQEMUDriverPtr driver)
 }
 
 
-static void virQEMUCapsMonitorNotify(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
- virDomainObjPtr vm ATTRIBUTE_UNUSED,
- void *opaque ATTRIBUTE_UNUSED)
-{
-}
-
-static qemuMonitorCallbacks callbacks = {
-.eofNotify = virQEMUCapsMonitorNotify,
-.errorNotify = virQEMUCapsMonitorNotify,
-};
-
-
 /**
  * qemuProcessFree:
  * @proc: Stores Process and Connection State
@@ -8270,7 +8258,7 @@ qemuConnectMonitorQmp(qemuProcessPtr proc)
 bool retry = true;
 bool enableJson = true;
 virQEMUDriverPtr driver = NULL;
-qemuMonitorCallbacksPtr monCallbacks = &callbacks;
+qemuMonitorCallbacksPtr monCallbacks = NULL;
 virDomainXMLOptionPtr xmlopt = NULL;
 virDomainChrSourceDef monConfig;
 
-- 
2.17.1



[libvirt] [PATCH v4 34/37] qemu_driver: Support baseline calculation using QEMU

2018-11-02 Thread Chris Venteicher
Add capability to calculate hypervisor baseline using QMP message
exchanges with QEMU in addition to existing capability to calculate
baseline using libvirt utility functions.

A new utility function encapsulates the core logic for interacting with QEMU
using QMP baseline messages. Each QMP baseline message only allows two CPU
models to be evaluated at a time, so a sequence of QMP baseline messages must
be used to include all the models in the baseline when more than two CPU
models are input.
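
For illustration only (the command and field names follow the QEMU QMP schema;
the model and feature names are just placeholders matching the comment in the
patch below), one baseline exchange looks roughly like:

  -> { "execute": "query-cpu-model-baseline",
       "arguments": { "modela": { "name": "z14" },
                      "modelb": { "name": "z13" } } }
  <- { "return": { "model": { "name": "z13-base",
                              "props": { "yyy": true } } } }

With more than two input models, the result of one exchange is fed back in as
an input to the next exchange until every model has been folded into the
baseline.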

A QEMU process must be started prior to sending the series of QMP messages
that compute the baseline.

The QEMU process is started and maintained in the main hypervisor
baseline function and only a pointer to the QEMU process monitor is
passed to the baseline utility function.

The QEMU process is maintained in the main hypervisor baseline function
because the process will also be used for feature expansion (via QEMU)
in a later patch.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_driver.c | 100 +
 1 file changed, 100 insertions(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 632b756c89..8b5c3e1fb5 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13410,6 +13410,78 @@ qemuConnectBaselineCPU(virConnectPtr conn 
ATTRIBUTE_UNUSED,
 }
 
 
+/* in:
+ *  cpus[0]->model = "z14";
+ *  cpus[0]->features[0].name = "xxx";
+ *  cpus[0]->features[1].name = "yyy";
+ *  ***
+ *  cpus[n]->model = "z13";
+ *  cpus[n]->features[0].name = "xxx";
+ *  cpus[n]->features[1].name = "yyy";
+ *
+ * out:
+ *  *baseline->model = "z13-base";
+ *  *baseline->features[0].name = "yyy";
+ */
+static int
+qemuConnectBaselineHypervisorCPUViaQEMU(qemuMonitorPtr mon,
+virCPUDefPtr *cpus,
+virCPUDefPtr *baseline)
+{
+qemuMonitorCPUModelInfoPtr model_baseline = NULL;
+qemuMonitorCPUModelInfoPtr new_model_baseline = NULL;
+qemuMonitorCPUModelInfoPtr next_model = NULL;
+bool migratable = true;
+int ret = -1;
+size_t i;
+
+*baseline = NULL;
+
+if (!cpus || !cpus[0]) {
+virReportError(VIR_ERR_INVALID_ARG, "%s", _("no cpus"));
+goto cleanup;
+}
+
+for (i = 0; cpus[i]; i++) {  /* cpus terminated by NULL element */
+virCPUDefPtr cpu = cpus[i];
+
+VIR_DEBUG("cpu[%lu]->model = %s", i, NULLSTR(cpu->model));
+
+if (!(next_model = virQEMUCapsCPUModelInfoFromCPUDef(cpu)))
+goto cleanup;
+
+if (i == 0) {
+model_baseline = next_model;
+continue;
+}
+
+if (qemuMonitorGetCPUModelBaseline(mon, model_baseline,
+   next_model, &new_model_baseline) < 0)
+goto cleanup;
+
+qemuMonitorCPUModelInfoFree(model_baseline);
+qemuMonitorCPUModelInfoFree(next_model);
+
+next_model = NULL;
+
+model_baseline = new_model_baseline;
+}
+
+if (!(*baseline = virQEMUCapsCPUModelInfoToCPUDef(migratable, model_baseline)))
+goto cleanup;
+
+VIR_DEBUG("baseline->model = %s", (*baseline)->model);
+
+ret = 0;
+
+ cleanup:
+qemuMonitorCPUModelInfoFree(model_baseline);
+qemuMonitorCPUModelInfoFree(next_model);
+
+return ret;
+}
+
+
 static int
 qemuConnectBaselineHypervisorCPUViaLibvirt(
 virQEMUCapsPtr qemuCaps,
@@ -13478,6 +13550,12 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 virCPUDefPtr cpu = NULL;
 char *cpustr = NULL;
 bool useLibvirt = false;
+bool useQemu = false;
+bool forceTCG = false;
+qemuMonitorCPUModelInfoPtr modelInfo = NULL;
+qemuMonitorCPUModelInfoPtr expansion = NULL;
+qemuProcessPtr proc = NULL;
+qemuMonitorPtr mon = NULL;
 
 virCheckFlags(VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES |
   VIR_CONNECT_BASELINE_CPU_MIGRATABLE, NULL);
@@ -13498,6 +13576,7 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
 goto cleanup;
 
 useLibvirt = ARCH_IS_X86(arch);
+useQemu = ARCH_IS_S390(arch);
 
 if (useLibvirt) {
 migratable = !!(flags & VIR_CONNECT_BASELINE_CPU_MIGRATABLE);
@@ -13506,6 +13585,23 @@ qemuConnectBaselineHypervisorCPU(virConnectPtr conn,
virttype, arch,
cpus, ncpus, ) < 0)
 goto cleanup;
+
+} else if (useQemu) {
+const char *binary = virQEMUCapsGetBinary(qemuCaps);
+virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+
+if (!(proc = qemuProcessNew(binary, cfg->libDir,
+cfg->user, cfg->group, forceTCG)))
+goto cleanup;
+
+if (qemuProcessStartQmp(proc) < 0)
+goto cleanup;
+
+mon = QEMU_PROCESS_MONITOR(proc);
+
+if ((qemuConnectBaselineHypervisorCPUViaQEMU(mon, cpus, &cpu) < 0) || !cpu)
+goto cleanup;
+
 } else {

[libvirt] [PATCH v4 23/37] qemu_process: Catch process free before process stop

2018-11-02 Thread Chris Venteicher
Catch execution paths where qemuProcessFree is called before
qemuProcessStopQmp then report error and force stop before proceeding.

Also add a public function header comment and a debug message.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 19 ---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index cb2902be02..9d3ce4eb6d 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8075,15 +8075,28 @@ static qemuMonitorCallbacks callbacks = {
 };
 
 
-
-
+/**
+ * qemuProcessFree:
+ * @proc: Stores Process and Connection State
+ *
+ * Free process data structure.
+ */
 void
 qemuProcessFree(qemuProcessPtr proc)
 {
+VIR_DEBUG("proc=%p, proc->mon=%p", proc, (proc ? proc->mon : NULL));
+
 if (!proc)
 return;
 
-qemuProcessStopQmp(proc);
+/* This should never be non-NULL if we get here, but just in case... */
+if (proc->mon || proc->pid) {
+VIR_ERROR(_("Unexpected QEMU still active during process free"
+" emulator: %s, pid: %lld, mon: %p"),
+NULLSTR(proc->binary), (long long) proc->pid, proc->mon);
+qemuProcessStopQmp(proc);
+}
+
 VIR_FREE(proc->binary);
 VIR_FREE(proc->libDir);
 VIR_FREE(proc->monpath);
-- 
2.17.1



[libvirt] [PATCH v4 14/37] qemu_process: Introduce qemuProcessStartQmp

2018-11-02 Thread Chris Venteicher
Move a step closer to the function structure used elsewhere in
qemu_process where qemuProcessStart and qemuProcessStop are the exposed
functions.

qemuProcessStartQmp mirrors qemuProcessStart in calling sub-functions to
initialize, launch the process, and connect the monitor to the QEMU
process.

static functions qemuProcessInitQmp, qemuProcessLaunchQmp and
qemuConnectMonitorQmp are also introduced.

qemuProcessLaunchQmp is just renamed from qemuProcessRun and
encapsulates all of the original code.

qemuProcessInitQmp and qemuConnectMonitorQmp are introduced as empty
wrappers into which subsequent patches will partition code from
qemuProcessLaunchQmp.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c |  4 +-
 src/qemu/qemu_process.c  | 96 +++-
 src/qemu/qemu_process.h  |  2 +-
 3 files changed, 97 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 48b58ca02a..5cbc726bb9 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4285,7 +4285,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 goto cleanup;
 
 
-if (qemuProcessRun(proc) < 0)
+if (qemuProcessStartQmp(proc) < 0)
 goto cleanup;
 
 if (!(mon = QEMU_PROCESS_MONITOR(proc))) {
@@ -4304,7 +4304,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 forceTCG = true;
 proc_kvm = qemuProcessNew(qemuCaps->binary, libDir, runUid, runGid, forceTCG);
 
-if (qemuProcessRun(proc_kvm) < 0)
+if (qemuProcessStartQmp(proc_kvm) < 0)
 goto cleanup;
 
 if (!(mon_kvm = QEMU_PROCESS_MONITOR(proc_kvm))) {
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 140c657087..1d96a43206 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8145,8 +8145,29 @@ qemuProcessNew(const char *binary,
 }
 
 
-int
-qemuProcessRun(qemuProcessPtr proc){
+/* Initialize configuration and paths prior to starting QEMU
+ */
+static int
+qemuProcessInitQmp(qemuProcessPtr proc)
+{
+int ret = -1;
+
+VIR_DEBUG("Beginning VM startup process"
+  "emulator=%s",
+  proc->binary);
+
+ret = 0;
+
+VIR_DEBUG("ret=%i", ret);
+return ret;
+}
+
+
+/* Launch QEMU Process
+ */
+static int
+qemuProcessLaunchQmp(qemuProcessPtr proc)
+{
 virDomainXMLOptionPtr xmlopt = NULL;
 const char *machine;
 int status = 0;
@@ -8223,6 +8244,77 @@ qemuProcessRun(qemuProcessPtr proc){
 }
 
 
+/* Connect Monitor to QEMU Process
+ */
+static int
+qemuConnectMonitorQmp(qemuProcessPtr proc)
+{
+int ret = -1;
+
+VIR_DEBUG("proc=%p, emulator=%s, proc->pid=%lld",
+  proc, NULLSTR(proc->binary), (long long) proc->pid);
+
+ret = 0;
+
+VIR_DEBUG("ret=%i", ret);
+return ret;
+}
+
+
+/**
+ * qemuProcessStartQmp:
+ * @proc: Stores Process and Connection State
+ *
+ * Start and connect to QEMU binary so QMP queries can be made.
+ *
+ * Usage:
+ *   proc = qemuProcessNew(binary, forceTCG, libDir, runUid, runGid);
+ *   qemuProcessStartQmp(proc);
+ *   mon = QEMU_PROCESS_MONITOR(proc);
+ *   ** Send QMP Queries to QEMU using monitor **
+ *   qemuProcessStopQmp(proc);
+ *   qemuProcessFree(proc);
+ *
+ * Check monitor is not NULL before using.
+ *
+ * QEMU probing failure results in monitor being NULL but is not considered
+ * an error case and does not result in a negative return value.
+ *
+ * Both qemuProcessStopQmp and qemuProcessFree must be called to cleanup and
+ * free resources, even in activation failure cases.
+ *
+ * Process error output (proc->qmperr) remains available in qemuProcess struct
+ * until qemuProcessFree is called.
+ */
+int
+qemuProcessStartQmp(qemuProcessPtr proc)
+{
+int ret = -1;
+
+VIR_DEBUG("emulator=%s",
+  proc->binary);
+
+if (qemuProcessInitQmp(proc) < 0)
+goto cleanup;
+
+if (qemuProcessLaunchQmp(proc) < 0)
+goto stop;
+
+if (qemuConnectMonitorQmp(proc) < 0)
+goto stop;
+
+ret = 0;
+
+ cleanup:
+VIR_DEBUG("ret=%i", ret);
+return ret;
+
+ stop:
+qemuProcessStopQmp(proc);
+goto cleanup;
+}
+
+
 void
 qemuProcessStopQmp(qemuProcessPtr proc)
 {
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 37194e2501..c34ca52ef5 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -246,7 +246,7 @@ qemuProcessPtr qemuProcessNew(const char *binary,
 
 void qemuProcessFree(qemuProcessPtr proc);
 
-int qemuProcessRun(qemuProcessPtr proc);
+int qemuProcessStartQmp(qemuProcessPtr proc);
 
 void qemuProcessStopQmp(qemuProcessPtr proc);
 
-- 
2.17.1



[libvirt] [PATCH v4 21/37] qemu_process: Cleanup qemuProcess alloc function

2018-11-02 Thread Chris Venteicher
qemuProcessNew is one of the 4 public functions used to create and
manage a qemu process for QMP command exchanges outside of domain
operations.

Add a descriptive comment block and a debug statement, and make the source
consistent with the cleanup / VIR_STEAL_PTR format used elsewhere.

No functional changes are made.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 26 +-
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index af6b20713a..104ed58cea 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8094,6 +8094,18 @@ qemuProcessFree(qemuProcessPtr proc)
 }
 
 
+/**
+ * qemuProcessNew:
+ * @binary: Qemu binary
+ * @libDir: Directory for process and connection artifacts
+ * @runUid: UserId for Qemu Process
+ * @runGid: GroupId for Qemu Process
+ * @forceTCG: Force TCG mode if true
+ *
+ * Allocate and initialize domain structure encapsulating
+ * QEMU Process state and monitor connection to QEMU
+ * for completing QMP Queries.
+ */
 qemuProcessPtr
 qemuProcessNew(const char *binary,
const char *libDir,
@@ -8101,25 +8113,29 @@ qemuProcessNew(const char *binary,
gid_t runGid,
bool forceTCG)
 {
+qemuProcessPtr ret = NULL;
 qemuProcessPtr proc = NULL;
 
+VIR_DEBUG("exec=%s, libDir=%s, runUid=%u, runGid=%u, forceTCG=%d",
+  NULLSTR(binary), NULLSTR(libDir), runUid, runGid, forceTCG);
+
 if (VIR_ALLOC(proc) < 0)
-goto error;
+goto cleanup;
 
 if (VIR_STRDUP(proc->binary, binary) < 0 ||
 VIR_STRDUP(proc->libDir, libDir) < 0)
-goto error;
+goto cleanup;
 
 proc->forceTCG = forceTCG;
 
 proc->runUid = runUid;
 proc->runGid = runGid;
 
-return proc;
+VIR_STEAL_PTR(ret, proc);
 
- error:
+ cleanup:
 qemuProcessFree(proc);
-return NULL;
+return ret;
 }
 
 
-- 
2.17.1



[libvirt] [PATCH v4 12/37] qemu_process: Persist stderr in qemuProcess struct

2018-11-02 Thread Chris Venteicher
A qemuProcess struct tracks the entire lifespan of a single QEMU Process
including storing error output when the process terminates or activation
fails.

Error output remains available until qemuProcessFree is called.

The qmperr buffer no longer needs to be maintained outside of the
qemuProcess structure because the structure is used for a single QEMU
process and the structures can be maintained as long as required
to retrieve the process error info.

Capabilities init code is refactored but continues to report QEMU
process error data only when the initial (non KVM) QEMU process activation
fails to result in a usable monitor connection for retrieving
capabilities.

The VIR_ERR_INTERNAL_ERROR "Failed to probe QEMU binary" reporting is
moved into virQEMUCapsInitQMP function and the stderr string is
extracted from the qemuProcess struct using a macro to retrieve the
string.

The same error and log message should be generated, in the same
conditions, after this patch as before.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 27 ---
 src/qemu/qemu_process.c  | 12 
 src/qemu/qemu_process.h  |  6 --
 3 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 99a963912c..9a86838d8a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4271,8 +4271,7 @@ static int
 virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
const char *libDir,
uid_t runUid,
-   gid_t runGid,
-   char **qmperr)
+   gid_t runGid)
 {
 qemuProcessPtr proc = NULL;
 qemuProcessPtr proc_kvm = NULL;
@@ -4281,7 +4280,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 bool forceTCG = false;
 
 if (!(proc = qemuProcessNew(qemuCaps->binary, libDir,
-runUid, runGid, qmperr, forceTCG)))
+runUid, runGid, forceTCG)))
 goto cleanup;
 
 if ((rc = qemuProcessRun(proc)) != 0) {
@@ -4297,7 +4296,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 qemuProcessStopQmp(proc);
 
 forceTCG = true;
-proc_kvm = qemuProcessNew(qemuCaps->binary, libDir, runUid, runGid, NULL, forceTCG);
+proc_kvm = qemuProcessNew(qemuCaps->binary, libDir, runUid, runGid, forceTCG);
 
 if ((rc = qemuProcessRun(proc_kvm)) != 0) {
 if (rc == 1)
@@ -4312,6 +4311,13 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 ret = 0;
 
  cleanup:
+if (proc && !proc->mon) {
+char *err = QEMU_PROCESS_ERROR(proc) ? QEMU_PROCESS_ERROR(proc) : _("unknown error");
+
+virReportError(VIR_ERR_INTERNAL_ERROR,
+   _("Failed to probe QEMU binary with QMP: %s"), err);
+}
+
 qemuProcessStopQmp(proc);
 qemuProcessStopQmp(proc_kvm);
 qemuProcessFree(proc);
@@ -4351,7 +4357,6 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch,
 {
 virQEMUCapsPtr qemuCaps;
 struct stat sb;
-char *qmperr = NULL;
 
 if (!(qemuCaps = virQEMUCapsNew()))
 goto error;
@@ -4378,15 +4383,8 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch,
 goto error;
 }
 
-if (virQEMUCapsInitQMP(qemuCaps, libDir, runUid, runGid, &qmperr) < 0) {
-virQEMUCapsLogProbeFailure(binary);
-goto error;
-}
-
-if (!qemuCaps->usedQMP) {
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   _("Failed to probe QEMU binary with QMP: %s"),
-   qmperr ? qmperr : _("unknown error"));
+if (virQEMUCapsInitQMP(qemuCaps, libDir, runUid, runGid) < 0 ||
+!qemuCaps->usedQMP) {
 virQEMUCapsLogProbeFailure(binary);
 goto error;
 }
@@ -4405,7 +4403,6 @@ virQEMUCapsNewForBinaryInternal(virArch hostArch,
 }
 
  cleanup:
-VIR_FREE(qmperr);
 return qemuCaps;
 
  error:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index d017efad84..99b720b380 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8088,6 +8088,7 @@ qemuProcessFree(qemuProcessPtr proc)
 VIR_FREE(proc->monpath);
 VIR_FREE(proc->monarg);
 VIR_FREE(proc->pidfile);
+VIR_FREE(proc->qmperr);
 VIR_FREE(proc);
 }
 
@@ -8097,7 +8098,6 @@ qemuProcessNew(const char *binary,
const char *libDir,
uid_t runUid,
gid_t runGid,
-   char **qmperr,
bool forceTCG)
 {
 qemuProcessPtr proc = NULL;
@@ -8112,8 +8112,6 @@ qemuProcessNew(const char *binary,
 
 proc->runUid = runUid;
 proc->runGid = runGid;
-proc->qmperr = qmperr;
-
 
 /* the ".sock" sufix is important to avoid a possible clash with a qemu
  * domain called "capabilities"
@@ -8152,8 +8150,7 @@ qemuProcessNew(const char *binary,
  *  1 when probing QEMU failed
  */
 int
-qemuProcessRun(qemuProcessPtr proc)
-{
+qemuProcessRun(qemuProcessPtr proc){

[libvirt] [PATCH v4 11/37] qemu_process: Use qemuProcess struct for a single process

2018-11-02 Thread Chris Venteicher
In the new process code, move from a model where a qemuProcess struct can be
used to activate a series of QEMU processes to a model where one
qemuProcess struct is used for one and only one QEMU process.

The forceTCG parameter (use / don't use KVM) will be passed when the
qemuProcess struct is initialized since the qemuProcess struct won't be
reused.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 16 
 src/qemu/qemu_process.c  | 11 +++
 src/qemu/qemu_process.h  |  6 --
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 359b6fbb9b..99a963912c 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4275,14 +4275,16 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
char **qmperr)
 {
 qemuProcessPtr proc = NULL;
+qemuProcessPtr proc_kvm = NULL;
 int ret = -1;
 int rc;
+bool forceTCG = false;
 
 if (!(proc = qemuProcessNew(qemuCaps->binary, libDir,
-runUid, runGid, qmperr)))
+runUid, runGid, qmperr, forceTCG)))
 goto cleanup;
 
-if ((rc = qemuProcessRun(proc, false)) != 0) {
+if ((rc = qemuProcessRun(proc)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
@@ -4293,13 +4295,17 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
 qemuProcessStopQmp(proc);
-if ((rc = qemuProcessRun(proc, true)) != 0) {
+
+forceTCG = true;
+proc_kvm = qemuProcessNew(qemuCaps->binary, libDir, runUid, runGid, NULL, forceTCG);
+
+if ((rc = qemuProcessRun(proc_kvm)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
 }
 
-if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, proc->mon) < 0)
+if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, proc_kvm->mon) < 0)
 goto cleanup;
 }
 
@@ -4307,7 +4313,9 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 
  cleanup:
 qemuProcessStopQmp(proc);
+qemuProcessStopQmp(proc_kvm);
 qemuProcessFree(proc);
+qemuProcessFree(proc_kvm);
 return ret;
 }
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 70d73cd457..d017efad84 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8097,7 +8097,8 @@ qemuProcessNew(const char *binary,
const char *libDir,
uid_t runUid,
gid_t runGid,
-   char **qmperr)
+   char **qmperr,
+   bool forceTCG)
 {
 qemuProcessPtr proc = NULL;
 
@@ -8107,10 +8108,13 @@ qemuProcessNew(const char *binary,
 if (VIR_STRDUP(proc->binary, binary) < 0)
 goto error;
 
+proc->forceTCG = forceTCG;
+
 proc->runUid = runUid;
 proc->runGid = runGid;
 proc->qmperr = qmperr;
 
+
 /* the ".sock" sufix is important to avoid a possible clash with a qemu
  * domain called "capabilities"
  */
@@ -8148,15 +8152,14 @@ qemuProcessNew(const char *binary,
  *  1 when probing QEMU failed
  */
 int
-qemuProcessRun(qemuProcessPtr proc,
-   bool forceTCG)
+qemuProcessRun(qemuProcessPtr proc)
 {
 virDomainXMLOptionPtr xmlopt = NULL;
 const char *machine;
 int status = 0;
 int ret = -1;
 
-if (forceTCG)
+if (proc->forceTCG)
 machine = "none,accel=tcg";
 else
 machine = "none,accel=kvm:tcg";
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 25343c4592..ab2640ce7c 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -229,17 +229,19 @@ struct _qemuProcess {
 virDomainChrSourceDef config;
 pid_t pid;
 virDomainObjPtr vm;
+bool forceTCG;
 };
 
 qemuProcessPtr qemuProcessNew(const char *binary,
   const char *libDir,
   uid_t runUid,
   gid_t runGid,
-  char **qmperr);
+  char **qmperr,
+  bool forceTCG);
 
 void qemuProcessFree(qemuProcessPtr proc);
 
-int qemuProcessRun(qemuProcessPtr proc, bool forceTCG);
+int qemuProcessRun(qemuProcessPtr proc);
 
 void qemuProcessStopQmp(qemuProcessPtr proc);
 
-- 
2.17.1



[libvirt] [PATCH v4 19/37] qemu_process: Stop retaining Qemu Monitor config in qemuProcess

2018-11-02 Thread Chris Venteicher
The monitor config data is removed from the qemuProcess struct.

The monitor config data can be initialized immediately before call to
qemuMonitorOpen and does not need to be maintained after the call
because qemuMonitorOpen copies any strings it needs.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 9 +
 src/qemu/qemu_process.h | 1 -
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 70b0b61aef..70ca82dd55 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8237,13 +8237,14 @@ qemuConnectMonitorQmp(qemuProcessPtr proc)
 {
 int ret = -1;
 virDomainXMLOptionPtr xmlopt = NULL;
+virDomainChrSourceDef monConfig;
 
 VIR_DEBUG("proc=%p, emulator=%s, proc->pid=%lld",
   proc, NULLSTR(proc->binary), (long long) proc->pid);
 
-proc->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
-proc->config.data.nix.path = proc->monpath;
-proc->config.data.nix.listen = false;
+monConfig.type = VIR_DOMAIN_CHR_TYPE_UNIX;
+monConfig.data.nix.path = proc->monpath;
+monConfig.data.nix.listen = false;
 
 if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) ||
 !(proc->vm = virDomainObjNew(xmlopt)))
@@ -8251,7 +8252,7 @@ qemuConnectMonitorQmp(qemuProcessPtr proc)
 
 proc->vm->pid = proc->pid;
 
-if (!(proc->mon = qemuMonitorOpen(proc->vm, &proc->config, true, true,
+if (!(proc->mon = qemuMonitorOpen(proc->vm, &monConfig, true, true,
   0, &callbacks, NULL)))
 goto ignore;
 
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 1b8f743861..d1541d5407 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -227,7 +227,6 @@ struct _qemuProcess {
 char *pidfile;
 virCommandPtr cmd;
 qemuMonitorPtr mon;
-virDomainChrSourceDef config;
 pid_t pid;
 virDomainObjPtr vm;
 bool forceTCG;
-- 
2.17.1



[libvirt] [PATCH v4 10/37] qemu_capabilities: Stop QEMU process before freeing

2018-11-02 Thread Chris Venteicher
Follow the convention established in qemu_process of
1) alloc process structure
2) start process
3) use process
4) stop process
5) free process data structure

The process data structure persists after the process activation fails
or the process dies or is killed so stderr strings can be retrieved
until the process data structure is freed.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 1d83ebdbe0..359b6fbb9b 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4306,6 +4306,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 ret = 0;
 
  cleanup:
+qemuProcessStopQmp(proc);
 qemuProcessFree(proc);
 return ret;
 }
-- 
2.17.1



[libvirt] [PATCH v4 09/37] qemu_process: Use consistent name for stop process function

2018-11-02 Thread Chris Venteicher
s/qemuProcessAbort/qemuProcessStopQmp/ applied to change function name
used to stop QEMU processes in process code moved from
qemu_capabilities.

No functionality change.

The new name, qemuProcessStopQmp, is consistent with the existing
function qemuProcessStop used to stop Domain processes.

Qmp is added to the end of qemuProcessStop to differentiate between the
Domain and new non-domain version of the functions.  qemuProcessStartQmp
will be used in a future patch to mirror the qemuProcessStart function
with a non-domain equivalent.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 2 +-
 src/qemu/qemu_process.c  | 6 +++---
 src/qemu/qemu_process.h  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index cf09002523..1d83ebdbe0 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4292,7 +4292,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 goto cleanup;
 
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
-qemuProcessAbort(proc);
+qemuProcessStopQmp(proc);
 if ((rc = qemuProcessRun(proc, true)) != 0) {
 if (rc == 1)
 ret = 0;
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2d3e8cac7a..70d73cd457 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8083,7 +8083,7 @@ qemuProcessFree(qemuProcessPtr proc)
 if (!proc)
 return;
 
-qemuProcessAbort(proc);
+qemuProcessStopQmp(proc);
 VIR_FREE(proc->binary);
 VIR_FREE(proc->monpath);
 VIR_FREE(proc->monarg);
@@ -8219,7 +8219,7 @@ qemuProcessRun(qemuProcessPtr proc,
 
  cleanup:
 if (!proc->mon)
-qemuProcessAbort(proc);
+qemuProcessStopQmp(proc);
 virObjectUnref(xmlopt);
 
 return ret;
@@ -8231,7 +8231,7 @@ qemuProcessRun(qemuProcessPtr proc,
 
 
 void
-qemuProcessAbort(qemuProcessPtr proc)
+qemuProcessStopQmp(qemuProcessPtr proc)
 {
 if (proc->mon)
 virObjectUnlock(proc->mon);
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 161311d007..25343c4592 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -241,6 +241,6 @@ void qemuProcessFree(qemuProcessPtr proc);
 
 int qemuProcessRun(qemuProcessPtr proc, bool forceTCG);
 
-void qemuProcessAbort(qemuProcessPtr proc);
+void qemuProcessStopQmp(qemuProcessPtr proc);
 
 #endif /* __QEMU_PROCESS_H__ */
-- 
2.17.1



[libvirt] [PATCH v4 16/37] qemu_process: Collect monitor code in single function

2018-11-02 Thread Chris Venteicher
qemuMonitor code lives in qemuConnectMonitorQmp rather than in
qemuProcessNew and qemuProcessLaunchQmp.

This is consistent with existing structure in qemu_process.c where
qemuConnectMonitor function contains monitor code for domain process
activation.

Simple code moves in this patch.  Improvements in later patch.

The only intended functional change in this patch is that we don't move
(include) the code that initiates a process stop when monitor creation fails.

As the comments in qemuProcessStartQmp say, the client must always call
qemuProcessStopQmp and qemuProcessFree, even in error cases.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 41 -
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 758c8bed05..f109e6e19b 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8133,10 +8133,6 @@ qemuProcessNew(const char *binary,
 
 virPidFileForceCleanupPath(proc->pidfile);
 
-proc->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
-proc->config.data.nix.path = proc->monpath;
-proc->config.data.nix.listen = false;
-
 return proc;
 
  error:
@@ -8168,7 +8164,6 @@ qemuProcessInitQmp(qemuProcessPtr proc)
 static int
 qemuProcessLaunchQmp(qemuProcessPtr proc)
 {
-virDomainXMLOptionPtr xmlopt = NULL;
 const char *machine;
 int status = 0;
 int ret = -1;
@@ -8223,26 +8218,10 @@ qemuProcessLaunchQmp(qemuProcessPtr proc)
 goto ignore;
 }
 
-if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) ||
-!(proc->vm = virDomainObjNew(xmlopt)))
-goto cleanup;
-
-proc->vm->pid = proc->pid;
-
-if (!(proc->mon = qemuMonitorOpen(proc->vm, &proc->config, true, true,
- 0, &callbacks, NULL)))
-goto ignore;
-
-virObjectLock(proc->mon);
-
  ignore:
 ret = 0;
 
  cleanup:
-if (!proc->mon)
-qemuProcessStopQmp(proc);
-virObjectUnref(xmlopt);
-
 return ret;
 }
 
@@ -8253,13 +8232,33 @@ static int
 qemuConnectMonitorQmp(qemuProcessPtr proc)
 {
 int ret = -1;
+virDomainXMLOptionPtr xmlopt = NULL;
 
 VIR_DEBUG("proc=%p, emulator=%s, proc->pid=%lld",
   proc, NULLSTR(proc->binary), (long long) proc->pid);
 
+proc->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
+proc->config.data.nix.path = proc->monpath;
+proc->config.data.nix.listen = false;
+
+if (!(xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL)) ||
+!(proc->vm = virDomainObjNew(xmlopt)))
+goto cleanup;
+
+proc->vm->pid = proc->pid;
+
+if (!(proc->mon = qemuMonitorOpen(proc->vm, &proc->config, true, true,
+  0, &callbacks, NULL)))
+goto ignore;
+
+virObjectLock(proc->mon);
+
+ ignore:
 ret = 0;
 
+ cleanup:
 VIR_DEBUG("ret=%i", ret);
+virObjectUnref(xmlopt);
 return ret;
 }
 
-- 
2.17.1



[libvirt] [PATCH v4 08/37] qemu_process: Refer to proc not cmd in process code

2018-11-02 Thread Chris Venteicher
s/cmd/proc/ in process code imported from qemu_capabilities.

No functionality is changed.

Process code imported from qemu_capabilities was oriented around
starting a process to issue a single QMP command.

Future use cases target using a single process for multiple
different QMP commands.

Patch changes the variable naming to focus on the process not the command.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c |  14 ++--
 src/qemu/qemu_process.c  | 140 +--
 src/qemu/qemu_process.h  |   6 +-
 3 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 6f4d9139bf..cf09002523 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4274,7 +4274,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
gid_t runGid,
char **qmperr)
 {
-qemuProcessPtr cmd = NULL;
+qemuProcessPtr proc = NULL;
 int ret = -1;
 int rc;
 
@@ -4282,31 +4282,31 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 runUid, runGid, qmperr)))
 goto cleanup;
 
-if ((rc = qemuProcessRun(cmd, false)) != 0) {
+if ((rc = qemuProcessRun(proc, false)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
 }
 
-if (virQEMUCapsInitQMPMonitor(qemuCaps, cmd->mon) < 0)
+if (virQEMUCapsInitQMPMonitor(qemuCaps, proc->mon) < 0)
 goto cleanup;
 
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
-qemuProcessAbort(cmd);
-if ((rc = qemuProcessRun(cmd, true)) != 0) {
+qemuProcessAbort(proc);
+if ((rc = qemuProcessRun(proc, true)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
 }
 
-if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, cmd->mon) < 0)
+if (virQEMUCapsInitQMPMonitorTCG(qemuCaps, proc->mon) < 0)
 goto cleanup;
 }
 
 ret = 0;
 
  cleanup:
-qemuProcessFree(cmd);
+qemuProcessFree(proc);
 return ret;
 }
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index b93b3d6cf4..2d3e8cac7a 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8078,17 +8078,17 @@ static qemuMonitorCallbacks callbacks = {
 
 
 void
-qemuProcessFree(qemuProcessPtr cmd)
+qemuProcessFree(qemuProcessPtr proc)
 {
-if (!cmd)
+if (!proc)
 return;
 
-qemuProcessAbort(cmd);
-VIR_FREE(cmd->binary);
-VIR_FREE(cmd->monpath);
-VIR_FREE(cmd->monarg);
-VIR_FREE(cmd->pidfile);
-VIR_FREE(cmd);
+qemuProcessAbort(proc);
+VIR_FREE(proc->binary);
+VIR_FREE(proc->monpath);
+VIR_FREE(proc->monarg);
+VIR_FREE(proc->pidfile);
+VIR_FREE(proc);
 }
 
 
@@ -8099,25 +8099,25 @@ qemuProcessNew(const char *binary,
gid_t runGid,
char **qmperr)
 {
-qemuProcessPtr cmd = NULL;
+qemuProcessPtr proc = NULL;
 
-if (VIR_ALLOC(cmd) < 0)
+if (VIR_ALLOC(proc) < 0)
 goto error;
 
-if (VIR_STRDUP(cmd->binary, binary) < 0)
+if (VIR_STRDUP(proc->binary, binary) < 0)
 goto error;
 
-cmd->runUid = runUid;
-cmd->runGid = runGid;
-cmd->qmperr = qmperr;
+proc->runUid = runUid;
+proc->runGid = runGid;
+proc->qmperr = qmperr;
 
 /* the ".sock" sufix is important to avoid a possible clash with a qemu
  * domain called "capabilities"
  */
-if (virAsprintf(&cmd->monpath, "%s/%s", libDir,
+if (virAsprintf(&proc->monpath, "%s/%s", libDir,
 "capabilities.monitor.sock") < 0)
 goto error;
-if (virAsprintf(&cmd->monarg, "unix:%s,server,nowait", cmd->monpath) < 0)
+if (virAsprintf(&proc->monarg, "unix:%s,server,nowait", proc->monpath) < 0)
 goto error;
 
 /* ".pidfile" suffix is used rather than ".pid" to avoid a possible clash
@@ -8126,19 +8126,19 @@ qemuProcessNew(const char *binary,
  * -daemonize we need QEMU to be allowed to create them, rather
  * than libvirtd. So we're using libDir which QEMU can write to
  */
-if (virAsprintf(&cmd->pidfile, "%s/%s", libDir, "capabilities.pidfile") < 0)
+if (virAsprintf(&proc->pidfile, "%s/%s", libDir, "capabilities.pidfile") < 0)
 goto error;
 
-virPidFileForceCleanupPath(cmd->pidfile);
+virPidFileForceCleanupPath(proc->pidfile);
 
-cmd->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
-cmd->config.data.nix.path = cmd->monpath;
-cmd->config.data.nix.listen = false;
+proc->config.type = VIR_DOMAIN_CHR_TYPE_UNIX;
+proc->config.data.nix.path = proc->monpath;
+proc->config.data.nix.listen = false;
 
-return cmd;
+return proc;
 
  error:
-qemuProcessFree(cmd);
+qemuProcessFree(proc);
 return NULL;
 }
 
@@ -8148,7 +8148,7 @@ qemuProcessNew(const char *binary,
  *  1 when probing QEMU failed
  */
 int
-qemuProcessRun(qemuProcessPtr cmd,
+qemuProcessRun(qemuProcessPtr proc,
bool 

[libvirt] [PATCH v4 03/37] qemu_capabilities: Introduce virQEMUCapsMigratablePropsDiff

2018-11-02 Thread Chris Venteicher
Create an augmented CPUModelInfo identifying which props are/aren't
migratable based on a diff between migratable and non-migratable
inputs.

This patch pulls existing logic out of virQEMUCapsProbeQMPHostCPU
and wraps it in a standalone function, hopefully simplifying both
functions and making the inputs and outputs clearer.

The patch also sets cpuData->info = NULL to make sure bad data does not
remain in failure cases.

Q) Can the right people quickly determine if they should review this?

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 131 ---
 1 file changed, 92 insertions(+), 39 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index eb9dca05f8..c8e9b8f65d 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2355,14 +2355,91 @@ virQEMUCapsProbeQMPCPUDefinitions(virQEMUCapsPtr 
qemuCaps,
 }
 
 
+/* virQEMUCapsMigratablePropsDiff
+ * @migratable: input where all migratable props have value true
+ * and non-migratable and unsupported props have value false
+ *
+ * @nonMigratable: input where all migratable & non-migratable props
+ * have value true and unsupported props have value false
+ *
+ * @augmented: output including all props listed in @migratable but
+ * both migratable & non-migratable props have value true,
+ * unsupported props have value false,
+ * and prop->migratable is set to VIR_TRISTATE_BOOL_{YES/NO}
+ * for each supported prop
+ *
+ * Differences in expanded CPUModelInfo inputs @migratable and @nonMigratable are
+ * used to create output @augmented where individual props have prop->migratable
+ * set to indicate if prop is or isn't migratable.
+ */
+static int
+virQEMUCapsMigratablePropsDiff(qemuMonitorCPUModelInfoPtr migratable,
+   qemuMonitorCPUModelInfoPtr nonMigratable,
+   qemuMonitorCPUModelInfoPtr *augmented)
+{
+int ret = -1;
+qemuMonitorCPUModelInfoPtr tmp;
+qemuMonitorCPUPropertyPtr prop;
+qemuMonitorCPUPropertyPtr mProp;
+qemuMonitorCPUPropertyPtr nmProp;
+virHashTablePtr hash = NULL;
+size_t i;
+
+*augmented = NULL;
+
+if (!(tmp = qemuMonitorCPUModelInfoCopy(migratable)))
+goto cleanup;
+
+if (!nonMigratable)
+goto done;
+
+if (!(hash = virHashCreate(0, NULL)))
+goto cleanup;
+
+for (i = 0; i < tmp->nprops; i++) {
+prop = tmp->props + i;
+
+if (virHashAddEntry(hash, prop->name, prop) < 0)
+goto cleanup;
+}
+
+for (i = 0; i < nonMigratable->nprops; i++) {
+nmProp = nonMigratable->props + i;
+
+if (!(mProp = virHashLookup(hash, nmProp->name)) ||
+mProp->type != QEMU_MONITOR_CPU_PROPERTY_BOOLEAN ||
+mProp->type != nmProp->type)
+continue;  /* In non-migratable list but not in migratable list */
+
+if (mProp->value.boolean) {
+mProp->migratable = VIR_TRISTATE_BOOL_YES;
+} else if (nmProp->value.boolean) {
+mProp->value.boolean = true;
+mProp->migratable = VIR_TRISTATE_BOOL_NO;
+}
+}
+
+tmp->migratability = true;
+
+ done:
+VIR_STEAL_PTR(*augmented, tmp);
+ret = 0;
+
+ cleanup:
+qemuMonitorCPUModelInfoFree(tmp);
+virHashFree(hash);
+return ret;
+}
+
+
 static int
 virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
qemuMonitorPtr mon,
bool tcg)
 {
-qemuMonitorCPUModelInfoPtr modelInfo = NULL;
+qemuMonitorCPUModelInfoPtr migratable = NULL;
 qemuMonitorCPUModelInfoPtr nonMigratable = NULL;
-virHashTablePtr hash = NULL;
+qemuMonitorCPUModelInfoPtr augmented = NULL;
 const char *model;
 qemuMonitorCPUModelExpansionType type;
 virDomainVirtType virtType;
@@ -2381,6 +2458,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
 }
 
 cpuData = virQEMUCapsGetHostCPUData(qemuCaps, virtType);
+cpuData->info = NULL;
 
 /* Some x86_64 features defined in cpu_map.xml use spelling which differ
  * from the one preferred by QEMU. Static expansion would give us only the
@@ -2392,54 +2470,29 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
 else
 type = QEMU_MONITOR_CPU_MODEL_EXPANSION_STATIC;
 
-if (qemuMonitorGetCPUModelExpansion(mon, type, model, true, &modelInfo) < 0)
+if (qemuMonitorGetCPUModelExpansion(mon, type, model, true, &migratable) < 0)
+goto cleanup;
+
+if (!migratable) {
+ret = 0;  /* Qemu can't expand the model name, exit without error */
 goto cleanup;
+}
 
 /* Try to check migratability of each feature. */
-if (modelInfo &&
-qemuMonitorGetCPUModelExpansion(mon, type, model, false,
+if (qemuMonitorGetCPUModelExpansion(mon, type, model, false,
 ) < 0)
 goto cleanup;
 
-if (nonMigratable) {
-qemuMonitorCPUPropertyPtr 

[libvirt] [PATCH v4 04/37] qemu_monitor: qemuMonitorGetCPUModelExpansion inputs and outputs CPUModelInfo

2018-11-02 Thread Chris Venteicher
A Full CPUModelInfo structure with props is sent to QEMU for expansion.

virQEMUCapsProbeQMPHostCPU migratability logic partitioned into new function
for clarity.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c |  8 +---
 src/qemu/qemu_monitor.c  | 31 ++-
 src/qemu/qemu_monitor.h  |  5 +++--
 src/qemu/qemu_monitor_json.c | 16 +++-
 src/qemu/qemu_monitor_json.h |  6 +++---
 tests/cputest.c  | 11 ---
 6 files changed, 56 insertions(+), 21 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index c8e9b8f65d..b2bb2347fd 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2437,6 +2437,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
qemuMonitorPtr mon,
bool tcg)
 {
+qemuMonitorCPUModelInfoPtr input;
 qemuMonitorCPUModelInfoPtr migratable = NULL;
 qemuMonitorCPUModelInfoPtr nonMigratable = NULL;
 qemuMonitorCPUModelInfoPtr augmented = NULL;
@@ -2470,7 +2471,8 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
 else
 type = QEMU_MONITOR_CPU_MODEL_EXPANSION_STATIC;
 
-if (qemuMonitorGetCPUModelExpansion(mon, type, model, true, &migratable) < 0)
+if (!(input = qemuMonitorCPUModelInfoNew(model)) ||
+qemuMonitorGetCPUModelExpansion(mon, type, true, input, &migratable) < 0)
 goto cleanup;
 
 if (!migratable) {
@@ -2479,8 +2481,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
 }
 
 /* Try to check migratability of each feature. */
-if (qemuMonitorGetCPUModelExpansion(mon, type, model, false,
-&nonMigratable) < 0)
+if (qemuMonitorGetCPUModelExpansion(mon, type, false, input, &nonMigratable) < 0)
 goto cleanup;
 
 if (virQEMUCapsMigratablePropsDiff(migratable, nonMigratable, &augmented) < 0)
@@ -2490,6 +2491,7 @@ virQEMUCapsProbeQMPHostCPU(virQEMUCapsPtr qemuCaps,
 ret = 0;
 
  cleanup:
+qemuMonitorCPUModelInfoFree(input);
 qemuMonitorCPUModelInfoFree(migratable);
 qemuMonitorCPUModelInfoFree(nonMigratable);
 qemuMonitorCPUModelInfoFree(augmented);
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 801c072eff..4c0c2decf7 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3653,20 +3653,41 @@ qemuMonitorCPUDefInfoFree(qemuMonitorCPUDefInfoPtr cpu)
 }
 
 
+/**
+ * qemuMonitorGetCPUModelExpansion:
+ * @mon:
+ * @type: qemuMonitorCPUModelExpansionType
+ * @migratable: Prompt QEMU to include non-migratable props for X86 models if false
+ * @input: Input model
+ * @expansion: Expanded output model (or NULL if QEMU rejects model or request)
+ *
+ * Re-represent @input CPU props using a new CPUModelInfo constructed
+ * by naming a STATIC or DYNAMIC model cooresponding to a set of properties and
+ * a FULL or PARTIAL (only deltas from model) property list.
+ *
+ * if @type == QEMU_MONITOR_CPU_MODEL_EXPANSION_STATIC
+ *   construct @expansion using STATIC model name and a PARTIAL (delta) property list
+ *
+ * if @type == QEMU_MONITOR_CPU_MODEL_EXPANSION_FULL
+ *   construct @expansion using DYNAMIC model name and a FULL property list
+ *
+ * if @type == QEMU_MONITOR_CPU_MODEL_EXPANSION_STATIC_FULL
+ *   construct @expansion using STATIC model name and a FULL property list
+ */
 int
 qemuMonitorGetCPUModelExpansion(qemuMonitorPtr mon,
 qemuMonitorCPUModelExpansionType type,
-const char *model_name,
 bool migratable,
-qemuMonitorCPUModelInfoPtr *model_info)
+qemuMonitorCPUModelInfoPtr input,
+qemuMonitorCPUModelInfoPtr *expansion
+   )
 {
 VIR_DEBUG("type=%d model_name=%s migratable=%d",
-  type, model_name, migratable);
+  type, input->name, migratable);
 
 QEMU_CHECK_MONITOR(mon);
 
-return qemuMonitorJSONGetCPUModelExpansion(mon, type, model_name,
-   migratable, model_info);
+return qemuMonitorJSONGetCPUModelExpansion(mon, type, migratable, input, expansion);
 }
 
 
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 0a09590ed1..d3efd37099 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1036,9 +1036,10 @@ typedef enum {
 
 int qemuMonitorGetCPUModelExpansion(qemuMonitorPtr mon,
 qemuMonitorCPUModelExpansionType type,
-const char *model_name,
 bool migratable,
-qemuMonitorCPUModelInfoPtr *model_info);
+qemuMonitorCPUModelInfoPtr input,
+qemuMonitorCPUModelInfoPtr *expansion)
+ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(5);
 
 

[libvirt] [PATCH v4 07/37] qemu_process: Limit qemuProcessNew to const input strings

2018-11-02 Thread Chris Venteicher
Prevent compile errors due to trying to use a const string as a
non-const input to qemuProcessNew.

No functionality change.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_process.c | 2 +-
 src/qemu/qemu_process.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 2de1362cbb..b93b3d6cf4 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8093,7 +8093,7 @@ qemuProcessFree(qemuProcessPtr cmd)
 
 
 qemuProcessPtr
-qemuProcessNew(char *binary,
+qemuProcessNew(const char *binary,
const char *libDir,
uid_t runUid,
gid_t runGid,
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 5417cb416f..39a2368ce5 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -231,7 +231,7 @@ struct _qemuProcess {
 virDomainObjPtr vm;
 };
 
-qemuProcessPtr qemuProcessNew(char *binary,
+qemuProcessPtr qemuProcessNew(const char *binary,
   const char *libDir,
   uid_t runUid,
   gid_t runGid,
-- 
2.17.1



[libvirt] [PATCH v4 06/37] qemu_process: Use qemuProcess prefix

2018-11-02 Thread Chris Venteicher
s/virQEMUCapsInitQMPCommand/qemuProcess/

No functionality change.

Use appropriate prefix in moved code.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c | 14 +++---
 src/qemu/qemu_process.c  | 28 ++--
 src/qemu/qemu_process.h  | 22 +++---
 3 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index d5e63aca5c..6f4d9139bf 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -4274,15 +4274,15 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
gid_t runGid,
char **qmperr)
 {
-virQEMUCapsInitQMPCommandPtr cmd = NULL;
+qemuProcessPtr cmd = NULL;
 int ret = -1;
 int rc;
 
-if (!(cmd = virQEMUCapsInitQMPCommandNew(qemuCaps->binary, libDir,
- runUid, runGid, qmperr)))
+if (!(proc = qemuProcessNew(qemuCaps->binary, libDir,
+runUid, runGid, qmperr)))
 goto cleanup;
 
-if ((rc = virQEMUCapsInitQMPCommandRun(cmd, false)) != 0) {
+if ((rc = qemuProcessRun(cmd, false)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
@@ -4292,8 +4292,8 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 goto cleanup;
 
 if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_KVM)) {
-virQEMUCapsInitQMPCommandAbort(cmd);
-if ((rc = virQEMUCapsInitQMPCommandRun(cmd, true)) != 0) {
+qemuProcessAbort(cmd);
+if ((rc = qemuProcessRun(cmd, true)) != 0) {
 if (rc == 1)
 ret = 0;
 goto cleanup;
@@ -4306,7 +4306,7 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
 ret = 0;
 
  cleanup:
-virQEMUCapsInitQMPCommandFree(cmd);
+qemuProcessFree(cmd);
 return ret;
 }
 
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 62886fd910..2de1362cbb 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8078,12 +8078,12 @@ static qemuMonitorCallbacks callbacks = {
 
 
 void
-virQEMUCapsInitQMPCommandFree(virQEMUCapsInitQMPCommandPtr cmd)
+qemuProcessFree(qemuProcessPtr cmd)
 {
 if (!cmd)
 return;
 
-virQEMUCapsInitQMPCommandAbort(cmd);
+qemuProcessAbort(cmd);
 VIR_FREE(cmd->binary);
 VIR_FREE(cmd->monpath);
 VIR_FREE(cmd->monarg);
@@ -8092,14 +8092,14 @@ 
virQEMUCapsInitQMPCommandFree(virQEMUCapsInitQMPCommandPtr cmd)
 }
 
 
-virQEMUCapsInitQMPCommandPtr
-virQEMUCapsInitQMPCommandNew(char *binary,
- const char *libDir,
- uid_t runUid,
- gid_t runGid,
- char **qmperr)
+qemuProcessPtr
+qemuProcessNew(char *binary,
+   const char *libDir,
+   uid_t runUid,
+   gid_t runGid,
+   char **qmperr)
 {
-virQEMUCapsInitQMPCommandPtr cmd = NULL;
+qemuProcessPtr cmd = NULL;
 
 if (VIR_ALLOC(cmd) < 0)
 goto error;
@@ -8138,7 +8138,7 @@ virQEMUCapsInitQMPCommandNew(char *binary,
 return cmd;
 
  error:
-virQEMUCapsInitQMPCommandFree(cmd);
+qemuProcessFree(cmd);
 return NULL;
 }
 
@@ -8148,8 +8148,8 @@ virQEMUCapsInitQMPCommandNew(char *binary,
  *  1 when probing QEMU failed
  */
 int
-virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr cmd,
- bool forceTCG)
+qemuProcessRun(qemuProcessPtr cmd,
+   bool forceTCG)
 {
 virDomainXMLOptionPtr xmlopt = NULL;
 const char *machine;
@@ -8219,7 +8219,7 @@ virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr 
cmd,
 
  cleanup:
 if (!cmd->mon)
-virQEMUCapsInitQMPCommandAbort(cmd);
+qemuProcessAbort(cmd);
 virObjectUnref(xmlopt);
 
 return ret;
@@ -8231,7 +8231,7 @@ virQEMUCapsInitQMPCommandRun(virQEMUCapsInitQMPCommandPtr 
cmd,
 
 
 void
-virQEMUCapsInitQMPCommandAbort(virQEMUCapsInitQMPCommandPtr cmd)
+qemuProcessAbort(qemuProcessPtr cmd)
 {
 if (cmd->mon)
 virObjectUnlock(cmd->mon);
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 4ba3988e3d..5417cb416f 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -214,9 +214,9 @@ int qemuProcessStartManagedPRDaemon(virDomainObjPtr vm);
 
 void qemuProcessKillManagedPRDaemon(virDomainObjPtr vm);
 
-typedef struct _virQEMUCapsInitQMPCommand virQEMUCapsInitQMPCommand;
-typedef virQEMUCapsInitQMPCommand *virQEMUCapsInitQMPCommandPtr;
-struct _virQEMUCapsInitQMPCommand {
+typedef struct _qemuProcess qemuProcess;
+typedef qemuProcess *qemuProcessPtr;
+struct _qemuProcess {
 char *binary;
 uid_t runUid;
 gid_t runGid;
@@ -231,16 +231,16 @@ struct _virQEMUCapsInitQMPCommand {
 virDomainObjPtr vm;
 };
 
-virQEMUCapsInitQMPCommandPtr virQEMUCapsInitQMPCommandNew(char *binary,
-  const char *libDir,
-  

[libvirt] [PATCH v4 01/37] qemu_monitor: Introduce qemuMonitorCPUModelInfoNew

2018-11-02 Thread Chris Venteicher
A helper function allocates and initializes the model name
in qemuMonitorCPUModelInfo structs.

Signed-off-by: Chris Venteicher 
---
 src/qemu/qemu_capabilities.c |  2 +-
 src/qemu/qemu_monitor.c  | 32 +++-
 src/qemu/qemu_monitor.h  |  2 ++
 3 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 2ca5af3297..eb9dca05f8 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3024,7 +3024,7 @@ virQEMUCapsLoadHostCPUModelInfo(virQEMUCapsPtr qemuCaps,
 goto cleanup;
 }
 
-if (VIR_ALLOC(hostCPU) < 0)
+if (!(hostCPU = qemuMonitorCPUModelInfoNew(NULL)))
 goto cleanup;
 
 if (!(hostCPU->name = virXMLPropString(hostCPUNode, "model"))) {
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 7f7013e115..45a4568fcc 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -3670,6 +3670,31 @@ qemuMonitorGetCPUModelExpansion(qemuMonitorPtr mon,
 }
 
 
+qemuMonitorCPUModelInfoPtr
+qemuMonitorCPUModelInfoNew(const char *name)
+{
+qemuMonitorCPUModelInfoPtr ret = NULL;
+qemuMonitorCPUModelInfoPtr model;
+
+if (VIR_ALLOC(model) < 0)
+return NULL;
+
+model->name = NULL;
+model->nprops = 0;
+model->props = NULL;
+model->migratability = false;
+
+if (VIR_STRDUP(model->name, name) < 0)
+goto cleanup;
+
+VIR_STEAL_PTR(ret, model);
+
+ cleanup:
+qemuMonitorCPUModelInfoFree(model);
+return ret;
+}
+
+
 void
 qemuMonitorCPUModelInfoFree(qemuMonitorCPUModelInfoPtr model_info)
 {
@@ -3693,18 +3718,15 @@ qemuMonitorCPUModelInfoFree(qemuMonitorCPUModelInfoPtr 
model_info)
 qemuMonitorCPUModelInfoPtr
 qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig)
 {
-qemuMonitorCPUModelInfoPtr copy;
+qemuMonitorCPUModelInfoPtr copy = NULL;
 size_t i;
 
-if (VIR_ALLOC(copy) < 0)
+if (!orig || !(copy = qemuMonitorCPUModelInfoNew(orig->name)))
 goto error;
 
 if (VIR_ALLOC_N(copy->props, orig->nprops) < 0)
 goto error;
 
-if (VIR_STRDUP(copy->name, orig->name) < 0)
-goto error;
-
 copy->migratability = orig->migratability;
 copy->nprops = orig->nprops;
 
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 48b142a4f4..d87b5a4ec0 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -1042,6 +1042,8 @@ int qemuMonitorGetCPUModelExpansion(qemuMonitorPtr mon,
 
 void qemuMonitorCPUModelInfoFree(qemuMonitorCPUModelInfoPtr model_info);
 
+qemuMonitorCPUModelInfoPtr qemuMonitorCPUModelInfoNew(const char *name);
+
 qemuMonitorCPUModelInfoPtr
 qemuMonitorCPUModelInfoCopy(const qemuMonitorCPUModelInfo *orig);
 
-- 
2.17.1



[libvirt] [PATCH v4 00/37] BaselineHypervisorCPU using QEMU QMP exchanges

2018-11-02 Thread Chris Venteicher
Some architectures (S390) depend on QEMU to compute a baseline CPU model and
to expand a model's feature set.

Interacting with QEMU requires starting the QEMU process and completing one or
more query-cpu-model-baseline QMP exchanges with QEMU in addition to a
query-cpu-model-expansion QMP exchange to expand all features in the model.

See "s390x CPU models: exposing features" patch set on Qemu-devel for discussion
of QEMU aspects.
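
As a rough sketch of what one of these exchanges looks like on the wire (the
command and argument names follow the QEMU QMP schema; the model and property
names below are placeholders only):

  -> { "execute": "query-cpu-model-expansion",
       "arguments": { "type": "full",
                      "model": { "name": "z13-base" } } }
  <- { "return": { "model": { "name": "z13-base",
                              "props": { "xxx": true, "yyy": false } } } }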

This is part of resolution of: 
https://bugzilla.redhat.com/show_bug.cgi?id=1511999

Version 4 attempts to address all issues from v1, v2, and v3, including making
the QEMU process activation for QMP queries generic (not specific to
capabilities) and moving that code from qemu_capabilities to qemu_process.

Version 4 greatly expands the number of patches to make the set easier
to review.

The patches to do hypervisor baseline using QEMU are primarily in
qemu_driver.c

The patches to make the process code generic (not capabilities specific)
are mostly in qemu_process.c

Many of the patches are cut / paste of existing code.  The patches that
change functionality are as modular and minimal as possible to make
reviewing easier.

I attempted to make the new qemu_process functions
consistent with the existing domain activation qemu_process functions.

A thread-safe library function creates a unique directory under libDir for each
QEMU process (for QMP messaging) to decouple processes in terms of sockets and
file system footprint.
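
As a very rough sketch of that idea (the function name and error message below
are made up for illustration; the real helper in qemu_process.c may differ),
the per-process directory can be created along these lines:

  /* Sketch only: create a unique scratch directory for one QMP-only
   * QEMU process.  mkdtemp() is atomic, so concurrent callers never
   * collide on the same path. */
  static char *
  qemuProcessQMPScratchDirNew(const char *libDir)
  {
      char *dir = NULL;

      if (virAsprintf(&dir, "%s/qmp-XXXXXX", libDir) < 0)
          return NULL;

      if (!mkdtemp(dir)) {
          virReportSystemError(errno,
                               _("Failed to create directory %s"), dir);
          VIR_FREE(dir);
          return NULL;
      }

      return dir;
  }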

The last patch is based on past discussion of QEMU cpu expansion only
returning migratable features, except for one x86 case where non-migratable
features are explicitly requested.  The patch records that features are
migratable based on QEMU only returning migratable features.  The main result
is that the CPU XML for S390 CPUs contains the same migratability info that
X86 currently does.  The testcases were updated to reflect the change.

Every patch should compile independently if applied in sequence.

Thanks,
Chris


Chris Venteicher (37):
  qemu_monitor: Introduce qemuMonitorCPUModelInfoNew
  qemu_monitor: Introduce qemuMonitorCPUModelInfo / JSON conversion
  qemu_capabilities: Introduce virQEMuCapsMigratablePropsDiff
  qemu_monitor: qemuMonitorGetCPUModelExpansion inputs and outputs
CPUModelInfo
  qemu_process: Move process code from qemu_capabilities to qemu_process
  qemu_process: Use qemuProcess prefix
  qemu_process: Limit qemuProcessNew to const input strings
  qemu_process: Refer to proc not cmd in process code
  qemu_process: Use consistent name for stop process function
  qemu_capabilities: Stop QEMU process before freeing
  qemu_process: Use qemuProcess struct for a single process
  qemu_process: Persist stderr in qemuProcess struct
  qemu_capabilities: Detect caps probe failure by checking monitor ptr
  qemu_process: Introduce qemuProcessStartQmp
  qemu_process: Add debug message in qemuProcessLaunchQmp
  qemu_process: Collect monitor code in single function
  qemu_process: Store libDir in qemuProcess struct
  qemu_process: Setup paths within qemuProcessInitQmp
  qemu_process: Stop retaining Qemu Monitor config in qemuProcess
  qemu_process: Don't open monitor if process failed
  qemu_process: Cleanup qemuProcess alloc function
  qemu_process: Cleanup qemuProcessStopQmp function
  qemu_process: Catch process free before process stop
  qemu_monitor: Make monitor callbacks optional
  qemu_process: Enter QMP command mode when starting QEMU Process
  qemu_process: Use unique directories for QMP processes
  qemu_process: Stop locking QMP process monitor immediately
  qemu_capabilities: Introduce CPUModelInfo to virCPUDef function
  qemu_capabilities: Introduce virCPUDef to CPUModelInfo function
  qemu_monitor: Support query-cpu-model-baseline QMP command
  qemu_driver: Consolidate code to baseline using libvirt
  qemu_driver: Decouple code for baseline using libvirt
  qemu_driver: Identify using libvirt as a distinct way to compute
baseline
  qemu_driver: Support baseline calculation using QEMU
  qemu_driver: Support feature expansion via QEMU when baselining cpu
  qemu_driver: Remove unsupported props in expanded hypervisor baseline
output
  qemu_monitor: Default props to migratable when expanding cpu model

 src/qemu/qemu_capabilities.c  | 567 --
 src/qemu/qemu_capabilities.h  |   4 +
 src/qemu/qemu_driver.c| 219 ++-
 src/qemu/qemu_monitor.c   | 184 +-
 src/qemu/qemu_monitor.h   |  29 +-
 src/qemu/qemu_monitor_json.c  | 226 +--
 src/qemu/qemu_monitor_json.h  |  12 +-
 src/qemu/qemu_process.c   | 359 +++
 src/qemu/qemu_process.h   |  37 ++
 tests/cputest.c   |  11 +-
 .../caps_2.10.0.s390x.xml |  60 +-
 .../caps_2.11.0.s390x.xml |  58 +-
 .../caps_2.12.0.s390x.xml 

[libvirt] [RFC PATCH] Add new migration flag VIR_MIGRATE_DRY_RUN

2018-11-02 Thread Jim Fehlig
A dry run can be used as a best-effort check that a migration command
will succeed. The destination host will be checked to see if it can
accommodate the resources required by the domain. DRY_RUN will fail if
the destination host is not capable of running the domain. Although a
subsequent migration will likely succeed, the success of DRY_RUN does not
ensure a future migration will succeed. Resources on the destination host
could become unavailable between a DRY_RUN and actual migration.

Signed-off-by: Jim Fehlig 
---

If it is agreed this is useful, my thought was to use the begin and
prepare phases of migration to implement it. qemuMigrationDstPrepareAny()
already does a lot of the heavy lifting wrt checking the host can
accommodate the domain. Some of it, and the remaining migration phases,
can be short-circuited in the case of dry run.

One interesting wrinkle I've observed is the check for cpu compatibility.
AFAICT qemu is actually invoked on the dst, "filtered-features" of the cpu
are requested via qmp, and results are checked against cpu in domain config.
If cpu on dst is insufficient, migration fails in the prepare phase with
something like "guest CPU doesn't match specification: missing features: z y z".
I was hoping to avoid launching qemu in the case of dry run, but that may
be unavoidable if we'd like a dependable dry run result.

Thanks for considering the idea!

(BTW, if it is considered useful I will follow up with a V1 series that
includes this patch and an impl for the qemu driver.)
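
Just to illustrate the intended client-side usage: a caller would do nothing
more than OR the new flag into an existing migrate call, along these lines
(sketch only, assuming the flag is accepted roughly as proposed; the URI is an
example):

  /* Probe whether a live migration of dom to dst is likely to succeed. */
  unsigned int flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                       VIR_MIGRATE_DRY_RUN;

  if (virDomainMigrateToURI3(dom, "qemu+ssh://dst.example.com/system",
                             NULL, 0, flags) < 0) {
      /* Dry run failed: the destination cannot accommodate the domain. */
  }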

 include/libvirt/libvirt-domain.h | 12 
 src/qemu/qemu_migration.h|  3 ++-
 tools/virsh-domain.c |  7 +++
 tools/virsh.pod  | 10 +-
 4 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index fdd2d6b8ea..6d52f6ce50 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -830,6 +830,18 @@ typedef enum {
  */
 VIR_MIGRATE_TLS   = (1 << 16),
 
+/* Setting the VIR_MIGRATE_DRY_RUN flag will cause libvirt to make a
+ * best-effort attempt to check if migration will succeed. The destination
+ * host will be checked to see if it can accommodate the resources required
+ * by the domain. For example are the network, disk, memory, and CPU
+ * resources used by the domain on the source host also available on the
+ * destination host. The dry run will fail if libvirt determines the
+ * destination host is not capable of running the domain. Although a
+ * subsequent migration will likely succeed, the success of dry run does
+ * not ensure a future migration will succeed. Resources on the destination
+ * host could become unavailable between a dry run and actual migration.
+ */
+VIR_MIGRATE_DRY_RUN   = (1 << 17),
 } virDomainMigrateFlags;
 
 
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index e12b6972db..b0e2bc689b 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -57,7 +57,8 @@
  VIR_MIGRATE_AUTO_CONVERGE | \
  VIR_MIGRATE_RDMA_PIN_ALL | \
  VIR_MIGRATE_POSTCOPY | \
- VIR_MIGRATE_TLS)
+ VIR_MIGRATE_TLS | \
+ VIR_MIGRATE_DRY_RUN)
 
 /* All supported migration parameters and their types. */
 # define QEMU_MIGRATION_PARAMETERS \
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 372bdb95d3..46f0f44917 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -10450,6 +10450,10 @@ static const vshCmdOptDef opts_migrate[] = {
  .type = VSH_OT_BOOL,
  .help = N_("use TLS for migration")
 },
+{.name = "dry-run",
+ .type = VSH_OT_BOOL,
+ .help = N_("check if migration will succeed without actually performing 
the migration")
+},
 {.name = NULL}
 };
 
@@ -10694,6 +10698,9 @@ doMigrate(void *opaque)
 if (vshCommandOptBool(cmd, "tls"))
 flags |= VIR_MIGRATE_TLS;
 
+if (vshCommandOptBool(cmd, "dry-run"))
+flags |= VIR_MIGRATE_DRY_RUN;
+
 if (flags & VIR_MIGRATE_PEER2PEER || vshCommandOptBool(cmd, "direct")) {
 if (virDomainMigrateToURI3(dom, desturi, params, nparams, flags) == 0)
 ret = '0';
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 86c041d575..715fa3887f 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -1845,7 +1845,7 @@ I I [I] [I] 
[I] [I] [I<--comp-methods> B]
 [I<--comp-mt-level>] [I<--comp-mt-threads>] [I<--comp-mt-dthreads>]
 [I<--comp-xbzrle-cache>] [I<--auto-converge>] [I]
-[I] [I<--persistent-xml> B] [I<--tls>]
+[I] [I<--persistent-xml> B] [I<--tls>] [I<--dry-run>]
 
 Migrate domain to another host.  Add I<--live> for live migration; <--p2p>
 for peer-2-peer migration; I<--direct> for direct migration; or I<--tunnelled>
@@ -1937,6 +1937,14 @@ Providing I<--tls> causes the migration to use the host configured TLS setup
 the migration of the domain. Usage requires proper TLS setup for both source
 and 

Re: [libvirt] [tck PATCH 1/3] Add tests for virtual network <-> guest connections

2018-11-02 Thread Laine Stump
On 11/2/18 11:52 AM, Daniel P. Berrangé wrote:
> Signed-off-by: Daniel P. Berrangé 
> ---
>  lib/Sys/Virt/TCK.pm   | 33 
>  lib/Sys/Virt/TCK/NetworkBuilder.pm| 33 +++-
>  scripts/networks/300-guest-network-isolated.t | 82 ++
>  scripts/networks/310-guest-network-nat.t  | 83 +++
>  scripts/networks/320-guest-network-route.t| 83 +++
>  scripts/networks/330-guest-network-open.t | 83 +++
>  scripts/networks/340-guest-network-bridge.t   | 79 ++
>  scripts/networks/350-guest-network-private.t  | 81 ++
>  scripts/networks/360-guest-network-vepa.t | 81 ++
>  .../networks/370-guest-network-passthrough.t  | 81 ++
>  scripts/networks/380-guest-network-hostdev.t  | 82 ++
>  t/080-network-builder.t   |  2 +-
>  12 files changed, 800 insertions(+), 3 deletions(-)
>  create mode 100644 scripts/networks/300-guest-network-isolated.t
>  create mode 100644 scripts/networks/310-guest-network-nat.t
>  create mode 100644 scripts/networks/320-guest-network-route.t
>  create mode 100644 scripts/networks/330-guest-network-open.t
>  create mode 100644 scripts/networks/340-guest-network-bridge.t
>  create mode 100644 scripts/networks/350-guest-network-private.t
>  create mode 100644 scripts/networks/360-guest-network-vepa.t
>  create mode 100644 scripts/networks/370-guest-network-passthrough.t
>  create mode 100644 scripts/networks/380-guest-network-hostdev.t


You added all these new tests, but forgot that you made the MANIFEST
file static/manually edited in commit a140e4f61 back in May, so the new
tests don't get installed.


Also, the tests don't actually boot up an OS and try to pass traffic; I
guess that's something you're counting on someone adding later? :-)


(Same comment for adding the proper iptables rules, but we need to
consider what to do about that, because what gets added will in the
*very near* future be different depending on version of libvirt and/or
whether or not firewalld is using nftables).


Those last two shouldn't keep you from pushing the patch though (see my
list at the end for the few things that do need to be changed before
pushing)


>
> diff --git a/lib/Sys/Virt/TCK.pm b/lib/Sys/Virt/TCK.pm
> index 04244bd..9457836 100644
> --- a/lib/Sys/Virt/TCK.pm
> +++ b/lib/Sys/Virt/TCK.pm
> @@ -29,6 +29,7 @@ use File::Path qw(mkpath);
>  use File::Spec::Functions qw(catfile catdir rootdir);
>  use Cwd qw(cwd);
>  use LWP::UserAgent;
> +use IO::Interface::Simple;
>  use IO::Uncompress::Gunzip qw(gunzip);
>  use IO::Uncompress::Bunzip2 qw(bunzip2);
>  use XML::XPath;
> @@ -1285,6 +1286,38 @@ sub get_ip_from_leases{
>  }
>  
>  
> +sub find_free_ipv4_subnet {
> +my $index;
> +
> +my %used;
> +
> +foreach my $iface (IO::Interface::Simple->interfaces()) {


This works to eliminate conflicts with active interfaces, but doesn't
take into account any routes that have the same destination
address/prefix as the desired network. For example, you may have a
static route for 192.168.125.0/25 pointing to another host that has a
routed virtual network with that address. If that's the case, the test
will fail.


I don't have a quick alternative at hand to suggest though, and the
likelihood of a host having a static route that conflicts with the
lowest numbered available 192.168.blah.0/24 network is probably pretty
low, so I'm going to overlook this :-)


> + if ($iface->netmask eq "255.255.255.0" &&
> + $iface->address =~ /^192.168.(\d+).\d+/) {
> + $used{"$1"} = 1;
> + print "Used $1\n";
> + } else {
> + print "Not used ", $iface->address, "\n";
> + }
> +}
> +
> +for (my $i = 1; $i < 255; $i++) {
> + if (!exists $used{"$i"}) {
> + $index = $i;
> + last;
> + }
> +}
> +
> +return () unless defined $index;
> +
> +return (
> + address => "192.168.$index.1",
> + netmask => "255.255.255.0",
> + dhcpstart => "192.168.$index.100",
> + dhcpend => "192.168.$index.200"
> + );
> +}
> +
>  sub shutdown_vm_gracefully {
>  my $dom = shift;
>  
> diff --git a/lib/Sys/Virt/TCK/NetworkBuilder.pm 
> b/lib/Sys/Virt/TCK/NetworkBuilder.pm
> index 09ca6b7..ad0cab8 100644
> --- a/lib/Sys/Virt/TCK/NetworkBuilder.pm
> +++ b/lib/Sys/Virt/TCK/NetworkBuilder.pm
> @@ -61,6 +61,22 @@ sub forward {
>  return $self;
>  }
>  
> +sub interfaces {
> +my $self = shift;
> +
> +$self->{interfaces} = [@_];
> +
> +return $self;
> +}
> +
> +sub host_devices {
> +my $self = shift;
> +
> +$self->{host_devices} = [@_];
> +
> +return $self;
> +}
> +
>  sub ipaddr {
>  my $self = shift;
>  my $address = shift;
> @@ -98,8 +114,21 @@ sub as_xml {
>  $w->emptyTag("bridge", %{$self->{bridge}})
>  if $self->{bridge};
>  
> -$w->emptyTag("forward", %{$self->{forward}})
> -if 

Re: [libvirt] [tck PATCH 2/3] Fix incorrect warning about deleting everything

2018-11-02 Thread Laine Stump
On 11/2/18 11:52 AM, Daniel P. Berrangé wrote:
> The code does not in fact delete everything on the host, only things
> whose name starts with a "tck" prefix.


Yep! Multiple people have asked me about that, and I had to allay their
fears (I was actually worried about it for a long time, then one day
realized that I'd accidentally run the tck on a host with lots of
guests/networks, and "surprise!" nothing bad happened :-)


Reviewed-by: Laine Stump 

>
> Signed-off-by: Daniel P. Berrangé 
> ---
>  bin/libvirt-tck | 36 ++--
>  1 file changed, 18 insertions(+), 18 deletions(-)
>
> diff --git a/bin/libvirt-tck b/bin/libvirt-tck
> index 816234b..d5519dd 100644
> --- a/bin/libvirt-tck
> +++ b/bin/libvirt-tck
> @@ -38,20 +38,22 @@ test suite for libvirt drivers.
>  
>  =head2 WARNING
>  
> -There now follows a few words of warning
> +The test suite is intended to be moderately safe to run on arbitrary
> +hosts and takes steps to avoid intentionally breaking things.
>  
> -The test suite needs to have a completely 'clean' initial
> -starting state. If your host already has virtual machines
> -defined and/or running this will cause problems. The test
> -suite will detect this and refuse to run, allowing you to
> -remove any preexisting guests. Alternatively you can pass
> -the --force option and libvirt will DELETE EVERYTHING it
> -finds.
> +All objects (guests, networks, storage pools, etc) that are created
> +will  have a name prefix of "tck" to minimize risk of clashes. The
> +test suite will only ever (intentionally) delete objects with a
> +"tck" name prefix.
>  
> -To repeat: ALL YOUR EXISTING DOMAINS, NETWORKS, STORAGE POOLS
> -WILL BE DELETED IF YOU USE THE --force OPTION.
> +Where a test suite needs access to a precious host resource (physical
> +NIC, PCI device, USB device, block device), execution will be skipped
> +until the admin has white listed one or more suitable resources in
> +the C configuration file.
>  
> -The warning is now complete, continue reading
> +Despite these precautions, running automated tests always carries some
> +degree of risk to the host system. It is thus advisable to avoid
> +executing this test suite on hosts with precious state.
>  
>  =head2 OPTIONS
>  
> @@ -90,13 +92,11 @@ C option generates a formal XML document of results.
>  
>  =item --force
>  
> -Forcably remove all running guest domains and all persistent guest
> -domain configuration files before running any tests. The test suite
> -requires a pristine install, so all existing managed objects must
> -be removed before running. This switch will instruct libvirt-tck
> -to automatically remove all guest domains. YOU WILL NOT GET YOUR
> -EXISTING GUEST DOMAINS BACK IF THIS HAPPENS. THEY WILL BE GONE
> -FOREVER. USE AT YOUR OWN RISK.
> +Forcably remove all previously created objects, including guests,
> +networks, storage pools, etc which have a "tck" name prefix.
> +
> +User created objects whose name does not start with "tck" will be
> +left untouched.
>  
>  =item -t, --testdir PATH
>  




pEpkey.asc
Description: application/pgp-keys
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH v1 2/7] pcihp: overwrite hotplug handler recursively from the start

2018-11-02 Thread Michael S. Tsirkin
On Fri, Nov 02, 2018 at 02:00:32PM +0100, Igor Mammedov wrote:
> On Fri, 2 Nov 2018 12:43:10 +0100
> David Hildenbrand  wrote:
> 
> > On 01.11.18 15:10, Igor Mammedov wrote:
> > > On Wed, 24 Oct 2018 12:19:25 +0200
> > > David Hildenbrand  wrote:
> > >   
> > >> For now, the hotplug handler is not called for devices that are
> > >> being cold plugged. The hotplug handler is setup when the machine
> > >> initialization is fully done. Only bridges that were cold plugged are
> > >> considered.
> > >>
> > >> Set the hotplug handler for the root piix bus directly when realizing.
> > >> Overwrite the hotplug handler of bridges when hotplugging/coldplugging
> > >> them.
> > >>
> > >> This will now make sure that the ACPI PCI hotplug handler is also called
> > >> for cold-plugged devices (also on bridges) and for bridges that were
> > >> hotplugged.
> > >>
> > >> When trying to hotplug a device to a hotplugged bridge, we now correctly
> > >> get the error message
> > >>  "Unsupported bus. Bus doesn't have property 'acpi-pcihp-bsel' set"
> > >> Instead of going via the standard PCI hotplug handler.  
> > > Erroring out is probably not ok, since it can break existing setups
> > > where SHPC hotplugging to hotplugged bridge was working just fine before. 
> > >  
> > 
> > The question is if it actually was supposed (and eventually did) work.
> I think it works now; it's a limitation of QEMU's 'ACPI hotplug hack' (which
> exists for the sake of Windows). We weren't able to dynamically add an
> ACPI description for a hotplugged bridge, so it was using native hotplug.
> Now theoretically we can load tables dynamically, but that would add a
> maintenance nightmare (versioned tables) and would be harder to debug.
> I'd rather not go that direction and would keep the current limited version,
> suggesting users use native hotplug if the guest is capable.

Well a bunch of tables need to be dynamic, and generating them from ACPI
isn't a significant step up from generating them in the BIOS which did
create huge headaches, for many reasons but in particular because we
need to add custom interfaces for every little thing we are adding.
By comparison dynamic loading is a single interface and we can
ship any AML code we want across it.

So I'm working on a limited form of dynamic loading with versioning and
I don't necessarily agree it has to be a nightmare, but yes it does need
to be limited very carefully. Implementing bridge hotplug there
isn't in scope for me at this point.

> > If this was the expected behavior (mixing hotplug types), then the
> > necessary change to this patch would boil down to checking if the bridge
> > it hot or coldplugged.
> > 
> > > 
> > > Marcel/Michael what's your take on this change in behaviour?
> > > CCing libvirt in case they are doing this stuff
> > >   
> > 
> > Indeed, it would be nice to know if this was actually supposed to work
> > like this (coldplugged bridges using ACPI hotplug and hotplugged bridges
> > using SHPC hotplug).
> > 
> > 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [tck PATCH 0/3] Improvements to virtual network testing and misc fixes

2018-11-02 Thread Daniel P . Berrangé
The main goal of this patch series is addition of tests which create
a virtual network with every possible forwarding mode, and attempt to
boot a guest with them. A couple of other pieces were added at the end.

Daniel P. Berrangé (3):
  Add tests for virtual network <-> guest connections
  Fix incorrect warning about deleting everything
  Allow tests to be listed as positional arguments

 bin/libvirt-tck   | 69 ---
 lib/Sys/Virt/TCK.pm   | 33 
 lib/Sys/Virt/TCK/NetworkBuilder.pm| 33 +++-
 scripts/networks/300-guest-network-isolated.t | 82 ++
 scripts/networks/310-guest-network-nat.t  | 83 +++
 scripts/networks/320-guest-network-route.t| 83 +++
 scripts/networks/330-guest-network-open.t | 83 +++
 scripts/networks/340-guest-network-bridge.t   | 79 ++
 scripts/networks/350-guest-network-private.t  | 81 ++
 scripts/networks/360-guest-network-vepa.t | 81 ++
 .../networks/370-guest-network-passthrough.t  | 81 ++
 scripts/networks/380-guest-network-hostdev.t  | 82 ++
 t/080-network-builder.t   |  2 +-
 13 files changed, 838 insertions(+), 34 deletions(-)
 create mode 100644 scripts/networks/300-guest-network-isolated.t
 create mode 100644 scripts/networks/310-guest-network-nat.t
 create mode 100644 scripts/networks/320-guest-network-route.t
 create mode 100644 scripts/networks/330-guest-network-open.t
 create mode 100644 scripts/networks/340-guest-network-bridge.t
 create mode 100644 scripts/networks/350-guest-network-private.t
 create mode 100644 scripts/networks/360-guest-network-vepa.t
 create mode 100644 scripts/networks/370-guest-network-passthrough.t
 create mode 100644 scripts/networks/380-guest-network-hostdev.t

-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [tck PATCH 3/3] Allow tests to be listed as positional arguments

2018-11-02 Thread Daniel P . Berrangé
The -t argument accepts the path to a test file or a test directory. It
would be useful if shell wildcards could be used to specify test files,
but this doesn't work when using optional arguments.

By changing the test path(s) to be positional arguments we can easily
allow for shell wildcards.

Signed-off-by: Daniel P. Berrangé 
---
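For illustration, with this change a subset of tests can be selected with a
plain shell wildcard, e.g. (hypothetical invocation from a source checkout):

  # libvirt-tck scripts/networks/*.t
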
 bin/libvirt-tck | 35 +--
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/bin/libvirt-tck b/bin/libvirt-tck
index d5519dd..bd332a4 100644
--- a/bin/libvirt-tck
+++ b/bin/libvirt-tck
@@ -8,7 +8,7 @@ libvirt-tck - libvirt Technology Compatability Kit
 
 =head1 SYNOPSIS
 
- # libvirt-tck [OPTIONS]
+ # libvirt-tck [OPTIONS] [TESTS..]
 
 Run with default config, probing for URI to use
 
@@ -62,8 +62,15 @@ the default configuration file from 
C
 and will allow libvirt to probe for the hypervisor driver to
 run. If a reliably repeatable test result set is desired, it is
 recommended to always give an explicit libvirt connection URI
-to choose the driver. The following options are available when
-running the C command
+to choose the driver.
+
+Any command line arguments that are not parsed as options will
+be considered paths to test scripts to invoke. If no paths are
+given, all tests under C will be
+executed.
+
+The following options are available when running the C
+command
 
 =over 4
 
@@ -98,11 +105,6 @@ networks, storage pools, etc which have a "tck" name prefix.
 User created objects whose name does not start with "tck" will be
 left untouched.
 
-=item -t, --testdir PATH
-
-Specify an alternate directory path in which to find the test
-scripts to be run. If omitted, defaults to C
-
 =item -a, --archive FILE
 
 Generate an archive containing all the raw test results. The
@@ -144,7 +146,6 @@ my $timer = 0;
 my $archive;
 my $config = catfile($confdir, "default.cfg");
 my $format = "text";
-my $testdir = catdir($datadir, "tests");
 
 if (!GetOptions("verbose" => \$verbose,
"debug" => \$debug,
@@ -154,7 +155,6 @@ if (!GetOptions("verbose" => \$verbose,
"config=s" => \$config,
"force" => \$force,
"format=s" => \$format,
-   "testdir=s" => \$testdir,
"timer" => \$timer) || $help) {
 pod2usage(-verbose => $help,
  -output => $help ? \*STDOUT : \*STDERR,
@@ -181,12 +181,19 @@ if ($verbose && $quiet) {
  -output => \*STDERR);
 }
 
-unless (-e $testdir) {
-print STDERR "$0: test directory '$testdir' does not exist\n";
-exit 2;
+my @testdirs = @ARGV;
+unless (@testdirs) {
+push @testdirs, catdir($datadir, "tests");
+}
+
+foreach (@testdirs) {
+unless (-e $_) {
+   print STDERR "$0: test path '$_' does not exist\n";
+   exit 2;
+}
 }
 
-my @newargv = ("-r", "--norc", "--merge", $testdir);
+my @newargv = ("-r", "--norc", "--merge", @testdirs);
 
 if ($archive) {
 push @newargv, "-a", $archive;
-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [tck PATCH 2/3] Fix incorrect warning about deleting everything

2018-11-02 Thread Daniel P . Berrangé
The code does not in fact delete everything on the host, only things
whose name starts with a "tck" prefix.

Signed-off-by: Daniel P. Berrangé 
---
 bin/libvirt-tck | 36 ++--
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/bin/libvirt-tck b/bin/libvirt-tck
index 816234b..d5519dd 100644
--- a/bin/libvirt-tck
+++ b/bin/libvirt-tck
@@ -38,20 +38,22 @@ test suite for libvirt drivers.
 
 =head2 WARNING
 
-There now follows a few words of warning
+The test suite is intended to be moderately safe to run on arbitrary
+hosts and takes steps to avoid intentionally breaking things.
 
-The test suite needs to have a completely 'clean' initial
-starting state. If your host already has virtual machines
-defined and/or running this will cause problems. The test
-suite will detect this and refuse to run, allowing you to
-remove any preexisting guests. Alternatively you can pass
-the --force option and libvirt will DELETE EVERYTHING it
-finds.
+All objects (guests, networks, storage pools, etc) that are created
+will  have a name prefix of "tck" to minimize risk of clashes. The
+test suite will only ever (intentionally) delete objects with a
+"tck" name prefix.
 
-To repeat: ALL YOUR EXISTING DOMAINS, NETWORKS, STORAGE POOLS
-WILL BE DELETED IF YOU USE THE --force OPTION.
+Where a test suite needs access to a precious host resource (physical
+NIC, PCI device, USB device, block device), execution will be skipped
+until the admin has white listed one or more suitable resources in
+the C configuration file.
 
-The warning is now complete, continue reading
+Despite these precautions, running automated tests always carries some
+degree of risk to the host system. It is thus advisable to avoid
+executing this test suite on hosts with precious state.
 
 =head2 OPTIONS
 
@@ -90,13 +92,11 @@ C option generates a formal XML document of results.
 
 =item --force
 
-Forcably remove all running guest domains and all persistent guest
-domain configuration files before running any tests. The test suite
-requires a pristine install, so all existing managed objects must
-be removed before running. This switch will instruct libvirt-tck
-to automatically remove all guest domains. YOU WILL NOT GET YOUR
-EXISTING GUEST DOMAINS BACK IF THIS HAPPENS. THEY WILL BE GONE
-FOREVER. USE AT YOUR OWN RISK.
+Forcably remove all previously created objects, including guests,
+networks, storage pools, etc which have a "tck" name prefix.
+
+User created objects whose name does not start with "tck" will be
+left untouched.
 
 =item -t, --testdir PATH
 
-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

[libvirt] [tck PATCH 1/3] Add tests for virtual network <-> guest connections

2018-11-02 Thread Daniel P . Berrangé
Signed-off-by: Daniel P. Berrangé 
---
 lib/Sys/Virt/TCK.pm   | 33 
 lib/Sys/Virt/TCK/NetworkBuilder.pm| 33 +++-
 scripts/networks/300-guest-network-isolated.t | 82 ++
 scripts/networks/310-guest-network-nat.t  | 83 +++
 scripts/networks/320-guest-network-route.t| 83 +++
 scripts/networks/330-guest-network-open.t | 83 +++
 scripts/networks/340-guest-network-bridge.t   | 79 ++
 scripts/networks/350-guest-network-private.t  | 81 ++
 scripts/networks/360-guest-network-vepa.t | 81 ++
 .../networks/370-guest-network-passthrough.t  | 81 ++
 scripts/networks/380-guest-network-hostdev.t  | 82 ++
 t/080-network-builder.t   |  2 +-
 12 files changed, 800 insertions(+), 3 deletions(-)
 create mode 100644 scripts/networks/300-guest-network-isolated.t
 create mode 100644 scripts/networks/310-guest-network-nat.t
 create mode 100644 scripts/networks/320-guest-network-route.t
 create mode 100644 scripts/networks/330-guest-network-open.t
 create mode 100644 scripts/networks/340-guest-network-bridge.t
 create mode 100644 scripts/networks/350-guest-network-private.t
 create mode 100644 scripts/networks/360-guest-network-vepa.t
 create mode 100644 scripts/networks/370-guest-network-passthrough.t
 create mode 100644 scripts/networks/380-guest-network-hostdev.t

diff --git a/lib/Sys/Virt/TCK.pm b/lib/Sys/Virt/TCK.pm
index 04244bd..9457836 100644
--- a/lib/Sys/Virt/TCK.pm
+++ b/lib/Sys/Virt/TCK.pm
@@ -29,6 +29,7 @@ use File::Path qw(mkpath);
 use File::Spec::Functions qw(catfile catdir rootdir);
 use Cwd qw(cwd);
 use LWP::UserAgent;
+use IO::Interface::Simple;
 use IO::Uncompress::Gunzip qw(gunzip);
 use IO::Uncompress::Bunzip2 qw(bunzip2);
 use XML::XPath;
@@ -1285,6 +1286,38 @@ sub get_ip_from_leases{
 }
 
 
+sub find_free_ipv4_subnet {
+my $index;
+
+my %used;
+
+foreach my $iface (IO::Interface::Simple->interfaces()) {
+   if ($iface->netmask eq "255.255.255.0" &&
+   $iface->address =~ /^192.168.(\d+).\d+/) {
+   $used{"$1"} = 1;
+   print "Used $1\n";
+   } else {
+   print "Not used ", $iface->address, "\n";
+   }
+}
+
+for (my $i = 1; $i < 255; $i++) {
+   if (!exists $used{"$i"}) {
+   $index = $i;
+   last;
+   }
+}
+
+return () unless defined $index;
+
+return (
+   address => "192.168.$index.1",
+   netmask => "255.255.255.0",
+   dhcpstart => "192.168.$index.100",
+   dhcpend => "192.168.$index.200"
+   );
+}
+
 sub shutdown_vm_gracefully {
 my $dom = shift;
 
diff --git a/lib/Sys/Virt/TCK/NetworkBuilder.pm 
b/lib/Sys/Virt/TCK/NetworkBuilder.pm
index 09ca6b7..ad0cab8 100644
--- a/lib/Sys/Virt/TCK/NetworkBuilder.pm
+++ b/lib/Sys/Virt/TCK/NetworkBuilder.pm
@@ -61,6 +61,22 @@ sub forward {
 return $self;
 }
 
+sub interfaces {
+my $self = shift;
+
+$self->{interfaces} = [@_];
+
+return $self;
+}
+
+sub host_devices {
+my $self = shift;
+
+$self->{host_devices} = [@_];
+
+return $self;
+}
+
 sub ipaddr {
 my $self = shift;
 my $address = shift;
@@ -98,8 +114,21 @@ sub as_xml {
 $w->emptyTag("bridge", %{$self->{bridge}})
 if $self->{bridge};
 
-$w->emptyTag("forward", %{$self->{forward}})
-if exists $self->{forward};
+if (exists $self->{forward}) {
+   $w->startTag("forward", %{$self->{forward}});
+   foreach (@{$self->{interfaces}}) {
+   $w->emptyTag("interface", dev => $_);
+   }
+   foreach (@{$self->{host_devices}}) {
+   $w->emptyTag("address",
+type => "pci",
+domain => $_->[0],
+bus => $_->[1],
+slot => $_->[2],
+function => $_->[3]);
+   }
+   $w->endTag("forward");
+}
 
 if ($self->{ipaddr}) {
 $w->startTag("ip",
diff --git a/scripts/networks/300-guest-network-isolated.t 
b/scripts/networks/300-guest-network-isolated.t
new file mode 100644
index 000..487e864
--- /dev/null
+++ b/scripts/networks/300-guest-network-isolated.t
@@ -0,0 +1,82 @@
+# -*- perl -*-
+#
+# Copyright (C) 2018 Red Hat, Inc.
+#
+# This program is free software; You can redistribute it and/or modify
+# it under the GNU General Public License as published by the Free
+# Software Foundation; either version 2, or (at your option) any
+# later version
+#
+# The file "LICENSE" distributed along with this file provides full
+# details of the terms and conditions
+#
+
+=pod
+
+=head1 NAME
+
+network/300-guest-network-isolated.t - guest connect to isolated network
+
+=head1 DESCRIPTION
+
+This test case validates that a guest is connected to an isolated
+virtual network
+
+=cut
+
+use strict;
+use warnings;
+
+use Test::More tests => 4;
+
+use Sys::Virt::TCK;
+
+my 

Re: [libvirt] [PATCH] news: Update for 4.9.0 release

2018-11-02 Thread Laine Stump
On 11/2/18 10:01 AM, Andrea Bolognani wrote:
> Signed-off-by: Andrea Bolognani 
> ---
>  docs/news.xml | 49 +
>  1 file changed, 49 insertions(+)
>
> diff --git a/docs/news.xml b/docs/news.xml
> index e5476a3332..b75df36ddb 100644
> --- a/docs/news.xml
> +++ b/docs/news.xml
> @@ -35,6 +35,16 @@
>  
>
>  
> +  
> +
> +  util: Add cgroup v2 support
> +
> +
> +  cgroup v2 support has been implemented in libvirt, with both
> +  "unified" (v2 only) and "hybrid" (v2 + v1) setups being usable;
> +  existing "legacy" (v1 only) setups will keep working.
> +
> +  
>
>  
>qemu: Add vfio AP support
> @@ -46,8 +56,47 @@
>
>  
>  
> +  
> +
> +  rpc: Make 'genprotocol' output reproducible
> +
> +
> +  This is another step towards making libvirt builds fully
> +  reproducible.
> +
> +  
>  
>  
> +  
> +
> +  security: Fix permissions for UNIX sockets
> +
> +
> +  Since 4.5.0, libvirt is using FD passing to hand sockets over to
> +  QEMU, which in theory removes the need for them to be accessible by
> +  the user under which the QEMU process is running; however, other
> +  processes such as vdsm need to access the sockets as well, which
> +  means adjusting permissions is still necessary.
> +
> +  
> +  
> +
> +  cpu_map: Add Icelake model definitions
> +
> +
> +  These CPU models will be available in the upcoming 3.1.0 QEMU
> +  release.
> +
> +  
> +  
> +
> +  util: Properly parse URIs with missing trailing slash
> +
> +
> +  Some storage URIs were not parsed correctly, in which case libvirt
> +  ended up emitting XML that it would then refuse to parse back.
> +
> +  
>  
>
>

Reviewed-by: Laine Stump 




pEpkey.asc
Description: application/pgp-keys
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] External disk snapshot paths and the revert operation

2018-11-02 Thread Povilas Kanapickas
On 02/11/2018 15:58, John Ferlan wrote:
> 
> 
> On 11/2/18 8:49 AM, Povilas Kanapickas wrote:
>> On 21/10/2018 21:31, Povilas Kanapickas wrote:
>>> Hey,
>>>
>>> It seems that there's an issue of how external disk snapshots currently
>>> work. The snapshot definition xml[1] that is supplied to the API has a
>>> list of disks each of which specify where the new overlay disks should
>>> be placed.
>>>
>>> This leads to ambiguities when we want to revert the domain to a
>>> snapshot with children. Consider the following situation:
>>>
>>> 0. Initial state:
>>> Domain definition: disk at disk-base.qcow2
>>>
>>> 1. We create a snapshot snap1 placing the overlay at disk-snap1.qcow2
>>>
>>> Domain definition: disk at disk-snap1.qcow2
>>> Disks:
>>>   - disk-base.qcow2 (read-only)
>>>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2)
>>>
>>> Snapshots:
>>>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>>>snapshot disk list: disk-snap1.qcow2)
>>>
>>> 2. We create another snapshot snap2 placing the overlay at disk-snap2.qcow2
>>>
>>> VM definition: disk at disk-snap2.qcow2
>>> Disks:
>>>   - disk-base.qcow2 (read-only)
>>>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2, read-only)
>>>   - disk-snap2.qcow2 (overlay, base: disk-snap1.qcow2)
>>>
>>> Snapshots:
>>>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>>>snapshot disk list: disk-snap1.qcow2)
>>>   - snap2 (snapshot domain disk list: disk-snap1.qcow2,
>>>snapshot disk list: disk-snap2.qcow2)
>>>
>>> Now what happens if we want to revert to snap1? We can't use any of the
>>> paths that snap1 itself references: both disk-base.qcow2 and
>>> disk-snap1.qcow2 are read-only as part of the backing chain for disks
>>> used by snap2. Possible options are these:
>>>
>>>  - we can reuse disk-snap2.qcow2 which we no longer need as it contains
>>> overlay on top of snap2. This is non-optimal because now the path of the
>>> disk after revert depends on the disk path before it.
>>>
>>>  - we can think of some name with the expectation that the user will
>>> change it later anyway.
>>>
>>> I think that both of the above are non-optimal because we don't actually
>>> revert to the exact state we took the snapshot of. The user is
>>> effectively no longer in control of the disk path after the revert.
>>>
>>> One of possible solutions would be add additional flag during the
>>> snapshot creation that would change the current behavior. Instead of
>>> creating a new disk for the overlay we could move the base disk to the
>>> specified path and create the overlay in its place. The advantages of
>>> this behavior would be the following:
>>>
>>>  - the path to the current disk used in the domain would not change when
>>> taking snapshots. This way we could avoid the problem listed above and
>>> easily revert to any snapshot.
>>>
>>>  - the confusion in naming of the overlay disks would be reduced.
>>> Currently the path specified when creating snapshot refers to the
>>> overlay image. Most of the time user does not know what will be changed
>>> in the domain until the next snapshot. So any path he chooses will
>>> likely not reflect what will end up in the overlay at the time of the
>>> next snapshot. If it was possible to specify the path where the current
>>> disk should be put instead, then the user could just name it after the
>>> snapshot and it would correctly represent the contents of that disk.
>>>
>>> The major disadvantage of the approach is that it would introduce extra
>>> complexity in the presently simple interface and also the underlying
>>> code. I've completely disregarded the case of taking snapshots while the
>>> domain is running, so there might be hidden complications there as well.
>>>
>>> Personally, I don't mind the current situation too much, but if libvirt
>>> developers think it's a thing worth fixing I would be willing to work on it.
>>>
>>> Any opinions?
>>>
>>
>> Friendly ping :-)
>>
> 
> Please be aware that the typical Red Hat reviewers of the list were away
> at KVM Forum from Oct 22-26 and some took some extended vacation time
> after that. In particular, Peter Krempa who generally understand the
> snapshot code best is away. So, it may take longer than usual for the
> patches to be considered especially since beyond the Forum/Vacation time
> we had other rather major news to disrupt our normal work flow.

Thank you for explanation.

Regards,
Povilas


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] qemu: Dissolve qemuBuildVhostuserCommandLine in qemuBuildInterfaceCommandLine

2018-11-02 Thread Michal Privoznik
On 11/02/2018 03:44 PM, Erik Skultety wrote:
> On Fri, Nov 02, 2018 at 01:56:17PM +0100, Michal Privoznik wrote:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1524230
>>
>> The qemuBuildVhostuserCommandLine builds command line for
>> vhostuser type interfaces. It is duplicating some code of the
>> function it is called from (qemuBuildInterfaceCommandLine)
>> because of the way it's called. If we merge it into the caller
>> not only do we save a few lines but we also enable checks that we
>> would have to duplicate otherwise (e.g. QoS availability).
>>
>> While at it, reorder some VIR_FREE() in
>> qemuBuildInterfaceCommandLine so that it is easier to track which
>> variables are freed and which are not.
> 
> Sounds like ^this would go into a separate trivial patch.
> 
>>
>> Signed-off-by: Michal Privoznik 
>> ---
>>  src/qemu/qemu_command.c | 113 +---
>>  1 file changed, 48 insertions(+), 65 deletions(-)
>>
> ...
> 
>> @@ -8595,24 +8577,25 @@ qemuBuildInterfaceCommandLine(virQEMUDriverPtr driver,
>>  virSetError(saved_err);
>>  virFreeError(saved_err);
>>  }
>> -for (i = 0; tapfd && i < tapfdSize && tapfd[i] >= 0; i++) {
>> -if (ret < 0)
>> -VIR_FORCE_CLOSE(tapfd[i]);
>> -if (tapfdName)
>> -VIR_FREE(tapfdName[i]);
>> -}
>>  for (i = 0; vhostfd && i < vhostfdSize && vhostfd[i] >= 0; i++) {
>>  if (ret < 0)
>>  VIR_FORCE_CLOSE(vhostfd[i]);
>>  if (vhostfdName)
>>  VIR_FREE(vhostfdName[i]);
>>  }
>> -VIR_FREE(tapfd);
>> +VIR_FREE(vhostfdName);
>> +for (i = 0; tapfd && i < tapfdSize && tapfd[i] >= 0; i++) {
>> +if (ret < 0)
>> +VIR_FORCE_CLOSE(tapfd[i]);
>> +if (tapfdName)
>> +VIR_FREE(tapfdName[i]);
>> +}
>> +VIR_FREE(tapfdName);
>>  VIR_FREE(vhostfd);
>> -VIR_FREE(nic);
>> +VIR_FREE(tapfd);
>> +VIR_FREE(chardev);
>>  VIR_FREE(host);
>> -VIR_FREE(tapfdName);
>> -VIR_FREE(vhostfdName);
>> +VIR_FREE(nic);
> 
> I don't see how ^this hunk made it better. If anything, then the VIR_FREEs
> should be probably coupled like:
> 
> VIR_FREE(tapfd);
> VIR_FREE(tapfdName);
> VIR_FREE(vhostfd);
> VIR_FREE(vhostfdName);
> 

Why is that? I can see two reasonable orderings. If you have variables
A, B, C, you either free them in the same order or in reverse C, B, A.
Any other order is just hard to follow.

Michal

> 
> It would also need to be a separate patch. To the rest of the changes:
> Reviewed-by: Erik Skultety 
> 

Okay, I'll separate the reordering in a separate trivial patch.

Thanks,
Michal

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] qemu: Dissolve qemuBuildVhostuserCommandLine in qemuBuildInterfaceCommandLine

2018-11-02 Thread Erik Skultety
On Fri, Nov 02, 2018 at 01:56:17PM +0100, Michal Privoznik wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1524230
>
> The qemuBuildVhostuserCommandLine builds command line for
> vhostuser type interfaces. It is duplicating some code of the
> function it is called from (qemuBuildInterfaceCommandLine)
> because of the way it's called. If we merge it into the caller
> not only do we save a few lines but we also enable checks that we
> would have to duplicate otherwise (e.g. QoS availability).
>
> While at it, reorder some VIR_FREE() in
> qemuBuildInterfaceCommandLine so that it is easier to track which
> variables are freed and which are not.

Sounds like ^this would go into a separate trivial patch.

>
> Signed-off-by: Michal Privoznik 
> ---
>  src/qemu/qemu_command.c | 113 +---
>  1 file changed, 48 insertions(+), 65 deletions(-)
>
...

> @@ -8595,24 +8577,25 @@ qemuBuildInterfaceCommandLine(virQEMUDriverPtr driver,
>  virSetError(saved_err);
>  virFreeError(saved_err);
>  }
> -for (i = 0; tapfd && i < tapfdSize && tapfd[i] >= 0; i++) {
> -if (ret < 0)
> -VIR_FORCE_CLOSE(tapfd[i]);
> -if (tapfdName)
> -VIR_FREE(tapfdName[i]);
> -}
>  for (i = 0; vhostfd && i < vhostfdSize && vhostfd[i] >= 0; i++) {
>  if (ret < 0)
>  VIR_FORCE_CLOSE(vhostfd[i]);
>  if (vhostfdName)
>  VIR_FREE(vhostfdName[i]);
>  }
> -VIR_FREE(tapfd);
> +VIR_FREE(vhostfdName);
> +for (i = 0; tapfd && i < tapfdSize && tapfd[i] >= 0; i++) {
> +if (ret < 0)
> +VIR_FORCE_CLOSE(tapfd[i]);
> +if (tapfdName)
> +VIR_FREE(tapfdName[i]);
> +}
> +VIR_FREE(tapfdName);
>  VIR_FREE(vhostfd);
> -VIR_FREE(nic);
> +VIR_FREE(tapfd);
> +VIR_FREE(chardev);
>  VIR_FREE(host);
> -VIR_FREE(tapfdName);
> -VIR_FREE(vhostfdName);
> +VIR_FREE(nic);

I don't see how ^this hunk made it better. If anything, then the VIR_FREEs
should be probably coupled like:

VIR_FREE(tapfd);
VIR_FREE(tapfdName);
VIR_FREE(vhostfd);
VIR_FREE(vhostfdName);


It would also need to be a separate patch. To the rest of the changes:
Reviewed-by: Erik Skultety 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [ocaml PATCH] Cast virError* enums to int for comparisons with 0

2018-11-02 Thread Pino Toscano
The actual type of an enum in C is implementation-defined when there are
no negative values, and thus it can be int or unsigned int.  This is the case
for the virError* enums in libvirt, as they do not have negative values.

Hence, to avoid hitting tautological comparison errors when checking
their range, temporarily cast the enum values to int when checking that they
are not negative.  The check is there to ensure the value is within the
range of the OCaml type used to represent it.

Signed-off-by: Pino Toscano 
---
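For anyone curious why the cast is needed, here is a minimal standalone
illustration of the pattern (the enum below is made up; the virError* enums
hit exactly the same issue):

  enum example { FIRST = 0, LAST = 2 };   /* no negative enumerators */

  int
  in_range(enum example code)
  {
      /* If the compiler picks an unsigned underlying type, "0 <= code" is
       * always true, and clang's tautological-compare warnings turn that
       * into an error under -Werror.  Casting to int keeps the range check
       * while avoiding the warning. */
      return 0 <= (int) code && code <= LAST;
  }
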
 libvirt/libvirt_c_epilogue.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/libvirt/libvirt_c_epilogue.c b/libvirt/libvirt_c_epilogue.c
index 29656a4..4e75d2f 100644
--- a/libvirt/libvirt_c_epilogue.c
+++ b/libvirt/libvirt_c_epilogue.c
@@ -153,7 +153,7 @@ Val_err_number (virErrorNumber code)
   CAMLparam0 ();
   CAMLlocal1 (rv);
 
-  if (0 <= code && code <= MAX_VIR_CODE)
+  if (0 <= (int) code && code <= MAX_VIR_CODE)
 rv = Val_int (code);
   else {
 rv = caml_alloc (1, 0);/* VIR_ERR_UNKNOWN (code) */
@@ -169,7 +169,7 @@ Val_err_domain (virErrorDomain code)
   CAMLparam0 ();
   CAMLlocal1 (rv);
 
-  if (0 <= code && code <= MAX_VIR_DOMAIN)
+  if (0 <= (int) code && code <= MAX_VIR_DOMAIN)
 rv = Val_int (code);
   else {
 rv = caml_alloc (1, 0);/* VIR_FROM_UNKNOWN (code) */
@@ -185,7 +185,7 @@ Val_err_level (virErrorLevel code)
   CAMLparam0 ();
   CAMLlocal1 (rv);
 
-  if (0 <= code && code <= MAX_VIR_LEVEL)
+  if (0 <= (int) code && code <= MAX_VIR_LEVEL)
 rv = Val_int (code);
   else {
 rv = caml_alloc (1, 0);/* VIR_ERR_UNKNOWN_LEVEL (code) */
-- 
2.17.2

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [jenkins-ci PATCH 4/4] guests: Drop Fedora 27

2018-11-02 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
 guests/host_vars/libvirt-fedora-27/docker.yml |  2 -
 .../host_vars/libvirt-fedora-27/install.yml   |  2 -
 guests/host_vars/libvirt-fedora-27/main.yml   | 22 -
 guests/inventory  |  1 -
 guests/vars/mappings.yml  |  1 -
 guests/vars/vault.yml | 86 +--
 6 files changed, 41 insertions(+), 73 deletions(-)
 delete mode 100644 guests/host_vars/libvirt-fedora-27/docker.yml
 delete mode 100644 guests/host_vars/libvirt-fedora-27/install.yml
 delete mode 100644 guests/host_vars/libvirt-fedora-27/main.yml

diff --git a/guests/host_vars/libvirt-fedora-27/docker.yml 
b/guests/host_vars/libvirt-fedora-27/docker.yml
deleted file mode 100644
index dcada18..000
--- a/guests/host_vars/libvirt-fedora-27/docker.yml
+++ /dev/null
@@ -1,2 +0,0 @@

-docker_base: fedora:27
diff --git a/guests/host_vars/libvirt-fedora-27/install.yml 
b/guests/host_vars/libvirt-fedora-27/install.yml
deleted file mode 100644
index f7a45af..000
--- a/guests/host_vars/libvirt-fedora-27/install.yml
+++ /dev/null
@@ -1,2 +0,0 @@

-install_url: 
https://download.fedoraproject.org/pub/fedora/linux/releases/27/Everything/x86_64/os
diff --git a/guests/host_vars/libvirt-fedora-27/main.yml 
b/guests/host_vars/libvirt-fedora-27/main.yml
deleted file mode 100644
index 0b4ec2b..000
--- a/guests/host_vars/libvirt-fedora-27/main.yml
+++ /dev/null
@@ -1,22 +0,0 @@

-projects:
-  - libosinfo
-  - libvirt
-  - libvirt-cim
-  - libvirt-dbus
-  - libvirt-glib
-  - libvirt-go
-  - libvirt-go-xml
-  - libvirt-ocaml
-  - libvirt-perl
-  - libvirt-python
-  - libvirt-sandbox
-  - libvirt-tck
-  - osinfo-db
-  - osinfo-db-tools
-  - virt-manager
-  - virt-viewer
-
-package_format: rpm
-os_name: Fedora
-os_version: 27
diff --git a/guests/inventory b/guests/inventory
index d0ac2d7..6749efa 100644
--- a/guests/inventory
+++ b/guests/inventory
@@ -2,7 +2,6 @@ libvirt-centos-7
 libvirt-debian-8
 libvirt-debian-9
 libvirt-debian-sid
-libvirt-fedora-27
 libvirt-fedora-28
 libvirt-fedora-29
 libvirt-fedora-rawhide
diff --git a/guests/vars/mappings.yml b/guests/vars/mappings.yml
index cd92bd8..19f7a96 100644
--- a/guests/vars/mappings.yml
+++ b/guests/vars/mappings.yml
@@ -730,7 +730,6 @@ mappings:
 deb: libc-dev-bin
 rpm: rpcgen
 CentOS7: glibc-common
-Fedora27: glibc-common
 FreeBSD:
 
   rpmbuild:
diff --git a/guests/vars/vault.yml b/guests/vars/vault.yml
index 960d5b1..2a3efe6 100644
--- a/guests/vars/vault.yml
+++ b/guests/vars/vault.yml
@@ -1,46 +1,42 @@
 $ANSIBLE_VAULT;1.1;AES256
-386234636261336561643734336431336566373930316630653866346633316263356166
-346533623561633435363334376462303265326664640a353834303038336638636237636163
-6563623036336437646534356539613261656564626436313931306361383238623766353132
-3932613734626636310a64373138336239376136646134336535313134366530643435323568
-62326337666236623566626664386163633533623964323030633232306131356236393361616235
-37616330326332623835353930663863336537343466323035346266343135376234643530633730
-66376436643364656231393965656264643038646564346339343038346634653035653264373238
-6561363165316132326561643937616436353063346233646263623439386638343638616331
-35326633313961343836366162623137373238656537666531353131333636353838303438373839
-35393764656633613934646366313633383536386232613064663130363964643466386461363036
-38303234343730656137663231623639336362356532653839333731306435376130346635326137
-30326465316564663438386537396639663234383731363161633933363531353637653231366333
-653831353032383136656536313464353936656264306465653135396230326434646138
-38323631613866336434626331346636383835393031643531626661633164666236623339653236
-3633626563626239313461343538643736376362313130666261343034613064333162623734
-63363938356562386130653065623162653961326635666262306662373535623263363735633731
-6362623162396465623339356534396162323332363465666332356561343632353431366162
-6264356435636561643565376637306236393338386530373461363066633966623265373335
-62633035646461623639343561373465306565326437303337303735653362343532343139363731
-65616132633231323032316536646232386536313666323032363433613930613632343030663434
-36336261346137336536396330333066323831613762336437393230373336303535333739336637
-6132353436313262353764373737636232333866393439633064616334666530303335306635
-3034353631316236656136643239633639633036643838393539636331366233353631363237
-3431343130363065613836616363326130356634323237393763306335636164326232663933
-31633938323633346461623834376435633636653065613634633038636535623364326662626231
-333162353366313835343162613761333733313761323366326236616265333966683133
-63393937393439363065386565396131626434363432343230613633396135373634303536653263
-6236313537343634613636663237336233343136313262323132633537303030653032336637

[libvirt] [jenkins-ci PATCH 2/4] Build on Fedora 29

2018-11-02 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
 guests/playbooks/build/jobs/defaults.yml| 2 ++
 guests/playbooks/build/projects/libvirt-dbus.yml| 3 +++
 guests/playbooks/build/projects/libvirt-ocaml.yml   | 1 +
 guests/playbooks/build/projects/libvirt-sandbox.yml | 2 ++
 guests/playbooks/build/projects/libvirt-tck.yml | 2 ++
 guests/playbooks/build/projects/libvirt.yml | 1 +
 guests/playbooks/build/projects/virt-manager.yml| 3 +++
 guests/playbooks/build/projects/virt-viewer.yml | 1 +
 jobs/defaults.yaml  | 2 ++
 projects/libvirt-dbus.yaml  | 3 +++
 projects/libvirt-ocaml.yaml | 1 +
 projects/libvirt-sandbox.yaml   | 2 ++
 projects/libvirt-tck.yaml   | 2 ++
 projects/libvirt.yaml   | 1 +
 projects/virt-manager.yaml  | 3 +++
 projects/virt-viewer.yaml   | 1 +
 16 files changed, 30 insertions(+)

diff --git a/guests/playbooks/build/jobs/defaults.yml 
b/guests/playbooks/build/jobs/defaults.yml
index 81c5fab..76040df 100644
--- a/guests/playbooks/build/jobs/defaults.yml
+++ b/guests/playbooks/build/jobs/defaults.yml
@@ -6,6 +6,7 @@ all_machines:
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
@@ -16,6 +17,7 @@ rpm_machines:
   - libvirt-centos-7
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
 mingw_machines:
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-dbus.yml 
b/guests/playbooks/build/projects/libvirt-dbus.yml
index 69ba144..d0486f0 100644
--- a/guests/playbooks/build/projects/libvirt-dbus.yml
+++ b/guests/playbooks/build/projects/libvirt-dbus.yml
@@ -8,6 +8,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
@@ -28,6 +29,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
@@ -42,6 +44,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
diff --git a/guests/playbooks/build/projects/libvirt-ocaml.yml 
b/guests/playbooks/build/projects/libvirt-ocaml.yml
index f0dec8b..a044128 100644
--- a/guests/playbooks/build/projects/libvirt-ocaml.yml
+++ b/guests/playbooks/build/projects/libvirt-ocaml.yml
@@ -8,6 +8,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
diff --git a/guests/playbooks/build/projects/libvirt-sandbox.yml 
b/guests/playbooks/build/projects/libvirt-sandbox.yml
index 6b6a3f6..88b7fec 100644
--- a/guests/playbooks/build/projects/libvirt-sandbox.yml
+++ b/guests/playbooks/build/projects/libvirt-sandbox.yml
@@ -10,6 +10,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-ubuntu-16
   - libvirt-ubuntu-18
@@ -27,4 +28,5 @@
 machines:
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-tck.yml 
b/guests/playbooks/build/projects/libvirt-tck.yml
index e4b82cb..5919d73 100644
--- a/guests/playbooks/build/projects/libvirt-tck.yml
+++ b/guests/playbooks/build/projects/libvirt-tck.yml
@@ -7,6 +7,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-freebsd-10
   - libvirt-freebsd-11
@@ -24,4 +25,5 @@
 machines:
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt.yml 
b/guests/playbooks/build/projects/libvirt.yml
index 59635f5..60582a9 100644
--- a/guests/playbooks/build/projects/libvirt.yml
+++ b/guests/playbooks/build/projects/libvirt.yml
@@ -18,6 +18,7 @@
   - libvirt-debian-sid
   - libvirt-fedora-27
   - libvirt-fedora-28
+  - libvirt-fedora-29
   - libvirt-fedora-rawhide
   - libvirt-ubuntu-16
   - libvirt-ubuntu-18
diff --git a/guests/playbooks/build/projects/virt-manager.yml 
b/guests/playbooks/build/projects/virt-manager.yml
index 0dec7f6..11ebb7f 100644
--- a/guests/playbooks/build/projects/virt-manager.yml
+++ b/guests/playbooks/build/projects/virt-manager.yml
@@ -9,6 +9,7 @@
   - libvirt-debian-sid
   

[libvirt] [jenkins-ci PATCH 3/4] Stop building on Fedora 27

2018-11-02 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
 guests/playbooks/build/jobs/defaults.yml| 2 --
 guests/playbooks/build/projects/libvirt-dbus.yml| 3 ---
 guests/playbooks/build/projects/libvirt-ocaml.yml   | 1 -
 guests/playbooks/build/projects/libvirt-sandbox.yml | 2 --
 guests/playbooks/build/projects/libvirt-tck.yml | 2 --
 guests/playbooks/build/projects/libvirt.yml | 1 -
 guests/playbooks/build/projects/virt-manager.yml| 3 ---
 jobs/defaults.yaml  | 2 --
 projects/libvirt-dbus.yaml  | 3 ---
 projects/libvirt-ocaml.yaml | 1 -
 projects/libvirt-sandbox.yaml   | 2 --
 projects/libvirt-tck.yaml   | 2 --
 projects/libvirt.yaml   | 1 -
 projects/virt-manager.yaml  | 3 ---
 14 files changed, 28 deletions(-)

diff --git a/guests/playbooks/build/jobs/defaults.yml 
b/guests/playbooks/build/jobs/defaults.yml
index 76040df..d9e81e0 100644
--- a/guests/playbooks/build/jobs/defaults.yml
+++ b/guests/playbooks/build/jobs/defaults.yml
@@ -4,7 +4,6 @@ all_machines:
   - libvirt-debian-8
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
@@ -15,7 +14,6 @@ all_machines:
   - libvirt-ubuntu-18
 rpm_machines:
   - libvirt-centos-7
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-dbus.yml 
b/guests/playbooks/build/projects/libvirt-dbus.yml
index d0486f0..1d4ecbd 100644
--- a/guests/playbooks/build/projects/libvirt-dbus.yml
+++ b/guests/playbooks/build/projects/libvirt-dbus.yml
@@ -6,7 +6,6 @@
   - libvirt-centos-7
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
@@ -27,7 +26,6 @@
 machines:
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
@@ -42,7 +40,6 @@
 machines:
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-ocaml.yml 
b/guests/playbooks/build/projects/libvirt-ocaml.yml
index a044128..bb70d47 100644
--- a/guests/playbooks/build/projects/libvirt-ocaml.yml
+++ b/guests/playbooks/build/projects/libvirt-ocaml.yml
@@ -6,7 +6,6 @@
   - libvirt-centos-7
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-sandbox.yml b/guests/playbooks/build/projects/libvirt-sandbox.yml
index 88b7fec..948fb9f 100644
--- a/guests/playbooks/build/projects/libvirt-sandbox.yml
+++ b/guests/playbooks/build/projects/libvirt-sandbox.yml
@@ -8,7 +8,6 @@
   - libvirt-debian-8
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
@@ -26,7 +25,6 @@
 - include: '{{ playbook_base }}/jobs/autotools-rpm-job.yml'
   vars:
 machines:
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt-tck.yml b/guests/playbooks/build/projects/libvirt-tck.yml
index 5919d73..4e40565 100644
--- a/guests/playbooks/build/projects/libvirt-tck.yml
+++ b/guests/playbooks/build/projects/libvirt-tck.yml
@@ -5,7 +5,6 @@
   - libvirt-debian-8
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
@@ -23,7 +22,6 @@
 - include: '{{ playbook_base }}/jobs/perl-modulebuild-rpm-job.yml'
   vars:
 machines:
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/libvirt.yml b/guests/playbooks/build/projects/libvirt.yml
index 60582a9..a78326e 100644
--- a/guests/playbooks/build/projects/libvirt.yml
+++ b/guests/playbooks/build/projects/libvirt.yml
@@ -16,7 +16,6 @@
   - libvirt-debian-8
   - libvirt-debian-9
   - libvirt-debian-sid
-  - libvirt-fedora-27
   - libvirt-fedora-28
   - libvirt-fedora-29
   - libvirt-fedora-rawhide
diff --git a/guests/playbooks/build/projects/virt-manager.yml b/guests/playbooks/build/projects/virt-manager.yml
index 11ebb7f..432f6eb 100644
--- a/guests/playbooks/build/projects/virt-manager.yml
+++ b/guests/playbooks/build/projects/virt-manager.yml
@@ -7,7 +7,6 @@
 machines:
   - libvirt-debian-9
   - 

[libvirt] [jenkins-ci PATCH 1/4] guests: Add Fedora 29

2018-11-02 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
 guests/host_vars/libvirt-fedora-29/docker.yml |  2 +
 .../host_vars/libvirt-fedora-29/install.yml   |  2 +
 guests/host_vars/libvirt-fedora-29/main.yml   | 22 +
 guests/inventory  |  1 +
 guests/vars/vault.yml | 86 ++-
 5 files changed, 72 insertions(+), 41 deletions(-)
 create mode 100644 guests/host_vars/libvirt-fedora-29/docker.yml
 create mode 100644 guests/host_vars/libvirt-fedora-29/install.yml
 create mode 100644 guests/host_vars/libvirt-fedora-29/main.yml

diff --git a/guests/host_vars/libvirt-fedora-29/docker.yml b/guests/host_vars/libvirt-fedora-29/docker.yml
new file mode 100644
index 000..f9ad241
--- /dev/null
+++ b/guests/host_vars/libvirt-fedora-29/docker.yml
@@ -0,0 +1,2 @@
+---
+docker_base: fedora:29
diff --git a/guests/host_vars/libvirt-fedora-29/install.yml b/guests/host_vars/libvirt-fedora-29/install.yml
new file mode 100644
index 000..79212fd
--- /dev/null
+++ b/guests/host_vars/libvirt-fedora-29/install.yml
@@ -0,0 +1,2 @@
+---
+install_url: https://download.fedoraproject.org/pub/fedora/linux/releases/29/Everything/x86_64/os
diff --git a/guests/host_vars/libvirt-fedora-29/main.yml b/guests/host_vars/libvirt-fedora-29/main.yml
new file mode 100644
index 000..b6140ed
--- /dev/null
+++ b/guests/host_vars/libvirt-fedora-29/main.yml
@@ -0,0 +1,22 @@
+---
+projects:
+  - libosinfo
+  - libvirt
+  - libvirt-cim
+  - libvirt-dbus
+  - libvirt-glib
+  - libvirt-go
+  - libvirt-go-xml
+  - libvirt-ocaml
+  - libvirt-perl
+  - libvirt-python
+  - libvirt-sandbox
+  - libvirt-tck
+  - osinfo-db
+  - osinfo-db-tools
+  - virt-manager
+  - virt-viewer
+
+package_format: rpm
+os_name: Fedora
+os_version: 29
diff --git a/guests/inventory b/guests/inventory
index 9838d7c..d0ac2d7 100644
--- a/guests/inventory
+++ b/guests/inventory
@@ -4,6 +4,7 @@ libvirt-debian-9
 libvirt-debian-sid
 libvirt-fedora-27
 libvirt-fedora-28
+libvirt-fedora-29
 libvirt-fedora-rawhide
 libvirt-freebsd-10
 libvirt-freebsd-11
diff --git a/guests/vars/vault.yml b/guests/vars/vault.yml
index 69a61da..960d5b1 100644
--- a/guests/vars/vault.yml
+++ b/guests/vars/vault.yml
@@ -1,42 +1,46 @@
 $ANSIBLE_VAULT;1.1;AES256
-37356635383236643531373130663563383030653062366434323439383730323638623666373637
-6134653331323363616632383936343465316238623138620a316461613830373831623862303263
-37383431363061346133613938643764363263303262636637363862623866656662623534313030
-3163623062663665310a34353631323237336631653537303462353435653836626438643966
-61643263356336346433633434336237383464366431383637643931366135383032373332393434
-66386565336165323239613666373463633039393662623934363934356439393636383565653835
-3536636136643431383738646233373430303635353036346436656436333635663763336266
-3935613836383838383234383764306464656461666532626436663763326163343538303231
-66336631303263363236646364633639623835386466383331626233373361656339666331363933
-316435613838663561306132656230303861333433326463623264666234313565396266
-38653137383531343964343964303432343039326332356239396539353766343536373834653863
-32633132386131376263353830656364363837666333666138383136646131666336323262353762
-63346265646637653961383635376563393432346631396238333564663135323138623936383130
-64306130653931666132316438323938386336336264623532353935616364333831646434313165
-3637333766376464303335623039616535306136663436316233316362353339616864646565
-62633133663433643338616434313437356464643034323161386262336466303365643836393537
-37323465356364656239643161383638653438333562333134663536376436616338336531333837
-653534396636633830623935333166393136343064363165316664353731393936323237
-63626535323066336337313639643930383865663730383238613731396261626434323763613133
-65333739356235623133396161663262633766313432636433346336373431383765613634353464
-34626363363234643861373336353462383730333634353632653261303431333736616238383961
-3665343330393566613130633937376564363230653635653865643434633537386565306466
-3534313166333536343234343334653830353963626335306132313339636139663139383861
-32646563343062363634613034393430313135383530633162353661356637616439623936346532
-34646439333636336533663862333935613161616435626661313936343830356434396631356464
-3364656538393134393661643833303439383763346336373665363834343238313062383662
-3565333238383835646166653763336432613063623765323962646139346466326135303031
-61636361336563396465333265323932376531353132363135363431306631653161363836393831
-6639656133353933663630633131343434393532613237653863323362323039656566343364
-39316236653761343534326632383761613835613431306338343565363732356434383632303234
-63313233303861306663613630633832356139383138663531336465623361663336306234353536
-31623562383466346437383539386434643265616137356561663530353330346233386633636133
-30383265386530346463383165316162643062663039366536663332623165386364393531343363

[libvirt] [jenkins-ci PATCH 0/4] Add Fedora 29, drop Fedora 27

2018-11-02 Thread Andrea Bolognani
It's that time of the year again :)

Andrea Bolognani (4):
  guests: Add Fedora 29
  Build on Fedora 29
  Stop building on Fedora 27
  guests: Drop Fedora 27

 guests/host_vars/libvirt-fedora-27/docker.yml |  2 -
 guests/host_vars/libvirt-fedora-29/docker.yml |  2 +
 .../install.yml   |  2 +-
 .../main.yml  |  2 +-
 guests/inventory  |  2 +-
 guests/playbooks/build/jobs/defaults.yml  |  4 +-
 .../playbooks/build/projects/libvirt-dbus.yml |  6 +-
 .../build/projects/libvirt-ocaml.yml  |  2 +-
 .../build/projects/libvirt-sandbox.yml|  4 +-
 .../playbooks/build/projects/libvirt-tck.yml  |  4 +-
 guests/playbooks/build/projects/libvirt.yml   |  2 +-
 .../playbooks/build/projects/virt-manager.yml |  6 +-
 .../playbooks/build/projects/virt-viewer.yml  |  1 +
 guests/vars/mappings.yml  |  1 -
 guests/vars/vault.yml | 82 +--
 jobs/defaults.yaml|  4 +-
 projects/libvirt-dbus.yaml|  6 +-
 projects/libvirt-ocaml.yaml   |  2 +-
 projects/libvirt-sandbox.yaml |  4 +-
 projects/libvirt-tck.yaml |  4 +-
 projects/libvirt.yaml |  2 +-
 projects/virt-manager.yaml|  6 +-
 projects/virt-viewer.yaml |  1 +
 23 files changed, 76 insertions(+), 75 deletions(-)
 delete mode 100644 guests/host_vars/libvirt-fedora-27/docker.yml
 create mode 100644 guests/host_vars/libvirt-fedora-29/docker.yml
 rename guests/host_vars/{libvirt-fedora-27 => libvirt-fedora-29}/install.yml (66%)
 rename guests/host_vars/{libvirt-fedora-27 => libvirt-fedora-29}/main.yml (95%)

-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] nodedev: Add new module node_device_util

2018-11-02 Thread Michal Privoznik
On 11/01/2018 04:50 PM, Erik Skultety wrote:
> There's a lot of stuff going on in src/conf/nodedev_conf which not
> always has to do anything with config, so even though we're trying,
> we're not really consistent in putting only parser/formatter related
> stuff here like we do for domains. So, start the cleanup simply by adding
> a new module to the nodedev driver and put a few helper APIs which want
> to open a secondary driver connection in there (similar to storage_util
> module).
> 
> Signed-off-by: Erik Skultety 
> ---
> 
> I verified the build with debian 9, centos 7, fedora 28, rawhide, and freebsd
> 11
> 
>  src/conf/Makefile.inc.am |   1 +
>  src/conf/node_device_conf.c  | 199 ---
>  src/conf/node_device_conf.h  |  11 --
>  src/conf/virstorageobj.c |   1 +
>  src/libvirt_private.syms |   8 +-
>  src/node_device/Makefile.inc.am  |  17 +-
>  src/node_device/node_device_driver.c |   1 +
>  src/node_device/node_device_util.c   | 229 +++
>  src/node_device/node_device_util.h   |  35 
>  src/storage/Makefile.inc.am  |   1 +
>  src/storage/storage_backend_scsi.c   |   1 +
>  11 files changed, 290 insertions(+), 214 deletions(-)
>  create mode 100644 src/node_device/node_device_util.c
>  create mode 100644 src/node_device/node_device_util.h
> 
> diff --git a/src/conf/Makefile.inc.am b/src/conf/Makefile.inc.am
> index af23810640..7cb6c29d70 100644
> --- a/src/conf/Makefile.inc.am
> +++ b/src/conf/Makefile.inc.am
> @@ -163,6 +163,7 @@ libvirt_la_BUILT_LIBADD += libvirt_conf.la
>  libvirt_conf_la_SOURCES = $(CONF_SOURCES)
>  libvirt_conf_la_CFLAGS = \
>   -I$(srcdir)/conf \
> + -I$(srcdir)/node_device \

This doesn't feel right. The conf parser/formatter should be driver
agnostic.

I see two options if you want to clean up src/conf/node_device_conf.c:
either you put node_device_util.c into src/conf/ right next to
node_device_conf.c, or you move it into src/util/, because the conf
module can use util.

The rest looks good.

Michal

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH] news: Update for 4.9.0 release

2018-11-02 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
 docs/news.xml | 49 +
 1 file changed, 49 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index e5476a3332..b75df36ddb 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -35,6 +35,16 @@
 
   
 
+      <change>
+        <summary>
+          util: Add cgroup v2 support
+        </summary>
+        <description>
+          cgroup v2 support has been implemented in libvirt, with both
+          "unified" (v2 only) and "hybrid" (v2 + v1) setups being usable;
+          existing "legacy" (v1 only) setups will keep working.
+        </description>
+      </change>
   
 
   qemu: Add vfio AP support
@@ -46,8 +56,47 @@
   
 
 
+      <change>
+        <summary>
+          rpc: Make 'genprotocol' output reproducible
+        </summary>
+        <description>
+          This is another step towards making libvirt builds fully
+          reproducible.
+        </description>
+      </change>
 
 
+      <change>
+        <summary>
+          security: Fix permissions for UNIX sockets
+        </summary>
+        <description>
+          Since 4.5.0, libvirt is using FD passing to hand sockets over to
+          QEMU, which in theory removes the need for them to be accessible by
+          the user under which the QEMU process is running; however, other
+          processes such as vdsm need to access the sockets as well, which
+          means adjusting permissions is still necessary.
+        </description>
+      </change>
+      <change>
+        <summary>
+          cpu_map: Add Icelake model definitions
+        </summary>
+        <description>
+          These CPU models will be available in the upcoming 3.1.0 QEMU
+          release.
+        </description>
+      </change>
+      <change>
+        <summary>
+          util: Properly parse URIs with missing trailing slash
+        </summary>
+        <description>
+          Some storage URIs were not parsed correctly, in which case libvirt
+          ended up emitting XML that it would then refuse to parse back.
+        </description>
+      </change>
 
   
   
-- 
2.19.1

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] External disk snapshot paths and the revert operation

2018-11-02 Thread John Ferlan



On 11/2/18 8:49 AM, Povilas Kanapickas wrote:
> On 21/10/2018 21:31, Povilas Kanapickas wrote:
>> Hey,
>>
>> It seems that there's an issue of how external disk snapshots currently
>> work. The snapshot definition xml[1] that is supplied to the API has a
>> list of disks each of which specify where the new overlay disks should
>> be placed.
>>
>> This leads to ambiguities when we want to revert the domain to a
>> snapshot with children. Consider the following situation:
>>
>> 0. Initial state:
>> Domain definition: disk at disk-base.qcow2
>>
>> 1. We create a snapshot snap1 placing the overlay at disk-snap1.qcow2
>>
>> Domain definition: disk at disk-snap1.qcow2
>> Disks:
>>   - disk-base.qcow2 (read-only)
>>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2)
>>
>> Snapshots:
>>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>>snapshot disk list: disk-snap1.qcow2)
>>
>> 2. We create another snapshot snap2 placing the overlay at disk-snap2.qcow2
>>
>> VM definition: disk at disk-snap2.qcow2
>> Disks:
>>   - disk-base.qcow2 (read-only)
>>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2, read-only)
>>   - disk-snap2.qcow2 (overlay, base: disk-snap1.qcow2)
>>
>> Snapshots:
>>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>>snapshot disk list: disk-snap1.qcow2)
>>   - snap2 (snapshot domain disk list: disk-snap1.qcow2,
>>snapshot disk list: disk-snap2.qcow2)
>>
>> Now what happens if we want to revert to snap1? We can't use any of the
>> paths that snap1 itself references: both disk-base.qcow2 and
>> disk-snap1.qcow2 are read-only as part of the backing chain for disks
>> used by snap2. Possible options are these:
>>
>>  - we can reuse disk-snap2.qcow2 which we no longer need as it contains
>> overlay on top of snap2. This is non-optimal because now the path of the
>> disk after revert depends on the disk path before it.
>>
>>  - we can think of some name with the expectation that the user will
>> change it later anyway.
>>
>> I think that both of the above are non-optimal because we don't actually
>> revert to the exact state we took the snapshot of. The user is
>> effectively no longer in control of the disk path after the revert.
>>
>> One of possible solutions would be add additional flag during the
>> snapshot creation that would change the current behavior. Instead of
>> creating a new disk for the overlay we could move the base disk to the
>> specified path and create the overlay in its place. The advantages of
>> this behavior would be the following:
>>
>>  - the path to the current disk used in the domain would not change when
>> taking snapshots. This way we could avoid the problem listed above and
>> easily revert to any snapshot.
>>
>>  - the confusion in naming of the overlay disks would be reduced.
>> Currently the path specified when creating snapshot refers to the
>> overlay image. Most of the time user does not know what will be changed
>> in the domain until the next snapshot. So any path he chooses will
>> likely not reflect what will end up in the overlay at the time of the
>> next snapshot. If it was possible to specify the path where the current
>> disk should be put instead, then the user could just name it after the
>> snapshot and it would correctly represent the contents of that disk.
>>
>> The major disadvantage of the approach is that it would introduce extra
>> complexity in the presently simple interface and also the underlying
>> code. I've completely disregarded the case of taking snapshots while the
>> domain is running, so there might be hidden complications there as well.
>>
>> Personally, I don't mind the current situation too much, but if libvirt
>> developers think it's a thing worth fixing I would be willing to work on it.
>>
>> Any opinions?
>>
> 
> Friendly ping :-)
> 

Please be aware that the typical Red Hat reviewers of the list were away
at KVM Forum from Oct 22-26 and some took some extended vacation time
after that. In particular, Peter Krempa, who generally understands the
snapshot code best, is away.
patches to be considered especially since beyond the Forum/Vacation time
we had other rather major news to disrupt our normal work flow.

John

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] nwfilter: intantiate filters on loading driver

2018-11-02 Thread Nikolay Shirokovskiy


On 02.11.2018 16:31, John Ferlan wrote:
> [...]
> 
 Looks like reinstatiation was lost in 

 commit 57f5621f464b8df5671cbe5df6bab3cf006981dd
 Author: Daniel P. Berrangé 
 Date:   Thu Apr 26 18:34:33 2018 +0100

 nwfilter: keep track of active filter bindings


 nwfilterInstantiateFilter is called from reconnection code.
 Before the patch we always instantiate rules, after we do
 not because we return early in nwfilterInstantiateFilter because
 binding already exist (it is loaded from status).

>>>
>>> OK, well that's kind of the start of it, but perhaps f14c37ce4 is also
>>> to blame since that's where nwfilterInstantiateFilter is removed to be
>>> replaced by virDomainConfNWFilterInstantiate. Although I suppose it can
>>> be argued that the former should have been:
>>>
>>> if (!(obj = *FindByPortDev(...))) {
>>> ... code to get @def and @obj...
>>> }
>>>
>>> ret = virNWFilterInstantiateFilter(driver, def);
>>> ...
>>>
>>>
>>> Since calling virNWFilterInstantiateFilter during
>>> virNWFilterBindingObjListLoadStatus as I suggested below [1] isn't
>>> feasible nor does there seem to be some other means to replicate that
>>> virNWFilterInstantiateFilter called in nwfilterBindingCreateXML after
>>> adding the @def to the bindings other than via the virNWFilterBuildAll,
>>> then OK - I'm "convinced" now this is the right fix.
>>>
>>> Still probably need to adjust the commit message, how about:
>>>
>>> nwfilter: Instantiate active filter bindings during driver init
>>>
>>> Commit 57f5621f modified nwfilterInstantiateFilter to detect when
>>> a filter binding was already present before attempting to add the
>>> new binding and instantiate it.
>>
>> Ok
>>
>>>
>>> When combined with the new virNWFilterBindingObjListLoadAllConfigs
>>> logic, which searches for and loads bindings from active domains,
>>> but does not instantiate the binding as the nwfilterBindingCreateXML
>>> would do once the filter binding was added to the list of all bindings
>>> left the possibility that once the code was active the instantiation
>>> would not occur when libvirtd was restarted.
>>
>> This is a bit hard to follow because nwfilterBindingCreateXML introduced
>> later then nwfilterInstantiateFilter being analyzed. And the possibility
>> reads likes there is a race.
>>
> 
> Yes, this was difficult to describe and why it was "pulled out" of the
> first paragraph. As for "timing" or "race" - that's absolutely the key
> point. As of this patch though the
> /var/run/libvirt/nwfilter-binding/*.xml files wouldn't be created via
> the nwfilterBindingCreateXML API since it gets introduced a few patches
> later. So the net effect of this patch is I believe technically nothing
> beyond setting ourselves up for future failure, but this is what drove
> later changes, so I'm fine with blaming it.
> 
> As an aside, logically in the series of changes made, this patch came
> after c21679fa3f to introduce virNWFilterBindingObjListLoadAllConfigs.
> 
> IOW: Commit 57f5621f is only being used to set everything up without
> creating some huge and/or hard to follow patch.
> 
>> How about:
>>
>> As result qemu driver reconnection code on daemon restart skips
>> binding instantiation opposite to what it was before. And this instantiation
>> was not moved to any other place in this patch thus we got no
>> instantiation at all.
>>
> 
> However, to me this is too generic especially since qemu driver logic
> wasn't changed in this patch.
> 
> So, consider as part of the first paragraph and replacement for the
> above paragraph:
> 
> Additionally, the change to nwfilterStateInitialize to call
> virNWFilterBindingObjListLoadAllConfigs (from commit c21679fa3f) to load
> active domain filter bindings, but not instantiate them eventually leads
> to a problem for the QEMU driver reconnection logic after a daemon
> restart where the filter bindings would no longer be instantiated.

Ok for me

> 
> Hopefully this explanation works. When I'm debugging problems I find it
> easier when there's more than a simple change occurring to have someone
> actually list the API names so that I don't have to guess...
> 
> John
> 
> FWIW: I'm still at a loss to fully understand why/how a previous
> instantiation and set up of the filter bindings would be "lost" on this
> restart path. That is, at some point in time the instantiation would
> send magic commands to filter packets. Just because libvirtd restarts
> doesn't mean those are necessarily lost, are they? IOW, wouldn't this
> just be redoing what was already done? Not that it's a problem because
> we cannot be guaranteed whether the first instantiation was done when
> libvirtd was stopped.> 
>>>
>>> Subsequent commit f14c37ce4c replaced the nwfilterInstantiateFilter
>>> with virDomainConfNWFilterInstantiate which uses @ignoreExists to
>>> detect presence of the filter and still did not restore the filter
>>> instantiation call when making the 

Re: [libvirt] [PATCH v1 2/7] pcihp: overwrite hotplug handler recursively from the start

2018-11-02 Thread David Hildenbrand
On 02.11.18 14:00, Igor Mammedov wrote:
> On Fri, 2 Nov 2018 12:43:10 +0100
> David Hildenbrand  wrote:
> 
>> On 01.11.18 15:10, Igor Mammedov wrote:
>>> On Wed, 24 Oct 2018 12:19:25 +0200
>>> David Hildenbrand  wrote:
>>>   
 For now, the hotplug handler is not called for devices that are
 being cold plugged. The hotplug handler is setup when the machine
 initialization is fully done. Only bridges that were cold plugged are
 considered.

 Set the hotplug handler for the root piix bus directly when realizing.
 Overwrite the hotplug handler of bridges when hotplugging/coldplugging
 them.

 This will now make sure that the ACPI PCI hotplug handler is also called
 for cold-plugged devices (also on bridges) and for bridges that were
 hotplugged.

 When trying to hotplug a device to a hotplugged bridge, we now correctly
 get the error message
  "Unsupported bus. Bus doesn't have property 'acpi-pcihp-bsel' set"
 Insted of going via the standard PCI hotplug handler.  
>>> Erroring out is probably not ok, since it can break existing setups
>>> where SHPC hotplugging to hotplugged bridge was working just fine before.  
>>
>> The question is if it actually was supposed (and eventually did) work.
> I think it works now, it's QEMU 'ACPI hotplug hack' (which exists for
> the sake of Windows) limitation. We weren't able to dynamically add
> ACPI description for hotplugged bridge, so it was using native hotplug.
> Now theoretically we can load tables dynamically but that, would add
> maintenance nightmare (versioned tables) and would be harder to debug.
> I'd rather not go that direction and keep current limited version,
> suggesting users to use native hotplug if guest is capable.

Alright I'll keep current behavior (checking if the bridge is hotplugged
or coldplugged). Thanks!
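
As a rough illustration of that check (a hypothetical sketch only -- the
helper below is made up and not taken from any actual QEMU patch; it just
relies on DeviceState's 'hotplugged' flag, which QEMU sets for devices
added at runtime):

    #include "qemu/osdep.h"
    #include "hw/qdev-core.h"

    /* Sketch: pick the hotplug path for devices behind a PCI bridge.
     * Cold-plugged bridges (and the root bus, bridge == NULL) keep the
     * ACPI bsel-based hotplug; hot-plugged bridges fall back to native
     * (SHPC) hotplug. */
    static bool bridge_uses_acpi_hotplug(DeviceState *bridge)
    {
        return bridge == NULL || !bridge->hotplugged;
    }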


-- 

Thanks,

David / dhildenb

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 1/3] qemu: pass stop reason from qemuProcessStopCPUs to stop handler

2018-11-02 Thread John Ferlan



On 11/2/18 4:21 AM, Nikolay Shirokovskiy wrote:
> Hi, John!
> 
> Looks like this series is stucked somehow even though there is almost 100% 
> agreement.
> 
> 

Right, but there's been a few events in between that pushed this down
the stack of things I was involved with (beyond my own work) - KVM
Forum, an acquisition, code freeze, ...

Prior to KVM Forum I asked Peter Krempa (via internal IRC) to look at
patch 3, but he may have forgotten or pushed it down his todo list. Of
course he took an extra week after KVM Forum for holiday - so

Thanks for the nudge, it's on my shorter list of things to do!  I'm
50/50 on the patch 3 event change. I would like to understand why it was
ignored previously. Sometimes there's some very strange and hidden
reason which becomes painfully obvious at some point in the future. Then
it's a search to figure out why the event just popped up, followed by
time spent wondering why we're not filtering the event any more, and why
we filtered it in the first place.

John

> On 17.10.2018 11:41, Nikolay Shirokovskiy wrote:
>>
>>
>> On 16.10.2018 21:40, John Ferlan wrote:
>>>
>>> $SUBJ:
>>>
>>> s/pass/Pass
>>>
>>> On 10/10/18 4:04 AM, Nikolay Shirokovskiy wrote:
 We send duplicate suspended lifecycle events on qemu process stop in 
 several
 places. The reason is stop event handler always send suspended event and
 we addidionally send same event but with more presise reason after call
>>>
>>> *additionally
>>> *precise
>>>
 to qemuProcessStopCPUs. Domain state change is also duplicated.

 Let's change domain state and send event only in handler. For this
 purpuse first let's pass state change reason to event handler (event
>>>
>>> *purpose
>>>
 reason is deducible from it).

 Inspired by similar patch for resume: 5dab984ed
 "qemu: Pass running reason to RESUME event handler".

>>>
>>> In any case, I think the above was a bit more appropriate for the cover
>>> since it details a few things being altered in the 3rd patch of the
>>> series. So, maybe this could change to:
>>>
>>> Similar to commit 5dab984ed which saves and passes the running reason to
>>> the RESUME event handler, during qemuProcessStopCPUs let's save and pass
>>> the pause reason in the domain private data so that the STOP event
>>> handler can use it.
>>>
 Signed-off-by: Nikolay Shirokovskiy 
 ---
  src/qemu/qemu_domain.h  |  4 
  src/qemu/qemu_process.c | 15 ---
  2 files changed, 16 insertions(+), 3 deletions(-)

 diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
 index 80bd4bd..380ea14 100644
 --- a/src/qemu/qemu_domain.h
 +++ b/src/qemu/qemu_domain.h
 @@ -370,6 +370,10 @@ struct _qemuDomainObjPrivate {
  /* qemuProcessStartCPUs stores the reason for starting vCPUs here for the
   * RESUME event handler to use it */
  virDomainRunningReason runningReason;
 +
 +/* qemuProcessStopCPUs stores the reason for starting vCPUs here for the
>>>
>>> s/starting/pausing/
>>>
>>> too much copy-pasta
>>>
>>> FWIW: Similar to the comment I made to Jirka for his series, I assume
>>> this STOP processing would have the similar issue with the event going
>>> to the old libvirtd and thus not needing to save/restore via XML state
>>> processing. For context see:
>>>
>>> https://www.redhat.com/archives/libvir-list/2018-September/msg01145.html
>>>
 + * STOP event handler to use it */
 +virDomainPausedReason pausedReason;
  };
  
>>>
>>> With a couple of adjustments,
>>>
>>> Reviewed-by: John Ferlan 
>>>
>>> John
>>>
>>> I can make the adjustments so that you don't need to send another series
>>> - just need your acknowledgment for that.
>>>
>>
>> I'm ok with you changes
>>
>> Nikolay
>>
>> --
>> libvir-list mailing list
>> libvir-list@redhat.com
>> https://www.redhat.com/mailman/listinfo/libvir-list
>>

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] nwfilter: intantiate filters on loading driver

2018-11-02 Thread John Ferlan
[...]

>>> Looks like reinstatiation was lost in 
>>>
>>> commit 57f5621f464b8df5671cbe5df6bab3cf006981dd
>>> Author: Daniel P. Berrangé 
>>> Date:   Thu Apr 26 18:34:33 2018 +0100
>>>
>>> nwfilter: keep track of active filter bindings
>>>
>>>
>>> nwfilterInstantiateFilter is called from reconnection code.
>>> Before the patch we always instantiate rules, after we do
>>> not because we return early in nwfilterInstantiateFilter because
>>> binding already exist (it is loaded from status).
>>>
>>
>> OK, well that's kind of the start of it, but perhaps f14c37ce4 is also
>> to blame since that's where nwfilterInstantiateFilter is removed to be
>> replaced by virDomainConfNWFilterInstantiate. Although I suppose it can
>> be argued that the former should have been:
>>
>> if (!(obj = *FindByPortDev(...))) {
>> ... code to get @def and @obj...
>> }
>>
>> ret = virNWFilterInstantiateFilter(driver, def);
>> ...
>>
>>
>> Since calling virNWFilterInstantiateFilter during
>> virNWFilterBindingObjListLoadStatus as I suggested below [1] isn't
>> feasible nor does there seem to be some other means to replicate that
>> virNWFilterInstantiateFilter called in nwfilterBindingCreateXML after
>> adding the @def to the bindings other than via the virNWFilterBuildAll,
>> then OK - I'm "convinced" now this is the right fix.
>>
>> Still probably need to adjust the commit message, how about:
>>
>> nwfilter: Instantiate active filter bindings during driver init
>>
>> Commit 57f5621f modified nwfilterInstantiateFilter to detect when
>> a filter binding was already present before attempting to add the
>> new binding and instantiate it.
> 
> Ok
> 
>>
>> When combined with the new virNWFilterBindingObjListLoadAllConfigs
>> logic, which searches for and loads bindings from active domains,
>> but does not instantiate the binding as the nwfilterBindingCreateXML
>> would do once the filter binding was added to the list of all bindings
>> left the possibility that once the code was active the instantiation
>> would not occur when libvirtd was restarted.
> 
> This is a bit hard to follow because nwfilterBindingCreateXML introduced
> later then nwfilterInstantiateFilter being analyzed. And the possibility
> reads likes there is a race.
> 

Yes, this was difficult to describe and why it was "pulled out" of the
first paragraph. As for "timing" or "race" - that's absolutely the key
point. As of this patch though the
/var/run/libvirt/nwfilter-binding/*.xml files wouldn't be created via
the nwfilterBindingCreateXML API since it gets introduced a few patches
later. So the net effect of this patch is I believe technically nothing
beyond setting ourselves up for future failure, but this is what drove
later changes, so I'm fine with blaming it.

As an aside, logically in the series of changes made, this patch came
after c21679fa3f to introduce virNWFilterBindingObjListLoadAllConfigs.

IOW: Commit 57f5621f is only being used to set everything up without
creating some huge and/or hard to follow patch.

> How about:
> 
> As result qemu driver reconnection code on daemon restart skips
> binding instantiation opposite to what it was before. And this instantiation
> was not moved to any other place in this patch thus we got no
> instantiation at all.
> 

However, to me this is too generic especially since qemu driver logic
wasn't changed in this patch.

So, consider as part of the first paragraph and replacement for the
above paragraph:

Additionally, the change to nwfilterStateInitialize to call
virNWFilterBindingObjListLoadAllConfigs (from commit c21679fa3f) to load
active domain filter bindings, but not instantiate them eventually leads
to a problem for the QEMU driver reconnection logic after a daemon
restart where the filter bindings would no longer be instantiated.
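
Schematically, the shape of the fix being discussed is something like the
following in nwfilterStateInitialize(), once the binding objects have been
loaded back from the status files (a sketch, not the literal patch):

    /* Rebuild/instantiate filters for all currently active bindings;
     * passing false means we are not creating new filters, just going
     * through the bindings that were loaded from the status files. */
    if (virNWFilterBuildAll(driver, false) < 0)
        goto error;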

Hopefully this explanation works. When I'm debugging problems I find it
easier when there's more than a simple change occurring to have someone
actually list the API names so that I don't have to guess...

John

FWIW: I'm still at a loss to fully understand why/how a previous
instantiation and set up of the filter bindings would be "lost" on this
restart path. That is, at some point in time the instantiation would
send magic commands to filter packets. Just because libvirtd restarts
doesn't mean those are necessarily lost, are they? IOW, wouldn't this
just be redoing what was already done? Not that it's a problem because
we cannot be guaranteed whether the first instantiation was done when
libvirtd was stopped.

>>
>> Subsequent commit f14c37ce4c replaced the nwfilterInstantiateFilter
>> with virDomainConfNWFilterInstantiate which uses @ignoreExists to
>> detect presence of the filter and still did not restore the filter
>> instantiation call when making the new nwfilter bindings logic active.
>>
>> Thus in order to instantiate any active domain filter, we will call
>> virNWFilterBuildAll with @false to indicate the need to go through
>> all the active bindings 

[libvirt] [PATCH] qemu: Dissolve qemuBuildVhostuserCommandLine in qemuBuildInterfaceCommandLine

2018-11-02 Thread Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1524230

The qemuBuildVhostuserCommandLine function builds the command line for
vhostuser type interfaces. It duplicates some code of the function it
is called from (qemuBuildInterfaceCommandLine) because of the way it is
called. If we merge it into the caller, not only do we save a few
lines, but we also enable checks that we would otherwise have to
duplicate (e.g. QoS availability).

While at it, reorder some VIR_FREE() in
qemuBuildInterfaceCommandLine so that it is easier to track which
variables are freed and which are not.

Signed-off-by: Michal Privoznik 
---
 src/qemu/qemu_command.c | 113 +---
 1 file changed, 48 insertions(+), 65 deletions(-)

diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 6e3ff67660..e338d3172e 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -8224,34 +8224,24 @@ qemuBuildGraphicsCommandLine(virQEMUDriverConfigPtr cfg,
 }
 
 static int
-qemuBuildVhostuserCommandLine(virQEMUDriverPtr driver,
+qemuInterfaceVhostuserConnect(virQEMUDriverPtr driver,
   virLogManagerPtr logManager,
   virSecurityManagerPtr secManager,
   virCommandPtr cmd,
   virDomainDefPtr def,
   virDomainNetDefPtr net,
   virQEMUCapsPtr qemuCaps,
-  unsigned int bootindex)
+  char **chardev)
 {
 virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
-char *chardev = NULL;
-char *netdev = NULL;
-unsigned int queues = net->driver.virtio.queues;
-char *nic = NULL;
 int ret = -1;
 
-if (!qemuDomainSupportsNicdev(def, net)) {
-virReportError(VIR_ERR_INTERNAL_ERROR,
-   "%s", _("Nicdev support unavailable"));
-goto cleanup;
-}
-
 switch ((virDomainChrType)net->data.vhostuser->type) {
 case VIR_DOMAIN_CHR_TYPE_UNIX:
-if (!(chardev = qemuBuildChrChardevStr(logManager, secManager,
-   cmd, cfg, def,
-   net->data.vhostuser,
-   net->info.alias, qemuCaps, 0)))
+if (!(*chardev = qemuBuildChrChardevStr(logManager, secManager,
+cmd, cfg, def,
+net->data.vhostuser,
+net->info.alias, qemuCaps, 0)))
 goto cleanup;
 break;
 
@@ -8274,42 +8264,9 @@ qemuBuildVhostuserCommandLine(virQEMUDriverPtr driver,
 goto cleanup;
 }
 
-if (queues > 1 &&
-!virQEMUCapsGet(qemuCaps, QEMU_CAPS_VHOSTUSER_MULTIQUEUE)) {
-virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
-   _("multi-queue is not supported for vhost-user "
- "with this QEMU binary"));
-goto cleanup;
-}
-
-if (!(netdev = qemuBuildHostNetStr(net, driver,
-   NULL, 0, NULL, 0)))
-goto cleanup;
-
-if (virNetDevOpenvswitchGetVhostuserIfname(net->data.vhostuser->data.nix.path,
-   &net->ifname) < 0)
-goto cleanup;
-
-virCommandAddArg(cmd, "-chardev");
-virCommandAddArg(cmd, chardev);
-
-virCommandAddArg(cmd, "-netdev");
-virCommandAddArg(cmd, netdev);
-
-if (!(nic = qemuBuildNicDevStr(def, net, bootindex,
-   queues, qemuCaps))) {
-goto cleanup;
-}
-
-virCommandAddArgList(cmd, "-device", nic, NULL);
-
 ret = 0;
  cleanup:
 virObjectUnref(cfg);
-VIR_FREE(netdev);
-VIR_FREE(chardev);
-VIR_FREE(nic);
-
 return ret;
 }
 
@@ -8328,7 +8285,9 @@ qemuBuildInterfaceCommandLine(virQEMUDriverPtr driver,
   int **nicindexes)
 {
 int ret = -1;
-char *nic = NULL, *host = NULL;
+char *nic = NULL;
+char *host = NULL;
+char *chardev = NULL;
 int *tapfd = NULL;
 size_t tapfdSize = 0;
 int *vhostfd = NULL;
@@ -8337,6 +8296,7 @@ qemuBuildInterfaceCommandLine(virQEMUDriverPtr driver,
 char **vhostfdName = NULL;
 virDomainNetType actualType = virDomainNetGetActualType(net);
 virNetDevBandwidthPtr actualBandwidth;
+bool requireNicdev = false;
 size_t i;
 
 
@@ -8447,9 +8407,24 @@ qemuBuildInterfaceCommandLine(virQEMUDriverPtr driver,
 break;
 
 case VIR_DOMAIN_NET_TYPE_VHOSTUSER:
-ret = qemuBuildVhostuserCommandLine(driver, logManager, secManager, cmd, def,
-net, qemuCaps, bootindex);
-goto cleanup;
+requireNicdev = true;
+
+if (net->driver.virtio.queues > 1 &&
+!virQEMUCapsGet(qemuCaps, QEMU_CAPS_VHOSTUSER_MULTIQUEUE)) {
+

Re: [libvirt] External disk snapshot paths and the revert operation

2018-11-02 Thread Povilas Kanapickas
On 21/10/2018 21:31, Povilas Kanapickas wrote:
> Hey,
> 
> It seems that there's an issue of how external disk snapshots currently
> work. The snapshot definition xml[1] that is supplied to the API has a
> list of disks each of which specify where the new overlay disks should
> be placed.
> 
> This leads to ambiguities when we want to revert the domain to a
> snapshot with children. Consider the following situation:
> 
> 0. Initial state:
> Domain definition: disk at disk-base.qcow2
> 
> 1. We create a snapshot snap1 placing the overlay at disk-snap1.qcow2
> 
> Domain definition: disk at disk-snap1.qcow2
> Disks:
>   - disk-base.qcow2 (read-only)
>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2)
> 
> Snapshots:
>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>snapshot disk list: disk-snap1.qcow2)
> 
> 2. We create another snapshot snap2 placing the overlay at disk-snap2.qcow2
> 
> VM definition: disk at disk-snap2.qcow2
> Disks:
>   - disk-base.qcow2 (read-only)
>   - disk-snap1.qcow2 (overlay, base: disk-base.qcow2, read-only)
>   - disk-snap2.qcow2 (overlay, base: disk-snap1.qcow2)
> 
> Snapshots:
>   - snap1 (snapshot domain disk list: disk-base.qcow2,
>snapshot disk list: disk-snap1.qcow2)
>   - snap2 (snapshot domain disk list: disk-snap1.qcow2,
>snapshot disk list: disk-snap2.qcow2)
> 
> Now what happens if we want to revert to snap1? We can't use any of the
> paths that snap1 itself references: both disk-base.qcow2 and
> disk-snap1.qcow2 are read-only as part of the backing chain for disks
> used by snap2. Possible options are these:
> 
>  - we can reuse disk-snap2.qcow2 which we no longer need as it contains
> overlay on top of snap2. This is non-optimal because now the path of the
> disk after revert depends on the disk path before it.
> 
>  - we can think of some name with the expectation that the user will
> change it later anyway.
> 
> I think that both of the above are non-optimal because we don't actually
> revert to the exact state we took the snapshot of. The user is
> effectively no longer in control of the disk path after the revert.
> 
> One of possible solutions would be add additional flag during the
> snapshot creation that would change the current behavior. Instead of
> creating a new disk for the overlay we could move the base disk to the
> specified path and create the overlay in its place. The advantages of
> this behavior would be the following:
> 
>  - the path to the current disk used in the domain would not change when
> taking snapshots. This way we could avoid the problem listed above and
> easily revert to any snapshot.
> 
>  - the confusion in naming of the overlay disks would be reduced.
> Currently the path specified when creating snapshot refers to the
> overlay image. Most of the time user does not know what will be changed
> in the domain until the next snapshot. So any path he chooses will
> likely not reflect what will end up in the overlay at the time of the
> next snapshot. If it was possible to specify the path where the current
> disk should be put instead, then the user could just name it after the
> snapshot and it would correctly represent the contents of that disk.
> 
> The major disadvantage of the approach is that it would introduce extra
> complexity in the presently simple interface and also the underlying
> code. I've completely disregarded the case of taking snapshots while the
> domain is running, so there might be hidden complications there as well.
> 
> Personally, I don't mind the current situation too much, but if libvirt
> developers think it's a thing worth fixing I would be willing to work on it.
> 
> Any opinions?
> 

Friendly ping :-)

Regards,
Povilas

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v1 2/7] pcihp: overwrite hotplug handler recursively from the start

2018-11-02 Thread Igor Mammedov
On Fri, 2 Nov 2018 12:43:10 +0100
David Hildenbrand  wrote:

> On 01.11.18 15:10, Igor Mammedov wrote:
> > On Wed, 24 Oct 2018 12:19:25 +0200
> > David Hildenbrand  wrote:
> >   
> >> For now, the hotplug handler is not called for devices that are
> >> being cold plugged. The hotplug handler is setup when the machine
> >> initialization is fully done. Only bridges that were cold plugged are
> >> considered.
> >>
> >> Set the hotplug handler for the root piix bus directly when realizing.
> >> Overwrite the hotplug handler of bridges when hotplugging/coldplugging
> >> them.
> >>
> >> This will now make sure that the ACPI PCI hotplug handler is also called
> >> for cold-plugged devices (also on bridges) and for bridges that were
> >> hotplugged.
> >>
> >> When trying to hotplug a device to a hotplugged bridge, we now correctly
> >> get the error message
> >>  "Unsupported bus. Bus doesn't have property 'acpi-pcihp-bsel' set"
> >> Insted of going via the standard PCI hotplug handler.  
> > Erroring out is probably not ok, since it can break existing setups
> > where SHPC hotplugging to hotplugged bridge was working just fine before.  
> 
> The question is if it actually was supposed (and eventually did) work.
I think it works now, it's QEMU 'ACPI hotplug hack' (which exists for
the sake of Windows) limitation. We weren't able to dynamically add
ACPI description for hotplugged bridge, so it was using native hotplug.
Now theoretically we can load tables dynamically but that, would add
maintenance nightmare (versioned tables) and would be harder to debug.
I'd rather not go that direction and keep current limited version,
suggesting users to use native hotplug if guest is capable.

> If this was the expected behavior (mixing hotplug types), then the
> necessary change to this patch would boil down to checking if the bridge
> it hot or coldplugged.
> 
> > 
> > Marcel/Michael what's your take on this change in behaviour?
> > CCing libvirt in case they are doing this stuff
> >   
> 
> Indeed, it would be nice to know if this was actually supposed to work
> like this (coldplugged bridges using ACPI hotplug and hotplugged bridges
> using SHPC hotplug).
> 
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v1 1/1] qemu_domain, qemu_process: maxCpus check to non-x86 guests

2018-11-02 Thread Daniel Henrique Barboza

Hi. Sorry for the late reply.

On 10/11/18 9:03 PM, John Ferlan wrote:


On 10/4/18 9:14 AM, Daniel Henrique Barboza wrote:

This patch makes two quality of life changes for non-x86 guests. The
first one is a maxCpus validation at qemuDomainDefValidate. The check
is made by the same function used to do that at qemuProcessStartValidateXML,
qemuValidateCpuCount. This ensures that the user doesn't goes over the
board with the maxCpus value when editing the XML, only to be faced with a
runtime error when starting it.

To do that, the following changes were made:

- qemuValidateCpuCount was made public. It was renamed to
qemuProcessValidateCpuCount to be compliant with the other public methods
at qemu_process.h;

- the method signature was slightly adapted to fit the const 'def'
variable used in qemuDomainDefValidate. This change has no downside in
in its original usage at qemuProcessStartValidateXML.

The second QoL change is adding the maxCpus value in the error message
of the now qemuProcessValidateCpuCount. This simple change allows the
user to quickly edit the XML to comply with the acceptable limit without
having to know QEMU internals. x86 guests, that might have been created
prior to the x86 qemuDomainDefValidate maxCpus check code, will also
benefit from this change.

After this patch, this is the expect result running a Power 9 guest with
a maxCpus of 4:

$ ./virsh start dhb
error: Failed to start domain dhb
error: unsupported configuration: Maximum CPUs greater than specified machine type limit 240

And this is the result when trying to edit maxCpus to an invalid value:

$ ./virsh edit dhb
error: unsupported configuration: Maximum CPUs greater than specified machine type limit 240
Failed. Try again? [y,n,i,f,?]:
error: unsupported configuration: Maximum CPUs greater than specified machine type limit 240

Signed-off-by: Daniel Henrique Barboza 
---
  src/qemu/qemu_domain.c  |  5 +
  src/qemu/qemu_process.c | 13 +++--
  src/qemu/qemu_process.h |  3 +++
  3 files changed, 15 insertions(+), 6 deletions(-)


When you write a commit message that says this much, start thinking - I
might need to split this up into multiple patches to reach my final
goal. I think there are 3 patches here:

1. Changing the name from qemuValidateCpuCount to
qemuProcessValidateCpuCount, alter the @def variable name, and add to
qemu_process.h.

2. Alter the error message.

3. The qemu_domain change.


I can split it in 3 patches as you suggested, no problem.



diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 939b2a3da2..5a39b11da7 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -4100,6 +4100,11 @@ qemuDomainDefValidate(const virDomainDef *def,
  }
  }
  
+if (!ARCH_IS_X86(def->os.arch) &&
+    qemuProcessValidateCpuCount(def, qemuCaps) < 0) {
+goto cleanup;
+}
+

So this really is the crux of the fix... To perform this validate during
the define processing, which seems to be a good idea; however, I'm
wondering why you're filtering "!ARCH_IS_X86". That same filter isn't
done at start time. Yes, there is an X86 specific message just before:

 unsupported configuration: more than 255 vCPUs are only supported on
q35-based machine type



My idea here was to keep the x86 verification unchanged, with its own
error/warning message, while adding a vCPU verification for all other
archs. This is why I've used !ARCH_IS_X86.


So, for q35-based machines, they'd have to wait for start to get the
message?  Considering there's lots of roadblocks to define a q35 with >
255 vCPUs, I think we'd be safe.

Still part of me wonders why we don't just move the
qemuProcessStartValidateXML call to qemuProcessValidateCpuCount rather
than having it both places.

I think when it was first added where it was it was done because the
DefValidate processing didn't exist (see [1] for some history). Back
then all we had was PostParse processing and if XML range validation was
done there guests may "disappear" when libvirtd restarted. So adding to
start processing was the only answer.


Should I move qemuProcessStartValidateXML to qemuProcessValidateCpuCount
in the next spin then?




  if (def->nresctrls &&
  def->virtType != VIR_DOMAIN_VIRT_KVM) {
  virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 29b0ba1590..0de5b503d6 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3901,9 +3901,9 @@ qemuProcessSPICEAllocatePorts(virQEMUDriverPtr driver,
  }
  
  
-static int
-qemuValidateCpuCount(virDomainDefPtr def,
- virQEMUCapsPtr qemuCaps)
+int
+qemuProcessValidateCpuCount(const virDomainDef *def,
+virQEMUCapsPtr qemuCaps)

Going back through history commit ff968889 added this logic


  {
  unsigned int maxCpus = virQEMUCapsGetMachineMaxCpus(qemuCaps, def->os.machine);
  
@@ -3914,8 +3914,9 @@ 

Re: [libvirt] [RFC/WIP] [PATCH 0/5] Add support for revert and delete operations to external disk snapshots

2018-11-02 Thread Povilas Kanapickas
On 21/10/2018 19:38, Povilas Kanapickas wrote:
> Hey all,
> 
> Currently libvirt only supports creation of external disk snapshots, but not 
> reversion and deletion which are essential for any serious use of this 
> feature. 
> I've looked into implementing removal and reversion of external disk 
> snapshots 
> and came up with some prototype code that works with my simple test VMs (see 
> attached patches).
> 
> I'd like to discuss about how these features could be implemented properly. 
> As 
> I've never significantly contributed to libvirt yet, I wanted to delay the 
> discussion until I understand the problem space myself so that the discussion
> could be productive.
> 
> My current approach is relatively simple. For snapshot deletion we either
> simply remove the disk or use `qemu-img rebase` to reparent a snapshot on top 
> of the parent of the snapshot that is being deleted. For reversion we delete 
> the current overlay disk and create another that uses the image of the
> snapshot we want to revert to as the backing disk.
> 
> Are the attached patches good in principle? Are there any major blockers 
> aside 
> from lack of tests, code formatting, bugs and so on? Are there any design
> issues which prevent a simple implementation of external disk snapshot 
> support that I didn't see?
> 
> If there aren't significant blockers, my plan would be to continue work on 
> the 
> feature until I have something that could actually be reviewed and possibly
> merged.
> 

Friendly ping :-)

Regards,
Povilas

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] nodedev: Document the udevEventHandleThread

2018-11-02 Thread Erik Skultety
On Fri, Nov 02, 2018 at 01:33:32PM +0100, Andrea Bolognani wrote:
> On Fri, 2018-11-02 at 07:46 -0400, John Ferlan wrote:
> > > > + * NB: Usage of event based socket algorithm causes some issues with
> > > > + * older platforms such as CentOS 6 in which the data socket is opened
> > >
> > > ^Sounds a bit too generic, as an event based algorithm is not forced to 
> > > open a
> > > socket without O_NONBLOCK - just put something like "older platforms' 
> > > libudev
> > > opens sockets without the NONBLOCK flag which might cause issues with 
> > > event
> > > based algorithm" or something alike in there.
> >
> >  * NB: Some older distros, such as CentOS 6, libudev opens sockets
> >  * without the NONBLOCK flag which might cause issues with event
> >  * based algorithm. Although the issue can be mitigated by resetting
> >  * priv->dataReady for each event found; however, the scheduler issues
> >  * would still come into play.
>
> I haven't looked into this at all so I might be missing something
> entirely obvious, but I'd just like to point out that we no longer
> target RHEL 6 / CentOS 6, so any issue that's specific to those
> platforms we no longer have to concern ourselves with.

Right, I pointed that out in my review of the patch which was the impetus for
this one. But John's right here that it's better we document the reasons for
the algorithm used, which might save us from NACKing any further attempts to
change the code to suit old platforms. Actually, we might benefit from the
description ourselves at some point in the future, so I'm fine with that.

Erik

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] nodedev: Document the udevEventHandleThread

2018-11-02 Thread Andrea Bolognani
On Fri, 2018-11-02 at 07:46 -0400, John Ferlan wrote:
> > > + * NB: Usage of event based socket algorithm causes some issues with
> > > + * older platforms such as CentOS 6 in which the data socket is opened
> > 
> > ^Sounds a bit too generic, as an event based algorithm is not forced to 
> > open a
> > socket without O_NONBLOCK - just put something like "older platforms' 
> > libudev
> > opens sockets without the NONBLOCK flag which might cause issues with event
> > based algorithm" or something alike in there.
> 
>  * NB: Some older distros, such as CentOS 6, libudev opens sockets
>  * without the NONBLOCK flag which might cause issues with event
>  * based algorithm. Although the issue can be mitigated by resetting
>  * priv->dataReady for each event found; however, the scheduler issues
>  * would still come into play.

I haven't looked into this at all so I might be missing something
entirely obvious, but I'd just like to point out that we no longer
target RHEL 6 / CentOS 6, so any issue that's specific to those
platforms we no longer have to concern ourselves with.

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 3/3] qemu: support metadata-cache-size for blockdev

2018-11-02 Thread Kevin Wolf
Am 02.11.2018 um 12:37 hat Nikolay Shirokovskiy geschrieben:
> On 02.11.2018 13:23, Kevin Wolf wrote:
> > Am 01.11.2018 um 12:32 hat Nikolay Shirokovskiy geschrieben:
> >> Just set l2-cache-size to INT64_MAX for all format nodes of
> >> qcow2 type in block node graph.
> >>
> >> AFAIK this is sane because *actual* cache size depends on size
> >> of data being referenced in image and thus the total size of
> >> all cache sizes for all images in disk backing chain will not
> >> exceed the cache size that covers just one full image as in
> >> case of no backing chain.
> > 
> > This is not quite correct.
> > 
> > Starting from qemu 3.1, INT64_MAX will add a cache that covers the whole
> > image. Memory is only used if a cache entry is actually used, so if you
> > never access the backing file, it doesn't really use any memory.
> > However, the granularity isn't a single cluster here, but L2 tables. So
> > if some L2 table contains one cluster in the overlay and another cluster
> > in the backing file, and both are accessed, the L2 table will be cached
> > in both the overlay and the backing file.
> 
> So we can end up N times bigger memory usage for L2 cache in case
> of backing chain comparing to single image?

In the worst case, yes.
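
(Rough numbers for scale, assuming the default 64 KiB cluster size: each
L2 entry is 8 bytes, so fully caching an image's L2 tables costs about
disk_size * 8 / cluster_size of memory -- roughly 12.5 MiB for a 100 GiB
image. With a chain of N qcow2 nodes all opened with l2-cache-size set to
INT64_MAX, the worst case is therefore about N times that, if the guest
touches clusters spread across every layer of the chain.)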

Kevin

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] nodedev: Document the udevEventHandleThread

2018-11-02 Thread Erik Skultety
On Fri, Nov 02, 2018 at 07:46:35AM -0400, John Ferlan wrote:
>
>
> On 11/2/18 3:48 AM, Erik Skultety wrote:
> > On Thu, Nov 01, 2018 at 01:48:39PM -0400, John Ferlan wrote:
> >> Commit cdbe1332 neglected to document the API. So let's add some
> >> details about the algorithm and why it was used to help future
> >> readers understand the issues encountered. Based largely off a
> >> description provided by Erik Skultety .
> >
> > Oh, I missed ^this last sentence. Since you're blaming the commit already 
> > and
> > the description based on a mailing list thread will not be part of the log, 
> > you
> > should IMHO drop it.
> >
> >>
> >> NB: Management of the processing udev device notification is a
> >> delicate balance between the udev process, the scheduler, and when
> >> exactly the data from/for the socket is received. The balance is
> >> particularly important for environments when multiple devices are
> >> added into the system more or less simultaneously such as is done
> >> for mdev.
> >
> > "or SRIOV"
> >
> > Note: I can't really remember what I used for reproducer, mdevs or a SRIOV
> > card.
> >
> >> In these cases scheduler blocking on the udev recv() call
> >
> > I don't think scheduler is blocking anywhere, it's rather old libudev 
> > blocking
> > on the recv call ;)
> >
> >> is more noticeable. It's expected that future devices will follow
> >> similar algorithms. Even though the algorithm does present some
> >> challenges for older OS's (such as Centos 6), trying to rewrite
> >> the algorithm to fit both models would be more complex and involve
> >> pulling the monitor object out of the private data lockable object
> >> and would need to be guarded by a separate lock. Devising such an
> >> algorithm to work around issues with older OS's at the expense
> >> of more modern OS algorithms in newer event processing code may
> >> result in unexpected issues, so the choice is to encourage use
> >> of newer OS's with newer udev event processing code.
> >>
> >> Signed-off-by: John Ferlan 
> >> ---
> >>
> >>  v1: https://www.redhat.com/archives/libvir-list/2018-October/msg01053.html
> >>
> >>  Changes are from code review with some minor tweaks/editing as I
> >>  went along. Mistakes are my own!
> >>
> >>  src/node_device/node_device_udev.c | 28 
> >>  1 file changed, 28 insertions(+)
> >>
> >> diff --git a/src/node_device/node_device_udev.c 
> >> b/src/node_device/node_device_udev.c
> >> index 22897591de..f2c2299d4d 100644
> >> --- a/src/node_device/node_device_udev.c
> >> +++ b/src/node_device/node_device_udev.c
> >> @@ -1591,6 +1591,26 @@ udevEventMonitorSanityCheck(udevEventDataPtr priv,
> >>  }
> >>
> >>
> >> +/**
> >> + * udevEventHandleThread
> >> + * @opaque: unused
> >> + *
> >> + * Thread to handle the udevEventHandleCallback processing when udev
> >> + * tells us there's a device change for us (add, modify, delete, etc).
> >> + *
> >> + * Once notified there is data to be processed, the actual @device
> >> + * data retrieval by libudev may be delayed due to how threads are
> >> + * scheduled. In fact, the event loop could be scheduled earlier than
> >> + * the handler thread, thus potentially emitting the very same event
> >> + * the handler thread is currently trying to process, simply because
> >> + * the data hadn't been retrieved from the socket.
> >> + *
> >
> > ...
> >
> >> + * NB: Usage of event based socket algorithm causes some issues with
> >> + * older platforms such as CentOS 6 in which the data socket is opened
> >
> > ^Sounds a bit too generic, as an event based algorithm is not forced to open a
> > socket without O_NONBLOCK - just put something like "older platforms' libudev
> > opens sockets without the NONBLOCK flag which might cause issues with event
> > based algorithm" or something alike in there.
> >
>
>  * NB: On some older distros, such as CentOS 6, libudev opens sockets
>  * without the NONBLOCK flag, which might cause issues with the event
>  * based algorithm. Although the issue can be mitigated by resetting
>  * priv->dataReady for each event found, the scheduler issues
>  * would still come into play.

Sounds good...

Erik
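
For readers following this thread, a minimal self-contained sketch of the
drain-until-EAGAIN pattern the patch comment describes, using plain libudev
only (the priv->dataReady bookkeeping and the locking from the actual patch
are deliberately left out):

#include <errno.h>
#include <libudev.h>

/* Illustrative only: drain all pending events from a non-blocking udev
 * monitor socket, going back to sleep only once recv() would block. */
static void
drainUdevMonitor(struct udev_monitor *mon)
{
    struct udev_device *device;

    for (;;) {
        errno = 0;
        device = udev_monitor_receive_device(mon);
        if (!device) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;      /* nothing left; wait for the next wakeup */
            if (errno == EINTR)
                continue;   /* interrupted, retry */
            break;          /* real error; handled elsewhere */
        }

        /* process the device (add/change/remove), then drop the reference */
        udev_device_unref(device);
    }
}

Compile with -ludev; error handling beyond the EAGAIN/EWOULDBLOCK check is
intentionally minimal.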

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH v2 0/3] Add libvirt-ocaml

2018-11-02 Thread Andrea Bolognani
On Fri, 2018-11-02 at 12:21 +0100, Pino Toscano wrote:
> > Either way, I'll have to apply changes to the live CI
> > environment as you don't have any permissions there...
> 
> Feel free to start the update magic, please ;)

Done.

  https://ci.centos.org/view/libvirt/job/libvirt-ocaml-build/

Note that it failed to build on FreeBSD 10, so you'll probably want
to look into that.

Once you've done that, please also consider

  * adding an autogen.sh script to libvirt-ocaml, even if it ends
up just calling autoreconf, and

  * introducing some sort of test suite that can be run with 'make
check'

as suggested during earlier discussion.

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 3/3] qemu: support metadata-cache-size for blockdev

2018-11-02 Thread Nikolay Shirokovskiy



On 02.11.2018 13:23, Kevin Wolf wrote:
> Am 01.11.2018 um 12:32 hat Nikolay Shirokovskiy geschrieben:
>> Just set l2-cache-size to INT64_MAX for all format nodes of
>> qcow2 type in block node graph.
>>
>> AFAIK this is sane because *actual* cache size depends on size
>> of data being referenced in image and thus the total size of
>> all cache sizes for all images in disk backing chain will not
>> exceed the cache size that covers just one full image as in
>> case of no backing chain.
> 
> This is not quite correct.
> 
> Starting from qemu 3.1, INT64_MAX will add a cache that covers the whole
> image. Memory is only used if a cache entry is actually used, so if you
> never access the backing file, it doesn't really use any memory.
> However, the granularity isn't a single cluster here, but L2 tables. So
> if some L2 table contains one cluster in the overlay and another cluster
> in the backing file, and both are accessed, the L2 table will be cached
> in both the overlay and the backing file.

So we can end up with N times more memory used for the L2 cache in the case
of a backing chain, compared to a single image?

> 
> More importantly, before 3.1, I think QEMU will actually try to allocate
> 2^63-1 bytes for the cache and fail to open the image. So we can't do
> this unconditionally.
> 
> (Is this patch tested with a real QEMU?)
> 

Testing was quite minimal: upstream, and 2.10 but with the recent patches that
make setting INT64_MAX possible, I guess.

OK, then we have to use a version check in the capabilities instead of a feature test.

Nikolay
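
A minimal sketch of what such a version gate could look like
(virQEMUCapsGetVersion() is an existing helper that encodes the QEMU version
as major * 1000000 + minor * 1000 + micro; the helper name below and where it
would be called from are assumptions, not part of this series):

/* Sketch only: gate the "maximum" metadata cache policy on the QEMU version,
 * since only QEMU >= 3.1 treats l2-cache-size=INT64_MAX as "cover the whole
 * image" instead of trying to allocate that much and failing. */
static bool
qemuBlockL2CacheMaximumSupported(virQEMUCapsPtr qemuCaps)
{
    return virQEMUCapsGetVersion(qemuCaps) >= 3001000;
}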

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v1 2/7] pcihp: overwrite hotplug handler recursively from the start

2018-11-02 Thread David Hildenbrand
On 01.11.18 15:10, Igor Mammedov wrote:
> On Wed, 24 Oct 2018 12:19:25 +0200
> David Hildenbrand  wrote:
> 
>> For now, the hotplug handler is not called for devices that are
>> being cold plugged. The hotplug handler is setup when the machine
>> initialization is fully done. Only bridges that were cold plugged are
>> considered.
>>
>> Set the hotplug handler for the root piix bus directly when realizing.
>> Overwrite the hotplug handler of bridges when hotplugging/coldplugging
>> them.
>>
>> This will now make sure that the ACPI PCI hotplug handler is also called
>> for cold-plugged devices (also on bridges) and for bridges that were
>> hotplugged.
>>
>> When trying to hotplug a device to a hotplugged bridge, we now correctly
>> get the error message
>>  "Unsupported bus. Bus doesn't have property 'acpi-pcihp-bsel' set"
>> Instead of going via the standard PCI hotplug handler.
> Erroring out is probably not ok, since it can break existing setups
> where SHPC hotplugging to hotplugged bridge was working just fine before.

The question is whether it actually was supposed to (and eventually did) work.

If this was the expected behavior (mixing hotplug types), then the
necessary change to this patch would boil down to checking if the bridge
is hot or coldplugged.

> 
> Marcel/Michael what's your take on this change in behaviour?
> CCing libvirt in case they are doing this stuff
> 

Indeed, it would be nice to know if this was actually supposed to work
like this (coldplugged bridges using ACPI hotplug and hotplugged bridges
using SHPC hotplug).


-- 

Thanks,

David / dhildenb

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] nodedev: Document the udevEventHandleThread

2018-11-02 Thread John Ferlan



On 11/2/18 3:48 AM, Erik Skultety wrote:
> On Thu, Nov 01, 2018 at 01:48:39PM -0400, John Ferlan wrote:
>> Commit cdbe1332 neglected to document the API. So let's add some
>> details about the algorithm and why it was used to help future
>> readers understand the issues encountered. Based largely off a
>> description provided by Erik Skultety .
> 
> Oh, I missed ^this last sentence. Since you're blaming the commit already and
> the description based on a mailing list thread will not be part of the log, you
> should IMHO drop it.
> 
>>
>> NB: Management of the processing udev device notification is a
>> delicate balance between the udev process, the scheduler, and when
>> exactly the data from/for the socket is received. The balance is
>> particularly important for environments when multiple devices are
>> added into the system more or less simultaneously such as is done
>> for mdev.
> 
> "or SRIOV"
> 
> Note: I can't really remember what I used for reproducer, mdevs or a SRIOV
> card.
> 
>> In these cases scheduler blocking on the udev recv() call
> 
> I don't think scheduler is blocking anywhere, it's rather old libudev blocking
> on the recv call ;)
> 
>> is more noticeable. It's expected that future devices will follow
>> similar algorithms. Even though the algorithm does present some
>> challenges for older OS's (such as Centos 6), trying to rewrite
>> the algorithm to fit both models would be more complex and involve
>> pulling the monitor object out of the private data lockable object
>> and would need to be guarded by a separate lock. Devising such an
>> algorithm to work around issues with older OS's at the expense
>> of more modern OS algorithms in newer event processing code may
>> result in unexpected issues, so the choice is to encourage use
>> of newer OS's with newer udev event processing code.
>>
>> Signed-off-by: John Ferlan 
>> ---
>>
>>  v1: https://www.redhat.com/archives/libvir-list/2018-October/msg01053.html
>>
>>  Changes are from code review with some minor tweaks/editing as I
>>  went along. Mistakes are my own!
>>
>>  src/node_device/node_device_udev.c | 28 
>>  1 file changed, 28 insertions(+)
>>
>> diff --git a/src/node_device/node_device_udev.c 
>> b/src/node_device/node_device_udev.c
>> index 22897591de..f2c2299d4d 100644
>> --- a/src/node_device/node_device_udev.c
>> +++ b/src/node_device/node_device_udev.c
>> @@ -1591,6 +1591,26 @@ udevEventMonitorSanityCheck(udevEventDataPtr priv,
>>  }
>>
>>
>> +/**
>> + * udevEventHandleThread
>> + * @opaque: unused
>> + *
>> + * Thread to handle the udevEventHandleCallback processing when udev
>> + * tells us there's a device change for us (add, modify, delete, etc).
>> + *
>> + * Once notified there is data to be processed, the actual @device
>> + * data retrieval by libudev may be delayed due to how threads are
>> + * scheduled. In fact, the event loop could be scheduled earlier than
>> + * the handler thread, thus potentially emitting the very same event
>> + * the handler thread is currently trying to process, simply because
>> + * the data hadn't been retrieved from the socket.
>> + *
> 
> ...
> 
>> + * NB: Usage of event based socket algorithm causes some issues with
>> + * older platforms such as CentOS 6 in which the data socket is opened
> 
> ^Sounds a bit too generic, as an event based algorithm is not forced to open a
> socket without O_NONBLOCK - just put something like "older platforms' libudev
> opens sockets without the NONBLOCK flag which might cause issues with event
> based algorithm" or something alike in there.
> 

 * NB: On some older distros, such as CentOS 6, libudev opens sockets
 * without the NONBLOCK flag, which might cause issues with the event
 * based algorithm. Although the issue can be mitigated by resetting
 * priv->dataReady for each event found, the scheduler issues
 * would still come into play.

Should I drop the "Although the issue..." as well? IDC... mainly trying
to avoid the "trap" of patches looking to fix older distros...

Tks -

John


> otherwise looks okay, I don't think a v3 is necessary:
> Reviewed-by: Erik Skultety 
> 
>> + * without the NONBLOCK bit set. That can be avoided by moving the reset
>> + * priv->dataReady to just after the udev_monitor_receive_device; however,
>> + * scheduler issues would still come into play.
>> + */
>>  static void
>>  udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>>  {
>> @@ -1637,6 +1657,9 @@ udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>>  return;
>>  }
>>
>> +/* Trying to move the reset of the @priv->dataReady flag to
>> + * after the udev_monitor_receive_device wouldn't help much
>> + * due to event mgmt and scheduler timing. */
>>  virObjectLock(priv);
>>  priv->dataReady = false;
>>  virObjectUnlock(priv);
>> @@ -1646,6 +1669,11 @@ udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>>
>>  

Re: [libvirt] [PATCH 0/3] add disk driver metadata_cache_size option

2018-11-02 Thread Nikolay Shirokovskiy



On 02.11.2018 13:11, Kevin Wolf wrote:
> Am 01.11.2018 um 12:32 hat Nikolay Shirokovskiy geschrieben:
>> Hi, all.
>>
>> This is a patch series after offlist agreement on introducing
>> metadata-cache-size option for disks. The options itself is described in 2nd
>> patch of the series.
>>
>> There have been plenty of attempts to add an option to set qcow2 metadata cache
>> sizes in the past; see [1] [2] to name recent ones that have comments I tried to
>> address or work I reused to some extent.
>>
>> I would like to address Peter's comment in [1] that this option is the image's
>> option rather than the disk's here. While this makes sense if we set a specific
>> cache value that covers the disk only partially, here, when we have the maximum
>> policy that covers the whole disk, it makes sense to set the same value for all
>> images. The set value is only an upper limit on the cache size and thus the cache
>> for every image will grow proportionally to the image data size "visible from top"
>> eventually, I guess. And these sizes are changing during the guest lifetime - thus
>> we need to set the whole-disk limit for every image for the 'maximum' effect.
>>
>> On the other hand, if we add policies different from 'maximum', it may require a
>> per-image option. Well, I can't imagine a better name than 'driver' for a backing
>> chain element that would carry a cache size attribute. And driver is already
>> used for the top image, so I leave it this way.
>>
>>   KNOWN ISSUES
>>
>> 1. when using -driver to configure disks in command line cache size does not
>>get propagated thru backing chain
>>
>> 2. when making an external disk snapshot the cache size sneaks into the backing
>>    file filename and thus the cache size for the backing image gets remembered there
>>
>> 3. blockcommit after such a snapshot will not work because libvirt is not ready
>>    for the backing file name being reported as json sometimes
>>
>> Looks like 1 can be addressed in qemu.
> 
> I don't think we want to add inheritance of driver-specific options.
> Option inheritance is already a bit messy with options that every driver
> understands.
> 
> And -drive is the obsolete interface anyway, when you use -blockdev you
> specify all nodes explicitly.
> 
>> The reason for behaviour in 2 I do not understand honestly.
> 
> QEMU doesn't recognise that the cache size option doesn't affect which
> data you're accessing. Max had a series that should probably fix it.
> Maybe we should finally get it merged.
> 
> Anyway, I suppose if you use libvirt in -blockdev mode, it doesn't make
> a difference anyway, because libvirt should then manually call
> blockdev-create for the overlay and therefore specify the backing file
> string explicitly.
> 
> Can you confirm this in practice?

Yes, there is no problem when using -blockdev. The cache size will be set
explicitly for every qcow2 format node. The issue is only with -drive:
when you, for example, start a VM with the 'maximum' policy, make a snapshot,
destroy the VM, change the policy to default and then start the VM again, it is
expected that, as the cache size is not specified, it is set to qemu's
default value; but it is not for the backing file, because the cache size is
recorded in the filename.

So if, due to the inheritance issue, we won't use this policy until
libvirt switches to blockdev, then this point does not matter.

Nikolay

> 
>> And 3 will be naturally addressed after blockjobs start working with the
>> blockdev configuration, I guess.
> 
> Hopefully (Peter?), but depending on 2. it might not even be necessary
> if libvirt doesn't explicitly store json: pseudo-filenames.
> 
> Kevin
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [jenkins-ci PATCH v2 0/3] Add libvirt-ocaml

2018-11-02 Thread Pino Toscano
On Friday, 2 November 2018 12:08:22 CET Andrea Bolognani wrote:
> On Mon, 2018-10-22 at 19:02 +0200, Pino Toscano wrote:
> > Now that the libvirt-ocaml repository is fixed, let's build it in CI.
> > 
> > Changes from v1 to v2:
> > - split according to Andrea's hints
> > - removed --with-libvirt as argument for configure, no more needed
> >   after upstream changes
> > 
> > Pino Toscano (3):
> >   guests: add mappings for OCaml components
> >   guests: pull dependencies for libvirt-ocaml
> >   Add libvirt-ocaml project
> 
> Reviewed-by: Andrea Bolognani 
> 
> Do you plan on pushing these yourself, or do you want me to take
> care of it?

I just pushed them, with Erik's and yours R-b.

> Either way, I'll have to apply changes to the live CI
> environment as you don't have any permissions there...

Feel free to start the update magic, please ;)

Thanks,
-- 
Pino Toscano

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [jenkins-ci PATCH v2 0/3] Add libvirt-ocaml

2018-11-02 Thread Andrea Bolognani
On Mon, 2018-10-22 at 19:02 +0200, Pino Toscano wrote:
> Now that the libvirt-ocaml repository is fixed, let's build it in CI.
> 
> Changes from v1 to v2:
> - split according to Andrea's hints
> - removed --with-libvirt as argument for configure, no more needed
>   after upstream changes
> 
> Pino Toscano (3):
>   guests: add mappings for OCaml components
>   guests: pull dependencies for libvirt-ocaml
>   Add libvirt-ocaml project

Reviewed-by: Andrea Bolognani 

Do you plan on pushing these yourself, or do you want me to take
care of it? Either way, I'll have to apply changes to the live CI
environment as you don't have any permissions there...

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 3/3] qemu: support metadata-cache-size for blockdev

2018-11-02 Thread Kevin Wolf
Am 01.11.2018 um 12:32 hat Nikolay Shirokovskiy geschrieben:
> Just set l2-cache-size to INT64_MAX for all format nodes of
> qcow2 type in block node graph.
> 
> AFAIK this is sane because *actual* cache size depends on size
> of data being referenced in image and thus the total size of
> all cache sizes for all images in disk backing chain will not
> exceed the cache size that covers just one full image as in
> case of no backing chain.

This is not quite correct.

Starting from qemu 3.1, INT64_MAX will add a cache that covers the whole
image. Memory is only used if a cache entry is actually used, so if you
never access the backing file, it doesn't really use any memory.
However, the granularity isn't a single cluster here, but L2 tables. So
if some L2 table contains one cluster in the overlay and another cluster
in the backing file, and both are accessed, the L2 table will be cached
in both the overlay and the backing file.

More importantly, before 3.1, I think QEMU will actually try to allocate
2^63-1 bytes for the cache and fail to open the image. So we can't do
this unconditionally.

(Is this patch tested with a real QEMU?)

Kevin

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 0/3] add disk driver metadata_cache_size option

2018-11-02 Thread Kevin Wolf
Am 01.11.2018 um 12:32 hat Nikolay Shirokovskiy geschrieben:
> Hi, all.
> 
> This is a patch series after offlist agreement on introducing
> metadata-cache-size option for disks. The options itself is described in 2nd
> patch of the series.
> 
> There have been plenty of attempts to add an option to set qcow2 metadata cache
> sizes in the past; see [1] [2] to name recent ones that have comments I tried to
> address or work I reused to some extent.
> 
> I would like to address Peter's comment in [1] that this option is the image's
> option rather than the disk's here. While this makes sense if we set a specific
> cache value that covers the disk only partially, here, when we have the maximum
> policy that covers the whole disk, it makes sense to set the same value for all
> images. The set value is only an upper limit on the cache size and thus the cache
> for every image will grow proportionally to the image data size "visible from top"
> eventually, I guess. And these sizes are changing during the guest lifetime - thus
> we need to set the whole-disk limit for every image for the 'maximum' effect.
> 
> On the other hand, if we add policies different from 'maximum', it may require a
> per-image option. Well, I can't imagine a better name than 'driver' for a backing
> chain element that would carry a cache size attribute. And driver is already
> used for the top image, so I leave it this way.
> 
>   KNOWN ISSUES
> 
> 1. when using -driver to configure disks in command line cache size does not
>get propagated thru backing chain
> 
> 2. when making an external disk snapshot the cache size sneaks into the backing
>    file filename and thus the cache size for the backing image gets remembered there
> 
> 3. blockcommit after such a snapshot will not work because libvirt is not ready
>    for the backing file name being reported as json sometimes
> 
> Looks like 1 can be addressed in qemu.

I don't think we want to add inheritance of driver-specific options.
Option inheritance is already a bit messy with options that every driver
understands.

And -drive is the obsolete interface anyway, when you use -blockdev you
specify all nodes explicitly.

> The reason for behaviour in 2 I do not understand honestly.

QEMU doesn't recognise that the cache size option doesn't affect which
data you're accessing. Max had a series that should probably fix it.
Maybe we should finally get it merged.

Anyway, I suppose if you use libvirt in -blockdev mode, it doesn't make
a difference anyway, because libvirt should then manually call
blockdev-create for the overlay and therefore specify the backing file
string explicitly.

Can you confirm this in practice?

> And 3 will be naturally addressed after blockjobs start working with the
> blockdev configuration, I guess.

Hopefully (Peter?), but depending on 2. it might not even be necessary
if libvirt doesn't explicitly store json: pseudo-filenames.

Kevin

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 1/3] qemu: pass stop reason from qemuProcessStopCPUs to stop handler

2018-11-02 Thread Nikolay Shirokovskiy
Hi, John!

Looks like this series is stuck somehow even though there is almost 100%
agreement.


On 17.10.2018 11:41, Nikolay Shirokovskiy wrote:
> 
> 
> On 16.10.2018 21:40, John Ferlan wrote:
>>
>> $SUBJ:
>>
>> s/pass/Pass
>>
>> On 10/10/18 4:04 AM, Nikolay Shirokovskiy wrote:
>>> We send duplicate suspended lifecycle events on qemu process stop in several
>>> places. The reason is stop event handler always send suspended event and
>>> we addidionally send same event but with more presise reason after call
>>
>> *additionally
>> *precise
>>
>>> to qemuProcessStopCPUs. Domain state change is also duplicated.
>>>
>>> Let's change domain state and send event only in handler. For this
>>> purpuse first let's pass state change reason to event handler (event
>>
>> *purpose
>>
>>> reason is deducible from it).
>>>
>>> Inspired by similar patch for resume: 5dab984ed
>>> "qemu: Pass running reason to RESUME event handler".
>>>
>>
>> In any case, I think the above was a bit more appropriate for the cover
>> since it details a few things being altered in the 3rd patch of the
>> series. So, maybe this could change to:
>>
>> Similar to commit 5dab984ed which saves and passes the running reason to
>> the RESUME event handler, during qemuProcessStopCPUs let's save and pass
>> the pause reason in the domain private data so that the STOP event
>> handler can use it.
>>
>>> Signed-off-by: Nikolay Shirokovskiy 
>>> ---
>>>  src/qemu/qemu_domain.h  |  4 
>>>  src/qemu/qemu_process.c | 15 ---
>>>  2 files changed, 16 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
>>> index 80bd4bd..380ea14 100644
>>> --- a/src/qemu/qemu_domain.h
>>> +++ b/src/qemu/qemu_domain.h
>>> @@ -370,6 +370,10 @@ struct _qemuDomainObjPrivate {
>>>  /* qemuProcessStartCPUs stores the reason for starting vCPUs here for 
>>> the
>>>   * RESUME event handler to use it */
>>>  virDomainRunningReason runningReason;
>>> +
>>> +/* qemuProcessStopCPUs stores the reason for starting vCPUs here for 
>>> the
>>
>> s/starting/pausing/
>>
>> too much copy-pasta
>>
>> FWIW: Similar to the comment I made to Jirka for his series, I assume
>> this STOP processing would have the similar issue with the event going
>> to the old libvirtd and thus not needing to save/restore via XML state
>> processing. For context see:
>>
>> https://www.redhat.com/archives/libvir-list/2018-September/msg01145.html
>>
>>> + * STOP event handler to use it */
>>> +virDomainPausedReason pausedReason;
>>>  };
>>>  
>>
>> With a couple of adjustments,
>>
>> Reviewed-by: John Ferlan 
>>
>> John
>>
>> I can make the adjustments so that you don't need to send another series
>> - just need your acknowledgment for that.
>>
> 
> I'm ok with your changes
> 
> Nikolay
> 
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
> 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] nwfilter: instantiate filters on loading driver

2018-11-02 Thread Nikolay Shirokovskiy


On 02.11.2018 00:50, John Ferlan wrote:
> 
> 
> On 11/1/18 3:26 AM, Nikolay Shirokovskiy wrote:
>>
>>
>> On 01.11.2018 03:48, John Ferlan wrote:
>>>
>>>
>>> On 10/31/18 10:41 AM, Nikolay Shirokovskiy wrote:
 On 31.10.2018 16:14, John Ferlan wrote:
>
>
> On 10/30/18 3:24 AM, Nikolay Shirokovskiy wrote:
>>
>>
>> On 29.10.2018 22:37, John Ferlan wrote:
>>>
>>>
>>> On 10/15/18 4:26 AM, Nikolay Shirokovskiy wrote:
 Before using filters binding filters instantiation was done by hypervisors
 drivers initialization code (qemu was the only such hypervisor). Now qemu
 reconnection done supposes it should be done by nwfilter driver probably.
 Let's add this missing step.

 Signed-off-by: Nikolay Shirokovskiy 
 ---
  src/nwfilter/nwfilter_driver.c | 3 +++
  1 file changed, 3 insertions(+)

>>>
>>> If there's research you've done where the instantiation was done before
>>> introduction of the nwfilter bindings that would be really helpful...
>>>
>>> I found that virNWFilterBuildAll was introduced in commit 3df907bfff.
>>> There were 2 callers for it:
>>>
>>>1. virNWFilterTriggerRebuildImpl
>>>2. nwfilterStateReload
>>>
>>> The former called as part of the virNWFilterConfLayerInit callback
>>> during nwfilterStateInitialize (about 50 lines earlier).
>
> First off let me say you certainly find a lot of interesting bugs/issues
> that are complex and that's good. Often times I wonder how you trip
> across things or how to quantify what you've found. Some are simpler
> than others. This one on the surface would seem to be simple, but I keep
> wondering is there a downside.

 The issue is related to my recent posts:

 [1] [RFC] Faster libvirtd restart with nwfilter rules
 https://www.redhat.com/archives/libvir-list/2018-September/msg01206.html
 which continues here:
 https://www.redhat.com/archives/libvir-list/2018-October/msg00657.html

 In short, if there are lots of VMs with filters then libvirtd restart takes a lot
 of time during which libvirtd is unresponsive. By the way, the issue is found in
 libvirt versions 3.9.0 and earlier (don't know about newer versions; Virtuozzo is
 based on 3.9.0 now, just like CentOS 7 :) )
>>>
>>> So many different issues - trying to focus on just the one for this
>>> particular patch.
>>>

 And attempts to fix it:

 [2] [PATCH RFC 0/4] nwfilter: don't reinstantiate filters if they are not changed
 https://www.redhat.com/archives/libvir-list/2018-October/msg00904.html

 [3] [PATCH v2 0/2] nwfilter: don't reinstantiate rules if there is no need to
 https://www.redhat.com/archives/libvir-list/2018-October/msg01317.html

 And the reason I started v2 is that Laine mentioned it is important to
 reinstantiate rules on restart (v1 does not care if somebody messed with the tables).

>>>
>>> I've seen the patches and even read some briefly, but still need to take
>>> this particular patch as separate from those.
>>>
 And I discovered that upstream version stops reinstantiating rules at all
 after introducing filters bindings. And this functionality is old:
>>>
>>> So the key is "when" did that happen?  That is can you pinpoint or
>>> bisect a time in history where the filters were instantiated? And by
>>> instantiated what exactly (call) are you referencing?
>>
>> The below commit is the one. It adds instantiating filters on
>> libvirtd restart. By instantiating I mean that firewall rules
>> correspondent to filters are actually get [re]added to iptables etc.
>>
>>>

 commit cf6f8b9a9720fe5323a84e690de9fbf8ba41f6ac
 Author: Stefan Berger 
 Date:   Mon Aug 16 12:59:54 2010 -0400

 nwfilter: extend nwfilter reload support

>>>
>>> Yes, nwfilter is old, brittle, and in need of attention. Not sure anyone
>>> really wants to take on the task though! I just realized I never got the
>>> common object code implemented there. Mostly because of lock issues.
>>> Suffice to say there's more "interesting" changes in the nwfilter space
>>> being discussed internally.
>>>
>
> The nwfilter processing is kindly said rather strange, complex, and to a
> degree fragile. Thus when patches come along they are met with greater
> scrutiny. From just reading the commit message here I don't get a sense
> for the problem and why the solution fixes it. So I'm left to try and
> research myself and ask a lot of questions.
>
> BTW, some of the detail provided in this response is really useful as
> either part of a cover or after the --- in the single patch. I would
> think you'd "know" when you've done lots of research into a problem that
> providing reviews more than a mere hint would be useful! For 

[libvirt] RFC: VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA is not functional?

2018-11-02 Thread Han Han
Hello,
I found that snapshot APIs (like virDomainSnapshotNum) invoked with
VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA will return 0 even when there are internal
no-metadata snapshots in the domain.

Then I found that the reason is in virDomainSnapshotObjListGetNames():
    /* If this common code is being used, we assume that all snapshots
     * have metadata, and thus can handle METADATA up front as an
     * all-or-none filter.  XXX This might not always be true, if we
     * add the ability to track qcow2 internal snapshots without the
     * use of metadata. */
    if ((data.flags & VIR_DOMAIN_SNAPSHOT_FILTERS_METADATA) ==
        VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA)
        return 0;

So currently, with VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA, no snapshot will
be returned.

I checked the commit. It is really old (6478ec1673a, Eric Blake,
2012-08-13 18:09:12 -0600). I guess it was initially designed with listing
internal snapshots without metadata in mind.

However, internal snapshots now seem to be a lower priority than external
snapshots.
Do we have a plan to implement this functionality? Since the APIs invoked with this
flag don't return the **right** result, if the functionality behind
VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA will not be implemented, how will we deal
with the flag? Remove it from the code OR update the docs to mark it as
unsupported?


-- 
Best regards,
---
Han Han
Quality Engineer
Redhat.

Email: h...@redhat.com
Phone: +861065339333
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [PATCH v2] nodedev: Document the udevEventHandleThread

2018-11-02 Thread Erik Skultety
On Thu, Nov 01, 2018 at 01:48:39PM -0400, John Ferlan wrote:
> Commit cdbe1332 neglected to document the API. So let's add some
> details about the algorithm and why it was used to help future
> readers understand the issues encountered. Based largely off a
> description provided by Erik Skultety .

Oh, I missed ^this last sentence. Since you're blaming the commit already and
the description based on a mailing list thread will not be part of the log, you
should IMHO drop it.

>
> NB: Management of the processing udev device notification is a
> delicate balance between the udev process, the scheduler, and when
> exactly the data from/for the socket is received. The balance is
> particularly important for environments when multiple devices are
> added into the system more or less simultaneously such as is done
> for mdev.

"or SRIOV"

Note: I can't really remember what I used for reproducer, mdevs or a SRIOV
card.

> In these cases scheduler blocking on the udev recv() call

I don't think scheduler is blocking anywhere, it's rather old libudev blocking
on the recv call ;)

> is more noticeable. It's expected that future devices will follow
> similar algorithms. Even though the algorithm does present some
> challenges for older OS's (such as Centos 6), trying to rewrite
> the algorithm to fit both models would be more complex and involve
> pulling the monitor object out of the private data lockable object
> and would need to be guarded by a separate lock. Devising such an
> algorithm to work around issues with older OS's at the expense
> of more modern OS algorithms in newer event processing code may
> result in unexpected issues, so the choice is to encourage use
> of newer OS's with newer udev event processing code.
>
> Signed-off-by: John Ferlan 
> ---
>
>  v1: https://www.redhat.com/archives/libvir-list/2018-October/msg01053.html
>
>  Changes are from code review with some minor tweaks/editing as I
>  went along. Mistakes are my own!
>
>  src/node_device/node_device_udev.c | 28 
>  1 file changed, 28 insertions(+)
>
> diff --git a/src/node_device/node_device_udev.c 
> b/src/node_device/node_device_udev.c
> index 22897591de..f2c2299d4d 100644
> --- a/src/node_device/node_device_udev.c
> +++ b/src/node_device/node_device_udev.c
> @@ -1591,6 +1591,26 @@ udevEventMonitorSanityCheck(udevEventDataPtr priv,
>  }
>
>
> +/**
> + * udevEventHandleThread
> + * @opaque: unused
> + *
> + * Thread to handle the udevEventHandleCallback processing when udev
> + * tells us there's a device change for us (add, modify, delete, etc).
> + *
> + * Once notified there is data to be processed, the actual @device
> + * data retrieval by libudev may be delayed due to how threads are
> + * scheduled. In fact, the event loop could be scheduled earlier than
> + * the handler thread, thus potentially emitting the very same event
> + * the handler thread is currently trying to process, simply because
> + * the data hadn't been retrieved from the socket.
> + *

...

> + * NB: Usage of event based socket algorithm causes some issues with
> + * older platforms such as CentOS 6 in which the data socket is opened

^Sounds a bit too generic, as an event based algorithm is not forced to open a
socket without O_NONBLOCK - just put something like "older platforms' libudev
opens sockets without the NONBLOCK flag which might cause issues with event
based algorithm" or something alike in there.

otherwise looks okay, I don't think a v3 is necessary:
Reviewed-by: Erik Skultety 

> + * without the NONBLOCK bit set. That can be avoided by moving the reset
> + * priv->dataReady to just after the udev_monitor_receive_device; however,
> + * scheduler issues would still come into play.
> + */
>  static void
>  udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>  {
> @@ -1637,6 +1657,9 @@ udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>  return;
>  }
>
> +/* Trying to move the reset of the @priv->dataReady flag to
> + * after the udev_monitor_receive_device wouldn't help much
> + * due to event mgmt and scheduler timing. */
>  virObjectLock(priv);
>  priv->dataReady = false;
>  virObjectUnlock(priv);
> @@ -1646,6 +1669,11 @@ udevEventHandleThread(void *opaque ATTRIBUTE_UNUSED)
>
>  udevHandleOneDevice(device);
>  udev_device_unref(device);
> +
> +/* Instead of waiting for the next event after processing @device
> + * data, let's keep reading from the udev monitor and only wait
> + * for the next event once either a EAGAIN or a EWOULDBLOCK error
> + * is encountered. */
>  }
>  }
>
> --
> 2.17.2
>
> --
> libvir-list mailing list
> libvir-list@redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH] qemu: Restore lost shutdown reason

2018-11-02 Thread Nikolay Shirokovskiy



On 01.11.2018 17:24, John Ferlan wrote:
> 
> 
> On 10/22/18 3:36 AM, Nikolay Shirokovskiy wrote:
>>
>>
>> On 18.10.2018 18:28, John Ferlan wrote:
>>> When qemuProcessReconnectHelper was introduced (commit d38897a5d)
>>> reconnection failure used VIR_DOMAIN_SHUTOFF_FAILED; however, that
>>> was changed in commit bda2f17d to either VIR_DOMAIN_SHUTOFF_CRASHED
>>> or VIR_DOMAIN_SHUTOFF_UNKNOWN.
>>>
>>> When QEMU_CAPS_NO_SHUTDOWN checking was removed in commit fe35b1ad6
>>> the conditional state was just left at VIR_DOMAIN_SHUTOFF_CRASHED.
>>>
>>> Restore the logic adjustment using the same conditions as command
>>> line building and alter the comment to describe the reasoning.
>>>
>>> Signed-off-by: John Ferlan 
>>> ---
>>>
>>>  This was "discovered" while reviewing Nikolay's patch related
>>>  to adding "DAEMON" as the shutdown reason:
>>>
>>> https://www.redhat.com/archives/libvir-list/2018-October/msg00493.html
>>>
>>>  While working through the iterations of that - figured I'd at
>>>  least post this.
>>>
>>>
>>>  src/qemu/qemu_process.c | 15 ++-
>>>  1 file changed, 10 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
>>> index 3955eda17c..4a39111d51 100644
>>> --- a/src/qemu/qemu_process.c
>>> +++ b/src/qemu/qemu_process.c
>>> @@ -7985,11 +7985,16 @@ qemuProcessReconnect(void *opaque)
>>>  if (virDomainObjIsActive(obj)) {
>>>  /* We can't get the monitor back, so must kill the VM
>>>   * to remove danger of it ending up running twice if
>>> - * user tries to start it again later
>>> - * If we couldn't get the monitor since QEMU supports
>>> - * no-shutdown, we can safely say that the domain
>>> - * crashed ... */
>>> -state = VIR_DOMAIN_SHUTOFF_CRASHED;
>>> + * user tries to start it again later.
>>> + *
>>> + * If we cannot get to the monitor when the QEMU command
>>> + * line used -no-shutdown, then we can safely say that the
>>> + * domain crashed; otherwise we don't really know. */
>>> +if (priv->monJSON && priv->allowReboot == VIR_TRISTATE_BOOL_YES)
>>> +state = VIR_DOMAIN_SHUTOFF_CRASHED;
>>> +else
>>> +state = VIR_DOMAIN_SHUTOFF_UNKNOWN;
>>> +
>>>  /* If BeginJob failed, we jumped here without a job, let's hope 
>>> another
>>>   * thread didn't have a chance to start playing with the domain yet
>>>   * (it's all we can do anyway).
>>>
>>
>> This is reasonable but not complete. This is only applied to the case when we
>> do connect(2)
> 
> I understand your point; however, the goal of this patch wasn't to fix
> the various other failure scenarios/reasons. It was to merely restore
> the lost UNKNOWN reason in the event that either priv->monJSON == false
> (unlikely) or priv->allowReboot != VIR_TRISTATE_BOOL_YES (possible).
> 
> Rather than copy-pasta, I'll create qemuDomainIsUsingNoShutdown(priv)
> which will perform the condition checks for both command line generation
> and reconnection failure paths.

I'm not against this change. I hoped we could elaborate a more precise solution,
as discussed in
https://www.redhat.com/archives/libvir-list/2018-October/msg00493.html

Nikolay

> 
> If that's not desirable, then that's fine. I won't spend more cycles on
> this.
> 
> John
> 
>> to qemu and it failed with ECONNREFUSED, which means the qemu process does not
>> exist (in which case it either crashed, if it was configured with -no-shutdown,
>> or was terminated for an unknown reason) or it just closed the monitor connection
>> (in this case we are going to terminate it ourselves).
>>
>> But there are other cases. For example, we can jump to the error path even
>> before connect(2) or after a successful connect. See discussion of such cases in [1].
>>
>> [1] https://www.redhat.com/archives/libvir-list/2018-October/msg01031.html
>>
>> Nikolay
>>
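
For reference, the shared helper mentioned above would presumably be little more
than the condition already quoted from the patch; a sketch only, with the name
taken from John's mail and the body from the quoted hunk:

/* Sketch: shared check for "was this domain started with -no-shutdown",
 * to be used both when building the command line and in the reconnect
 * failure path. */
static bool
qemuDomainIsUsingNoShutdown(qemuDomainObjPrivatePtr priv)
{
    return priv->monJSON && priv->allowReboot == VIR_TRISTATE_BOOL_YES;
}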

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list