Re: [PATCH 3/3] domain_conf: add SCHED_DEADLINE support in the XML configuration

2022-08-10 Thread Michal Prívozník
On 8/1/22 19:11, Sasha Algisi wrote:
> Users can set SCHED_DEADLINE as a scheduling policy.
> 
> For example, for setting runtime = 1000, deadline = 1500
> and period = 2000 for vcpus 0-2:
> 
> <cputune>
>   ...
>   <vcpusched vcpus="0-2" scheduler="deadline" runtime="1000"
>  deadline="1500" period="2000"/>
>   ...
> </cputune>
> 
> Update release notes accordingly.
> 
> Signed-off-by: Sasha Algisi 
> Signed-off-by: Dario Faggioli 
> ---
>  NEWS.rst  |  5 +++
>  docs/formatdomain.rst | 16 +++---
>  src/conf/domain_conf.c| 52 ---
>  src/conf/schemas/domaincommon.rng | 16 ++
>  4 files changed, 80 insertions(+), 9 deletions(-)
> 
> diff --git a/NEWS.rst b/NEWS.rst
> index ef298da539..23484afdc2 100644
> --- a/NEWS.rst
> +++ b/NEWS.rst
> @@ -17,6 +17,11 @@ v8.7.0 (unreleased)
>  
>  * **New features**
>  
> +  * qemu: support for SCHED_DEADLINE scheduling
> +
> +Users can now use the SCHED_DEADLINE scheduling policy for tasks
> +associated to virtual CPUs, IO Threads and Emulator processes.
> +
>  * **Improvements**
>  

Bonus points for remembering to update NEWS.rst, but we tend to do that
in a separate patch. The reason being: easier backports. I mean, when a
downstream maintainer decides to backport these patches, they would get
a conflict in the NEWS file instantly.

>  * **Bug fixes**
> diff --git a/docs/formatdomain.rst b/docs/formatdomain.rst
> index 1ed969ac3e..216262b79d 100644
> --- a/docs/formatdomain.rst
> +++ b/docs/formatdomain.rst
> @@ -910,10 +910,11 @@ CPU Tuning
> support since 2.1.0`
>  ``vcpusched``, ``iothreadsched`` and ``emulatorsched``
>The optional ``vcpusched``, ``iothreadsched`` and ``emulatorsched`` elements
> -   specify the scheduler type (values ``batch``, ``idle``, ``fifo``, ``rr``) for
> -   particular vCPU, IOThread and emulator threads respectively. For ``vcpusched``
> -   and ``iothreadsched`` the attributes ``vcpus`` and ``iothreads`` select which
> -   vCPUs/IOThreads this setting applies to, leaving them out sets the default.
> +   specify the scheduler type (values ``batch``, ``idle``, ``fifo``, ``rr``,
> +   ``deadline`` :since:`Since 8.7.0`) for particular vCPU, IOThread and emulator
> +   threads respectively. For ``vcpusched`` and ``iothreadsched`` the attributes
> +   ``vcpus`` and ``iothreads`` select which vCPUs/IOThreads this setting applies
> +   to, leaving them out sets the default.
>The element ``emulatorsched`` does not have that attribute. Valid ``vcpus``
>values start at 0 through one less than the number of vCPU's defined for the
>domain. Valid ``iothreads`` values are described in the `IOThreads Allocation`_
> @@ -923,6 +924,13 @@ CPU Tuning
>priority must be specified as well (and is ignored for non-real-time ones).
>The value range for the priority depends on the host kernel (usually 1-99).
>:since:`Since 1.2.13` ``emulatorsched`` :since:`since 5.3.0`
> +   For SCHED_DEADLINE (``deadline``), runtime, deadline and period must also
> +   be specified (they are ignored in other schedulers). It must always be true
> +   that: runtime <= deadline <= period.
> +   The values are specified in nanoseconds. The valid range for the parameters
> +   is [1024, 2^63-1] (but a smaller one can be put in place via sysctl). The
> +   period can be set to 0, in which case, a period equal to the deadline is
> +   used.

I wonder whether we should make the @period attribute optional then and,
if it's not provided in the XML, fill in the value provided for @deadline.
Just a suggestion though.
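
Something along these lines, purely as an illustrative sketch (not code
from the posted series), dropped in after the attribute parsing in
virDomainSchedulerParseCommonAttrs(); it would simply mirror what the
kernel already does for period == 0:

    /* Assumed follow-up: a @period missing from the XML leaves *period
     * at 0, so fall back to the parsed @deadline value. */
    if (*period == 0)
        *period = *deadline;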

>  ``cachetune`` :since:`Since 4.1.0`
> Optional ``cachetune`` element can control allocations for CPU caches 
> using
> the resctrl on the host. Whether or not is this supported can be gathered
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index e85cc1f809..86ada8f147 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -16693,7 +16693,10 @@ virDomainLoaderDefParseXML(virDomainLoaderDef *loader,
>  static int
>  virDomainSchedulerParseCommonAttrs(xmlNodePtr node,
> virProcessSchedPolicy *policy,
> -   int *priority)
> +   int *priority,
> +   uint64_t *runtime,
> +   uint64_t *deadline,
> +   uint64_t *period)

Again, why not use ull instead? I'm just going to stop picking on this.
I'm sure you get the idea. The rest of the patch looks correct.


What I'm missing in this patch is a test case. We tend to either
introduce a new one or extend an existing one whenever we touch XML
parser/formatter (tests/qemuxml2xmltest.c).

A quick 'git grep scheduler= -- tests/' shows some test cases that could
be extended, or that can serve as inspiration for a new one.

Michal



Re: [PATCH 0/3] qemu: add support for the SCHED_DEADLINE scheduling policy

2022-08-10 Thread Michal Prívozník
On 8/1/22 19:11, Sasha Algisi wrote:
> Hello everyone,
> 
> This patchset aims at adding support for the SCHED_DEADLINE Linux
> scheduling policy for vcpus, io-threads and emulator processes.
> 
> In fact, libvirt currently supports SCHED_OTHER, SCHED_BATCH,
> SCHED_IDLE, SCHED_FIFO and SCHED_RR, but not SCHED_DEADLINE.
> SCHED_DEADLINE is a policy implementing an algorithm originating from
> the real-time scheduling community, but it can be useful outside of
> the real-time computing field as well.
> 
> It allows one to set a specific amount of CPU time that a task should
> receive with a given periodicity, and within a certain deadline.
> E.g., task t should be scheduled at least for 50 ms every 100 ms. To
> achieve this, it needs 3 parameters: runtime, deadline and period
> (although period can just be equal to deadline, which is what happens
> automatically if one sets period=0). It must always hold that:
> runtime <= deadline <= period (and this is enforced by the kernel,
> but checks are included in the patches, so that meaningful and easy
> to interpret error messages can be printed to the user).
> 
> More info on SCHED_DEADLINE is available here:
> 
> https://docs.kernel.org/scheduler/sched-deadline.html
> 
> The interface will look like this, e.g., for setting SCHED_DEADLINE
> as a policy for 3 (0-2) vcpus, with runtime = 1000, deadline =
> 1500 and period = 2000:
> 
>   <cputune>
> ...
>   <vcpusched vcpus="0-2" scheduler="deadline" runtime="1000"
>  deadline="1500" period="2000"/>
> ...
>   </cputune>
> 
> This is a link to a branch containing the patches:
> 
> https://gitlab.com/Algisi-00/libvirt/-/tree/sched-deadline
> 
> And this is the link to results of running the CI on such branch:
> 
> https://gitlab.com/Algisi-00/libvirt/-/pipelines/601795712
> 
> Note that the jobs that are failing are also failing in the exact
> same way without these patches applied.
> 
> Feedback is welcome and very much appreciated.
> 
> Thanks and regards.
> 
> Sasha Algisi (3):
>   virprocess: define sched_attr and sched_setattr
>   virprocess: add the SCHED_DEADLINE scheduling policy
>   domain_conf: add SCHED_DEADLINE support in the XML configuration
> 
>  NEWS.rst  |   5 ++
>  docs/formatdomain.rst |  16 +++-
>  src/ch/ch_process.c   |   3 +-
>  src/conf/domain_conf.c|  52 +++--
>  src/conf/domain_conf.h|   3 +
>  src/conf/schemas/domaincommon.rng |  16 
>  src/qemu/qemu_process.c   |   8 +-
>  src/util/virprocess.c | 123 +-
>  src/util/virprocess.h |   6 +-
>  9 files changed, 216 insertions(+), 16 deletions(-)
> 

Hey, the code looks good. However, we require that the code compiles
after each patch, which is not the case with your series. The reason for
our requirement is simple: easy git bisect. Therefore, it's okay if
the feature does not work until the very last commit. We often have
patches/commits that work gradually towards the grand finale.

Can you please fix that in v2?

Michal



Re: [PATCH 2/3] virprocess: add the SCHED_DEADLINE scheduling policy

2022-08-10 Thread Michal Prívozník
On 8/1/22 19:11, Sasha Algisi wrote:
> Tasks associated to virtual CPUs, IO Threads and Emulator processes
> can be created with the SCHED_DEADLINE policy. The policy is described
> in details here: https://docs.kernel.org/scheduler/sched-deadline.html
> 
> It requires the following parameters (all in nanoseconds):
> 1) runtime
> 2) deadline
> 3) period
> 
> It must always hold that: runtime <= deadline <= period.
> 
> The kernel enforces that the values stay within [1024, 2^63-1].
> Note, however, that a smaller range could be set (or be already set
> by default) via sysctl (see kernel.sched_deadline_period_max_us and
> kernel.sched_deadline_period_min_us).
> 
> All three parameters are mandatory, but period can be set to 0,
> in which case it will be set to the same value as deadline.
> 
> Signed-off-by: Sasha Algisi 
> Signed-off-by: Dario Faggioli 
> ---
>  src/ch/ch_process.c |  3 +-
>  src/conf/domain_conf.h  |  3 ++
>  src/qemu/qemu_process.c |  8 +++-
>  src/util/virprocess.c   | 84 +++--
>  src/util/virprocess.h   |  6 ++-
>  5 files changed, 97 insertions(+), 7 deletions(-)
> 
> diff --git a/src/ch/ch_process.c b/src/ch/ch_process.c
> index 77f55e777b..a40d188aac 100644
> --- a/src/ch/ch_process.c
> +++ b/src/ch/ch_process.c
> @@ -293,7 +293,8 @@ virCHProcessSetupPid(virDomainObj *vm,
>  /* Set scheduler type and priority, but not for the main thread. */
>  if (sched &&
>  nameval != VIR_CGROUP_THREAD_EMULATOR &&
> -virProcessSetScheduler(pid, sched->policy, sched->priority) < 0)
> +virProcessSetScheduler(pid, sched->policy, sched->priority,
> +   sched->runtime, sched->deadline, sched->period) < 0)
>  goto cleanup;
>  
>  ret = 0;
> diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
> index 060c395943..c3d1a1b65d 100644
> --- a/src/conf/domain_conf.h
> +++ b/src/conf/domain_conf.h
> @@ -2454,6 +2454,9 @@ typedef enum {
>  struct _virDomainThreadSchedParam {
>  virProcessSchedPolicy policy;
>  int priority;
> +uint64_t runtime;
> +uint64_t deadline;
> +uint64_t period;

Or just unsigned long long. Here we don't face kernel just yet, and can
use 'more generic' types which also allows you to drop plenty of
typecasts when using internal helpers.

>  };
>  
>  struct _virDomainTimerCatchupDef {
> diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
> index 137dcf5cf4..7586e0538a 100644
> --- a/src/qemu/qemu_process.c
> +++ b/src/qemu/qemu_process.c
> @@ -2728,7 +2728,8 @@ qemuProcessSetupPid(virDomainObj *vm,
>  /* Set scheduler type and priority, but not for the main thread. */
>  if (sched &&
>  nameval != VIR_CGROUP_THREAD_EMULATOR &&
> -virProcessSetScheduler(pid, sched->policy, sched->priority) < 0)
> +virProcessSetScheduler(pid, sched->policy, sched->priority,
> +   sched->runtime, sched->deadline, sched->period) < 0)
>  goto cleanup;
>  
>  ret = 0;
> @@ -7813,7 +7814,10 @@ qemuProcessLaunch(virConnectPtr conn,
>  if (vm->def->cputune.emulatorsched &&
>  virProcessSetScheduler(vm->pid,
> vm->def->cputune.emulatorsched->policy,
> -   vm->def->cputune.emulatorsched->priority) < 0)
> +   vm->def->cputune.emulatorsched->priority,
> +   vm->def->cputune.emulatorsched->runtime,
> +   vm->def->cputune.emulatorsched->deadline,
> +   vm->def->cputune.emulatorsched->period) < 0)
>  goto cleanup;
>  
>  VIR_DEBUG("Setting any required VM passwords");
> diff --git a/src/util/virprocess.c b/src/util/virprocess.c
> index a8f86784e1..c96bfc45fd 100644
> --- a/src/util/virprocess.c
> +++ b/src/util/virprocess.c
> @@ -84,6 +84,7 @@ VIR_ENUM_IMPL(virProcessSchedPolicy,
>"idle",
>"fifo",
>"rr",
> +  "deadline",
>  );
>  
>  
> @@ -1610,6 +1611,13 @@ virProcessSchedTranslatePolicy(virProcessSchedPolicy policy)
>  case VIR_PROC_POLICY_RR:
>  return SCHED_RR;
>  
> +case VIR_PROC_POLICY_DEADLINE:
> +# ifdef SCHED_DEADLINE
> +return SCHED_DEADLINE;
> +# else
> +return -1;
> +# endif
> +
>  case VIR_PROC_POLICY_LAST:
>  /* nada */
>  break;
> @@ -1621,13 +1629,20 @@ virProcessSchedTranslatePolicy(virProcessSchedPolicy policy)
>  int
>  virProcessSetScheduler(pid_t pid,
> virProcessSchedPolicy policy,
> -   int priority)
> +   int priority,
> +   uint64_t runtime G_GNUC_UNUSED,
> +   uint64_t deadline G_GNUC_UNUSED,
> +   uint64_t period G_GNUC_UNUSED)

The !WITH_SCHED_SETSCHEDULER case stub is not updated correspondingly.
And these new arguments don't need the G_GNUC_UNUSED annotation.

Re: [PATCH 1/3] virprocess: define sched_attr and sched_setattr

2022-08-10 Thread Michal Prívozník
On 8/1/22 19:11, Sasha Algisi wrote:
> In order to use SCHED_DEADLINE we need sched_setattr(), as
> sched_setscheduler() does not support all the necessary parameters.
> 
> Signed-off-by: Sasha Algisi 
> Signed-off-by: Dario Faggioli 
> ---
>  src/util/virprocess.c | 39 +++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/src/util/virprocess.c b/src/util/virprocess.c
> index 013afd91b4..a8f86784e1 100644
> --- a/src/util/virprocess.c
> +++ b/src/util/virprocess.c
> @@ -51,6 +51,10 @@
>  # include 
>  #endif
>  
> +#if WITH_SYS_SYSCALL_H
> +# include <sys/syscall.h>
> +#endif
> +
>  #ifdef WIN32
>  # define WIN32_LEAN_AND_MEAN
>  # include <windows.h>
> @@ -67,6 +71,10 @@
>  
>  #define VIR_FROM_THIS VIR_FROM_NONE
>  
> +#if defined(__linux__) && !defined(SCHED_FLAG_RESET_ON_FORK)
> +# define SCHED_FLAG_RESET_ON_FORK 0x01
> +#endif
> +
>  VIR_LOG_INIT("util.process");
>  
>  VIR_ENUM_IMPL(virProcessSchedPolicy,
> @@ -79,6 +87,37 @@ VIR_ENUM_IMPL(virProcessSchedPolicy,
>  );
>  
>  
> +#if defined(__linux__) && defined(SCHED_DEADLINE)
> +
> +struct sched_attr {
> +uint32_t size;
> +uint32_t sched_policy;
> +uint64_t sched_flags;
> +
> +/*SCHED_OTHER, SCHED_BATCH*/
> +int32_t sched_nice;
> +
> +/*SCHED_FIFO, SCHED_RR*/
> +uint32_t sched_priority;
> +
> +/*SCHED_DEADLINE*/
> +uint64_t sched_runtime;
> +uint64_t sched_deadline;
> +uint64_t sched_period;
> +};

Darn, I wish we could just include <linux/sched/types.h> but we can't.
Kernel headers (at least the version I'm using: 5.15) are broken as the
header file redefines sched_param struct.

> +
> +
> +static
> +int sched_setattr(pid_t pid,

We format it differently:

static int
function(int arg, ..)

> +  struct sched_attr *attr,
> +  unsigned int flags)
> +{
> +return syscall(SYS_sched_setattr, pid, attr, flags);
> +}
> +
> +#endif

Now, this function is not used and is static, which makes the compiler
sad. Maybe it can be marked as G_GNUC_UNUSED for the time being, until
it is used (in the following patch). Or just squash the patches together.
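
For context, once the wrapper is used, a call would look roughly like
this (untested sketch, not code from the series; struct sched_attr and
sched_setattr() as defined in this patch, values in nanoseconds taken
from the cover letter's "50 ms every 100 ms" example):

    struct sched_attr attr = {
        .size = sizeof(attr),
        .sched_policy = SCHED_DEADLINE,
        .sched_flags = SCHED_FLAG_RESET_ON_FORK,
        .sched_runtime = 50 * 1000 * 1000,    /* 50 ms */
        .sched_deadline = 100 * 1000 * 1000,  /* 100 ms */
        .sched_period = 100 * 1000 * 1000,    /* 100 ms */
    };

    /* runtime <= deadline <= period must hold, otherwise the kernel
     * rejects the call with EINVAL. */
    if (sched_setattr(pid, &attr, 0) < 0)
        return -1;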

Michal



[PATCH 4/5] qemu: refactor functions with removed driver if possible

2022-08-10 Thread Kristina Hanicova
Signed-off-by: Kristina Hanicova 
---
 src/qemu/qemu_domain.c|  3 +--
 src/qemu/qemu_driver.c| 52 ++-
 src/qemu/qemu_hotplug.c   | 12 +++--
 src/qemu/qemu_migration.c | 39 ++---
 src/qemu/qemu_snapshot.c  |  3 +--
 5 files changed, 37 insertions(+), 72 deletions(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index b1c17bcd03..93fd11a3c9 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6081,8 +6081,7 @@ qemuDomainObjExitMonitor(virDomainObj *obj)
 
 void qemuDomainObjEnterMonitor(virDomainObj *obj)
 {
-ignore_value(qemuDomainObjEnterMonitorInternal(obj,
-   VIR_ASYNC_JOB_NONE));
+ignore_value(qemuDomainObjEnterMonitorInternal(obj, VIR_ASYNC_JOB_NONE));
 }
 
 /*
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0503a79075..4ff1d65c1e 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -184,8 +184,8 @@ qemuAutostartDomain(virDomainObj *vm,
 virResetLastError();
 if (vm->autostart &&
 !virDomainObjIsActive(vm)) {
-if (qemuProcessBeginJob(vm,
-VIR_DOMAIN_JOB_OPERATION_START, flags) < 0) {
+if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_START,
+flags) < 0) {
 virReportError(VIR_ERR_INTERNAL_ERROR,
_("Failed to start job on VM '%s': %s"),
vm->def->name, virGetLastErrorMessage());
@@ -1623,8 +1623,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr 
conn,
NULL)))
 goto cleanup;
 
-if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_START,
-flags) < 0) {
+if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_START, flags) < 0) {
 qemuDomainRemoveInactive(driver, vm);
 goto cleanup;
 }
@@ -1784,8 +1783,7 @@ qemuDomainShutdownFlagsAgent(virDomainObj *vm,
 int agentFlag = isReboot ?  QEMU_AGENT_SHUTDOWN_REBOOT :
 QEMU_AGENT_SHUTDOWN_POWERDOWN;
 
-if (qemuDomainObjBeginAgentJob(vm,
-   VIR_AGENT_JOB_MODIFY) < 0)
+if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
 return -1;
 
 if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_RUNNING) {
@@ -1913,8 +1911,7 @@ qemuDomainRebootAgent(virDomainObj *vm,
 if (!isReboot)
 agentFlag = QEMU_AGENT_SHUTDOWN_POWERDOWN;
 
-if (qemuDomainObjBeginAgentJob(vm,
-   VIR_AGENT_JOB_MODIFY) < 0)
+if (qemuDomainObjBeginAgentJob(vm, VIR_AGENT_JOB_MODIFY) < 0)
 return -1;
 
 if (!qemuDomainAgentAvailable(vm, agentForced))
@@ -1941,8 +1938,7 @@ qemuDomainRebootMonitor(virDomainObj *vm,
 qemuDomainObjPrivate *priv = vm->privateData;
 int ret = -1;
 
-if (qemuDomainObjBeginJob(vm,
-  VIR_JOB_MODIFY) < 0)
+if (qemuDomainObjBeginJob(vm, VIR_JOB_MODIFY) < 0)
 return -1;
 
 if (virDomainObjCheckActive(vm) < 0)
@@ -3221,8 +3217,7 @@ qemuDomainCoreDumpWithFormat(virDomainPtr dom,
 if (virDomainCoreDumpWithFormatEnsureACL(dom->conn, vm->def) < 0)
 goto cleanup;
 
-if (qemuDomainObjBeginAsyncJob(vm,
-   VIR_ASYNC_JOB_DUMP,
+if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
VIR_DOMAIN_JOB_OPERATION_DUMP,
flags) < 0)
 goto cleanup;
@@ -3460,8 +3455,7 @@ processWatchdogEvent(virQEMUDriver *driver,
 
 switch (action) {
 case VIR_DOMAIN_WATCHDOG_ACTION_DUMP:
-if (qemuDomainObjBeginAsyncJob(vm,
-   VIR_ASYNC_JOB_DUMP,
+if (qemuDomainObjBeginAsyncJob(vm, VIR_ASYNC_JOB_DUMP,
VIR_DOMAIN_JOB_OPERATION_DUMP,
flags) < 0) {
 return;
@@ -4268,8 +4262,7 @@ static void qemuProcessEventHandler(void *data, void 
*opaque)
   processEvent->action);
 break;
 case QEMU_PROCESS_EVENT_BLOCK_JOB:
-processBlockJobEvent(vm,
- processEvent->data,
+processBlockJobEvent(vm, processEvent->data,
  processEvent->action,
  processEvent->status);
 break;
@@ -6040,8 +6033,7 @@ qemuDomainRestoreInternal(virConnectPtr conn,
 priv->hookRun = true;
 }
 
-if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_RESTORE,
-flags) < 0)
+if (qemuProcessBeginJob(vm, VIR_DOMAIN_JOB_OPERATION_RESTORE, flags) < 0)
 goto cleanup;
 
 ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path,
@@ -6650,8 +6642,7 @@ qemuDomainCreateWithFlags(virDomainPtr dom, unsigned int 
flags)
 if (virDomainCreateWithF

Re: [PATCH v2 8/9] qemu: Enable SCHED_CORE for domains and helper processes

2022-08-10 Thread Michal Prívozník
On 7/13/22 19:25, Daniel P. Berrangé wrote:
> On Mon, Jun 27, 2022 at 12:44:40PM +0200, Michal Privoznik wrote:
>> Despite all mitigations, side channel attacks when two processes
>> run at two Hyper Threads of the same core are still possible.
>> Fortunately, the Linux kernel came up with a solution: userspace
>> can create so called trusted groups, which are sets of processes
>> and only processes of the same group can run on sibling Hyper
>> Threads. Of course, two processes of different groups can run on
>> different cores, because there's no known side channel attack.
>> It's only Hyper Threads that are affected.
> 
> The next patch deals with helper processes too. I guess the
> difference in this patch is that it deals with helper processes
> spawned /after/ QEMU, so they can inherit scheduling group at
> startup easily, while the next patch has to apply the group
> later in startup ?

Correct.

Michal



[PATCH 1/5] qemu: beginJob: move saveStatus into private job callbacks

2022-08-10 Thread Kristina Hanicova
It makes sense to move this, like the other hypervisor-based functions,
into the private job callback structure to make the begin job code
general.

Signed-off-by: Kristina Hanicova 
---
 src/hypervisor/domain_job.h | 2 ++
 src/qemu/qemu_domain.c  | 1 +
 src/qemu/qemu_domainjob.c   | 4 ++--
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 30893dabc3..838cfd16e3 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -196,6 +196,7 @@ typedef int (*virDomainObjPrivateJobFormat)(virBuffer *,
 typedef int (*virDomainObjPrivateJobParse)(xmlXPathContextPtr,
virDomainJobObj *,
virDomainObj *);
+typedef void (*virDomainObjPrivateSaveStatus)(virDomainObj *obj);
 
 struct _virDomainObjPrivateJobCallbacks {
virDomainObjPrivateJobAlloc allocJobPrivate;
@@ -203,6 +204,7 @@ struct _virDomainObjPrivateJobCallbacks {
virDomainObjPrivateJobReset resetJobPrivate;
virDomainObjPrivateJobFormat formatJobPrivate;
virDomainObjPrivateJobParse parseJobPrivate;
+   virDomainObjPrivateSaveStatus saveStatusPrivate;
 };
 
 
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index bc5961a09f..94b50420fe 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -366,6 +366,7 @@ static virDomainObjPrivateJobCallbacks 
qemuPrivateJobCallbacks = {
 .resetJobPrivate = qemuJobResetPrivate,
 .formatJobPrivate = qemuDomainFormatJobPrivate,
 .parseJobPrivate = qemuDomainParseJobPrivate,
+.saveStatusPrivate = qemuDomainSaveStatus,
 };
 
 /**
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index d682f7be38..11f30de136 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -844,8 +844,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
 priv->job.agentStarted = now;
 }
 
-if (virDomainTrackJob(job))
-qemuDomainSaveStatus(obj);
+if (virDomainTrackJob(job) && priv->job.cb)
+priv->job.cb->saveStatusPrivate(obj);
 
 return 0;
 
-- 
2.37.1



[PATCH 2/5] hypervisor: domain_job: add maxQueuedJobs

2022-08-10 Thread Kristina Hanicova
This patch adds a new variable, maxQueuedJobs, into the job object,
as it is the last hypervisor-based part of the begin job. After this
patch, it is no longer necessary to propagate the driver structure
into the job functions.

Signed-off-by: Kristina Hanicova 
---
 src/hypervisor/domain_job.h |  1 +
 src/qemu/qemu_domain.c  |  3 +++
 src/qemu/qemu_domainjob.c   | 11 +--
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 838cfd16e3..7f35d5ee85 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -155,6 +155,7 @@ struct _virDomainJobObj {
 virCond cond;   /* Use to coordinate jobs */
 
 int jobsQueued;
+unsigned int maxQueuedJobs;
 
 /* The following members are for VIR_JOB_* */
 virDomainJob active;/* currently running job */
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 94b50420fe..7ac7c3c05a 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -1758,6 +1758,7 @@ static void *
 qemuDomainObjPrivateAlloc(void *opaque)
 {
 g_autoptr(qemuDomainObjPrivate) priv = g_new0(qemuDomainObjPrivate, 1);
+g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(opaque);
 
 if (virDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) {
 virReportSystemError(errno, "%s",
@@ -1770,6 +1771,8 @@ qemuDomainObjPrivateAlloc(void *opaque)
 
 priv->blockjobs = virHashNew(virObjectUnref);
 
+priv->job.maxQueuedJobs = cfg->maxQueuedJobs;
+
 /* agent commands block by default, user can choose different behavior */
 priv->agentTimeout = VIR_DOMAIN_AGENT_RESPONSE_TIMEOUT_BLOCK;
 priv->migMaxBandwidth = QEMU_DOMAIN_MIG_BANDWIDTH_MAX;
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 11f30de136..0e1c7210c5 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -723,7 +723,7 @@ qemuDomainObjReleaseAsyncJob(virDomainObj *obj)
  * -1 otherwise.
  */
 static int ATTRIBUTE_NONNULL(1)
-qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
+qemuDomainObjBeginJobInternal(virQEMUDriver *driver G_GNUC_UNUSED,
   virDomainObj *obj,
   virDomainJob job,
   virDomainAgentJob agentJob,
@@ -734,7 +734,6 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
 unsigned long long now;
 unsigned long long then;
 bool nested = job == VIR_JOB_ASYNC_NESTED;
-g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
 const char *blocker = NULL;
 const char *agentBlocker = NULL;
 int ret = -1;
@@ -763,8 +762,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
  retry:
 if (job != VIR_JOB_ASYNC &&
 job != VIR_JOB_DESTROY &&
-cfg->maxQueuedJobs &&
-priv->job.jobsQueued > cfg->maxQueuedJobs) {
+priv->job.maxQueuedJobs &&
+priv->job.jobsQueued > priv->job.maxQueuedJobs) {
 goto error;
 }
 
@@ -907,8 +906,8 @@ qemuDomainObjBeginJobInternal(virQEMUDriver *driver,
_("cannot acquire state change lock"));
 }
 ret = -2;
-} else if (cfg->maxQueuedJobs &&
-   priv->job.jobsQueued > cfg->maxQueuedJobs) {
+} else if (priv->job.maxQueuedJobs &&
+   priv->job.jobsQueued > priv->job.maxQueuedJobs) {
 if (blocker && agentBlocker) {
 virReportError(VIR_ERR_OPERATION_FAILED,
_("cannot acquire state change "
-- 
2.37.1



[PATCH 0/5] jobs: qemu & hypervisor: move hypervisor-based variables and functions

2022-08-10 Thread Kristina Hanicova
*** BLURB HERE ***

Kristina Hanicova (5):
  qemu: beginJob: move saveStatus into private job callbacks
  hypervisor: domain_job: add maxQueuedJobs
  qemu: remove unused driver and all its propagations
  qemu: refactor functions with removed driver if possible
  hypervisor: domain_job: add JobData private callbacks into
virDomainJobObj

 src/ch/ch_domain.c   |   2 +-
 src/hypervisor/domain_job.c  |   4 +-
 src/hypervisor/domain_job.h  |   8 +-
 src/libxl/libxl_domain.c |   2 +-
 src/lxc/lxc_domain.c |   2 +-
 src/qemu/qemu_backup.c   |  21 +-
 src/qemu/qemu_backup.h   |   3 +-
 src/qemu/qemu_block.c|  26 +-
 src/qemu/qemu_block.h|   6 +-
 src/qemu/qemu_blockjob.c |  25 +-
 src/qemu/qemu_blockjob.h |   3 +-
 src/qemu/qemu_checkpoint.c   |  15 +-
 src/qemu/qemu_domain.c   |  51 ++-
 src/qemu/qemu_domain.h   |  21 +-
 src/qemu/qemu_domainjob.c|  45 +--
 src/qemu/qemu_domainjob.h|  15 +-
 src/qemu/qemu_driver.c   | 581 +--
 src/qemu/qemu_hotplug.c  | 217 +---
 src/qemu/qemu_hotplug.h  |  39 +--
 src/qemu/qemu_migration.c| 235 ++---
 src/qemu/qemu_migration.h|  15 +-
 src/qemu/qemu_migration_cookie.c |   5 +-
 src/qemu/qemu_migration_params.c |  38 +-
 src/qemu/qemu_migration_params.h |  15 +-
 src/qemu/qemu_process.c  | 201 +--
 src/qemu/qemu_process.h  |  12 +-
 src/qemu/qemu_snapshot.c |  24 +-
 tests/qemuhotplugtest.c  |   6 +-
 28 files changed, 699 insertions(+), 938 deletions(-)

-- 
2.37.1



[PATCH 5/5] hypervisor: domain_job: add JobData private callbacks into virDomainJobObj

2022-08-10 Thread Kristina Hanicova
We need this callback structure for the qemu driver only, but it
makes more sense to include it in the virDomainJobObj, in case of
other future additions, than to pass it as a parameter of the
beginJob functions.

Signed-off-by: Kristina Hanicova 
---
 src/ch/ch_domain.c  | 2 +-
 src/hypervisor/domain_job.c | 4 +++-
 src/hypervisor/domain_job.h | 5 -
 src/libxl/libxl_domain.c| 2 +-
 src/lxc/lxc_domain.c| 2 +-
 src/qemu/qemu_domain.c  | 3 ++-
 src/qemu/qemu_domainjob.c   | 2 +-
 7 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/src/ch/ch_domain.c b/src/ch/ch_domain.c
index 817b1176d5..89b494b388 100644
--- a/src/ch/ch_domain.c
+++ b/src/ch/ch_domain.c
@@ -117,7 +117,7 @@ virCHDomainObjPrivateAlloc(void *opaque)
 
 priv = g_new0(virCHDomainObjPrivate, 1);
 
-if (virDomainObjInitJob(&priv->job, NULL) < 0) {
+if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) {
 g_free(priv);
 return NULL;
 }
diff --git a/src/hypervisor/domain_job.c b/src/hypervisor/domain_job.c
index 0afed46418..77110d2a23 100644
--- a/src/hypervisor/domain_job.c
+++ b/src/hypervisor/domain_job.c
@@ -116,10 +116,12 @@ virDomainJobStatusToType(virDomainJobStatus status)
 
 int
 virDomainObjInitJob(virDomainJobObj *job,
-virDomainObjPrivateJobCallbacks *cb)
+virDomainObjPrivateJobCallbacks *cb,
+virDomainJobDataPrivateDataCallbacks *jobDataPrivateCb)
 {
 memset(job, 0, sizeof(*job));
 job->cb = cb;
+job->jobDataPrivateCb = jobDataPrivateCb;
 
 if (virCondInit(&job->cond) < 0)
 return -1;
diff --git a/src/hypervisor/domain_job.h b/src/hypervisor/domain_job.h
index 7f35d5ee85..334b59c465 100644
--- a/src/hypervisor/domain_job.h
+++ b/src/hypervisor/domain_job.h
@@ -185,6 +185,8 @@ struct _virDomainJobObj {
 
 void *privateData;  /* job specific collection of data */
 virDomainObjPrivateJobCallbacks *cb;
> +virDomainJobDataPrivateDataCallbacks *jobDataPrivateCb; /* callbacks for privateData of
> +   virDomainJobData, can be NULL */
 };
 
 
@@ -210,7 +212,8 @@ struct _virDomainObjPrivateJobCallbacks {
 
 
 int virDomainObjInitJob(virDomainJobObj *job,
-virDomainObjPrivateJobCallbacks *cb);
+virDomainObjPrivateJobCallbacks *cb,
> +virDomainJobDataPrivateDataCallbacks *jobDataPrivateCb);
 
 void virDomainObjResetJob(virDomainJobObj *job);
 
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 9a23598512..52e0aa1e60 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -161,7 +161,7 @@ libxlDomainObjPrivateAlloc(void *opaque G_GNUC_UNUSED)
 return NULL;
 }
 
-if (virDomainObjInitJob(&priv->job, NULL) < 0) {
+if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) {
 virChrdevFree(priv->devs);
 g_free(priv);
 return NULL;
diff --git a/src/lxc/lxc_domain.c b/src/lxc/lxc_domain.c
index 654c35c640..61e59ec726 100644
--- a/src/lxc/lxc_domain.c
+++ b/src/lxc/lxc_domain.c
@@ -120,7 +120,7 @@ virLXCDomainObjPrivateAlloc(void *opaque)
 {
 virLXCDomainObjPrivate *priv = g_new0(virLXCDomainObjPrivate, 1);
 
-if (virDomainObjInitJob(&priv->job, NULL) < 0) {
+if (virDomainObjInitJob(&priv->job, NULL, NULL) < 0) {
 g_free(priv);
 return NULL;
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 93fd11a3c9..abd76dbd66 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -1760,7 +1760,8 @@ qemuDomainObjPrivateAlloc(void *opaque)
 g_autoptr(qemuDomainObjPrivate) priv = g_new0(qemuDomainObjPrivate, 1);
 g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(opaque);
 
-if (virDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) {
+if (virDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks,
+&qemuJobDataPrivateDataCallbacks) < 0) {
 virReportSystemError(errno, "%s",
  _("Unable to init qemu driver mutexes"));
 return NULL;
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index 1b85e3bb2d..66a91a3e4f 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -817,7 +817,7 @@ qemuDomainObjBeginJobInternal(virDomainObj *obj,
   virDomainAsyncJobTypeToString(asyncJob),
   obj, obj->def->name);
 virDomainObjResetAsyncJob(&priv->job);
> -priv->job.current = virDomainJobDataInit(&qemuJobDataPrivateDataCallbacks);
> +priv->job.current = virDomainJobDataInit(priv->job.jobDataPrivateCb);
 priv->job.current->status = VIR_DOMAIN_JOB_STATUS_ACTIVE;
 priv->job.asyncJob = asyncJob;
 priv->job.asyncOwner = virThreadSelfID();
-- 
2.37.1



Re: [PATCH 0/5] jobs: qemu & hypervisor: move hypervisor-based variables and functions

2022-08-10 Thread Ján Tomko

On a Wednesday in 2022, Kristina Hanicova wrote:

*** BLURB HERE ***

Kristina Hanicova (5):
 qemu: beginJob: move saveStatus into private job callbacks
 hypervisor: domain_job: add maxQueuedJobs
 qemu: remove unused driver and all its propagations
 qemu: refactor functions with removed driver if possible
 hypervisor: domain_job: add JobData private callbacks into
   virDomainJobObj

src/ch/ch_domain.c   |   2 +-
src/hypervisor/domain_job.c  |   4 +-
src/hypervisor/domain_job.h  |   8 +-
src/libxl/libxl_domain.c |   2 +-
src/lxc/lxc_domain.c |   2 +-
src/qemu/qemu_backup.c   |  21 +-
src/qemu/qemu_backup.h   |   3 +-
src/qemu/qemu_block.c|  26 +-
src/qemu/qemu_block.h|   6 +-
src/qemu/qemu_blockjob.c |  25 +-
src/qemu/qemu_blockjob.h |   3 +-
src/qemu/qemu_checkpoint.c   |  15 +-
src/qemu/qemu_domain.c   |  51 ++-
src/qemu/qemu_domain.h   |  21 +-
src/qemu/qemu_domainjob.c|  45 +--
src/qemu/qemu_domainjob.h|  15 +-
src/qemu/qemu_driver.c   | 581 +--
src/qemu/qemu_hotplug.c  | 217 +---
src/qemu/qemu_hotplug.h  |  39 +--
src/qemu/qemu_migration.c| 235 ++---
src/qemu/qemu_migration.h|  15 +-
src/qemu/qemu_migration_cookie.c |   5 +-
src/qemu/qemu_migration_params.c |  38 +-
src/qemu/qemu_migration_params.h |  15 +-
src/qemu/qemu_process.c  | 201 +--
src/qemu/qemu_process.h  |  12 +-
src/qemu/qemu_snapshot.c |  24 +-
tests/qemuhotplugtest.c  |   6 +-
28 files changed, 699 insertions(+), 938 deletions(-)



Reviewed-by: Ján Tomko 

Good job with the jobs, Kristina!

Jano




Re: [PATCH v2 9/9] qemu: Place helper processes into the same trusted group

2022-08-10 Thread Michal Prívozník
On 7/13/22 18:25, Daniel P. Berrangé wrote:
> On Mon, Jun 27, 2022 at 12:44:41PM +0200, Michal Privoznik wrote:
>> Since the level of trust that QEMU has is the same level of trust
>> that helper processes have there's no harm in placing all of them
>> into the same group.
>>
>> Unfortunately, since these processes are started before QEMU we
>> can't use brand new virCommand*() APIs (those are used on hotplug
>> though) and have to use the low level virProcess*() APIs.
>>
>> Moreover, because there is no (kernel) API that would copy the cookie
>> from one process to another WITHOUT modifying the cookie of the
>> process that's doing the copy, we have to fork() and use
>> available copy APIs.
>>
>> Signed-off-by: Michal Privoznik 
>> ---
>>  src/qemu/qemu_extdevice.c | 120 ++
>>  src/qemu/qemu_extdevice.h |   3 +
>>  src/qemu/qemu_process.c   |   4 ++
>>  3 files changed, 127 insertions(+)
>>
>> diff --git a/src/qemu/qemu_extdevice.c b/src/qemu/qemu_extdevice.c
>> index b8e3c1000a..41368a9cea 100644
>> --- a/src/qemu/qemu_extdevice.c
>> +++ b/src/qemu/qemu_extdevice.c
>> @@ -335,3 +335,123 @@ qemuExtDevicesSetupCgroup(virQEMUDriver *driver,
>>  
>>  return 0;
>>  }
>> +
>> +
>> +static int
>> +qemuExtDevicesSetupSchedHelper(pid_t ppid G_GNUC_UNUSED,
>> +   void *opaque)
>> +{
>> +GSList *pids = opaque;
>> +GSList *next;
>> +pid_t vmPid;
>> +
>> +/* The first item on the list is special: it's the PID of the
>> + * QEMU that has the cookie we want to copy to the rest. */
>> +vmPid = GPOINTER_TO_INT(pids->data);
>> +if (virProcessSchedCoreShareFrom(vmPid) < 0) {
>> +virReportSystemError(errno,
>> + _("Unable to get core group of: %lld"),
>> + (long long) vmPid);
>> +return -1;
>> +}
>> +
>> +VIR_DEBUG("SCHED_CORE: vmPid = %lld", (long long) vmPid);
>> +
>> +for (next = pids->next; next; next = next->next) {
>> +pid_t pid = GPOINTER_TO_INT(next->data);
>> +
>> +VIR_DEBUG("SCHED_CORE: share to %lld", (long long) pid);
>> +if (virProcessSchedCoreShareTo(pid) < 0) {
>> +virReportSystemError(errno,
>> + _("Unable to share core group to: %lld"),
>> + (long long) pid);
>> +return -1;
>> +}
>> +}
> 
> The helper processes can have many threads, but this virProcessSchedCoreShareTo
> call only sets the scheduling cookie for a single thread.
> 
> It would need to use SCOPE_THREAD_GROUP, except even that is not sufficient
> as the helper may have fork+exec'd another helper by this point, and our
> call will only affect the first process.
> 
> IOW, to set core scheduling cookies on the helpers, we need to set them
> upfront at the time we spawn the helper.
> 
> IOW, during startup, IIUC, we need to fork  a dummy process solely to
> call PR_SCHED_CORE_CREATE. Then when forking anything else, whether a
> helper, or QEMU itself, we need to pull the cookie from that dummy
> process, and then finally kill that dummy process.
> 
> If hotplugging a device, we won't need the dummy process, we can pull
> the cookie from the running QEMU.

Yeah. I've missed this fact. That will need some rework though. Let me
do that in v3.
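
For the archives, the dance described above maps onto the raw
PR_SCHED_CORE prctl() like this (rough sketch only, not the v3 code;
constants come from <linux/prctl.h>, kernel >= 5.14, and dummyPid
stands in for the short-lived child):

    /* In the dummy child: create a fresh cookie covering the whole
     * thread group. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
              PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0) < 0)
        return -1;

    /* In whoever spawns QEMU or a helper, before fork+exec: pull the
     * dummy's cookie so that all children inherit it. */
    if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, dummyPid,
              PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
        return -1;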

Michal



[PATCH] qemu: Fix indentation

2022-08-10 Thread Andrea Bolognani
Signed-off-by: Andrea Bolognani 
---
Pushed as trivial.

 src/qemu/qemu_capabilities.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index f8e70470f6..79ef8b88ef 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3284,7 +3284,7 @@ virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps)
 static int
 virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
 {
-  return 0;
+return 0;
 }
 #endif
 
-- 
2.37.1



[PATCH 3/4] qemuDomainObjWait: Report error when VM is being destroyed

2022-08-10 Thread Peter Krempa
Since we started handling the monitor EOF event inside a job, any code
which uses virDomainObjWait would no longer properly abort when the VM
crashed during the wait.

This is because virDomainObjWait uses virDomainObjIsActive which checks
'vm->def->id' to see if the VM is still active. Unfortunately the domain
id is cleared in qemuProcessStop which is run only inside the job.

To fix this we can use the 'beingDestroyed' flag stored in the VM
private data which is set to true around the time when the condition is
signalled.

Reported-by: Pavel Hrdina 
Fixes: 8c9ff9960b29d4703a99efdd1cadcf6f48799cc0
Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_domain.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index f68b7030c5..d3b7fcc933 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11785,5 +11785,15 @@ qemuDomainRemoveLogs(virQEMUDriver *driver,
 int
 qemuDomainObjWait(virDomainObj *vm)
 {
-return virDomainObjWait(vm);
+qemuDomainObjPrivate *priv = vm->privateData;
+
+if (virDomainObjWait(vm) < 0)
+return -1;
+
+if (priv->beingDestroyed) {
> +virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("domain is not running"));
+return -1;
+}
+
+return 0;
 }
-- 
2.37.1



[PATCH 2/4] qemu: Replace virDomainObjWait with qemuDomainObjWait

2022-08-10 Thread Peter Krempa
The qemu code will need to check other qemu-private conditions when
reporting success for waiting. Thus we must replace all use of it with a
qemu-specific helper. For now the helper forwards directly to
virDomainObjWait.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_block.c |  2 +-
 src/qemu/qemu_domain.c|  7 +++
 src/qemu/qemu_domain.h|  3 +++
 src/qemu/qemu_driver.c|  4 ++--
 src/qemu/qemu_migration.c | 12 ++--
 5 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/src/qemu/qemu_block.c b/src/qemu/qemu_block.c
index 9fe22f18f2..bd95fe8a1f 100644
--- a/src/qemu/qemu_block.c
+++ b/src/qemu/qemu_block.c
@@ -2713,7 +2713,7 @@ qemuBlockStorageSourceCreateGeneric(virDomainObj *vm,

 qemuBlockJobUpdate(vm, job, asyncJob);
 while (qemuBlockJobIsRunning(job))  {
-if (virDomainObjWait(vm) < 0)
+if (qemuDomainObjWait(vm) < 0)
 goto cleanup;
 qemuBlockJobUpdate(vm, job, asyncJob);
 }
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index bc5961a09f..f68b7030c5 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -11780,3 +11780,10 @@ qemuDomainRemoveLogs(virQEMUDriver *driver,

 return 0;
 }
+
+
+int
+qemuDomainObjWait(virDomainObj *vm)
+{
+return virDomainObjWait(vm);
+}
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 4680df1098..ce59c3e766 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -1108,3 +1108,6 @@ qemuDomainDeviceBackendChardevForeach(virDomainDef *def,
 int
 qemuDomainRemoveLogs(virQEMUDriver *driver,
  const char *name);
+
+int
+qemuDomainObjWait(virDomainObj *vm);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 019ec4a035..254957deba 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3033,7 +3033,7 @@ qemuDumpWaitForCompletion(virDomainObj *vm)

 VIR_DEBUG("Waiting for dump completion");
 while (!jobPriv->dumpCompleted && !priv->job.abortJob) {
-if (virDomainObjWait(vm) < 0)
+if (qemuDomainObjWait(vm) < 0)
 return -1;
 }

@@ -14707,7 +14707,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom,
 if (!async) {
 qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
 while (qemuBlockJobIsRunning(job)) {
-if (virDomainObjWait(vm) < 0) {
+if (qemuDomainObjWait(vm) < 0) {
 ret = -1;
 goto endjob;
 }
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 9eda279a84..b05bbce910 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -933,7 +933,7 @@ qemuMigrationSrcNBDCopyCancel(virQEMUDriver *driver,
 if (failed && !err)
 virErrorPreserveLast(&err);

-if (virDomainObjWait(vm) < 0)
+if (qemuDomainObjWait(vm) < 0)
 goto cleanup;
 }

@@ -1321,7 +1321,7 @@ qemuMigrationSrcNBDStorageCopy(virQEMUDriver *driver,
 return -1;
 }

-if (virDomainObjWait(vm) < 0)
+if (qemuDomainObjWait(vm) < 0)
 return -1;
 }

@@ -1798,7 +1798,7 @@ qemuMigrationSrcWaitForSpice(virDomainObj *vm)

 VIR_DEBUG("Waiting for SPICE to finish migration");
 while (!jobPriv->spiceMigrated && !priv->job.abortJob) {
-if (virDomainObjWait(vm) < 0)
+if (qemuDomainObjWait(vm) < 0)
 return -1;
 }
 return 0;
@@ -2096,7 +2096,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver,
 if (rv < 0)
 return rv;

-if (virDomainObjWait(vm) < 0) {
+if (qemuDomainObjWait(vm) < 0) {
 if (virDomainObjIsActive(vm))
 jobData->status = VIR_DOMAIN_JOB_STATUS_FAILED;
 return -2;
@@ -2135,7 +2135,7 @@ qemuMigrationDstWaitForCompletion(virQEMUDriver *driver,

 while ((rv = qemuMigrationAnyCompleted(driver, vm, asyncJob,
NULL, flags)) != 1) {
-if (rv < 0 || virDomainObjWait(vm) < 0)
+if (rv < 0 || qemuDomainObjWait(vm) < 0)
 return -1;
 }

@@ -4983,7 +4983,7 @@ qemuMigrationSrcRun(virQEMUDriver *driver,
  */
 while (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
 priv->signalStop = true;
-rc = virDomainObjWait(vm);
+rc = qemuDomainObjWait(vm);
 priv->signalStop = false;
 if (rc < 0)
 goto error;
-- 
2.37.1



[PATCH 0/3] qemu: Fix waiting for domain condition when qemu crashes

2022-08-10 Thread Peter Krempa
See 3/4.

Peter Krempa (4):
  qemuProcessBeginStopJob: Add debug log when waking up all threads
waiting on domain condition
  qemu: Replace virDomainObjWait with qemuDomainObjWait
  qemuDomainObjWait: Report error when VM is being destroyed
  DO_NOT_APPLY_UPSTREAM: reproducer

 src/qemu/qemu_block.c |  2 +-
 src/qemu/qemu_domain.c| 17 +
 src/qemu/qemu_domain.h|  3 +++
 src/qemu/qemu_driver.c|  5 +++--
 src/qemu/qemu_migration.c | 12 ++--
 src/qemu/qemu_process.c   |  1 +
 6 files changed, 31 insertions(+), 9 deletions(-)

-- 
2.37.1



Re: [PATCH 1/5] qemu: beginJob: move saveStatus into private job callbacks

2022-08-10 Thread Ján Tomko

On a Wednesday in 2022, Kristina Hanicova wrote:

It makes sense to move this, like the other hypervisor-based functions,
into the private job callback structure to make the begin job code
general.

Signed-off-by: Kristina Hanicova 
---
src/hypervisor/domain_job.h | 2 ++
src/qemu/qemu_domain.c  | 1 +
src/qemu/qemu_domainjob.c   | 4 ++--
3 files changed, 5 insertions(+), 2 deletions(-)



Reviewed-by: Ján Tomko 

Jano




[libvirt PATCH 2/3] qemu: Make virQEMUCapsProbeHVF() non-static

2022-08-10 Thread Andrea Bolognani
We need to do this so that we can mock it in the test suite.

Signed-off-by: Andrea Bolognani 
---
 src/qemu/qemu_capabilities.c | 4 ++--
 src/qemu/qemu_capabilities.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 79ef8b88ef..80a8002f0c 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -3259,7 +3259,7 @@ virQEMUCapsProbeQMPKVMState(virQEMUCaps *qemuCaps,
 }
 
 #ifdef __APPLE__
-static int
+int
 virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps)
 {
 int hv_support = 0;
@@ -3281,7 +3281,7 @@ virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps)
 return 0;
 }
 #else
-static int
+int
 virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
 {
 return 0;
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 20b1034ca5..b172cfb7f1 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -848,5 +848,7 @@ virQEMUCapsGetSEVCapabilities(virQEMUCaps *qemuCaps);
 bool
 virQEMUCapsGetKVMSupportsSecureGuest(virQEMUCaps *qemuCaps) G_NO_INLINE;
 
+int virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps) G_NO_INLINE;
+
 virArch virQEMUCapsArchFromString(const char *arch);
 const char *virQEMUCapsArchToString(virArch arch);
-- 
2.37.1



[libvirt PATCH 1/3] tests: Use domaincapsmock in qemucapabilitiestest

2022-08-10 Thread Andrea Bolognani
This doesn't change anything at the moment, but is necessary
for the upcoming fix.

Signed-off-by: Andrea Bolognani 
---
 tests/qemucapabilitiestest.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qemucapabilitiestest.c b/tests/qemucapabilitiestest.c
index 79dd358ef4..0066ba15fa 100644
--- a/tests/qemucapabilitiestest.c
+++ b/tests/qemucapabilitiestest.c
@@ -230,4 +230,5 @@ mymain(void)
 return (data.ret == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
 }
 
-VIR_TEST_MAIN(mymain)
+VIR_TEST_MAIN_PRELOAD(mymain,
+  VIR_TEST_MOCK("domaincaps"))
-- 
2.37.1



[libvirt PATCH 3/3] tests: Mock virQEMUCapsProbeHVF()

2022-08-10 Thread Andrea Bolognani
Successfully returning without doing anything is what the
function already does on non-Apple platforms.

When building on macOS, however, the check for HVF availability
will be performed. When running on bare metal, that will result
in the QEMU_CAPS_HVF flag being added to the virQEMUCaps
instance, and a bunch of error messages along the lines of

  In 'tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml':
  Offset 7557
  Expect [c]
  Actual [hvf'/>

Signed-off-by: Andrea Bolognani 
---
 tests/domaincapsmock.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/tests/domaincapsmock.c b/tests/domaincapsmock.c
index d382d06e27..4d53e48c48 100644
--- a/tests/domaincapsmock.c
+++ b/tests/domaincapsmock.c
@@ -51,6 +51,12 @@ virQEMUCapsGetKVMSupportsSecureGuest(virQEMUCaps *qemuCaps)
 
 return real_virQEMUCapsGetKVMSupportsSecureGuest(qemuCaps);
 }
+
+int
+virQEMUCapsProbeHVF(virQEMUCaps *qemuCaps G_GNUC_UNUSED)
+{
+return 0;
+}
 #endif
 
 int
-- 
2.37.1



[libvirt PATCH 0/3] tests: Fix qemucapabilitiestest on macOS

2022-08-10 Thread Andrea Bolognani
We need to mock the function that probes for HVF support.

Andrea Bolognani (3):
  tests: Use domaincapsmock in qemucapabilitiestest
  qemu: Make virQEMUCapsProbeHVF() non-static
  tests: Mock virQEMUCapsProbeHVF()

 src/qemu/qemu_capabilities.c | 4 ++--
 src/qemu/qemu_capabilities.h | 2 ++
 tests/domaincapsmock.c   | 6 ++
 tests/qemucapabilitiestest.c | 3 ++-
 4 files changed, 12 insertions(+), 3 deletions(-)

-- 
2.37.1



Re: Test failures on macOS 12

2022-08-10 Thread Andrea Bolognani
On Mon, Aug 08, 2022 at 08:54:14PM +0200, Christophe de Dinechin wrote:
> On Fri, May 06, 2022 at 03:00:14AM -0700, Andrea Bolognani wrote:
> > The other issue is in qemuxml2argvtest:
> >
> >  error : virCommandWait:2752 : internal error: Child process
> >(/usr/libexec/qemu/vhost-user/test-vhost-user-gpu --print-capabilities)
> >unexpected fatal signal 6: dyld[8896]: symbol not found in flat
> >namespace '_virQEMUCapsGet'
> >  error : qemuVhostUserFillDomainGPU:394 : operation failed: Unable to
> >find a satisfying vhost-user-gpu
>
> I don’t see this one.

Dang :(

> What I see instead is failures on qemucapabilitiestest.
> The tests seems to depend on a particular install path for the
> qemu-system- binaries:
>
> binary = g_strdup_printf("/usr/bin/qemu-system-%s",
>  data->archName);
>
> On macOS, there is something called “system integrity protection”
> which makes it a bit more difficult to add stuff to /usr/bin.
>
> However, I don’t really think that’s the problem.

We definitely shouldn't be looking at the contents of the build
machine's actual /usr/bin directory in our test suite! That'd be a
bug in our mocking machinery.

> I ran some tests with:
>
> VIR_TEST_DEBUG=1 
> VIR_TEST_RANGE=1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69
>  /Users/ddd/Work/libvirt/build/tests/qemucapabilitiestest
>
> The error I get seems to point to ‘hvf’ being unexpected in the output:
>
> 15) 6.2.0 (x86_64)...
> In '/Users/ddd/Work/libvirt/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml':
> Offset 7557
> Expect [c]
> Actual [hvf'/>
>   
> I do not really understand what the test is supposed to do. It seems
> to be comparing with reference files, but where does it get the
> qemu capabilities to compare with from?

The test reads a .replies file from tests/qemucapabilitiesdata/,
parses it as if a QEMU binary had replied that way to our poking it,
figure out what the corresponding emulator capabilities are, format
them to XML and compare the result to the matching .xml file.

> How should I proceed?

I've prepared patches that should address the issue you're seeing:

  https://listman.redhat.com/archives/libvir-list/2022-August/233645.html

It'd be great if you could confirm that they work as intended.

-- 
Andrea Bolognani / Red Hat / Virtualization



[PATCH 1/4] qemuProcessBeginStopJob: Add debug log when waking up all threads waiting on domain condition

2022-08-10 Thread Peter Krempa
Aid in debugging of potentially stuck threads.

Signed-off-by: Peter Krempa 
---
 src/qemu/qemu_process.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 478a46bff6..0f30247817 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -8145,6 +8145,7 @@ qemuProcessBeginStopJob(virQEMUDriver *driver,
 goto cleanup;

 /* Wake up anything waiting on domain condition */
+VIR_DEBUG("waking up all jobs waiting on the domain condition");
 virDomainObjBroadcast(vm);

 if (qemuDomainObjBeginJob(driver, vm, job) < 0)
-- 
2.37.1



Re: [libvirt PATCH 0/3] tests: Fix qemucapabilitiestest on macOS

2022-08-10 Thread Ján Tomko

On a Wednesday in 2022, Andrea Bolognani wrote:

We need to mock the function that probes for HVF support.

Andrea Bolognani (3):
 tests: Use domaincapsmock in qemucapabilitiestest
 qemu: Make virQEMUCapsProbeHVF() non-static
 tests: Mock virQEMUCapsProbeHVF()

src/qemu/qemu_capabilities.c | 4 ++--
src/qemu/qemu_capabilities.h | 2 ++
tests/domaincapsmock.c   | 6 ++
tests/qemucapabilitiestest.c | 3 ++-
4 files changed, 12 insertions(+), 3 deletions(-)



Reviewed-by: Ján Tomko 

Jano




Re: [PATCH 0/3] qemu: Fix waiting for domain condition when qemu crashes

2022-08-10 Thread Ján Tomko

On a Wednesday in 2022, Peter Krempa wrote:

See 3/4.


3/4 out of a 3-patch series, interesting.



Peter Krempa (4):
 qemuProcessBeginStopJob: Add debug log when waking up all threads
   waiting on domain condition
 qemu: Replace virDomainObjWait with qemuDomainObjWait
 qemuDomainObjWait: Report error when VM is being destroyed


Reviewed-by: Ján Tomko 

Jano


 DO_NOT_APPLY_UPSTREAM: reproducer

src/qemu/qemu_block.c |  2 +-
src/qemu/qemu_domain.c| 17 +
src/qemu/qemu_domain.h|  3 +++
src/qemu/qemu_driver.c|  5 +++--
src/qemu/qemu_migration.c | 12 ++--
src/qemu/qemu_process.c   |  1 +
6 files changed, 31 insertions(+), 9 deletions(-)

--
2.37.1





[PATCH 4/4] DO_NOT_APPLY_UPSTREAM: reproducer

2022-08-10 Thread Peter Krempa
To reproduce the problem apply this patch and run a block copy job:

  virsh blockcopy cd hda /tmp/ble.img --transient-job --pivot

Prior to the fix the virsh call will hang and the VM will not be usable
any more.
---
 src/qemu/qemu_driver.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 254957deba..f51c70eb21 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14705,6 +14705,7 @@ qemuDomainBlockJobAbort(virDomainPtr dom,
 qemuDomainSaveStatus(vm);

 if (!async) {
+kill(vm->pid, 11);
 qemuBlockJobUpdate(vm, job, VIR_ASYNC_JOB_NONE);
 while (qemuBlockJobIsRunning(job)) {
 if (qemuDomainObjWait(vm) < 0) {
-- 
2.37.1



Re: [libvirt PATCH 0/3] tests: Fix qemucapabilitiestest on macOS

2022-08-10 Thread Christophe de Dinechin


On 2022-08-10 at 16:29 +02, Andrea Bolognani  wrote...
> We need to mock the function that probes for HVF support.
>
> Andrea Bolognani (3):
>   tests: Use domaincapsmock in qemucapabilitiestest
>   qemu: Make virQEMUCapsProbeHVF() non-static
>   tests: Mock virQEMUCapsProbeHVF()
>
>  src/qemu/qemu_capabilities.c | 4 ++--
>  src/qemu/qemu_capabilities.h | 2 ++
>  tests/domaincapsmock.c   | 6 ++
>  tests/qemucapabilitiestest.c | 3 ++-
>  4 files changed, 12 insertions(+), 3 deletions(-)

This works. We now have a clean test suite on macOS 12:

Ok: 252
Expected Fail:  0
Fail:   0
Unexpected Pass:0
Skipped:19
Timeout:0

For the series:

Reviewed-by: Christophe de Dinechin 
Tested-by: Christophe de Dinechin 

--
Cheers,
Christophe de Dinechin (https://c3d.github.io)



Re: [libvirt PATCH 0/3] tests: Fix qemucapabilitiestest on macOS

2022-08-10 Thread Andrea Bolognani
On Wed, Aug 10, 2022 at 05:16:57PM +0200, Christophe de Dinechin wrote:
> On 2022-08-10 at 16:29 +02, Andrea Bolognani  wrote...
> > We need to mock the function that probes for HVF support.
> >
> > Andrea Bolognani (3):
> >   tests: Use domaincapsmock in qemucapabilitiestest
> >   qemu: Make virQEMUCapsProbeHVF() non-static
> >   tests: Mock virQEMUCapsProbeHVF()
> >
> >  src/qemu/qemu_capabilities.c | 4 ++--
> >  src/qemu/qemu_capabilities.h | 2 ++
> >  tests/domaincapsmock.c   | 6 ++
> >  tests/qemucapabilitiestest.c | 3 ++-
> >  4 files changed, 12 insertions(+), 3 deletions(-)
>
> This works. We now have a clean test suite on macOS 12:
>
> Ok: 252
> Expected Fail:  0
> Fail:   0
> Unexpected Pass:0
> Skipped:19
> Timeout:0
>
> For the series:
>
> Reviewed-by: Christophe de Dinechin 
> Tested-by: Christophe de Dinechin 

Thanks to both you and Jano! Pushed now.

I'll try switching the Cirrus CI configuration to macOS 12 again, but
I'm afraid it's still going to hit the qemuxml2argvtest issue that
doesn't show up on your machine. We'll see :)

-- 
Andrea Bolognani / Red Hat / Virtualization



[libvirt PATCH v2 2/2] ci: Fix paths shown in the website

2022-08-10 Thread Andrea Bolognani
Right now we're setting the prefix to a custom path, which
results in paths like

  /builds/libvirt/libvirt/vroot/etc/libvirt/virtqemud.conf

ending up in the generated HTML. In order to avoid that,
set the prefix and other installation paths to reasonable
default values by passing

  -Dsystem=true

and then take advantage of $DESTDIR support to still be able
to write the HTML files without requiring root privileges.

Reported-by: Martin Kletzander 
Signed-off-by: Andrea Bolognani 
Reviewed-by: Pavel Hrdina 
---
 .gitlab-ci.yml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 39c5f8fb6d..f6ed14bf65 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -67,9 +67,9 @@ website:
   before_script:
 - *script_variables
   script:
-- meson setup build --werror --prefix=$(pwd)/vroot || (cat build/meson-logs/meson-log.txt && exit 1)
-- ninja -C build install-web
-- mv vroot/share/doc/libvirt/html/ website
+- meson setup build --werror -Dsystem=true || (cat build/meson-logs/meson-log.txt && exit 1)
+- DESTDIR=$(pwd)/install ninja -C build install-web
+- mv install/usr/share/doc/libvirt/html/ website
   artifacts:
 expose_as: 'Website'
 name: 'website'
-- 
2.37.1



[libvirt PATCH v2 1/2] scripts: Add $DESTDIR support to meson-install-web.py

2022-08-10 Thread Andrea Bolognani
meson already supports $DESTDIR natively, but in this case
we're using a custom script and so we have to do some extra
work ourselves.

Signed-off-by: Andrea Bolognani 
---
 scripts/meson-install-web.py | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/scripts/meson-install-web.py b/scripts/meson-install-web.py
index a03f8523cd..086d465ae4 100755
--- a/scripts/meson-install-web.py
+++ b/scripts/meson-install-web.py
@@ -4,7 +4,11 @@ import os
 import shutil
 import sys
 
+destdir = os.environ.get('DESTDIR', os.sep)
+
 for desc in sys.argv[1:]:
 inst = desc.split(':')
-os.makedirs(inst[1], exist_ok=True)
-shutil.copy(inst[0], inst[1])
+src = inst[0]
+dst = os.path.join(destdir, inst[1].strip(os.sep))
+os.makedirs(dst, exist_ok=True)
+shutil.copy(src, dst)
-- 
2.37.1



[libvirt PATCH v2 0/2] ci: Fix paths shown in the website

2022-08-10 Thread Andrea Bolognani
Compare

  
https://abologna.gitlab.io/-/libvirt/-/jobs/2840280594/artifacts/website/manpages/virtqemud.html#files

with

  https://libvirt.org/manpages/virtqemud.html#files

Changes from [v1]:

  * don't use pathlib;
  * don't check whether DESTDIR is absolute.

[v1] https://listman.redhat.com/archives/libvir-list/2022-July/232929.html

Andrea Bolognani (2):
  scripts: Add $DESTDIR support to meson-install-web.py
  ci: Fix paths shown in the website

 .gitlab-ci.yml   | 6 +++---
 scripts/meson-install-web.py | 8 ++--
 2 files changed, 9 insertions(+), 5 deletions(-)

-- 
2.37.1



Re: [libvirt PATCH 2/3] scripts: Add $DESTDIR support to meson-install-web.py

2022-08-10 Thread Andrea Bolognani
On Tue, Aug 09, 2022 at 05:26:28PM +0200, Pavel Hrdina wrote:
> On Tue, Jul 19, 2022 at 04:17:44PM +0200, Andrea Bolognani wrote:
> > +destdir = os.getenv('DESTDIR')
> > +if destdir:
> > +    destdir = Path(destdir)
> > +    if not destdir.is_absolute():
> > +        print('$DESTDIR must be an absolute path')
> > +        sys.exit(1)
>
> I don't see any reason for this check. Yes, DESTDIR is mostly used with
> absolute paths, but the other two scripts where we use DESTDIR don't have
> this check, and meson itself doesn't complain if the path is
> not absolute either.
>
> That brings me to the other point that there is no need to use pathlib
> at all. We can just do the same as scripts/meson-install-dirs.py or
> scripts/meson-install-symlink.py:
>
> destdir = os.environ.get('DESTDIR', os.sep)
>
> for desc in sys.argv[1:]:
>     inst = desc.split(':')
>     src = inst[0]
>     dst = os.path.join(destdir, inst[1].strip(os.sep))
>     os.makedirs(dst, exist_ok=True)
>     shutil.copy(src, dst)

You're absolutely right. Somehow I had not realized that we already
had custom DESTDIR handling in other scripts that I could rip off ;)

v2 here:

  https://listman.redhat.com/archives/libvir-list/2022-August/233655.html

-- 
Andrea Bolognani / Red Hat / Virtualization



Re: [libvirt PATCH] schema: Add maxphysaddr element to hostcpu

2022-08-10 Thread Tim Wiederhake
On Tue, 2022-08-09 at 17:35 +0100, Daniel P. Berrangé wrote:
> On Tue, Aug 09, 2022 at 06:22:12PM +0200, Tim Wiederhake wrote:
> > The output of "virsh capabilities" was not conformant to the
> > capability.rng schema. Add the missing element to the schema.
> 
> The RNG files are applied against every XML file under tests/,
> so if we have an unnoticed gap in an RNG file, that is in turn
> a sign that we have insufficient XML files in tests/

This is the output of "virsh capabilities", not some input file. I
don't see a good way to test this (or mock for it) short of full
integration tests.

> > 
> > Fixes: c647bf29afb9890c792172ecf7db2c9c27babbb6
> > Signed-off-by: Tim Wiederhake 
> > ---
> >  src/conf/schemas/cputypes.rng | 3 +++
> >  1 file changed, 3 insertions(+)
> 
> > 
> > diff --git a/src/conf/schemas/cputypes.rng
> > b/src/conf/schemas/cputypes.rng
> > index 4ae386c3c0..d02f2f88cf 100644
> > --- a/src/conf/schemas/cputypes.rng
> > +++ b/src/conf/schemas/cputypes.rng
> > @@ -387,6 +387,9 @@
> >          <optional>
> >            <ref name="cpuMicrocode"/>
> >          </optional>
> > +        <optional>
> > +          <ref name="cpuMaxPhysAddr"/>
> > +        </optional>
> >          <optional>
> >            <ref name="cpuSignature"/>
> >          </optional>
> 
> So this change is fine, but we also ought to have an XML file to
> illustrate the usage

Is this a "reviewed-by"?

Thanks,
Tim

> 
> With regards,
> Daniel



Re: [libvirt PATCH] schema: Add maxphysaddr element to hostcpu

2022-08-10 Thread Daniel P . Berrangé
On Wed, Aug 10, 2022 at 06:29:25PM +0200, Tim Wiederhake wrote:
> On Tue, 2022-08-09 at 17:35 +0100, Daniel P. Berrangé wrote:
> > On Tue, Aug 09, 2022 at 06:22:12PM +0200, Tim Wiederhake wrote:
> > > The output of "virsh capabilities" was not conformant to the
> > > capability.rng schema. Add the missing element to the schema.
> > 
> > The RNG files are applied against every XML file under tests/,
> > so if we have an unnoticed gap in an RNG file, that is in turn
> > a sign that we have insufficient XML files in tests/
> 
> This is the output of "virsh capabilities", not some input file. I
> don't see a good way to test this (or mock for it) short of full
> integration tests.

Any XML files in tests/capabilityschemadata/ will be validated
against the schema.
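
Such a file would only need to exercise the new element, along these
lines (a hypothetical sketch; the attribute values are illustrative):

  <capabilities>
    <host>
      <cpu>
        <arch>x86_64</arch>
        <maxphysaddr mode='emulate' bits='39'/>
      </cpu>
    </host>
  </capabilities>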


> > > Fixes: c647bf29afb9890c792172ecf7db2c9c27babbb6
> > > Signed-off-by: Tim Wiederhake 
> > > ---
> > >  src/conf/schemas/cputypes.rng | 3 +++
> > >  1 file changed, 3 insertions(+)
> > 
> > > 
> > > diff --git a/src/conf/schemas/cputypes.rng
> > > b/src/conf/schemas/cputypes.rng
> > > index 4ae386c3c0..d02f2f88cf 100644
> > > --- a/src/conf/schemas/cputypes.rng
> > > +++ b/src/conf/schemas/cputypes.rng
> > > @@ -387,6 +387,9 @@
> > >          <optional>
> > >            <ref name="cpuMicrocode"/>
> > >          </optional>
> > > +        <optional>
> > > +          <ref name="cpuMaxPhysAddr"/>
> > > +        </optional>
> > >          <optional>
> > >            <ref name="cpuSignature"/>
> > >          </optional>
> > 
> > So this change is fine, but we also ought to have an XML file to
> > illustrate the usage
> 
> Is this a "reviewed-by"?

Not without an example XML file being added to the test suite.

With regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|



[PATCH v2] util: basic support for VFIO variant drivers

2022-08-10 Thread Laine Stump
Before a PCI device can be assigned to a guest with VFIO, that device
must be bound to the vfio-pci driver rather than to the device's
normal driver. The vfio-pci driver provides APIs that permit QEMU to
perform all the necessary operations to make the device accessible to
the guest.

There has been kernel work recently to support vendor/device-specific
VFIO variant drivers that provide the basic vfio-pci driver functionality
while adding support for device-specific operations (for example these
device-specific drivers are planned to support live migration of
certain devices). All that will be needed to make this functionality
available will be to bind the new vendor-specific driver to the device
(rather than the generic vfio-pci driver, which will continue to work
just without the extra functionality).

But until now libvirt has required that all PCI devices being assigned
to a guest with VFIO specifically have the "vfio-pci" driver bound to
the device. So even if the user manually binds a shiny new
vendor-specific vfio variant driver to the device (and puts
"managed='no'" in the config to prevent libvirt from changing the
binding), libvirt will just fail during startup of the guest (or
during hotplug) because the driver bound to the device isn't exactly
"vfio-pci".

This patch loosens that restriction a bit - rather than requiring that
the device be bound to "vfio-pci" exactly, it also checks whether the
driver name contains the string "vfio" at all, and in that case allows
the operation to continue. If the driver is in fact a VFIO variant, then
the assignment will succeed, but if it is not a VFIO variant then QEMU
will fail (and report the error back to libvirt).
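
In essence, the relaxed check amounts to something like this (an
illustrative sketch only; the helper name is not an actual libvirt
symbol):

  #include <stdbool.h>
  #include <string.h>

  /* Accept the plain vfio-pci driver as well as anything that looks
   * like a VFIO variant driver. If the name merely contains "vfio"
   * but the driver is not really a variant, QEMU rejects the device
   * later and libvirt reports that error. */
  static bool
  driverNameLooksLikeVfio(const char *drvName)
  {
      return drvName && strstr(drvName, "vfio") != NULL;
  }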

In the near future (possibly by kernel 6.0) there will be a
formal method of identifying a VFIO variant driver by looking in
sysfs; in the meantime the inexact, but simple, method in this patch
will allow users of the few existing VFIO variant drivers (and
developers of new VFIO variant drivers) to use their new drivers
without needing to remove libvirt from their setup - they can simply
pre-bind the device to the new driver, then use "managed='no'" in
their libvirt config.
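
For example, with a hypothetical variant driver pre-bound to the
device (the PCI address here is made up), the domain config would
contain:

  <hostdev mode='subsystem' type='pci' managed='no'>
    <source>
      <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
    </source>
  </hostdev>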

NB: this patch does *not* handle automatically determining the proper
vendor-specific driver and binding to it in the case of
"managed='yes'". This will be implemented later when there is a widely
available driver / device combo we can use for testing.

Signed-off-by: Laine Stump 
---
V1 here: https://listman.redhat.com/archives/libvir-list/2022-August/233327.html

Change in V2:

V1 used the output of modinfo to look for "vfio_pci" as an alias for a
driver to see if it was a VFIO variant driver.

As a result of discussion of V1, V2 is much simpler - it just assumes
that any driver with "vfio" in the name is a VFIO variant. This is
okay because 1) QEMU will still catch it and libvirt will properly log
the error if the driver isn't actually a VFIO variant, and 2) it's a
temporary situation, just to enable use of VFIO variant drivers with
libvirt until a standard method of detecting this is added to sysfs
(which, according to the discussion of V1, is coming in the near
future).

(NB: I did implement checking of /lib/modules/`uname -r`/modules.alias
as suggested by Erik, but it turned out that this caused the unit
tests to call uname(3) and open the modules.alias file on the test
host - for a proper unit test I would have also needed to mock these
two functions, and it seemed like too much complexity for a temporary
workaround. I've implemented Jason's suggestion here (accept any
driver with "vfio" in the name), which is similar to danpb's
suggestion (accept specifically the two drivers that are already in
the upstream kernel), but will also allow for new drivers that may be
under development.)

 src/hypervisor/virhostdev.c | 26 -
 src/util/virpci.c   | 76 ++---
 src/util/virpci.h   |  3 ++
 3 files changed, 82 insertions(+), 23 deletions(-)

diff --git a/src/hypervisor/virhostdev.c b/src/hypervisor/virhostdev.c
index c0ce867596..15b35fa75e 100644
--- a/src/hypervisor/virhostdev.c
+++ b/src/hypervisor/virhostdev.c
@@ -747,9 +747,8 @@ virHostdevPreparePCIDevicesImpl(virHostdevManager *mgr,
                                             mgr->inactivePCIHostdevs) < 0)
             goto reattachdevs;
         } else {
-            g_autofree char *driverPath = NULL;
-            g_autofree char *driverName = NULL;
-            int stub;
+            g_autofree char *drvName = NULL;
+            virPCIStubDriver drvType;
 
             /* Unmanaged devices should already have been marked as
              * inactive: if that's the case, we can simply move on */
@@ -769,18 +768,14 @@ virHostdevPreparePCIDevicesImpl(virHostdevManager *mgr,
              *   information about active / inactive device across
              *   daemon restarts has been implemented */
 
-            if (virPCIDeviceGetDriverPathAndName(pci,
-