On Wed, Aug 24, 2022 at 10:56:22AM -0500, Jonathon Jongsma wrote:
> On 8/24/22 2:09 AM, Erik Skultety wrote:
> > On Tue, Aug 23, 2022 at 12:43:03PM -0500, Jonathon Jongsma wrote:
> > > Openstack developers reported that newly-created mdevs were not
> > > recognized by libvirt until after a libvirt daemon restart.
On 8/24/22 2:09 AM, Erik Skultety wrote:
On Tue, Aug 23, 2022 at 12:43:03PM -0500, Jonathon Jongsma wrote:
Openstack developers reported that newly-created mdevs were not
recognized by libvirt until after a libvirt daemon restart. The source
of the problem appears to be that when libvirt gets
Since they are simply normal RPC messages, keep-alive packets are
subject to the "max_client_requests" limit just like any API call.
Thus, if a client hits the "max_client_requests" limit and all the
pending API calls take a long time to complete, it may result in
keep-alives firing and
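For context, the limits interacting here are tunable in libvirtd.conf; the values below are the documented defaults, shown only to illustrate the interaction described above:

```
# Maximum number of concurrent API calls processed per client connection.
# Keep-alive packets, being ordinary RPC messages, count against this
# limit too, which is the problem described above.
max_client_requests = 5

# Send a keep-alive probe after 5 seconds of idleness and drop the
# client after 5 unanswered probes.
keepalive_interval = 5
keepalive_count = 5
```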
On Wed, Aug 24, 2022 at 03:12:53PM +0100, Daniel P. Berrangé wrote:
On Wed, Aug 24, 2022 at 04:09:31PM +0200, Martin Kletzander wrote:
On Tue, Aug 23, 2022 at 12:10:14PM +0100, Daniel P. Berrangé wrote:
> On Tue, Aug 23, 2022 at 01:01:49PM +0200, Martin Kletzander wrote:
> > On Tue, Aug 23,
On Tue, Aug 23, 2022 at 12:10:14PM +0100, Daniel P. Berrangé wrote:
On Tue, Aug 23, 2022 at 01:01:49PM +0200, Martin Kletzander wrote:
On Tue, Aug 23, 2022 at 11:39:10AM +0100, Daniel P. Berrangé wrote:
> On Thu, Aug 04, 2022 at 03:07:17PM +0200, Martin Kletzander wrote:
> > Signed-off-by:
This patch moves qemuDomainObjEndJob() into
src/conf/virdomainjob as the universal virDomainObjEndJob().
Signed-off-by: Kristina Hanicova
---
docs/kbase/internals/qemu-threads.rst | 6 +-
src/conf/virdomainjob.c | 28 +
src/conf/virdomainjob.h | 2 +
This patch removes virCHDomainObjBeginJob() and replaces it with
a call to the generalized virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova
---
src/ch/ch_domain.c | 51 +-
src/ch/ch_domain.h | 4
src/ch/ch_driver.c | 20 +-
On Wed, Aug 24, 2022 at 04:09:31PM +0200, Martin Kletzander wrote:
> On Tue, Aug 23, 2022 at 12:10:14PM +0100, Daniel P. Berrangé wrote:
> > On Tue, Aug 23, 2022 at 01:01:49PM +0200, Martin Kletzander wrote:
> > > On Tue, Aug 23, 2022 at 11:39:10AM +0100, Daniel P. Berrangé wrote:
> > > > On Thu,
Signed-off-by: Kristina Hanicova
---
docs/kbase/internals/qemu-threads.rst | 12 +--
src/conf/virdomainjob.c | 30 +++
src/conf/virdomainjob.h | 6 ++
src/libvirt_private.syms | 2 ++
src/qemu/qemu_backup.c
Signed-off-by: Kristina Hanicova
---
src/conf/virdomainjob.c | 44 +++
src/conf/virdomainjob.h | 6 ++
src/libvirt_private.syms | 2 ++
src/qemu/qemu_domain.c | 2 +-
src/qemu/qemu_domainjob.c | 44 ---
This patch removes virCHDomainObjEndJob() and replaces it with
a call to the generalized virDomainObjEndJob().
Signed-off-by: Kristina Hanicova
---
src/ch/ch_domain.c | 18 --
src/ch/ch_domain.h | 3 ---
src/ch/ch_driver.c | 20 ++--
3 files changed, 10
This patch removes libxlDomainObjBeginJob() and replaces it with
a call to the generalized virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova
---
src/libxl/libxl_domain.c | 62 ++---
src/libxl/libxl_domain.h | 6
src/libxl/libxl_driver.c| 48
This patch adds the generalized job object into the domain object
so that it can be used by all drivers without the need to extract
it from the private data.
Because of this, the job object needs to be created and set
during the creation of the domain object. This patch also extends
xmlopt with
This patch moves qemuDomainObjBeginJob() into
src/conf/virdomainjob as the universal virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova
---
docs/kbase/internals/qemu-threads.rst | 8 +-
src/conf/virdomainjob.c | 18 +++
src/conf/virdomainjob.h | 4 +
Although these functions, and those in the following two patches, are
for now used only by the qemu driver, it makes sense to have all of
the begin-job functions in the same file.
Signed-off-by: Kristina Hanicova
---
docs/kbase/internals/qemu-threads.rst | 10 ++--
src/conf/virdomainjob.c
Struct virDomainJobData is meant for statistics for async jobs.
It was used to keep track of only two attributes, one of which is
also in the generalized virDomainJobObj ("started") and one which
is always set to the same value, if any job is active
("jobType").
This patch removes usage &
This patch uses the job object directly in the domain object and
removes the job object from private data of all drivers that use
it as well as other relevant code (initializing and freeing the
structure).
Signed-off-by: Kristina Hanicova
---
src/ch/ch_domain.c | 29 ++--
This patch moves qemuDomainObjBeginJobInternal() into the hypervisor
directory as virDomainObjBeginJobInternal() so that it can be used by
other hypervisor drivers in the following patches.
Signed-off-by: Kristina Hanicova
---
po/POTFILES | 1 +
src/hypervisor/domain_job.c | 250
This patch removes virLXCDomainObjBeginJob() and replaces it with
a call to the generalized virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova
---
src/lxc/lxc_domain.c | 57
src/lxc/lxc_domain.h | 6 -
src/lxc/lxc_driver.c | 46
This patch removes virLXCDomainObjEndJob() and replaces it with
a call to the generalized virDomainObjEndJob().
Signed-off-by: Kristina Hanicova
---
src/lxc/lxc_domain.c | 20
src/lxc/lxc_domain.h | 4
src/lxc/lxc_driver.c | 57 +++-
This patch removes libxlDomainObjEndJob() and replaces it with
a call to the generalized virDomainObjEndJob().
Signed-off-by: Kristina Hanicova
---
src/libxl/libxl_domain.c | 27 ++--
src/libxl/libxl_domain.h | 4 ---
src/libxl/libxl_driver.c| 51
This series finishes the generalization of the jobs-related code across
jobs-using drivers.
This is the last one, I promise.
Kristina Hanicova (17):
qemu & hypervisor: move qemuDomainObjBeginJobInternal() into hypervisor
libxl: remove usage of virDomainJobData
move files:
There may be a case where the callback structure exists with no
callbacks set (see the following patches). This patch adds a check for
each specific callback before using it.
Signed-off-by: Kristina Hanicova
---
src/conf/virdomainjob.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff
The following patches move the job object into the domain object as a
member. Because of this, domain_conf (where the domain object is
defined) needs to import the file with the job object.
It makes sense to move jobs to the same level as the domain_conf:
into src/conf/
Signed-off-by: Kristina
On 8/24/22 13:19, Michal Prívozník wrote:
> On 8/23/22 16:19, Michal Prívozník wrote:
>> On 8/18/22 16:20, Martin Kletzander wrote:
>>> Just like the socket, remove the pidfile when TPM emulator is being
>>> stopped. In
>>> order to make this a bit cleaner, try to remove it even if swtpm_ioctl
On 8/23/22 16:19, Michal Prívozník wrote:
> On 8/18/22 16:20, Martin Kletzander wrote:
>> Just like the socket, remove the pidfile when TPM emulator is being stopped.
>> In
>> order to make this a bit cleaner, try to remove it even if swtpm_ioctl does
>> not
>> exist.
>>
>> Signed-off-by:
For QEMU_SCHED_CORE_EMULATOR or QEMU_SCHED_CORE_FULL the QEMU
process (and its vCPU threads) should be placed into its own
scheduling group. Since we have the dummy process running for
exactly this purpose, use its PID as an argument to
virCommandSetRunAmong().
Signed-off-by: Michal Privoznik
For QEMU_SCHED_CORE_FULL case, all helper processes should be
placed into the same scheduling group as the QEMU process they
serve. It may happen though, that a helper process is started
before QEMU (cold start of a domain). But we have the dummy
process running from which the QEMU process will
As advertised in the previous commit, QEMU_SCHED_CORE_VCPUS case
is implemented for hotplug case. The implementation is very
similar to the cold boot case, except here we fork off for every
vCPU (because the implementation is done in
qemuProcessSetupVcpu() which is also the function that's called
Ideally, we would just pick the best default and users wouldn't
have to intervene at all. But in some cases it may be handy not to
bother with SCHED_CORE at all, or to place helper processes into
the same group as QEMU. Introduce a knob in qemu.conf to allow
users to control this behaviour.
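The knob described above might look something like the following qemu.conf fragment; the option name and accepted values here are an assumption for illustration, so check the qemu.conf shipped with your libvirt version:

```
# Hypothetical illustration of the core-scheduling knob:
#
#   "none"     - no core scheduling (the kernel's default behaviour)
#   "vcpus"    - vCPU threads form one trusted group, emulator excluded
#   "emulator" - QEMU and all its threads form one trusted group
#   "full"     - QEMU plus all its helper processes share one group
#
#sched_core = "none"
```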
The aim of this helper function is to spawn a child process in
which a new scheduling group is created. This dummy process is
then used as the source from which the scheduling group is
distributed (e.g. when starting helper processes or QEMU itself).
The process is not needed for the QEMU_SCHED_CORE_NONE case
(obviously) nor
For the QEMU_SCHED_CORE_VCPUS case, the vCPU threads should all be
placed into one scheduling group, but not the emulator or any of its
threads. Therefore, as soon as vCPU TIDs are detected, fork off a
child which then creates a separate scheduling group and adds all
vCPU threads into it.
Please
v4 of:
https://listman.redhat.com/archives/libvir-list/2022-August/233683.html
diff to v3:
- Fixed wording in qemu.conf,
- Removed dead code in qemuProcessSetupVcpuSchedCoreHelper(),
- Made the dummy child process (qemuDomainSchedCoreStart()) exit early,
instead of jumping onto error label.
Since its 5.14 release the Linux kernel allows userspace to
define trusted groups of processes/threads that can run on
sibling Hyper Threads (HT) at the same time. This is to mitigate
side channel attacks like L1TF or MDS. If there are no tasks to
fully utilize all HTs, then an HT will idle instead
There are two modes of core scheduling that are handy wrt
virCommand:
1) create new trusted group when executing a virCommand
2) place freshly executed virCommand into the trusted group of
another process.
Therefore, implement these two new operations as new APIs:
virCommandSetRunAlone() and virCommandSetRunAmong().
On Tue, Aug 23, 2022 at 12:43:03PM -0500, Jonathon Jongsma wrote:
> Openstack developers reported that newly-created mdevs were not
> recognized by libvirt until after a libvirt daemon restart. The source
> of the problem appears to be that when libvirt gets the udev 'add'
> event, the sysfs tree
On Tue, Aug 23, 2022 at 06:43:26PM +0200, Peter Krempa wrote:
> On Tue, Aug 23, 2022 at 18:22:37 +0200, Erik Skultety wrote:
> > Coverity reports:
> > virNWFilterSnoopIPLeaseUpdate(virNWFilterSnoopIPLease *ipl,
> > time_t timeout)
> > {
> > if