On 10/6/2022 12:01 PM, Jason Gunthorpe wrote:
> On Wed, Sep 21, 2022 at 08:09:54PM -0300, Jason Gunthorpe wrote:
>> On Wed, Sep 21, 2022 at 03:30:55PM -0400, Steven Sistare wrote:
>>
>>> If Steve wants to keep it then someone needs to fix the deadlock in
>>> the vfio implementation before any
On Wed, Sep 21, 2022 at 08:09:54PM -0300, Jason Gunthorpe wrote:
> On Wed, Sep 21, 2022 at 03:30:55PM -0400, Steven Sistare wrote:
>
> > > If Steve wants to keep it then someone needs to fix the deadlock in
> > > the vfio implementation before any userspace starts to appear.
> >
> > The only
On 10/6/22 15:47, Daniel P. Berrangé wrote:
> On Wed, Oct 05, 2022 at 10:02:00AM -0400, Stefan Berger wrote:
>> Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for migrating a TPM across
>> shared storage.
>>
>> At this point this flag is not yet supported in 'virsh'.
>>
>> Signed-off-by: Stefan Berger
For the QEMU_SCHED_CORE_FULL case, all helper processes should be
placed into the same scheduling group as the QEMU process they
serve. It may happen, though, that a helper process is started
before QEMU (cold start of a domain). But we have the dummy
process running from which the QEMU process will
For the QEMU_SCHED_CORE_VCPUS case, the vCPU threads should all
be placed into one scheduling group, but not the emulator or any
of its threads. Therefore, as soon as vCPU TIDs are detected,
fork off a child which then creates a separate scheduling group
and adds all vCPU threads into it.
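
A minimal sketch of that approach using the prctl(2) core
scheduling interface from Linux 5.14 (vcpuTids/nvcpus are
placeholders; this illustrates the technique and is not the
actual libvirt code):

  #include <sys/prctl.h>
  #include <linux/prctl.h>   /* PR_SCHED_CORE_* (Linux >= 5.14) */
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static int
  scheduleVcpusTogether(const pid_t *vcpuTids, size_t nvcpus)
  {
      int status;
      pid_t child = fork();

      if (child < 0)
          return -1;

      if (child == 0) {
          size_t i;

          /* Create a new core scheduling cookie for this
           * single-threaded child only. */
          if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                    PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
              _exit(1);

          /* Push ("share to") the cookie onto each vCPU thread;
           * the cookie stays with them after the child exits. */
          for (i = 0; i < nvcpus; i++) {
              if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO,
                        vcpuTids[i], PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
                  _exit(1);
          }
          _exit(0);
      }

      if (waitpid(child, &status, 0) < 0)
          return -1;
      return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
  }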
As advertised in the previous commit, the QEMU_SCHED_CORE_VCPUS
case is implemented for the hotplug case. The implementation is
very similar to the cold boot case, except here we fork off for
every vCPU (because the implementation is done in
qemuProcessSetupVcpu() which is also the function that's called
The aim of this helper function is to spawn a child process in
which a new scheduling group is created. This dummy process will
then be used to distribute the scheduling group from (e.g. when
starting helper processes or QEMU itself). The process is not
needed for the QEMU_SCHED_CORE_NONE case (obviously) nor
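
For illustration, the dummy-process idea boils down to something
like this hedged sketch (an illustration of the idea, not the
actual helper):

  /* Fork a dummy child that creates a new core scheduling group
   * and then just sleeps; its PID can later be handed to
   * PR_SCHED_CORE_SHARE_FROM (or virCommandSetRunAmong()) so that
   * freshly spawned processes join the same trusted group. */
  pid_t dummy = fork();

  if (dummy == 0) {
      if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0) < 0)
          _exit(1);
      for (;;)
          pause();   /* keep the group's cookie alive until killed */
  }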
Ideally, we would just pick the best default and users wouldn't
have to intervene at all. But in some cases it may be handy not
to bother with SCHED_CORE at all, or to place helper processes
into the same group as QEMU. Introduce a knob in qemu.conf to
allow users to control this behaviour.
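
For illustration, the knob could look like this in qemu.conf
(value names assumed from the QEMU_SCHED_CORE_* constants in this
series; the shipped qemu.conf is authoritative):

  # When to use SCHED_CORE for QEMU and its helper processes:
  #   "none"     - don't bother with SCHED_CORE at all
  #   "vcpus"    - group only the vCPU threads
  #   "emulator" - group the whole QEMU process
  #   "full"     - group QEMU and its helper processes
  #sched_core = "none"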
Since its 5.14 release the Linux kernel allows userspace to
define trusted groups of processes/threads that can run on
sibling Hyper Threads (HT) at the same time. This is to mitigate
side channel attacks like L1TF or MDS. If there are no tasks to
fully utilize all HTs, then a HT will idle instead of running a
task from another group.
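
The kernel interface behind this is prctl(2) with PR_SCHED_CORE;
a minimal standalone example of creating such a trusted group for
the calling process (not libvirt code):

  #include <sys/prctl.h>
  #include <linux/prctl.h>   /* PR_SCHED_CORE_* (Linux >= 5.14) */
  #include <stdio.h>

  int main(void)
  {
      /* Give the calling thread group its own core scheduling
       * cookie: from now on, only tasks sharing this cookie may
       * run on sibling HTs of the same core at the same time. */
      if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0) < 0) {
          perror("prctl(PR_SCHED_CORE)");  /* e.g. ENODEV without SMT */
          return 1;
      }
      return 0;
  }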
For QEMU_SCHED_CORE_EMULATOR or QEMU_SCHED_CORE_FULL, the QEMU
process (and its vCPU threads) should be placed into its own
scheduling group. Since we have the dummy process running for
exactly this purpose use its PID as an argument to
virCommandSetRunAmong().
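
In prctl(2) terms this presumably amounts to the child pulling
the dummy's cookie between fork() and exec(); a hedged sketch,
where dummyPid/binary/argv are placeholders (note the kernel
requires thread scope for PR_SCHED_CORE_SHARE_FROM):

  /* In the child, after fork() and before exec(): adopt the core
   * scheduling cookie of the dummy process. */
  if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, dummyPid,
            PR_SCHED_CORE_SCOPE_THREAD, 0) < 0)
      _exit(1);
  execv(binary, argv);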
Signed-off-by: Michal Privoznik
This is just a resend of:
https://listman.redhat.com/archives/libvir-list/2022-August/233895.html
Michal Prívozník (8):
virprocess: Core Scheduling support
virCommand: Introduce APIs for core scheduling
qemu_conf: Introduce a knob to set SCHED_CORE
qemu_domain: Introduce
There are two modes of core scheduling that are handy wrt
virCommand:
1) create a new trusted group when executing a virCommand
2) place a freshly executed virCommand into the trusted group of
another process.
Therefore, implement these two new operations as new APIs:
virCommandSetRunAlone() and virCommandSetRunAmong().
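
A hedged usage sketch of the two APIs (signatures assumed from
the descriptions in this series; QEMU_BINARY and dummyPid are
placeholders):

  virCommand *cmd = virCommandNew(QEMU_BINARY);

  /* 1) give the new process its own trusted group ... */
  virCommandSetRunAlone(cmd);

  /* ... or 2) make it join the trusted group of an existing
   * process, e.g. the dummy process mentioned elsewhere: */
  virCommandSetRunAmong(cmd, dummyPid);

  if (virCommandRun(cmd, NULL) < 0)
      ; /* error handling */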
On Wed, Oct 05, 2022 at 10:02:00AM -0400, Stefan Berger wrote:
> Introduced VIR_MIGRATE_TPM_SHARED_STORAGE for migrating a TPM across
> shared storage.
>
> At this point this flag is not yet supported in 'virsh'.
>
> Signed-off-by: Stefan Berger
> ---
> include/libvirt/libvirt-domain.h | 8
On 10/5/22 16:02, Stefan Berger wrote:
> Do not create storage if the TPM_SHARED_STORAGE migration flag is set
> and this is an incoming migration, since in that case the storage
> directory must already exist. Also do not run swtpm_setup in this case.
>
> Pass the migration flag from migration related
On 10/5/22 16:02, Stefan Berger wrote:
> Always pass the --migration option to swtpm if swtpm supports it (starting
> with v0.8). Always apply the 'release-lock-outgoing' parameter with this
> option and apply the 'incoming' parameter for incoming migration so that
> swtpm releases the file lock
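
For illustration, the resulting swtpm command lines would gain
something like this (parameter names per the quoted text; consult
swtpm(8) for the authoritative syntax):

  # source side, so the lock is dropped when state migrates out:
  swtpm socket ... --migration release-lock-outgoing
  # destination side of an incoming migration:
  swtpm socket ... --migration release-lock-outgoing,incoming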
On 10/5/22 16:01, Stefan Berger wrote:
> This series of patches adds support for migrating vTPMs across hosts whose
> storage has been set up to share the directory structure holding the state
> of the TPM (swtpm). A new migration flag VIR_MIGRATE_TPM_SHARED_STORAGE is
> added to enable this. This
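
A hedged sketch of how an application might request this once the
flag lands (virDomainMigrateToURI3() is existing libvirt API; the
flag itself is only proposed by this series, and dom/the URI are
placeholders):

  unsigned int flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER |
                       VIR_MIGRATE_TPM_SHARED_STORAGE;

  if (virDomainMigrateToURI3(dom, "qemu+ssh://dst/system",
                             NULL, 0, flags) < 0)
      ; /* report error */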
On 10/5/22 16:02, Stefan Berger wrote:
> When migrating the TPM in a setup where shared storage for the TPM state
> files is set up between the hosts, we never remove the state.
>
> Signed-off-by: Stefan Berger
> ---
> src/qemu/qemu_tpm.c | 4
> 1 file changed, 4 insertions(+)
>
> diff --git
Libvirt stores the PID of a domain process (e.g. QEMU) when the domain
process is started, and the same PID is used when destroying the domain
process. There is always a possible race: before libvirt tries to kill
the domain process, the actual domain process may already be dead and
its PID reused by another process. In
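
One common mitigation for this PID-reuse race, sketched with
pidfd_open(2) (Linux >= 5.3); this illustrates the general
technique and is not necessarily what the patch does (pid is a
placeholder):

  #include <sys/syscall.h>
  #include <signal.h>
  #include <unistd.h>

  /* Obtain a pidfd right after starting the process, while the
   * PID is known to still refer to it; the pidfd pins that
   * identity even if the PID is later recycled. */
  int pidfd = syscall(SYS_pidfd_open, pid, 0);

  /* Later, signal via the pidfd: if the original process already
   * exited, this fails instead of killing an unrelated process. */
  if (pidfd >= 0) {
      syscall(SYS_pidfd_send_signal, pidfd, SIGKILL, NULL, 0);
      close(pidfd);
  }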
On Thu, Oct 06, 2022 at 10:52:03AM +0200, Peter Krempa wrote:
> On Thu, Oct 06, 2022 at 09:20:08 +0100, Daniel P. Berrangé wrote:
> > On Thu, Oct 06, 2022 at 09:42:26AM +0200, Peter Krempa wrote:
> > > On Tue, Oct 04, 2022 at 08:51:50 -0400, Daniel P. Berrangé wrote:
> > > > This refresh switches
On Thu, Oct 06, 2022 at 09:20:08 +0100, Daniel P. Berrangé wrote:
> On Thu, Oct 06, 2022 at 09:42:26AM +0200, Peter Krempa wrote:
> > On Tue, Oct 04, 2022 at 08:51:50 -0400, Daniel P. Berrangé wrote:
> > > This refresh switches the CI for contributors to be triggered by merge
> > > requests.
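
The kind of rule involved looks roughly like this in
.gitlab-ci.yml (illustrative only; the real libvirt CI uses
shared templates):

  job:
    rules:
      # run for merge request pipelines ...
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      # ... but no longer for plain branch pushes in forks
      - if: '$CI_PIPELINE_SOURCE == "push"'
        when: never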
On 10/6/22 10:46, Erik Skultety wrote:
> libvirt-derived repos recently changed how and when CI containers are
> built, and a different naming scheme was adopted to differentiate
> between the two. Update the integration pipeline config to reflect
> this change.
>
> Signed-off-by:
libvirt-derived repos recently changed how and when CI containers are
built, and a different naming scheme was adopted to differentiate
between the two. Update the integration pipeline config to reflect
this change.
Signed-off-by: Erik Skultety
---
ci/integration.yml | 10
On Thu, Oct 06, 2022 at 09:42:26AM +0200, Peter Krempa wrote:
> On Tue, Oct 04, 2022 at 08:51:50 -0400, Daniel P. Berrangé wrote:
> > This refresh switches the CI for contributors to be triggered by merge
> > requests. Pushing to a branch in a fork will no longer run CI pipelines,
> > in order to
On Tue, Oct 04, 2022 at 08:51:50 -0400, Daniel P. Berrangé wrote:
> This refresh switches the CI for contributors to be triggered by merge
> requests. Pushing to a branch in a fork will no longer run CI pipelines,
> in order to avoid consuming CI minutes. To regain the original behaviour
>
On Tue, Oct 04, 2022 at 08:51:49 -0400, Daniel P. Berrangé wrote:
> The jobs in other libvirt projects have all been renamed, due to
> the need to have two parallel sets of jobs for different execution
> scenarios. Since the integration tests are targeting 'master'
> branch pipelines in the