When using shared storage there is no need to apply security labels on the
storage, since the files must already have been labeled on the source
side, and we must assume that the source and destination sides have been
set up to use the same uid and gid for running swtpm, as well as share the
same
Pass the --migration option to swtpm if swtpm supports it (starting
with v0.8) and if the TPM's state is written to shared storage. If this
is the case, apply the 'release-lock-outgoing' parameter with this
option, and apply the 'incoming' parameter for incoming migration, so that
swtpm releases the
Add support for storing private TPM-related data. The first piece of private
data relates to a capability of the started swtpm, indicating whether
it is capable of migration with a shared-storage setup, since that requires
support for certain command line flags that only became available
in
Never remove the TPM state on outgoing migration if the storage setup
has shared storage for the TPM state files. Also, do not do the security
cleanup on outgoing migration if shared storage is detected.
Signed-off-by: Stefan Berger
---
src/qemu/qemu_domain.c | 12 +++-
Do not create storage if the TPM state files are on shared storage and
there's an incoming migration since in this case the storage directory
must already exist. Also do not run swtpm_setup in this case.
Signed-off-by: Stefan Berger
---
src/qemu/qemu_tpm.c | 10 +-
1 file changed, 9
Add support for parsing swtpm 'cmdarg-migration' capability (since v0.8).
Signed-off-by: Stefan Berger
---
src/util/virtpm.c | 1 +
src/util/virtpm.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/util/virtpm.c b/src/util/virtpm.c
index 91db0f31eb..19850de1c8 100644
---
This series of patches adds support for migrating vTPMs across hosts whose
storage has been set up to share the directory structure holding the state
of the TPM (swtpm). The existence of shared storage influences the
management of the directory structure holding the TPM state, which for
example is
On Tue, Oct 18, 2022 at 04:37:32PM +0200, Martin Kletzander wrote:
> Due to the setup of the modular daemon service files the reverting to
> non-socket
> activated daemons could have never worked. The reason is that masking the
> socket files prevents starting the daemons since they require (as
On Tue, Oct 18, 2022 at 04:37:31PM +0200, Martin Kletzander wrote:
> Similarly to commit ec7e31ed3206, allow traditional daemon activation for
> virtproxyd.
I'm not convinced we want to do this.
virtproxyd has supported socket activation since day 1, so I think
we are right to enforce this, as we
After dealing with some issues and talking to Daniel it seems we don't really
need to properly support non-socket activation for all the daemons on systemd.
I'm not sure how to phrase that in the documentation, but I gave it a shot.
Martin Kletzander (3):
docs: Specify reverting to traditional
Similarly to commit ec7e31ed3206, allow traditional daemon activation for
virtproxyd.
Signed-off-by: Martin Kletzander
---
src/remote/virtproxyd.service.in | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/remote/virtproxyd.service.in
Commit 59d30adacd1d added information about how to properly adjust the
libvirt-guests service file when disabling socket activation, but it was still
not clear that this applies only in the aforementioned case.
Signed-off-by: Martin Kletzander
---
docs/manpages/libvirtd.rst | 10 --
Due to the way the modular daemon service files are set up, reverting to
non-socket-activated daemons could never have worked. The reason is that masking
the socket files prevents starting the daemons, since they require the sockets
(as in Requires= rather than Wants= in the service file). On top of
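The Requires= coupling being described can be illustrated with a unit-file fragment (socket names as used by libvirt's modular daemons; the exact contents of the generated unit may differ):

```ini
[Unit]
# Because these are Requires= (not Wants=), masking any of the sockets
# makes a plain "systemctl start virtqemud" fail outright instead of
# falling back to a traditionally started daemon.
Requires=virtqemud.socket
Requires=virtqemud-ro.socket
Requires=virtqemud-admin.socket
```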
On Mon, Oct 17, 2022 at 12:16:41 -0400, Cole Robinson wrote:
> Reproducer:
>
> ./build/tools/virsh \
> --connect test:///`pwd`/examples/xml/test/testnodeinline.xml \
> vol-list default-pool
>
> Fixes: b3e33a0ef7e62169175280c647aa9ac361bd554d
>
> Signed-off-by: Cole Robinson
> ---
On Mon, Oct 17, 2022 at 12:16:40 -0400, Cole Robinson wrote:
> Signed-off-by: Cole Robinson
> ---
> examples/xml/test/testnodeinline.xml | 22 ++
> 1 file changed, 22 insertions(+)
Reviewed-by: Peter Krempa
On Mon, Oct 17, 2022 at 12:16:39 -0400, Cole Robinson wrote:
> The testdriver has xmlns support for overriding object default
> state. demo it by pausing a VM
>
> Signed-off-by: Cole Robinson
> ---
> examples/xml/test/testnodeinline.xml | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
On Sun, Oct 16, 2022 at 03:06:17PM -0400, Cole Robinson wrote:
> On 10/7/22 7:42 AM, Daniel P. Berrangé wrote:
> > The libvirt QEMU driver provides all the functionality required for
> > launching a guest on AMD SEV(-ES) platforms, with a configuration
> > that enables attestation of the launch
On Sun, Oct 16, 2022 at 03:27:39PM -0400, Cole Robinson wrote:
> On 10/7/22 7:43 AM, Daniel P. Berrangé wrote:
> > Despite efforts to make the virt-qemu-sev-validate tool friendly, it is
> > a certainty that almost everyone who tries it will hit false negative
> > results, getting a failure
On Sun, Oct 16, 2022 at 03:09:43PM -0400, Cole Robinson wrote:
> On 10/7/22 7:43 AM, Daniel P. Berrangé wrote:
> > When validating a SEV-ES guest, we need to know the CPU count and VMSA
> > state. We can get the CPU count directly from libvirt's guest info. The
> > VMSA state can be constructed
On Sun, Oct 16, 2022 at 03:00:25PM -0400, Cole Robinson wrote:
> On 10/7/22 7:43 AM, Daniel P. Berrangé wrote:
> > The VMSA files contain the expected CPU register state for the VM. Their
> > content varies based on a few pieces of the stack
> >
> > - AMD CPU architectural initial state
> > -
On Sun, Oct 16, 2022 at 02:54:47PM -0400, Cole Robinson wrote:
> On 10/7/22 7:42 AM, Daniel P. Berrangé wrote:
> > The virt-qemu-sev-validate program will compare a reported SEV/SEV-ES
> > domain launch measurement, to a computed launch measurement. This
> > determines whether the domain has been
There are a couple of scenarios where we need to reflect a MAC change
done in the guest:
1) domain restore from a file (here, we don't store the updated MAC
in the save file and thus on restore create the macvtap with
the original MAC),
2) reconnecting to a running domain (here, the guest
Parts of the code that respond to the NIC_RX_FILTER_CHANGED
event are going to be re-used. Separate them into a function
(qemuDomainSyncRxFilter()) and move the code into qemu_domain.c
so that it can be re-used from other parts of the driver.
There's one slight change though: instead of passing
See the last patch for an explanation. I haven't found a GitLab issue
for this, nor an open bug. But I remember somebody complaining about problems
during restore from a save file, on IRC perhaps?
Michal Prívozník (5):
processNicRxFilterChangedEvent: Free @guestFilter and @hostFilter
When restoring a domain from a save image, we need to query QEMU
for some runtime information that is not stored in the status XML,
or, even if it is, is not parsed back (e.g. virtio-mem actual size, or
soon rx-filters for macvtaps).
During migration, this is done in qemuMigrationDstFinishFresh(),
or in
There's no need to call virNetDevRxFilterFree() explicitly, when
corresponding variables can be declared as
g_autoptr(virNetDevRxFilter).
Signed-off-by: Michal Privoznik
---
src/qemu/qemu_driver.c | 10 +++---
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git
We are not updating the domain XML to the new MAC address, merely
setting the host side of the macvtap. We don't need a MODIFY job for
that; QUERY is just fine.
This allows us to process the event should it occur during
migration.
Signed-off-by: Michal Privoznik
---
src/qemu/qemu_driver.c | 2 +-
1
On Mon, Oct 17, 2022 at 11:17:56AM -0400, Stefan Berger wrote:
>
>
> On 10/17/22 09:48, Daniel P. Berrangé wrote:
> > On Mon, Oct 17, 2022 at 09:39:52AM -0400, Stefan Berger wrote:
> > >
> > >
>
> >
> > The key is in qemuMigrationSrcIsSafe(), and how it determines if a
> > migration is safe.
On 10/17/22 19:53, Jim Fehlig wrote:
> Signed-off-by: Jim Fehlig
> ---
> NEWS.rst | 5 +
> 1 file changed, 5 insertions(+)
>
Reviewed-by: Michal Privoznik
And you get bonus points for remembering to write NEWS item :-)
Michal