From: Michal Privoznik
This is similar to the previous commit. The SGX memory backend needs
to access /dev/sgx_vepc and /dev/sgx_provision. Create these
nodes in the domain's private /dev when required by the domain's config.
Signed-off-by: Michal Privoznik
Signed-off-by: Haibin Huang
---
src/qemu/qemu_n
With NUMA config:
...
<memory model='sgx-epc'>
  <source>
    <nodemask>0-1</nodemask>
  </source>
  <target>
    <size unit='KiB'>512</size>
    <node>0</node>
  </target>
</memory>
...
Without NUMA config:
...
<memory model='sgx-epc'>
  <target>
    <size unit='KiB'>512</size>
  </target>
</memory>
...
Signed-off-by: Lin Yang
Signed-off-by: Michal Privoznik
Signed-off-by: Haibin Huang
---
docs/formatdomain.rst | 25 +
According to the result of parsing the XML, add the SGX EPC memory
backend argument to the QEMU command line.
With NUMA config:
#qemu-system-x86_64 \
.. \
-object
'{"qom-type":"memory-backend-epc","id":"memepc0","prealloc":true,"size":67108864,"host-nodes":[0,1],"policy":"
From: Haibin Huang
Generate the QMP command for query-sgx-capabilities; the command
returns the SGX capabilities from QMP.
{"execute":"query-sgx-capabilities"}
the right reply:
{"return":
{
"sgx": true,
"section-size": 197132288,
"flc": true
}
}
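A minimal sketch of parsing such a reply on the client side, using the successful reply quoted above (the error path simply surfaces QMP's "error" member; function name is illustrative, not libvirt's actual code):

```python
import json

# The successful reply quoted above.
reply = '{"return": {"sgx": true, "section-size": 197132288, "flc": true}}'

def parse_sgx_caps(reply_json):
    """Extract SGX capabilities from a query-sgx-capabilities reply.

    Raises RuntimeError if QEMU returned an "error" member instead of
    "return".
    """
    msg = json.loads(reply_json)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("desc", "unknown QMP error"))
    caps = msg["return"]
    return caps["sgx"], caps["section-size"], caps["flc"]

sgx, section_size, flc = parse_sgx_caps(reply)
```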
the error reply:
{
From: Haibin Huang
the QMP capabilities:
{"return":
{
"sgx": true,
"section-size": 1024,
"flc": true
}
}
the domain capabilities:
yes
1
Signed-off-by: Michal Privoznik
Signed-off-by: Haibin Huang
---
src/qemu/qemu_capabilities.c |
From: Michal Privoznik
As advertised in previous commits, QEMU needs to access
/dev/sgx_vepc and /dev/sgx_provision files when SGX memory
backend is configured. And if it weren't for QEMU's namespaces,
we wouldn't dare to relabel them, because they are system-wide
files. But if namespaces are use
From: Haibin Huang
Extend hypervisor capabilities to include the sgx feature. When available,
the hypervisor supports launching a VM with SGX on an Intel platform.
The SGX feature tag provides additional details like section size and
sgx1 or sgx2.
Signed-off-by: Haibin Huang
Signed-off-by: Michal Pri
From: Haibin Huang
Signed-off-by: Michal Privoznik
Signed-off-by: Haibin Huang
---
src/conf/domain_capabilities.c | 11 +++
src/conf/domain_capabilities.h | 22 ++
src/libvirt_private.syms | 1 +
3 files changed, 34 insertions(+)
diff --git a/src/conf/domain
From: Michal Privoznik
SGX memory backend needs to access /dev/sgx_vepc (which allows
userspace to allocate "raw" EPC without an associated enclave)
and /dev/sgx_provision (which allows creating provisioning
enclaves). Allow these two devices in CGroups if a domain is
configured so.
Signed-off-b
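The CGroup change can be sketched as follows. The rule string uses the cgroup-v1 devices-controller syntax (`c major:minor perms`); the 10:125 numbers in the test are hypothetical, since the real major:minor of /dev/sgx_vepc and /dev/sgx_provision are host-specific:

```python
import os
import stat

def device_acl_rule(path, perms="rw", st=None):
    """Format a cgroup-v1 devices.allow rule for a character device.

    `st` may be injected for testing; normally it comes from
    os.stat(path). The major:minor numbers are host-specific.
    """
    st = st or os.stat(path)
    if not stat.S_ISCHR(st.st_mode):
        raise ValueError("%s is not a character device" % path)
    major = os.major(st.st_rdev)
    minor = os.minor(st.st_rdev)
    return "c %d:%d %s" % (major, minor, perms)
```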
The previous v15 version can be found here:
https://listman.redhat.com/archives/libvir-list/2022-August/234030.html
v14 version:
https://listman.redhat.com/archives/libvir-list/2022-July/233257.html
Diff to v15:
- Updated libvirt target version to latest 8.9.0 in formatdomain.rst
- Sum up all sgx
Signed-off-by: Jiri Denemark
---
Notes:
Version 2:
- new patch
tools/virsh-completer-host.c | 50
tools/virsh-completer-host.h | 5
tools/virsh-host.c | 1 +
3 files changed, 56 insertions(+)
diff --git a/tools/virsh-completer-host.
Signed-off-by: Jiri Denemark
Reviewed-by: Ján Tomko
---
Notes:
Version 2:
- no change
src/cpu/cpu_ppc64.c | 20
tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 6 +++---
tests/domaincapsdata/qemu_5.0.0.ppc64.xml | 8
tests/domaincapsdat
This patch is effectively a no-op, but I wanted to initialize
.getVendorForModel explicitly as implementing this function does not
even make sense on ARM. The CPU models in our CPU map are only used for
describing host CPU in capabilities XML and cannot be used for guest CPU
definition in domain XM
The API can be used to get usability blockers for an unusable CPU model,
which is not obvious. Let's explicitly document this behavior as it is
now mentioned in the documentation of domain capabilities XML.
Signed-off-by: Jiri Denemark
---
Notes:
Version 2:
- new patch
src/libvirt-host
Since commit "cpu_x86: Disable blockers from unusable CPU models"
(v3.8.0-99-g9c9620af1d) we explicitly disable CPU features reported by
QEMU as usability blockers for a particular CPU model when creating
baseline or host-model CPU definition. When QEMU changed canonical names
for some features (mo
Signed-off-by: Jiri Denemark
---
Notes:
Version 2:
- patch 10/11 from v1 and the corresponding section in NEWS dropped
- mention --model for virsh hypervisor-cpu-baseline
- mention CPU blockers translation bug
- mention docs improvements
NEWS.rst | 22 ++
This option can be used as a shortcut for creating a single XML with
just a CPU model name and no features:
$ virsh hypervisor-cpu-baseline --model Skylake-Server
Skylake-Server
Signed-off-by: Jiri Denemark
---
Notes:
Vers
The ppc64 CPU code still has to load and parse the CPU map every time it
needs to look at it, which can make some operations pretty slow. Other
archs already switched to loading the CPU map once and keeping the
parsed structure in memory. Let's switch ppc64 as well.
Signed-off-by: Jiri Denemark
Re
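The load-once pattern the other architectures use can be sketched like this. The class and field names are illustrative, not libvirt's actual symbols; the point is that the parsed structure is cached after the first load:

```python
import threading

class CPUMap:
    """Parsed CPU map, loaded once and cached, mirroring the pattern the
    other archs already use (names here are illustrative)."""

    _cache = None
    _lock = threading.Lock()
    load_count = 0          # for demonstration only

    def __init__(self, models):
        self.models = models

    @classmethod
    def get(cls):
        # All callers share one parsed copy; parsing happens at most once.
        with cls._lock:
            if cls._cache is None:
                cls._cache = cls._load()
            return cls._cache

    @classmethod
    def _load(cls):
        # Stands in for reading and parsing the cpu_map XML from disk.
        cls.load_count += 1
        return cls({"POWER8": {}, "POWER9": {}, "POWER10": {}})
```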
So far the QEMU driver does not get the CPU model vendor from QEMU
directly and it has to ask the CPU driver for the info stored in the CPU map.
Signed-off-by: Jiri Denemark
Reviewed-by: Ján Tomko
---
Notes:
Version 2:
- no change
src/cpu/cpu.c| 25 +
src/c
Signed-off-by: Jiri Denemark
Reviewed-by: Ján Tomko
---
Notes:
Version 2:
- no change
src/cpu/cpu_x86.c | 19
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 88 -
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 88 -
tes
Signed-off-by: Jiri Denemark
---
Notes:
Version 2:
- new patch
docs/formatdomaincaps.rst | 18 ++
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/docs/formatdomaincaps.rst b/docs/formatdomaincaps.rst
index 6ce780fb69..afade16bc0 100644
--- a/docs/formatdom
Since the function always returns 0, we can just return void and make
callers simpler.
Signed-off-by: Jiri Denemark
Reviewed-by: Ján Tomko
---
Notes:
Version 2:
- no change
src/conf/domain_capabilities.c | 24 +---
src/conf/domain_capabilities.h | 11 ++-
s
Even though several CPU models from various vendors are reported as
usable on a given host, a user may still want to use only those that match
the host vendor. Currently the only place where users can check the
vendor of each CPU model is our CPU map, which is considered internal
and users should not
The only part of qemuCaps both functions are interested in is the CPU
architecture. Changing them to expect just virArch makes the functions
more reusable.
Signed-off-by: Jiri Denemark
Reviewed-by: Ján Tomko
---
Notes:
Version 2:
- no change
src/qemu/qemu_capabilities.c | 18 +
See individual patches for details.
Patches 4 and 6 were truncated as they include a lot of boring changes
in tests, complete patches are available in my gitlab repo:
git fetch g...@gitlab.com:jirkade/libvirt.git domaincaps
Version 2:
- original patch 10/11 was replaced by several new patche
On Fri, Oct 07, 2022 at 13:10:40 +0200, Michal Privoznik wrote:
> In recent commit of v8.8.0-41-g41eb0f446c I've suggested during
> review to put both xdr_free() calls under error label, assuming
> that xdr_free() accepts NULL and thus is a NOP when the control
> jumps onto the label even before ei
On Fri, Oct 07, 2022 at 03:22:33PM +0200, Michal Prívozník wrote:
> On 10/5/22 12:51, Daniel P. Berrangé wrote:
> > Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> > exposed in virsh. It is not especially pleasant, however, using the raw
> > QMP JSON syntax. QEMU has a too
On 10/5/22 12:51, Daniel P. Berrangé wrote:
> Libvirt provides QMP passthrough APIs for the QEMU driver and these are
> exposed in virsh. It is not especially pleasant, however, using the raw
> QMP JSON syntax. QEMU has a tool 'qmp-shell' which can speak QMP and
> exposes a human friendly interacti
On Fri, Oct 07, 2022 at 13:56:27 +0100, Daniel P. Berrangé wrote:
> Since they are simply normal RPC messages, the keep alive packets are
> subject to the "max_client_requests" limit just like any API calls.
>
> Thus, if a client hits the 'max_client_requests' limit and all the
> pending API calls
Since they are simply normal RPC messages, the keep alive packets are
subject to the "max_client_requests" limit just like any API calls.
Thus, if a client hits the 'max_client_requests' limit and all the
pending API calls take a long time to complete, it may result in
keep-alives firing and dropp
This function is fine to use in other languages
Signed-off-by: Daniel P. Berrangé
---
build-aux/syntax-check.mk | 1 +
1 file changed, 1 insertion(+)
diff --git a/build-aux/syntax-check.mk b/build-aux/syntax-check.mk
index 649eb91acb..41970d31a1 100644
--- a/build-aux/syntax-check.mk
+++ b/buil
Despite efforts to make the virt-qemu-sev-validate tool friendly, it is
a certainty that almost everyone who tries it will hit false negative
results, getting a failure despite the VM being trustworthy.
Diagnosing these problems is no easy matter, especially for those not
familiar with SEV/SEV-ES
Expand the SEV guest kbase guide with information about how to configure
a SEV/SEV-ES guest when attestation is required, and mention the use of
virt-qemu-sev-validate as a way to confirm it.
Signed-off-by: Daniel P. Berrangé
---
docs/kbase/launch_security_sev.rst | 102 +
It is possible to build OVMF for SEV with an embedded Grub that can
fetch LUKS disk secrets. This adds support for injecting secrets in
the required format.
Signed-off-by: Daniel P. Berrangé
---
docs/manpages/virt-qemu-sev-validate.rst | 66 ++
tools/virt-qemu-sev-validate.py |
In general we expect to be able to construct a SEV-ES VMSA
blob from knowledge about the AMD architectural CPU register
defaults, KVM setup and QEMU setup. If any of this unexpectedly
changes, figuring out what's wrong could be horrible. This
systemtap script demonstrates how to capture the real VMS
When validating a SEV-ES guest, we need to know the CPU count and VMSA
state. We can get the CPU count directly from libvirt's guest info. The
VMSA state can be constructed automatically if we query the CPU SKU from
host capabilities XML. Neither of these is secure, however, so this
behaviour is re
With the SEV-ES policy the VMSA state of each vCPU must be included in
the measured data. The VMSA state can be generated using the 'sevctl'
tool, by telling it a QEMU VMSA is required, and passing the hypervisor's
CPU SKU (family, model, stepping).
Signed-off-by: Daniel P. Berrangé
---
docs/manp
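The CPU SKU that sevctl needs is the standard family/model/stepping decoding of CPUID leaf 1 EAX (extended family/model fields folded in per the usual x86 rules); a small sketch:

```python
def cpu_sku(eax):
    """Decode (family, model, stepping) from CPUID leaf 1 EAX.

    Extended family is added when base family is 0xF; extended model is
    prepended when base family is 0x6 or 0xF.
    """
    stepping = eax & 0xF
    base_model = (eax >> 4) & 0xF
    base_family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    family = base_family + (ext_family if base_family == 0xF else 0)
    if base_family in (0x6, 0xF):
        model = (ext_model << 4) | base_model
    else:
        model = base_model
    return family, model, stepping
```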
When connected to libvirt we can validate that the guest configuration
has the kernel hashes property enabled, otherwise including the kernel
GUID table in our expected measurements is not likely to match the
actual measurement.
When running locally we can also automatically detect the kernel/init
The virt-qemu-sev-validate program will compare a reported SEV/SEV-ES
domain launch measurement, to a computed launch measurement. This
determines whether the domain has been tampered with during launch.
This initial implementation requires all inputs to be provided
explicitly, and as such can run
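The final comparison step can be sketched as below. Computing the expected value (per the SEV API, an HMAC-SHA256 over the launch digest keyed with the TIK) is the hard part the tool automates; here both sides are assumed to already be raw 32-byte values, and the function name is illustrative:

```python
import hashlib
import hmac

def check_measurement(expected, reported):
    """Constant-time comparison of the expected launch measurement
    against the one reported by the platform (both raw bytes)."""
    if len(reported) != hashlib.sha256().digest_size:
        raise ValueError("measurement must be 32 bytes")
    # compare_digest avoids leaking where the mismatch occurs.
    return hmac.compare_digest(expected, reported)
```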
Accept information about a connection to libvirt and a guest on the
command line. Talk to libvirt to obtain the running guest state and
automatically detect as much configuration as possible.
It will refuse to use a libvirt connection that is thought to be local
to the current machine, as running
The libvirt QEMU driver provides all the functionality required for
launching a guest on AMD SEV(-ES) platforms, with a configuration
that enables attestation of the launch measurement. The documentation
for how to actually perform an attestation is severely lacking and
not suitable for mere mortal
The VMSA files contain the expected CPU register state for the VM. Their
content varies based on a few pieces of the stack
- AMD CPU architectural initial state
- KVM hypervisor VM CPU initialization
- QEMU userspace VM CPU initialization
- AMD CPU SKU (family/model/stepping)
The first th
When doing direct kernel boot we need to include the kernel, initrd and
cmdline in the measurement.
Signed-off-by: Daniel P. Berrangé
---
docs/manpages/virt-qemu-sev-validate.rst | 43 ++
tools/virt-qemu-sev-validate.py | 102 ++-
2 files changed, 144 insert
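The per-component digests can be sketched as follows; the GUID-table framing that actually carries them to OVMF is omitted, and the assumption that the cmdline is hashed with its trailing NUL (as QEMU passes it to the guest) should be checked against the tool itself:

```python
import hashlib

def direct_boot_digests(kernel, initrd, cmdline):
    """SHA-256 digests of the direct-boot components, as included in the
    measured kernel-hashes table (table framing/GUIDs omitted).

    Assumption: cmdline is measured including its trailing NUL byte.
    """
    return {
        "kernel": hashlib.sha256(kernel).hexdigest(),
        "initrd": hashlib.sha256(initrd).hexdigest(),
        "cmdline": hashlib.sha256(cmdline + b"\x00").hexdigest(),
    }
```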
In recent commit of v8.8.0-41-g41eb0f446c I've suggested during
review to put both xdr_free() calls under error label, assuming
that xdr_free() accepts NULL and thus is a NOP when the control
jumps onto the label even before either of @arg or @ret was
allocated. Well, turns out, xdr_free() does no
*** BLURB HERE ***
Michal Prívozník (5):
meson: Replace meson.build_root() with meson.project_build_root()
meson: Replace meson.source_root() with meson.project_source_root()
meson: Replace external_program.path() with
external_program.full_path()
Replace dep.get_pkgconfig_variable() w
Bump the minimal required version to 0.56.0. Looking at our CI,
this is the oldest version we install.
Signed-off-by: Michal Privoznik
---
libvirt.spec.in | 2 +-
meson.build | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index 950c5
The path() method is deprecated in 0.55.0 and we're recommended
to use full_path() instead. Interestingly, we were already doing
so in a couple of places, but not all of them.
Signed-off-by: Michal Privoznik
---
build-aux/meson.build | 10 +-
docs/meson.build| 8
meson.b
The get_pkgconfig_variable() method is deprecated in 0.56.0 and
we're recommended to use get_variable(pkgconfig : ...) instead.
Signed-off-by: Michal Privoznik
---
meson.build | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/meson.build b/meson.build
index 82234
The build_root() method is deprecated in 0.56.0 and we're
recommended to use project_build_root() instead.
Signed-off-by: Michal Privoznik
---
build-aux/meson.build| 2 +-
docs/go/meson.build | 2 +-
docs/html/meson.build| 4 ++--
docs/kbase/internals/meson.b
The source_root() method is deprecated in 0.56.0 and we're
recommended to use project_source_root() instead.
Signed-off-by: Michal Privoznik
---
build-aux/meson.build | 2 +-
docs/meson.build| 4 ++--
meson.build | 4 ++--
po/