Re: [PATCH 1/3] libxl: Break down an if() in libxlCapsInitNuma()
On 5/12/21 7:23 AM, Michal Privoznik wrote:
> There's an if-else statement in libxlCapsInitNuma() that can really be
> just two standalone if()-s. Writing it as such helps with code
> readability.
>
> Signed-off-by: Michal Privoznik
> ---
>  src/libxl/libxl_capabilities.c | 14 +++---
>  1 file changed, 7 insertions(+), 7 deletions(-)

Reviewed-by: Jim Fehlig

Regards,
Jim

> diff --git a/src/libxl/libxl_capabilities.c b/src/libxl/libxl_capabilities.c
> index ea4f370a6d..a64a9266c4 100644
> --- a/src/libxl/libxl_capabilities.c
> +++ b/src/libxl/libxl_capabilities.c
> @@ -260,13 +260,13 @@ libxlCapsInitNuma(libxl_ctx *ctx, virCaps *caps)
>          virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
>                         _("libxl_get_numainfo failed"));
>          goto cleanup;
> -    } else {
> -        cpu_topo = libxl_get_cpu_topology(ctx, &nr_cpus);
> -        if (cpu_topo == NULL || nr_cpus == 0) {
> -            virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> -                           _("libxl_get_cpu_topology failed"));
> -            goto cleanup;
> -        }
> +    }
> +
> +    cpu_topo = libxl_get_cpu_topology(ctx, &nr_cpus);
> +    if (cpu_topo == NULL || nr_cpus == 0) {
> +        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> +                       _("libxl_get_cpu_topology failed"));
> +        goto cleanup;
>      }
>
>      cpus = g_new0(virCapsHostNUMACellCPU *, nr_nodes);
Re: [PATCH v2 3/3] qapi: deprecate drive-backup
On 5/24/21 10:06 AM, Vladimir Sementsov-Ogievskiy wrote:
> 15.05.2021 01:38, John Snow wrote:
>> On 5/6/21 5:57 AM, Kashyap Chamarthy wrote:
>>> TODO: We also need to deprecate drive-backup transaction action..
>>> But union members in QAPI doesn't support 'deprecated' feature. I
>>> tried to dig a bit, but failed :/ Markus, could you please help
>>> with it? At least by advice?
>>
>> Oho, I see. OK, I'm not Markus, but I've been getting into lots of
>> trouble in the QAPI generator lately, so let me see if I can help get
>> you started...
>>
>> https://gitlab.com/jsnow/qemu/-/commits/hack-deprecate-union-branches/
>>
>> Here's a quick hack that might expose that feature. I suppose we can
>> discuss this with Markus and turn these into real patches if that's
>> the direction we wanna head.
>
> Hi! Markus is silent.. Maybe, you'll send patches ? )

He just went on PTO for 2 weeks :')

It's going to have to wait, I'm afraid ...
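For reference, the deprecation of the command itself (as opposed to the transaction union member discussed above) is a one-line schema change using QAPI's existing `'features'` mechanism. A sketch of what the qapi/block-core.json entry could look like; the `'deprecated'` feature line is the addition under discussion, the rest mirrors the existing command definition:

```
{ 'command': 'drive-backup', 'boxed': true,
  'data': 'DriveBackup',
  'features': [ 'deprecated' ] }
```

Union branches are the missing piece: the generator currently accepts `'features'` on commands, struct members, and enum values, but not on union members, which is what John's linked branch hacks around.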
Re: [PATCH v2 3/3] qapi: deprecate drive-backup
15.05.2021 01:38, John Snow wrote:
> On 5/6/21 5:57 AM, Kashyap Chamarthy wrote:
>> TODO: We also need to deprecate drive-backup transaction action..
>> But union members in QAPI doesn't support 'deprecated' feature. I
>> tried to dig a bit, but failed :/ Markus, could you please help with
>> it? At least by advice?
>
> Oho, I see. OK, I'm not Markus, but I've been getting into lots of
> trouble in the QAPI generator lately, so let me see if I can help get
> you started...
>
> https://gitlab.com/jsnow/qemu/-/commits/hack-deprecate-union-branches/
>
> Here's a quick hack that might expose that feature. I suppose we can
> discuss this with Markus and turn these into real patches if that's
> the direction we wanna head.

Hi! Markus is silent.. Maybe, you'll send patches ? )

--
Best regards,
Vladimir
Re: Ceph RBD Connection
Ok, so I figured out my problem after digging deeper into troubleshooting.

My issue was just that the ceph monitor is not listening on port 6789 on all interfaces; it's only listening on the network address 10.0.0.5. Changing the pool configuration to account for that fixed my issue.

I wrote a simple C program based on the documentation at https://docs.ceph.com/en/latest/rados/api/librados/ which is where it hit me, after trying to use `rbd -m localhost:6789 list libvirt-pool`
Re: [PATCH] hw/arm/raspi: Remove deprecated raspi2/raspi3 aliases
On Thu, 20 May 2021 at 13:07, Peter Maydell wrote:
> On Tue, 11 May 2021 at 04:25, Philippe Mathieu-Daudé wrote:
>>
>> Hi Peter,
>>
>> Can this patch go via your qemu-arm tree (it is reviewed)?
>
> Sure.
>
> Applied to target-arm.next, thanks.

This breaks 'make check' because some bits of the test suite are still
using the old machine names. The error message is a bit opaque:

MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))} QTEST_QEMU_IMG=./qemu-img G_TEST_DBUS_DAEMON=/home/petmay01/linaro/qemu-from-laptop/qemu/tests/dbus-vmstate-daemon.sh QTEST_QEMU_BINARY=./qemu-system-arm QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon tests/qtest/boot-serial-test --tap -k
qemu-system-arm: -bios /tmp/qtest-boot-serial-c3pthL6: unsupported machine type
Use -machine help to list supported machines
socket_accept failed: Resource temporarily unavailable
** ERROR:../../tests/qtest/libqtest.c:319:qtest_init_without_qmp_handshake: assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
ERROR qtest-arm/boot-serial-test - Bail out! ERROR:../../tests/qtest/libqtest.c:319:qtest_init_without_qmp_handshake: assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
Makefile.mtest:80: recipe for target 'run-test-8' failed
make: *** [run-test-8] Error 1
make: Leaving directory '/home/petmay01/linaro/qemu-from-laptop/qemu/build/arm-clang'

"git grep raspi[23] tests" suggests there may be other uses beyond just
the boot-serial-test one.

Dropped.

thanks
-- PMM
Re: [PATCH v2 4/4] conf: Deduplicate NUMA distance code
On Mon, May 24, 2021 at 14:03:09 +0200, Michal Privoznik wrote:
> After previous patches we have two structures:
> virCapsHostNUMACellDistance and virNumaDistance which express the
> same thing. And have the exact same members (modulo their names).
> Drop the former in favor of the latter.
>
> This change means that distances with value of 0 are no longer
> printed out into capabilities XML, because domain XML code allows
> partial distance specification and thus treats value of 0 as
> unspecified by user (see virDomainNumaGetNodeDistance() which
> returns the default LOCAL/REMOTE distance for value of 0).
>
> Also, from ACPI 6.1 specification, section 5.2.17 System Locality
> Distance Information Table (SLIT):
>
>   Distance values of 0-9 are reserved and have no meaning.
>
> Thus we shouldn't ever be reporting 0 in either domain or
> capabilities XML.
>
> Signed-off-by: Michal Privoznik
> ---
>
> diff to v1:
> - Filled in justification to stop reporting 0 distance in capabilities
>   XML (which we never did anyway).
> - Change capabilities RNG to make it obvious that NUMA distances are the
>   same in domain and capabilities XMLs.
>
>  docs/schemas/capability.rng    | 13 +++--
>  src/conf/capabilities.c        | 26 --
>  src/conf/capabilities.h        | 11 +++
>  src/conf/virconftypes.h        |  2 --
>  src/libxl/libxl_capabilities.c |  8
>  5 files changed, 18 insertions(+), 42 deletions(-)

Reviewed-by: Peter Krempa
Re: [PATCH 00/35] qemu: Implement support for transient disk hotplug and backing store sharing
On Fri, May 21, 2021 at 02:47:00PM +0200, Peter Krempa wrote:
> This series refactors the qemu disk hotplug code so that it can be
> easily extended for hotplug of transient disks and then also used at
> startup to plug in disks where the users want to share the backing
> image.
>
> Masayoshi Mizuma (1):
>   qemu_snapshot: Add the guest name to the transient disk path
>
> Peter Krempa (34):
>   qemu: snapshot: Extract setup of snapshot disk definition for transient disks
>   qemu: process: Setup transient disks only when starting a fresh VM
>   qemuSnapshotDiskPrepareOne: Pass in qemuSnapshotDiskContext
>   qemuSnapshotDiskContext: Store virQEMUDriverConfig in the struct
>   qemuSnapshotDiskPrepareOne: Use data from qemuSnapshotDiskContext
>   qemuSnapshotDiskCreate: Use 'cfg' from the qemuSnapshotDiskContext
>   qemu: snapshot: move transient snapshot code to qemu_process
>   qemu: Move 'bootindex' handling for disks out of command line formatter
>   qemu: Move bootindex usage logic into qemuBuildDiskDeviceStr
>   qemuxml2argvtest: Remove pointless tests for keywrapping on s390
>   qemu: Move iothread and s390 address validation for disk devices into the validator
>   Replace virDomainDiskInsertPreAlloced by virDomainDiskInsert
>   conf: remove virDomainDiskInsertPreAlloced
>   qemuBlockStorageSourceChainData: Add handling of 'copy-on-read' filter layer
>   qemuDomainAttachDiskGeneric: Move 'copy-on-read' handling to qemuBlockStorageSourceChainData
>   qemuDomainRemoveDiskDevice: Move 'copy-on-read' handling to qemuBlockStorageSourceChainData
>   qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachUSBMassStorageDevice
>   qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachVirtioDiskDevice
>   qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachSCSIDisk
>   qemuDomainAttachDeviceDiskLiveInternal: Simplify call to qemuDomainAttachDiskGeneric
>   qemuDomainAttachDiskGeneric: Move setup of disk into qemuDomainAttachDeviceDiskLiveInternal
>   qemu: hotplug: Move post-insertion steps of disk hotplug to qemuDomainAttachDeviceDiskLiveInternal
>   qemuDomainAttachDiskGeneric: Fix whitespace
>   qemuDomainAttachDiskGeneric: Refactor cleanup
>   qemuDomainAttachDiskGeneric: Move PR helper attach into qemuDomainAttachDeviceDiskLiveInternal
>   qemuDomainAttachDiskGeneric: Refactor rollback handling
>   qemuDomainAttachDiskGeneric: Split up frontend and backend attachment
>   qemu: Track creation of disk overlay individually
>   qemuDomainAttachDiskGeneric: Implement hotplug of disk
>   qemuDomainAttachDiskGeneric: Pass the qemu async job type
>   qemuDomainAttachDiskGeneric: Export
>   conf: Introduce 'shareBacking' for disks
>   qemu: Allow disks with images shared accross VMs
>   tests: Add qemuxml2argv and qemuxml2xml test for shareBacking='yes'

With the issues fixed

Reviewed-by: Pavel Hrdina

signature.asc
Description: PGP signature
Re: [PATCH 12/35] qemu: Move iothread and s390 address validation for disk devices into the validator
On Fri, May 21, 2021 at 02:47:12PM +0200, Peter Krempa wrote:
> The "machine-loadparm-multiple-disks-nets-s390" case now requires the
> QEMU_CAPS_CCW feature to pass validation.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_command.c  | 50
>  src/qemu/qemu_validate.c | 49 +++
>  tests/qemuxml2xmltest.c  |  2 +-
>  3 files changed, 50 insertions(+), 51 deletions(-)
>
> diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
> index 200f9a04b1..9c32fd16b5 100644
> --- a/src/qemu/qemu_command.c
> +++ b/src/qemu/qemu_command.c
> @@ -1583,50 +1583,6 @@ qemuBuildDriveStr(virDomainDiskDef *disk,
>  }
>
>
> -static bool
> -qemuCheckIOThreads(const virDomainDef *def,
> -                   virDomainDiskDef *disk)
> -{
> -    /* Right "type" of disk" */
> -    switch ((virDomainDiskBus)disk->bus) {
> -    case VIR_DOMAIN_DISK_BUS_VIRTIO:
> -        if (disk->info.type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI &&
> -            disk->info.type != VIR_DOMAIN_DEVICE_ADDRESS_TYPE_CCW) {
> -            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> -                           _("IOThreads only available for virtio pci and "
> -                             "virtio ccw disk"));
> -            return false;
> -        }
> -        break;
> -
> -    case VIR_DOMAIN_DISK_BUS_IDE:
> -    case VIR_DOMAIN_DISK_BUS_FDC:
> -    case VIR_DOMAIN_DISK_BUS_SCSI:
> -    case VIR_DOMAIN_DISK_BUS_XEN:
> -    case VIR_DOMAIN_DISK_BUS_USB:
> -    case VIR_DOMAIN_DISK_BUS_UML:
> -    case VIR_DOMAIN_DISK_BUS_SATA:
> -    case VIR_DOMAIN_DISK_BUS_SD:
> -    case VIR_DOMAIN_DISK_BUS_NONE:
> -    case VIR_DOMAIN_DISK_BUS_LAST:
> -        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
> -                       _("IOThreads not available for bus %s target %s"),
> -                       virDomainDiskBusTypeToString(disk->bus), disk->dst);
> -        return false;
> -    }
> -
> -    /* Can we find the disk iothread in the iothreadid list? */
> -    if (!virDomainIOThreadIDFind(def, disk->iothread)) {
> -        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
> -                       _("Disk iothread '%u' not defined in iothreadid"),
> -                       disk->iothread);
> -        return false;
> -    }
> -
> -    return true;
> -}
> -
> -
>  static int
>  qemuBuildDriveDevCacheStr(virDomainDiskDef *disk,
>                            virBuffer *buf,
> @@ -1668,12 +1624,6 @@ qemuBuildDiskDeviceStr(const virDomainDef *def,
>      g_autofree char *scsiVPDDeviceId = NULL;
>      int controllerModel;
>
> -    if (!qemuDomainCheckCCWS390AddressSupport(def, &disk->info, qemuCaps,
> -                                              disk->dst))
> -        return NULL;
> -
> -    if (disk->iothread && !qemuCheckIOThreads(def, disk))
> -        return NULL;
> -
>      switch ((virDomainDiskBus) disk->bus) {
>      case VIR_DOMAIN_DISK_BUS_IDE:
>          if (disk->device == VIR_DOMAIN_DISK_DEVICE_CDROM)
> diff --git a/src/qemu/qemu_validate.c b/src/qemu/qemu_validate.c
> index e6ddb43113..9c74092f23 100644
> --- a/src/qemu/qemu_validate.c
> +++ b/src/qemu/qemu_validate.c
> @@ -2404,6 +2404,50 @@ qemuValidateDomainDeviceDefDiskSerial(const char *value)
>  }
>
>
> +static bool
> +qemuvalidateDomainDeviceDefDiskIOThreads(const virDomainDef *def,

s/qemuvalidate/qemuValidate/

Pavel

signature.asc
Description: PGP signature
Re: Ceph RBD Connection
On 5/23/21 8:48 PM, Han Han wrote:
> You can check the following:
> 1. Make sure the ceph mon node is running on localhost:6789

root@server ~# netstat -pan | grep 6789
tcp   0   0 10.0.0.5:6789   0.0.0.0:*   LISTEN   2087/ceph-mon

> 2. Check if the ceph authorization is enabled on the ceph cluster.
> Check if authorization is used in the rbd pool xml. You can provide
> more details on the rbd pool xml, ceph conf, and the version of
> librbd1 (client side), ceph (server side).

root@server ~# ceph auth get client.libvirt
exported keyring for client.libvirt
[client.libvirt]
        key = AQDtM6lgpebpIoIKaFXQSTAg==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=libvirt-pool"

root@server ~# virsh secret-list
 UUID                                   Usage
 f290cb3a-a9de-48d7-89ce-09311eadff4a   ceph client.libvirt secret

root@server ~# virsh secret-dumpxml f290cb3a-a9de-48d7-89ce-09311eadff4a
f290cb3a-a9de-48d7-89ce-09311eadff4a client.libvirt secret

root@server ~# virsh pool-dumpxml libvirt-pool
libvirt-pool 5ae8abb9-31ab-44c0-a708-072b95182018 0 0 0 libvirt-pool

root@server ~# ceph versions
{
    "mon": {
        "ceph version 15.2.12 (ce065eabfa5ce81323b009786bdf5bb03127cbe1) octopus (stable)": 2
    },
    "mgr": {
        "ceph version 15.2.12 (ce065eabfa5ce81323b009786bdf5bb03127cbe1) octopus (stable)": 2
    },
    "osd": {
        "ceph version 15.2.12 (ce065eabfa5ce81323b009786bdf5bb03127cbe1) octopus (stable)": 2
    },
    "mds": {},
    "rgw": {
        "ceph version 15.2.12 (ce065eabfa5ce81323b009786bdf5bb03127cbe1) octopus (stable)": 1
    },
    "overall": {
        "ceph version 15.2.12 (ce065eabfa5ce81323b009786bdf5bb03127cbe1) octopus (stable)": 7
    }
}

root@server ~# pkgfile /usr/lib/librbd.so.1.12.0
community/ceph-libs

root@server ~# pacsearch ceph-libs
community/ceph-libs 15.2.12-1 [installed]
    Distributed, fault-tolerant storage platform delivering object, block, and file system

Sorry, I should have included this information in my original email.

I followed this guide (https://blog.modest-destiny.com/posts/kvm-libvirt-add-ceph-rbd-pool/) to set up the above configuration and I also tried following the guide at https://docs.ceph.com/en/latest/rbd/libvirt/
Re: [PATCH 34/35] qemu: Allow disks with images shared accross VMs
On a Friday in 2021, Peter Krempa wrote:
> Implement this behaviour by skipping the disks on traditional
> commandline and hotplug them before resuming CPUs. That allows to use
> the support for hotplugging of transient disks which inherently allows
> sharing of the backing image as we open it read-only.
>
> This commit implements the validation code to allow it only with buses
> supporting hotplug and the hotplug code while starting up the VM.
>
> When we have such disk we need to issue a system-reset so that firmware
> tables are regenerated to allow booting from such device.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_command.c  |  6 ++
>  src/qemu/qemu_process.c  | 45 +++-
>  src/qemu/qemu_validate.c | 24 +
>  3 files changed, 74 insertions(+), 1 deletion(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 35/35] tests: Add qemuxml2argv and qemuxml2xml test for shareBacking='yes'
On a Friday in 2021, Peter Krempa wrote:
> Signed-off-by: Peter Krempa
> ---
>  .../disk-transient.x86_64-4.1.0.err       |  2 +-
>  .../disk-transient.x86_64-latest.args     |  4 +-
>  tests/qemuxml2argvdata/disk-transient.xml |  6 +++
>  .../disk-transient.x86_64-latest.xml      | 48 +++
>  tests/qemuxml2xmltest.c                   |  1 +
>  5 files changed, 58 insertions(+), 3 deletions(-)
>  create mode 100644 tests/qemuxml2xmloutdata/disk-transient.x86_64-latest.xml

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 33/35] conf: Introduce 'shareBacking' for disks
On a Friday in 2021, Peter Krempa wrote:
> In case the user wants to share the disk image between multiple VMs the
> qemu driver needs to hotplug such disks to instantiate the backends.
> Since that doesn't work for all disk configs add a switch to force this
> behaviour.
>
> Signed-off-by: Peter Krempa
> ---
>  docs/formatdomain.rst         |  6 ++
>  docs/schemas/domaincommon.rng |  5 +
>  src/conf/domain_conf.c        | 17 ++---
>  src/conf/domain_conf.h        |  1 +
>  4 files changed, 26 insertions(+), 3 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
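For illustration, the new switch sits as an attribute on the existing `<transient/>` element of a disk in the domain XML. A sketch of the resulting configuration (the file path is a placeholder):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/shared-base.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <transient shareBacking='yes'/>
</disk>
```

With `shareBacking='yes'` the image is opened read-only and a throwaway overlay absorbs the guest's writes, so several VMs can point at the same backing file; the bus must support hotplug for this to work, per the validation added in this series.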
Re: [PATCH 32/35] qemuDomainAttachDiskGeneric: Export
On a Friday in 2021, Peter Krempa wrote:
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 2 +-
>  src/qemu/qemu_hotplug.h | 6 ++
>  2 files changed, 7 insertions(+), 1 deletion(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 31/35] qemuDomainAttachDiskGeneric: Pass the qemu async job type
On a Friday in 2021, Peter Krempa wrote:
> The qemuDomainAttachDiskGeneric will also be used on startup for
> transient disks which share the overlay. The VM startup code passes the
> asyncJob around so we need to pass it into qemuDomainAttachDiskGeneric.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 23 ++-
>  1 file changed, 14 insertions(+), 9 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 30/35] qemuDomainAttachDiskGeneric: Implement hotplug of disk
On a Friday in 2021, Peter Krempa wrote:
> Add code which creates the transient overlay after hotplugging the disk
> backend before attaching the disk frontend.
>
> The state of the topmost image is modified to be already read-only to
> prevent the need to open the image in read-write mode.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 52 -
>  1 file changed, 46 insertions(+), 6 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
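Conceptually, the hotplug sequence interleaves the overlay creation between the backend and frontend attach. A rough sketch of the QMP conversation (node names, file names, and exact argument sets are illustrative, not the precise commands libvirt emits):

```
{ "execute": "blockdev-add",
  "arguments": { "node-name": "orig-format", "driver": "qcow2",
                 "read-only": true,
                 "file": { "driver": "file",
                           "filename": "/path/orig.qcow2" } } }
{ "execute": "blockdev-add",
  "arguments": { "node-name": "transient-format", "driver": "qcow2",
                 "file": { "driver": "file",
                           "filename": "/path/orig.qcow2.TRANSIENT" } } }
{ "execute": "blockdev-snapshot",
  "arguments": { "node": "orig-format", "overlay": "transient-format" } }
{ "execute": "device_add",
  "arguments": { "driver": "virtio-blk-pci",
                 "drive": "transient-format", "id": "virtio-disk0" } }
```

The point of the ordering is that the original image is only ever opened read-only; all guest writes land in the transient overlay installed by `blockdev-snapshot` before the frontend device is plugged in.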
Re: [PATCH 29/35] qemu: Track creation of disk overlay individually
On a Friday in 2021, Peter Krempa wrote:
> In preparation for hotplug of disks we'll need to track whether the
> overlay file was created individually per-disk.
>
> Add 'transientOverlayCreated' to 'struct _qemuDomainDiskPrivate' and
> remove 'inhibitDiskTransientDelete' from 'qemuDomainObjPrivate' and
> adjust the code for the change.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_domain.c  |  2 --
>  src/qemu/qemu_domain.h  |  7 +++
>  src/qemu/qemu_process.c | 33 +++--
>  3 files changed, 22 insertions(+), 20 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 28/35] qemuDomainAttachDiskGeneric: Split up frontend and backend attachment
On a Friday in 2021, Peter Krempa wrote:
> Split up the monitor contexts to attach the backend of the disk and the
> frontend device in preparation for hotplugging transient disks where
> we'll need to add the code for adding the transient overlay between
> these two steps.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 17 -
>  1 file changed, 12 insertions(+), 5 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 27/35] qemuDomainAttachDiskGeneric: Refactor rollback handling
On a Friday in 2021, Peter Krempa wrote:
> Modify the rollback section to use it's own monitor context so that we
> can later split up the hotplug into multiple steps and move the
> detachment of the extension device into the rollback section rather
> than doing it inline.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 28 ++--
>  1 file changed, 18 insertions(+), 10 deletions(-)

*its

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 26/35] qemuDomainAttachDiskGeneric: Move PR helper attach into qemuDomainAttachDeviceDiskLiveInternal
On a Friday in 2021, Peter Krempa wrote:
> Similarly to previous refactors we want to move all hotplug related
> setup which isn't strictly relevant to attaching the disk into
> qemuDomainAttachDeviceDiskLiveInternal.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 13 ++---
>  1 file changed, 6 insertions(+), 7 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 25/35] qemuDomainAttachDiskGeneric: Refactor cleanup
On a Friday in 2021, Peter Krempa wrote:
> Remove the 'ret' variable and 'cleanup' label in favor of directly
> returning the value since we don't have anything under the 'cleanup:'
> label.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 30 --
>  1 file changed, 12 insertions(+), 18 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 24/35] qemuDomainAttachDiskGeneric: Fix whitespace
On a Friday in 2021, Peter Krempa wrote:
> Remove two empty lines.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 2 --
>  1 file changed, 2 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 23/35] qemu: hotplug: Move post-insertion steps of disk hotplug to qemuDomainAttachDeviceDiskLiveInternal
On a Friday in 2021, Peter Krempa wrote:
> Move the auditing entry and insertion into the disk definition from the
> function which deals with qemu to
> 'qemuDomainAttachDeviceDiskLiveInternal' which deals with the hotplug
> related specifics.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 11 +++
>  1 file changed, 7 insertions(+), 4 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 22/35] qemuDomainAttachDiskGeneric: Move setup of disk into qemuDomainAttachDeviceDiskLiveInternal
On a Friday in 2021, Peter Krempa wrote:
> qemuDomainAttachDeviceDiskLiveInternal already sets up certain pieces
> of the disk definition so it's better suited to move the setup of the
> virStorageSource structs, granting access to the storage and allocation
> of the alias from qemuDomainAttachDiskGeneric which will be just
> handling the qemu interaction.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 29 +
>  1 file changed, 17 insertions(+), 12 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 21/35] qemuDomainAttachDeviceDiskLiveInternal: Simplify call to qemuDomainAttachDiskGeneric
On a Friday in 2021, Peter Krempa wrote:
> We can call it in one place as all per-device-type subcases use the
> same code.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 8 ++--
>  1 file changed, 2 insertions(+), 6 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH v2] Add SELinux policy for virt
On Mon, May 24, 2021 at 05:25:19AM -0700, Andrea Bolognani wrote:
> On Fri, May 21, 2021 at 03:37:00PM +0100, Daniel P. Berrangé wrote:
> > On Fri, May 21, 2021 at 04:22:59PM +0200, Vit Mojzis wrote:
> > > On 4/30/21 10:28 PM, Vit Mojzis wrote:
> > > > On 4/26/21 7:31 PM, Daniel P. Berrangé wrote:
> > > > > On Wed, Apr 07, 2021 at 06:14:58AM -0700, Vit Mojzis wrote:
> > > > > > Sorry for the long delay. This is our first request to ship a
> > > > > > policy for multiple selinux stores (targeted, mls and minimum).
> > > > > >
> > > > > > Changes:
> > > > > > * Replace all selinux-policy-%{policytype} dependencies with
> > > > > >   selinux-policy-base
> > > > > > * Add Ghost files representing installed policy modules in all
> > > > > >   policy stores
> > > > > > * Rewrite policy compilation script in python
> > > > > > * Compile the policy module twice (1 version for targeted/minimum -
> > > > > >   with enable_mcs, and 1 for mls - with enable_mls)
> > > > > > * Manage policy (un)installation using triggers based on which
> > > > > >   policy type is available
> > > > > >
> > > > > > The new policy was only tested in "targeted" mode so far and we'll
> > > > > > need to make sure it works properly in "mls". As for "minimum", we
> > > > > > know it will not work properly (as is the case of the current
> > > > > > policy) by default (some other "contrib" policy modules need to be
> > > > > > enabled).
> > > > > > I'd argue there is no point trying to get it to work in "minimum",
> > > > > > mostly because it (minimum) will be retired soon.
> > > > >
> > > > > I'm wondering how SELinux is supposed to integrate with containers
> > > > > when using a modular policy.
> > > > >
> > > > > Right now you can install RPMs in a container, and use selinux
> > > > > enforcement on that container because the host OS policy provides
> > > > > all the rules in the monolithic blob.
> > > > > If we take this policy into libvirt, then when you install libvirt
> > > > > in a container, there will be no selinux policy available.
> > > > >
> > > > > Users can't install libvirt-selinux inside the container, as it
> > > > > needs to be built against the main policy in the host.
> > > > >
> > > > > User likely won't install libvirt-selinux outside the container as
> > > > > that defeats the purpose of using containers for their deployment
> > > > > mechanism.
> > > > >
> > > > > Container based deployment of libvirt is important for both
> > > > > OpenStack and KubeVirt.
> > >
> > > So from discussions with respective developers i got the following:
> > >
> > > KubeVirt runs the libvirt containers with a custom policy
> > > https://github.com/kubevirt/kubevirt/blob/81cb9f79e0144af0e6e43c439eab7f8dac81de31/cmd/virt-handler/virt_launcher.cil,
> > > that depends on libvirt module (uses svirt_sandbox_domain). Libvirt is
> > > only installed inside the container and there is no bind mount of
> > > /sys/fs/selinux. So they will need to install libvirt-daemon-selinux
> > > on the host.
> >
> > With OpenStack I believe their deployment tool manages the config of
> > the entire host, so installing the libvirt-daemon-selinux package
> > ought to be reasonably straightforward for them.
> >
> > I worry about KubeVirt though. IIUC in their deployment, the hosts
> > in use are all provisioned by OpenShift upfront & when KubeVirt is
> > deployed, the only pieces they're deploying live inside the host.
> >
> > IOW, it seems like libvirt-daemon-selinux would have to be provided
> > ahead of time by OpenShift if it is to be used, and I'm not sure
> > if that's a practical requirement.
> >
> > I think we need to get explicit confirmation from KubeVirt that
> > a requirement to installing RPMs directly on the host is going
> > to be acceptable.
>
> I'm afraid that's not going to fly for KubeVirt.
>
> Adding Roman and Vladik so they can provide more information.
>
> For context, the discussion is about shipping the SELinux policy
> for libvirt as part of a sub-package of libvirt instead of the main
> selinux-policy package.

Reading again, I realize Vit links to a URL above that shows
virt-handler includes a custom selinux policy. How does that get
deployed, and can the libvirt-daemon-selinux stuff be deployed in the
same way?

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
Re: [PATCH 20/35] qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachSCSIDisk
On a Friday in 2021, Peter Krempa wrote:
> Move the validation of the SCSI device address and the attachment of
> the controller into qemuDomainAttachDeviceDiskLiveInternal as there's
> no specific need for a special helper.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 69 -
>  1 file changed, 27 insertions(+), 42 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 19/35] qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachVirtioDiskDevice
On a Friday in 2021, Peter Krempa wrote:
> Move the specific device setup and address reservation code into the
> main hotplug helper as it's just one extra function call.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 32
>  1 file changed, 8 insertions(+), 24 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 18/35] qemuDomainAttachDeviceDiskLiveInternal: Absorb qemuDomainAttachUSBMassStorageDevice
On a Friday in 2021, Peter Krempa wrote:
> Move the specific device setup and address reservation code into the
> main hotplug helper as it's just one extra function call.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 35 +++
>  1 file changed, 15 insertions(+), 20 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH 17/35] qemuDomainRemoveDiskDevice: Move 'copy-on-read' handling to qemuBlockStorageSourceChainData
On a Friday in 2021, Peter Krempa wrote:
> Unify the handling of the copy-on-read filter by changing the handling
> to use qemuBlockStorageSourceChainData.
>
> Signed-off-by: Peter Krempa
> ---
>  src/qemu/qemu_hotplug.c | 16
>  1 file changed, 8 insertions(+), 8 deletions(-)

Reviewed-by: Ján Tomko

Jano

signature.asc
Description: PGP signature
Re: [PATCH v2] Add SELinux policy for virt
On Fri, May 21, 2021 at 03:37:00PM +0100, Daniel P. Berrangé wrote:
> On Fri, May 21, 2021 at 04:22:59PM +0200, Vit Mojzis wrote:
> > On 4/30/21 10:28 PM, Vit Mojzis wrote:
> > > On 4/26/21 7:31 PM, Daniel P. Berrangé wrote:
> > > > On Wed, Apr 07, 2021 at 06:14:58AM -0700, Vit Mojzis wrote:
> > > > > Sorry for the long delay. This is our first request to ship a
> > > > > policy for multiple selinux stores (targeted, mls and minimum).
> > > > >
> > > > > Changes:
> > > > > * Replace all selinux-policy-%{policytype} dependencies with
> > > > >   selinux-policy-base
> > > > > * Add Ghost files representing installed policy modules in all
> > > > >   policy stores
> > > > > * Rewrite policy compilation script in python
> > > > > * Compile the policy module twice (1 version for targeted/minimum -
> > > > >   with enable_mcs, and 1 for mls - with enable_mls)
> > > > > * Manage policy (un)installation using triggers based on which
> > > > >   policy type is available
> > > > >
> > > > > The new policy was only tested in "targeted" mode so far and we'll
> > > > > need to make sure it works properly in "mls". As for "minimum", we
> > > > > know it will not work properly (as is the case of the current
> > > > > policy) by default (some other "contrib" policy modules need to be
> > > > > enabled).
> > > > > I'd argue there is no point trying to get it to work in "minimum",
> > > > > mostly because it (minimum) will be retired soon.
> > > >
> > > > I'm wondering how SELinux is supposed to integrate with containers
> > > > when using a modular policy.
> > > >
> > > > Right now you can install RPMs in a container, and use selinux
> > > > enforcement on that container because the host OS policy provides
> > > > all the rules in the monolithic blob.
> > > > If we take this policy into libvirt, then when you install libvirt
> > > > in a container, there will be no selinux policy available.
> > > >
> > > > Users can't install libvirt-selinux inside the container, as it
> > > > needs to be built against the main policy in the host.
> > > >
> > > > User likely won't install libvirt-selinux outside the container as
> > > > that defeats the purpose of using containers for their deployment
> > > > mechanism.
> > > >
> > > > Container based deployment of libvirt is important for both
> > > > OpenStack and KubeVirt.
> >
> > So from discussions with respective developers i got the following:
> >
> > KubeVirt runs the libvirt containers with a custom policy
> > https://github.com/kubevirt/kubevirt/blob/81cb9f79e0144af0e6e43c439eab7f8dac81de31/cmd/virt-handler/virt_launcher.cil,
> > that depends on libvirt module (uses svirt_sandbox_domain). Libvirt is
> > only installed inside the container and there is no bind mount of
> > /sys/fs/selinux. So they will need to install libvirt-daemon-selinux
> > on the host.
>
> With OpenStack I believe their deployment tool manages the config of
> the entire host, so installing the libvirt-daemon-selinux package
> ought to be reasonably straightforward for them.
>
> I worry about KubeVirt though. IIUC in their deployment, the hosts
> in use are all provisioned by OpenShift upfront & when KubeVirt is
> deployed, the only pieces they're deploying live inside the host.
>
> IOW, it seems like libvirt-daemon-selinux would have to be provided
> ahead of time by OpenShift if it is to be used, and I'm not sure
> if that's a practical requirement.
>
> I think we need to get explicit confirmation from KubeVirt that
> a requirement to installing RPMs directly on the host is going
> to be acceptable.

I'm afraid that's not going to fly for KubeVirt.

Adding Roman and Vladik so they can provide more information.

For context, the discussion is about shipping the SELinux policy
for libvirt as part of a sub-package of libvirt instead of the main
selinux-policy package.

--
Andrea Bolognani / Red Hat / Virtualization
Re: [PATCH 16/35] qemuDomainAttachDiskGeneric: Move 'copy-on-read' handling to qemuBlockStorageSourceChainData
On a Friday in 2021, Peter Krempa wrote: Fill in the required fields in qemuBlockStorageSourceChainData to handle the hotplug so that we can simplify the cleanup code. Signed-off-by: Peter Krempa --- src/qemu/qemu_hotplug.c | 23 ++- 1 file changed, 6 insertions(+), 17 deletions(-) Reviewed-by: Ján Tomko Jano signature.asc Description: PGP signature
Re: [PATCH 15/35] qemuBlockStorageSourceChainData: Add handling of 'copy-on-read' filter layer
On a Friday in 2021, Peter Krempa wrote: qemuBlockStorageSourceChainData encapsulates the backend of the disk for startup and hotplug operations. Add the handling for the copy-on-read filter so that the hotplug code doesn't need to have separate cleanup. Signed-off-by: Peter Krempa --- src/qemu/qemu_block.c | 12 src/qemu/qemu_block.h | 4 2 files changed, 16 insertions(+) Reviewed-by: Ján Tomko Jano
Re: [PATCH 1/4] capabilities: Rename siblings to distances
On 5/21/21 9:46 AM, Peter Krempa wrote: > On Thu, May 20, 2021 at 17:24:53 +0200, Michal Privoznik wrote: >> The virCapsHostNUMACellSiblingInfo structure really represents >> distance to other NUMA node. Rename the structure and variables >> of that type to make it more obvious. >> >> Signed-off-by: Michal Privoznik >> --- >> src/conf/capabilities.c| 52 +- >> src/conf/capabilities.h| 10 +++ >> src/conf/virconftypes.h| 2 +- >> src/libxl/libxl_capabilities.c | 20 ++--- >> 4 files changed, 42 insertions(+), 42 deletions(-) > > [...] > > I first wanted to complain that we might want to add other data related > to NUMA siblings other than distances ... That was my intent when implementing NUMA distance reporting for capabilities, years ago. And I'm planning on extending capabilities for memory caches reporting (currently struct _virDomainNumaCache for domain NUMA nodes). But my code has it as another argument to virCapabilitiesHostNUMAAddCell() and another struct (I'm doing a deduplication similar to this one). And because of that code reuse (well, the XML formatter) - I prefer to follow the _virDomainNuma struct in capabilities. Michal
[PATCH v2 4/4] conf: Deduplicate NUMA distance code
After previous patches we have two structures: virCapsHostNUMACellDistance and virNumaDistance, which express the same thing and have the exact same members (modulo their names). Drop the former in favor of the latter. This change means that distances with a value of 0 are no longer printed out into the capabilities XML, because the domain XML code allows partial distance specification and thus treats a value of 0 as unspecified by the user (see virDomainNumaGetNodeDistance() which returns the default LOCAL/REMOTE distance for a value of 0). Also, from the ACPI 6.1 specification, section 5.2.17 System Locality Distance Information Table (SLIT): Distance values of 0-9 are reserved and have no meaning. Thus we shouldn't ever be reporting 0 in either the domain or the capabilities XML. Signed-off-by: Michal Privoznik --- diff to v1: - Filled in justification to stop reporting 0 distance in capabilities XML (which we never did anyway). - Change capabilities RNG to make it obvious that NUMA distances are the same in domain and capabilities XMLs. 
docs/schemas/capability.rng| 13 +++-- src/conf/capabilities.c| 26 -- src/conf/capabilities.h| 11 +++ src/conf/virconftypes.h| 2 -- src/libxl/libxl_capabilities.c | 8 5 files changed, 18 insertions(+), 42 deletions(-) diff --git a/docs/schemas/capability.rng b/docs/schemas/capability.rng index c4cafc47ee..fb8203ad6d 100644 --- a/docs/schemas/capability.rng +++ b/docs/schemas/capability.rng @@ -157,16 +157,9 @@ - - - - - - - - - - + + + diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c index 926ecb5a24..1290c9c15d 100644 --- a/src/conf/capabilities.c +++ b/src/conf/capabilities.c @@ -349,7 +349,7 @@ virCapabilitiesHostNUMAAddCell(virCapsHostNUMA *caps, int ncpus, virCapsHostNUMACellCPU **cpus, int ndistances, - virCapsHostNUMACellDistance **distances, + virNumaDistance **distances, int npageinfo, virCapsHostNUMACellPageInfo **pageinfo) { @@ -833,17 +833,7 @@ virCapabilitiesHostNUMAFormat(virBuffer *buf, cell->pageinfo[j].avail); } -if (cell->ndistances) { -virBufferAddLit(buf, "\n"); -virBufferAdjustIndent(buf, 2); -for (j = 0; j < cell->ndistances; j++) { -virBufferAsprintf(buf, "\n", - cell->distances[j].node, - cell->distances[j].distance); -} -virBufferAdjustIndent(buf, -2); -virBufferAddLit(buf, "\n"); -} +virNumaDistanceFormat(buf, cell->distances, cell->ndistances); virBufferAsprintf(buf, "\n", cell->ncpus); virBufferAdjustIndent(buf, 2); @@ -1457,10 +1447,10 @@ virCapabilitiesFillCPUInfo(int cpu_id G_GNUC_UNUSED, static int virCapabilitiesGetNUMADistances(int node, -virCapsHostNUMACellDistance **distancesRet, +virNumaDistance **distancesRet, int *ndistancesRet) { -virCapsHostNUMACellDistance *tmp = NULL; +virNumaDistance *tmp = NULL; int tmp_size = 0; int ret = -1; int *distances = NULL; @@ -1476,14 +1466,14 @@ virCapabilitiesGetNUMADistances(int node, return 0; } -tmp = g_new0(virCapsHostNUMACellDistance, ndistances); +tmp = g_new0(virNumaDistance, ndistances); for (i = 0; i < ndistances; i++) { if (!distances[i]) continue; -tmp[tmp_size].node = 
i; -tmp[tmp_size].distance = distances[i]; +tmp[tmp_size].cellid = i; +tmp[tmp_size].value = distances[i]; tmp_size++; } @@ -1607,7 +1597,7 @@ virCapabilitiesHostNUMAInitReal(virCapsHostNUMA *caps) for (n = 0; n <= max_node; n++) { g_autoptr(virBitmap) cpumap = NULL; -g_autofree virCapsHostNUMACellDistance *distances = NULL; +g_autofree virNumaDistance *distances = NULL; int ndistances = 0; g_autofree virCapsHostNUMACellPageInfo *pageinfo = NULL; int npageinfo; diff --git a/src/conf/capabilities.h b/src/conf/capabilities.h index f11471ef6c..4d4ac476ea 100644 --- a/src/conf/capabilities.h +++ b/src/conf/capabilities.h @@ -94,11 +94,6 @@ struct _virCapsHostNUMACellCPU { virBitmap *siblings; }; -struct _virCapsHostNUMACellDistance { -int node; /* foreign NUMA node */ -unsigned int distance; /* distance to the node */ -}; - struct _virCapsHostNUMACellPageInfo { unsigned
Re: [PATCH 14/35] conf: remove virDomainDiskInsertPreAlloced
On a Friday in 2021, Peter Krempa wrote: Replace the last use of the function by virDomainDiskInsert and remove the unused helper. Signed-off-by: Peter Krempa --- src/conf/domain_conf.c | 16 ++-- src/conf/domain_conf.h | 2 -- src/libvirt_private.syms | 1 - 3 files changed, 2 insertions(+), 17 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 13/35] Replace virDomainDiskInsertPreAlloced by virDomainDiskInsert
On a Friday in 2021, Peter Krempa wrote: Pre-extending the disk array size is pointless nowadays since we've switched to memory APIs which don't return failure. Switch all uses of reallocation of the array followed by 'virDomainDiskInsertPreAlloced' with direct virDomainDiskInsert. Signed-off-by: Peter Krempa --- src/libxl/libxl_driver.c | 4 +--- src/lxc/lxc_driver.c | 6 +- src/qemu/qemu_hotplug.c | 3 +-- 3 files changed, 3 insertions(+), 10 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 12/35] qemu: Move iothread and s390 address validation for disk devices into the validator
On a Friday in 2021, Peter Krempa wrote: The "machine-loadparm-multiple-disks-nets-s390" case now requires the QEMU_CAPS_CCW feature to pass validation. Signed-off-by: Peter Krempa --- src/qemu/qemu_command.c | 50 src/qemu/qemu_validate.c | 49 +++ tests/qemuxml2xmltest.c | 2 +- 3 files changed, 50 insertions(+), 51 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 11/35] qemuxml2argvtest: Remove pointless tests for keywrapping on s390
On a Friday in 2021, Peter Krempa wrote: There were two negative tests for the keywrapping feature on s390 when the feature flag was missing. For now both shared the error message thus worked fine, but with the upcoming patch to move some disk validation code from the command line formatter to validation code will change the error message in case the disk capabilities are missing. Drop the test cases which don't provide any capability and keep those that have the disk capabilities present as they are sufficient to prove the feature. Signed-off-by: Peter Krempa --- tests/qemuxml2argvtest.c | 8 1 file changed, 8 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 10/35] qemu: Move bootindex usage logic into qemuBuildDiskDeviceStr
On a Friday in 2021, Peter Krempa wrote: We can skip the formatting of the bootindex for floppies directly at the place where it's being formatted. Signed-off-by: Peter Krempa --- src/qemu/qemu_command.c | 23 --- src/qemu/qemu_command.h | 1 - src/qemu/qemu_hotplug.c | 2 +- 3 files changed, 9 insertions(+), 17 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 4/4] conf: Deduplicate NUMA distance code
On 5/21/21 9:58 AM, Peter Krempa wrote: > On Thu, May 20, 2021 at 17:24:56 +0200, Michal Privoznik wrote: >> After previous patches we have two structures: >> virCapsHostNUMACellDistance and virNumaDistance which express the >> same thing. And have the exact same members (modulo their names). >> Drop the former in favor of the latter. >> >> Signed-off-by: Michal Privoznik >> --- >> src/conf/capabilities.c| 26 -- >> src/conf/capabilities.h| 11 +++ >> src/conf/virconftypes.h| 2 -- >> src/libxl/libxl_capabilities.c | 8 >> 4 files changed, 15 insertions(+), 32 deletions(-) >> >> diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c >> index 926ecb5a24..1290c9c15d 100644 >> --- a/src/conf/capabilities.c >> +++ b/src/conf/capabilities.c > > [...] > >> @@ -833,17 +833,7 @@ virCapabilitiesHostNUMAFormat(virBuffer *buf, >>cell->pageinfo[j].avail); >> } >> >> -if (cell->ndistances) { >> -virBufferAddLit(buf, "\n"); >> -virBufferAdjustIndent(buf, 2); >> -for (j = 0; j < cell->ndistances; j++) { > > This code didn't skip printing the sibling if 'value' is 0 ... > >> -virBufferAsprintf(buf, "\n", >> - cell->distances[j].node, >> - cell->distances[j].distance); >> -} >> -virBufferAdjustIndent(buf, -2); >> -virBufferAddLit(buf, "\n"); >> -} >> +virNumaDistanceFormat(buf, cell->distances, cell->ndistances); > > ... but this new implementation does that. I didn't check whether that's > justified or not, but the commit message doesn't try to justify it > either. > > Was that an expected change? Yes, I will adjust commit message. The point is that in domain XML we allow partial specification of distances. Each guest NUMA node will have an array of virDomainNumaDistance structs (#nodes long) and for each the .value member will either be in [10, 255] range or 0 (if not specified in XML). For capabilities - we query numa_distance() and store its output into an array. 
numa_distance() comes from the numactl package, and returns 0 to indicate an error (if either parsing distances from sysfs failed or we passed a non-existent node in): https://github.com/numactl/numactl/blob/master/distance.c#L110 Therefore, I think it's safe to ignore 0 - it's not a valid distance anyway. However, what I forgot to squash in was: diff --git i/docs/schemas/capability.rng w/docs/schemas/capability.rng index c4cafc47ee..fb8203ad6d 100644 --- i/docs/schemas/capability.rng +++ w/docs/schemas/capability.rng @@ -157,16 +157,9 @@ - - - - - - - - - - + + + So let me resend v2. Thanks! Michal
Re: [PATCH 09/35] qemu: Move 'bootindex' handling for disks out of command line formatter
On a Friday in 2021, Peter Krempa wrote: The logic assigning the bootindices from the legacy boot orded s/orded/order/ configuration was spread through the command line formatters for the disk device and for the floppy controller. This patch adds 'effectiveBootindex' property to the disk private data which holds the calculated boot index and moves the logic of determining the boot index into 'qemuProcessPrepareDomainDiskBootorder' called from 'qemuProcessPrepareDomainStorage'. Signed-off-by: Peter Krempa --- src/qemu/qemu_command.c | 63 +++ src/qemu/qemu_domain.h | 4 +++ src/qemu/qemu_process.c | 66 + 3 files changed, 80 insertions(+), 53 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 08/35] qemu_snapshot: Add the guest name to the transient disk path
On a Friday in 2021, Peter Krempa wrote: From: Masayoshi Mizuma Later patches will implement sharing of the backing file, so we'll need to be able to discriminate the overlays per VM. Signed-off-by: Masayoshi Mizuma Signed-off-by: Peter Krempa --- src/qemu/qemu_process.c | 2 +- src/qemu/qemu_snapshot.c | 6 -- src/qemu/qemu_snapshot.h | 3 ++- 3 files changed, 7 insertions(+), 4 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 07/35] qemu: snapshot: move transient snapshot code to qemu_process
On a Friday in 2021, Peter Krempa wrote: The code deals with the startup of the VM and just uses the snapshot code to achieve the desired outcome. Signed-off-by: Peter Krempa --- src/qemu/qemu_process.c | 61 +- src/qemu/qemu_snapshot.c | 82 +++- src/qemu/qemu_snapshot.h | 26 - 3 files changed, 89 insertions(+), 80 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 06/35] qemuSnapshotDiskCreate: Use 'cfg' from the qemuSnapshotDiskContext
On a Friday in 2021, Peter Krempa wrote: We store the virQEMUDriverConfig object in the context. Signed-off-by: Peter Krempa --- src/qemu/qemu_snapshot.c | 16 ++-- 1 file changed, 6 insertions(+), 10 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 05/35] qemuSnapshotDiskPrepareOne: Use data from qemuSnapshotDiskContext
On a Friday in 2021, Peter Krempa wrote: Remove all the arguments which are present in qemuSnapshotDiskContext. Signed-off-by: Peter Krempa --- src/qemu/qemu_snapshot.c | 30 -- 1 file changed, 12 insertions(+), 18 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 04/35] qemuSnapshotDiskContext: Store virQEMUDriverConfig in the struct
On a Friday in 2021, Peter Krempa wrote: The config is used both with the preparation and execution functions, so we can store it in the context to simplify other helpers. Signed-off-by: Peter Krempa --- src/qemu/qemu_snapshot.c | 7 +++ 1 file changed, 7 insertions(+) Reviewed-by: Ján Tomko Jano
Re: [PATCH 03/35] qemuSnapshotDiskPrepareOne: Pass in qemuSnapshotDiskContext
On a Friday in 2021, Peter Krempa wrote: Rather than filling various parts of the context from arguments pass in the whole context. Signed-off-by: Peter Krempa --- src/qemu/qemu_snapshot.c | 23 ++- 1 file changed, 10 insertions(+), 13 deletions(-) Reviewed-by: Ján Tomko Jano
Re: How to hot plugin a new vhost-user-blk-pci device to running VM?
On Mon, May 24, 2021 at 01:04:44PM +0200, Michal Prívozník wrote: > On 5/21/21 5:28 PM, 梁朝军 wrote: > > Thanks all of you for your help. > > One more question regarding vhost-user-blk-pci type device, how to > > identify a vhost-blk disk in QEUM VM ? for example, disk name looks like > > vda,vdb,..., but that some application in VM want to detect that a certain > > entry is really the device it is waiting for. Specific for windows , they > > always show as disk 0, 1, 2….etc > > Is there any way to identify those disk with each other in VM? > > In general no. Usually disks will be enumerated sequentially - thus the > first disk on a sata/scsi/usb/.. bus will be sda, the second will be > sdb, and so on. But libvirt can't guarantee it - the same way you can't > guarantee how a disk is going to be called with real HW. You can set the 'serial' property in the disk in libvirt, and then match that in the guest. For Linux guests that's used in /dev/disk/by-id symlinks. Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
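For reference, the serial Daniel mentions is set per-disk in the domain XML. A minimal sketch might look like the fragment below — the file path, target name, and serial string are hypothetical examples, not values from this thread:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/data.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <!-- Arbitrary identifier chosen by the admin; a Linux guest exposes it
       via udev, e.g. as /dev/disk/by-id/virtio-DATA-0001 -->
  <serial>DATA-0001</serial>
</disk>
```

An application inside the guest can then wait for the stable by-id symlink rather than guessing which of vda/vdb/... is the expected disk.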
Re: [PATCH 02/35] qemu: process: Setup transient disks only when starting a fresh VM
On a Friday in 2021, Peter Krempa wrote: Creating the overlay for the disk is needed when starting a new VM only. Additionally for now migration with transient disks is forbidden anyways. Signed-off-by: Peter Krempa --- src/qemu/qemu_process.c | 8 +--- 1 file changed, 5 insertions(+), 3 deletions(-) Reviewed-by: Ján Tomko Jano
Re: [PATCH 01/35] qemu: snapshot: Extract setup of snapshot disk definition for transient disks
On a Friday in 2021, Peter Krempa wrote: The code will be later reused when adding support for sharing the backing image of the snapshot. Signed-off-by: Peter Krempa --- src/qemu/qemu_snapshot.c | 40 ++-- 1 file changed, 26 insertions(+), 14 deletions(-) diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index fd6669433b..c3cff46bc9 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1168,6 +1168,29 @@ qemuSnapshotDiskPrepareActiveExternal(virDomainObj *vm, } +static virDomainSnapshotDiskDef * +qemuSnapshotGetTransientDiskDef(virDomainDiskDef *domdisk) +{ +g_autoptr(virDomainSnapshotDiskDef) snapdisk = g_new0(virDomainSnapshotDiskDef, 1); + +snapdisk->name = g_strdup(domdisk->dst); +snapdisk->snapshot = VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL; +snapdisk->src = virStorageSourceNew(); +snapdisk->src->type = VIR_STORAGE_TYPE_FILE; +snapdisk->src->format = VIR_STORAGE_FILE_QCOW2; +snapdisk->src->path = g_strdup_printf("%s.TRANSIENT", domdisk->src->path); + +if (virFileExists(snapdisk->src->path)) { +virReportError(VIR_ERR_OPERATION_UNSUPPORTED, + _("Overlay file '%s' for transient disk '%s' already exists"), + snapdisk->src->path, domdisk->dst); +return NULL; +} + +return g_steal_pointer(&snapdisk); +} + + static qemuSnapshotDiskContext * qemuSnapshotDiskPrepareDisksTransient(virDomainObj *vm, virQEMUDriverConfig *cfg, @@ -1181,26 +1204,15 @@ qemuSnapshotDiskPrepareDisksTransient(virDomainObj *vm, for (i = 0; i < vm->def->ndisks; i++) { virDomainDiskDef *domdisk = vm->def->disks[i]; -g_autoptr(virDomainSnapshotDiskDef) snapdisk = g_new0(virDomainSnapshotDiskDef, 1); +g_autoptr(virDomainSnapshotDiskDef) snapdisk = NULL; if (!domdisk->transient) continue; /* validation code makes sure that we do this only for local disks * with a file source */ -snapdisk->name = g_strdup(domdisk->dst); -snapdisk->snapshot = VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL; -snapdisk->src = virStorageSourceNew(); -snapdisk->src->type = VIR_STORAGE_TYPE_FILE; 
-snapdisk->src->format = VIR_STORAGE_FILE_QCOW2; -snapdisk->src->path = g_strdup_printf("%s.TRANSIENT", domdisk->src->path); - -if (virFileExists(snapdisk->src->path)) { -virReportError(VIR_ERR_OPERATION_UNSUPPORTED, - _("Overlay file '%s' for transient disk '%s' already exists"), - snapdisk->src->path, domdisk->dst); -return NULL; -} + +snapdisk = qemuSnapshotGetTransientDiskDef(domdisk); if (!snapdisk) return NULL; To correctly propagate the error and avoid NULL dereference below Reviewed-by: Ján Tomko Jano if (qemuSnapshotDiskPrepareOne(vm, cfg, domdisk, snapdisk, snapctxt->dd + snapctxt->ndd++, -- 2.31.1
Re: How to hot plugin a new vhost-user-blk-pci device to running VM?
On 5/21/21 5:28 PM, 梁朝军 wrote: > Thanks all of you for your help. > One more question regarding vhost-user-blk-pci type device, how to identify > a vhost-blk disk in QEUM VM ? for example, disk name looks like vda,vdb,..., > but that some application in VM want to detect that a certain entry is > really the device it is waiting for. Specific for windows , they always show > as disk 0, 1, 2….etc > Is there any way to identify those disk with each other in VM? In general no. Usually disks will be enumerated sequentially - thus the first disk on a sata/scsi/usb/.. bus will be sda, the second will be sdb, and so on. But libvirt can't guarantee it - the same way you can't guarantee how a disk is going to be called with real HW. Michal
Re: [PATCH] kbase: Fix broken link of migration doc
On a Monday in 2021, Han Han wrote: Signed-off-by: Han Han --- docs/kbase/index.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) Reviewed-by: Ján Tomko Jano
[PATCH] kbase: Fix broken link of migration doc
Signed-off-by: Han Han --- docs/kbase/index.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/kbase/index.rst b/docs/kbase/index.rst index d483ca94de..91083ee49d 100644 --- a/docs/kbase/index.rst +++ b/docs/kbase/index.rst @@ -66,4 +66,4 @@ Internals / Debugging `VM migration internals `__ VM migration implementation details, complementing the info in - `migration `__ + `migration <../migration.html>`__ -- 2.31.1