Re: [libvirt] [PATCH v2] qemu: Introduce caching whether /dev/kvm is accessible
On Tue, Oct 30, 2018 at 03:53:59PM +, Daniel P. Berrangé wrote: On Tue, Oct 30, 2018 at 04:47:32PM +0100, Michal Privoznik wrote: Well, caching owner + seclabels + ACLs won't help either. What if user loads some profile into AppArmor or something that denies previously allowed access to /dev/kvm (or vice versa)? What I am saying is that there are some security models which base their decisions on something else than file attributes. The virFileAccessibleAs check for /dev/kvm was put in there to ensure that we correctly report usability of KVM in the capabilities XML according to file permissions/ownership. Essentially KVM is not usable until the udev rule is applied to change permissions to world accessible or to set the kvm group. Can libvirt be run sooner than the permissions are set up during regular start-up, or is this just for the case of the user randomly touching the permissions? IIRC the problem was that users with vmx disabled in BIOS setup needed to delete libvirt's qemuCaps cache manually after enabling it even after restarting the system. To catch that, all we'd have to do is run the check once per daemon lifetime, I don't think we need to care about selinux/apparmor restrictions - just need to be no worse than what we cope with today, which is just perms and owner/group. but that could be considered worse according to this metric. Jano -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] nwfilter: intantiate filters on loading driver
On 10/31/18 10:41 AM, Nikolay Shirokovskiy wrote: > On 31.10.2018 16:14, John Ferlan wrote: >> >> >> On 10/30/18 3:24 AM, Nikolay Shirokovskiy wrote: >>> >>> >>> On 29.10.2018 22:37, John Ferlan wrote: On 10/15/18 4:26 AM, Nikolay Shirokovskiy wrote: > Before using filters binding filters instantiation was done by hypervisors > drivers initialization code (qemu was the only such hypervisor). Now qemu > reconnection done supposes it should be done by nwfilter driver probably. > Let's add this missing step. > > Signed-off-by: Nikolay Shirokovskiy > --- > src/nwfilter/nwfilter_driver.c | 3 +++ > 1 file changed, 3 insertions(+) > If there's research you've done where the instantiation was done before introduction of the nwfilter bindings that would be really helpful... I found that virNWFilterBuildAll was introduced in commit 3df907bfff. There were 2 callers for it: 1. virNWFilterTriggerRebuildImpl 2. nwfilterStateReload The former called as part of the virNWFilterConfLayerInit callback during nwfilterStateInitialize (about 50 lines earlier). >> >> First off let me say you certainly find a lot of interesting bugs/issues >> that are complex and that's good. Often times I wonder how you trip >> across things or how to quantify what you've found. Some are simpler >> than others. This one on the surface would seem to be simple, but I keep >> wondering is there a downside. > > The issue is related to my recent posts: > > [1] [RFC] Faster libvirtd restart with nwfilter rules > https://www.redhat.com/archives/libvir-list/2018-September/msg01206.html > which continues here: > https://www.redhat.com/archives/libvir-list/2018-October/msg00657.html > > In short if there is lots of VMs with filters then libvirtd restart takes a > lot of time > during which libvirtd is unresponsive. 
By the way the issue is found in > libvirt > versions 3.9.0 and earlier (don't know of newer versions, Virtuozzo is based > on 3.9.0 now, > just like Centos7 :) ) So many different issues - trying to focus on just the one for this particular patch. > > And attempts to fix it: > > [2] [PATCH RFC 0/4] nwfilter: don't reinstantiate filters if they are not > changed > https://www.redhat.com/archives/libvir-list/2018-October/msg00904.html > > [3] [PATCH v2 0/2] nwfilter: don't reinstantiate rules if there is no need to > https://www.redhat.com/archives/libvir-list/2018-October/msg01317.html > > And the reason I started v2 is Laine mentioned that it is important to > reinstantiate rules on restart (v1 does not care if somebody messed tables). > I've seen the patches and even read some briefly, but still need to take this particular patch as separate from those. > And I discovered that upstream version stops reinstantiating rules at all > after introducing filters bindings. And this functionality is old: So the key is "when" did that happen? That is can you pinpoint or bisect a time in history where the filters were instantiated? And by instantiated what exactly (call) are you referencing? > > commit cf6f8b9a9720fe5323a84e690de9fbf8ba41f6ac > Author: Stefan Berger > Date: Mon Aug 16 12:59:54 2010 -0400 > > nwfilter: extend nwfilter reload support > Yes, nwfilter is old, brittle, and in need of attention. Not sure anyone really wants to take on the task though! I just realized I never got the common object code implemented there. Mostly because of lock issues. Suffice to say there's more "interesting" changes in the nwfilter space being discussed internally. >> >> The nwfilter processing is kindly said rather strange, complex, and to a >> degree fragile. Thus when patches come along they are met with greater >> scrutiny. From just reading the commit message here I don't get a sense >> for the problem and why the solution fixes it. 
So I'm left to try and >> research myself and ask a lot of questions. >> >> BTW, some of the detail provided in this response is really useful as >> either part of a cover or after the --- in the single patch. I would >> think you'd "know" when you've done lots of research into a problem that >> providing reviews more than a mere hint would be useful! For nwfilter >> bindings, it's hard to imagine this is one of those - it just happened >> type events. There seems to be a very specific sequence that's missing >> from the commit description. >> >>> >>> Right but virNWFilterTriggerRebuildImpl is not actually called, it >>> is set as rebuild callback. This rebuild callback can be called in >>> virNWFilterObjTestUnassignDef and virNWFilterObjListAssignDef. >> >> Ah yes, the virNWFilterConfLayerInit sets up the context to call the >> virNWFilterTriggerRebuildImpl for virNWFilterObjTestUnassignDef and >> virNWFilterObjListAssignDef and no I see it doesn't directly call the >> virNWFilterBuildAll. >> >> Still, I don't see where the processing of a rebuild callback is >>
Re: [libvirt] [PATCH] libxl: add support for soft reset
On 10/30/18 3:18 AM, Michal Privoznik wrote: On 10/30/2018 03:17 AM, Jim Fehlig wrote: On 10/29/18 7:26 PM, Jim Fehlig wrote: The pvops Linux kernel implements machine_ops.crash_shutdown as static void xen_hvm_crash_shutdown(struct pt_regs *regs) { native_machine_crash_shutdown(regs); xen_reboot(SHUTDOWN_soft_reset); } but currently the libxl driver does not handle the soft reset shutdown event. As a result, the guest domain never proceeds past xen_reboot(), making it impossible for HVM domains to save a crash dump using kexec. This patch adds support for handling the soft reset event by calling libxl_domain_soft_reset() and re-enabling domain death events, which is similar to the xl tool handling of soft reset shutdown event. Signed-off-by: Jim Fehlig --- src/libxl/libxl_domain.c | 30 ++ 1 file changed, 30 insertions(+) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 0032b9dd11..295ec04a85 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -447,8 +447,10 @@ libxlDomainShutdownThread(void *opaque) virObjectEventPtr dom_event = NULL; libxl_shutdown_reason xl_reason = ev->u.domain_shutdown.shutdown_reason; libxlDriverConfigPtr cfg; + libxl_domain_config d_config; cfg = libxlDriverConfigGet(driver); + libxl_domain_config_init(_config); vm = virDomainObjListFindByID(driver->domains, ev->domid); if (!vm) { @@ -532,6 +534,33 @@ libxlDomainShutdownThread(void *opaque) * after calling libxl_domain_suspend() are handled by it's callers. */ goto endjob; + } else if (xl_reason == LIBXL_SHUTDOWN_REASON_SOFT_RESET) { + libxlDomainObjPrivatePtr priv = vm->privateData; + int res; I'm not sure why I have this useless local. I've squashed the below diff into my local branch. ACK Thanks, but I found this didn't compile with Xen 4.6, our minimum supported Xen version. Soft reset was introduced in 4.7. I've sent a V2, which includes some cleanups to the libxlDomainShutdownThread function. 
Thanks for looking at it if you have time https://www.redhat.com/archives/libvir-list/2018-October/msg01370.html Regards, Jim
[libvirt] [PATCH V2 0/3] libxl: add support for soft reset
Patches 1 and 2 are new to V2 and make slight improvements to libxlDomainShutdownThread. Patch 3 is adjusted to work with Xen 4.6. Jim Fehlig (3): libxl: remove redundant calls to virObjectEventStateQueue libxl: Remove some goto labels in libxlDomainShutdownThread libxl: add support for soft reset src/libxl/libxl_domain.c | 99 1 file changed, 69 insertions(+), 30 deletions(-) -- 2.18.0
[libvirt] [PATCH V2 2/3] libxl: Remove some goto labels in libxlDomainShutdownThread
There are too many goto labels in libxlDomainShutdownThread. Convert the 'destroy' and 'restart' labels to helper functions, leaving only the commonly used pattern of 'endjob' and 'cleanup' labels. Signed-off-by: Jim Fehlig --- src/libxl/libxl_domain.c | 66 1 file changed, 40 insertions(+), 26 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 9ed6ee8fb3..4cdaee0e51 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -430,6 +430,30 @@ virDomainDefParserConfig libxlDomainDefParserConfig = { }; +static void +libxlDomainShutdownHandleDestroy(libxlDriverPrivatePtr driver, + virDomainObjPtr vm) +{ +libxlDomainDestroyInternal(driver, vm); +libxlDomainCleanup(driver, vm); +if (!vm->persistent) +virDomainObjListRemove(driver->domains, vm); +} + + +static void +libxlDomainShutdownHandleRestart(libxlDriverPrivatePtr driver, + virDomainObjPtr vm) +{ +libxlDomainDestroyInternal(driver, vm); +libxlDomainCleanup(driver, vm); +if (libxlDomainStartNew(driver, vm, false) < 0) { +VIR_ERROR(_("Failed to restart VM '%s': %s"), + vm->def->name, virGetLastErrorMessage()); +} +} + + struct libxlShutdownThreadInfo { libxlDriverPrivatePtr driver; @@ -468,10 +492,12 @@ libxlDomainShutdownThread(void *opaque) VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); switch ((virDomainLifecycleAction) vm->def->onPoweroff) { case VIR_DOMAIN_LIFECYCLE_ACTION_DESTROY: -goto destroy; +libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART: case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART_RENAME: -goto restart; +libxlDomainShutdownHandleRestart(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_PRESERVE: case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_DESTROY: case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_RESTART: @@ -487,19 +513,23 @@ libxlDomainShutdownThread(void *opaque) VIR_DOMAIN_EVENT_STOPPED_CRASHED); switch ((virDomainLifecycleAction) vm->def->onCrash) { case VIR_DOMAIN_LIFECYCLE_ACTION_DESTROY: -goto destroy; 
+libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART: case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART_RENAME: -goto restart; +libxlDomainShutdownHandleRestart(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_PRESERVE: case VIR_DOMAIN_LIFECYCLE_ACTION_LAST: goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_DESTROY: libxlDomainAutoCoreDump(driver, vm); -goto destroy; +libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_RESTART: libxlDomainAutoCoreDump(driver, vm); -goto restart; +libxlDomainShutdownHandleRestart(driver, vm); +goto endjob; } } else if (xl_reason == LIBXL_SHUTDOWN_REASON_REBOOT) { virDomainObjSetState(vm, VIR_DOMAIN_SHUTOFF, @@ -510,10 +540,12 @@ libxlDomainShutdownThread(void *opaque) VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); switch ((virDomainLifecycleAction) vm->def->onReboot) { case VIR_DOMAIN_LIFECYCLE_ACTION_DESTROY: -goto destroy; +libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART: case VIR_DOMAIN_LIFECYCLE_ACTION_RESTART_RENAME: -goto restart; +libxlDomainShutdownHandleRestart(driver, vm); +goto endjob; case VIR_DOMAIN_LIFECYCLE_ACTION_PRESERVE: case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_DESTROY: case VIR_DOMAIN_LIFECYCLE_ACTION_COREDUMP_RESTART: @@ -531,26 +563,8 @@ libxlDomainShutdownThread(void *opaque) * Similar to the xl implementation, ignore SUSPEND. Any actions needed * after calling libxl_domain_suspend() are handled by it's callers. 
*/ -goto endjob; } else { VIR_INFO("Unhandled shutdown_reason %d", xl_reason); -goto endjob; -} - - destroy: -libxlDomainDestroyInternal(driver, vm); -libxlDomainCleanup(driver, vm); -if (!vm->persistent) -virDomainObjListRemove(driver->domains, vm); - -goto endjob; - - restart: -libxlDomainDestroyInternal(driver, vm); -libxlDomainCleanup(driver, vm); -if (libxlDomainStartNew(driver, vm, false) < 0) { -VIR_ERROR(_("Failed to restart VM '%s': %s"), - vm->def->name,
[libvirt] [PATCH V2 1/3] libxl: remove redundant calls to virObjectEventStateQueue
In libxlDomainShutdownThread, virObjectEventStateQueue is needlessly called in the destroy and restart labels. The cleanup label already queues whatever event was created based on libxl_shutdown_reason. There is no need to handle destroy and restart differently. Signed-off-by: Jim Fehlig --- src/libxl/libxl_domain.c | 4 1 file changed, 4 deletions(-) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 0032b9dd11..9ed6ee8fb3 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -538,8 +538,6 @@ libxlDomainShutdownThread(void *opaque) } destroy: -virObjectEventStateQueue(driver->domainEventState, dom_event); -dom_event = NULL; libxlDomainDestroyInternal(driver, vm); libxlDomainCleanup(driver, vm); if (!vm->persistent) @@ -548,8 +546,6 @@ libxlDomainShutdownThread(void *opaque) goto endjob; restart: -virObjectEventStateQueue(driver->domainEventState, dom_event); -dom_event = NULL; libxlDomainDestroyInternal(driver, vm); libxlDomainCleanup(driver, vm); if (libxlDomainStartNew(driver, vm, false) < 0) { -- 2.18.0
[libvirt] [PATCH V2 3/3] libxl: add support for soft reset
The pvops Linux kernel implements machine_ops.crash_shutdown as static void xen_hvm_crash_shutdown(struct pt_regs *regs) { native_machine_crash_shutdown(regs); xen_reboot(SHUTDOWN_soft_reset); } but currently the libxl driver does not handle the soft reset shutdown event. As a result, the guest domain never proceeds past xen_reboot(), making it impossible for HVM domains to save a crash dump using kexec. This patch adds support for handling the soft reset event by calling libxl_domain_soft_reset() and re-enabling domain death events, which is similar to the xl tool handling of soft reset shutdown event. Signed-off-by: Jim Fehlig --- V2: Check for availability of soft reset with LIBXL_HAVE_SOFT_RESET src/libxl/libxl_domain.c | 29 + 1 file changed, 29 insertions(+) diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c index 4cdaee0e51..47c1f49538 100644 --- a/src/libxl/libxl_domain.c +++ b/src/libxl/libxl_domain.c @@ -471,8 +471,10 @@ libxlDomainShutdownThread(void *opaque) virObjectEventPtr dom_event = NULL; libxl_shutdown_reason xl_reason = ev->u.domain_shutdown.shutdown_reason; libxlDriverConfigPtr cfg; +libxl_domain_config d_config; cfg = libxlDriverConfigGet(driver); +libxl_domain_config_init(&d_config); vm = virDomainObjListFindByID(driver->domains, ev->domid); if (!vm) { @@ -563,6 +565,33 @@ libxlDomainShutdownThread(void *opaque) * Similar to the xl implementation, ignore SUSPEND. Any actions needed * after calling libxl_domain_suspend() are handled by it's callers. */ +#ifdef LIBXL_HAVE_SOFT_RESET +} else if (xl_reason == LIBXL_SHUTDOWN_REASON_SOFT_RESET) { +libxlDomainObjPrivatePtr priv = vm->privateData; +if (libxl_retrieve_domain_configuration(cfg->ctx, vm->def->id, +&d_config) != 0) { +VIR_ERROR(_("Failed to retrieve config for VM '%s'. " +"Unable to perform soft reset. 
Destroying VM"), + vm->def->name); +libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; +} + +if (priv->deathW) { +libxl_evdisable_domain_death(cfg->ctx, priv->deathW); +priv->deathW = NULL; +} + +if (libxl_domain_soft_reset(cfg->ctx, &d_config, vm->def->id, +NULL, NULL) != 0) { +VIR_ERROR(_("Failed to soft reset VM '%s'. Destroying VM"), + vm->def->name); +libxlDomainShutdownHandleDestroy(driver, vm); +goto endjob; +} +libxl_evenable_domain_death(cfg->ctx, vm->def->id, 0, &priv->deathW); +libxl_domain_unpause(cfg->ctx, vm->def->id); +#endif } else { VIR_INFO("Unhandled shutdown_reason %d", xl_reason); } -- 2.18.0
Re: [libvirt] [PATCH v2] vhost-user: define conventions for vhost-user backends
On Wed, Oct 31, 2018 at 07:44:36PM +0400, Marc-André Lureau wrote: > Hi > > On Tue, Oct 16, 2018 at 4:14 PM Daniel P. Berrangé > wrote: > > > > It just reinvents the chardev unix socket syntax, but in a > > > > different adhoc manner, which is repeating the mistake we have > > > > made time & again in QEMU. Using QAPI we can directly accept > > > > the ChardevSocket syntax we already know about. Instead of > > > > having --socket-path and --fd and documenting that they are > > > > mutually exclusive ChardevSocket QAPI schema provides that > > > > representation in a well defined format which is discoverable > > > > and QEMU and mgmt apps already understand. > > > > > > That would require external vhost-user backends to implement QAPI/json > > > parsing. Is this necessary for a vhost-user backend? I doubt it. > > > > They could, but would not be required, to implement QAPI/json parser. > > > > The QAPI schema defines a standard for how to model & interpret the > > non-scalar values for command line arguments. An external impl would > > need to ensure that whatever parsing it does for CLI args is semantically > > compatible with the parsing rules defined by the QEMU QAPI schema we > > define to ensure interoperability of its impl. > > So you would want to have something like? > > --chardev '{ "id" : "bar", "backend" : { "type" : "socket", "data" : { > "addr" : { "type": "unix", "path": "/tmp/foo.sock" }, "server": > "false" } } }' I wasn't specificially suggesting json syntax. Just the flattened dot separate syntax, whose valid components are mapped to corresponding qapi schema defintions. eg closer to what we have already today --chardev socket,id=bar,path=/tmp/foo.sock,server > > instead of: > > --socket-path=/tmp/foo.sock > > I don't really get what that will help with, for the vhost-user backend > case... It avoids presuming that we forever want to launch the backend with a single socket path and nothing else. 
Using the chardev means we can choose between client/server mode. We can choose whether to pass an FD to the socket, instead of the socket path. Whether the reconnect flag is set or not, and all the other aspects of a chardev you can define. I don't think we should have to add more & more adhoc CLI args (--socket-path, --fd, --reconnect, etc) and then manually parse them & pack their values together into a chardev object. Regards, Daniel -- |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o-https://fstop138.berrange.com :| |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
Re: [libvirt] [PATCH v2] vhost-user: define conventions for vhost-user backends
Hi On Tue, Oct 16, 2018 at 4:14 PM Daniel P. Berrangé wrote: > > > It just reinvents the chardev unix socket syntax, but in a > > > different adhoc manner, which is repeating the mistake we have > > > made time & again in QEMU. Using QAPI we can directly accept > > > the ChardevSocket syntax we already know about. Instead of > > > having --socket-path and --fd and documenting that they are > > > mutually exclusive ChardevSocket QAPI schema provides that > > > representation in a well defined format which is discoverable > > > and QEMU and mgmt apps already understand. > > > > That would require external vhost-user backends to implement QAPI/json > > parsing. Is this necessary for a vhost-user backend? I doubt it. > > They could, but would not be required, to implement QAPI/json parser. > > The QAPI schema defines a standard for how to model & interpret the > non-scalar values for command line arguments. An external impl would > need to ensure that whatever parsing it does for CLI args is semantically > compatible with the parsing rules defined by the QEMU QAPI schema we > define to ensure interoperability of its impl. So you would want to have something like? --chardev '{ "id" : "bar", "backend" : { "type" : "socket", "data" : { "addr" : { "type": "unix", "path": "/tmp/foo.sock" }, "server": "false" } } }' instead of: --socket-path=/tmp/foo.sock I don't really get what that will help with, for the vhost-user backend case... > > Regards, > Daniel > -- > |: https://berrange.com -o-https://www.flickr.com/photos/dberrange :| > |: https://libvirt.org -o-https://fstop138.berrange.com :| > |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :| -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH] qemu: Report warning if QoS is set for vhostuser interface
https://bugzilla.redhat.com/show_bug.cgi?id=1524230 Because of historical reasons, we are not denying starting a domain which has QoS set for unsupported type of device. We do report just a warning instead. And even though we perhaps used to do so for vhostuser it got lost somewhere. Bring it back. Signed-off-by: Michal Privoznik --- src/qemu/qemu_command.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 6e3ff67660..489e8bc689 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -8246,6 +8246,12 @@ qemuBuildVhostuserCommandLine(virQEMUDriverPtr driver, goto cleanup; } +if (virDomainNetGetActualBandwidth(net)) { +VIR_WARN("setting bandwidth on interfaces of " + "type '%s' is not implemented yet", + virDomainNetTypeToString(VIR_DOMAIN_NET_TYPE_VHOSTUSER)); +} + switch ((virDomainChrType)net->data.vhostuser->type) { case VIR_DOMAIN_CHR_TYPE_UNIX: if (!(chardev = qemuBuildChrChardevStr(logManager, secManager, -- 2.18.1 -- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] nwfilter: intantiate filters on loading driver
On 31.10.2018 16:14, John Ferlan wrote: > > > On 10/30/18 3:24 AM, Nikolay Shirokovskiy wrote: >> >> >> On 29.10.2018 22:37, John Ferlan wrote: >>> >>> >>> On 10/15/18 4:26 AM, Nikolay Shirokovskiy wrote: Before using filters binding filters instantiation was done by hypervisors drivers initialization code (qemu was the only such hypervisor). Now qemu reconnection done supposes it should be done by nwfilter driver probably. Let's add this missing step. Signed-off-by: Nikolay Shirokovskiy --- src/nwfilter/nwfilter_driver.c | 3 +++ 1 file changed, 3 insertions(+) >>> >>> If there's research you've done where the instantiation was done before >>> introduction of the nwfilter bindings that would be really helpful... >>> >>> I found that virNWFilterBuildAll was introduced in commit 3df907bfff. >>> There were 2 callers for it: >>> >>>1. virNWFilterTriggerRebuildImpl >>>2. nwfilterStateReload >>> >>> The former called as part of the virNWFilterConfLayerInit callback >>> during nwfilterStateInitialize (about 50 lines earlier). > > First off let me say you certainly find a lot of interesting bugs/issues > that are complex and that's good. Often times I wonder how you trip > across things or how to quantify what you've found. Some are simpler > than others. This one on the surface would seem to be simple, but I keep > wondering is there a downside. The issue is related to my recent posts: [1] [RFC] Faster libvirtd restart with nwfilter rules https://www.redhat.com/archives/libvir-list/2018-September/msg01206.html which continues here: https://www.redhat.com/archives/libvir-list/2018-October/msg00657.html In short if there is lots of VMs with filters then libvirtd restart takes a lot of time during which libvirtd is unresponsive. 
By the way the issue is found in libvirt versions 3.9.0 and ealier (don't know of newer versions, Virtuozzo is based on 3.9.0 now, just like Centos7 :) ) And attempts to fix it: [2] [PATCH RFC 0/4] nwfilter: don't reinstantiate filters if they are not changed https://www.redhat.com/archives/libvir-list/2018-October/msg00904.html [3] [PATCH v2 0/2] nwfilter: don't reinstantiate rules if there is no need to https://www.redhat.com/archives/libvir-list/2018-October/msg01317.html And the reason I started v2 is Laine mentioned that it is important to reinstantiate rules on restart (v1 do not care if somebody messed tables). And I discovered that upstream version stops reinstantiating rules at all after introducing filters bindings. And this functionality is old: commit cf6f8b9a9720fe5323a84e690de9fbf8ba41f6ac Author: Stefan Berger Date: Mon Aug 16 12:59:54 2010 -0400 nwfilter: extend nwfilter reload support > > The nwfilter processing is kindly said rather strange, complex, and to a > degree fragile. Thus when patches come along they are met with greater > scrutiny. From just reading the commit message here I don't get a sense > for the problem and why the solution fixes it. So I'm left to try and > research myself and ask a lot of questions. > > BTW, some of the detail provided in this response is really useful as > either part of a cover or after the --- in the single patch. I would > think you'd "know" when you've done lots of research into a problem that > providing reviews more than a mere hint would be useful! For nwfilter > bindings, it's hard to imagine this is one of those - it just happened > type events. There seems to be a very specific sequence that's missing > from the commit description. > >> >> Right but virNWFilterTriggerRebuildImpl is not actually called, it >> is set as rebuild callback. This rebuild callback can be called in >> virNWFilterObjTestUnassignDef and virNWFilterObjListAssignDef. 
> > Ah yes, the virNWFilterConfLayerInit sets up the context to call the > virNWFilterTriggerRebuildImpl for virNWFilterObjTestUnassignDef and > virNWFilterObjListAssignDef and no I see it doesn't directly call the > virNWFilterBuildAll. > > Still, I don't see where the processing of a rebuild callback is > different than prior to commit 3df907bfff - at least with respect to the > two places which would call it. Right processing did not changed, I just wanted to show that virNWFilterBuildAll is not called now and before on nwfilter init. By the way 3df907bfff introduced virNWFilterBuildAll but not its functionality. > >> The former is not called during nwfilter driver init. The latter >> is called as part of virNWFilterObjListLoadAllConfigs but rebuild >> callback is never called on this path because the list is empty > > Which list? nwfilters? nwfilter-bindings? nwfilters. And again by the way this is a bit vague statement. The matter is not list is not empty but we won't find same name in it when virNWFilterObjListAssignDef is called during driver init (which is said in below paragraph part in other words too) > >> (callback is called only on updates when filter with same name >> is
Re: [libvirt] [PATCH] nwfilter: intantiate filters on loading driver
On 10/30/18 3:24 AM, Nikolay Shirokovskiy wrote: > > > On 29.10.2018 22:37, John Ferlan wrote: >> >> >> On 10/15/18 4:26 AM, Nikolay Shirokovskiy wrote: >>> Before using filters binding filters instantiation was done by hypervisors >>> drivers initialization code (qemu was the only such hypervisor). Now qemu >>> reconnection done supposes it should be done by nwfilter driver probably. >>> Let's add this missing step. >>> >>> Signed-off-by: Nikolay Shirokovskiy >>> --- >>> src/nwfilter/nwfilter_driver.c | 3 +++ >>> 1 file changed, 3 insertions(+) >>> >> >> If there's research you've done where the instantiation was done before >> introduction of the nwfilter bindings that would be really helpful... >> >> I found that virNWFilterBuildAll was introduced in commit 3df907bfff. >> There were 2 callers for it: >> >>1. virNWFilterTriggerRebuildImpl >>2. nwfilterStateReload >> >> The former called as part of the virNWFilterConfLayerInit callback >> during nwfilterStateInitialize (about 50 lines earlier). First off let me say you certainly find a lot of interesting bugs/issues that are complex and that's good. Often times I wonder how you trip across things or how to quantify what you've found. Some are simpler than others. This one on the surface would seem to be simple, but I keep wondering is there a downside. The nwfilter processing is kindly said rather strange, complex, and to a degree fragile. Thus when patches come along they are met with greater scrutiny. From just reading the commit message here I don't get a sense for the problem and why the solution fixes it. So I'm left to try and research myself and ask a lot of questions. BTW, some of the detail provided in this response is really useful as either part of a cover or after the --- in the single patch. I would think you'd "know" when you've done lots of research into a problem that providing reviews more than a mere hint would be useful! 
For nwfilter bindings, it's hard to imagine this is one of those - it just happened type events. There seems to be a very specific sequence that's missing from the commit description. > > Right but virNWFilterTriggerRebuildImpl is not actually called, it > is set as rebuild callback. This rebuild callback can be called in > virNWFilterObjTestUnassignDef and virNWFilterObjListAssignDef. Ah yes, the virNWFilterConfLayerInit sets up the context to call the virNWFilterTriggerRebuildImpl for virNWFilterObjTestUnassignDef and virNWFilterObjListAssignDef and no I see it doesn't directly call the virNWFilterBuildAll. Still, I don't see where the processing of a rebuild callback is different than prior to commit 3df907bfff - at least with respect to the two places which would call it. > The former is not called during nwfilter driver init. The latter > is called as part of virNWFilterObjListLoadAllConfigs but rebuild > callback is never called on this path because the list is empty Which list? nwfilters? nwfilter-bindings? > (callback is called only on updates when filter with same name > is already present in the list). So virNWFilterBuildAll is > not called on nwfilter driver init. > And similar logic was run was before commit 3df907bfff? This direct correlation is the part I'm missing. What follows is my understand of the before and after - some of the before is a bit of hand waving. Perhaps Daniel can chime in and "fix up" inaccuracies. Prior to commit 3df907bfff, the virNWFilterDomainFWUpdateCB was only run when the domain was active. IOW: The domain would need to preload the bindings in "hidden" locations such that the various vnetX (or whatever target dev for the ) devices would load the whichever was assocated. When libvirtd was restarted the processing was similar and the "magic" of reinstantiation was provided through virNWFilterRegisterCallbackDriver in (for example) qemuStateInitialize. 
That I believe is the part you describe in your commit as "Before using filters binding filters instantiation was done by hypervisors drivers initialization code (qemu was the only such hypervisor)." - although I see LXC and UML drivers changed as well, so I could be wrong with my assumption of what you meant. After that commit, rather than requiring qemuStateInitialize to register a callback driver which somehow magically loaded the filters for running guests, the nwfilter-bindings for running guests are loaded via (for example) /var/run/libvirt/vnet0.xml file processing during virNWFilterBindingObjListLoadAllConfigs (change in the previous commit 57f5621f46). So rather than the rebuild processing magically occurring in the background by some hypervisor driver performing the rebuild callback processing. Since virRegisterStateDriver (and friends) for nwfilter are run before qemu, IIUC that means the filter bindings would be loaded already. It's all a complicated dance. So in this after model for running guests it seems to me that the exact same processing occurs. Now if someone during libvirtd's stop
Re: [libvirt] [jenkins-ci PATCH v2 0/3] Add libvirt-ocaml
On Mon, Oct 22, 2018 at 07:02:24PM +0200, Pino Toscano wrote: > Now that the libvirt-ocaml repository is fixed, let's build it in CI. > > Changes from v1 to v2: > - split according to Andrea's hints > - removed --with-libvirt as argument for configure, no longer needed > after upstream changes Reviewed-by: Erik Skultety
Re: [libvirt] [PATCH v2 0/3] qemu: guest dedicated crypto adapters
On 10/29/18 5:48 PM, John Ferlan wrote: On 10/18/18 10:54 AM, Boris Fiuczynski wrote: v2: + added singleton check for vfio-ap mediated device This patch series introduces initial libvirt support for guest dedicated crypto adapters on S390. It allows to specify a vfio-ap mediated device in a domain. Extensive documentation about AP is available in patch 6 of the QEMU patch series. I modified patch2 to use "supported" and pushed the series just now. Tks - John Thanks John. I appreciate it. -- Mit freundlichen Grüßen/Kind regards Boris Fiuczynski IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Köderitz Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen Registergericht: Amtsgericht Stuttgart, HRB 243294