On Thu, Jan 21, 2016 at 5:16 PM, Klaus Aehlig <[email protected]> wrote:
> > > However, this is not what your patch is achieving. EnsureKvmdOnNodes
> > > is called at a couple of places on non-KVM clusters, e.g.,
> > > - at cluster init, and
> > > - whenever the set of enabled hypervisors changes, regardless of
> > >   whether kvm is in the old or the new set
> >
> > Yes, but in these cases the calls are justified and conditional (it is
> > executed only if we know we need to potentially switch the state of
> > kvmd).
>
> Can you elaborate on this a bit? If I understand the code correctly, we
> currently always try to stop the kvmd on hypervisor changes with kvm not
> enabled. In other words, if we change the set of enabled hypervisors from
> Xen to Xen + fake hypervisor, we try to stop kvmd. I don't see how this
> attempt to stop kvmd is more justified than an attempt to stop kvmd when
> adding a new node.

Yes, the condition could be more precise by evaluating whether 'kvm' is in
the old or the new list of enabled hypervisors. But at least it shaves off
most cases in which other cluster parameters change - actually, only a
change in enabled_hypervisors or enabled_user_shutdown triggers a call to
EnsureKvmdOnNodes(). So, likewise, I was trying to shave off cases on node
parameter changes when KVM is disabled.

I think you are right that on node addition we shouldn't make assumptions
about the node state. In general, I don't have strong feelings regarding
this; I'm happy to take the silent-trial approach in both cases.

Thanks,

> Klaus
>
> --
> Klaus Aehlig
> Google Germany GmbH, Dienerstr. 12, 80331 Muenchen
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg
> Geschaeftsfuehrer: Matthew Scott Sucherman, Paul Terence Manicle
>
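P.S. For illustration only, here is a minimal Python sketch of the "more
precise" condition discussed above - triggering kvmd reconfiguration only
when 'kvm' is in the old or the new set of enabled hypervisors, or when
the user-shutdown setting changes. The function name and parameter shapes
are hypothetical, not the actual Ganeti code:

```python
def kvmd_update_needed(old_params, new_params):
    """Return True if the kvmd state may need to change.

    Sketch only: old_params/new_params are assumed to be dicts of
    cluster parameters, as in {"enabled_hypervisors": [...],
    "enabled_user_shutdown": bool}.
    """
    old_hvs = set(old_params.get("enabled_hypervisors", []))
    new_hvs = set(new_params.get("enabled_hypervisors", []))

    # A change in user-shutdown support also affects kvmd.
    shutdown_changed = (old_params.get("enabled_user_shutdown")
                        != new_params.get("enabled_user_shutdown"))

    # Only act if kvm is involved before or after the change; this
    # avoids stopping kvmd on, e.g., a Xen -> Xen + fake transition.
    return ("kvm" in old_hvs or "kvm" in new_hvs) and (
        old_hvs != new_hvs or shutdown_changed)
```

With a check like this, the Xen to Xen + fake-hypervisor change from the
discussion above would no longer attempt to stop kvmd.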
