On Fri, Jan 18, 2019 at 7:57 AM Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 18/01/19 11:21, Daniel P. Berrangé wrote:
> > On Fri, Jan 18, 2019 at 10:16:34AM +0000, Dr. David Alan Gilbert wrote:
> >> * Paolo Bonzini (pbonz...@redhat.com) wrote:
> >>> The solution is to restart the VM using "-cpu host,-vmx".
> >>
> >> The problem as Christian explained in that thread is that it was common
> >> for them to start VMs with vmx enabled but for people not to use it
> >> on most of the VMs, so we break migration for most VMs even though most
> >> don't use it.
> >>
> >> It might not be robust, but it worked for a lot of people most of the
> >> time.
>
> It's not "not robust" (like, it usually works but sometimes fails
> mysteriously). It's entirely broken, you just don't notice that it is
> if you're not using the feature.

It is useful to understand the risk. However, this is the same risk we have
been successfully living with for several years now, and it seems abrupt to
declare 3.1 and 3.2 as the QEMU versions beyond which migration requires a
whole cluster restart, whether or not an L2 guest has been, or will ever be,
started on any of the guests. I would like to see the risk clearly
communicated, and to have the option of proceeding anyway (as we have every
day since first deploying the solution). I think I am not alone here,
otherwise I would have quietly implemented a naive patch myself without
raising this for discussion. :-)

Given the known risk, I'm happy to restart all machines that have used, or
will likely use, an L2 guest, and leverage this capability for the 80%+ of
machines that will never launch an L2 guest. That said, detecting L2 use and
using it to block live migration, in case any mistakes in detection were
made, would be very cool as well.

Is this something that will already work with the pending 3.2 code, or is
some change required to achieve it? Is it best to upgrade to 3.0 before
proceeding to 3.2 (once it is released), or will it be acceptable to migrate
from 2.12 directly to 3.2 in this manner?

> > Yes, this is exactly why I said we should make the migration blocker
> > be conditional on any L2 guest having been started. I vaguely recall
> > someone saying there wasn't any way to detect this situation from
> > QEMU though ?
>
> You can check that and give a warning (check that CR4.VMXE=1 but no
> other live migration state was transferred). However, without live
> migration support in the kernel and in QEMU you cannot start VMs *for
> the entire future life of the VM* after a live migration. So even if we
> implemented that kind of blocker, it would fail even if no VM has been
> started, as long as the kvm_intel module is loaded on migration. That
> would be no different in practice from what we have now.
>
> It might work to unload the kvm_intel module and run live migration with
> the CPU configured differently ("-cpu host,-vmx") on the destination.

For machines that will not use L2 guests, would it be a good precaution to
unload kvm_intel pre-emptively before live migration, just in case? (I have
put a rough sketch of what I mean at the end of this mail.) In particular,
I'm curious whether doing anything at all increases the risk of failure, or
whether the lowest-risk option is to leave things alone entirely and never
use the feature (which is what we have traditionally been doing anyway).

I do appreciate the warnings and details. Just not the enforcement piece.

Thanks!

--
Mark Mielke <mark.mie...@gmail.com>
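
P.S. For concreteness, here is roughly how I read the "-cpu host,-vmx"
suggestion for the destination side. This is only a sketch: the memory size
and incoming port are made-up values, and our real invocations go through
libvirt rather than a hand-written command line:

    # Destination host: accept the incoming migration with vmx masked out
    # of the host CPU model, so the migrated guest can no longer turn on
    # nested virtualization.
    qemu-system-x86_64 -enable-kvm \
        -m 4096 \
        -cpu host,-vmx \
        -incoming tcp:0:4444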
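
And the pre-emptive step I was asking about, which I read as being done
inside the L1 guest (again, just a sketch of the idea, not something we
have tested):

    # Inside the L1 guest, before migration: remove the kvm_intel module so
    # the guest is no longer set up for nested virtualization. modprobe
    # refuses to unload it if an L2 guest is still running (module in use).
    modprobe -r kvm_intel
    # Confirm it is really gone before triggering the migration on the host.
    lsmod | grep '^kvm_intel' || echo "kvm_intel no longer loaded"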