* Felipe Franciosi (fel...@nutanix.com) wrote:
> 
> > On Sep 30, 2019, at 6:59 PM, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
> > 
> > * Felipe Franciosi (fel...@nutanix.com) wrote:
> >> 
> >>> On Sep 30, 2019, at 6:11 PM, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
> >>> 
> >>> * Felipe Franciosi (fel...@nutanix.com) wrote:
> >>>> 
> >>>>> On Sep 30, 2019, at 5:03 PM, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
> >>>>> 
> >>>>> * Felipe Franciosi (fel...@nutanix.com) wrote:
> >>>>>> Hi David,
> >>>>>> 
> >>>>>>> On Sep 30, 2019, at 3:29 PM, Dr. David Alan Gilbert <dgilb...@redhat.com> wrote:
> >>>>>>> 
> >>>>>>> * Felipe Franciosi (fel...@nutanix.com) wrote:
> >>>>>>>> Heyall,
> >>>>>>>> 
> >>>>>>>> We have a use case where a host should self-fence (and all VMs should
> >>>>>>>> die) if it doesn't hear back from a heartbeat within a certain time
> >>>>>>>> period. Lots of ideas were floated around where libvirt could take
> >>>>>>>> care of killing VMs or a separate service could do it. The concern
> >>>>>>>> with those is that various failures could lead to _those_ services
> >>>>>>>> being unavailable and the fencing wouldn't be enforced as it should.
> >>>>>>>> 
> >>>>>>>> Ultimately, it feels like Qemu should be responsible for this
> >>>>>>>> heartbeat and exit (or execute a custom callback) on timeout.
> >>>>>>> 
> >>>>>>> It doesn't feel like doing it inside qemu would be any safer; something
> >>>>>>> outside QEMU can forcibly emit a kill -9 and qemu *will* stop.
> >>>>>> 
> >>>>>> The argument above is that we would have to rely on this external
> >>>>>> service being functional. Consider the case where the host is
> >>>>>> dysfunctional, with this service perhaps crashed and a corrupt
> >>>>>> filesystem preventing it from restarting. The VMs would never die.
> >>>>> 
> >>>>> Yeh, that could fail.
> >>>>> 
> >>>>>> It feels like a Qemu timer-driven heartbeat check that calls abort() /
> >>>>>> exit() would be more reliable. Thoughts?
> >>>>> 
> >>>>> OK, yes; perhaps using a timer_create and telling it to send a fatal
> >>>>> signal is pretty solid; it would take the kernel to do that once it's
> >>>>> set.
> >>>> 
> >>>> I'm confused about why the kernel needs to be involved. If this is a
> >>>> timer off the Qemu main loop, it can just check on the heartbeat
> >>>> condition (which should be customisable) and call abort() if that's
> >>>> not satisfied. If you agree on that, I'd like to talk about how that
> >>>> check could be made customisable.
> >>> 
> >>> There are times when the main loop can get blocked even though the CPU
> >>> threads can be running, and they can in some configurations perform IO
> >>> even without the main loop (I think!).
> >> 
> >> Ah, that's a very good point. Indeed, you can perform IO in those
> >> cases, especially when using vhost devices.
> >> 
> >>> By setting a timer in the kernel that sends a signal to qemu, the kernel
> >>> will send that signal however broken qemu is.
> >> 
> >> Got you now. That's probably better. Do you reckon a signal is
> >> preferable over SIGEV_THREAD?
> > 
> > Not sure; probably the safest is getting the kernel to SIGKILL it - but
> > that's a complete nightmare to debug - your process just goes *pop*
> > with no apparent reason why.
> > I've not used SIGEV_THREAD - it looks promising though.
> 
> I'm worried that SIGEV_THREAD could be a bit heavyweight (if it fires
> up a new thread each time). On the other hand, as you said, SIGKILL
> makes it harder to debug.
> 
> Also, asking the kernel to defer the SIGKILL (ie. updating the timer)
> needs to come from Qemu itself (eg. a timer in the main loop,
> something we already ruled unsuitable, or a qmp command which
> constitutes an external dependency that we also ruled undesirable).

OK, there are two reasons I think this isn't that bad / is good:

  a) It's an external dependency - but if it fails, the result is that
     the system fails rather than keeps on running; I think that's the
     balance you were after, and it's the opposite of the external
     watchdog.

  b) You need some external system anyway to tell QEMU when it's OK -
     what's your definition of a failed system?

> What if, when self-fencing is enabled, Qemu kicks off a new thread
> from the start which does nothing but periodically wake up, verify the
> heartbeat condition, and log()+abort() if required? (Then we wouldn't
> need the kernel timer.)

I'd make that thread bump the kernel timer along.
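Roughly what I mean - an untested sketch, nothing tied to any existing
QEMU infrastructure, and the fence_* names are made up:

/* Arm a kernel timer that SIGKILLs the process unless it is bumped.
 * Once timer_settime() has run, delivery is the kernel's job: the
 * signal arrives no matter how wedged QEMU's own threads are. */
#include <signal.h>
#include <stdlib.h>
#include <time.h>

static timer_t fence_timer;

static void fence_arm(time_t deadline_secs)
{
    struct sigevent sev = {
        .sigev_notify = SIGEV_SIGNAL,
        .sigev_signo  = SIGKILL,   /* or SIGABRT for a debuggable core */
    };
    struct itimerspec its = {
        .it_value.tv_sec = deadline_secs,   /* one-shot, no interval */
    };

    if (timer_create(CLOCK_MONOTONIC, &sev, &fence_timer) < 0 ||
        timer_settime(fence_timer, 0, &its, NULL) < 0) {
        abort();   /* refuse to run unfenced if fencing was requested */
    }
}

/* Called by the watchdog thread while the heartbeat looks healthy:
 * pushes the expiry a further deadline_secs into the future. */
static void fence_bump(time_t deadline_secs)
{
    struct itimerspec its = {
        .it_value.tv_sec = deadline_secs,
    };

    if (timer_settime(fence_timer, 0, &its, NULL) < 0) {
        abort();
    }
}

SIGEV_SIGNAL also sidesteps your SIGEV_THREAD worry, since nothing gets
spawned on expiry; and swapping SIGKILL for SIGABRT trades a little
certainty for a core dump when you need to work out why a host fenced.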
> >> I'm still wondering how to make this customisable so that different
> >> types of heartbeat could be implemented (preferably without creating
> >> external dependencies per discussion above). Thoughts welcome.
> > 
> > Yes, you need something to enable it, and some safe way to retrigger
> > the timer. A qmp command marked as 'oob' might be the right way -
> > another qmp command can't block it.
> 
> This qmp approach is slightly different from the external dependency
> that itself kills Qemu; if it doesn't run, then Qemu dies because the
> kernel timer is not updated. But it is also a heavyweight approach:
> we are talking about a service that needs to frequently connect to all
> running VMs on a host to reset the timer.
> 
> But it does allow for the customisable heartbeat: the logic behind
> what triggers the command is completely flexible.
> 
> Thinking about this idea of a separate Qemu thread, one thing that
> came to mind is this:
> 
>   qemu -fence heartbeat=/path/to/file,deadline=60[,recheck=5]
> 
> Qemu could fire up a thread that stat()s <file> (every <recheck>
> seconds, or on a default interval) and log()s+abort()s the whole
> process if the last modification time of the file is older than
> <deadline>. If <file> goes away (ie. stat() gives ENOENT), then it
> either fences immediately or ignores it; I'm not sure which is more
> sensible.
> 
> Thoughts?

As above; I'm OK with using a file for that, but I'd make that thread
bump the kernel timer along: if that thread gets stuck somehow, the
kernel still nukes your process.
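Concretely, something like this - again untested, reusing the made-up
fence_arm()/fence_bump() helpers from the earlier sketch, with values
hard-coded where your heartbeat=/deadline=/recheck= options would go:

/* Watchdog thread body (started with pthread_create early in startup).
 * Only a fresh heartbeat file defers the kernel's SIGKILL; a stale
 * mtime, a missing file, or this thread getting stuck all let the
 * timer fire. */
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static void *fence_thread(void *opaque)
{
    const char *path = opaque;      /* heartbeat=/path/to/file */
    const time_t deadline = 60;     /* deadline=60 */
    const unsigned recheck = 5;     /* recheck=5 */

    fence_arm(deadline);

    for (;;) {
        struct stat st;

        if (stat(path, &st) == 0 &&
            time(NULL) - st.st_mtime < deadline) {
            fence_bump(deadline);
        }
        sleep(recheck);
    }
    return NULL;
}

On the ENOENT question: as written, a vanished file just stops the
bumping, so the kernel kills the process once the current deadline runs
out; if you want to fence immediately instead, that branch could
raise(SIGKILL) itself.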
Dave

> F.
> 
> >>>>> IMHO the safer way is to kick the host off the network by reprogramming
> >>>>> switches; so even if the qemu is actually alive it can't get anywhere.
> >>>>> 
> >>>>> Dave
> >>>> 
> >>>> Naturally some off-host STONITH is preferable, but that's not always
> >>>> available. A self-fencing mechanism right at the heart of the emulator
> >>>> can do the job without external hardware dependencies.
> >>>> 
> >>>> Cheers,
> >>>> Felipe
> >>>> 
> >>>>>>>> Does something already exist for this purpose which could be used?
> >>>>>>>> Would a generic Qemu-fencing infrastructure be something of interest?
> >>>>>>> 
> >>>>>>> Dave
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK