On Aug 01 2013, Alex Bligh wrote:
> Paolo,
>
> --On 1 August 2013 15:51:11 +0200 Paolo Bonzini <pbonz...@redhat.com> wrote:
>
> >>> So actually there is another problem with this patch (both the
> >>> condvar and the event approach are equally buggy). If a timer
> >>> on clock X disables clock X, qemu_clock_enable will deadlock.
> >>
> >> Yes. I believe there will be a similar problem if a timer
> >> created or deleted an AioContext (clearly less likely!)
> >>
> >>> I suppose it's easier to ask each user of qemu_clock_enable to
> >>> provide its own synchronization, and drop this patch. The simplest
> >>> kind of synchronization is to delay some work to a bottom half in
> >>> the clock's AioContext. If you do that, you know that the timers
> >>> are not running.
> >>
> >> I'm not sure that's true. If two AioContexts run in different
> >> threads, would their BH's and timers not also run in those two
> >> different threads?
> >
> > Suppose a thread wants to do qemu_clock_enable(foo, false), and the
> > code after qemu_clock_enable assumes that no timers are running.
> > Then you should move that code to a bottom half in foo's AioContext.
>
> foo is a QEMUClock here.
>
> A QEMUClock may not have just one AioContext. It could have several
> each operated by a different thread.
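[For reference, a minimal sketch of the bottom-half pattern Paolo describes above, using the AioContext/BH API of that period. The disable_done() callback and run_after_disable() helper are hypothetical names for illustration, not code from the patch series:]

#include "block/aio.h"     /* AioContext, QEMUBH, aio_bh_new, qemu_bh_schedule */
#include "qemu/timer.h"    /* QEMUClock, qemu_clock_enable */

/* Bottom half scheduled into the clock's AioContext.  By the time it
 * runs, no timer callback of that context is still in flight, so the
 * work here may assume the timers are quiescent.  (A real caller would
 * also qemu_bh_delete() the BH once it is no longer needed.) */
static void disable_done(void *opaque)
{
    /* ... work that must not race with timer callbacks ... */
}

/* Hypothetical caller: disable the clock, then defer the follow-up
 * work to a bottom half instead of running it inline. */
static void run_after_disable(QEMUClock *clock, AioContext *ctx)
{
    QEMUBH *bh = aio_bh_new(ctx, disable_done, NULL);

    qemu_clock_enable(clock, false);   /* no new callbacks will start */
    qemu_bh_schedule(bh);              /* runs in ctx's event loop */
}

[As Alex points out, this only synchronizes with a single AioContext; a clock whose timers are spread over several contexts would need one such bottom half per context.]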
Oops, you're right. Checking the code for callers of qemu_clock_enable(), it seems there shouldn't be any deadlocks. So it should work if qemu_clock_enable() walks all of the timerlists and waits on each event. But we should document the expectations, since they differ from the current code.

Paolo
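[To make that concrete, a sketch of the walk being proposed, assuming each QEMUTimerList carries a QemuEvent that is set whenever its callbacks are not running. The timerlists list, the timers_done_ev field, and the qemu_clock_notify() call on the enable path are assumptions for illustration, not established API at the time of this thread:]

#include "qemu/queue.h"    /* QLIST_FOREACH */
#include "qemu/thread.h"   /* QemuEvent, qemu_event_wait */
#include "qemu/timer.h"    /* QEMUClock, QEMUTimerList */

void qemu_clock_enable(QEMUClock *clock, bool enabled)
{
    bool old = clock->enabled;

    clock->enabled = enabled;
    if (enabled && !old) {
        qemu_clock_notify(clock);      /* wake up the event loops */
    } else if (!enabled && old) {
        QEMUTimerList *timer_list;

        /* Wait until every timerlist of this clock has finished the
         * callback it may currently be running.  As discussed above,
         * this must never be called from such a callback (a timer on
         * clock X disabling clock X), or it deadlocks; that is the
         * expectation that needs documenting. */
        QLIST_FOREACH(timer_list, &clock->timerlists, list) {
            qemu_event_wait(&timer_list->timers_done_ev);
        }
    }
}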