On 24/06/2015 18:56, Alex Bennée wrote:
> This is where I get confused about the advantage of this over plain
> same-pid recursive locking. If you use recursive locking you don't need
> to add a bunch of state to remind you of when to release the lock. Then
> you'd just need:
> 
> static void prepare_mmio_access(MemoryRegion *mr)
> {
>     if (mr->global_locking) {
>         qemu_mutex_lock_iothread();
>     }
>     if (mr->flush_coalesced_mmio) {
>         qemu_mutex_lock_iothread();
>         qemu_flush_coalesced_mmio_buffer();
>         qemu_mutex_unlock_iothread();
>     }
> }
> 
> and a bunch of:
> 
> if (mr->global_locking)
>    qemu_mutex_unlock_iothread();
> 
> in the access functions. Although I suspect you could push the
> mr->global_locking up to the dispatch functions.
> 
> Am I missing something here?

The semantics of recursive locking with respect to condition variables
are not clear.  Either cond_wait releases all locks, and then the mutex
can be released when the code doesn't expect it to be, or cond_wait
doesn't release all locks and then you have deadlocks.

POSIX says to do the latter:

        It is advised that an application should not use a
        PTHREAD_MUTEX_RECURSIVE mutex with condition variables because
        the implicit unlock performed for a pthread_cond_timedwait() or
        pthread_cond_wait() may not actually release the mutex (if it
        had been locked multiple times). If this happens, no other
        thread can satisfy the condition of the predicate.

So recursive locking is okay, if you really cannot do without it, as
long as no condition variables are attached to the lock;
qemu_global_mutex, however, does have them.
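
To make that failure mode concrete, here is a minimal standalone sketch
(plain pthreads, made-up names, not QEMU code) of what the POSIX
rationale describes: the waiter holds the recursive mutex twice, so
pthread_cond_wait releases only one level and the signalling thread can
never acquire the lock:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int ready;

static void *signaller(void *arg)
{
    /* Blocks forever: the main thread still holds one level of the
     * recursive mutex even while it sleeps in pthread_cond_wait().  */
    pthread_mutex_lock(&mutex);
    ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mutex, &attr);

    pthread_mutex_lock(&mutex);
    pthread_mutex_lock(&mutex);          /* recursion count is now 2 */

    pthread_create(&t, NULL, signaller, NULL);

    while (!ready) {
        /* Releases only one level of the lock, so "signaller" can
         * never take the mutex and the wait never completes.  */
        pthread_cond_wait(&cond, &mutex);
    }

    pthread_mutex_unlock(&mutex);
    pthread_mutex_unlock(&mutex);
    pthread_join(t, NULL);
    printf("never reached\n");
    return 0;
}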

QEMU has so far tried to use the solution that Stevens outlines here:
https://books.google.it/books?id=kCTMFpEcIOwC&pg=PA434

        ... Leave the interfaces to func1 and func2 unchanged, and avoid
        a recursive mutex by providing a private version of func2,
        called func2_locked.  To call func2_locked, hold the mutex
        embedded in the data structure whose address we pass as the
        argument.

as a way to avoid recursive locking.  This is much better because a) it
is more efficient---taking locks can be expensive even if they're
uncontended, especially if your VM spans multiple NUMA nodes on the
host; b) it is always clear when a lock is taken and when it isn't.
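
As a sketch of that convention, reusing the func1/func2 names from
Stevens's example with a made-up struct foo:

#include <pthread.h>

struct foo {
    pthread_mutex_t lock;      /* plain, non-recursive mutex */
    int value;
};

/* Caller already holds f->lock.  */
static void func2_locked(struct foo *f)
{
    f->value++;
}

/* Public interface: takes the lock itself.  */
void func2(struct foo *f)
{
    pthread_mutex_lock(&f->lock);
    func2_locked(f);
    pthread_mutex_unlock(&f->lock);
}

/* func1 needs the lock for its own work, so it calls the _locked
 * variant instead of re-acquiring the non-recursive mutex.  */
void func1(struct foo *f)
{
    pthread_mutex_lock(&f->lock);
    /* ... other work under the lock ... */
    func2_locked(f);
    pthread_mutex_unlock(&f->lock);
}

Only the public entry points take the lock; the _locked variant says in
its name that the caller must already hold it, so it is always clear
who owns the mutex.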

Note that Stevens has another example of recursive locking right after
this one, involving callbacks, but it's ill-defined.  There's no reason
for the "timeout" function on page 437 to hold the mutex when it calls
"func".  It can unlock before and re-lock afterwards, like QEMU's own
timerlist_run_timers function.
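
The shape of that pattern is roughly the following (simplified, with
made-up types; not the actual qemu-timer.c code):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* Made-up types for illustration only.  */
struct timer {
    struct timer *next;
    int64_t expire_time;
    void (*cb)(void *opaque);
    void *opaque;
};

struct timer_list {
    pthread_mutex_t lock;
    struct timer *head;        /* sorted by expire_time */
};

static bool timer_expired(struct timer *t, int64_t now)
{
    return t->expire_time <= now;
}

void run_timers(struct timer_list *tl, int64_t now)
{
    pthread_mutex_lock(&tl->lock);
    while (tl->head && timer_expired(tl->head, now)) {
        struct timer *t = tl->head;
        tl->head = t->next;                /* detach while locked */

        pthread_mutex_unlock(&tl->lock);
        t->cb(t->opaque);                  /* run callback unlocked */
        pthread_mutex_lock(&tl->lock);
    }
    pthread_mutex_unlock(&tl->lock);
}

Because the callback runs with the lock dropped, it is free to take the
same lock itself (for example to re-arm the timer) without any need for
recursion.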

Paolo
