On Fri, Oct 28, 2016 at 11:13:19AM -0700, Tony Lindgren wrote:
> * Johan Hovold <jo...@kernel.org> [161028 02:45]:
> > On Thu, Oct 27, 2016 at 12:15:52PM -0700, Tony Lindgren wrote:
> > > * Johan Hovold <jo...@kernel.org> [161027 11:46]:
> > > > But then this looks like it could trigger an ABBA deadlock as musb->lock
> > > > is held while queue_on_resume() takes musb->list_lock, and
> > > > musb_run_pending() would take the same locks in the reverse order.
> > > 
> > > It seems we can avoid that by locking only list_add_tail() and list_del():
> > > 
> > > list_for_each_entry_safe(w, _w, &musb->resume_work, node) {
> > >   spin_lock_irqsave(&musb->list_lock, flags);
> > >   list_del(&w->node);
> > >   spin_unlock_irqrestore(&musb->list_lock, flags);
> > >   if (w->callback)
> > >           w->callback(musb, w->data);
> > >   devm_kfree(musb->controller, w);
> > > }
> > 
> > I think you still need to hold the lock while traversing the list (even
> > if you temporarily release it during the callback).
> 
> Hmm yeah we need to iterate through the list again to avoid missing new
> elements being added. I've updated the patch to use the common
> while (!list_empty(&musb->resume_work)) loop. Does that solve the
> concern you had or did you also have some other concern there?

Yeah, while that minimises the race window, it is still possible that the
timer callback checks pm_runtime_active() after the queue has been
processed but before the rpm status is updated, in which case work queued
at that point would not be run until the next resume.
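That is, an interleaving roughly like this (call paths assumed from the
discussion, not taken from the actual patch):

	runtime resume path                  timer callback
	-------------------                  --------------
	resume callback runs
	  musb_run_pending()  -> drains list
	                                     pm_runtime_active() -> false
	                                     queue_on_resume()   -> adds entry
	PM core marks device RPM_ACTIVE
	                                     (new entry sits on the list
	                                      until the next resume)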

How about using a work struct and synchronous get for the deferred case?
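Something along these lines is what I have in mind (only a rough sketch,
untested, and the musb field and function names below are made up):

	/*
	 * Defer the pm-sensitive part to process context and resume the
	 * controller synchronously there, instead of checking the rpm
	 * status from atomic context.
	 */
	static void musb_deferred_work(struct work_struct *work)
	{
		struct musb *musb = container_of(work, struct musb,
						 deferred_work);

		/* Resumes the controller if it is suspended. */
		pm_runtime_get_sync(musb->controller);

		musb->deferred_callback(musb, musb->deferred_data);

		pm_runtime_mark_last_busy(musb->controller);
		pm_runtime_put_autosuspend(musb->controller);
	}

	/* in the timer callback (atomic context): */
	schedule_work(&musb->deferred_work);

with the work struct set up using INIT_WORK() at probe time.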

Johan
