On Thu, 2018-10-25 at 17:34 +0200, Johannes Berg wrote:
> On Thu, 2018-10-25 at 15:05 +0000, Bart Van Assche wrote:
> 
> > @@ -2889,7 +2893,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
> >  	 * workqueues the deadlock happens when the rescuer stalls, blocking
> >  	 * forward progress.
> >  	 */
> > -	if (!from_cancel &&
> > +	if (!from_cancel && (pwq->wq->flags & __WQ_HAS_BEEN_USED) &&
> >  	    (pwq->wq->saved_max_active == 1 || pwq->wq->rescuer)) {
> >  		lock_acquire_exclusive(&pwq->wq->lockdep_map, 0, 0, NULL,
> >  				       _THIS_IP_);
> 
> This also doesn't seem right to me. You shouldn't really care whether or
> not the workqueue has been used at this point, lockdep also doesn't do
> this for locks.
> 
> Any dependency you cause at some point can - at a future time - be taken
> into account when checking dependency cycles. Removing one arbitrarily
> just because you haven't actually executed anything *yet* just removes
> knowledge from lockdep. In the general case, this isn't right. Just
> because you haven't executed anything here doesn't mean that it's
> *impossible* to have executed something, right?
Please have a look at the call trace in the description of this patch and
also at the direct I/O code. The lockdep complaint in the description of
this patch really is a false positive.

What I think needs further discussion is how to address this false
positive: either track whether or not a workqueue has been used, or follow
Tejun's proposal, which I became aware of after I posted this patch,
namely to introduce a new function for destroying a workqueue that skips
draining, e.g. destroy_workqueue_skip_drain()
(https://lkml.org/lkml/2018/10/24/2).
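To make the second option concrete, here is a rough sketch of what such a
function could look like. This is only my guess at the shape of Tejun's
proposal, not his actual patch; __destroy_workqueue() is a hypothetical
helper standing in for the teardown code that destroy_workqueue() would
share with the new function:

	/*
	 * Sketch only, not actual kernel code.  destroy_workqueue()
	 * today begins by calling drain_workqueue(), and the flush
	 * performed while draining is what records the lockdep
	 * dependency behind the false positive for a workqueue that
	 * never executed any work.
	 */
	void destroy_workqueue_skip_drain(struct workqueue_struct *wq)
	{
		/*
		 * The caller guarantees that no work item is pending
		 * or in flight on @wq, so the drain_workqueue() call
		 * is skipped and no flush-related lockdep dependency
		 * is recorded.  The remaining teardown (sanity checks,
		 * releasing the pwqs, stopping the rescuer) would live
		 * in a helper shared with destroy_workqueue().
		 */
		__destroy_workqueue(wq);	/* hypothetical shared helper */
	}

Thanks,

Bart.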