On Wed, 2018-04-11 at 07:56 -0700, Tejun Heo wrote:
> And looking at the change, it looks like the right thing we should
> have done is caching @lock on the print_blkg side and when switching
> locks make sure both locks are held.  IOW, do the following in
> blk_cleanup_queue()
> 
>       spin_lock_irq(lock);
>       if (q->queue_lock != &q->__queue_lock) {
>               spin_lock(&q->__queue_lock);
>               q->queue_lock = &q->__queue_lock;
>               spin_unlock(&q->__queue_lock);
>       }
>       spin_unlock_irq(lock);
> 
> Otherwise, there can be two lock holders thinking they have exclusive
> access to the request_queue.

I think that's a bad idea. A block driver is allowed to destroy the
spinlock it associated with the request queue as soon as blk_cleanup_queue()
has finished. If the block cgroup controller cached a pointer to the block
driver's spinlock, the cgroup code could end up trying to lock a spinlock
after it has been destroyed. I don't think we need that kind of race
condition.
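
To make the lifetime problem concrete, here is a minimal sketch of a
typical driver teardown path (the foo_device / foo_remove names are made
up for illustration, not taken from any real driver): the spinlock handed
to blk_init_queue() lives in driver-private memory that is freed right
after blk_cleanup_queue() returns, so any cached pointer to it would
dangle.

#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/*
 * Hypothetical driver-private structure; the spinlock passed to
 * blk_init_queue() is embedded in driver-owned memory.
 */
struct foo_device {
	spinlock_t lock;
	struct request_queue *queue;
};

static void foo_remove(struct foo_device *foo)
{
	/* After this returns the driver may tear down anything it owns. */
	blk_cleanup_queue(foo->queue);

	/*
	 * foo->lock goes away here. A pointer to it cached by the block
	 * cgroup code would now dangle, and taking that lock later would
	 * be a use-after-free.
	 */
	kfree(foo);
}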

Bart.
