Hello, Bart.

On Thu, Sep 24, 2015 at 10:35:41AM -0700, Bart Van Assche wrote:
> My interpretation of the percpu_ref_tryget_live() implementation in
> <linux/percpu-refcount.h> is that the tryget operation will only fail if the
> refcount is in atomic mode and additionally the __PERCPU_REF_DEAD flag has
> been set.

Yeah and percpu_ref_kill() does both.
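For reference, here is a simplified sketch of how the two sides pair up
(reconstructed from memory of the blk-mq code around this time, so treat
the exact function bodies as illustrative rather than as the code in the
tree):

	/* hot path: a request may enter only while the ref is still live */
	static int blk_mq_queue_enter(struct request_queue *q)
	{
		if (percpu_ref_tryget_live(&q->mq_usage_counter))
			return 0;
		/* frozen or dying; the real code waits or fails here */
		return -EBUSY;
	}

	/* freeze side: switch the ref to atomic mode *and* mark it dead */
	void blk_mq_freeze_queue_start(struct request_queue *q)
	{
		percpu_ref_kill(&q->mq_usage_counter);
	}

Once percpu_ref_kill() has run, every later percpu_ref_tryget_live()
fails, which is exactly the property the hot path relies on.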
> > Also, what do the barriers do in your patch?
>
> My intention was to guarantee that on architectures that do not provide the
> same ordering guarantees as x86 (e.g. PPC or ARM), the store and load
> operations on mq_freeze_depth and mq_usage_counter would not be reordered.
> However, it is probably safe to leave out the barrier I proposed to
> introduce in blk_mq_queue_enter(), since it is acceptable that there is some
> delay in communicating mq_freeze_depth updates from the CPU that modified
> that counter to the CPU that reads that counter.

Hmmm... please don't use barriers this way.  Use them only when there's a
clear requirement for interlocking a writer and reader pair.  There isn't
one here.  All they do is confuse people trying to read the code.

> > The only race condition that I can see there is if unfreeze and freeze
> > race each other and freeze tries to kill the ref which hasn't finished
> > reinit yet.  We prolly want to put mutexes around freeze/unfreeze so
> > that they're serialized if something like that can happen (it isn't a
> > hot path to begin with).
>
> My concern is that the following could happen if mq_freeze_depth is not
> checked in the hot path of blk_mq_queue_enter():
> * mq_usage_counter >= 1 before blk_mq_freeze_queue() is called.
> * blk_mq_freeze_queue() keeps waiting forever if new requests are queued
>   faster than these requests complete.

Again, that doesn't happen.
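For illustration, the kind of freeze/unfreeze serialization suggested in
the quoted paragraph above could look roughly like the following.  This is
a hypothetical sketch: the mq_freeze_lock mutex is made up here, and the
freeze depth is treated as a plain counter protected by that lock.

	void blk_mq_freeze_queue(struct request_queue *q)
	{
		mutex_lock(&q->mq_freeze_lock);
		if (q->mq_freeze_depth++ == 0)
			percpu_ref_kill(&q->mq_usage_counter);
		mutex_unlock(&q->mq_freeze_lock);

		wait_event(q->mq_freeze_wq,
			   percpu_ref_is_zero(&q->mq_usage_counter));
	}

	void blk_mq_unfreeze_queue(struct request_queue *q)
	{
		mutex_lock(&q->mq_freeze_lock);
		if (--q->mq_freeze_depth == 0) {
			percpu_ref_reinit(&q->mq_usage_counter);
			wake_up_all(&q->mq_freeze_wq);
		}
		mutex_unlock(&q->mq_freeze_lock);
	}

With the depth update and the kill/reinit call both done under the mutex,
a freeze can no longer see a ref whose reinit is still in flight, which is
the race described in the quoted paragraph.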
Thanks.

-- 
tejun
