On Mon, 2017-09-25 at 10:26 +0800, Ming Lei wrote:
> On Fri, Sep 22, 2017 at 03:14:04PM -0700, Bart Van Assche wrote:
> > +int blk_queue_enter(struct request_queue *q, bool nowait, bool preempt)
> >  {
> >     while (true) {
> >             int ret;
> >  
> > -           if (percpu_ref_tryget_live(&q->q_usage_counter))
> > -                   return 0;
> > +           if (percpu_ref_tryget_live(&q->q_usage_counter)) {
> > +                   /*
> > +                    * Ensure read order of q_usage_counter and the
> > +                    * PREEMPT_ONLY queue flag.
> > +                    */
> > +                   smp_rmb();
> > +                   if (preempt || !blk_queue_preempt_only(q))
> > +                           return 0;
> > +                   else
> > +                           percpu_ref_put(&q->q_usage_counter);
> > +           }
> 
> Now you introduce an smp_rmb() and a test of the preempt flag on
> blk-mq's fast path, both of which should have been avoided, so I
> think this approach is worse than my patchset.

So that means you have not noticed that it is safe to leave out that
smp_rmb() call, because blk-mq queue freezing and unfreezing each wait for a
grace period and hence wait until all CPUs have executed a full memory
barrier?
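
For reference, here is a rough (untested) sketch of the fast path without
the explicit barrier. The preempt argument and blk_queue_preempt_only()
are taken from the patch quoted above; the slow path is the usual
wait-for-unfreeze loop from the existing blk_queue_enter():

int blk_queue_enter(struct request_queue *q, bool nowait, bool preempt)
{
	while (true) {
		int ret;

		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
			/*
			 * No smp_rmb() here: freezing and unfreezing
			 * q_usage_counter involves an RCU grace period,
			 * so by the time tryget_live() can succeed again
			 * every CPU has executed a full memory barrier
			 * and the PREEMPT_ONLY flag update is visible.
			 */
			if (preempt || !blk_queue_preempt_only(q))
				return 0;
			percpu_ref_put(&q->q_usage_counter);
		}

		if (nowait)
			return -EBUSY;

		ret = wait_event_interruptible(q->mq_freeze_wq,
				!atomic_read(&q->mq_freeze_depth) ||
				blk_queue_dying(q));
		if (blk_queue_dying(q))
			return -ENODEV;
		if (ret)
			return ret;
	}
}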

Bart.
