On Thu, Jun 19, 2014 at 11:01:26AM +0800, Lai Jiangshan wrote:
> > +   /*
> > +    * Restore per-cpu operation.  smp_store_release() is paired with
> > +    * smp_load_acquire() in __pcpu_ref_alive() and guarantees that the
> 
> s/smp_load_acquire()/smp_read_barrier_depends()/

Will update.

> s/smp_store_release()/smp_mb()/  if you accept my next comment.
>
> > +    * zeroing is visible to all percpu accesses which can see the
> > +    * following PCPU_REF_DEAD clearing.
> > +    */
> > +   for_each_possible_cpu(cpu)
> > +           *per_cpu_ptr(pcpu_count, cpu) = 0;
> > +
> > +   smp_store_release(&ref->pcpu_count_ptr,
> > +                     ref->pcpu_count_ptr & ~PCPU_REF_DEAD);
> 
> I think it would be better if smp_mb() is used.

smp_wmb() would be better here.  We don't need the read-side ordering
that a full smp_mb() would add; only the writer-side stores have to be
ordered.
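
To illustrate (just a sketch reusing the names from the hunk above, not
the actual patch), the write-side-only version would be

	for_each_possible_cpu(cpu)
		*per_cpu_ptr(pcpu_count, cpu) = 0;

	/* the zeroing must be visible before the DEAD bit is cleared */
	smp_wmb();

	ref->pcpu_count_ptr &= ~PCPU_REF_DEAD;

which orders the stores on the writer side and leaves the dependent-load
ordering to the read_barrier_depends() on the reader side.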

> it is documented that smp_read_barrier_depends() and smp_mb() are paired.
> Not smp_read_barrier_depends() and smp_store_release().

I don't know.  I thought about doing that, but the RCU accessors pair
store_release with read_barrier_depends, so I don't think that
particular pairing is problematic, and store_release documents more
clearly what's being ordered.
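
FWIW, the pairing I have in mind is the usual RCU-style publish.  The
reader side would look something like the following - a simplified
sketch of __pcpu_ref_alive(), with the types guessed from the hunk
above, not the exact code:

	static inline bool __pcpu_ref_alive(struct percpu_ref *ref,
					    unsigned __percpu **pcpu_countp)
	{
		unsigned long pcpu_ptr = ACCESS_ONCE(ref->pcpu_count_ptr);

		/* paired with the smp_store_release() in the revival path above */
		smp_read_barrier_depends();

		if (unlikely(pcpu_ptr & PCPU_REF_DEAD))
			return false;

		*pcpu_countp = (unsigned __percpu *)pcpu_ptr;
		return true;
	}

The release publishes the zeroed counters together with the cleared
DEAD bit, and the dependent load through pcpu_ptr is what the
read_barrier_depends() orders against it.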

Thanks.

-- 
tejun
