Re: [PATCH v2 6/6] percpu-refcount: implement percpu_ref_reinit() and percpu_ref_is_zero()
On Thu, Jun 19, 2014 at 10:03:05AM -0700, Paul E. McKenney wrote:
> documentation: Add acquire/release barriers to pairing rules
> 
> It is possible to pair acquire and release barriers with other barriers,
> so this commit adds them to the list in the SMP barrier pairing section.
> 
> Reported-by: Lai Jiangshan <la...@cn.fujitsu.com>
> Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> 
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index a6ca533a73fc..2a7c3c4fb53f 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -757,10 +757,12 @@ SMP BARRIER PAIRING
>  
>  When dealing with CPU-CPU interactions, certain types of memory barrier should
>  always be paired.  A lack of appropriate pairing is almost certainly an error.
>  
> -A write barrier should always be paired with a data dependency barrier or read
> -barrier, though a general barrier would also be viable.  Similarly a read
> -barrier or a data dependency barrier should always be paired with at least an
> -write barrier, though, again, a general barrier is viable:
> +A write barrier should always be paired with a data dependency barrier,
> +acquire barrier, release barrier, or read barrier, though a general
> +barrier would also be viable.  Similarly a read barrier or a data
> +dependency barrier should always be paired with at least a write barrier,
> +an acquire barrier, or a release barrier, though, again, a general
> +barrier is viable:

FWIW,

Reviewed-by: Tejun Heo <t...@kernel.org>

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
Re: [PATCH v2 6/6] percpu-refcount: implement percpu_ref_reinit() and percpu_ref_is_zero()
On Thu, Jun 19, 2014 at 09:55:02AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 19, 2014 at 09:31:04AM -0400, Tejun Heo wrote:
> > On Thu, Jun 19, 2014 at 11:01:26AM +0800, Lai Jiangshan wrote:
> > > > +	/*
> > > > +	 * Restore per-cpu operation.  smp_store_release() is paired with
> > > > +	 * smp_load_acquire() in __pcpu_ref_alive() and guarantees that the
> > > 
> > > s/smp_load_acquire()/smp_read_barrier_depends()/
> > 
> > Will update.
> > 
> > > s/smp_store_release()/smp_mb()/ if you accept my next comment.
> > > 
> > > > +	 * zeroing is visible to all percpu accesses which can see the
> > > > +	 * following PCPU_REF_DEAD clearing.
> > > > +	 */
> > > > +	for_each_possible_cpu(cpu)
> > > > +		*per_cpu_ptr(pcpu_count, cpu) = 0;
> > > > +
> > > > +	smp_store_release(&ref->pcpu_count_ptr,
> > > > +			  ref->pcpu_count_ptr & ~PCPU_REF_DEAD);
> > > 
> > > I think it would be better if smp_mb() is used.
> > 
> > smp_wmb() would be better here.  We don't need the reader side.
> > 
> > > it is documented that smp_read_barrier_depends() and smp_mb() are paired.
> > > Not smp_read_barrier_depends() and smp_store_release().
> 
> Well, sounds like the documentation needs an update, then.  ;-)
> 
> For example, current rcu_assign_pointer() is a wrapper around
> smp_store_release().
> 
> > I don't know.  I thought about doing that but the RCU accessors are
> > pairing store_release with read_barrier_depends, so I don't think the
> > particular pairing is problematic and store_release is better at
> > documenting what's being barriered.
> 
> Which Tejun noted as well.

And here is a patch to update the documentation.  Thoughts?

							Thanx, Paul

documentation: Add acquire/release barriers to pairing rules

It is possible to pair acquire and release barriers with other barriers,
so this commit adds them to the list in the SMP barrier pairing section.

Reported-by: Lai Jiangshan <la...@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index a6ca533a73fc..2a7c3c4fb53f 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -757,10 +757,12 @@ SMP BARRIER PAIRING
 
 When dealing with CPU-CPU interactions, certain types of memory barrier should
 always be paired.  A lack of appropriate pairing is almost certainly an error.
 
-A write barrier should always be paired with a data dependency barrier or read
-barrier, though a general barrier would also be viable.  Similarly a read
-barrier or a data dependency barrier should always be paired with at least an
-write barrier, though, again, a general barrier is viable:
+A write barrier should always be paired with a data dependency barrier,
+acquire barrier, release barrier, or read barrier, though a general
+barrier would also be viable.  Similarly a read barrier or a data
+dependency barrier should always be paired with at least a write barrier,
+an acquire barrier, or a release barrier, though, again, a general
+barrier is viable:
 
 	CPU 1		      CPU 2
 	===============	      ===============
Re: [PATCH v2 6/6] percpu-refcount: implement percpu_ref_reinit() and percpu_ref_is_zero()
On Thu, Jun 19, 2014 at 09:31:04AM -0400, Tejun Heo wrote:
> On Thu, Jun 19, 2014 at 11:01:26AM +0800, Lai Jiangshan wrote:
> > > +	/*
> > > +	 * Restore per-cpu operation.  smp_store_release() is paired with
> > > +	 * smp_load_acquire() in __pcpu_ref_alive() and guarantees that the
> > 
> > s/smp_load_acquire()/smp_read_barrier_depends()/
> 
> Will update.
> 
> > s/smp_store_release()/smp_mb()/ if you accept my next comment.
> > 
> > > +	 * zeroing is visible to all percpu accesses which can see the
> > > +	 * following PCPU_REF_DEAD clearing.
> > > +	 */
> > > +	for_each_possible_cpu(cpu)
> > > +		*per_cpu_ptr(pcpu_count, cpu) = 0;
> > > +
> > > +	smp_store_release(&ref->pcpu_count_ptr,
> > > +			  ref->pcpu_count_ptr & ~PCPU_REF_DEAD);
> > 
> > I think it would be better if smp_mb() is used.
> 
> smp_wmb() would be better here.  We don't need the reader side.
> 
> > it is documented that smp_read_barrier_depends() and smp_mb() are paired.
> > Not smp_read_barrier_depends() and smp_store_release().

Well, sounds like the documentation needs an update, then.  ;-)

For example, current rcu_assign_pointer() is a wrapper around
smp_store_release().

> I don't know.  I thought about doing that but the RCU accessors are
> pairing store_release with read_barrier_depends, so I don't think the
> particular pairing is problematic and store_release is better at
> documenting what's being barriered.

Which Tejun noted as well.

							Thanx, Paul
Re: [PATCH v2 6/6] percpu-refcount: implement percpu_ref_reinit() and percpu_ref_is_zero()
On Thu, Jun 19, 2014 at 11:01:26AM +0800, Lai Jiangshan wrote:
> > +	/*
> > +	 * Restore per-cpu operation.  smp_store_release() is paired with
> > +	 * smp_load_acquire() in __pcpu_ref_alive() and guarantees that the
> 
> s/smp_load_acquire()/smp_read_barrier_depends()/

Will update.

> s/smp_store_release()/smp_mb()/ if you accept my next comment.
> 
> > +	 * zeroing is visible to all percpu accesses which can see the
> > +	 * following PCPU_REF_DEAD clearing.
> > +	 */
> > +	for_each_possible_cpu(cpu)
> > +		*per_cpu_ptr(pcpu_count, cpu) = 0;
> > +
> > +	smp_store_release(&ref->pcpu_count_ptr,
> > +			  ref->pcpu_count_ptr & ~PCPU_REF_DEAD);
> 
> I think it would be better if smp_mb() is used.

smp_wmb() would be better here.  We don't need the reader side.

> it is documented that smp_read_barrier_depends() and smp_mb() are paired.
> Not smp_read_barrier_depends() and smp_store_release().

I don't know.  I thought about doing that but the RCU accessors are
pairing store_release with read_barrier_depends, so I don't think the
particular pairing is problematic and store_release is better at
documenting what's being barriered.

Thanks.

-- 
tejun
Re: [PATCH v2 6/6] percpu-refcount: implement percpu_ref_reinit() and percpu_ref_is_zero()
On 06/19/2014 10:20 AM, Tejun Heo wrote:
> Now that explicit invocation of percpu_ref_exit() is necessary to free
> the percpu counter, we can implement percpu_ref_reinit() which
> reinitializes a released percpu_ref.  This can be used to implement a
> scalable gating switch which can be drained and then re-opened without
> worrying about memory allocation failures.
> 
> percpu_ref_is_zero() is added to be used in a sanity check in
> percpu_ref_exit().  As this function will be useful for other purposes
> too, make it a public interface.
> 
> v2: Use smp_read_barrier_depends() instead of smp_load_acquire().  We
>     only need data dep barrier and smp_load_acquire() is stronger and
>     heavier on some archs.  Spotted by Lai Jiangshan.
> 
> Signed-off-by: Tejun Heo <t...@kernel.org>
> Cc: Kent Overstreet <k...@daterainc.com>
> Cc: Christoph Lameter <c...@linux-foundation.org>
> Cc: Lai Jiangshan <la...@cn.fujitsu.com>
> ---
>  include/linux/percpu-refcount.h |   19 +++++++++++++++
>  lib/percpu-refcount.c           |   35 +++++++++++++++++++++++++++
>  2 files changed, 54 insertions(+)
> 
> --- a/include/linux/percpu-refcount.h
> +++ b/include/linux/percpu-refcount.h
> @@ -67,6 +67,7 @@ struct percpu_ref {
>  
>  int __must_check percpu_ref_init(struct percpu_ref *ref,
>  				 percpu_ref_func_t *release);
> +void percpu_ref_reinit(struct percpu_ref *ref);
>  void percpu_ref_exit(struct percpu_ref *ref);
>  void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
>  				 percpu_ref_func_t *confirm_kill);
> @@ -99,6 +100,9 @@ static inline bool __pcpu_ref_alive(stru
>  {
>  	unsigned long pcpu_ptr = ACCESS_ONCE(ref->pcpu_count_ptr);
>  
> +	/* paired with smp_store_release() in percpu_ref_reinit() */
> +	smp_read_barrier_depends();
> +
>  	if (unlikely(pcpu_ptr & PCPU_REF_DEAD))
>  		return false;
>  
> @@ -206,4 +210,19 @@ static inline void percpu_ref_put(struct
>  	rcu_read_unlock_sched();
>  }
>  
> +/**
> + * percpu_ref_is_zero - test whether a percpu refcount reached zero
> + * @ref: percpu_ref to test
> + *
> + * Returns %true if @ref reached zero.
> + */
> +static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
> +{
> +	unsigned __percpu *pcpu_count;
> +
> +	if (__pcpu_ref_alive(ref, &pcpu_count))
> +		return false;
> +	return !atomic_read(&ref->count);
> +}
> +
>  #endif
> --- a/lib/percpu-refcount.c
> +++ b/lib/percpu-refcount.c
> @@ -61,6 +61,41 @@ int percpu_ref_init(struct percpu_ref *r
>  EXPORT_SYMBOL_GPL(percpu_ref_init);
>  
>  /**
> + * percpu_ref_reinit - re-initialize a percpu refcount
> + * @ref: percpu_ref to re-initialize
> + *
> + * Re-initialize @ref so that it's in the same state as when it finished
> + * percpu_ref_init().  @ref must have been initialized successfully, killed
> + * and reached 0 but not exited.
> + *
> + * Note that percpu_ref_tryget[_live]() are safe to perform on @ref while
> + * this function is in progress.
> + */
> +void percpu_ref_reinit(struct percpu_ref *ref)
> +{
> +	unsigned __percpu *pcpu_count = pcpu_count_ptr(ref);
> +	int cpu;
> +
> +	BUG_ON(!pcpu_count);
> +	WARN_ON(!percpu_ref_is_zero(ref));
> +
> +	atomic_set(&ref->count, 1 + PCPU_COUNT_BIAS);
> +
> +	/*
> +	 * Restore per-cpu operation.  smp_store_release() is paired with
> +	 * smp_load_acquire() in __pcpu_ref_alive() and guarantees that the

s/smp_load_acquire()/smp_read_barrier_depends()/

s/smp_store_release()/smp_mb()/ if you accept my next comment.

> +	 * zeroing is visible to all percpu accesses which can see the
> +	 * following PCPU_REF_DEAD clearing.
> +	 */
> +	for_each_possible_cpu(cpu)
> +		*per_cpu_ptr(pcpu_count, cpu) = 0;
> +
> +	smp_store_release(&ref->pcpu_count_ptr,
> +			  ref->pcpu_count_ptr & ~PCPU_REF_DEAD);

I think it would be better if smp_mb() is used.

it is documented that smp_read_barrier_depends() and smp_mb() are paired.
Not smp_read_barrier_depends() and smp_store_release().

> +}
> +EXPORT_SYMBOL_GPL(percpu_ref_reinit);
> +
> +/**
>   * percpu_ref_exit - undo percpu_ref_init()
>   * @ref: percpu_ref to exit
>   *