On 01/23/2014 12:52 PM, Linus Torvalds wrote:
> On Thu, Jan 23, 2014 at 9:47 AM, Waiman Long wrote:
> >
> > Thanks for the info. I am less familiar with those kinds of issues on
> > other architectures. I will add a smp_mb__after_atomic_dec() & send
> > out a new patch.
>
> Since it's the unlock path, you need to use the "mb__*before*"
> versions, since [...]
On 01/23/2014 12:15 PM, Linus Torvalds wrote:
> On Thu, Jan 23, 2014 at 9:12 AM, Waiman Long wrote:
> >
> > I thought that all atomic RMW instructions are memory barriers.
>
> On x86 they are. Not necessarily elsewhere.
>
> > If they are not, what kind of barrier should be added?
>
> smp_mb__before_atomic_xyz() and smp_mb__after_atomic_xyz() will do it,
> and [...]
On 01/23/2014 05:07 AM, Peter Zijlstra wrote:
> On Wed, Jan 22, 2014 at 04:33:55PM -0500, Waiman Long wrote:
> > +/**
> > + * queue_read_unlock - release read lock of a queue rwlock
> > + * @lock : Pointer to queue rwlock structure
> > + */
> > +static inline void queue_read_unlock(struct qrwlock *lock)
> > +{
> > +	/*
> > +	 * Atomically decrement the reader count [...]
This patch introduces a new read/write lock implementation that puts
waiting readers and writers into a queue instead of actively contending
for the lock like the current read/write lock implementation. This will
improve performance in highly contended situations by reducing
cache-line bouncing.