On Fri, Apr 01, 2016 at 05:47:43PM +0100, Will Deacon wrote:
> > +#define smp_cond_load_acquire(ptr, cond_expr) ({ \
> > + typeof(ptr) __PTR = (ptr); \
> > + typeof(*ptr) VAL; \
>
> It's a bit grim having a
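The macro is cut off in this snippet; a minimal sketch of how such a wait-loop primitive is typically completed (this one simply uses smp_load_acquire() inside the loop; the actual patch may instead use a relaxed load followed by an acquire barrier after the loop) looks roughly like:

#define smp_cond_load_acquire(ptr, cond_expr) ({	\
	typeof(ptr) __PTR = (ptr);			\
	typeof(*ptr) VAL;				\
	for (;;) {					\
		VAL = smp_load_acquire(__PTR);		\
		if (cond_expr)				\
			break;				\
		cpu_relax();				\
	}						\
	VAL;						\
})

Here cond_expr is evaluated with the freshly loaded value available as VAL.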
On Fri, Apr 01, 2016 at 01:43:03PM +0200, Peter Zijlstra wrote:
> > Ah, yes, I forgot about that. Lemme go find that discussion and see
> > what I can do there.
>
> Completely untested..
>
> ---
> include/linux/compiler.h | 20 ++--
> kernel/locking/qspinlock.c | 12
> Ah, yes, I forgot about that. Lemme go find that discussion and see
> what I can do there.
Completely untested..
---
include/linux/compiler.h | 20 ++--
kernel/locking/qspinlock.c | 12 ++--
kernel/sched/core.c | 9 +
kernel/sched/sched.h | 2 +-
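The diffstat suggests the patch converts the existing smp_cond_acquire() call sites in the qspinlock and scheduler code over to the new load-returning form; the hunks are not shown here, but the shape of such a conversion (illustrative example using the p->on_cpu wait in the scheduler) is roughly:

	/* before: the condition has to re-read the flag itself */
	smp_cond_acquire(!p->on_cpu);

	/* after: the primitive performs the load and exposes it as VAL */
	smp_cond_load_acquire(&p->on_cpu, !VAL);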
On Fri, Apr 01, 2016 at 11:41:19AM +0100, Will Deacon wrote:
> On Fri, Apr 01, 2016 at 12:31:43PM +0200, Peter Zijlstra wrote:
> > On Thu, Mar 31, 2016 at 06:12:38PM -0400, Waiman Long wrote:
> > > >>However, if we allow a limited number of readers to spin on the
> > > >>lock simultaneously, we
On Fri, Apr 01, 2016 at 12:31:43PM +0200, Peter Zijlstra wrote:
> On Thu, Mar 31, 2016 at 06:12:38PM -0400, Waiman Long wrote:
> > >>However, if we allow a limited number of readers to spin on the
> > >>lock simultaneously, we can eliminate some of the reader-to-reader
> > >>latencies at the
On Thu, Mar 31, 2016 at 06:12:38PM -0400, Waiman Long wrote:
> >>However, if we allow a limited number of readers to spin on the
> >>lock simultaneously, we can eliminate some of the reader-to-reader
> >>latencies at the expense of a bit more cacheline contention and
> >>probably more power
On Thu, Mar 31, 2016 at 06:12:38PM -0400, Waiman Long wrote:
> On 03/29/2016 04:20 PM, Peter Zijlstra wrote:
> >>cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;
> >>+ while ((cnts & _QW_WMASK) == _QW_LOCKED) {
> >>+ if (locked && ((cnts >> _QR_SHIFT) <
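The quoted hunk is mangled by the archive and cut off mid-line; the idea behind it can be sketched standalone with C11 atomics (illustrative only: MAX_SPINNING_READERS, the function name, and the fallback handling are assumptions rather than the actual patch; the constants mirror the generic qrwlock layout):

#include <stdatomic.h>

#define _QW_LOCKED		0x0ffU		/* a writer holds the lock */
#define _QW_WMASK		0x1ffU		/* writer waiting or holding */
#define _QR_SHIFT		9		/* reader count starts at this bit */
#define _QR_BIAS		(1U << _QR_SHIFT)
#define MAX_SPINNING_READERS	4U		/* assumed bound for the sketch */

struct qrwlock { atomic_uint cnts; };

/*
 * Reader slow-path sketch: after adding its reader bias, a reader keeps
 * spinning directly on ->cnts while a writer holds the lock, but only while
 * the reader count stays below a small bound; otherwise it reports failure
 * so the caller can drop the bias and fall back to the queued path (not
 * shown here).
 */
static int read_spin_or_queue(struct qrwlock *lock)
{
	/* fetch_add returns the value as it was before our bias was added */
	unsigned int cnts = atomic_fetch_add_explicit(&lock->cnts, _QR_BIAS,
						      memory_order_acquire);

	while ((cnts & _QW_WMASK) == _QW_LOCKED) {
		if ((cnts >> _QR_SHIFT) >= MAX_SPINNING_READERS)
			return 0;	/* too many spinners: caller should queue */
		/* cpu_relax()/pause would go here in real code */
		cnts = atomic_load_explicit(&lock->cnts, memory_order_acquire);
	}
	return 1;			/* writer gone: read lock held */
}

The sketch only captures the spin-versus-queue decision; everything else about the qrwlock is omitted.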
On 03/29/2016 04:20 PM, Peter Zijlstra wrote:
On Sat, Mar 19, 2016 at 11:21:19PM -0400, Waiman Long wrote:
In qrwlock, the reader that is spinning on the lock will need to notify
the next reader in the queue when the lock is free. That introduces a
reader-to-reader latency that is not present in
On Sat, Mar 19, 2016 at 11:21:19PM -0400, Waiman Long wrote:
> In qrwlock, the reader that is spinning on the lock will need to notify
> the next reader in the queue when the lock is free. That introduces a
> reader-to-reader latency that is not present in the original rwlock.
How did you find
On 03/20/2016 06:43 AM, Peter Zijlstra wrote:
We still have that starvation case in mutex; I would think that is far
more important to fix.
Peter, I am sorry that I let the mutex patch languish for a while. I
will work on that next.
Cheers,
Longman
We still have that starvation case in mutex; I would think that is far
more important to fix.
In qrwlock, the reader that is spinning on the lock will need to notify
the next reader in the queue when the lock is free. That introduces a
reader-to-reader latency that is not present in the original rwlock.
That is the price for reducing lock cacheline contention. It also
reduces the