On Tue, Jun 07, 2016 at 08:45:53PM +0800, Boqun Feng wrote:
> On Tue, Jun 07, 2016 at 02:00:16PM +0200, Peter Zijlstra wrote:
> > On Tue, Jun 07, 2016 at 07:43:15PM +0800, Boqun Feng wrote:
> > > On Mon, Jun 06, 2016 at 06:08:36PM +0200, Peter Zijlstra wrote:
> > > > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
On Tue, Jun 07, 2016 at 02:00:16PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 07, 2016 at 07:43:15PM +0800, Boqun Feng wrote:
> > On Mon, Jun 06, 2016 at 06:08:36PM +0200, Peter Zijlstra wrote:
> > > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > > index ce2f75e32ae1..e1c29d352e0e 100644
On Tue, Jun 07, 2016 at 07:43:15PM +0800, Boqun Feng wrote:
> On Mon, Jun 06, 2016 at 06:08:36PM +0200, Peter Zijlstra wrote:
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index ce2f75e32ae1..e1c29d352e0e 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
On Mon, Jun 06, 2016 at 06:08:36PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 06:57:00PM +0100, Will Deacon wrote:
> > > This 'replaces' commit:
> > >
> > > 54cf809b9512 ("locking,qspinlock: Fix spin_is_locked() and
> > > spin_unlock_wait()")
> > >
> > > and seems to still work with the test case from that thread while
> > > getting rid of the extra barriers.
On Thu, Jun 02, 2016 at 06:57:00PM +0100, Will Deacon wrote:
> > This 'replaces' commit:
> >
> > 54cf809b9512 ("locking,qspinlock: Fix spin_is_locked() and
> > spin_unlock_wait()")
> >
> > and seems to still work with the test case from that thread while
> > getting rid of the extra barriers.
On Fri, Jun 03, 2016 at 06:35:37PM +0100, Will Deacon wrote:
> On Fri, Jun 03, 2016 at 03:42:49PM +0200, Peter Zijlstra wrote:
> > On Fri, Jun 03, 2016 at 01:47:34PM +0100, Will Deacon wrote:
> > > Even on x86, I think you need a fence here:
> > >
> > > X86 lock
> > > {
> > > }
> > > P0                | P1;
On Fri, Jun 03, 2016 at 03:42:49PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2016 at 01:47:34PM +0100, Will Deacon wrote:
> > Even on x86, I think you need a fence here:
> >
> > X86 lock
> > {
> > }
> > P0                | P1;
> > MOV EAX,$1        | MOV EAX,$1;
> > LOCK XCHG [x],EAX | LOCK XCHG [y],EAX ;
On Fri, Jun 03, 2016 at 01:47:34PM +0100, Will Deacon wrote:
> > Now, the normal atomic_foo_acquire() stuff uses smp_mb() as per
> > smp_mb__after_atomic(); it's just ARM64 and PPC that go all 'funny' and
> > need this extra barrier. Blergh. So lets shelf this issue for a bit.
>
> Hmm... I certainly
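For reference, the generic mechanism Peter refers to builds the acquire
variant of an atomic from the relaxed op plus a full barrier; a sketch
paraphrased from include/linux/atomic.h of that era (check the tree for
the authoritative definition):

	#define __atomic_op_acquire(op, args...)			\
	({								\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);	\
		smp_mb__after_atomic();	/* smp_mb() on most arches */	\
		__ret;							\
	})

Architectures such as ARM64 provide native acquire instructions instead,
which is why they do not get the full smp_mb() for free here.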
On Fri, Jun 03, 2016 at 01:47:34PM +0100, Will Deacon wrote:
> Even on x86, I think you need a fence here:
>
> X86 lock
> {
> }
> P0                | P1;
> MOV EAX,$1        | MOV EAX,$1;
> LOCK XCHG [x],EAX | LOCK XCHG [y],EAX ;
> MOV EBX,[y]       | MOV EBX,[x]
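The preview cuts the test short; a store-buffering test of this shape
conventionally ends with the following condition (an assumed completion,
not recovered from the archive):

	exists
	(0:EBX=0 /\ 1:EBX=0)

With both stores performed via fully ordered LOCK XCHG this outcome is
forbidden on x86; the question in the thread is whether the proposed
spin_unlock_wait() changes preserve an equivalent guarantee.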
Hi Peter,
On Thu, Jun 02, 2016 at 11:51:19PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 06:57:00PM +0100, Will Deacon wrote:
> > > +++ b/include/asm-generic/qspinlock.h
> > > @@ -28,30 +28,13 @@
> > > */
> > > static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
On Thu, Jun 02, 2016 at 06:57:00PM +0100, Will Deacon wrote:
> > +++ b/include/asm-generic/qspinlock.h
> > @@ -28,30 +28,13 @@
> > */
> > static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
> > {
> > +	/*
> > +	 * See queued_spin_unlock_wait().
> >  	 *
> > +	 * An
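Pieced together, the hunk appears to converge on a body like the
following (a reconstruction; the comment is truncated in the preview and
completed here on the assumption that any nonzero value, pending or tail
bits included, must count as locked):

	static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
	{
		/*
		 * See queued_spin_unlock_wait().
		 *
		 * Any !0 state means it is locked: a contender may have
		 * set the pending bit or tail before _Q_LOCKED_VAL becomes
		 * visible.
		 */
		return atomic_read(&lock->val);
	}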
On Thu, Jun 02, 2016 at 06:34:25PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 04:44:24PM +0200, Peter Zijlstra wrote:
> > On Thu, Jun 02, 2016 at 10:24:40PM +0800, Boqun Feng wrote:
> > > On Thu, Jun 02, 2016 at 01:52:02PM +0200, Peter Zijlstra wrote:
> > > About spin_unlock_wait() on ppc, I actually have a fix pending review:
On Thu, Jun 02, 2016 at 04:44:24PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 10:24:40PM +0800, Boqun Feng wrote:
> > On Thu, Jun 02, 2016 at 01:52:02PM +0200, Peter Zijlstra wrote:
> > About spin_unlock_wait() on ppc, I actually have a fix pending review:
> >
> > http://lkml.kernel.org/r/1461130033-70898-1-git-send-email-boqun.f...@gmail.com
On Thu, Jun 02, 2016 at 11:11:07PM +0800, Boqun Feng wrote:
> On Thu, Jun 02, 2016 at 04:44:24PM +0200, Peter Zijlstra wrote:
> > Let me go ponder that some :/
> >
>
> An initial thought of the fix is making queued_spin_unlock_wait() an
> atomic-nop too:
>
> static inline void queued_spin_unlock_wait(struct qspinlock *lock)
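The preview truncates the sketch. One way to spell an "atomic-nop" wait,
reconstructing the idea rather than Boqun's actual code, is to read
lock->val with a value-preserving cmpxchg() so the read participates in
the lock word's RmW ordering:

	static inline void queued_spin_unlock_wait(struct qspinlock *lock)
	{
		/*
		 * cmpxchg(&val, 0, 0) is an atomic-nop: it loads lock->val
		 * with full RmW ordering and only ever stores 0 over 0.
		 */
		while (atomic_cmpxchg(&lock->val, 0, 0) & _Q_LOCKED_MASK)
			cpu_relax();
	}

This trades the cheap read-only spin for an RmW on every iteration,
which is exactly the cost/semantics trade-off the thread is weighing.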
On Thu, Jun 02, 2016 at 11:11:07PM +0800, Boqun Feng wrote:
[snip]
>
> OK, I will resend a new patch making spin_unlock_wait() align the
> semantics in your series.
>
I realize that if my patch goes first then it's safer and more convenient
to keep the two smp_mb()s in ppc arch_spin_unlock_wait().
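The shape being kept, with the two full barriers bracketing the wait
loop (a simplified sketch of the pending powerpc fix; the real patch
also samples the lock word with lwarx/stwcx. and yields to the
hypervisor on shared processors):

	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		smp_mb();	/* order prior accesses before sampling the lock */

		while (lock->slock)
			cpu_relax();

		smp_mb();	/* order the unlocked observation before later accesses */
	}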
On Thu, Jun 02, 2016 at 04:44:24PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 10:24:40PM +0800, Boqun Feng wrote:
> > On Thu, Jun 02, 2016 at 01:52:02PM +0200, Peter Zijlstra wrote:
> > About spin_unlock_wait() on ppc, I actually have a fix pending review:
> >
> > http://lkml.kernel.org/r/1461130033-70898-1-git-send-email-boqun.f...@gmail.com
On Thu, Jun 02, 2016 at 10:24:40PM +0800, Boqun Feng wrote:
> On Thu, Jun 02, 2016 at 01:52:02PM +0200, Peter Zijlstra wrote:
> About spin_unlock_wait() on ppc, I actually have a fix pending review:
>
> http://lkml.kernel.org/r/1461130033-70898-1-git-send-email-boqun.f...@gmail.com
Please use the
On Thu, Jun 02, 2016 at 01:52:02PM +0200, Peter Zijlstra wrote:
[snip]
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -27,6 +27,8 @@
> #include
> #include
> #include
> +#include
> +#include
>
> #ifdef CONFIG_PPC64
> /* use 0x80yy when lo
This patch updates/fixes all spin_unlock_wait() implementations.
The update is in semantics; where it previously was only a control
dependency, we now upgrade to a full load-acquire to match the
store-release from the spin_unlock() we waited on. This ensures that
when spin_unlock_wait() returns, we're guaranteed to observe the full
critical section we waited on.
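Concretely, the idiom the series settles on looks like this (a sketch of
the control-dependency-plus-barrier pattern; lock->locked is an
illustrative field name, and the per-architecture patches differ in
detail):

	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		/* Spin with only a control dependency on the lock word... */
		while (READ_ONCE(lock->locked))
			cpu_relax();

		/*
		 * ...then upgrade the final load to a load-acquire, pairing
		 * with the store-release in the spin_unlock() we waited on.
		 */
		smp_acquire__after_ctrl_dep();
	}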