On Sat, Jul 07, 2018 at 10:10:24PM +0200, Andrea Parri wrote:
> On Sat, Jul 07, 2018 at 04:08:47PM +0800, Guo Ren wrote:
> > On Fri, Jul 06, 2018 at 02:17:16PM +0200, Peter Zijlstra wrote:
>
> > CPU0                 CPU1
> >
> > WRITE_ONCE(x, 1)     WRITE_ONCE(y, 1)
> > r0 =
On Sat, Jul 07, 2018 at 09:54:37PM +0200, Andrea Parri wrote:
> Hi Guo,
>
> On Sat, Jul 07, 2018 at 03:42:10PM +0800, Guo Ren wrote:
> > On Fri, Jul 06, 2018 at 01:56:14PM +0200, Peter Zijlstra wrote:
> > > CPU0                 CPU1
> > >
> > > r1 = READ_ONCE(x);   WRITE_ONCE(y, 1);
>
On Sat, Jul 07, 2018 at 04:08:47PM +0800, Guo Ren wrote:
> On Fri, Jul 06, 2018 at 02:17:16PM +0200, Peter Zijlstra wrote:
> CPU0                 CPU1
>
> WRITE_ONCE(x, 1)     WRITE_ONCE(y, 1)
> r0 = xchg(&y, 2)     r1 = xchg(&x, 2)
>
> must not allow: r0==0 && r1==0
> So
Hi Guo,
On Sat, Jul 07, 2018 at 03:42:10PM +0800, Guo Ren wrote:
> On Fri, Jul 06, 2018 at 01:56:14PM +0200, Peter Zijlstra wrote:
> > CPU0                 CPU1
> >
> > r1 = READ_ONCE(x);   WRITE_ONCE(y, 1);
> > r2 = xchg(&y, 2);    smp_store_release(&x, 1);
> >
> > must not
On Fri, Jul 06, 2018 at 02:17:16PM +0200, Peter Zijlstra wrote:
> >
> > CPU0                 CPU1
> >
> > r1 = READ_ONCE(x);   WRITE_ONCE(y, 1);
> > r2 = xchg(&y, 2);    smp_store_release(&x, 1);
> >
> > must not allow: r1==1 && r2==0
>
> Also, since you said "SYNC.IS" is a
On Fri, Jul 06, 2018 at 01:56:14PM +0200, Peter Zijlstra wrote:
> That's how LL/SC works. What I was asking is if they have any effect on
> memory ordering. Some architectures have LL/SC imply memory ordering,
> most do not.
>
> Going by your spinlock implementation they don't imply any memory
>
On Fri, Jul 06, 2018 at 02:03:23PM +0200, Peter Zijlstra wrote:
> > > Test-and-set with MB acting as ACQUIRE, ok.
> > Em ... Ok, I'll try to use test-and-set function instead of it.
>
> "test-and-set" is just the name of this type of spinlock implementation.
>
> You _could_ use the linux
On Fri, Jul 06, 2018 at 02:05:32PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 06, 2018 at 07:48:12PM +0800, Guo Ren wrote:
> > On Thu, Jul 05, 2018 at 08:00:08PM +0200, Peter Zijlstra wrote:
> > > On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> > > > +#ifdef CONFIG_CPU_HAS_LDSTEX
> > >
On Fri, Jul 06, 2018 at 02:03:23PM +0200, Peter Zijlstra wrote:
> > Ok, I'll try to implement ticket lock in next version patch.
>
> If you need inspiration, look at:
>
My bad, just look at the arm (not arm64) version.
On Fri, Jul 06, 2018 at 01:56:14PM +0200, Peter Zijlstra wrote:
> > But I couldn't understand what's wrong without the first smp_mb()?
> > The first smp_mb will make all ld/st finish before ldex.w. Is it necessary?
>
> Yes.
>
> CPU0                 CPU1
>
> r1 = READ_ONCE(x);
On Fri, Jul 06, 2018 at 07:48:12PM +0800, Guo Ren wrote:
> On Thu, Jul 05, 2018 at 08:00:08PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> > > +#ifdef CONFIG_CPU_HAS_LDSTEX
> > > +ENTRY(csky_cmpxchg)
> > > + USPTOKSP
> > > + mfcr a3, epc
> > > +
On Fri, Jul 06, 2018 at 07:44:03PM +0800, Guo Ren wrote:
> On Thu, Jul 05, 2018 at 07:59:02PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> >
> > > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > > +{
> > > + unsigned int *p = &lock->lock;
> >
On Fri, Jul 06, 2018 at 07:01:31PM +0800, Guo Ren wrote:
> On Thu, Jul 05, 2018 at 07:50:59PM +0200, Peter Zijlstra wrote:
> > What's the memory ordering rules for your LDEX/STEX ?
> Every CPU has a local exclusive monitor.
>
> "Ldex rz, (rx, #off)" will add an entry into the local monitor, and
On Thu, Jul 05, 2018 at 08:00:08PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> > +#ifdef CONFIG_CPU_HAS_LDSTEX
> > +ENTRY(csky_cmpxchg)
> > + USPTOKSP
> > + mfcr a3, epc
> > + INCTRAP a3
> > +
> > + subi sp, 8
> > + stw a3, (sp, 0)
On Thu, Jul 05, 2018 at 07:59:02PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
>
> > +static inline void arch_spin_lock(arch_spinlock_t *lock)
> > +{
> > + unsigned int *p = &lock->lock;
> > + unsigned int tmp;
> > +
> > + asm volatile (
> > +
On Thu, Jul 05, 2018 at 07:50:59PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
>
> > +#include
> > +
> > +#define __xchg(new, ptr, size) \
> > +({ \
> > +
On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> +#ifdef CONFIG_CPU_HAS_LDSTEX
> +ENTRY(csky_cmpxchg)
> + USPTOKSP
> + mfcr a3, epc
> + INCTRAP a3
> +
> + subi sp, 8
> + stw a3, (sp, 0)
> + mfcr a3, epsr
> + stw a3, (sp, 4)
> +
> + psrset
On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> +static inline void arch_spin_lock(arch_spinlock_t *lock)
> +{
> + unsigned int *p = &lock->lock;
> + unsigned int tmp;
> +
> + asm volatile (
> + "1: ldex.w %0, (%1) \n"
> + " bnez
On Mon, Jul 02, 2018 at 01:30:14AM +0800, Guo Ren wrote:
> +#include
> +
> +#define __xchg(new, ptr, size) \
> +({ \
> + __typeof__(ptr) __ptr = (ptr); \
> +
Signed-off-by: Guo Ren
---
arch/csky/include/asm/cmpxchg.h        |  68 +
arch/csky/include/asm/spinlock.h       | 174 +
arch/csky/include/asm/spinlock_types.h |  20
arch/csky/kernel/atomic.S              |  87 +
4 files