On Wed, Apr 14, 2021 at 12:16:38PM +0200, Peter Zijlstra wrote:
> How's this then? Compile tested only on openrisc/simple_smp_defconfig.
>
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index d74b13825501..a7a1296b0b4d 100644
> --- a/include/asm-generic/qspinlock.h
On Thu, Apr 15, 2021 at 10:02:18AM +0100, Catalin Marinas wrote:
> IIRC, one issue we had with ticket spinlocks on arm64 was on big.LITTLE
> systems where the little CPUs were always last to get a ticket when
> racing with the big cores. That was with load/store exclusives (LR/SC
> style) and would
(fixed Will's email address)
On Thu, Apr 15, 2021 at 10:09:54AM +0200, Peter Zijlstra wrote:
> On Thu, Apr 15, 2021 at 05:47:34AM +0900, Stafford Horne wrote:
> > > How's this then? Compile tested only on openrisc/simple_smp_defconfig.
> >
> > I did my testing with this FPGA build SoC:
> >
> >
On Thu, Apr 15, 2021 at 05:47:34AM +0900, Stafford Horne wrote:
> > How's this then? Compile tested only on openrisc/simple_smp_defconfig.
>
> I did my testing with this FPGA build SoC:
>
> https://github.com/stffrdhrn/de0_nano-multicore
>
> Note, the CPU timer sync logic uses mb() and is a bi
On Wed, Apr 14, 2021 at 02:45:43PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 14, 2021 at 12:16:38PM +0200, Peter Zijlstra wrote:
> > On Wed, Apr 14, 2021 at 11:05:24AM +0200, Peter Zijlstra wrote:
> >
> > > That made me look at the qspinlock code, and queued_spin_*lock() uses
> > > atomic_try_cmpxchg_acquire(), which means any arch that uses qspinlock
> > > and has RCpc atomics will give us massive pain.
On Wed, Apr 14, 2021 at 12:16:38PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 14, 2021 at 11:05:24AM +0200, Peter Zijlstra wrote:
>
> > That made me look at the qspinlock code, and queued_spin_*lock() uses
> > atomic_try_cmpxchg_acquire(), which means any arch that uses qspinlock
> > and has RCpc atomics will give us massive pain.
From: Peter Zijlstra
> Sent: 14 April 2021 13:56
>
> > I've tested it on csky SMP*4 hw (860) & riscv SMP*4 hw (c910) and it's okay.
>
> W00t :-)
>
> > Hope you can keep
> > typedef struct {
> > 	union {
> > 		atomic_t lock;
> > 		struct __raw_tickets {
> > #ifdef __BIG_ENDIAN
On Wed, Apr 14, 2021 at 02:55:57PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 14, 2021 at 08:39:33PM +0800, Guo Ren wrote:
> > > + * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc
> > > and hence
> > > + * uses atomic_fetch_add() which is SC to create an RCsc lock.
>
> This ^
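The quoted comment's point (an acquire-only RMW gives RCpc ordering, a fully-ordered RMW gives RCsc) can be sketched with C11 atomics. This is an illustrative sketch, not the kernel's code: the names `tickets` and `take_ticket_*` are made up here, and the kernel uses its own `atomic_fetch_add()`/`atomic_fetch_add_acquire()` rather than `<stdatomic.h>`.

```c
#include <stdatomic.h>

/* Stand-in for the lock word; in the kernel this lives in arch_spinlock_t. */
static atomic_uint tickets;

static unsigned int take_ticket_rcsc(void)
{
	/* Fully-ordered RMW, as the quoted comment describes: the SC
	 * fetch_add makes the resulting lock RCsc even on hardware whose
	 * plain acquire/release atomics are only RCpc. */
	return atomic_fetch_add_explicit(&tickets, 1, memory_order_seq_cst);
}

static unsigned int take_ticket_rcpc(void)
{
	/* Acquire-only RMW: only RCpc ordering is guaranteed, which is
	 * the weaker case the comment warns about. */
	return atomic_fetch_add_explicit(&tickets, 1, memory_order_acquire);
}
```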
On Wed, Apr 14, 2021 at 08:39:33PM +0800, Guo Ren wrote:
> I've tested it on csky SMP*4 hw (860) & riscv SMP*4 hw (c910) and it's okay.
W00t :-)
> Hope you can keep
> typedef struct {
> 	union {
> 		atomic_t lock;
> 		struct __raw_tickets {
> #ifdef __BIG_ENDIAN
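The truncated typedef above overlays a 32-bit lock word with two 16-bit ticket halves. A minimal C11 sketch of how such a ticket lock operates follows; it is not the kernel's implementation. The little-endian layout (`owner` in the low half, `next` in the high half) and the whole-word unlock are assumptions made here for portability, and the kernel releases with a half-word store rather than a `fetch_add`.

```c
#include <stdatomic.h>
#include <stdint.h>

/* One 32-bit word standing in for the quoted union: on little-endian,
 * 'owner' is the low 16 bits and 'next' the high 16 bits; the
 * #ifdef __BIG_ENDIAN in the quoted struct swaps them. */
typedef struct {
	atomic_uint lock;
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
	/* Take a ticket: bump 'next' with a fully-ordered RMW, matching
	 * the RCsc point made earlier in the thread. */
	uint32_t old = atomic_fetch_add_explicit(&l->lock, 1u << 16,
						 memory_order_seq_cst);
	uint16_t me = (uint16_t)(old >> 16);

	/* Spin until 'owner' reaches our ticket: FIFO hand-off. */
	while ((uint16_t)atomic_load_explicit(&l->lock,
					      memory_order_acquire) != me)
		;
}

static void ticket_unlock(ticket_lock_t *l)
{
	/* Pass the lock to the next ticket. A whole-word increment is used
	 * here for simplicity; it assumes 'owner' does not wrap past
	 * 0xffff into 'next' (the kernel avoids this with a u16 store). */
	atomic_fetch_add_explicit(&l->lock, 1, memory_order_release);
}
```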
On Wed, Apr 14, 2021 at 6:16 PM Peter Zijlstra wrote:
>
> On Wed, Apr 14, 2021 at 11:05:24AM +0200, Peter Zijlstra wrote:
>
> > That made me look at the qspinlock code, and queued_spin_*lock() uses
> > atomic_try_cmpxchg_acquire(), which means any arch that uses qspinlock
> > and has RCpc atomics
On Wed, Apr 14, 2021 at 11:05:24AM +0200, Peter Zijlstra wrote:
> That made me look at the qspinlock code, and queued_spin_*lock() uses
> atomic_try_cmpxchg_acquire(), which means any arch that uses qspinlock
> and has RCpc atomics will give us massive pain.
>
> Current archs using qspinlock are:
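The `queued_spin_*lock()` fast path the message refers to can be sketched as an acquire-ordered compare-and-swap from "unlocked, no waiters" to "locked". This is a hedged C11 illustration of that one step only (the real `queued_spin_lock()` falls back to a slow path with an MCS-style queue on failure), and `qlock_val`/`qspin_fastpath_trylock` are names invented here:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Mirrors the kernel's qspinlock encoding of the locked byte. */
#define _Q_LOCKED_VAL 1u

static atomic_uint qlock_val;

static bool qspin_fastpath_trylock(void)
{
	unsigned int expected = 0;
	/* Acquire-ordered CAS from 0 (unlocked, no waiters) to locked.
	 * On an architecture whose acquire atomics are only RCpc, this
	 * yields an RCpc lock, which is the "massive pain" above. */
	return atomic_compare_exchange_strong_explicit(
			&qlock_val, &expected, _Q_LOCKED_VAL,
			memory_order_acquire, memory_order_relaxed);
}
```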