barrier:
(everything initialized to 0)

CPU1:
	a = 1;
	spin_unlock();
	spin_lock();
	+ smp_mb__after_unlock_lock();
	r1 = d;

CPU2:
	d = 1;
	smp_mb();
	r2 = a;

Without the smp_mb__after_unlock_lock(), r1 == 0 && r2 == 0 would
be possible.
Signed-off-by: Manfred Spraul
Cc: Paul E. McKenney
---
- spin_unlock_wait() is an ACQUIRE.
- No memory ordering is enforced by spin_is_locked().
The patch adds this into Documentation/locking/spinlocks.txt.
Signed-off-by: Manfred Spraul
---
Documentation/locking/spinlocks.txt | 9 +
1 file changed, 9 insertions(+)
diff --git a/Documentation/locking
Hi,
V5: Major restructuring based on input from Peter and Davidlohr.
As discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
The documentation was not as clear as possible, and memory
barriers were missing / superfluous in the implementation.
k) instead of
spin_unlock_wait(_lock) and loop backward.
- use smp_store_mb() instead of a raw smp_mb()
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Pablo Neira Ayuso <pa...@netfilter.org>
Cc: netfilter-de...@vger.kernel.org
---
Question: Should I split this patch?
First a patch that uses smp_mb(), with Cc: stable.
The replace
On 08/29/2016 03:44 PM, Peter Zijlstra wrote:
If you add a barrier, the Changelog had better be clear. And I'm still
not entirely sure I get what exactly this barrier should do, nor why it
defaults to a full smp_mb. If what I suspect it should do, only PPC and
ARM64 need the barrier.
The
change ensures that nf_conntrack_lock() does not loop multiple times.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
net/netfilter/nf_conntrack_core.c | 36 ++--
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/net/net
s override it with a less expensive
barrier if this is sufficient for their hardware/spinlock
implementation.
For overriding, the same approach as for smp_mb__before_spin_lock()
is used: If smp_mb__after_spin_lock is already defined, then it is
not changed.
Signed-off-by: Manfred Spraul <manf...@col
Hi,
V4: Docu/comment improvements, remove unnecessary barrier for x86.
V3: Bugfix for arm64
V2: Include updated documentation for rcutree patch
As discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
queued_spin_unlock_wait for details.
As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used
in any hotpaths, the patch does not create that define yet.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
arch/x86/include/asm/qspinlock.h | 11 +++
1 file chang
Hi Peter,
On 08/29/2016 12:48 PM, Peter Zijlstra wrote:
On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote:
Right now, the spinlock machinery tries to guarantee barriers even for
unorthodox locking cases, which ends up as a constant stream of updates
as the architectures try
kmemleak_alloc+0x23/0x40
kmem_cache_alloc_trace+0xe1/0x180
selinux_msg_queue_alloc_security+0x3f/0xd0
security_msg_queue_alloc+0x2e/0x40
newque+0x4e/0x150
ipcget+0x159/0x1b0
SyS_msgget+0x39/0x40
entry_SYSCALL_64_fastpath+0x13/0x8f
Manfred Spraul suggested to fix s
On 08/28/2016 03:43 PM, Paul E. McKenney wrote:
Without the smp_mb__after_unlock_lock(), other CPUs can observe the
write to d without seeing the write to a.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
With the upgraded commit log, I am OK with the patch below.
Done.
However, others will probably want
possible.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/asm-generic/barrier.h | 16
kernel/rcu/tree.h | 12
2 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/b
spin_unlock() + spin_lock() together do not form a full memory barrier:

	a = 1;
	spin_unlock();
	spin_lock();
	+ smp_mb__after_unlock_lock();
	d = 1;
Without the smp_mb__after_unlock_lock(), other CPUs can observe the
write to d without seeing the write to a.
Signed-off-by: Manfred Spraul <m
(), that is part of
spin_unlock_wait()
- smp_mb__after_spin_lock() instead of a direct smp_mb().
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
Documentation/locking/spinlocks.txt | 5 +
include/linux/spinlock.h| 12
ipc
Hi,
as discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
The documentation was not as clear as possible, and memory
barriers were missing / superfluous in the implementation.
Patch 1: Documentation,
Hi Paul,
On 08/10/2016 11:00 PM, Paul E. McKenney wrote:
On Wed, Aug 10, 2016 at 12:17:57PM -0700, Davidlohr Bueso wrote:
[...]
CPU0                        CPU1
complex_mode = true         spin_lock(l)
smp_mb() <--- do we want a smp_mb() here?
Hi Boqun,
On 08/12/2016 04:47 AM, Boqun Feng wrote:
We should not be doing an smp_mb() right after a spin_lock(), makes no sense.
The
spinlock machinery should guarantee us the barriers in the unorthodox locking
cases,
such as this.
Do we really want to go there?
Trying to handle all
Hi,
[adding Peter, correcting Davidlohr's mail address]
On 08/10/2016 02:05 AM, Benjamin Herrenschmidt wrote:
On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
Hi Benjamin, Hi Michael,
regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()"):
For t
Hi Benjamin, Hi Michael,
regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()"):
For the ipc/sem code, I would like to replace the spin_is_locked() with
a smp_load_acquire(), see:
http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367
Hi Fabian,
On 07/29/2016 10:15 AM, Fabian Frederick wrote:
Running LTP msgsnd06 with kmemleak gives the following:
cat /sys/kernel/debug/kmemleak
unreferenced object 0x88003c0a11f8 (size 8):
comm "msgsnd06", pid 1645, jiffies 4294672526 (age 6.549s)
hex dump (first 8 bytes):
1b
e16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bremen.de
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: <sta...@vger.kernel.org>
---
include/linux/sem.h | 1 +
ipc/sem.c | 138 +++---
Hi Andrew,
On 07/14/2016 12:05 AM, Andrew Morton wrote:
On Wed, 13 Jul 2016 07:06:50 +0200 Manfred Spraul <manf...@colorfullife.com>
wrote:
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes:
The patches are again vs. tip.
Patch 1 is ready for merging, Patch 2 is for review.
- Patch 1 is the patch
Hi Davidlohr,
On 07/13/2016 06:16 PM, Davidlohr Bueso wrote:
Manfred, shouldn't this patch be part of patch 1 (as you add the
unnecessary barriers there? Iow, can we have a single patch for all this?
Two reasons:
- patch 1 is safe for backporting, patch 2 not.
- patch 1 is safe on all
Now both thread A and thread C operate on the same array, without
any synchronization.
Full memory barriers are required to synchronize changes of
complex_mode and the lock operations.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bremen.de
Signed-
SMP.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 14 --
1 file changed, 14 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 0da63c8..d7b4212 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -291,14 +291,6 @@ static void complexmode_enter(struct sem
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes:
The patches are again vs. tip.
Patch 1 is ready for merging, Patch 2 is for review.
- Patch 1 is the patch as in -next since January
It fixes the race that was found by Felix.
- Patch 2 removes the memory barriers that are part of the
On 06/28/2016 07:27 AM, Davidlohr Bueso wrote:
On Thu, 23 Jun 2016, Manfred Spraul wrote:
What I'm not sure yet is if smp_load_acquire() is sufficient:
Thread A:
if (!READ_ONCE(sma->complex_mode)) {
The code is test_and_test, no barrier requirements for first test
Yeah, it wo
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bremen.de
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: <sta...@vger.kernel.org>
---
include/linux/sem.h | 1 +
ipc/sem.c | 122 ++-
lock to the per
semaphore locks. This reduces how often the per-semaphore locks must
be scanned.
Passed stress testing with sem-scalebench.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/sem.h | 2 +-
ipc/sem.c
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes / improvement:
The patches are now vs. tip.
Patch 1 is ready for merging, patch 2 is new and for discussion.
Patch 1 fixes the race that was found by Felix.
It also adds smp_mb() to fully synchronize
WRITE_ONCE(status, 1);
On 06/21/2016 10:29 PM, Davidlohr Bueso wrote:
On Sat, 18 Jun 2016, Manfred Spraul wrote:
sysv sem has two lock modes: One with per-semaphore locks, one lock mode
with a single big lock for the whole array.
When switching from the per-semaphore locks to the big lock, all
per-semaphore locks
On 06/21/2016 02:30 AM, Davidlohr Bueso wrote:
On Sat, 18 Jun 2016, Manfred Spraul wrote:
diff --git a/include/linux/sem.h b/include/linux/sem.h
index 976ce3a..d0efd6e 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -21,6 +21,7 @@ struct sem_array {
struct list_head
On 06/21/2016 01:04 AM, Andrew Morton wrote:
On Sat, 18 Jun 2016 22:02:21 +0200 Manfred Spraul <manf...@colorfullife.com>
wrote:
Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple
plex_count==1)
- wakes up Thread B.
- decrements complex_count
Thread A:
- does the complex_count test
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@in
Hi,
On 06/15/2016 07:23 AM, Stephen Rothwell wrote:
Hi Andrew,
Today's linux-next merge of the akpm-current tree got a conflict in:
ipc/sem.c
between commit:
33ac279677dc ("locking/barriers: Introduce smp_acquire__after_ctrl_dep()")
from the tip tree and commit:
a1c58ea067cb
Hi Peter,
On 05/20/2016 06:04 PM, Peter Zijlstra wrote:
On Fri, May 20, 2016 at 05:21:49PM +0200, Peter Zijlstra wrote:
Let me write a patch..
OK, something like the below then.. lemme go build that and verify that
too fixes things.
---
Subject: locking,qspinlock: Fix spin_is_locked() and
On 05/21/2016 09:37 AM, Peter Zijlstra wrote:
On Fri, May 20, 2016 at 05:48:39PM -0700, Davidlohr Bueso wrote:
As opposed to spin_is_locked(), spin_unlock_wait() is perhaps more tempting
to use for locking correctness. For example, taking a look at
nf_conntrack_all_lock(),
it too likes to get
Hi,
On 02/26/2016 01:21 PM, PrasannaKumar Muralidharan wrote:
From: PrasannaKumar Muralidharan
As described in bug #112271 (bugzilla.kernel.org/show_bug.cgi?id=112271)
don't set sempid in semctl syscall. Set sempid only when semop is called.
I disagree with the bug report:
sempid is (and
Hi Ying,
On 02/14/2016 07:41 AM, kernel test robot wrote:
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 0050ee059f7fc86b1df2527aaa14ed5dc72f9973 ("ipc/msg: increase MSGMNI, remove
scaling")
LTP_syscalls: msgctl11: "Not
On 01/04/2016 02:02 PM, Davidlohr Bueso wrote:
On Sat, 02 Jan 2016, Manfred Spraul wrote:
Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a
race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple operation
Hi Dmitry,
On 01/02/2016 01:19 PM, Dmitry Vyukov wrote:
On Sat, Jan 2, 2016 at 12:33 PM, Manfred Spraul
wrote:
Hi Dmitry,
shm locking differs too much from msg/sem locking, I never looked at it in
depth, so I'm not able to perform a proper review.
Except for the obvious: Races that can
On 11/13/2015 08:23 PM, Davidlohr Bueso wrote:
So considering EINVAL, even your approach to bumping up nattach by
calling
_shm_open earlier isn't enough. Races exposed to user called rmid can
still
occur between dropping the lock and doing ->mmap(). Ultimately this
leads to
all
Hi Dmitry,
shm locking differs too much from msg/sem locking, I never looked at it
in depth, so I'm not able to perform a proper review.
Except for the obvious: races that can be triggered from user space are
unacceptable, regardless of whether there is a BUG_ON, a WARN_ON or
nothing at all.
On
--
Manfred
/*
* pmsg.cpp, parallel sysv msg pingpong
*
* Copyright (C) 1999, 2001, 2005, 2008 by Manfred Spraul.
* All rights reserved except the rights granted by the GPL.
*
* Redistribution of this file is permitted under the terms of the GNU
* General Public License (GPL) version 2 or l
0:00:00 2001
From: Manfred Spraul
Date: Sat, 10 Oct 2015 08:37:22 +0200
Subject: [PATCH] ipc/sem.c: Alternative for fixing Concurrency bug
Two ideas for fixing the bug found by Felix:
- Revert my initial patch.
Problem: significant slowdown for applications that use large sem
arrays and comple