static code analysis.
- This is a cast between different non-void types, which the future
randstruct GCC plugin warns on.
And, as a bonus, the code size gets smaller:
Before:
0 .text 3770
After:
0 .text 374e
Signed-off-by: Manfred Spraul
---
include/linux/
The only users of ipc_alloc() were ipc_rcu_alloc() and the on-heap
sem_io fall-back memory. Better to just open-code these to make things
easier to read.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff due to inclusion of memset() into
ipc_rcu_alloc().]
Signed-off-by: Manfred Spraul
Hi all,
Updated series. The series got longer, because I merged all patches
from Kees.
Main changes are:
- sems[] instead of sem[0].
- Immediately use BUILD_BUG_ON()
- Immediately move the memset() to avoid crashing with SEM_UNDO.
- Use rcu for every security_xx_free(), even if ipc_addid() was no
that there is
a comment in include/linux/sem.h and man semctl(2) as well.
So: Correct wrong comments.
Signed-off-by: Manfred Spraul
Cc: linux-...@vger.kernel.org
---
include/linux/sem.h | 2 +-
include/uapi/linux/sem.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
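Sketched roughly (names are assumptions for illustration, not the literal hunks), the two free paths end up looking like this:

    /* Illustrative sketch only, not the exact patch. */
    error = security_shm_alloc(shp);
    if (error) {
            /* The object was never visible to readers: free it directly,
             * no RCU grace period is needed. */
            kvfree(shp);
            return error;
    }
    /* Once the object can be looked up, later frees go through call_rcu()
     * so readers holding an RCU reference never see it vanish under them. */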
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 9
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 9
From: Kees Cook
There are no more callers of ipc_rcu_free(), so remove it.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/util.c | 7 ---
ipc/util.h | 1 -
2 files changed, 8 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index dd73feb..556884b 100644
--- a/ipc/util.c
Loosely based on a patch from Kees Cook :
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.
The difference is that call_rcu is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_sem_free().
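As a rough sketch (the callback name sem_rcu_free() is assumed and the intermediate initialization is omitted), merging id and retval and pushing the failure path through call_rcu() looks like:

    /* Sketch, not the exact hunk: even a failed ipc_addid() defers
     * security_sem_free() behind an RCU grace period. */
    retval = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
    if (retval < 0) {
            /* sem_rcu_free() ends up doing security_sem_free() + kvfree() */
            call_rcu(&sma->sem_perm.rcu, sem_rcu_free);
            return retval;
    }
    return sma->sem_perm.id;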
Signed-off-by: Manfred Spraul
Cc
-off-by: Manfred Spraul
Cc: Kees Cook
---
ipc/shm.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index c9f1f30..cb1d97e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -548,7 +548,6 @@ static int newseg(struct ipc_namespace *ns, struct
ipc_params
From: Kees Cook
Only after ipc_addid() has succeeded will refcounting be used, so
move initialization into ipc_addid() and remove from open-coded
*_alloc() routines.
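A one-line sketch of the idea (whether the primitive is refcount_set() or atomic_set() depends on the tree; the field name is assumed):

    /* In ipc_addid(), just before the object becomes findable: this is the
     * first point where the reference count matters, so initialize it here
     * instead of in every open-coded *_alloc() routine. */
    refcount_set(&new->refcount, 1);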
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 2 --
ipc/sem.c | 1 -
ipc/shm.c | 2 --
ipc/util.c
memset was
temporarily inside ipc_rcu_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 25 -
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index a04c4d6..445a5b5 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -451,6 +451,25 @@ static
From: Kees Cook
No callers remain for ipc_rcu_alloc(). Drop the function.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff because the memset was
temporarily inside ipc_rcu_free()]
Signed-off-by: Manfred Spraul
---
ipc/util.c | 21 -
ipc/util.h | 3 ---
2
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 9
From: Kees Cook
There is nothing special about the msg_alloc/free routines any more,
so remove them to make code more readable.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_msg_queue_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 24
Loosely based on a patch from Kees Cook :
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.
The difference is that call_rcu is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_msg_queue_free().
Signed-off-by: Manfred Spraul
From: Kees Cook
There is nothing special about the shm_alloc/free routines any more,
so remove them to make code more readable.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff, to continue to keep rcu for
free calls after a successful security_shm_alloc()]
Signed-off-by: Manfred
Now that ipc_rcu_alloc() and ipc_rcu_free() are removed,
document when it is valid to use ipc_getref() and ipc_putref().
Signed-off-by: Manfred Spraul
---
ipc/util.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ipc/util.h b/ipc/util.h
index 77336c2b..c692010 100644
--- a/ipc/util.h
From: Kees Cook
The remaining users of __sem_free() can simply call kvfree() instead for
better readability.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_sem_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 9 ++---
1 file changed
From: Kees Cook
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for msg_queue structure
layout to be randomized in the future.
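In sketch form (allocator choice and the exact init sequence are assumptions), the open-coded allocation amounts to:

    /* Sketch only: allocate struct msg_queue directly instead of through
     * ipc_rcu_alloc(), so nothing forces a particular structure layout and
     * randstruct can reorder the members later. */
    struct msg_queue *msq;

    msq = kvmalloc(sizeof(*msq), GFP_KERNEL);
    if (unlikely(!msq))
            return -ENOMEM;
    memset(msq, 0, sizeof(*msq));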
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 18 ++
1 file changed
From: Kees Cook
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for shmid_kernel structure
layout to be randomized in the future.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 18 ++
1 file
cacheline aligned.
In addition, it reduces the number of casts; instead, most codepaths can
use container_of.
To simplify the code, ipc_rcu_alloc() initializes the allocation to 0.
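The container_of point can be sketched like this (helper name assumed):

    /* Sketch: with kern_ipc_perm embedded as the first member, the cast from
     * a looked-up kern_ipc_perm becomes a type-checked container_of(). */
    static inline struct sem_array *to_sem_array(struct kern_ipc_perm *ipcp)
    {
            return container_of(ipcp, struct sem_array, sem_perm);
    }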
Signed-off-by: Manfred Spraul
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 19 +++
ipc
why this causes indigestion, there's probably something subtly
wrong here
commit 337f43326737b5eb28eb13f43c27a5788da0f913
Author: Manfred Spraul
Date: Fri May 19 07:39:23 2017 +1000
ipc: merge ipc_rcu and kern_ipc_perm
ipc has two management structures that exist for eve
Hi valdis,
On 05/20/2017 10:18 PM, valdis.kletni...@vt.edu wrote:
Seeing problems with programs that use semaphores. The one
that I'm getting bit by is jackd. strace says:
getuid()= 967
semget(0x282929, 0, 000)= 229376
semop(229376, [{0, -1, SEM
cacheline aligned.
In addition, it reduces the number of casts; instead, most codepaths can
use container_of.
Signed-off-by: Manfred Spraul
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 22 ++
ipc/sem.c | 35 ---
ipc/shm.c
that there is
a comment in include/linux/sem.h and man semctl(2) as well.
So: Correct wrong comments.
Signed-off-by: Manfred Spraul
Cc: linux-...@vger.kernel.org
---
include/linux/sem.h | 2 +-
include/uapi/linux/sem.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a
static code analysis.
- This is a cast between different non-void types, which the future
randstruct GCC plugin warns on.
And, as a bonus, the code size gets smaller:
Before:
0 .text 3770
After:
0 .text 374e
Signed-off-by: Manfred Spraul
---
include/linux/
Hi all,
could you check the following three patches?
As explained, I try to combine the changes for static analyzers and for
the randstruct gcc plugin with cleanups.
0001-ipc-sem.c-remove-sem_base-embed-struct-sem.patch:
sem_base is not needed. Instead of improving the casts,
rem
00:00:00 2001
From: Manfred Spraul
Date: Sat, 13 May 2017 18:02:20 +0200
Subject: [PATCH] ipc/sem.c: remove sem_base, embed struct sem
sma->sem_base is initialized with
sma->sem_base = (struct sem *) &sma[1];
The current code has four problems:
- There is an unnecessary pointer dereference
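A before/after sketch of the layout change (only the relevant members shown; the sems[] name follows the cover letter above):

    /* Before (sketch): a pointer that always points just past the header. */
    struct sem_array {
            struct kern_ipc_perm    sem_perm;
            struct sem              *sem_base;      /* = (struct sem *)&sma[1] */
            /* ... */
    };

    /* After (sketch): embed the per-semaphore array as a flexible array
     * member, so no cast and no extra pointer dereference are needed. */
    struct sem_array {
            struct kern_ipc_perm    sem_perm;
            /* ... */
            struct sem              sems[];
    };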
: Manfred Spraul
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
Patch V2:
- subject line updated
- documentation slightly updated
ipc/sem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ipc/sem.c b/ipc/sem.c
ind
locks
the global spinlock.
The fix is trivial: Use the return value from sem_lock.
Reported-by: dvyu...@google.com
Signed-off-by: Manfred Spraul
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
ipc/sem.c | 2 +-
1 file changed, 1 inser
On 12/18/2016 05:29 PM, Davidlohr Bueso wrote:
On Sun, 18 Dec 2016, Bueso wrote:
On Fri, 16 Dec 2016, Dmitry Vyukov wrote:
[ BUG: bad unlock balance detected! ]
4.9.0+ #89 Not tainted
Thanks for the report, I can reproduce the issue as of (which I
obviously
should have tested with lockdep
On 10/28/2016 09:29 PM, Colin Ian King wrote:
On 28/10/16 20:21, Manfred Spraul wrote:
Hi Colin,
On 10/28/2016 08:11 PM, Colin King wrote:
[...]
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1839,7 +1839,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct
sembuf __user *, tsops,
max = 0
Hi Colin,
On 10/28/2016 08:11 PM, Colin King wrote:
From: Colin Ian King
The left shift amount is sop->sem_num % 64, which is up to 63, so
ensure we are shifting a ULL rather than a 32 bit value.
Good catch, thanks.
CoverityScan CID#1372862 "Bad bit shift operation"
Fixes: 7c24530cb4e3c0ae
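The fix can be sketched as follows (the bitmap variable names are assumptions; only the width of the shifted constant changes):

    /* Sketch: sop->sem_num % 64 can be 63, and 1 << 63 on a 32-bit int is
     * undefined, so the shifted constant must be a 64-bit value. */
    if (dup & (1ULL << (sop->sem_num % 64)))    /* not: 1 << (...) */
            dupsop = true;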
On 10/20/2016 02:21 AM, Andrew Morton wrote:
On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul
wrote:
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock() for
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock() for the rare codepath,
then the fast path doesn't need the smp_mb() anymore.
2) confirm that no arch needs th
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul
---
include/linux/sem.h | 2 +-
ipc/sem.c
the slow path is used.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..d5f2710 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -280,24 +280,13 @@ static void complexmode_enter
Hi,
On 10/09/2016 09:05 AM, kernel test robot wrote:
FYI, we noticed a -8.8% regression of aim9.shared_memory.ops_per_sec due to
commit:
commit 0882cba0a03bca73acd8fab8fb50db04691908e9 ("ipc/sem.c: fix complex_count vs.
simple op race")
https://git.kernel.org/pub/scm/linux/kernel/git/next/lin
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul
---
include/linux/sem.h | 2 +-
ipc/sem.c
the slow path is used.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..d5f2710 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -280,24 +280,13 @@ static void complexmode_enter
Hi Andrew, Hi Peter, Hi Davidlohr,
New idea for ipc/sem:
The ACQUIRE from spin_lock() will continue to apply only for the load,
not for the store.
Thus: If we don't want to add arch dependencies into ipc/sem, the only
safe option is to use spin_lock()/spin_unlock() instead of spin_unlock_wait().
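A small sketch of the point being made here (the variable x is illustrative):

    /* Sketch: the ACQUIRE in spin_lock() orders later accesses against the
     * *load* of the lock word, not against the *store* that takes the lock. */
    spin_lock(&a);          /* load-acquire of a.lock, then store to a.lock */
    WRITE_ONCE(x, 1);       /* may become visible before the store to a.lock */
    /* Hence another CPU doing spin_unlock_wait(&a) or spin_is_locked(&a)
     * can miss the store to x unless extra barriers are added - or unless
     * plain spin_lock()/spin_unlock() is used on both sides. */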
On 09/22/2016 12:21 AM, Davidlohr Bueso wrote:
On Sun, 18 Sep 2016, Manfred Spraul wrote:
Just as with msgrcv (along with the rest of sysvipc since a few years
ago), perform the security checks without holding the ipc object
lock.
Thinking about it: isn't this wrong?
CPU1:
* m
On 09/18/2016 09:11 PM, Davidlohr Bueso wrote:
Changes from v1 (https://lkml.org/lkml/2016/9/12/266)
- Got rid of the signal_pending check in wakeup fastpath. (patch 2)
- Added read/access once to queue.status (we're obviously concerned about
lockless access upon unrelated events, even if on th
( 6.21%)
Hmean sembench-sem-482 965735.00 ( 0.00%) 1040313.00 ( 7.72%)
Signed-off-by: Davidlohr Bueso
Acked-by: Manfred Spraul
--
Manfred
What about the attached dup detection?
--
Manfred
From 140340a358dbf66b3bc6f848ca9b860e3e957e84 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Mon, 19 Sep 2016 06:25:20 +0200
Subject: [PATCH] ipc/sem: Update duplicate sop detection
The duplicated sop detection can be improved:
- use uint64_t instead of unsigned lo
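A sketch of the detection with a 64-bit bitmap (names assumed; a set bit only means a *possible* duplicate, since sem_num is reduced mod 64):

    /* Sketch: cheap "maybe duplicate" test over the sop array. */
    u64 seen = 0;
    bool dupsop = false;
    int i;

    for (i = 0; i < nsops; i++) {
            u64 bit = 1ULL << (sops[i].sem_num % 64);

            if (seen & bit)
                    dupsop = true;  /* possible duplicate, take the safe path */
            seen |= bit;
    }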
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
... saves some LoC and looks cleaner than re-implementing the
calls.
Signed-off-by: Davidlohr Bueso
Acked-by: Manfred Spraul
--
Manfred
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
@@ -1933,22 +1823,32 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf
__user *, tsops,
queue.alter = alter;
error = perform_atomic_semop(sma, &queue);
- if (error == 0) {
- /* If the operatio
Hi Davidlohr,
Just as with msgrcv (along with the rest of sysvipc since a few years
ago), perform the security checks without holding the ipc object lock.
Thinking about it: isn't this wrong?
CPU1:
* msgrcv()
* ipcperms()
CPU2:
* msgctl(), change permissions
** msgctl() returns, new perm
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
Hmean sembench-sem-482 965735.00 ( 0.00%) 1040313.00 ( 7.72%)
[...]
Signed-off-by: Davidlohr Bueso
---
ipc/sem.c | 268 +++---
1 file changed, 83 insertions(+), 1
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
... as this call should obviously be paired with its _prepare()
counterpart. At least whenever possible, as there is no harm in
calling it bogusly as we do now in a few places.
I would define the interface differently:
WAKE_Q creates
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
This is the main workhorse that deals with semop user calls
such that the waitforzero or semval update operations, on the
set, can complete or not as the sma currently stands. Currently,
the set is iterated twice (setting semval, then
Hi,
On 09/06/2016 08:42 AM, kernel test robot wrote:
FYI, we noticed a -8.9% regression of aim9.shared_memory.ops_per_sec due to
commit:
commit 99ac0dfffcfb34326a880e90e06c30a2a882c692 ("ipc/sem.c: fix complex_count vs.
simple op race")
https://git.kernel.org/pub/scm/linux/kernel/git/next/lin
Hi Peter,
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote:
Since
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote:
Since spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait
Since spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait() is
also not required.
Not for stable!
Signed-off-by: Manfred Spraul
Cc: Pablo Neira Ayuso
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 8
As spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait()
is not required.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index a5da82c
_acquire(), but then it must be checked first if all
updates to qspinlock were backported.
Fixes: b16c29191dc8
Signed-off-by: Manfred Spraul
Cc:
Cc: Sasha Levin
Cc: Pablo Neira Ayuso
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 41 ++-
?
- spin_unlock_wait() is spin_lock()+spin_unlock().
- No memory ordering is enforced by spin_is_locked().
The patch adds this into Documentation/locking/spinlock.txt.
Signed-off-by: Manfred Spraul
Cc: Will Deacon
---
Documentation/locking/spinlocks.txt | 9 +
1 file changed, 9 insertions(+)
diff
As spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the smp_mb() after spin_lock() is not required.
Remove it.
Signed-off-by: Manfred Spraul
---
net/netfilter/nf_conntrack_core.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/netfilter
spin_unlock() + spin_lock() do not form a full memory barrier:
(everything initialized to 0)
CPU1:
a=1;
spin_unlock(&b);
spin_lock(&c);
+ smp_mb__after_unlock_lock();
r1=d;
CPU2:
d=1;
smp_mb();
r2=a;
Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
be possible.
Signed-off-by: Manfred Sp
mb() after spin_unlock_wait() can be removed.
Not for stable!
Signed-off-by: Manfred Spraul
Cc: Pablo Neira Ayuso
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.c
b/net
From a memory ordering point of view, spin_unlock_wait() provides
the same guarantees as spin_lock(); spin_unlock().
Therefore the smp_mb() after spin_lock() is not necessary,
spin_unlock_wait() must provide the memory ordering.
Signed-off-by: Manfred Spraul
Cc: Will Deacon
---
ipc/sem.c
Hi,
Based on the new consensus:
- spin_unlock_wait() is spin_lock();spin_unlock();
- no guarantees are provided by spin_is_locked().
- the acquire during spin_lock() is for the load, not for the store.
Summary:
If a high-scalability locking scheme is built with multiple
spinlocks, then often addi
The smp_rmb() after spin_unlock_wait() can be removed.
Not for stable!
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 8
1 file changed, 8 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..6586e0a 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -290,14 +290,6 @@ static void complex
Hi,
On 09/01/2016 10:44 AM, Peter Zijlstra wrote:
On Wed, Aug 31, 2016 at 08:32:18PM +0200, Manfred Spraul wrote:
On 08/31/2016 06:40 PM, Will Deacon wrote:
The litmus test then looks a bit like:
CPUm:
LOCK(x)
smp_mb();
RyAcq=0
CPUn:
Wy=1
smp_mb();
UNLOCK_WAIT(x)
Correct.
which I think
On 08/31/2016 06:40 PM, Will Deacon wrote:
I'm struggling with this example. We have these locks:
&sem->lock
&sma->sem_base[0...sma->sem_nsems].lock
&sma->sem_perm.lock
a condition variable:
sma->complex_mode
and a new barrier:
smp_mb__after_spin_lock()
For simplicity, we ca
mb() after spin_unlock_wait() can be removed.
Not for stable!
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 8
1 file changed, 8 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..6586e0a 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -290,14 +290,6 @@ static void complexmode_en
This converts ipc/sem.c to the new define.
For overriding, the same approach as for smp_mb__before_spin_lock()
is used: If smp_mb__after_spin_lock is already defined, then it is
not changed.
The default is smp_mb(), to ensure that no architecture gets broken.
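As a sketch, the fallback definition described here is simply:

    /* Sketch: architectures that already define smp_mb__after_spin_lock are
     * left untouched; everyone else gets a full barrier so that no
     * architecture is silently weakened. */
    #ifndef smp_mb__after_spin_lock
    #define smp_mb__after_spin_lock()       smp_mb()
    #endif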
Signed-off-by: Manfred Spraul
---
inc
spin_unlock() + spin_lock() do not form a full memory barrier:
(everything initialized to 0)
CPU1:
a=1;
spin_unlock(&b);
spin_lock(&c);
+ smp_mb__after_unlock_lock();
r1=d;
CPU2:
d=1;
smp_mb();
r2=a;
Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
be possible.
Signed-off-by: Manfred Spraul
Cc: Paul E. Mc
?
- spin_unlock_wait() is an ACQUIRE.
- No memory ordering is enforced by spin_is_locked().
The patch adds this into Documentation/locking/spinlock.txt.
Signed-off-by: Manfred Spraul
---
Documentation/locking/spinlocks.txt | 9 +
1 file changed, 9 insertions(+)
diff --git a/Documentation/locking
Hi,
V5: Major restructuring based on input from Peter and Davidlohr.
As discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
The documentation was not as clear as possible, and memory
barriers were missing /
ck) instead of
spin_unlock_wait(&global_lock) and loop backward.
- use smp_store_mb() instead of a raw smp_mb()
Signed-off-by: Manfred Spraul
Cc: Pablo Neira Ayuso
Cc: netfilter-de...@vger.kernel.org
---
Question: Should I split this patch?
First a patch that uses smp_mb(), with Cc: stable.
The
On 08/29/2016 03:44 PM, Peter Zijlstra wrote:
If you add a barrier, the Changelog had better be clear. And I'm still
not entirely sure I get what exactly this barrier should do, nor why it
defaults to a full smp_mb. If what I suspect it should do, only PPC and
ARM64 need the barrier.
The barrier
and loop backward.
The last change avoids nf_conntrack_lock() looping multiple times.
Signed-off-by: Manfred Spraul
---
net/netfilter/nf_conntrack_core.c | 36 ++--
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.
it with a less expensive
barrier if this is sufficient for their hardware/spinlock
implementation.
For overriding, the same approach as for smp_mb__before_spin_lock()
is used: If smp_mb__after_spin_lock is already defined, then it is
not changed.
Signed-off-by: Manfred Spraul
---
Doc
spin_unlock() + spin_lock() do not form a full memory barrier:
(everything initialized to 0)
CPU1:
a=1;
spin_unlock(&b);
spin_lock(&c);
+ smp_mb__after_unlock_lock();
r1=d;
CPU2:
d=1;
smp_mb();
r2=a;
Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
be possible.
Signed-off-by: Manfred Spraul
---
include/
Hi,
V4: Docu/comment improvements, remove unnecessary barrier for x86.
V3: Bugfix for arm64
V2: Include updated documentation for rcutree patch
As discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
The docu
queued_spin_unlock_wait for details.
As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used
in any hotpaths, the patch does not create that define yet.
Signed-off-by: Manfred Spraul
---
arch/x86/include/asm/qspinlock.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch
Hi Peter,
On 08/29/2016 12:48 PM, Peter Zijlstra wrote:
On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote:
Right now, the spinlock machinery tries to guarantee barriers even for
unorthodox locking cases, which ends up as a constant stream of updates
as the architectures try to
kmem_cache_alloc_trace+0xe1/0x180
selinux_msg_queue_alloc_security+0x3f/0xd0
security_msg_queue_alloc+0x2e/0x40
newque+0x4e/0x150
ipcget+0x159/0x1b0
SyS_msgget+0x39/0x40
entry_SYSCALL_64_fastpath+0x13/0x8f
Manfred Spraul suggested to fix sem.c as well and Davidlo
spin_unlock() + spin_lock() do not form a full memory barrier:
(everything initialized to 0)
CPU1:
a=1;
spin_unlock(&b);
spin_lock(&c);
+ smp_mb__after_unlock_lock();
r1=d;
CPU2:
d=1;
smp_mb();
r2=a;
Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
be possible.
Signed-off-by: Manfred Spraul
---
include/
On 08/28/2016 03:43 PM, Paul E. McKenney wrote:
Without the smp_mb__after_unlock_lock(), other CPUs can observe the
write to d without seeing the write to a.
Signed-off-by: Manfred Spraul
With the upgraded commit log, I am OK with the patch below.
Done.
However, others will probably want
; r2==0 would
be possible.
Signed-off-by: Manfred Spraul
---
include/asm-generic/barrier.h | 16
kernel/rcu/tree.h | 12
2 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fe297b5
and loop backward.
The last change avoids nf_conntrack_lock() looping multiple times.
Signed-off-by: Manfred Spraul
---
net/netfilter/nf_conntrack_core.c | 36 ++--
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.
spin_unlock() + spin_lock() together do not form a full memory barrier:
a=1;
spin_unlock(&b);
spin_lock(&c);
+ smp_mb__after_unlock_lock();
d=1;
Without the smp_mb__after_unlock_lock(), other CPUs can observe the
write to d without seeing the write to a.
Signed-off-by: Manfre
(), that is part of
spin_unlock_wait()
- smp_mb__after_spin_lock() instead of a direct smp_mb().
Signed-off-by: Manfred Spraul
---
Documentation/locking/spinlocks.txt | 5 +
include/linux/spinlock.h| 12
ipc/sem.c | 16 +---
3
Hi,
as discussed before:
If a high-scalability locking scheme is built with multiple
spinlocks, then often additional memory barriers are required.
The documentation was not as clear as possible, and memory
barriers were missing / superfluous in the implementation.
Patch 1: Documentation, define
queued_spin_unlock_wait for details.
As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used
in any hotpaths, the patch does not create that define yet.
Signed-off-by: Manfred Spraul
---
arch/x86/include/asm/qspinlock.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch
Hi Paul,
On 08/10/2016 11:00 PM, Paul E. McKenney wrote:
On Wed, Aug 10, 2016 at 12:17:57PM -0700, Davidlohr Bueso wrote:
[...]
CPU0 CPU1
complex_mode = truespin_lock(l)
smp_mb() <--- do we want a smp_mb() here?
spin_unlock
Hi Boqun,
On 08/12/2016 04:47 AM, Boqun Feng wrote:
We should not be doing an smp_mb() right after a spin_lock(), makes no sense.
The
spinlock machinery should guarantee us the barriers in the unorthodox locking
cases,
such as this.
Do we really want to go there?
Trying to handle all unortho
Hi,
[adding Peter, correcting Davidlohr's mail address]
On 08/10/2016 02:05 AM, Benjamin Herrenschmidt wrote:
On Tue, 2016-08-09 at 20:52 +0200, Manfred Spraul wrote:
Hi Benjamin, Hi Michael,
regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()"):
Hi Benjamin, Hi Michael,
regarding commit 51d7d5205d33 ("powerpc: Add smp_mb() to
arch_spin_is_locked()"):
For the ipc/sem code, I would like to replace the spin_is_locked() with
a smp_load_acquire(), see:
http://git.cmpxchg.org/cgit.cgi/linux-mmots.git/tree/ipc/sem.c#n367
http://www.ozlab
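Hedged sketch of the proposed replacement (the complex_mode field name is taken from the surrounding threads; the exact condition is assumed):

    /* Sketch: instead of peeking at the global lock with spin_is_locked(),
     * read the flag with acquire semantics so that the following accesses
     * are ordered against it. */
    if (!smp_load_acquire(&sma->complex_mode)) {
            /* fast path: the per-semaphore lock is sufficient */
    }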
Hi Fabian,
On 07/29/2016 10:15 AM, Fabian Frederick wrote:
Running LTP msgsnd06 with kmemleak gives the following:
cat /sys/kernel/debug/kmemleak
unreferenced object 0x88003c0a11f8 (size 8):
comm "msgsnd06", pid 1645, jiffies 4294672526 (age 6.549s)
hex dump (first 8 bytes):
1b
Now both thread A and thread C operate on the same array, without
any synchronization.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bremen.de
Signed-off-by: Manfred Spraul
Cc:
---
include/linux/sem.h | 1 +
ipc/sem.c | 138 +++-
2 file
Hi Andrew,
On 07/14/2016 12:05 AM, Andrew Morton wrote:
On Wed, 13 Jul 2016 07:06:50 +0200 Manfred Spraul
wrote:
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes:
The patches are again vs. tip.
Patch 1 is ready for merging, Patch 2 is for review.
- Patch 1 is the patch as in
Hi Davidlohr,
On 07/13/2016 06:16 PM, Davidlohr Bueso wrote:
Manfred, shouldn't this patch be part of patch 1 (as you add the
unnecessary barriers there? Iow, can we have a single patch for all this?
Two reasons:
- patch 1 is safe for backporting, patch 2 not.
- patch 1 is safe on all archite
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
Full memory barrier are required to synchronize changes of
complex_mode and the lock operations.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bre
support SMP.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 14 --
1 file changed, 14 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 0da63c8..d7b4212 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -291,14 +291,6 @@ static void complexmode_enter(struct sem_array *sma)
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes:
The patches are again vs. tip.
Patch 1 is ready for merging, Patch 2 is for review.
- Patch 1 is the patch as in -next since January
It fixes the race that was found by Felix.
- Patch 2 removes the memory barriers that are part of the