--
Manfred
From fe2f257b1950a19bf5c6f67e71aa25c2f13bcdc3 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Sun, 24 May 2020 14:47:31 +0200
Subject: [PATCH 2/2] ipc/msg.c: Handle case of senders not enqueuing the
message
The patch "ipc/msg.c: wake up senders until there is
cacheline aligned.
In addition, it reduces the number of casts, instead most codepaths can
use container_of.
To simplify the code, ipc_rcu_alloc() initializes the allocation to 0.
Signed-off-by: Manfred Spraul
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 19 +++
ipc
From: Kees Cook
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for shmid_kernel structure
layout to be randomized in the future.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 18 ++
1 file
From: Kees Cook
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for msg_queue structure
layout to be randomized in the future.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 18 ++
1 file changed
From: Kees Cook
The remaining users of __sem_free() can simply call kvfree() instead for
better readability.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_sem_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 9 ++---
1 file changed
From: Kees Cook
There is nothing special about the msg_alloc/free routines any more,
so remove them to make code more readable.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_msg_queue_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 24
Loosely based on a patch from Kees Cook :
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.
The difference is that call_rcu() is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_msg_queue_free().
Signed-off-by: Manfred Spraul
From: Kees Cook
There is nothing special about the shm_alloc/free routines any more,
so remove them to make code more readable.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff, to continue to keep rcu for
free calls after a successful security_shm_alloc()]
Signed-off-by: Manfred
Now that ipc_rcu_alloc() and ipc_rcu_free() are removed,
document when it is valid to use ipc_getref() and ipc_putref().
Signed-off-by: Manfred Spraul
---
ipc/util.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ipc/util.h b/ipc/util.h
index 77336c2b..c692010 100644
--- a/ipc/util.h
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 9
From: Kees Cook
No callers remain for ipc_rcu_alloc(). Drop the function.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff because the memset was
temporarily inside ipc_rcu_free()]
Signed-off-by: Manfred Spraul
---
ipc/util.c | 21 -
ipc/util.h | 3 ---
2
From: Kees Cook
Only after ipc_addid() has succeeded will refcounting be used, so
move initialization into ipc_addid() and remove from open-coded
*_alloc() routines.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 2 --
ipc/sem.c | 1 -
ipc/shm.c | 2 --
ipc/util.c
memset was
temporarily inside ipc_rcu_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 25 -
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index a04c4d6..445a5b5 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -451,6 +451,25 @@ static
-off-by: Manfred Spraul
Cc: Kees Cook
---
ipc/shm.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index c9f1f30..cb1d97e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -548,7 +548,6 @@ static int newseg(struct ipc_namespace *ns, struct
ipc_params
Loosely based on a patch from Kees Cook :
- id and retval can be merged
- if ipc_addid() fails, then use call_rcu() directly.
The difference is that call_rcu() is used for failed ipc_addid() calls,
to continue to guarantee an rcu delay for security_sem_free().
Signed-off-by: Manfred Spraul
Cc
From: Kees Cook
There are no more callers of ipc_rcu_free(), so remove it.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/util.c | 7 ---
ipc/util.h | 1 -
2 files changed, 8 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index dd73feb..556884b 100644
--- a/ipc/util.c
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 9
From: Kees Cook
Avoid using ipc_rcu_free, since it just re-finds the original structure
pointer. For the pre-list-init failure path, there is no RCU needed,
since it was just allocated. It can be directly freed.
Signed-off-by: Kees Cook
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 9
that there is
a comment in include/linux/sem.h and man semctl(2) as well.
So: Correct wrong comments.
Signed-off-by: Manfred Spraul
Cc: linux-...@vger.kernel.org
---
include/linux/sem.h | 2 +-
include/uapi/linux/sem.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a
The only users of ipc_alloc() were ipc_rcu_alloc() and the on-heap
sem_io fall-back memory. Better to just open-code these to make things
easier to read.
Signed-off-by: Kees Cook
[manf...@colorfullife.com: Rediff due to inclusion of memset() into
ipc_rcu_alloc().]
Signed-off-by: Manfred Spraul
static code analysis.
- This is a cast between different non-void types, which the future
randstruct GCC plugin warns on.
And, as bonus, the code size gets smaller:
Before:
0 .text 3770
After:
0 .text 374e
Signed-off-by: Manfred Spraul
---
include/linux/
Hi all,
Updated series. The series got longer, because I merged all patches
from Kees.
Main changes are:
- sems[] instead of sem[0].
- Immediately use BUILD_BUG_ON()
- Immediately move the memset() to avoid crashing with SEM_UNDO.
- Use rcu for every security_xx_free(), even if ipc_addid() was no
Hi Alan,
On 07/08/2017 06:21 PM, Alan Stern wrote:
Pardon me for barging in, but I found this whole interchange extremely
confusing...
On Sat, 8 Jul 2017, Ingo Molnar wrote:
* Paul E. McKenney wrote:
On Sat, Jul 08, 2017 at 10:35:43AM +0200, Ingo Molnar wrote:
* Manfred Spraul wrote
the hysteresis code, the slow path is at least factor 10
rarer than it was before.
Especially: Who is able to test it?
Signed-off-by: Manfred Spraul
Cc: Alan Stern
---
ipc/sem.c | 33 +++--
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/ipc/sem.
Hi Paul,
On 07/06/2017 01:31 AM, Paul E. McKenney wrote:
From: Manfred Spraul
As we want to remove spin_unlock_wait() and replace it with explicit
spin_lock()/spin_unlock() calls, we can use this to simplify the
locking.
In addition:
- Reading nf_conntrack_locks_all needs ACQUIRE memory
Hi Ingo,
On 07/07/2017 10:31 AM, Ingo Molnar wrote:
There's another, probably just as significant advantage:
queued_spin_unlock_wait()
is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
any bigger system this should make a very measurable difference - if
spin_unloc
Hi,
On 12/23/2017 08:33 AM, syzbot wrote:
Hello,
syzkaller hit the following crash on
6084b576dca2e898f5c101baef151f7bfdbb606d
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
compiler: gcc (GCC) 7.1.1 20170620
.config is attached
Raw console output is attached.
Unfor
Hi all,
could you check the following three patches?
As explained, I try to combine the changes for static analyzers and for
the randstruct gcc plugin with cleanups.
0001-ipc-sem.c-remove-sem_base-embed-struct-sem.patch:
sem_base is not needed. Instead of improving the casts,
rem
cacheline aligned.
In addition, it reduces the number of casts, instead most codepaths can
use container_of.
Signed-off-by: Manfred Spraul
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 22 ++
ipc/sem.c | 35 ---
ipc/shm.c
Hi Valdis,
On 05/20/2017 10:18 PM, valdis.kletni...@vt.edu wrote:
Seeing problems with programs that use semaphores. The one
that I'm getting bit by is jackd. strace says:
getuid() = 967
semget(0x282929, 0, 000) = 229376
semop(229376, [{0, -1, SEM
why this causes indigestion, there's probably something subtly
wrong here
commit 337f43326737b5eb28eb13f43c27a5788da0f913
Author: Manfred Spraul
Date: Fri May 19 07:39:23 2017 +1000
ipc: merge ipc_rcu and kern_ipc_perm
ipc has two management structures that exist for eve
Hi Davidlohr,
On 06/08/2017 10:09 PM, Davidlohr Bueso wrote:
Yes, this would be a prerequisite for ipc; which I initially thought
didn't
take a performance hit.
Did you see a regression for ipc?
The fast paths don't use the refcount, it is only used for rare situations:
- GETALL, SETALL fo
lowed immediately by spin_unlock().
This should be safe from a performance perspective because exit_sem()
is rarely invoked in production.
Signed-off-by: Paul E. McKenney
Cc: Andrew Morton
Cc: Davidlohr Bueso
Cc: Manfred Spraul
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Alan Stern
Cc: Andrea
On 07/03/2017 07:14 PM, Paul E. McKenney wrote:
On Mon, Jul 03, 2017 at 10:39:49AM -0400, Alan Stern wrote:
On Sat, 1 Jul 2017, Manfred Spraul wrote:
As we want to remove spin_unlock_wait() and replace it with explicit
spin_lock()/spin_unlock() calls, we can use this to simplify the
locking
Hi,
On 12/01/2017 06:20 PM, Davidlohr Bueso wrote:
On Thu, 30 Nov 2017, Philippe Mikoyan wrote:
As described in the title, this patch fixes id_ds inconsistency
when ctl_stat runs concurrently with some ds-changing function,
e.g. shmat, msgsnd or whatever.
For instance, if shmctl(IPC_STAT) is
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
This is the main workhorse that deals with semop user calls
such that the waitforzero or semval update operations, on the
set, can complete or not as the sma currently stands. Currently,
the set is iterated twice (setting semval, then
the slow path is used.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..d5f2710 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -280,24 +280,13 @@ static void complexmode_enter
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock() for the rare codepath,
then the fast path doesn't need the smp_mb() anymore.
2) confirm that no arch needs th
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul
---
include/linux/sem.h | 2 +-
ipc/sem.c
On 10/20/2016 02:21 AM, Andrew Morton wrote:
On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul
wrote:
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock() for
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote:
Since
Hi Peter,
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote
Hi,
On 09/06/2016 08:42 AM, kernel test robot wrote:
FYI, we noticed a -8.9% regression of aim9.shared_memory.ops_per_sec due to
commit:
commit 99ac0dfffcfb34326a880e90e06c30a2a882c692 ("ipc/sem.c: fix complex_count vs.
simple op race")
https://git.kernel.org/pub/scm/linux/kernel/git/next/lin
Hi Paul,
On 08/10/2016 11:00 PM, Paul E. McKenney wrote:
On Wed, Aug 10, 2016 at 12:17:57PM -0700, Davidlohr Bueso wrote:
[...]
CPU0                          CPU1
complex_mode = true           spin_lock(l)
smp_mb()   <--- do we want a smp_mb() here?
                              spin_unlock
Hi Andrew, Hi Peter, Hi Davidlohr,
New idea for ipc/sem:
The ACQUIRE from spin_lock() will continue to apply only for the load,
not for the store.
Thus: If we don't want to add arch dependencies into ipc/sem, the only
safe option is to use spin_lock()/spin_unlock() instead of spin_unlock_wait().
Hi Davidlohr,
Just as with msgrcv (along with the rest of sysvipc since a few years
ago), perform the security checks without holding the ipc object lock.
Thinking about it: isn't this wrong?
CPU1:
* msgrcv()
* ipcperms()
CPU2:
* msgctl(), change permissions
** msgctl() returns, new perm
Hi Davidlohr,
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
@@ -1933,22 +1823,32 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf
__user *, tsops,
queue.alter = alter;
error = perform_atomic_semop(sma, &queue);
- if (error == 0) {
- /* If the operatio
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
... saves some LoC and looks cleaner than re-implementing the
calls.
Signed-off-by: Davidlohr Bueso
Acked-by: Manfred Spraul
--
Manfred
What about the attached dup detection?
--
Manfred
From 140340a358dbf66b3bc6f848ca9b860e3e957e84 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Mon, 19 Sep 2016 06:25:20 +0200
Subject: [PATCH] ipc/sem: Update duplicate sop detection
The duplicated sop detection can be improved:
- use uint64_t instead of unsigned lo
Hi Boqun,
On 08/12/2016 04:47 AM, Boqun Feng wrote:
We should not be doing an smp_mb() right after a spin_lock(), makes no sense.
The
spinlock machinery should guarantee us the barriers in the unorthodox locking
cases,
such as this.
Do we really want to go there?
Trying to handle all unortho
Hi Michal,
On 12/19/2017 10:48 AM, Michal Hocko wrote:
Hi,
we have been contacted by our partner about the following permission
discrepancy
1. Create a shared memory segment with permissions 600 with user A using
shmget(key, 1024, 0600 | IPC_CREAT)
2. ipcs -m should return an output as follo