that the slow path is used.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..d5f2710 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -280,24 +
Hi Andrew, Hi Peter, Hi Davidlohr,
New idea for ipc/sem:
The ACQUIRE from spin_lock() will continue to apply only for the load,
not for the store.
Thus: If we don't want to add arch dependencies into ipc/sem, the only
safe option is to use spin_lock()/spin_unlock() instead of spin_unlock_wait().
and then
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/sem.h | 2 +-
ipc
Hi Colin,
On 10/28/2016 08:11 PM, Colin King wrote:
From: Colin Ian King
The left shift amount is sop->sem_num % 64, which is up to 63, so
ensure we are shifting a ULL rather than a 32 bit value.
Good catch, thanks.
CoverityScan CID#1372862 "Bad bit shift
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock() for the rare codepath,
then the fast path doesn't need the smp_mb() anymore.
2) confirm that no arch needs
On 10/20/2016 02:21 AM, Andrew Morton wrote:
On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul <manf...@colorfullife.com>
wrote:
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch t
On 10/28/2016 09:29 PM, Colin Ian King wrote:
On 28/10/16 20:21, Manfred Spraul wrote:
Hi Colin,
On 10/28/2016 08:11 PM, Colin King wrote:
[...]
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1839,7 +1839,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct
sembuf __user *, tsops,
max = 0
-by: Manfred Spraul <manf...@colorfullife.com>
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
Patch V2:
- subject line updated
- documentation slightly updated
ipc/sem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -
On 12/18/2016 05:29 PM, Davidlohr Bueso wrote:
On Sun, 18 Dec 2016, Bueso wrote:
On Fri, 16 Dec 2016, Dmitry Vyukov wrote:
[ BUG: bad unlock balance detected! ]
4.9.0+ #89 Not tainted
Thanks for the report, I can reproduce the issue as of (which I
obviously
should have tested with
locks
the global spinlock.
The fix is trivial: Use the return value from sem_lock.
Reported-by: dvyu...@google.com
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
ipc/sem.
Hi Ingo,
On 07/07/2017 10:31 AM, Ingo Molnar wrote:
There's another, probably just as significant advantage:
queued_spin_unlock_wait()
is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
any bigger system this should make a very measurable difference - if
wrote:
* Manfred Spraul <manf...@colorfullife.com> wrote:
Hi Ingo,
On 07/07/2017 10:31 AM, Ingo Molnar wrote:
There's another, probably just as significant advantage:
queued_spin_unlock_wait()
is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
any b
Hi Paul,
On 07/06/2017 01:31 AM, Paul E. McKenney wrote:
From: Manfred Spraul <manf...@colorfullife.com>
As we want to remove spin_unlock_wait() and replace it with explicit
spin_lock()/spin_unlock() calls, we can use this to simplify the
locking.
In addition:
- Reading nf_conntrack_loc
slow path should not hurt:
Due to the hysteresis code, the slow path is at least factor 10
rarer than it was before.
Especially: Who is able to test it?
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Alan Stern <st...@rowland.harvard.edu>
immediately by spin_unlock().
This should be safe from a performance perspective because exit_sem()
is rarely invoked in production.
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Davidlohr Bueso <d...@stgolabs.net>
On 07/03/2017 07:14 PM, Paul E. McKenney wrote:
On Mon, Jul 03, 2017 at 10:39:49AM -0400, Alan Stern wrote:
On Sat, 1 Jul 2017, Manfred Spraul wrote:
As we want to remove spin_unlock_wait() and replace it with explicit
spin_lock()/spin_unlock() calls, we can use this to simplify the
locking
00:00:00 2001
From: Manfred Spraul <manf...@colorfullife.com>
Date: Sat, 13 May 2017 18:02:20 +0200
Subject: [PATCH] ipc/sem.c: remove sem_base, embed struct sem
sma->sem_base is initialized with
sma->sem_base = (struct sem *) &sma[1];
The current code has four problems:
- There is an u
are
cacheline aligned.
In addition, it reduces the number of casts, instead most codepaths can
use container_of.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 22 ++
ipc/sem.c
Hi all,
could you check the following three patches?
As explained, I try to combine the changes for static analyzers and for
the randstruct gcc plugin with cleanups.
0001-ipc-sem.c-remove-sem_base-embed-struct-sem.patch:
sem_base is not needed. Instead of improving the casts,
c code analysis.
- This is a cast between different non-void types, which the future
randstruct GCC plugin warns on.
And, as bonus, the code size gets smaller:
Before:
0 .text 3770
After:
0 .text 374e
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
i
that there is
a comment in include/linux/sem.h and man semctl(2) as well.
So: Correct wrong comments.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: linux-...@vger.kernel.org
---
include/linux/sem.h | 2 +-
include/uapi/linux/sem.h | 2 +-
2 files changed, 2 insertions
Hi Davidlohr,
On 06/08/2017 10:09 PM, Davidlohr Bueso wrote:
Yes, this would be a prerequisite for ipc; which I initially thought
didn't
take a performance hit.
Did you see a regression for ipc?
The fast paths don't use the refcount; it is only used for rare situations:
- GETALL, SETALL
Hi Valdis,
On 05/20/2017 10:18 PM, valdis.kletni...@vt.edu wrote:
Seeing problems with programs that use semaphores. The one
that I'm getting bit by is jackd. strace says:
getuid() = 967
semget(0x282929, 0, 000) = 229376
semop(229376, [{0, -1,
On 05/27/2017 09:41 PM, Kees Cook wrote:
On Mon, Feb 20, 2017 at 3:29 AM, Elena Reshetova
wrote:
refcount_t type and corresponding API should be
used instead of atomic_t when the variable is used as
a reference counter. This allows to avoid accidental
refcounter
Hi Kees,
On 05/25/2017 09:45 PM, Kees Cook wrote:
On Thu, May 25, 2017 at 11:50 AM, Manfred Spraul
<manf...@colorfullife.com> wrote:
Hi all,
Updated series. The series got longer, because I merged all patches
from Kees.
Main changes are:
- sems[] instead of sem[0].
- Immediate
are
cacheline aligned.
In addition, it reduces the number of casts, instead most codepaths can
use container_of.
To simplify code, the ipc_rcu_alloc initializes the allocation to 0.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/ipc.h | 3 +++
ipc/msg.c
From: Kees Cook <keesc...@chromium.org>
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for shmid_kernel structure
layout to be randomized in the future.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred
From: Kees Cook <keesc...@chromium.org>
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for msg_queue structure
layout to be randomized in the future.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred
From: Kees Cook <keesc...@chromium.org>
There are no more callers of ipc_rcu_free(), so remove it.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/util.c | 7 ---
ipc/util.h | 1 -
2 files changed, 8 deleti
;
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/msg.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 0ed7dae..25d43e2 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -95,13 +95,18 @@ static inline void msg_rmid(struct
delay for
security_sem_free().
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Kees Cook <keesc...@chromium.org>
---
ipc/shm.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index c9f1f30..cb1d97e 100644
--- a/ipc/shm.c
+++ b/i
ed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Kees Cook <keesc...@chromium.org>
---
ipc/sem.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 445a5b5..2b2ed56 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -479,7 +479,
Hi all,
Updated series. The series got longer, because I merged all patches
from Kees.
Main changes are:
- sems[] instead of sem[0].
- Immediately use BUILD_BUG_ON()
- Immediately move the memset() to avoid crashing with SEM_UNDO.
- Use rcu for every security_xx_free(), even if ipc_addid() was
Now that ipc_rcu_alloc() and ipc_rcu_free() are removed,
document when it is valid to use ipc_getref() and ipc_putref().
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/util.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ipc/util.h b/ipc/util.h
index 77336c2b..c
From: Kees Cook <keesc...@chromium.org>
The remaining users of __sem_free() can simply call kvfree() instead for
better readability.
Signed-off-by: Kees Cook <keesc...@chromium.org>
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_sem_alloc()]
Signed-off-by: Ma
_queue_alloc()]
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/msg.c | 24
1 file changed, 4 insertions(+), 20 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 770342e..5b25e07 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -95,29 +95,13 @@ static inline vo
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Kees Cook <keesc...@chromium.org>
---
ipc/msg.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 10094a7..cd90bfd 100644
--- a/ipc/msg.c
+++ b/ipc/msg.c
@@ -132,7 +132,
a successful security_shm_alloc()]
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/shm.c | 24
1 file changed, 4 insertions(+), 20 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index b85db5a..ec5688e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -172,1
From: Kees Cook <keesc...@chromium.org>
Only after ipc_addid() has succeeded will refcounting be used, so
move initialization into ipc_addid() and remove from open-coded
*_alloc() routines.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred Spraul <manf...@c
omium.org>
[manf...@colorfullife.com: Rediff, because the memset was
temporarily inside ipc_rcu_alloc()]
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 25 -
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/s
;
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 484ccf8..a04c4d6 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -258,13 +258,18 @@ static void merge_queues(str
From: Kees Cook <keesc...@chromium.org>
No callers remain for ipc_rcu_alloc(). Drop the function.
Signed-off-by: Kees Cook <keesc...@chromium.org>
[manf...@colorfullife.com: Rediff because the memset was
temporarily inside ipc_rcu_free()]
Signed-off-by: Manfred Spraul <manf...@c
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 8 +---
ipc/util.c | 27 +++
ipc/util.h | 6 --
3 files changed, 8 insertions(+), 33 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index bdff6d9..484ccf8 100644
--- a/ipc/sem.c
+++
;
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/shm.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 2eb85bd..77e1bff 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -172,6 +172,11 @@ static inline void shm_lock_by_ptr(s
again. No idea why this causes indigestion, there's probably something subtly
wrong here
commit 337f43326737b5eb28eb13f43c27a5788da0f913
Author: Manfred Spraul <manf...@colorfullife.com>
Date: Fri May 19 07:39:23 2017 +1000
ipc: merge ipc_rcu and kern_ipc_perm
ipc has two
Hi,
On 12/01/2017 06:20 PM, Davidlohr Bueso wrote:
On Thu, 30 Nov 2017, Philippe Mikoyan wrote:
As described in the title, this patch fixes id_ds inconsistency
when ctl_stat runs concurrently with some ds-changing function,
e.g. shmat, msgsnd or whatever.
For instance, if shmctl(IPC_STAT) is
Hi,
On 12/23/2017 08:33 AM, syzbot wrote:
Hello,
syzkaller hit the following crash on
6084b576dca2e898f5c101baef151f7bfdbb606d
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
compiler: gcc (GCC) 7.1.1 20170620
.config is attached
Raw console output is attached.
Hi Arnd,
On 01/03/2018 12:15 AM, Arnd Bergmann wrote:
2 ipc/sem.c:377:6: warning: '___p1' may be used uninitialized in this function
[-Wmaybe-uninitialized]
This code was last touched in 3.16 by the backport of commit
5864a2fd3088 ("ipc/sem.c: fix complex_count vs. simple op race")
The
Hello Davidlohr,
On 07/17/2018 07:26 AM, Davidlohr Bueso wrote:
In order for protection against load/store tearing to work, _all_ accesses to
the variable in question need to be done through the READ_ONCE() and
WRITE_ONCE() macros. Ensure everyone does so for the q->status
variable in semtimedop().
What is the background of
Hi,
Dmitry convinced me that I should properly review the initialization
of new ipc objects, and I found another issue.
The series corrects 3 issues with ipc_addid(), and also renames
two functions and corrects a wrong comment.
0001-ipc-reorganize-initialization-of-kern_ipc_perm.id:
not fulfill that, i.e. more bugfixes are required.
Reported-by: syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
Cc: Michael Kerrisk
Signed-off-by: Manfred Spraul
---
Documentation/sysctl/kernel.txt | 3
=true may disappear at
the end of the next rcu grace period.
Signed-off-by: Manfred Spraul
---
ipc/util.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index 4f2db913acf9..776a9ce2905f 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -646,8 +646,8
a pointer in the idr, without
acquiring the object lock.
- The caller is responsible for locking.
- _check means that some checks are made.
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 2 +-
ipc/sem.c | 2 +-
ipc/shm.c | 2 +-
ipc/util.c | 6 +++---
ipc/util.h | 2 +-
5 files changed, 7 insert
-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
---
ipc/msg.c | 2 +-
ipc/sem.c | 2 +-
ipc/shm.c | 2 ++
ipc/util.c | 12 ++--
4 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 829c2062ded4..5bf5cb8017ea 100644
--- a/ipc
that it does not check the sequence counter.
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 4 ++--
ipc/util.c | 10 ++
ipc/util.h | 2 +-
3 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 426ba1039a7b..cd8655c7bb77 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
of
syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com:
syzbot found an issue with kern_ipc_perm.seq
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 19 ++-
ipc/sem.c | 18 +-
ipc/shm.c
Hi Dmitry,
On 07/05/2018 10:36 AM, Dmitry Vyukov wrote:
[...]
Hi Manfred,
The series looks like a significant improvement to me. Thanks!
I feel that this code can be further simplified (unless I am missing
something here). Please take a look at this version:
The ipc code uses the equivalent of
    rcu_read_lock();
    kfree_rcu(a, rcu);
    if (a->deleted) {
        rcu_read_unlock();
        return FAILURE;
    }
    <...>
Is this safe, or is dereferencing "a" after having called call_rcu()
a use-after-free?
the newly created object has
temporarily sequence number 0 as well.
--
Manfred
From 4791e604dcb618ed7ea1f42b2f6ca9cfe3c113c3 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Wed, 4 Jul 2018 10:04:49 +0200
Subject: [PATCH] ipc: fix races with kern_ipc_perm.id and .seq
ipc_addid(
Hi Davidlohr,
On 07/09/2018 10:09 PM, Davidlohr Bueso wrote:
On Mon, 09 Jul 2018, Manfred Spraul wrote:
@Davidlohr:
Please double check that I have taken the correct patches, and
that I didn't break anything.
Everything seems ok.
Patch 8 had an alternative patch that didn't change nowarn
, we
can simply move the logic into shm_lock() and get rid of the
function altogether.
[changelog mostly by manfred]
Signed-off-by: Davidlohr Bueso
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 29 +++--
ipc/util.c | 36
ipc/util.h
resizing (when more memory becomes available) is the least
of our problems.
Signed-off-by: Davidlohr Bueso
Acked-by: Herbert Xu
Signed-off-by: Manfred Spraul
---
lib/rhashtable.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
a
positive consequence as for the same reasons we want nowarn semantics in
bucket_table_alloc().
Signed-off-by: Davidlohr Bueso
Acked-by: Michal Hocko
(commit id extended to 12 digits, line wraps updated)
Signed-off-by: Manfred Spraul
---
lib/rhashtable.c | 7 ++-
1 file changed, 2 insert
to the finding of
syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com:
syzbot found an issue with kern_ipc_perm.seq
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Reviewed-by: Davidlohr Bueso
---
ipc/msg.c | 19 ++-
ipc/sem.c | 18 +-
ipc/shm.c | 19
-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
---
ipc/msg.c | 2 +-
ipc/sem.c | 2 +-
ipc/shm.c | 2 ++
ipc/util.c | 10 --
4 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
index 49358f474fc9..38119c1f0da3 100644
--- a/ipc
=true may disappear at
the end of the next rcu grace period.
Signed-off-by: Manfred Spraul
Reviewed-by: Davidlohr Bueso
---
ipc/util.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index cffd12240f67..5cc37066e659 100644
--- a/ipc/util.c
+++ b/ipc
a pointer in the idr, without
acquiring the object lock.
- The caller is responsible for locking.
- _check means that the sequence number is checked.
Signed-off-by: Manfred Spraul
Reviewed-by: Davidlohr Bueso
---
ipc/msg.c | 2 +-
ipc/sem.c | 2 +-
ipc/shm.c | 2 +-
ipc/util.c | 8
ipc/
From: Davidlohr Bueso
Now that we know that rhashtable_init() will not fail, we
can get rid of a lot of the unnecessary cleanup paths when
the call errored out.
Signed-off-by: Davidlohr Bueso
(variable name added to util.h to resolve checkpatch warning)
Signed-off-by: Manfred Spraul
---
ipc
e variable name.
seq: sequence number, to avoid quick collisions of the user space id
key: user space key, used for the rhash tree
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
---
include/linux/ipc_namespace.h | 2 +-
ipc/msg.c | 6 +++---
ipc/sem.c |
e
isn't found, and can therefore call into ipcget() callbacks.
Now that rhashtable initialization cannot fail, we can properly
get rid of the hack altogether.
Signed-off-by: Davidlohr Bueso
(commit id extended to 12 digits)
Signed-off-by: Manfred Spraul
---
include/linux/ipc_namespace.h | 1
ersions.
Signed-off-by: Manfred Spraul
---
ipc/util.c | 2 +-
ipc/util.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index fb69c911655a..6306eb25180b 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -461,7 +461,7 @@ void ipc_set_key_private(struct
Hi,
I have added all review findings and rediffed the patches
- patch #1-#6: Fix syzcall findings & further race cleanups
patch #1 has an updated subject/comment
patch #2 contains the squashed result of Dmitrys change and my
own updates.
patch #6 is replaced
of ipc_addid()
do not fulfill that, i.e. more bugfixes are required.
The patch is a squash of a patch from Dmitry and my own changes.
Reported-by: syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
Cc: Michael
Hi,
I have merged the patches from Dmitry, Davidlohr and myself:
- patch #1-#6: Fix syzcall findings & further race cleanups
- patch #7: Cleanup from Dmitry for ipc_idr_alloc.
- patch #8-#11: rhashtable improvement from Davidlohr
- patch #12: Another cleanup for ipc_idr_alloc.
@Davidlohr:
not fulfill that, i.e. more bugfixes are required.
Reported-by: syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
Cc: Michael Kerrisk
---
Documentation/sysctl/kernel.txt | 3 ++-
ipc/util.c
=true may disappear at
the end of the next rcu grace period.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
---
ipc/util.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index bbb1ce212a0d..8133f10832a9 100644
--- a/ipc/util.c
+++ b/ipc/util.c
that it does not check the sequence counter.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
---
ipc/shm.c | 4 ++--
ipc/util.c | 10 ++
ipc/util.h | 2 +-
3 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index 426ba1039a7b..cd8655c7bb77 100644
--- a/ipc
From: Dmitry Vyukov
ipc_idr_alloc refactoring
Signed-off-by: Dmitry Vyukov
Signed-off-by: Manfred Spraul
---
ipc/util.c | 51 +--
1 file changed, 13 insertions(+), 38 deletions(-)
diff --git a/ipc/util.c b/ipc/util.c
index 8bc166bb4981
of
syzbot+2827ef6b3385deb07...@syzkaller.appspotmail.com:
syzbot found an issue with kern_ipc_perm.seq
Signed-off-by: Manfred Spraul
Cc: Dmitry Vyukov
Cc: Kees Cook
Cc: Davidlohr Bueso
---
ipc/msg.c | 19 ++-
ipc/sem.c | 18 +-
ipc/shm.c | 19 ++-
3
a pointer in the idr, without
acquiring the object lock.
- The caller is responsible for locking.
- _check means that the sequence number is checked.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
---
ipc/msg.c | 2 +-
ipc/sem.c | 2 +-
ipc/shm.c | 2 +-
ipc/util.c | 8
ipc/util.h | 2 +
If idr_alloc within ipc_idr_alloc fails, then the return value (-ENOSPC)
is used to calculate new->id.
Technically, this is not a bug, because new->id is never accessed.
But: Clean it up anyways: On error, just return, do not set new->id.
And improve the documentation.
Signed-off-by
Hello Dmitry,
On 07/09/2018 07:05 PM, Dmitry Vyukov wrote:
On Mon, Jul 9, 2018 at 5:10 PM, Manfred Spraul wrote:
If idr_alloc within ipc_idr_alloc fails, then the return value (-ENOSPC)
is used to calculate new->id.
Technically, this is not a bug, because new->id is never ac
Hello Dmitry,
On 07/04/2018 12:03 PM, Dmitry Vyukov wrote:
On Wed, Jul 4, 2018 at 11:18 AM, Manfred Spraul
wrote:
There are 2 relevant values: kern_ipc_perm.id and kern_ipc_perm.seq.
For kern_ipc_perm.id, it is possible to move the access to the codepath that
hold the lock
Hello Matthew,
On 03/29/2018 12:56 PM, Matthew Wilcox wrote:
On Thu, Mar 29, 2018 at 10:47:45AM +0200, Manfred Spraul wrote:
This can be implemented trivially with the current code
using idr_alloc_cyclic.
Is there a performance impact?
Right now, the idr tree is only large if there are lots
Hello together,
On 03/29/2018 04:14 AM, Davidlohr Bueso wrote:
Cc'ing mtk, Manfred and linux-api.
See below.
On Thu, 15 Mar 2018, Waiman Long wrote:
On 03/15/2018 03:00 PM, Eric W. Biederman wrote:
Waiman Long writes:
On 03/14/2018 08:49 PM, Eric W. Biederman wrote:
Hi,
On 03/30/2018 09:09 PM, Davidlohr Bueso wrote:
On Wed, 28 Mar 2018, Davidlohr Bueso wrote:
On Fri, 23 Mar 2018, Eric W. Biederman wrote:
Today the last process to update a semaphore is remembered and
reported in the pid namespace of that process. If there are processes
in any other pid
On 10/2/18 8:27 PM, Waiman Long wrote:
On 10/02/2018 12:19 PM, Manfred Spraul wrote:
A bit related to the patch series that increases IPC_MNI:
(User space) id reuse creates the risk of data corruption:
Process A: calls ipc function
Process A: sleeps just at the beginning of the syscall
Process
ids.
Signed-off-by: Manfred Spraul
---
Open questions:
- Is there a significant performance advantage, especially if
there are many ipc ids?
- Over how many ids should the code cycle always?
- Further review remarks?
ipc/util.c | 22 +-
1 file changed, 21 insertions(+), 1
Hi Dmitry,
On 11/30/18 6:58 PM, Dmitry Vyukov wrote:
On Thu, Nov 29, 2018 at 9:13 AM, Manfred Spraul
wrote:
Hello together,
On 11/27/18 4:52 PM, syzbot wrote:
Hello,
syzbot found the following crash on:
HEAD commit:e195ca6cb6f2 Merge branch 'for-linus' of git://git.kernel...
git tree
/drivers/net/winbond-840.c Sat May 12 11:59:43 2001
@@ -32,10 +32,13 @@
synchronize tx_q_bytes
software reset in tx_timeout
Copyright (C) 2000 Manfred Spraul
+ * further cleanups
+ Copyright (c) 2001 Manfred Spraul
Jeff Garzik wrote:
>
> Manfred Spraul wrote:
> > @@ -437,9 +439,9 @@
> > if (option > 0) {
> > if (option & 0x200)
> > np->full_duplex = 1;
> > - np->default_port = opti