On 06/28/2016 07:27 AM, Davidlohr Bueso wrote:
On Thu, 23 Jun 2016, Manfred Spraul wrote:
What I'm not sure yet is if smp_load_acquire() is sufficient:
Thread A:
if (!READ_ONCE(sma->complex_mode)) {
The code is test-and-test: there are no barrier requirements for the first test.
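A userspace illustration of that test-and-test shape, with C11 atomics (names are mine, not the kernel's): the first, unordered read only filters out the slow path; correctness rests on the second, acquire-ordered read done under the per-semaphore lock.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool complex_mode;   /* set while the array-global lock mode is active */

/* First test: a plain relaxed load with no ordering requirement.
 * A stale value is harmless -- it is re-checked below -- so no
 * barrier is needed here. */
bool fast_path_likely(void)
{
    return !atomic_load_explicit(&complex_mode, memory_order_relaxed);
}

/* Second test, performed after the per-semaphore lock is taken:
 * this load needs acquire semantics so that later reads cannot be
 * hoisted above it. */
bool fast_path_confirmed(void)
{
    return !atomic_load_explicit(&complex_mode, memory_order_acquire);
}
```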
Yeah,
ck operations.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@informatik.uni-bremen.de
Signed-off-by: Manfred Spraul
Cc:
---
include/linux/sem.h | 1 +
ipc/sem.c | 122 ++--
2 files changed, 71 inserti
lock to the per
semaphore locks. This reduces how often the per-semaphore locks must
be scanned.
Passed stress testing with sem-scalebench.
Signed-off-by: Manfred Spraul
---
include/linux/sem.h | 2 +-
ipc/sem.c | 89 +
2 files
Hi Andrew, Hi Peter,
next version of the sem_lock() fixes / improvement:
The patches are now vs. tip.
Patch 1 is ready for merging, patch 2 is new and for discussion.
Patch 1 fixes the race that was found by Felix.
It also adds smp_mb() to fully synchronize
WRITE_ONCE(status, 1);
On 06/21/2016 10:29 PM, Davidlohr Bueso wrote:
On Sat, 18 Jun 2016, Manfred Spraul wrote:
sysv sem has two lock modes: One with per-semaphore locks, one lock mode
with a single big lock for the whole array.
When switching from the per-semaphore locks to the big lock, all
per-semaphore locks
On 06/21/2016 02:30 AM, Davidlohr Bueso wrote:
On Sat, 18 Jun 2016, Manfred Spraul wrote:
diff --git a/include/linux/sem.h b/include/linux/sem.h
index 976ce3a..d0efd6e 100644
--- a/include/linux/sem.h
+++ b/include/linux/sem.h
@@ -21,6 +21,7 @@ struct sem_array {
struct list_head
On 06/21/2016 01:04 AM, Andrew Morton wrote:
On Sat, 18 Jun 2016 22:02:21 +0200 Manfred Spraul
wrote:
Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple
lock to the per
semaphore locks. This reduces how often the per-semaphore locks must
be scanned.
Passed stress testing with sem-scalebench.
Signed-off-by: Manfred Spraul
---
include/linux/sem.h | 2 +-
ipc/sem.c | 91 -
2 files
ock (no array scan, complex_count==1)
- wakes up Thread B.
- decrements complex_count
Thread A:
- does the complex_count test
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
Fixes: 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()")
Reported-by: fel...@inf
Hi,
On 06/15/2016 07:23 AM, Stephen Rothwell wrote:
Hi Andrew,
Today's linux-next merge of the akpm-current tree got a conflict in:
ipc/sem.c
between commit:
33ac279677dc ("locking/barriers: Introduce smp_acquire__after_ctrl_dep()")
from the tip tree and commit:
a1c58ea067cb ("ipc
Hi Peter,
On 05/20/2016 06:04 PM, Peter Zijlstra wrote:
On Fri, May 20, 2016 at 05:21:49PM +0200, Peter Zijlstra wrote:
Let me write a patch..
OK, something like the below then.. lemme go build that and verify that
too fixes things.
---
Subject: locking,qspinlock: Fix spin_is_locked() and s
On 05/21/2016 09:37 AM, Peter Zijlstra wrote:
On Fri, May 20, 2016 at 05:48:39PM -0700, Davidlohr Bueso wrote:
As opposed to spin_is_locked(), spin_unlock_wait() is perhaps more tempting
to use for locking correctness. For example, taking a look at
nf_conntrack_all_lock(),
it too likes to get s
Hi,
On 02/26/2016 01:21 PM, PrasannaKumar Muralidharan wrote:
From: PrasannaKumar Muralidharan
As described in bug #112271 (bugzilla.kernel.org/show_bug.cgi?id=112271)
don't set sempid in semctl syscall. Set sempid only when semop is called.
I disagree with the bug report:
sempid is (and alw
Hi Ying,
On 02/14/2016 07:41 AM, kernel test robot wrote:
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 0050ee059f7fc86b1df2527aaa14ed5dc72f9973 ("ipc/msg: increase MSGMNI, remove
scaling")
LTP_syscalls: msgctl11: "Not en
On 01/04/2016 02:02 PM, Davidlohr Bueso wrote:
On Sat, 02 Jan 2016, Manfred Spraul wrote:
Commit 6d07b68ce16a ("ipc/sem.c: optimize sem_lock()") introduced a
race:
sem_lock has a fast path that allows parallel simple operations.
There are two reasons why a simple operation can
Hi Dmitry,
On 01/02/2016 01:19 PM, Dmitry Vyukov wrote:
On Sat, Jan 2, 2016 at 12:33 PM, Manfred Spraul
wrote:
Hi Dmitry,
shm locking differs too much from msg/sem locking, I never looked at it in
depth, so I'm not able to perform a proper review.
Except for the obvious: Races that c
ock (no array scan, complex_count==1)
- wakes up Thread B.
- decrements complex_count
Thread A:
- does the complex_count test
Bug:
Now both thread A and thread C operate on the same array, without
any synchronization.
Reported-by: fel...@informatik.uni-bremen.de
Signed-off-by: Manfred Spraul
Cc:
--
On 11/13/2015 08:23 PM, Davidlohr Bueso wrote:
So considering EINVAL, even your approach to bumping up nattach by
calling
_shm_open earlier isn't enough. Races exposed to user called rmid can
still
occur between dropping the lock and doing ->mmap(). Ultimately this
leads to
all ipc_valid_obje
Hi Dmitry,
shm locking differs too much from msg/sem locking, I never looked at it
in depth, so I'm not able to perform a proper review.
Except for the obvious: Races that can be triggered from user space are
unacceptable.
Regardless if there is a BUG_ON, a WARN_ON or nothing at all.
On 12/
Nice!
--
Manfred
/*
* pmsg.cpp, parallel sysv msg pingpong
*
* Copyright (C) 1999, 2001, 2005, 2008 by Manfred Spraul.
* All rights reserved except the rights granted by the GPL.
*
* Redistribution of this file is permitted under the terms of the GNU
* General Public License (GPL) version 2
cfd23f04ad45115c Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Sat, 10 Oct 2015 08:37:22 +0200
Subject: [PATCH] ipc/sem.c: Alternative for fixing Concurrency bug
Two ideas for fixing the bug found by Felix:
- Revert my initial patch.
Problem: Significant slowdown for application that u
gned-off-by: Herton R. Krzesinski
Acked-by: Manfred Spraul
--
Manfred
_SLAB,
CONFIG_SLAB_DEBUG and CONFIG_DEBUG_SPINLOCK, you can easily see something like
the following in the kernel log:
Signed-off-by: Herton R. Krzesinski
Cc: sta...@vger.kernel.org
Acked-by: Manfred Spraul
--
Manfred
Hi Herton,
On 08/10/2015 05:31 PM, Herton R. Krzesinski wrote:
Well without the synchronize_rcu() and with the semid list loop fix I was still
able to get issues, and I thought the problem is related to racing with IPC_RMID
on freeary again. This is one scenario I would imagine:
()
(i.e.: starting from 3.10).
Andrew: Could you include it into your tree and forward it?
Signed-off-by: Manfred Spraul
Reported-by: Oleg Nesterov
Cc:
---
ipc/sem.c | 18 ++
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index bc3d530..e581b08
Hi Herton,
On 08/07/2015 07:09 PM, Herton R. Krzesinski wrote:
The current semaphore code allows a potential use after free: in exit_sem we may
free the task's sem_undo_list while there is still another task looping through
the same semaphore set and cleaning the sem_undo list at freeary functio
Hi Davidlohr,
On 05/30/2015 02:03 AM, Davidlohr Bueso wrote:
Upon every shm_lock call, we BUG_ON if an error was returned,
indicating racing either in idr or in RMID. Move this logic
into the locking.
Signed-off-by: Davidlohr Bueso
---
ipc/shm.c | 11 +++
1 file changed, 7 insertion
On 05/30/2015 02:03 AM, Davidlohr Bueso wrote:
We currently use a full barrier on the sender side
to avoid receiver tasks disappearing on us while still
performing on the sender side wakeup. We lack, however,
the proper CPU-CPU interactions pairing on the receiver
side which busy-waits for the
Hi Davidlohr,
On 04/28/2015 06:59 PM, Davidlohr Bueso wrote:
On Tue, 2015-04-28 at 18:43 +0200, Peter Zijlstra wrote:
Well, if you can 'guarantee' the cmpxchg will not fail, you can then
rely on the fact that cmpxchg implies a full barrier, which would
obviate the need for the wmb.
Yes, assumi
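Peter's point, translated to userspace C11 (a sketch under my own names, not kernel code): a successful seq_cst compare-exchange is a full barrier, so the store it publishes needs no separate write barrier in front of it.

```c
#include <stdatomic.h>

static int payload;               /* data to publish            */
static atomic_int published;      /* 0 = not yet published      */

/* Publisher: the (seq_cst) cmpxchg orders the payload store before
 * the flag becomes visible -- no separate wmb() equivalent needed.
 * Returns 1 on success, 0 if already published. */
int publish(int value)
{
    int expected = 0;
    payload = value;
    return atomic_compare_exchange_strong(&published, &expected, 1);
}

/* Consumer: the acquire load pairs with the cmpxchg above. */
int consume(void)
{
    if (atomic_load_explicit(&published, memory_order_acquire))
        return payload;
    return -1;                    /* nothing published yet */
}
```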
ned-off-by: Chris Metcalf
sysvsem depends on this definition, i.e. a false early return can cause
a corrupted semaphore state.
Acked-by: Manfred Spraul
---
On 04/28/2015 12:24 PM, Peter Zijlstra wrote:
I think it must not return before the lock holder that is current at the
time of callin
On 04/07/2015 05:03 PM, Sebastian Andrzej Siewior wrote:
This patch moves the wakeup_process() invocation so it is not done under
the info->lock. With this change, the waiter is woken up once it is
"ready" which means its state is STATE_READY and it does not need to loop
on SMP if it is still in
.: starting from 3.10).
Signed-off-by: Manfred Spraul
Reported-by: Oleg Nesterov
Cc:
---
include/linux/spinlock.h | 15 +++
ipc/sem.c| 8
2 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
Hi Oleg,
On 03/01/2015 02:22 PM, Oleg Nesterov wrote:
On 02/28, Peter Zijlstra wrote:
On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
+/*
+ * Place this after a control barrier (such as e.g. a spin_unlock_wait())
+ * to ensure that reads cannot be moved ahead of the
take care of adding it to a tree that is heading
for Linus' tree?
Signed-off-by: Manfred Spraul
Reported-by: Oleg Nesterov
Cc:
---
include/linux/spinlock.h | 10 ++
ipc/sem.c| 7 ++-
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/inc
Hi Oleg,
On 02/26/2015 08:29 PM, Oleg Nesterov wrote:
@@ -341,7 +359,13 @@ static inline int sem_lock(struct sem_array *sma, struct
sembuf *sops,
* Thus: if is now 0, then it will stay 0.
*/
if (sma->complex_count == 0) {
ion.
But since the existing control boundary is a write memory barrier,
it is cheaper use an smp_rmb().
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 9284211..d43011d 100644
-
Hi Oleg,
my example was bad, let's continue with your example.
And: If sem_lock() needs another smp_xmb(), then we must add it:
Some apps do not have a user space hot path, i.e. it seems that on some
setups, we have millions of calls per second.
If there is a race, then it will happen.
I've t
Hi Oleg,
On 02/18/2015 04:59 PM, Oleg Nesterov wrote:
Let's look at sem_lock(). I never looked at this code before, I can be
easily wrong. Manfred will correct me. But at first glance we can write
the oversimplified pseudo-code:
spinlock_t local, global;
bool my_lock(bool try_l
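Since the quoted pseudo-code is cut off here, the following is my own minimal userspace reconstruction of the dual-mode idea (names and details are mine, not Oleg's pseudo-code and not the kernel implementation): simple single-semaphore operations try a per-semaphore lock, complex operations take one array-global lock.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct sem_array_sketch {
    atomic_flag global;        /* sma->sem_perm.lock analogue    */
    atomic_flag local;         /* per-semaphore lock (one shown) */
    int complex_count;         /* pending complex operations     */
};

static void spin_lock(atomic_flag *l)
{
    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
        ;                      /* spin */
}

static void spin_unlock(atomic_flag *l)
{
    atomic_flag_clear_explicit(l, memory_order_release);
}

/* Returns true when the fast path (per-semaphore lock only) was
 * taken; false when it fell back to the array-global lock. */
bool lock_simple(struct sem_array_sketch *s)
{
    spin_lock(&s->local);
    if (s->complex_count == 0)
        return true;           /* fast path: local lock held     */
    spin_unlock(&s->local);
    spin_lock(&s->global);     /* slow path: global lock held    */
    return false;
}

void lock_complex(struct sem_array_sketch *s)
{
    spin_lock(&s->global);
    s->complex_count++;
    spin_lock(&s->local);      /* wait out a fast-path holder    */
    spin_unlock(&s->local);
}
```

The hard part, which the thread is about, is the memory ordering between the `complex_count` test and the lock words; the sketch sidesteps it by testing `complex_count` only under the local lock.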
On 01/22/2015 03:44 AM, Ethan Zhao wrote:
On Wed, Jan 21, 2015 at 1:30 PM, Manfred Spraul
wrote:
On 01/21/2015 04:53 AM, Ethan Zhao wrote:
On Tue, Jan 20, 2015 at 10:10 PM, Stephen Smalley
wrote:
On 01/20/2015 04:18 AM, Ethan Zhao wrote:
sys_semget()
->new
On 01/21/2015 04:53 AM, Ethan Zhao wrote:
On Tue, Jan 20, 2015 at 10:10 PM, Stephen Smalley wrote:
On 01/20/2015 04:18 AM, Ethan Zhao wrote:
sys_semget()
->newary()
->security_sem_alloc()
->sem_alloc_security()
selinux_sem_alloc_security()
Hi,
On 01/20/2015 03:10 PM, Stephen Smalley wrote:
On 01/20/2015 04:18 AM, Ethan Zhao wrote:
A NULL pointer dereference was observed as following panic:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [] ipc_has_perm+0x4b/0x60
...
Process opcmon (pid: 30712, threadinfo
.extra1=NULL has the same effect as .extra1=&zero.
--
Manfred
From 194e5d4758bb30531bad0907f06f3518002cd8b4 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Sat, 13 Dec 2014 21:25:27 +0100
Subject: [PATCH] kernel/sysctl.c: Type safe macros
struct ctl_table is used for creating entries in e.g. /pr
sctl_payload *' arguments. But I haven't thought about it much.
Another idea: why do we pass "int *" instead of "int"?
With "int", we could use
.int_min = 0;
.int_max = 1;
--
Manfred
From 7a210bec3d9dc3382ef0d6809a7742856373bbee Mon Sep
orruptions with 0-sized undo buffer allocation is
possible since 3.10, too.
(sem_lock before accessing sma->sem_nsems replaced with
sem_obtain_object_check).
--
Manfred
From fa928cdd6b5e032006f100f9689a5a4167c086e8 Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Sun, 23 Nov 2
On 11/21/2014 09:29 PM, Rik van Riel wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 11/21/2014 03:09 PM, Andrew Morton wrote:
On Fri, 21 Nov 2014 14:52:26 -0500 Rik van Riel
wrote:
When manipulating just one semaphore with semop, sem_lock only
takes that single semaphore's lock. Thi
Hi Rik,
good catch - I completely forgot to check the initialization
On 11/22/2014 04:40 AM, Rik van Riel wrote:
newary initializes a bunch of things after the call to
ipc_addid, however some things are initialized inside
ipc_addid as well
Looking closer at newary, I suppose that it should be
Hi Steven,
On 11/16/2014 08:40 PM, Steven Stewart-Gallus wrote:
Finally, please don't ignore the rest of my message. Even if my patch
isn't that good there are lots of ways to compromise and improve it
such as adding tests, annotations and making it clearer.
I think you were already given idea
Hi Andrew,
I've updated the patches based on the feedback I got.
Could you include them in your tree and forward them to Linus?
I consider them as ready for merging.
0001-ipc-sem.c-Chance-memory-barrier-in-sem_lock-to-smp_r.patch
When I rewrote sem_lock(), I was more conservative than
ion).
Note:
If an administrator must limit the memory allocations, then he can set the
values as necessary.
Or he can disable sysv entirely (as e.g. done by Android).
Signed-off-by: Manfred Spraul
---
include/uapi/linux/sem.h | 18 +++---
1 file changed, 15 insertions(+), 3 deletions
ned-off-by: Manfred Spraul
---
Documentation/sysctl/kernel.txt | 10 +++--
include/linux/ipc_namespace.h | 20 -
include/uapi/linux/msg.h| 28
ipc/Makefile| 2 +-
ipc/ipc_sysctl.c| 94
er
A: if (sma->complex_count == 0)
Thread A must read the increased complex_count value, i.e. the read must
not be reordered with the read of sem_perm.lock done by spin_is_locked().
Since it's about ordering of reads, smp_rmb() is sufficient.
Signed-off-by: Manfred Spraul
---
ipc/s
Hi Steven,
You wrote:
Currently the only thread-safe way of using mq_notify with message
queues is to use the SIGEV_THREAD option.
Could you explain what you mean by "only thread-safe way"?
I'm a bit reluctant to extend mq_notify() without understanding the reason.
What about:
- use sigproc
avoid the usage of mm->start_stack) and ignores VM_GROWSUP.
Signed-off-by: Oleg Nesterov
Acked-by: Manfred Spraul
[snip]
+ if (vma) {
+ if (vma->vm_flags & VM_GROWSDOWN)
+ end += PAGE_SIZE * 4; /* can't o
Hi Andrew,
On 08/14/2014 03:34 PM, Andrew Vagin wrote:
On Thu, Aug 14, 2014 at 11:37:45AM +0200, Manfred Spraul wrote:
Hi Andrey,
[...]
What do you use auto_msgmni for?
We disable it to check that criu restores a value of the msgmni sysctl
correctly.
https://github.com/xemul/criu/blob
Hi Andrey,
On 08/13/2014 03:21 PM, Andrey Vagin wrote:
proc_dointvec_minmax() returns zero if a new value has been set.
So we don't need to check that all characters have been handled.
What do you use auto_msgmni for?
I would propose to remove the whole logic - just always allow 32000
message queu
Hi Eric,
On 08/12/2014 12:37 PM, Eric W. Biederman wrote:
Manfred Spraul writes:
Sigh. Patches for new code during the merge window. It is a really
rotten time to look at new things.
Right now, each new IPC namespace starts with the kernel default values.
This means that changes that were
sem_lock right now contains an smp_mb().
I think smp_rmb() would be sufficient - and performance of semop() with rmb()
is up to 10% faster. It would be a pairing of rmb() with spin_unlock().
The race we must protect against is:
sem->lock is free
sma->complex_count = 0
sma->sem_perm.lock held by t
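The rmb() pairing described here, sketched in userspace C11 (names are mine; `atomic_thread_fence(memory_order_acquire)` standing in for smp_rmb()): the read of `complex_count` must not be reordered before the read of the lock word.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int lock_word;       /* 0 = sem_perm.lock free     */
static atomic_int complex_count;   /* pending complex operations */

/* Fast-path check: first read the global lock word, then read
 * complex_count. The acquire fence between the two loads is the
 * userspace analogue of smp_rmb(): it keeps the complex_count
 * read from being hoisted above the lock-word read. */
bool fast_path_ok(void)
{
    if (atomic_load_explicit(&lock_word, memory_order_relaxed) != 0)
        return false;              /* global lock held: slow path */
    atomic_thread_fence(memory_order_acquire);
    return atomic_load_explicit(&complex_count,
                                memory_order_relaxed) == 0;
}
```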
ion).
Note:
If an administrator must limit the memory allocations, then he can set the
values as necessary.
Or he can disable sysv entirely (as e.g. done by Android).
Signed-off-by: Manfred Spraul
---
include/uapi/linux/sem.h | 18 +++---
1 file changed, 15 insertions(+), 3 deletions
- SysV shm
- POSIX mqueues
Cc: se...@hallyn.com
Cc: ebied...@xmission.com
Cc: contain...@lists.linux-foundation.org
Cc: mtk.manpa...@gmail.com
Signed-off-by: Manfred Spraul
---
include/linux/ipc_namespace.h | 6 --
ipc/mqueue.c | 23 ---
ipc/msg.c
re used
to control latency vs. throughput:
If MSGMNB is large, then msgsnd() just returns and more messages can be queued
before a task switch to a task that calls msgrcv() is forced.
Signed-off-by: Manfred Spraul
---
include/linux/ipc_namespace.h | 20 --
include/uapi/linux/msg.h
Hi Andrew,
I got some positive and no negative feedback on my patches, thus:
Could you add the patches to -mm and push them towards Linus?
0001-ipc-msg-increase-MSGMNI-remove-scaling.patch
- increase MSGMNI to 32000
- as a bonus, this removes around 300 lines
0002-ipc-sem.c-incre
> Parent: 2f2ed41dcaec34f2d6f224aa84efcc5a9dd8d5c3
> Refname: refs/heads/next
> Author: Manfred Spraul
> AuthorDate: Fri Jun 6 14:37:49 2014 -0700
> Committer: Linus Torvalds
> CommitDate: Fri Jun 6 16:08:15 2014 -0700
>
> ipc/sem.c: change perform_atomic_semo
Hi all,
until now, every sysadmin/distro had to update the sysv limits.
For shm, the new proposal is to increase the limits to (nearly) ULONG_MAX.
Right now, I try to create patches that also increase the limits for
sysv msg, but I got stuck:
- MSGMNI is trivial, just increase it to nearly IP
SysV can be abused to allocate locked kernel memory.
For most systems, a small limit doesn't make sense,
see the discussion with regards to SHMMAX.
Therefore: increase MSGMNI to the maximum supported.
And: if we ignore the risk of locking too much memory, then
an automatic scaling of MSGMNI does
Right now, each IPC namespace starts with the kernel boot standard
settings.
This patch changes that:
Now each new namespace starts with the settings from the parent
namespace.
The patch updates
- SysV msg
- SysV sem
- SysV shm
- POSIX mqueues
It's just a proposal - only partially tested
--
SysV can be abused to allocate locked kernel memory.
For most systems, a small limit doesn't make sense,
see the discussion with regards to SHMMAX.
Therefore: Increase the sysv sem limits to the maximum supported.
With regards to the maximum supported:
Some of the specified hard limits are not c
Hi all,
a) If we increase SHMMAX/SHMALL, then it makes sense to
increase MSGMNI, too.
And: This allows us to remove the automatic scaling (~300 lines)
b) We can also increase SEMMSL, SEMMNI and SEMOPM
c) I think it would make more sense if a namespace starts with the
limits from its paren
of GETNCNT or GETZCNT,
this is done to prevent unnecessary bloat.
The task that triggered it is reported with name (tsk->comm) and pid.
Signed-off-by: Manfred Spraul
Acked-by: Davidlohr Bueso
Cc: Michael Kerrisk
Cc: Joe Perches
---
ipc/sem.c | 11 +++
1 file changed, 11 inserti
Hi Nadia,
You added a patch that adds dynamic scaling of MSGMNI (f7bf3df8).
The description begins with:
On large systems we'd like to allow a larger number of message queues. In
some cases up to 32K. However simply setting MSGMNI to a larger value may
cause problems for smaller systems.
Whic
From ed73ce838fc3f55e34041591a72b3135ccaa460b Mon Sep 17 00:00:00 2001
From: Manfred Spraul
Date: Sun, 25 May 2014 21:04:42 +0200
Subject: [PATCH] ipc namespace: copy settings from parent namespace
Right now, each IPC namespace starts with the kernel boot standard
settings.
This patch changes that:
Now each new namespac
of GETNCNT or GETZCNT,
this is done to prevent unnecessary bloat.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
Cc: Michael Kerrisk
Cc: Joe Perches
---
ipc/sem.c | 12
1 file changed, 12 insertions(+)
diff --git a/ipc/sem.c b/ipc/sem.c
index 9106321..3cc2f7b6 100644
--- a/ipc
Hi Joe,
On 05/25/2014 08:39 PM, Joe Perches wrote:
On Sun, 2014-05-25 at 20:21 +0200, Manfred Spraul wrote:
+*/
+ printk_once(KERN_INFO "semctl(GETNCNT/GETZCNT) is since 3.16 Single " \
+ "Unix Specificati
reported on exactly one semaphore:
The semaphore that caused the thread to go to sleep.
This patch adds a printk_once() that is triggered if a thread hits
the relevant case.
Signed-off-by: Manfred Spraul
Cc: Davidlohr Bueso
Cc: Michael Kerrisk
---
ipc/sem.c | 12
1 file changed, 12
Hi Andrew,
On 05/20/2014 12:46 AM, Andrew Morton wrote:
On Sun, 18 May 2014 09:58:37 +0200 Manfred Spraul
wrote:
SUSv4 clearly defines how semncnt and semzcnt must be calculated:
A task waits on exactly one semaphore:
The semaphore from the first operation in the sop array that cannot
Hi Davidlohr, Hi Andrew,
On 05/18/2014 08:01 PM, Davidlohr Bueso wrote:
On Sun, 2014-05-18 at 07:53 +0200, Manfred Spraul wrote:
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
When specifying the MSG_NOERROR flag, receivers can avoid returning
error (E2BIG) and just truncate the message text
Hi Andrew,
I've applied the changes recommended by Michael and Davidlohr - and I don't
have any open points on my list, either.
Therefore: Could you add the series to -mm and move it towards mainline?
Background:
SUSv4 and the man page of semop() clearly define how semncnt or semzcnt must
be u
&paste, this will be cleaned up in the next patch.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5749b9c..dc648f8 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1047,6 +1047,16 @@ static int count_semzc
Preparation for the next patch:
In the slow-path of perform_atomic_semop(), store a pointer to the operation
that caused the operation to block.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ipc/sem.c b/ipc/sem.c
index 7b1585d..8b5e976 100644
count_semzcnt and count_semncnt are more or less identical.
The patch creates a single function that either counts the number of tasks
waiting for zero or waiting due to a decrease operation.
Compared to the initial version, the BUG_ONs were removed.
Signed-off-by: Manfred Spraul
---
ipc/sem.c
Right now, perform_atomic_semop gets the content of sem_queue as individual
fields.
Change that: instead, pass a pointer to sem_queue.
This is a preparation for the next patch: it uses
sem_queue to store the reason why a task must sleep.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 38
Somehow $ was overlooked in the last round of whitespace
cleanups.
Do that now, before making further changes.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index bee5554..5749b9c 100644
--- a/ipc
on of the patch, the BUG_ONs were removed
and it was clarified that the new behavior conforms to SUS.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 34 +-
1 file changed, 13 insertions(+), 21 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 8b5e976..71
Hi Davidlohr,
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
The need for volatile is not obvious, document it.
Signed-off-by: Davidlohr Bueso
Signed-off-by: Manfred Spraul
In the long run, it would be great if the logic from sem.c would be
moved to one central place
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
When specifying the MSG_NOERROR flag, receivers can avoid returning
error (E2BIG) and just truncate the message text, if it is too large.
Currently, this logic is only respected when there are already pending
messages in the queue.
Do you have a te
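The MSG_NOERROR behavior described above can be exercised directly from userspace; a small sketch (helper name and sizes are mine), assuming Linux SysV message queues: a 32-byte message received into an 8-byte window is truncated rather than failing with E2BIG, and msgrcv() returns the bytes actually copied.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf_sketch {
    long mtype;
    char mtext[64];
};

/* Send 32 bytes, receive with an 8-byte limit and MSG_NOERROR;
 * returns the byte count msgrcv() reports. */
long truncated_receive(void)
{
    struct msgbuf_sketch msg = { .mtype = 1 };
    long n;
    int id = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (id < 0)
        return -1;
    memset(msg.mtext, 'x', 32);
    if (msgsnd(id, &msg, 32, 0) < 0)
        n = -1;
    else
        n = msgrcv(id, &msg, 8, 0, MSG_NOERROR);  /* truncates */
    msgctl(id, IPC_RMID, NULL);                   /* clean up  */
    return n;
}
```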
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
Nothing big and no logical changes, just get rid of some
redundant function declarations. Move msg_[init/exit]_ns
down the end of the file.
Signed-off-by: Davidlohr Bueso
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 132
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
Call __set_current_state() instead of assigning
the new state directly.
Signed-off-by: Davidlohr Bueso
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/ipc/msg.c b/ipc/msg.c
Hi Andrew,
On 05/15/2014 12:30 AM, Andrew Morton wrote:
On Wed, 14 May 2014 07:52:38 -0700 Davidlohr Bueso wrote:
- semcnt = 0;
+ BUG_ON(sop->sem_flg & IPC_NOWAIT);
+ BUG_ON(sop->sem_op > 0);
Hmm in light of Linus' recent criticism about randomly sprinkling
BUG_ONs in the k
Hi Davidlohr,
On 05/14/2014 09:50 PM, Davidlohr Bueso wrote:
Do you have any preferences? I can cook up a patch if you think that
this merits Linux having MSGTQL.
MSGTQL means a global counter - therefore zero scalability. That's why I
didn't implement it when I noticed the issue with 0-byte me
Hi Davidlohr, Hi Andrew,
On 05/13/2014 10:27 PM, Davidlohr Bueso wrote:
When sending a message, we must guarantee that there will be
enough room in the queue to add it, otherwise wait until space
becomes available -- which requires blocking the calling task.
Currently, this relies on meeting both o
Hi Davidlohr,
On 05/12/2014 08:11 PM, Davidlohr Bueso wrote:
On Sat, 2014-05-10 at 12:03 +0200, Manfred Spraul wrote:
GETZCNT is supposed to return the number of threads that wait until
a semaphore value becomes 0.
The current implementation overlooks complex operations that contain
both wait
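For reference, GETZCNT can be queried from userspace; a minimal sketch (helper name is mine, assuming Linux SysV semaphores): on a fresh set with no sleepers, it must report 0.

```c
#include <sys/ipc.h>
#include <sys/sem.h>

/* semctl(GETZCNT) reports how many tasks sleep waiting for the
 * semaphore's value to reach zero. */
int zcnt_on_fresh_set(void)
{
    int z;
    int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (id < 0)
        return -1;
    z = semctl(id, 0, GETZCNT);
    semctl(id, 0, IPC_RMID);    /* clean up */
    return z;
}
```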
Hi Davidlohr,
On 05/12/2014 01:34 AM, Davidlohr Bueso wrote:
On Sat, 2014-05-10 at 12:03 +0200, Manfred Spraul wrote:
Somehow $ was overlooked in the last round of whitespace
cleanups.
Do that now, before making further changes.
No big deal, but this patch could easily be included in the the
Hi Michael,
On 05/12/2014 10:02 AM, Michael Kerrisk (man-pages) wrote:
Hi Manfred,
On Sat, May 10, 2014 at 12:03 PM, Manfred Spraul
wrote:
Hi all,
According to the man page of semop(), semzcnt or semncnt are increased
exactly for the operation that couldn't proceed.
Perhaps it
&paste, this will be cleaned up in the next patch.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5749b9c..dc648f8 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1047,6 +1047,16 @@ static int count_semzc
Hi all,
According to the man page of semop(), semzcnt or semncnt are increased
exactly for the operation that couldn't proceed.
The Linux implementation always tried to be clever and to increase the counters
for all operations that might be the reason why a task sleeps.
The following patches fix
count_semzcnt and count_semncnt are more or less identical.
The patch creates a single function that either counts the number of tasks
waiting for zero or waiting due to a decrease operation.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 103
Preparation for the next patch:
In the slow-path of perform_atomic_semop(), store a pointer to the operation
that caused the operation to block.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ipc/sem.c b/ipc/sem.c
index 3962cca..22a4c12 100644
Right now, perform_atomic_semop gets the content of sem_queue as individual
fields.
Change that: instead, pass a pointer to sem_queue.
This is a preparation for the next patch: it uses
sem_queue to store the reason why a task must sleep.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 38
Somehow $ was overlooked in the last round of whitespace
cleanups.
Do that now, before making further changes.
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index bee5554..5749b9c 100644
--- a/ipc
implementation assumes that GETNCNT and GETZCNT are rare operations,
therefore the code counts them only on demand.
(If they weren't rare, then the non-compliance would have
been found earlier)
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 37 -
1
Hi Marian,
Note: The limits will soon be increased to (nearly) ULONG_MAX.
I.e.: If you propose the patch because you are running into issues with
a too small SEMMAX after an unshare(CLONE_NEWIPC), then this will be
fixed soon.
On 05/04/2014 01:53 AM, Davidlohr Bueso wrote:
On Sun, 2014-05-0