From: Kees Cook <keesc...@chromium.org>
The remaining users of __sem_free() can simply call kvfree() instead for
better readability.
Signed-off-by: Kees Cook <keesc...@chromium.org>
[manf...@colorfullife.com: Rediff to keep rcu protection for
security_sem_alloc()]
Signed-off-by: Manfred Spraul
---
ipc/sem.c | 9 ++---
1 file changed
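The patch body is truncated above, but the idea — a single free helper that works for every allocation path, making a dedicated wrapper like __sem_free() unnecessary — can be sketched in userspace C. Everything below (toy_kvmalloc, toy_kvfree, the header layout) is hypothetical illustration, not the kernel implementation; the real kvfree() distinguishes kmalloc from vmalloc memory by address, not by a header.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy analogue of kvmalloc()/kvfree(): a small header records which
 * "allocator" was used, so one free routine serves every caller.
 * In this userspace toy both kinds come from malloc(), so free()
 * suffices; the header merely shows where a dispatch would go. */
enum alloc_kind { KIND_SMALL, KIND_LARGE };

struct kv_header {
	enum alloc_kind kind;
	max_align_t align[];	/* forces payload alignment */
};

static void *toy_kvmalloc(size_t size)
{
	struct kv_header *h = malloc(sizeof(*h) + size);

	if (!h)
		return NULL;
	/* Pretend allocations over 4096 bytes come from another pool. */
	h->kind = size > 4096 ? KIND_LARGE : KIND_SMALL;
	return h + 1;
}

static void toy_kvfree(void *p)
{
	if (!p)
		return;	/* like kvfree(NULL), a no-op */
	free((struct kv_header *)p - 1);
}
```

The payoff is at the call sites: once one helper frees both kinds of allocation, every caller can drop its private cleanup wrapper.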
From: Kees Cook <keesc...@chromium.org>
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for msg_queue structure
layout to be randomized in the future.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred Spraul
---
ipc/msg.c | 18 ++
1 file changed
From: Kees Cook <keesc...@chromium.org>
Instead of using ipc_rcu_alloc() which only performs the refcount
bump, open code it. This also allows for shmid_kernel structure
layout to be randomized in the future.
Signed-off-by: Kees Cook <keesc...@chromium.org>
Signed-off-by: Manfred Spraul
---
ipc/shm.c | 18 ++
1 file
are
cacheline aligned.
In addition, it reduces the number of casts; instead, most codepaths can
use container_of.
To simplify code, ipc_rcu_alloc initializes the allocation to 0.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 19 +++
ipc
again. No idea why this causes indigestion, there's probably something subtly
wrong here
commit 337f43326737b5eb28eb13f43c27a5788da0f913
Author: Manfred Spraul <manf...@colorfullife.com>
Date: Fri May 19 07:39:23 2017 +1000
ipc: merge ipc_rcu and kern_ipc_perm
ipc has two management structures that exist for every id:
set(sma, 0, size);
sma->sem_perm.refcount was initialized by ipc_rcu_alloc.
And due to the SEM_UNDO, the refcount is relevant.
Thanks for bisecting it, I'll update the patch.
--
Manfred
are
cacheline aligned.
In addition, it reduces the number of casts; instead, most codepaths can
use container_of.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/ipc.h | 3 +++
ipc/msg.c | 22 ++
ipc/sem.c | 35 ---
ipc/shm.c
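The container_of argument can be demonstrated in plain C. In this userspace sketch (simplified, hypothetical struct layouts; sem_alloc is an invented helper), the management data is embedded as the first member of each object, so generic code can hand around a pointer to it while type-specific code recovers the outer structure without a cast between distinct non-void types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* container_of in the spirit of the kernel macro: recover the outer
 * structure from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for the ipc structures (hypothetical layout). */
struct kern_ipc_perm {
	int id;
	int refcount;
};

struct sem_array {
	struct kern_ipc_perm sem_perm;	/* embedded management data */
	int sem_nsems;
};

/* Open-coded allocation: zero the whole object and set the refcount,
 * as ipc_rcu_alloc() used to do on the caller's behalf. */
static struct sem_array *sem_alloc(int nsems)
{
	struct sem_array *sma = calloc(1, sizeof(*sma));

	if (!sma)
		return NULL;
	sma->sem_perm.refcount = 1;
	sma->sem_nsems = nsems;
	return sma;
}
```

Generic code sees only the embedded struct kern_ipc_perm; the type-specific code gets its own structure back with container_of, which the compiler can type-check, instead of a raw pointer cast.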
that there is
a comment in include/linux/sem.h and man semctl(2) as well.
So: Correct wrong comments.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: linux-...@vger.kernel.org
---
include/linux/sem.h | 2 +-
include/uapi/linux/sem.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git
c code analysis.
- This is a cast between different non-void types, which the future
randstruct GCC plugin warns on.
And, as a bonus, the code size gets smaller:
Before:
0 .text 3770
After:
0 .text 374e
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/sem.
that sem_ctime is only
for Coherent the time of the last change...
http://calculix-rpm.sourceforge.net/sysvsem.html
--
Manfred
p+1))
Does this trigger a warning with randstruct as well?
If we have to touch it, then I would remove it by merging struct
kern_ipc_perm and struct ipc_rcu.
And, obviously:
Do you see any issues with the attached patch?
--
Manfred
From cf680121e348241d75ca01b3d68eeeab93e5c589 Mon Sep 17
-by: Manfred Spraul <manf...@colorfullife.com>
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
Patch V2:
- subject line updated
- documentation slightly updated
ipc/sem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ipc/sem.c b/ipc/sem.c
ind
locks
the global spinlock.
The fix is trivial: Use the return value from sem_lock.
Reported-by: dvyu...@google.com
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Fixes: 370b262c896e ("ipc/sem: avoid idr tree lookup for interrupted semop")
Cc: d...@stgolabs.net
---
ipc/sem.c | 2 +-
1 file changed, 1 inser
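The class of bug being fixed — re-deriving state that the locking function already reported — can be sketched generically. toy_lock() and locked_index() below are invented names for illustration; the real sem_lock() returns the index of the per-semaphore lock it took, or -1 when it fell back to the global lock, and the fix is to trust that return value rather than look at the state a second time:

```c
#include <assert.h>
#include <stdbool.h>

/* Sentinel meaning "the global lock was taken", like sem_lock()'s -1. */
#define GLOBAL_LOCK (-1)

struct toy_array {
	bool complex_mode;	/* may change while a lock is contended */
};

/* Decide between a fine-grained and a global lock; report the choice
 * through the return value (per-entry index, or GLOBAL_LOCK). */
static int toy_lock(struct toy_array *a, int idx)
{
	if (a->complex_mode)
		return GLOBAL_LOCK;
	return idx;
}

/* Correct pattern: remember and use toy_lock()'s return value.
 * The buggy variant would re-test a->complex_mode afterwards and could
 * disagree with the lock that was actually taken. */
static int locked_index(struct toy_array *a, int idx)
{
	int locknum = toy_lock(a, idx);

	return locknum;
}
```

The unlock path must then be driven by the remembered value, never by a fresh evaluation of the (possibly changed) mode flag.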
I'll send a patch.
--
Manfred
On 10/28/2016 09:29 PM, Colin Ian King wrote:
On 28/10/16 20:21, Manfred Spraul wrote:
Hi Colin,
On 10/28/2016 08:11 PM, Colin King wrote:
[...]
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -1839,7 +1839,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct
sembuf __user *, tsops,
max = 0
unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG);
Why 1ULL? Is 1UL not sufficient?
--
Manfred
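For context, the mask under discussion is part of a conservative duplicate-detection scheme over sop->sem_num. A userspace sketch (maybe_has_dup is an invented name; the real patch is structured differently): each semaphore number sets one bit in an unsigned long, so a bit seen twice means "possibly duplicated" regardless of the array size, and a false positive merely forces the slow path. Since the mask variable is an unsigned long and the shift count is taken modulo BITS_PER_LONG, 1UL is indeed sufficient:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Conservative duplicate detection: false positives (two different
 * numbers hashing to the same bit) are acceptable, false negatives
 * are not. Sketch only, modeled on the discussed approach. */
static bool maybe_has_dup(const unsigned short *sem_num, int nsops)
{
	unsigned long seen = 0;

	for (int i = 0; i < nsops; i++) {
		/* 1UL suffices: the shift count is < BITS_PER_LONG. */
		unsigned long mask = 1UL << (sem_num[i] % BITS_PER_LONG);

		if (seen & mask)
			return true;	/* possible duplicate */
		seen |= mask;
	}
	return false;
}
```

1ULL would only matter if the mask were stored in a 64-bit type on a 32-bit platform, which the modulo by BITS_PER_LONG rules out here.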
>> discarded because of bad power saving behavior.
>>
>> This patch saves the reset dt properties on probe and does a reset on every
>> open after clocks were enabled, to make sure the clock is stable while and
>> after hard reset.
>>
>> Tested on i.MX6 and i.MX28, both using LAN8720.
>>
>> Signed-off-by:
clocks were enabled, to make sure the clock is stable
while and after hard reset.
Tested on i.MX6 and i.MX28, both using LAN8720.
Signed-off-by: Manfred Schlaegl <manfred.schla...@ginzinger.com>
---
drivers/net/ethernet/freescale/fec.h | 4 ++
drivers/net/ethernet/freescale/fec_main.c | 77 +--
2
On 10/20/2016 02:21 AM, Andrew Morton wrote:
On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul <manf...@colorfullife.com>
wrote:
Hi,
as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.
I see two options:
1) switch to full spin_lock()/spin_unlock
lock (we "must" switch, not we "can"
switch as written right now).
The patches passed stress-testing.
What do you think?
My initial idea was to aim for 4.10, then we have more time to decide.
--
Manfred
and then
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/sem.h | 2 +-
ipc/sem.c | 86
that the slow path is used.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 25 +++--
1 file changed, 3 insertions(+), 22 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..d5f2710 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -280,24 +280,13 @@ static void complexmode_enter
atch/9359365/
https://patchwork.kernel.org/patch/9359371/
(if possible, with both patches together)
Thanks,
Manfred
and then
one complex op now scales a bit worse, but this is pure theory:
If there is concurrency, then it won't be exactly 10:1:10:1:10:1:...
If there is no concurrency, then there is no need for scalability.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
include/linux/sem.h | 2 +-
ipc/sem.c | 87
.
Andrew: Could you add it into mmots? Perhaps aiming for 4.10.
--
Manfred
On 09/22/2016 12:21 AM, Davidlohr Bueso wrote:
On Sun, 18 Sep 2016, Manfred Spraul wrote:
Just as with msgrcv (along with the rest of sysvipc since a few years
ago), perform the security checks without holding the ipc object
lock.
Thinking about it: isn't this wrong?
CPU1:
* msgrcv
est (https://github.com/manfred-colorfu/ipcsemtest)
- ipcscale (https://github.com/manfred-colorfu/ipcscale)
Details are in each individual patch. Please consider for v4.9.
Thanks!
Davidlohr Bueso (5):
ipc/sem: do not call wake_sem_queue_do() prematurely
The only patch that I don't l
sembench-sem-482965735.00 ( 0.00%) 1040313.00 ( 7.72%)
Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
Acked-by: Manfred Spraul <manf...@colorfullife.com>
--
Manfred
== 0, etc)
where the dup detection would be unnecessary, but it seems like a
stretch to go
at it like this. The above will work on the common case (assuming
lower sem_num
of course). So I'm not particularly worried about being too smart at
the dup detection.
What
On 09/12/2016 01:53 PM, Davidlohr Bueso wrote:
... saves some LoC and looks cleaner than re-implementing the
calls.
Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
Acked-by: Manfred Spraul <manf...@colorfullife.com>
--
Manfred
to avoid the wakeup calls, then do it entirely.
Perhaps:
> if (error == 0) { /* nonblocking codepath 1, with wakeups */
> [...]
> }
> if (error < 0) goto out_unlock_free;
>
This would have an advantage, because the WAKE_Q would be initialized
only when needed.
--
Manfred
ck_object(>q_perm);
}
This means the lock is dropped, just for ipcperms().
This doubles the lock acquire/release cycles.
--
Manfred
my opinion: I would avoid placing the error= into the if().
--
Manfred
rcu_read_unlock();
+ goto out_free;
+ }
Is this really better/simpler?
You replace "if (error) goto cleanup" with "if (error) {cleanup_1();
goto cleanup_2()}".
From my point of view, this just increases the risks that some cleanup
steps are forgotten.
--
Manfred
can detect
absence of duplicated ops regardless of the array size.
Should we support that?
--
Manfred
-o 1 -f
(Note: Just removing the barrier is not an option)
--
Manfred
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..85cc5bf 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -363,14 +363,6 @@ static inline int sem_lock(struct sem_array *sma, struct
sembuf *sops,
*/
Hi Peter,
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote
On 09/02/2016 09:22 PM, Peter Zijlstra wrote:
On Fri, Sep 02, 2016 at 08:35:55AM +0200, Manfred Spraul wrote:
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote:
Since
On 09/01/2016 06:41 PM, Peter Zijlstra wrote:
On Thu, Sep 01, 2016 at 04:30:39PM +0100, Will Deacon wrote:
On Thu, Sep 01, 2016 at 05:27:52PM +0200, Manfred Spraul wrote:
Since spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait
Since spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait() is
also not required.
Not for stable!
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Pablo Neira Ayuso <pa...@netfilter.org>
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 8
As spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the memory barrier before spin_unlock_wait()
is not required.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index a5da82c
n it must be checked first if all
updates to qspinlock were backported.
Fixes: b16c29191dc8
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: <sta...@vger.kernel.org>
Cc: Sasha Levin <sasha.le...@oracle.com>
Cc: Pablo Neira Ayuso <pa...@netfilter.org>
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 41 ++-
1 file ch
?
- spin_unlock_wait() is spin_lock()+spin_unlock().
- No memory ordering is enforced by spin_is_locked().
The patch adds this into Documentation/locking/spinlocks.txt.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Will Deacon <will.dea...@arm.com>
---
Documentation/locking/spinlocks.txt | 9 +
1 file changed, 9 insertions(+)
diff
As spin_unlock_wait() is defined as equivalent to spin_lock();
spin_unlock(), the smp_mb() after spin_lock() is not required.
Remove it.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
net/netfilter/nf_conntrack_core.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/netfilter
a full memory barrier:
(everything initialized to 0)
CPU1:
a=1;
spin_unlock();
spin_lock();
+ smp_mb__after_unlock_lock();
r1=d;
CPU2:
d=1;
smp_mb();
r2=a;
Without the smp_mb__after_unlock_lock(), r1==0 && r2==0 would
be possible.
Signed-off-by: Manfred Spraul
---
inclu
mb() after spin_unlock_wait() can be removed.
Not for stable!
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Pablo Neira Ayuso <pa...@netfilter.org>
Cc: netfilter-de...@vger.kernel.org
---
net/netfilter/nf_conntrack_core.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.c
b/net
, with the target of including in
linux-next?
--
Manfred
The smp_rmb() after spin_unlock_wait() can be removed.
Not for stable!
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
---
ipc/sem.c | 8
1 file changed, 8 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..6586e0a 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -290,14 +290,6 @@ static void complex
From a memory ordering point of view, spin_unlock_wait() provides
the same guarantees as spin_lock(); spin_unlock().
Therefore the smp_mb() after spin_lock() is not necessary;
spin_unlock_wait() must provide the memory ordering.
Signed-off-by: Manfred Spraul <manf...@colorfullife.com>
Cc: Will Deacon
---
ipc/sem.c