[PATCH 1/9] irq_work: Fix racy IRQ_WORK_BUSY flag setting

2012-11-17 Thread Frederic Weisbecker
The IRQ_WORK_BUSY flag is set right before we execute the
work. Once this flag value is set, the work enters a
claimable state again.

So if we have specific data to compute in our work, we ensure it's
either handled by another CPU or locally by enqueuing the work again.
This state machine is guaranteed by atomic operations on the flags.
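
For reference, the claim side of that state machine looked roughly like this
at the time (a simplified sketch, not the exact source; it assumes
IRQ_WORK_PENDING = 1, IRQ_WORK_BUSY = 2 and IRQ_WORK_FLAGS = PENDING | BUSY):

	static bool irq_work_claim(struct irq_work *work)
	{
		unsigned long flags, nflags;

		for (;;) {
			flags = work->flags;
			if (flags & IRQ_WORK_PENDING)
				return false;	/* already queued: rely on it to see our data */
			nflags = flags | IRQ_WORK_FLAGS;
			if (cmpxchg(&work->flags, flags, nflags) == flags)
				break;		/* claimed: the caller enqueues the work */
			cpu_relax();
		}
		return true;
	}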

So when we set IRQ_WORK_BUSY without using an xchg-like operation,
we break this guarantee as in the following summarized scenario:

        CPU 1                                 CPU 2
        -----                                 -----
        (flags = 0)
        old_flags = flags;
        (flags = 0)
        cmpxchg(flags, old_flags,
                old_flags | IRQ_WORK_FLAGS)
        (flags = 3)
        [...]
        flags = IRQ_WORK_BUSY
        (flags = 2)
        func()
                                              (sees flags = 3)
                                              cmpxchg(flags, old_flags,
                                                      old_flags | IRQ_WORK_FLAGS)
                                              (give up)

        cmpxchg(flags, 2, 0);
        (flags = 0)

CPU 1 claims a work and executes it, so it sets IRQ_WORK_BUSY and
the work is again in a claimable state. Now CPU 2 has new data to process
and tries to claim that work, but it may see a stale value of the flags
and think the work is still pending somewhere that will handle its data.
This is because CPU 1 doesn't set IRQ_WORK_BUSY atomically.

As a result, the data expected to be handled by CPU 2 won't get handled.
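
To make the consequence concrete, consider a hypothetical user of irq_work
(my_data, my_work and report_event() are made-up names, for illustration
only):

	static u64 my_data;
	static struct irq_work my_work;		/* my_work.func() reads my_data */

	void report_event(u64 value)
	{
		my_data = value;		/* publish the data first */
		irq_work_queue(&my_work);	/* claim and enqueue, or trust the
						 * already-pending work to see it */
	}

If the claim path observes a stale flags value with PENDING still set while
CPU 1 is already inside func(), nobody ever processes the new value of
my_data: that is the lost update described above.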

To fix this, set IRQ_WORK_BUSY using xchg(). This way we ensure that CPU 2
will see the correct value when it performs its cmpxchg(), thanks to the
full ordering implied by xchg().
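
For context, after this change the surrounding loop body in irq_work_run()
reads roughly as below (a simplified reconstruction of kernel/irq_work.c of
that era, not shown in full by the diff); the final cmpxchg() is the
BUSY -> free transition shown as cmpxchg(flags, 2, 0) at the bottom of the
scenario above:

	/* From here on the work is claimable again */
	xchg(&work->flags, IRQ_WORK_BUSY);
	work->func(work);
	/*
	 * Clear the BUSY bit and return to the free state if
	 * no-one else claimed it meanwhile.
	 */
	(void)cmpxchg(&work->flags, IRQ_WORK_BUSY, 0);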

Changelog-heavily-inspired-by: Steven Rostedt 
Signed-off-by: Frederic Weisbecker 
Acked-by: Steven Rostedt 
Cc: Peter Zijlstra 
Cc: Ingo Molnar 
Cc: Thomas Gleixner 
Cc: Andrew Morton 
Cc: Paul Gortmaker 
Cc: Anish Kumar 
---
 kernel/irq_work.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 1588e3b..57be1a6 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -119,8 +119,11 @@ void irq_work_run(void)
 		/*
 		 * Clear the PENDING bit, after this point the @work
 		 * can be re-used.
+		 * Make it immediately visible so that other CPUs trying
+		 * to claim that work don't rely on us to handle their data
+		 * while we are in the middle of the func.
 		 */
-		work->flags = IRQ_WORK_BUSY;
+		xchg(&work->flags, IRQ_WORK_BUSY);
 		work->func(work);
 		/*
 		 * Clear the BUSY bit and return to the free state if
-- 
1.7.5.4
