Re: [PATCH v3] softirq: Prevent looping on disabled tasklets

2017-02-17 Thread Thomas Gleixner
On Sun, 12 Feb 2017, Chris Wilson wrote:

> On Sun, Feb 12, 2017 at 03:46:09PM +, Chris Wilson wrote:
> > +void tasklet_enable(struct tasklet_struct *t)
> > +{
> > +   if (!atomic_dec_and_test(&t->count))
> > +   return;
> > +
> > +   if (test_bit(TASKLET_STATE_SCHED, &t->state))
> > +   raise_softirq(HI_SOFTIRQ | TASKLET_SOFTIRQ);
> 
> And of course this can't work as raise_softirq() is local to the cpu.

Indeed. Tasklets are a horror by design. We should rather make them go away.

Thanks,

tglx


Re: [PATCH v3] softirq: Prevent looping on disabled tasklets

2017-02-12 Thread Chris Wilson
On Sun, Feb 12, 2017 at 03:46:09PM +, Chris Wilson wrote:
> +void tasklet_enable(struct tasklet_struct *t)
> +{
> + if (!atomic_dec_and_test(&t->count))
> + return;
> +
> + if (test_bit(TASKLET_STATE_SCHED, &t->state))
> + raise_softirq(HI_SOFTIRQ | TASKLET_SOFTIRQ);

And of course this can't work as raise_softirq() is local to the cpu.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
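
For reference, raise_softirq() takes a single softirq number (an index into
the per-cpu pending mask), not a mask, so OR'ing two softirq numbers does not
raise both, and it only marks the softirq pending on the CPU it is called
from. The following is a paraphrased sketch of the helpers in kernel/softirq.c
of that era, not the verbatim source, showing why a cross-CPU call from
tasklet_enable() cannot kick a tasklet queued on another CPU:

	/* Sketch only: trace hooks omitted. or_softirq_pending() ORs into the
	 * local CPU's per-cpu __softirq_pending mask, so nothing happens on
	 * the CPU that actually has the tasklet queued. */
	void __raise_softirq_irqoff(unsigned int nr)
	{
		or_softirq_pending(1UL << nr);
	}

	void raise_softirq(unsigned int nr)
	{
		unsigned long flags;

		local_irq_save(flags);
		raise_softirq_irqoff(nr);	/* also wakes the local ksoftirqd
						 * when not in interrupt context */
		local_irq_restore(flags);
	}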


[PATCH v3] softirq: Prevent looping on disabled tasklets

2017-02-12 Thread Chris Wilson
Disabling a tasklet causes it not to run during tasklet_action(); instead
it is put back onto the runnable tasklet list and a new softirq is raised.
As the softirq is raised from within __do_softirq(), this causes
__do_softirq() to loop constantly until its timeslice expires and execution
is handed over to the ksoftirqd thread. ksoftirqd then spins permanently,
as on each action the disabled tasklet keeps reraising the softirq.

Break this vicious cycle by moving the softirq from the action to the
final tasklet_enable().
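
To make the scenario concrete, here is a hypothetical driver pattern (names
such as my_dev, my_irq and my_reset are illustrative, not taken from this
patch): an interrupt may schedule the tasklet at any time, and a reset path
disables it for the duration of the reset. From tasklet_disable() until the
final tasklet_enable(), tasklet_action() keeps requeueing the still-scheduled,
disabled tasklet and re-raising the softirq.

	#include <linux/interrupt.h>

	struct my_dev {
		struct tasklet_struct tasklet;
		/* ... device state ... */
	};

	/* assumed to be set up at probe time with
	 * tasklet_init(&dev->tasklet, my_tasklet_fn, (unsigned long)dev); */
	static void my_tasklet_fn(unsigned long data)
	{
		struct my_dev *dev = (struct my_dev *)data;

		/* placeholder: drain dev's completion queue */
		(void)dev;
	}

	static irqreturn_t my_irq(int irq, void *arg)
	{
		struct my_dev *dev = arg;

		tasklet_schedule(&dev->tasklet);	/* may race with the reset below */
		return IRQ_HANDLED;
	}

	static void my_reset(struct my_dev *dev)
	{
		tasklet_disable(&dev->tasklet);	/* tasklet may already be SCHED */
		/* ... perform the hardware reset ... */
		tasklet_enable(&dev->tasklet);	/* with this patch, the deferred
						 * softirq is raised here rather
						 * than on every tasklet_action() */
	}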

This behaviour appears to be historic (present since the first git import).
However, looping until a time limit (up to 2ms) was first introduced in
commit c10d73671ad3 ("softirq: reduce latencies"), with the restart limit
restored in commit 34376a50fb1f ("Fix lockup related to stop_machine being
stuck in __do_softirq.")

v2: Export tasklet_enable() to work with modules.
v3: Restore the looping over a failed tasklet_trylock()

Reported-by: Tvrtko Ursulin 
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Thomas Gleixner 
Cc: Hannes Reinecke 
Cc: Jens Axboe 
Cc: Bjorn Helgaas 
Cc: Alexander Potapenko 
Cc: Chen Fan 
Cc: Ingo Molnar 
Cc: "Peter Zijlstra (Intel)" 
Cc: Sebastian Andrzej Siewior 
Cc: Johannes Thumshirn 
Cc: Emese Revfy 
Cc: Sagi Grimberg 
Cc: Eric Dumazet 
Cc: Tom Herbert 
Cc: Ben Hutchings 
---
 include/linux/interrupt.h |  7 +--
 kernel/softirq.c  | 20 ++--
 2 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index 53144e78a369..a1fa88e7e509 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -611,12 +611,7 @@ static inline void tasklet_disable(struct tasklet_struct *t)
smp_mb();
 }
 
-static inline void tasklet_enable(struct tasklet_struct *t)
-{
-   smp_mb__before_atomic();
-   atomic_dec(&t->count);
-}
-
+extern void tasklet_enable(struct tasklet_struct *t);
 extern void tasklet_kill(struct tasklet_struct *t);
 extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
 extern void tasklet_init(struct tasklet_struct *t,
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 080eb57789c4..47c8933d315e 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -516,6 +516,7 @@ static __latent_entropy void tasklet_action(struct softirq_action *a)
 
while (list) {
struct tasklet_struct *t = list;
+   bool raise_softirq = true;
 
list = list->next;
 
@@ -529,13 +530,15 @@ static __latent_entropy void tasklet_action(struct softirq_action *a)
continue;
}
tasklet_unlock(t);
+   raise_softirq = false;
}
 
local_irq_disable();
t->next = NULL;
*__this_cpu_read(tasklet_vec.tail) = t;
__this_cpu_write(tasklet_vec.tail, &(t->next));
-   __raise_softirq_irqoff(TASKLET_SOFTIRQ);
+   if (raise_softirq)
+   __raise_softirq_irqoff(TASKLET_SOFTIRQ);
local_irq_enable();
}
 }
@@ -552,6 +555,7 @@ static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
 
while (list) {
struct tasklet_struct *t = list;
+   bool raise_softirq = true;
 
list = list->next;
 
@@ -565,13 +569,15 @@ static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
continue;
}
tasklet_unlock(t);
+   raise_softirq = false;
}
 
local_irq_disable();
t->next = NULL;
*__this_cpu_read(tasklet_hi_vec.tail) = t;
__this_cpu_write(tasklet_hi_vec.tail, &(t->next));
-   __raise_softirq_irqoff(HI_SOFTIRQ);
+   if (raise_softirq)
+   __raise_softirq_irqoff(HI_SOFTIRQ);
local_irq_enable();
}
 }
@@ -587,6 +593,16 @@ void tasklet_init(struct tasklet_struct *t,
 }
 EXPORT_SYMBOL(tasklet_init);
 
+void tasklet_enable(struct tasklet_struct *t)
+{
+   if (!atomic_dec_and_test(&t->count))
+   return;
+
+   if (test_bit(TASKLET_STATE_SCHED, &t->state))
+   raise_softirq(HI_SOFTIRQ | TASKLET_SOFTIRQ);
+}
+EXPORT_SYMBOL(tasklet_enable);
+
 void tasklet_kill(struct tasklet_struct *t)
 {
if (in_interrupt())
-- 
2.11.0


