In case of IRQF_RESCUE_THREAD, the threaded handler is only used to handle the interrupt when an IRQ flood occurs. Use the irq's affinity for this thread so that the scheduler may select other, less busy CPUs for handling the interrupt.
Cc: Long Li <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Cc: John Garry <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ming Lei <[email protected]>
---
 kernel/irq/manage.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1566abbf50e8..03bc041348b7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 	if (cpumask_available(desc->irq_common_data.affinity)) {
 		const struct cpumask *m;
 
-		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+		/*
+		 * A managed IRQ's affinity is set up carefully for NUMA
+		 * locality. Also, if IRQF_RESCUE_THREAD is set, an interrupt
+		 * flood has been triggered, so ask the scheduler to run the
+		 * thread on the CPUs specified by this interrupt's affinity.
+		 */
+		if ((action->flags & IRQF_RESCUE_THREAD) &&
+		    irqd_affinity_is_managed(&desc->irq_data))
+			m = desc->irq_common_data.affinity;
+		else
+			m = irq_data_get_effective_affinity_mask(
+					&desc->irq_data);
 		cpumask_copy(mask, m);
 	} else {
 		valid = false;
-- 
2.20.1

