As we prepare for offloading the residual 1Hz scheduler ticks to a workqueue, let's affine the unbound workqueues to the housekeeping CPUs so that they don't interrupt the CPUs that don't want to be disturbed.
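
To illustrate the intended effect (not part of this patch): once HK_FLAG_WQ is included in the nohz_full housekeeping flags, wq_unbound_cpumask is initialized from the housekeeping mask, so anything queued on an unbound workqueue is serviced by housekeeping CPUs only. The sketch below uses made-up names (remote_tick_fn, remote_tick_work, queue_remote_tick) together with the existing system_unbound_wq / queue_work() API:

#include <linux/workqueue.h>

/*
 * Illustrative sketch only, not part of this patch: the function and work
 * item names are hypothetical. Work queued on system_unbound_wq runs on a
 * worker constrained to wq_unbound_cpumask, which this patch derives from
 * housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ), so nohz_full CPUs
 * are left alone.
 */
static void remote_tick_fn(struct work_struct *work)
{
	/* Executes on a housekeeping CPU, never on an isolated one. */
}

static DECLARE_WORK(remote_tick_work, remote_tick_fn);

static void queue_remote_tick(void)
{
	queue_work(system_unbound_wq, &remote_tick_work);
}
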
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Luiz Capitulino <[email protected]>
Cc: Mike Galbraith <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
 include/linux/sched/isolation.h | 1 +
 kernel/sched/isolation.c        | 4 +++-
 kernel/workqueue.c              | 3 ++-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index d849431..4a6582c 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -12,6 +12,7 @@ enum hk_flags {
 	HK_FLAG_SCHED	= (1 << 3),
 	HK_FLAG_TICK	= (1 << 4),
 	HK_FLAG_DOMAIN	= (1 << 5),
+	HK_FLAG_WQ	= (1 << 6),
 };
 
 #ifdef CONFIG_CPU_ISOLATION
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index b71b436..8f1c1de 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -3,6 +3,7 @@
  *  any CPU: unbound workqueues, timers, kthreads and any offloadable work.
  *
  *  Copyright (C) 2017 Red Hat, Inc., Frederic Weisbecker
+ *  Copyright (C) 2017-2018 SUSE, Frederic Weisbecker
  *
  */
 
@@ -119,7 +120,8 @@ static int __init housekeeping_nohz_full_setup(char *str)
 {
 	unsigned int flags;
 
-	flags = HK_FLAG_TICK | HK_FLAG_TIMER | HK_FLAG_RCU | HK_FLAG_MISC;
+	flags = HK_FLAG_TICK | HK_FLAG_WQ | HK_FLAG_TIMER |
+		HK_FLAG_RCU | HK_FLAG_MISC;
 
 	return housekeeping_setup(str, flags);
 }
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 017044c..0cee2d6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5570,7 +5570,8 @@ int __init workqueue_init_early(void)
 	WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));
 
 	BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL));
-	cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(HK_FLAG_DOMAIN));
+	cpumask_copy(wq_unbound_cpumask,
+		     housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ));
 
 	pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
-- 
2.7.4

