Commit 88b8dac0 makes load_balance() consider other CPUs in its group. However, some pieces are missing for this feature to work properly. This patchset fills in those gaps and makes load_balance() more robust.
The remaining patches are related to LBF_ALL_PINNED. This is a fallback mechanism for the case where no task can be moved because of CPU affinity. Currently, however, if the imbalance is smaller than a task's load, the LBF_ALL_PINNED flag is left set and a 'redo' is triggered. That is not the intention, so these patches correct it as well.

These patches are based on v3.8-rc7.

Joonsoo Kim (8):
  sched: change position of resched_cpu() in load_balance()
  sched: explicitly cpu_idle_type checking in rebalance_domains()
  sched: don't consider other cpus in our group in case of NEWLY_IDLE
  sched: clean up move_task() and move_one_task()
  sched: move up affinity check to mitigate useless redoing overhead
  sched: rename load_balance_tmpmask to load_balance_cpu_active
  sched: prevent to re-select dst-cpu in load_balance()
  sched: reset lb_env when redo in load_balance()

 kernel/sched/core.c |    9 +++--
 kernel/sched/fair.c |  107 +++++++++++++++++++++++++++++----------------------
 2 files changed, 67 insertions(+), 49 deletions(-)

-- 
1.7.9.5