On Tue, 2016-05-10 at 07:26 +0800, Yuyang Du wrote:
> By cpu reservation, you mean the various averages in select_task_rq_fair?
> It does seem a lot of cleanup should be done.
Nah, I meant claiming an idle cpu with cmpxchg().  It's mostly the average load business that leads to premature stacking though; the reservation thingy more or less just wastes cycles.  Only whacking cfs_rq_runnable_load_avg() with a rock makes schbench -m <sockets> -t <near socket size> -a work well.  'Course a rock in its gearbox also rendered load balancing fairly busted for the general case :)

	-Mike
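[Editor's note: for readers outside the thread, "claiming an idle cpu with cmpxchg()" refers to a waker publishing a reservation on an idle CPU atomically, so that two wakeups racing in select_task_rq_fair() cannot both pick the same CPU and stack their tasks.  Below is a minimal user-space sketch of that pattern, assuming C11 atomics in place of the kernel's cmpxchg(); try_claim_cpu() and release_cpu() are invented names for illustration, not anything in the actual scheduler code.]

/*
 * Sketch: each CPU has a reservation word.  A waker that finds an idle
 * CPU claims it with a compare-and-exchange; a concurrent waker racing
 * for the same CPU loses the exchange and must look elsewhere.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS  8
#define CPU_FREE (-1)

static _Atomic int cpu_claim[NR_CPUS];   /* CPU_FREE, or pid of the claimant */

static bool try_claim_cpu(int cpu, int pid)
{
	int expected = CPU_FREE;

	/* Succeeds only if nobody claimed the CPU since we saw it idle. */
	return atomic_compare_exchange_strong(&cpu_claim[cpu], &expected, pid);
}

static void release_cpu(int cpu)
{
	atomic_store(&cpu_claim[cpu], CPU_FREE);
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		atomic_store(&cpu_claim[cpu], CPU_FREE);

	/* Two "wakers" race for CPU 3; only one claim can win. */
	printf("waker A claims cpu 3: %d\n", try_claim_cpu(3, 100));
	printf("waker B claims cpu 3: %d\n", try_claim_cpu(3, 200));

	release_cpu(3);
	printf("waker B retries cpu 3: %d\n", try_claim_cpu(3, 200));
	return 0;
}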