Hi, Vincent

On Tue, 15 Sep 2020 at 17:28, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>
> On Tue, 15 Sep 2020 at 11:11, Jiang Biao <benbji...@gmail.com> wrote:
> >
> > Hi, Vincent
> >
> > On Mon, 14 Sep 2020 at 18:07, Vincent Guittot
> > <vincent.guit...@linaro.org> wrote:
> > >
> > > The busy_factor, which increases load balance interval when a cpu is busy,
> > > is set to 32 by default. This value generates some huge LB interval on
> > > large system like the THX2 made of 2 node x 28 cores x 4 threads.
> > > For such system, the interval increases from 112ms to 3584ms at MC level.
> > > And from 228ms to 7168ms at NUMA level.
> >
> > Agreed that the interval is too big for that case.
> > But would it be too small for an AMD environment (like ROME) with 8 CPUs
> > at MC level (CCX), if we reduce busy_factor?
>
> Are you sure that this is too small? As mentioned in the commit
> message below, I tested it on a small system (2x4 cores Arm64) and I
> have seen some improvements.

Not so sure. :)
A smaller interval means more frequent balancing and more cost spent on
balancing, especially for pinned VM cases.
For our case, we have AMD ROME servers made of 2 nodes x 48 cores x
2 threads, with 8 CPUs at MC level (within a CCX).
The 256ms interval is already much smaller than that of an Intel
Cascade Lake CPU with 48 CPUs at MC level, whose busy balance interval
is 1536ms; 128ms seems a little more wasteful. :)
I guess the extra balancing cost may hurt the throughput of
sysbench-like benchmarks. Just a guess.
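To make the numbers above concrete, here is a small user-space sketch of
the interval arithmetic. busy_interval_ms() is just a made-up helper for
illustration; if I read get_sd_balance_interval() in kernel/sched/fair.c
right, the base balance_interval is roughly 1ms per CPU in the domain and
gets multiplied by busy_factor when the CPU is busy.

#include <stdio.h>

/* Illustrative only: busy interval ~= domain weight (in ms) * busy_factor. */
static unsigned long busy_interval_ms(unsigned int sd_weight,
                                      unsigned int busy_factor)
{
        return (unsigned long)sd_weight * busy_factor;
}

int main(void)
{
        /* THX2: 112 CPUs per MC domain, busy_factor=32 -> 3584ms */
        printf("THX2 MC, factor 32:  %lu ms\n", busy_interval_ms(112, 32));
        /* AMD Rome CCX: 8 CPUs -> 256ms with factor 32, 128ms with 16 */
        printf("Rome CCX, factor 32: %lu ms\n", busy_interval_ms(8, 32));
        printf("Rome CCX, factor 16: %lu ms\n", busy_interval_ms(8, 16));
        /* Cascade Lake MC: 48 CPUs, busy_factor=32 -> 1536ms */
        printf("CLX MC, factor 32:   %lu ms\n", busy_interval_ms(48, 32));
        return 0;
}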
>
> > For that case, the interval could be reduced from 256ms to 128ms.
> > Or should we define a MIN_INTERVAL for MC level to avoid a too small
> > interval?
>
> What would be a too small interval?

That's hard to say. :)
My guess only applies to large server systems.

Thanks.
Regards,
Jiang