* Borislav Petkov <b...@suse.de> wrote:

> On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
> > - cache domains might be seriously mixed up, resulting in serious drop in
> >   performance.
> >
> > - or domains might be partitioned 'wrong' but not catastrophically
> >   wrong, resulting in a minor performance drop (if at all)
>
> Something between the two.
>
> Here's some debugging output from set_cpu_sibling_map():
>
> [    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
> [    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
> [    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt
>
> so we link it into the SMT mask even if has_smt is off.
>
> [    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
> [    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
> [    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die
>
> I've attached the debug diff.
>
> And since those llc(o), i.e. the cpu_llc_id of the *other* CPU in the
> loops in set_cpu_sibling_map(), underflows, we're generating the funniest
> thread_siblings masks, and then when I run 8 threads of nbench, they get
> spread around the LLC domains in a very strange pattern which doesn't
> give you the normal scheduling spread one would expect for performance.
>
> And this is just one workload - I can't imagine what else might be
> influenced by this funkiness.
>
> Oh and other things like EDAC use cpu_llc_id so they will be b0rked too.
So the point I tried to make is that, to people making -stable backporting
decisions, this description you just gave is much more valuable than the
previous changelog.

> So we absolutely need to fix that cpu_llc_id thing.

Absolutely!

Thanks,

	Ingo