On Mon, Nov 07, 2016 at 03:07:46PM +0100, Ingo Molnar wrote:
>  - cache domains might be seriously mixed up, resulting in serious drop in
>    performance.
>
>  - or domains might be partitioned 'wrong' but not catastrophically
>   wrong, resulting in a minor performance drop (if at all)

Something between the two.

Here's some debugging output from set_cpu_sibling_map():

[    0.202033] smpboot: set_cpu_sibling_map: cpu: 0, has_smt: 0, has_mp: 1
[    0.202043] smpboot: set_cpu_sibling_map: first loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202058] smpboot: set_cpu_sibling_map: first loop, link mask smt

so we link the CPU into the SMT mask even though has_smt is off, because
the (i == cpu) half of the condition short-circuits before has_smt is
even consulted.
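
Here's a minimal userspace sketch of that same condition shape, just to
illustrate the evaluation order (match_smt_stub is a made-up stand-in for
the kernel's match_smt(), not actual kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for the kernel's match_smt(); always false here,
 * as if no SMT sibling ever matched.
 */
static bool match_smt_stub(int a, int b)
{
	return false;
}

int main(void)
{
	bool has_smt = false;		/* SMT off, as in the dmesg above */
	int cpu = 0, i = 0;		/* the i == cpu iteration */

	/*
	 * Same shape as the first-loop condition in set_cpu_sibling_map():
	 * for i == cpu the right-hand side is never evaluated, so the CPU
	 * gets linked into its own sibling mask no matter what has_smt says.
	 */
	if ((i == cpu) || (has_smt && match_smt_stub(cpu, i)))
		printf("link CPU %d into the sibling mask of CPU %d\n", i, cpu);

	return 0;
}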

[    0.202067] smpboot: set_cpu_sibling_map: first loop, link mask llc
[    0.202077] smpboot: set_cpu_sibling_map: second loop, llc(this): 65528, o: 0, llc(o): 65528
[    0.202091] smpboot: set_cpu_sibling_map: second loop, link mask die

I've attached the debug diff.

And since llc(o), i.e. the cpu_llc_id of the *other* CPU in the loops in
set_cpu_sibling_map(), underflows, we're generating the funniest
thread_siblings masks. And when I then run 8 threads of nbench, they get
spread around the LLC domains in a very strange pattern which doesn't
give the normal scheduling spread one would expect for performance.
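
Btw, 65528 is exactly (u16)-8, i.e. a u16 that wrapped below zero by 8,
and cpu_llc_id is declared as a per-cpu u16, so the number itself already
smells like unsigned wraparound. Trivial userspace check, only
demonstrating the arithmetic:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t llc_id = 0;

	/* unsigned wraparound: (uint16_t)(0 - 8) == 65536 - 8 == 65528 */
	llc_id -= 8;

	printf("llc_id: %u\n", llc_id);	/* prints 65528, as in the dmesg */

	return 0;
}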

And this is just one workload - I can't imagine what else might be
influenced by this funkiness.

Oh, and other things like EDAC use cpu_llc_id, so they will be b0rked too.

So we absolutely need to fix that cpu_llc_id thing.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
-- 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 601d2b331350..5974098d8266 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -506,6 +506,9 @@ void set_cpu_sibling_map(int cpu)
 	struct cpuinfo_x86 *o;
 	int i, threads;
 
+	pr_info("%s: cpu: %d, has_smt: %d, has_mp: %d\n",
+		__func__, cpu, has_smt, has_mp);
+
 	cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
 
 	if (!has_mp) {
@@ -519,11 +522,19 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
-		if ((i == cpu) || (has_smt && match_smt(c, o)))
+		pr_info("%s: first loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
+		if ((i == cpu) || (has_smt && match_smt(c, o))) {
+			pr_info("%s: first loop, link mask smt\n", __func__);
 			link_mask(topology_sibling_cpumask, cpu, i);
+		}
 
-		if ((i == cpu) || (has_mp && match_llc(c, o)))
+		if ((i == cpu) || (has_mp && match_llc(c, o))) {
+			pr_info("%s: first loop, link mask llc\n", __func__);
 			link_mask(cpu_llc_shared_mask, cpu, i);
+		}
 
 	}
 
@@ -534,7 +545,12 @@ void set_cpu_sibling_map(int cpu)
 	for_each_cpu(i, cpu_sibling_setup_mask) {
 		o = &cpu_data(i);
 
+		pr_info("%s: second loop, llc(this): %d, o: %d, llc(o): %d\n",
+			__func__, per_cpu(cpu_llc_id, cpu),
+			o->cpu_index, per_cpu(cpu_llc_id, o->cpu_index));
+
 		if ((i == cpu) || (has_mp && match_die(c, o))) {
+			pr_info("%s: second loop, link mask die\n", __func__);
 			link_mask(topology_core_cpumask, cpu, i);
 
 			/*
