Re: [PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-06 Thread Alison Schofield
On Wed, Apr 04, 2018 at 12:00:45PM -0700, Alison Schofield wrote:
> On Wed, Apr 04, 2018 at 11:42:11AM -0700, Tim Chen wrote:
> > On 04/04/2018 10:38 AM, Alison Schofield wrote:
> > > On Wed, Apr 04, 2018 at 10:24:49AM -0700, Tim Chen wrote:
> > >> On 04/03/2018 02:12 PM, Alison Schofield wrote:
> > >>
> > >>> +
> > >>> +   /*
> > >>> +* topology_sane() considers LLCs that span NUMA nodes to be
> > >>> +* insane and will display a warning message. Bypass the call
> > >>> +* to topology_sane() for snc_cpu's to avoid that warning.
> > >>> +*/
> > >>> +
> > >>> +   if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu)) {
> > >>> +   /* Indicate that package has NUMA nodes inside: */
> > >>> +   x86_has_numa_in_package = true;
> > >>
> > >> Why does x86_has_numa_in_package have to be set here when it would
> > >> have been done later in set_cpu_sibling_map?
> > > 
> > > Tim,
> > > I had that same thought when you commented on it previously. After
> > > discussing with DaveH, we decided that match_llc() and match_die(c, o)
> > > could be different and chose to be (cautiously) redundant.
> > > alisons
> > 
> > If it is redundant, I suggest it be removed, and only added if
> > there is truly a case where the current logic 
> > 
> > if (match_die(c, o) && !topology_same_node(c, o))
> > x86_has_numa_in_package = true;
> > 
> > fails.  And also the modification of this logic should be at the
> > same place for easy code maintenance. 
> 
> That makes good sense. I'll look to define the difference or remove
> the redundancy.
> 
> alisons
I found no reason for the redundancy, either through experimentation with my
Skylake or through code examination. I've removed it in v5. I'll see if
anyone claims a theoretical case.
alisons
> 
> > 
> > Tim  
> > 
> > > 
> > > 
> > > 
> > >>
> > >>> +
> > >>> +   /*
> > >>> +* false means 'c' does not share the LLC of 'o'.
> > >>> +* Note: this decision gets reflected all the way
> > >>> +* out to userspace.
> > >>> +*/
> > >>> +
> > >>> +   return false;
> > >>
> > >> Thanks.
> > >>
> > >> Tim
> > 


Re: [PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-04 Thread Alison Schofield
On Wed, Apr 04, 2018 at 11:42:11AM -0700, Tim Chen wrote:
> On 04/04/2018 10:38 AM, Alison Schofield wrote:
> > On Wed, Apr 04, 2018 at 10:24:49AM -0700, Tim Chen wrote:
> >> On 04/03/2018 02:12 PM, Alison Schofield wrote:
> >>
> >>> +
> >>> + /*
> >>> +  * topology_sane() considers LLCs that span NUMA nodes to be
> >>> +  * insane and will display a warning message. Bypass the call
> >>> +  * to topology_sane() for snc_cpu's to avoid that warning.
> >>> +  */
> >>> +
> >>> + if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu)) {
> >>> + /* Indicate that package has NUMA nodes inside: */
> >>> + x86_has_numa_in_package = true;
> >>
> >> Why does x86_has_numa_in_package have to be set here when it would have
> >> been done later in set_cpu_sibling_map?
> > 
> > Tim,
> > I had that same thought when you commented on it previously. After
> > discussing with DaveH, we decided that match_llc() and match_die(c, o)
> > could be different and chose to be (cautiously) redundant.
> > alisons
> 
> If it is redundant, I suggest it be removed, and only added if
> there is truly a case where the current logic 
> 
> if (match_die(c, o) && !topology_same_node(c, o))
> x86_has_numa_in_package = true;
> 
> fails.  And also the modification of this logic should be at the
> same place for easy code maintenance. 

That makes good sense. I'll look to define the difference or remove
the redundancy.

alisons

> 
> Tim  
> 
> > 
> > 
> > 
> >>
> >>> +
> >>> + /*
> >>> +  * false means 'c' does not share the LLC of 'o'.
> >>> +  * Note: this decision gets reflected all the way
> >>> +  * out to userspace.
> >>> +  */
> >>> +
> >>> + return false;
> >>
> >> Thanks.
> >>
> >> Tim
> 


Re: [PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-04 Thread Tim Chen
On 04/04/2018 10:38 AM, Alison Schofield wrote:
> On Wed, Apr 04, 2018 at 10:24:49AM -0700, Tim Chen wrote:
>> On 04/03/2018 02:12 PM, Alison Schofield wrote:
>>
>>> +
>>> +   /*
>>> +* topology_sane() considers LLCs that span NUMA nodes to be
>>> +* insane and will display a warning message. Bypass the call
>>> +* to topology_sane() for snc_cpu's to avoid that warning.
>>> +*/
>>> +
>>> +   if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu)) {
>>> +   /* Indicate that package has NUMA nodes inside: */
>>> +   x86_has_numa_in_package = true;
>>
>> Why does x86_has_numa_in_package have to be set here when it would have
>> been done later in set_cpu_sibling_map?
> 
> Tim,
> I had that same thought when you commented on it previously. After
> discussing with DaveH, we decided that match_llc() and match_die(c, o)
> could be different and chose to be (cautiously) redundant.
> alisons

If it is redundant, I suggest it be removed, and only added if
there is truly a case where the current logic 

if (match_die(c, o) && !topology_same_node(c, o))
x86_has_numa_in_package = true;

fails. Also, any modification of this logic should be made in the
same place, for easier code maintenance.

Tim  

> 
> 
> 
>>
>>> +
>>> +   /*
>>> +* false means 'c' does not share the LLC of 'o'.
>>> +* Note: this decision gets reflected all the way
>>> +* out to userspace.
>>> +*/
>>> +
>>> +   return false;
>>
>> Thanks.
>>
>> Tim



Re: [PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-04 Thread Alison Schofield
On Wed, Apr 04, 2018 at 10:24:49AM -0700, Tim Chen wrote:
> On 04/03/2018 02:12 PM, Alison Schofield wrote:
> 
> > +
> > +   /*
> > +* topology_sane() considers LLCs that span NUMA nodes to be
> > +* insane and will display a warning message. Bypass the call
> > +* to topology_sane() for snc_cpu's to avoid that warning.
> > +*/
> > +
> > +   if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu)) {
> > +   /* Indicate that package has NUMA nodes inside: */
> > +   x86_has_numa_in_package = true;
> 
> Why does x86_has_numa_in_package have to be set here when it would have
> been done later in set_cpu_sibling_map?

Tim,
I had that same thought when you commented on it previously. After
discussing with DaveH, we decided that match_llc() and match_die(c, o)
could be different and chose to be (cautiously) redundant.
alisons



> 
> > +
> > +   /*
> > +* false means 'c' does not share the LLC of 'o'.
> > +* Note: this decision gets reflected all the way
> > +* out to userspace.
> > +*/
> > +
> > +   return false;
> 
> Thanks.
> 
> Tim


Re: [PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-04 Thread Tim Chen
On 04/03/2018 02:12 PM, Alison Schofield wrote:

> +
> + /*
> +  * topology_sane() considers LLCs that span NUMA nodes to be
> +  * insane and will display a warning message. Bypass the call
> +  * to topology_sane() for snc_cpu's to avoid that warning.
> +  */
> +
> + if (!topology_same_node(c, o) && x86_match_cpu(snc_cpu)) {
> + /* Indicate that package has NUMA nodes inside: */
> + x86_has_numa_in_package = true;

Why does x86_has_numa_in_package have to be set here when it would have
been done later in set_cpu_sibling_map?

> +
> + /*
> +  * false means 'c' does not share the LLC of 'o'.
> +  * Note: this decision gets reflected all the way
> +  * out to userspace.
> +  */
> +
> + return false;

Thanks.

Tim


[PATCH v4] x86,sched: allow topologies where NUMA nodes share an LLC

2018-04-03 Thread Alison Schofield
From: Alison Schofield 

Intel's Skylake Server CPUs have a different LLC topology than previous
generations. When in Sub-NUMA-Clustering (SNC) mode, the package is
divided into two "slices", each containing half the cores, half the LLC,
and one memory controller; each slice is enumerated to Linux as a
NUMA node. This is similar to how the cores and LLC were arranged
for the Cluster-On-Die (CoD) feature.

CoD allowed the same cache line to be present in each half of the LLC.
But, with SNC, each line is only ever present in *one* slice. This
means that the portion of the LLC *available* to a CPU depends on the
data being accessed:

Remote socket: entire package LLC is shared
Local socket->local slice: data goes into local slice LLC
Local socket->remote slice: data goes into remote-slice LLC. Slightly
higher latency than local slice LLC.

The biggest implication from this is that a process accessing all
NUMA-local memory only sees half the LLC capacity.

The CPU describes its cache hierarchy with the CPUID instruction. One
of the CPUID leaves enumerates the "logical processors sharing this
cache". This information is used for scheduling decisions so that tasks
move more freely between CPUs sharing the cache.

But, the CPUID for the SNC configuration discussed above enumerates
the LLC as being shared by the entire package. This is not 100%
precise because the entire cache is not usable by all accesses. But,
it *is* the way the hardware enumerates itself, and this is not likely
to change.

The userspace visible impact of all the above is that the sysfs info
reports the entire LLC as being available to the entire package. As
noted above, this is not true for local socket accesses. This patch
does not correct the sysfs info. It is the same, pre and post patch.

This patch continues to allow this SNC topology and it does so without
complaint. It eliminates a warning that looks like this:

sched: CPU #3's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency.

The warning comes from the topology_sane() check in smpboot.c.
To fix this, add a vendor and model specific check to never call
topology_sane() for these systems. Also, just like "Cluster-on-Die"
we throw out the "coregroup" sched_domain_topology_level and use
NUMA information from the SRAT alone.

This is OK at least on the hardware we are immediately concerned about
because the LLC sharing happens at both the slice and at the package
level, which are also NUMA boundaries.

Signed-off-by: Alison Schofield 
Cc: Dave Hansen 
Cc: Tony Luck 
Cc: Tim Chen 
Cc: "H. Peter Anvin" 
Cc: Borislav Petkov 
Cc: Peter Zijlstra (Intel) 
Cc: David Rientjes 
Cc: Igor Mammedov 
Cc: Prarit Bhargava 
Cc: brice.gog...@gmail.com
Cc: Ingo Molnar 
---

Changes in v4:

 * Added this to the patch description above:

   The userspace visible impact of all the above is that the sysfs info
   reports the entire LLC as being available to the entire package. As
   noted above, this is not true for local socket accesses. This patch
   does not correct the sysfs info. It is the same, pre and post patch.

   This patch continues to allow this SNC topology and it does so without
   complaint. It eliminates a warning that looks like this:
  
 * Changed the code comment per PeterZ/DaveH discussion wrt bypassing
   that topology_sane() check in match_llc()
/*
 * false means 'c' does not share the LLC of 'o'.
 * Note: this decision gets reflected all the way
 * out to userspace
 */
   This message hopes to clarify what happens when we return false.
   Note that returning false is *not* new to this patch. Without
   this patch we always returned false - with a warning. This avoids
   that warning and returns false directly.

 * Remove __initconst from snc_cpu[] declaration that I had added in
   v3. This is not an init time only path. 

 * Do not deal with the wrong sysfs info. It was wrong before this
   patch and it will be the exact same 'wrongness' after this patch.

   We can address the sysfs reporting separately. Here are some options:
   1) Change the way the LLC-size is reported.  Enumerate two separate,
  half-sized LLCs shared only by the slice when SNC mode is on.
   2) Do not export the sysfs info that is wrong. Prevents userspace
  from making bad decisions based on inaccurate info.


Changes in v3:

 * Use x86_match_cpu() for vendor & model check and moved related
   comments to the array define. (Still just one system model)

 * Updated the comments surrounding the topology_sane() check.


Changes in v2:

 * Add vendor check (Intel) where we only had a model check (Skylake_X).
   Considered the suggestion of adding a new flag here but thought that
   to be overkill for this usage.

 * Return false, instead of true, from match_llc() per reviewer suggestion.
   That also cleaned up a topology broken bug message in sched_domain().

 * Updated the