On 3/20/2014 10:38 PM, Daniel J Blueman wrote:
On 21/03/2014 06:07, Bjorn Helgaas wrote:
[+cc linux-pci, Myron, Suravee, Kim, Aravind]

On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman <dan...@numascale.com> wrote:
For systems with multiple servers and routed fabric, all northbridges get
assigned to the first server. Fix this by also using the node reported from
the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0
by definition, which is on NUMA node 0 by definition, so this is invariant
on most systems.

Tested on fam10h and fam15h single and multi-fabric systems and candidate
for stable.

I wish this had been cc'd to linux-pci.  We're talking about a related
change by Suravee there.  In fact, we were hoping this quirk could be
removed altogether.

Noted.

I don't understand what this quirk is doing.  Normally we discover the
NUMA node for a PCI host bridge via the ACPI _PXM method.  The way
_PXM works is that every PCI device in the hierarchy below the bridge
inherits the same node number as the host bridge.  I first thought
this might be a workaround for a system that lacks _PXM, but I don't
think that can be right, because you're only changing the node for a
few devices, not the whole hierarchy.
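For illustration, here is a minimal sketch (an assumption about the usual
x86/ACPI enumeration path, not the exact kernel code) of how that
inheritance plays out: the node derived from the host bridge's _PXM is
recorded on the root bus, and devices enumerated beneath it pick it up via
pcibus_to_node():

#include <linux/numa.h>
#include <linux/pci.h>
#include <linux/topology.h>

/* Simplified sketch: a PCI device takes the NUMA node of the bus it
 * sits on, which ultimately comes from the host bridge's _PXM. */
static void sketch_inherit_node(struct pci_dev *dev)
{
	int node = pcibus_to_node(dev->bus);	/* node from _PXM, via the bus */

	if (node != NUMA_NO_NODE)
		set_dev_node(&dev->dev, node);	/* device inherits the bus node */
}
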
So I suspect the problem is more complicated, and maybe _PXM is
insufficient to describe the topology?  Are there subtrees that should
have nodes different from the host bridge?

Yes; see below.

I know this patch is already in v3.14-rc7, but I'd still like to
understand it so we can do the right thing with Suravee's patch.

The _PXM method associates each northbridge with the first NUMA node of its
fabric: 0 in single-fabric systems, and e.g. 4 for the second server in a
multi-fabric system with two dual-module Opterons (each with 2 NUMA nodes
internally), since the northbridges appear in the PCI tree under the host
bridge, not above it [1].

Daniel,

That lspci output looks interesting; what value does pci_bus_to_node()
return on your system for each fabric?

Suravee


With _PXM, the rest of the PCI bus hierarchy has the right NUMA node
associated, but the northbridge PCI devices should be associated with their
actual NUMA nodes: 0, 1, 2, 3 for the first server in this example. The
quirk fixes this up; irqbalance, at least, uses this NUMA data exposed
in /sys.
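
For reference, a minimal sketch of the shape of the fixup (modeled on
quirk_amd_nb_node() in arch/x86/kernel/quirks.c; the 0x60 register offset
and 3-bit node mask follow that quirk, but treat the details here as
illustrative rather than authoritative):

#include <linux/nodemask.h>
#include <linux/pci.h>
#include <linux/topology.h>

/* Function 0 of each northbridge reports its node ID within the local
 * fabric in the low 3 bits of config register 0x60; OR-ing in the node
 * of the PCI bus offsets that ID correctly on multi-fabric systems,
 * where bus 0 is no longer necessarily node 0. */
static void sketch_amd_nb_node(struct pci_dev *dev)
{
	struct pci_dev *nb_ht;
	unsigned int devfn;
	u32 node;
	u32 val;

	/* Find function 0 (the HT config function) of this northbridge. */
	devfn = PCI_DEVFN(PCI_SLOT(dev->devfn), 0);
	nb_ht = pci_get_slot(dev->bus, devfn);
	if (!nb_ht)
		return;

	pci_read_config_dword(nb_ht, 0x60, &val);
	node = pcibus_to_node(dev->bus) | (val & 7);

	/* Some hardware can report an invalid node, so check first. */
	if (node_online(node))
		set_dev_node(&dev->dev, node);

	pci_dev_put(nb_ht);
}

With that in place, /sys/bus/pci/devices/*/numa_node for each northbridge
function reports the device's own node rather than the first node of its
fabric, which is the value irqbalance consumes.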

The alternative to the quirk may be to express the northbridge PCI devices
explicitly in the AML with their own _PXM methods. If that's valid, it may
be the more honest approach, though the quirk would still be needed for most
BIOSes; I can check the AML on a few servers to confirm, if helpful.

Thanks,
   Daniel

[1] http://quora.org/2014/lspci.txt

