Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-23 Thread Daniel J Blueman
On 03/22/2014 12:11 AM, Bjorn Helgaas wrote: [+cc Rafael, linux-acpi for _PXM questions] On Thu, Mar 20, 2014 at 9:38 PM, Daniel J Blueman wrote: On 21/03/2014 06:07, Bjorn Helgaas wrote: On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote: For systems with multiple servers and routed …

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-23 Thread Daniel J Blueman
On 03/22/2014 01:16 AM, Suravee Suthikulpanit wrote: On 3/20/2014 10:38 PM, Daniel J Blueman wrote: On 21/03/2014 06:07, Bjorn Helgaas wrote: [+cc linux-pci, Myron, Suravee, Kim, Aravind] On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote: For systems with multiple servers and routed fa…

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-21 Thread Suravee Suthikulpanit
On 3/20/2014 10:38 PM, Daniel J Blueman wrote: On 21/03/2014 06:07, Bjorn Helgaas wrote: [+cc linux-pci, Myron, Suravee, Kim, Aravind] On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote: For systems with multiple servers and routed fabric, all northbridges get assigned to the first serve…

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-21 Thread Bjorn Helgaas
[+cc Rafael, linux-acpi for _PXM questions] On Thu, Mar 20, 2014 at 9:38 PM, Daniel J Blueman wrote: > On 21/03/2014 06:07, Bjorn Helgaas wrote: >> On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman >> wrote: >>> >>> For systems with multiple servers and routed fabric, all northbridges get >>> as…

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-20 Thread Daniel J Blueman
On 21/03/2014 11:51, Suravee Suthikulpanit wrote: Bjorn, On a typical AMD system, there are two types of host bridges: * PCI Root Complex Host bridge (e.g. RD890, SR56xx, etc.) * CPU Host bridge Here is an example from a 2 sockets system: $ lspci [] The host bridge 00:00.0 is basically the …

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-20 Thread Suravee Suthikulpanit
Bjorn, On a typical AMD system, there are two types of host bridges: * PCI Root Complex Host bridge (e.g. RD890, SR56xx, etc.) * CPU Host bridge Here is an example from a 2 sockets system: $ lspci 00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI RD890 PCI to PCI bridge (external gfx0…

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-20 Thread Daniel J Blueman
On 21/03/2014 06:07, Bjorn Helgaas wrote: [+cc linux-pci, Myron, Suravee, Kim, Aravind] On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote: For systems with multiple servers and routed fabric, all northbridges get assigned to the first server. Fix this by also using the node reported from …

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-20 Thread Bjorn Helgaas
[+cc linux-pci, Myron, Suravee, Kim, Aravind] On Thu, Mar 13, 2014 at 5:43 AM, Daniel J Blueman wrote: > For systems with multiple servers and routed fabric, all northbridges get > assigned to the first server. Fix this by also using the node reported from > the PCI bus. For single-fabric systems …

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-14 Thread Daniel J Blueman
Hi Boris, On 14/03/2014 17:06, Borislav Petkov wrote: On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote: For systems with multiple servers and routed fabric, all northbridges get assigned to the first server. Fix this by also using the node reported from the PCI bus. For single-f…

Re: [PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-14 Thread Borislav Petkov
On Thu, Mar 13, 2014 at 07:43:01PM +0800, Daniel J Blueman wrote: > For systems with multiple servers and routed fabric, all northbridges get > assigned to the first server. Fix this by also using the node reported from > the PCI bus. For single-fabric systems, the northbriges are on PCI bus 0 > by…

[PATCH] Fix northbridge quirk to assign correct NUMA node

2014-03-13 Thread Daniel J Blueman
For systems with multiple servers and routed fabric, all northbridges get assigned to the first server. Fix this by also using the node reported from the PCI bus. For single-fabric systems, the northbridges are on PCI bus 0 by definition, which are on NUMA node 0 by definition, so this is invariant…