In a system with a single initiator node, and one or more memory-only
'target' nodes, the memory-only node(s) would fail to register their
initiator node correctly. i.e. in sysfs:

  # ls /sys/devices/system/node/node0/access0/targets/
  node0

Whereas the correct behavior should be:

  # ls /sys/devices/system/node/node0/access0/targets/
  node0 node1

This happened because hmat_register_target_initiators() uses list_sort()
to sort the initiator list, but the sort comparison function
(initiator_cmp()) is overloaded to also set the node mask's bits.

In a system with a single initiator, the list is singular, and list_sort
elides the comparison helper call. Thus the node mask never gets set,
and the subsequent search for the best initiator comes up empty.

Add a new helper to sort the initiator list, and handle the singular
list corner case by setting the node mask for it explicitly.

Reported-by: Chris Piper <chris.d.pi...@intel.com>
Cc: <sta...@vger.kernel.org>
Cc: Rafael J. Wysocki <raf...@kernel.org>
Cc: Liu Shixin <liushix...@huawei.com>
Cc: Dan Williams <dan.j.willi...@intel.com>
Signed-off-by: Vishal Verma <vishal.l.ve...@intel.com>
---
 drivers/acpi/numa/hmat.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
index 144a84f429ed..cd20b0e9cdfa 100644
--- a/drivers/acpi/numa/hmat.c
+++ b/drivers/acpi/numa/hmat.c
@@ -573,6 +573,30 @@ static int initiator_cmp(void *priv, const struct list_head *a,
        return ia->processor_pxm - ib->processor_pxm;
 }
 
+static int initiators_to_nodemask(unsigned long *p_nodes)
+{
+       /*
+        * list_sort doesn't call @cmp (initiator_cmp) for 0 or 1 sized lists.
+        * For a single-initiator system with other memory-only nodes, this
+        * means an empty p_nodes mask, since that is set by initiator_cmp().
+        * Special case the singular list, and make sure the node mask gets set
+        * appropriately.
+        */
+       if (list_empty(&initiators))
+               return -ENXIO;
+
+       if (list_is_singular(&initiators)) {
+               struct memory_initiator *initiator = list_first_entry(
+                       &initiators, struct memory_initiator, node);
+
+               set_bit(initiator->processor_pxm, p_nodes);
+               return 0;
+       }
+
+       list_sort(p_nodes, &initiators, initiator_cmp);
+       return 0;
+}
+
 static void hmat_register_target_initiators(struct memory_target *target)
 {
        static DECLARE_BITMAP(p_nodes, MAX_NUMNODES);
@@ -609,7 +633,9 @@ static void hmat_register_target_initiators(struct memory_target *target)
         * initiators.
         */
        bitmap_zero(p_nodes, MAX_NUMNODES);
-       list_sort(p_nodes, &initiators, initiator_cmp);
+       if (initiators_to_nodemask(p_nodes) < 0)
+               return;
+
        if (!access0done) {
                for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
                        loc = localities_types[i];
@@ -643,7 +669,9 @@ static void hmat_register_target_initiators(struct memory_target *target)
 
        /* Access 1 ignores Generic Initiators */
        bitmap_zero(p_nodes, MAX_NUMNODES);
-       list_sort(p_nodes, &initiators, initiator_cmp);
+       if (initiators_to_nodemask(p_nodes) < 0)
+               return;
+
        for (i = WRITE_LATENCY; i <= READ_BANDWIDTH; i++) {
                loc = localities_types[i];
                if (!loc)
-- 
2.38.1
