From: Long Li <lon...@microsoft.com>

On systems with a large number of NUMA nodes, there may be more NUMA
nodes than the number of MSI/MSI-X interrupts that a device requests.
The current code always picks NUMA nodes starting from node 0, up to
the number of interrupts requested. This may leave some later NUMA
nodes unused.
For example, if the system has 16 NUMA nodes and the device requests
8 interrupts, NUMA nodes 0 to 7 are assigned to those interrupts and
NUMA nodes 8 to 15 are unused. There are several problems with this
approach:

1. Later, when those managed IRQs are allocated, they cannot be
   assigned to NUMA nodes 8 to 15. This may concentrate IRQs on NUMA
   nodes 0 to 7.

2. Some upper layers assume the affinity masks have complete coverage
   of the NUMA nodes. For example, the block layer uses the affinity
   masks to decide how to map CPU queues to hardware queues; missing
   NUMA nodes in the masks may result in an uneven mapping of queues.
   In the above example of 16 NUMA nodes, CPU queues on NUMA nodes 0
   to 7 are assigned to hardware queues 0 to 7, respectively, but CPU
   queues on NUMA nodes 8 to 15 are all assigned to hardware queue 0.

Fix this problem by going over all NUMA nodes and assigning them
round-robin to all IRQs.

Signed-off-by: Long Li <lon...@microsoft.com>
---
 kernel/irq/affinity.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index f4f29b9d90ee..2d08b560d4b6 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -117,12 +117,13 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_copy(masks + curvec, node_to_cpumask[n]);
-			if (++done == numvecs)
-				break;
+			cpumask_or(masks + curvec, masks + curvec, node_to_cpumask[n]);
+			done++;
 			if (++curvec == last_affv)
 				curvec = affd->pre_vectors;
 		}
+		if (done > numvecs)
+			done = numvecs;
 		goto out;
 	}

--
2.14.1
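For illustration only (not part of the patch), below is a minimal
userspace sketch of the round-robin spreading that the hunk above
implements. The node count, vector count, and 4-CPUs-per-node layout
are assumptions made up for the demo, and a plain unsigned long stands
in for struct cpumask and cpumask_or():

	#include <stdio.h>

	#define NR_NODES	16	/* assumed: 16 NUMA nodes */
	#define NR_VECS		8	/* assumed: device asked for 8 vectors */
	#define CPUS_PER_NODE	4	/* assumed CPU layout, 64 CPUs total */

	int main(void)
	{
		unsigned long masks[NR_VECS] = { 0 };
		int curvec = 0;

		/*
		 * Walk every node, OR its CPUs into the current vector's
		 * mask (the cpumask_or() in the patch), and wrap curvec so
		 * nodes beyond NR_VECS land back on vectors 0, 1, ...
		 */
		for (int n = 0; n < NR_NODES; n++) {
			unsigned long node_cpus =
				((1UL << CPUS_PER_NODE) - 1) << (n * CPUS_PER_NODE);

			masks[curvec] |= node_cpus;
			if (++curvec == NR_VECS)
				curvec = 0;
		}

		for (int v = 0; v < NR_VECS; v++)
			printf("vector %d: cpu mask %016lx\n", v, masks[v]);

		return 0;
	}

With 16 nodes and 8 vectors, each vector mask ends up covering two
nodes (n and n + 8), so no node is missing from every mask -- the
coverage property the block layer example above relies on.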