On 12/19/2013 02:19 AM, rui wang wrote:
> On 12/19/13, Prarit Bhargava <pra...@redhat.com> wrote:
>>
>> On 12/03/2013 09:48 PM, rui wang wrote:
>>> On 11/20/13, Prarit Bhargava <pra...@redhat.com> wrote:
>>> Have you considered the case when an IRQ is destined to more than one CPU?
>>> e.g.:
>>>
>>> bash# cat /proc/irq/89/smp_affinity_list
>>> 30,62
>>> bash#
>>>
>>> In this case offlining CPU30 does not seem to require an empty vector
>>> slot. It seems that we only need to change the affinity mask of irq89.
>>> Your check_vectors() assumed that each irq on the offlining cpu
>>> requires a new vector slot.
>>>
>>
>> Rui,
>>
>> The smp_affinity_list only indicates a preferred destination of the IRQ,
>> not the *actual* location of the IRQ. So the IRQ is on one of cpu 30 or
>> 62, but not both simultaneously.
>>
>
> It depends on how the IOAPIC (or MSI/MSI-X) is configured. An IRQ can be
> simultaneously broadcast to all destination CPUs (Fixed Mode), or
> delivered to the CPU executing at the lowest priority (Lowest Priority
> Mode). It's programmed in the Delivery Mode bits of the IOAPIC's I/O
> Redirection Table registers, or the Message Data Register in the case
> of MSI/MSI-X.
Hmm ... I didn't realize that this was a possibility. I'll go back and
rework the patch.

Thanks for the info Rui!

P.

>
>> If the case is that 62 is being brought down, then the smp_affinity mask
>> will be updated to reflect only cpu 30 (and vice versa).
>>
>
> Yes, the affinity mask should be updated. But if the IRQ was destined to
> more than one CPU, your "this_counter" does not seem to count the
> right numbers. Are you saying that the smp_affinity mask is broken on
> Linux, so that there's no way to configure an IRQ to target more than
> one CPU?
>
> Thanks
> Rui
>
>> P.
>>

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/