On 01/07/2014 12:54 PM, Luck, Tony wrote:
> +     for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
> +             irq = __this_cpu_read(vector_irq[vector]);
> +             if (irq >= 0) {
> +                     desc = irq_to_desc(irq);
> +                     data = irq_desc_get_irq_data(desc);
> +                     cpumask_copy(&affinity_new, data->affinity);
> +                     cpu_clear(this_cpu, affinity_new);
> +                     /*
> +                      * The check below determines if this irq requires
> +                      * an empty vector_irq[irq] entry on an online
> +                      * cpu.
> +                      *
> +                      * The code only counts active non-percpu irqs, and
> +                      * those irqs that are not linked to on an online cpu.
> +                      * The first test is trivial, the second is not.
> +                      *
> +                      * The second test takes into account the
> +                      * account that a single irq may be mapped to multiple
> +                      * cpu's vector_irq[] (for example IOAPIC cluster
> +                      * mode).  In this case we have two
> +                      * possibilities:
> +                      *
> +                      * 1) the resulting affinity mask is empty; that is
> +                      * this the down'd cpu is the last cpu in the irq's
> +                      * affinity mask, and
> Code says "||" - so I think comment should say "or".
> +                      *
> +                      * 2) the resulting affinity mask is no longer
> +                      * a subset of the online cpus but the affinity
> +                      * mask is not zero; that is the down'd cpu is the
> +                      * last online cpu in a user set affinity mask.
> +                      *
> +                      * In both possibilities, we need to remap the irq
> +                      * to a new vector_irq[].
> +                      *
> +                      */
> +                     if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
> +                         (cpumask_empty(&affinity_new) ||
> +                          !cpumask_subset(&affinity_new, &online_new)))
> +                             this_count++;
> +             }
> 
> That's an impressive 6:1 ratio of lines-of-comment to lines-of-code!

Heh -- I couldn't decide whether to keep it all together in one comment or
divide it up.  I guess it does look less scary divided up.  So how about
this (sorry for the cut-and-paste):


        for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
                irq = __this_cpu_read(vector_irq[vector]);
                if (irq >= 0) {
                        desc = irq_to_desc(irq);
                        data = irq_desc_get_irq_data(desc);
                        cpumask_copy(&affinity_new, data->affinity);
                        cpu_clear(this_cpu, affinity_new);

                        /* Do not count inactive or per-cpu irqs. */
                        if (!irq_has_action(irq) || irqd_is_per_cpu(data))
                                continue;

                        /*
                         * A single irq may be mapped to multiple
                         * cpu's vector_irq[] (for example IOAPIC cluster
                         * mode).  In this case we have two
                         * possibilities:
                         *
                         * 1) the resulting affinity mask is empty; that is,
                         * the down'd cpu is the last cpu in the irq's
                         * affinity mask, or
                         *
                         * 2) the resulting affinity mask is no longer
                         * a subset of the online cpus but the affinity
                         * mask is not zero; that is, the down'd cpu is the
                         * last online cpu in a user-set affinity mask.
                         */
                        if (cpumask_empty(&affinity_new) ||
                            !cpumask_subset(&affinity_new, &online_new))
                                this_count++;
                }
        }


Everyone okay with that?

P.