On Mon, Apr 13 2026 at 15:43, Qiliang Yuan wrote:
> +     irq_lock_sparse();
> +     for_each_active_irq(irq) {
> +             struct irq_data *irqd;

Please move the declaration into the scope where it is used.
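
For illustration, that just means declaring it inside the guarded scope, which is
the only place it is used (sketch only, not compile-tested):

```c
	scoped_guard(raw_spinlock_irqsave, &desc->lock) {
		/* Declared in the narrowest scope that uses it */
		struct irq_data *irqd = irq_desc_get_irq_data(desc);
		...
	}
```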

> +             struct irq_desc *desc;
> +
> +             desc = irq_to_desc(irq);
> +             if (!desc)
> +                     continue;
> +
> +             scoped_guard(raw_spinlock_irqsave, &desc->lock) {
> +                     irqd = irq_desc_get_irq_data(desc);
> +                     if (!irqd_affinity_is_managed(irqd) || !desc->action ||
> +                         !irq_data_get_irq_chip(irqd))
> +                             continue;

That's a pretty random choice of conditions.

> +                     /*
> +                      * Re-apply existing affinity to honor the new
> +                      * housekeeping mask via __irq_set_affinity() logic.
> +                      */
> +                     irq_set_affinity_locked(irqd, irq_data_get_affinity_mask(irqd), false);

That's not sufficient. Assume an interrupt was shut down before the
change because there was no online CPU in the affinity mask, but now the
affinity mask changes so there is an online CPU. What starts it up?
The same applies the other way around.
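
Just to illustrate the two missing transitions, a rough sketch using the
kernel-internal startup/shutdown helpers (irq_startup(), irq_shutdown(),
irqd_irq_started()). This is illustrative only, not the prescribed fix:

```c
	scoped_guard(raw_spinlock_irqsave, &desc->lock) {
		struct irq_data *irqd = irq_desc_get_irq_data(desc);
		const struct cpumask *aff = irq_data_get_affinity_mask(irqd);

		if (!irqd_affinity_is_managed(irqd))
			continue;

		if (irqd_irq_started(irqd) &&
		    !cpumask_intersects(aff, cpu_online_mask)) {
			/* No online CPU left in the mask: shut it down */
			irq_shutdown(desc);
		} else if (!irqd_irq_started(irqd) && desc->action &&
			   cpumask_intersects(aff, cpu_online_mask)) {
			/* An online CPU became eligible again: start it up */
			irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
		}
	}
```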

> +static struct notifier_block irq_housekeeping_nb = {
> +     .notifier_call = irq_housekeeping_reconfigure,
> +};
> +
> +static int __init irq_init_housekeeping_notifier(void)
> +{
> +     housekeeping_register_notifier(&irq_housekeeping_nb);
> +     return 0;
> +}
> +core_initcall(irq_init_housekeeping_notifier);

I fundamentally despise notifiers, especially when they are just invoking
something which is built in.
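
As this code is built in, the housekeeping update path can simply call the
reconfiguration function directly, with no notifier chain in between. A sketch,
assuming the function drops the notifier signature and becomes a plain void
function; the call site name is hypothetical:

```c
/* In the irq core: an ordinary function, no notifier registration */
void irq_housekeeping_reconfigure(void);

/* Hypothetical call site in the housekeeping mask update path */
static void housekeeping_apply_new_mask(const struct cpumask *mask)
{
	/* ... update the housekeeping masks ... */
	irq_housekeeping_reconfigure();	/* direct call */
}
```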
