On 14/12/20 15:54, Lai Jiangshan wrote:
> @@ -1848,11 +1848,11 @@ static void worker_attach_to_pool(struct worker *worker,
>  {
>       mutex_lock(&wq_pool_attach_mutex);
>
> -     /*
> -      * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any
> -      * online CPUs.  It'll be re-applied when any of the CPUs come up.
> -      */
> -     set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> +     /* Is there any cpu in pool->attrs->cpumask online? */
> +     if (cpumask_any_and(pool->attrs->cpumask, wq_online_cpumask) < nr_cpu_ids)

Nit: cpumask_any_and() < nr_cpu_ids is just "the two masks intersect", so this
could be spelled more directly as:

  if (cpumask_intersects(pool->attrs->cpumask, wq_online_cpumask))

> +             WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask) < 0);
> +     else
> +             WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);

So for the late-spawned per-CPU kworker case: the outgoing CPU will already
have been cleared from wq_online_cpumask by that point, so the worker gets its
affinity reset to cpu_possible_mask, and the subsequent wakeup will make sure
it ends up on an active CPU.

Seems alright to me.
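
For my own understanding, with the cpumask_intersects() nit folded in, the
attach path would read roughly like this (untested sketch; wq_online_cpumask
being the mask this series introduces):

	mutex_lock(&wq_pool_attach_mutex);

	/*
	 * If at least one CPU in the pool's cpumask is still online, apply
	 * the pool's cpumask directly.  Otherwise (e.g. the pool's CPU is on
	 * its way out), fall back to cpu_possible_mask so the worker can
	 * still run somewhere; the proper mask gets re-applied when a
	 * matching CPU comes back online.
	 */
	if (cpumask_intersects(pool->attrs->cpumask, wq_online_cpumask))
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);
	else
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  cpu_possible_mask) < 0);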

>
>       /*
>        * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains
