On 04/12/2017 10:10 AM, Hillf Danton wrote:
> On April 11, 2017 10:06 PM Vlastimil Babka wrote: 
>>
>>  static void cpuset_change_task_nodemask(struct task_struct *tsk,
>>                                      nodemask_t *newmems)
>>  {
>> -    bool need_loop;
>> -
>>      task_lock(tsk);
>> -    /*
>> -     * Determine if a loop is necessary if another thread is doing
>> -     * read_mems_allowed_begin().  If at least one node remains unchanged and
>> -     * tsk does not have a mempolicy, then an empty nodemask will not be
>> -     * possible when mems_allowed is larger than a word.
>> -     */
>> -    need_loop = task_has_mempolicy(tsk) ||
>> -                    !nodes_intersects(*newmems, tsk->mems_allowed);
>>
>> -    if (need_loop) {
>> -            local_irq_disable();
>> -            write_seqcount_begin(&tsk->mems_allowed_seq);
>> -    }
>> +    local_irq_disable();
>> +    write_seqcount_begin(&tsk->mems_allowed_seq);
>>
>> -    nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
>>      mpol_rebind_task(tsk, newmems);
>>      tsk->mems_allowed = *newmems;
>>
>> -    if (need_loop) {
>> -            write_seqcount_end(&tsk->mems_allowed_seq);
>> -            local_irq_enable();
>> -    }
>> +    write_seqcount_end(&tsk->mems_allowed_seq);
>>
> Doubt if we'd enable irqs again.

Ugh, thanks for catching this. Looks like my testing config didn't have
lockup detectors enabled.

>>      task_unlock(tsk);
>>  }
>> --
>> 2.12.2
> 
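For the record, the missing piece is just the unconditional re-enable that the
removed need_loop branch used to do; something like this on top of the patch
(untested, hunk context approximate):

```
@@ static void cpuset_change_task_nodemask(struct task_struct *tsk,
 	write_seqcount_end(&tsk->mems_allowed_seq);
+	local_irq_enable();
 
 	task_unlock(tsk);
```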
