On 10/04/2019 10:27, Qais Yousef wrote:
[...]
>> @@ -1066,9 +1067,14 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
>>      sg = *per_cpu_ptr(sdd->sg, cpu);
>>      sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
>>  
>> -    /* For claim_allocations: */
>> -    atomic_inc(&sg->ref);
>> -    atomic_inc(&sg->sgc->ref);
>> +    /* Increase refcounts for claim_allocations: */
>> +    already_visited = atomic_inc_return(&sg->ref) > 1;
>> +    /* sgc visits should follow a similar trend as sg */
>> +    WARN_ON(already_visited != (atomic_inc_return(&sg->sgc->ref) > 1));
> 
> NIT: I think it would be better not to have any side effects inside the
> WARN_ON() condition. I.e., do the atomic_inc_return() calls outside of it.
> Having two bools, already_visited_sg and already_visited_sgc, would make the
> code more readable too.
> 

I agree it's not the nicest reading flow. The patch has already been merged
into tip/sched/core, but if needed I can resend it with this extra separation.
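For reference, a minimal sketch of the separation being suggested (the
variable names already_visited_sg/already_visited_sgc come from the comment
above; this is only an illustration, not the code that went into
tip/sched/core):

	bool already_visited_sg, already_visited_sgc;

	/* Increase refcounts for claim_allocations: */
	already_visited_sg  = atomic_inc_return(&sg->ref) > 1;
	already_visited_sgc = atomic_inc_return(&sg->sgc->ref) > 1;

	/* sgc visits should follow a similar trend as sg */
	WARN_ON(already_visited_sg != already_visited_sgc);

	/* If we have already visited that group, it's already initialized. */
	if (already_visited_sg)
		return sg;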

> Thanks
> 
> --
> Qais Yousef
> 
>> +
>> +    /* If we have already visited that group, it's already initialized. */
>> +    if (already_visited)
>> +            return sg;
>>  
>>      if (child) {
>>              cpumask_copy(sched_group_span(sg), sched_domain_span(child));
>> --
>> 2.20.1
>>
