On 04/10/19 11:17, Valentin Schneider wrote:
> On 10/04/2019 10:27, Qais Yousef wrote:
> [...]
> >> @@ -1066,9 +1067,14 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
> >>    sg = *per_cpu_ptr(sdd->sg, cpu);
> >>    sg->sgc = *per_cpu_ptr(sdd->sgc, cpu);
> >>  
> >> -  /* For claim_allocations: */
> >> -  atomic_inc(&sg->ref);
> >> -  atomic_inc(&sg->sgc->ref);
> >> +  /* Increase refcounts for claim_allocations: */
> >> +  already_visited = atomic_inc_return(&sg->ref) > 1;
> >> +  /* sgc visits should follow a similar trend as sg */
> >> +  WARN_ON(already_visited != (atomic_inc_return(&sg->sgc->ref) > 1));
> > 
> > NIT: I think it would be better to not have any side effect of calling
> > WARN_ON(). Ie: do the atomic_inc_return() outside the WARN_ON() condition.
> > Having two bools, already_visited_sg and already_visited_sgc, will make
> > the code more readable too.
> > 
> 
> I agree it's not the nicest reading flow there is. It's already gone in
> tip/sched/core but if needed I can resend with this extra separation.

It was just a nit from my side. So for me it's not worth a resend if it's
already accepted.

--
Qais Yousef
