On Wednesday, October 28, 2015 12:13:17 PM Viresh Kumar wrote:
> On 28-10-15, 06:54, Rafael J. Wysocki wrote:
> > On Wednesday, October 28, 2015 10:14:51 AM Viresh Kumar wrote:
> > > In cases where a single policy controls multiple CPUs, a timer is
> > > queued for every cpu present in policy->cpus. When we reach the timer
> > > handler (which can be on multiple CPUs together) on any CPU, we trace
> > > CPU load for all policy->cpus and update the frequency accordingly.
> >
> > That would be in dbs_timer(), right?
>
> Yeah, and we already do stuff from within the mutex there.
>
> > > The lock is for protecting multiple CPUs to do the same thing
> > > together, as only its required to be done by a single CPU. Once any
> > > CPUs handler has completed, it updates the last update time and drops
> > > the mutex. At that point of time, other blocked handler (if any) check
> > > the last update time and return early.
> >
> > Well, that would mean we only needed to hold the lock around the
> > need_load_eval() evaluation in dbs_timer() if I'm not mistaken.
>
> Actually yeah, but then the fourth patch of this series uses the
> timer_mutex to fix a long standing problem (which was fixed by hacking
> the code earlier). And so we need to take the lock for the entire
> dbs_timer() routine.
I don't actually think that that patch is correct and even if it is,
we'll only need to do that *after* that patch, so at least it would be
fair to say a word about it in the changelog, wouldn't it?

> > We also should acquire it around updates of the sampling rate, which
> > essentially is set_sampling_rate().
>
> Why? In the worst case we may schedule the next timer for the earlier
> sampling rate. But do we care that much for that race, that we want to
> add locks here as well?

OK

That works because we actually hold the mutex around the whole function,
as otherwise we'd have seen races between delayed work items on
different CPUs sharing the policy.

> > Is there any reason to acquire it in cpufreq_governor_limits(), then,
> > for example?
>
> Yeah, we are calling dbs_check_cpu(dbs_data, cpu) from that path,
> which will reevaluate the load.

Which means that we should take the lock around dbs_check_cpu()
everywhere in a consistent way. Which in turn means that the lock
actually does more than you said.

My point is basically that we seem to have a vague idea about what the
lock is used for, while we need to know exactly why we need it.

Thanks,
Rafael