On 2 August 2014 01:06, Stephen Boyd <sb...@codeaurora.org> wrote:
> I have the same options. The difference is that my driver has a governor
> per policy. That's set with the CPUFREQ_HAVE_GOVERNOR_PER_POLICY flag.

You may call me stupid, but I got a bit confused after looking into the code
again. Why does the crash dump depend on this flag?

We *always* remove the governor-specific directory while switching governors
(of course, only once it has been updated for all CPUs). So on a dual-core
platform where CPU 0 and CPU 1 share a clock line, shouldn't switching
governors result in this crash dump too?
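
For context, my reading of the governor-switch path in cpufreq_set_policy() is
roughly the following (paraphrased from memory, not the exact source); the
POLICY_EXIT event on the old governor is what drops its sysfs directory:

	old_gov = policy->governor;
	if (old_gov) {
		__cpufreq_governor(policy, CPUFREQ_GOV_STOP);
		/* for ondemand/conservative this removes the governor's sysfs dir */
		__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
	}

	policy->governor = new_policy->governor;
	/* POLICY_INIT creates the new governor's dir, START kicks it off */
	if (!__cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT))
		__cpufreq_governor(policy, CPUFREQ_GOV_START);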

I may have the answer to my own stupid question, but I am not sure why that is
a problem. The only (and quite significant) difference this flag makes is the
location of the governor-specific directory (rough sketch of the relevant core
code after the list):
- w/o this flag: /sys/devices/system/cpu/cpufreq/<here>
- w/ this flag: /sys/devices/system/cpu/cpu*/cpufreq/<here>
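
Roughly, the core picks the parent kobject for that directory based on the
flag (paraphrased from drivers/cpufreq/cpufreq.c, exact code may differ):

	bool have_governor_per_policy(void)
	{
		return !!(cpufreq_driver->flags & CPUFREQ_HAVE_GOVERNOR_PER_POLICY);
	}

	static struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
	{
		if (have_governor_per_policy())
			return &policy->kobj;		/* per-policy: cpuX/cpufreq/<here> */

		return cpufreq_global_kobject;		/* global: cpufreq/<here> */
	}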

So, is there some issue with the sysfs lock on the <cpu*/cpufreq/> node, given
that while switching governors we also write to <cpu*/cpufreq/scaling_governor>
at that same location?

--
viresh
