Hi,
Sorry for the delay, I just got back from holiday as well.
On Monday 07 Sep 2020 at 11:41:54 (+0530), Viresh Kumar wrote:
> On 04-09-20, 10:43, Ionela Voinescu wrote:
> > Do you know why it was designed this way in the first place?
>
> No.
>
> > I assumed it was designed like this (per-cpu cppc_cpudata structures) to
> > allow for the future addition of support for the HW_ALL CPPC coordination
> > type. In that case you can still
Hi Viresh,
On Friday 04 Sep 2020 at 10:36:04 (+0530), Viresh Kumar wrote:
[..]
> >  /* Per CPU container for runtime CPPC management. */
> >  struct cppc_cpudata {
> > -	int cpu;
> >  	struct cppc_perf_caps perf_caps;
> >  	struct cppc_perf_ctrls perf_ctrls;
> >  	struct cppc_perf_fb_ctrs
On 03-09-20, 12:19, Ionela Voinescu wrote:
> An issue is observed in the cpufreq CPPC driver when having dependency
> domains (PSD) and the policy->cpu is hotplugged out.
>
> Considering a platform with 4 CPUs and 2 PSD domains (CPUs 0 and 1 in
> PSD-1, CPUs 2 and 3 in PSD-2), cppc_cpufreq_cpu_init() will be called
> for the two cpufreq policies that