On 05/08/2019 09:42, Martin Kepplinger wrote:
> On 05.08.19 09:39, Daniel Lezcano wrote:
>> On 05/08/2019 08:53, Martin Kepplinger wrote:
>>
>> [ ... ]
>>
> +static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device
> *idle_cdev)
> +{
> + s64 next_wakeup;
> + unsigned
On 05.08.19 09:39, Daniel Lezcano wrote:
> On 05/08/2019 08:53, Martin Kepplinger wrote:
>
> [ ... ]
>
+static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device
*idle_cdev)
+{
+ s64 next_wakeup;
+ unsigned long state = idle_cdev->state;
+
+ /*
On 05.08.19 09:37, Daniel Lezcano wrote:
> On 05/08/2019 07:11, Martin Kepplinger wrote:
>> ---
>
> [ ... ]
>
>>> +static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device
>>> *idle_cdev)
>>> +{
>>> + s64 next_wakeup;
>>> + unsigned long state = idle_cdev->state;
>>> +
>>> + /*
On 05/08/2019 08:53, Martin Kepplinger wrote:
[ ... ]
>>> +static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device
>>> *idle_cdev)
>>> +{
>>> + s64 next_wakeup;
>>> + unsigned long state = idle_cdev->state;
>>> +
>>> + /*
>>> +* The function should not be called when there is
On 05/08/2019 07:11, Martin Kepplinger wrote:
> ---
[ ... ]
>> +static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device *idle_cdev)
>> +{
>> +s64 next_wakeup;
>> +unsigned long state = idle_cdev->state;
>> +
>> +/*
>> + * The function should not be called when there is
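For reference, the helper these previews keep truncating computes the run
duration between two idle injection cycles. A minimal sketch of the
arithmetic under review, assuming idle_cdev->idle_cycle holds the injected
idle duration in microseconds and 'state' the idle ratio in percent; the
idle_cycle field name, the zero-state guard and the nanosecond return value
are assumptions, not the patch verbatim:

static s64 cpuidle_cooling_runtime(struct cpuidle_cooling_device *idle_cdev)
{
	s64 next_wakeup;
	unsigned long state = idle_cdev->state;

	/*
	 * The function should not be called when there is no
	 * mitigation (state == 0), otherwise the ratio below
	 * divides by zero.
	 */
	if (!state)
		return 0;

	/*
	 * If 'state' percent of each period is spent idle, the
	 * remaining run time is idle * (100 - state) / state,
	 * i.e. (idle_cycle * 100 / state) - idle_cycle.
	 */
	next_wakeup = (s64)((idle_cdev->idle_cycle * 100) / state) -
		      idle_cdev->idle_cycle;

	return next_wakeup * NSEC_PER_USEC;
}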
On 05.08.19 07:11, Martin Kepplinger wrote:
> ---
>
> On 05-04-18, 18:16, Daniel Lezcano wrote:
>> The cpu idle cooling driver performs synchronized idle injection across all
>> cpus belonging to the same cluster and offers a new method to cool down a
>> SoC.
>>
>> Each cluster has its own idle
---
On Tue, Apr 17, 2018 at 09:17:36AM +0200, Daniel Lezcano wrote:
[...]
> Actually there is no impact with the change Sudeep is referring to. It
> is for ACPI; we are DT-based. Confirmed with Jeremy.
>
> So AFAICT, it is not a problem.
> >>>
> >>> It is a problem - DT or ACPI
On 16/04/2018 16:22, Lorenzo Pieralisi wrote:
> On Mon, Apr 16, 2018 at 03:57:03PM +0200, Daniel Lezcano wrote:
>> On 16/04/2018 14:30, Lorenzo Pieralisi wrote:
>>> On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
On 16/04/2018 12:10, Viresh Kumar wrote:
> On 16-04-18,
On Mon, Apr 16, 2018 at 03:57:03PM +0200, Daniel Lezcano wrote:
> On 16/04/2018 14:30, Lorenzo Pieralisi wrote:
> > On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
> >> On 16/04/2018 12:10, Viresh Kumar wrote:
> >>> On 16-04-18, 12:03, Daniel Lezcano wrote:
> On 16/04/2018
On 16/04/2018 14:30, Lorenzo Pieralisi wrote:
> On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
>> On 16/04/2018 12:10, Viresh Kumar wrote:
>>> On 16-04-18, 12:03, Daniel Lezcano wrote:
On 16/04/2018 11:50, Viresh Kumar wrote:
> On 16-04-18, 11:45, Daniel Lezcano wrote:
On Mon, Apr 16, 2018 at 02:49:35PM +0200, Daniel Lezcano wrote:
> On 16/04/2018 14:31, Sudeep Holla wrote:
> > On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
> >> On 16/04/2018 12:10, Viresh Kumar wrote:
> >>> On 16-04-18, 12:03, Daniel Lezcano wrote:
> On 16/04/2018 11:50,
On 16/04/2018 14:31, Sudeep Holla wrote:
> On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
>> On 16/04/2018 12:10, Viresh Kumar wrote:
>>> On 16-04-18, 12:03, Daniel Lezcano wrote:
On 16/04/2018 11:50, Viresh Kumar wrote:
> On 16-04-18, 11:45, Daniel Lezcano wrote:
>>
On Mon, Apr 16, 2018 at 02:10:30PM +0200, Daniel Lezcano wrote:
> On 16/04/2018 12:10, Viresh Kumar wrote:
> > On 16-04-18, 12:03, Daniel Lezcano wrote:
> >> On 16/04/2018 11:50, Viresh Kumar wrote:
> >>> On 16-04-18, 11:45, Daniel Lezcano wrote:
> Can you elaborate a bit ? I'm not sure to
On Mon, Apr 16, 2018 at 03:20:06PM +0530, Viresh Kumar wrote:
> On 16-04-18, 11:45, Daniel Lezcano wrote:
> > Can you elaborate a bit? I'm not sure I get the point.
>
> Sure. With your current code on Hikey960 (big/LITTLE), you end up
> creating two cooling devices, one for the big cluster and
On 16/04/2018 12:10, Viresh Kumar wrote:
> On 16-04-18, 12:03, Daniel Lezcano wrote:
>> On 16/04/2018 11:50, Viresh Kumar wrote:
>>> On 16-04-18, 11:45, Daniel Lezcano wrote:
Can you elaborate a bit? I'm not sure I get the point.
>>>
>>> Sure. With your current code on Hikey960
On 16-04-18, 12:03, Daniel Lezcano wrote:
> On 16/04/2018 11:50, Viresh Kumar wrote:
> > On 16-04-18, 11:45, Daniel Lezcano wrote:
> >> Can you elaborate a bit? I'm not sure I get the point.
> >
> > Sure. With your current code on Hikey960 (big/LITTLE), you end up
> > creating two cooling
On 16/04/2018 11:50, Viresh Kumar wrote:
> On 16-04-18, 11:45, Daniel Lezcano wrote:
>> Can you elaborate a bit? I'm not sure I get the point.
>
> Sure. With your current code on Hikey960 (big/LITTLE), you end up
> creating two cooling devices, one for the big cluster and one for
> small
On 16-04-18, 11:45, Daniel Lezcano wrote:
> Can you elaborate a bit? I'm not sure I get the point.
Sure. With your current code on Hikey960 (big/LITTLE), you end up
creating two cooling devices, one for the big cluster and one for
small cluster. Which is the right thing to do, as we also have
On 16/04/2018 11:37, Viresh Kumar wrote:
> On 16-04-18, 09:44, Daniel Lezcano wrote:
>> Because we rely on the number to identify the cluster and flag it
>> 'processed'. The number itself is not important.
>
> It is, because you are creating multiple groups (like cpufreq
> policies) right now
On 16-04-18, 09:44, Daniel Lezcano wrote:
> Because we rely on the number to identify the cluster and flag it
> 'processed'. The number itself is not important.
It is, because you are creating multiple groups (like cpufreq
policies) right now based on cluster id, which will be zero for all
the
On Mon, Apr 16, 2018 at 09:44:51AM +0200, Daniel Lezcano wrote:
> On 16/04/2018 09:37, Viresh Kumar wrote:
> > On 13-04-18, 13:47, Daniel Lezcano wrote:
> >> Ok, noted. At first glance, it should not be a problem.
> >
> > Why do you think it wouldn't be a problem ?
>
> Because we rely on the
On 16/04/2018 09:37, Viresh Kumar wrote:
> On 13-04-18, 13:47, Daniel Lezcano wrote:
>> Ok, noted. At first glance, it should not be a problem.
>
> Why do you think it wouldn't be a problem ?
Because we rely on the number to identify the cluster and flag it
'processed'. The number itself is
On 13-04-18, 13:47, Daniel Lezcano wrote:
> Ok, noted. At first glance, it should not be a problem.
Why do you think it wouldn't be a problem ?
--
viresh
On 13/04/2018 13:23, Sudeep Holla wrote:
> Hi Daniel,
>
> On 05/04/18 17:16, Daniel Lezcano wrote:
>
> [...]
>
>> +/**
>> + * cpuidle_cooling_register - Idle cooling device initialization function
>> + *
>> + * This function is in charge of creating a cooling device per cluster
>> + * and
On 13/04/2018 13:38, Daniel Thompson wrote:
[ ... ]
>> +/*
>> + * Allocate the cpuidle cooling device with the list
>> + * of the cpus belonging to the cluster.
>> + */
>> +idle_cdev = cpuidle_cooling_alloc(topology_core_cpumask(cpu));
On Thu, Apr 05, 2018 at 06:16:43PM +0200, Daniel Lezcano wrote:
> +/**
> + * cpuidle_cooling_register - Idle cooling device initialization function
> + *
> + * This function is in charge of creating a cooling device per cluster
> + * and registering it with the thermal framework. For this we rely on the
>
Hi Daniel,
On 05/04/18 17:16, Daniel Lezcano wrote:
[...]
> +/**
> + * cpuidle_cooling_register - Idle cooling device initialization function
> + *
> + * This function is in charge of creating a cooling device per cluster
> + * and registering it with the thermal framework. For this we rely on the
> + *
Hi Viresh,
thanks for the review.
On 11/04/2018 10:51, Viresh Kumar wrote:
[ ... ]
>> +void __init cpuidle_cooling_register(void)
>> +{
>> +struct cpuidle_cooling_device *idle_cdev = NULL;
>> +struct thermal_cooling_device *cdev;
>> +struct device_node *np;
>> +cpumask_var_t
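The registration path quoted in the previews above creates one cooling
device per cluster (so two on a big/LITTLE board like Hikey960), and the
disagreement in the thread is about how a cluster is identified. A minimal
sketch of the cpumask-based grouping being discussed, with error handling
trimmed; 'cpumask_done' is an assumed name for the 'processed' flag
mentioned earlier, not the patch verbatim:

void __init cpuidle_cooling_register(void)
{
	struct cpuidle_cooling_device *idle_cdev;
	cpumask_var_t cpumask_done;
	unsigned int cpu;

	if (!zalloc_cpumask_var(&cpumask_done, GFP_KERNEL))
		return;

	for_each_possible_cpu(cpu) {
		/* Skip CPUs whose cluster is already flagged 'processed' */
		if (cpumask_test_cpu(cpu, cpumask_done))
			continue;

		cpumask_or(cpumask_done, cpumask_done,
			   topology_core_cpumask(cpu));

		/*
		 * Allocate the cpuidle cooling device with the list
		 * of the cpus belonging to the cluster.
		 */
		idle_cdev = cpuidle_cooling_alloc(topology_core_cpumask(cpu));
		if (!idle_cdev)
			break;

		/* ... register idle_cdev with the thermal framework ... */
	}

	free_cpumask_var(cpumask_done);
}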
On 05-04-18, 18:16, Daniel Lezcano wrote:
> The cpu idle cooling driver performs synchronized idle injection across all
> cpus belonging to the same cluster and offers a new method to cool down a SoC.
>
> Each cluster has its own idle cooling device, each core has its own idle
> injection thread,
The cpu idle cooling driver performs synchronized idle injection across all
cpus belonging to the same cluster and offers a new method to cool down a SoC.
Each cluster has its own idle cooling device, each core has its own idle
injection thread, each idle injection thread uses play_idle to enter
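The description stops at the play_idle() call. For illustration, a minimal
sketch of one per-core injection thread of the kind described, assuming the
play_idle() signature of that era (a duration in milliseconds) and
hypothetical waitq, state and idle_ms fields on the cooling device:

static int cpuidle_cooling_injection_thread(void *arg)
{
	struct cpuidle_cooling_device *idle_cdev = arg;

	while (!kthread_should_stop()) {
		/* Sleep until a mitigation cycle is requested */
		wait_event_interruptible(idle_cdev->waitq,
					 idle_cdev->state ||
					 kthread_should_stop());
		if (kthread_should_stop())
			break;

		/* Enter idle synchronously with the cluster siblings */
		play_idle(idle_cdev->idle_ms);
	}

	return 0;
}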