On 09/07/2019 16:36, Vincent Guittot wrote:
Hi Chris,
We enter this code quite often in our testing; most individual runs of
a test involving small tasks have at least one hit where we make a
change to the clock with this patch in.
Do you have an rt-app file that you can share?
Hi Peter,
On 09/07/2019 14:50, Peter Zijlstra wrote:
> On Tue, Jul 09, 2019 at 12:57:59PM +0100, Chris Redpath wrote:
>> The ancient workaround to avoid the cost of updating rq clocks in the
>> middle of a migration causes some issues on asymmetric CPU capacity
systems.
Signed-off-by: Chris Redpath
---
kernel/sched/fair.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b798fe7ff7cd..51791db26a2a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6545,6 +6545,21 @@ st
Commit-ID: 4ad3831a9d4af5e36da5d44a3b9c6522d0353cee
Gitweb: https://git.kernel.org/tip/4ad3831a9d4af5e36da5d44a3b9c6522d0353cee
Author: Chris Redpath
AuthorDate: Wed, 4 Jul 2018 11:17:48 +0100
Committer: Ingo Molnar
CommitDate: Mon, 10 Sep 2018 11:05:53 +0200
sched/fair: Don't move
Thanks guys, really appreciated!
--Chris
On 07/11/17 10:09, Rafael J. Wysocki wrote:
On Tue, Nov 7, 2017 at 10:59 AM, Viresh Kumar <viresh.ku...@linaro.org> wrote:
On 07-11-17, 09:49, Chris Redpath wrote:
Hi Viresh, Rafael,
Without this patch, schedutil is totally broken for us - is
there any chance at all this could go in 4.14 or is it just
too late?
Best Regards,
Chris
On 03/11/17 15:45, Viresh Kumar wrote:
On 03-11-17, 13:36, Chris Redpath wrote:
After
674e75411fc2 ("sched: cpufreq:
value is set in sugov_register but we clear it in sugov_start
which leads to always looking at the utilization of CPU0 instead
of the correct one.
Let's fix this by consolidating the initialization code into
sugov_start().
Fixes: 674e75411fc2 ("sched: cpufreq: Allow remote cpufreq callbacks")
Signe
Hi Viresh
On 02/11/17 11:40, Viresh Kumar wrote:
On 02-11-17, 11:38, Chris Redpath wrote:
Since:
4296f23ed cpufreq: schedutil: Fix per-CPU structure initialization in
sugov_start()
This is still incorrect. This BUG has nothing to do with 4296f23ed
AFAICT.
According to my diff
674e75411fc2 ("sched: cpufreq: Allow remote cpufreq callbacks")
Signed-off-by: Chris Redpath <chris.redp...@arm.com>
Reviewed-by: Patrick Bellasi <patrick.bell...@arm.com>
Reviewed-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Rafael J. Wysocki <r...@rjwysocki.ne
Cc: Viresh Kumar
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
kernel/sched/cpufreq_schedutil.c | 6 +-
1 fi
we just updated when we do a utilization update callback.
Let's fix this by consolidating the initialization code into
sugov_start().
Fixes: 4296f23ed49a ("cpufreq: schedutil: Fix per-CPU structure initialization
in sugov_start()")
Signed-off-by: Chris Redpath <chris.redp...@arm.com>
Reviewed-by: Patrick Bellasi <patrick.bell...@arm.com>
On 09/07/14 11:44, Viresh Kumar wrote:
Hi Chris,
On 9 July 2014 16:02, Chris Redpath wrote:
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index fc17a9d..f911acd 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2186,8 +2186,6 @@ static void spin(struct pktgen_dev *pkt_dev
Hi Viresh,
On 09/07/14 07:55, Viresh Kumar wrote:
hrtimer_start*() family never fails to enqueue a hrtimer to a clock-base. The
only special case is when the hrtimer was in the past. If it is getting enqueued
to the local CPU's clock-base, we raise a softirq and exit, else we handle that
on next
On 17/04/14 06:42, Alex Shi wrote:
On 04/16/2014 08:13 PM, Peter Zijlstra wrote:
On Wed, Apr 16, 2014 at 07:34:29PM +0800, Alex Shi wrote:
Chris Redpath found an issue on active balance:
We let the task source cpu, the busiest cpu, do the active balance,
while the destination cpu may be idle
On 21/01/14 16:12, Vincent Guittot wrote:
With the current implementation, the load average statistics of a sched entity
change according to other activity on the CPU even if this activity is done
between the running window of the sched entity and has no influence on the
running duration of the
On 17/12/13 18:03, bseg...@google.com wrote:
__synchronize_entity_decay will decay load_avg_contrib in order to
figure out how much to remove from old_cfs_rq->blocked_load.
update_entity_load_avg will update the underlying runnable_avg_sum/period that
is used to update load_avg_contrib.
On 12/12/13 18:24, Peter Zijlstra wrote:
Would pre_schedule_idle() -> rq_last_tick_reset() -> rq->last_sched_tick
be useful?
I suppose we could easily lift that to NO_HZ_COMMON.
Many thanks for the tip Peter, I have tried this out and it does provide
enough information to be able to correct
On 10/12/13 15:14, Peter Zijlstra wrote:
On Tue, Dec 10, 2013 at 01:24:21PM +0000, Chris Redpath wrote:
What happens is that if you have a task which sleeps for a while and wakes
on a different CPU and the previous CPU hasn't had a tick for a while, then
that sleep time is lost.
/me more
On 10/12/13 11:48, Peter Zijlstra wrote:
On Mon, Dec 09, 2013 at 12:59:10PM +0000, Chris Redpath wrote:
If we migrate a sleeping task away from a CPU which has the
tick stopped, then both the clock_task and decay_counter will
be out of date for that CPU and we will not decay load correctly
be migrated and its load will be decayed incorrectly.
All users of this function expect decay_count to be zeroed after
use.
Signed-off-by: Chris Redpath
---
kernel/sched/fair.c |8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
are not on a runqueue
(because otherwise that CPU would be awake) and simultaneously
the CPU the task previously ran on has had the tick stopped.
Signed-off-by: Chris Redpath
---
kernel/sched/fair.c | 30 ++
1 file changed, 30 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel
I don't really like this fix much, but the root of the problem
is that load tracking more-or-less expects the runqueue's
decay_counter to be up to date, and when nohz is in use it is
not. The fix demonstrates the issue anyway, I haven't seen
other occasions where nohz CPUs distort the tracked load.
Chris
Modulate the tracked load of a task using the measure of current
and maximum compute capacity for the core it is executing on.
Change-Id: If6aea806e631f2313fd925c8902260a522663dbd
Conflicts:
kernel/sched/fair.c
---
kernel/sched/fair.c | 51
Based upon the CPU Power of a core, this computes a capacity measure
between 0 and 1024, scaling in line with the frequency using a
simple linear scale derived from the maximum frequency reported
by CPUFreq.
Scaling CPU Power with frequency and estimated capacity gives an
estimate of the amount of
Using the per-cpu compute capacity exported from topology
when CONFIG_ARCH_SCALE_INVARIANT_CPU_CAPACITY is active, place this
information alongside cpu_power in the scheduler and combine for the
various aggregating entities.
Change-Id: I4984c335bcdc128680e7459b3f86bb05e04593cc
---
signal(SIGALRM, catch_alarm);
/* Set an alarm to go off in a little while. */
ualarm(alarm_rate, alarm_rate);
/* Check the flag once in a while to see when to quit. */
while (1) {
pause();
}
return EXIT_SUCCESS;
}
Chris Redpath (3):
ARM: (Experimental) Provide Estimated CPU Capacity measure
sched: introduce compute capacity for CPUs, groups and domains
sched: Scale load contribution by CPU Capacity
arch/arm/Kconfig | 16 +++
arch/arm/include/asm