or kernels prior to 4.17.
Signed-off-by: Steve Muckle
---
.../testing/selftests/x86/test_syscall_vdso.c | 30 +++
1 file changed, 30 insertions(+)
diff --git a/tools/testing/selftests/x86/test_syscall_vdso.c
b/tools/testing/selftests/x86/test_syscall_vdso.c
index c9c3281077bc..f7
On 09/27/2018 05:43 PM, Wanpeng Li wrote:
On your CPU4:
scheduler_ipi()
  -> sched_ttwu_pending()
    -> ttwu_do_activate()  => p->sched_remote_wakeup should be
                              false, so ENQUEUE_WAKEUP is set,
                              ENQUEUE_MIGRATED is not
      -> ttwu_activate()
        -> activate_task()
Commit-ID: d0cdb3ce8834332d918fc9c8ff74f8a169ec9abe
Gitweb: https://git.kernel.org/tip/d0cdb3ce8834332d918fc9c8ff74f8a169ec9abe
Author: Steve Muckle
AuthorDate: Fri, 31 Aug 2018 15:42:17 -0700
Committer: Ingo Molnar
CommitDate: Mon, 10 Sep 2018 10:13:47 +0200
sched/fair: Fix
priority
penalty.
Fix this by recognizing a WAKING task's vruntime as normalized only if
sched_remote_wakeup is true. This indicates a migration, in which case
the vruntime would have been normalized in migrate_task_rq_fair().
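For illustration, a standalone sketch of the check described above, with simplified stand-in types (the real logic lives in vruntime_normalized() in kernel/sched/fair.c; field and constant values here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

#define TASK_WAKING 0x0200	/* stand-in for the kernel's task state */

struct task {
	int state;
	bool on_rq;
	bool sched_remote_wakeup;	/* set when the wakeup migrated the task */
};

/*
 * Sketch of the fixed check: a WAKING task's vruntime is considered
 * normalized only if this was a remote (migrating) wakeup, since only
 * migrate_task_rq_fair() would have normalized it.  Other branches of
 * the real function are elided.
 */
static bool vruntime_normalized(const struct task *p)
{
	if (!p->on_rq && p->state == TASK_WAKING)
		return p->sched_remote_wakeup;
	return false;
}
```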
Based on a similar patch from joaod...@google.com.
Suggested-by: P
On 08/29/2018 08:33 AM, Dietmar Eggemann wrote:
Yes, this solves the issue for the case I described. Using
'p->sched_remote_wakeup' (WF_MIGRATED) looks more elegant than using
'p->sched_class == &fair_sched_class'.
It's confirmed that this patch solves the original issue we saw (and my
test ca
On 08/24/2018 02:47 AM, Peter Zijlstra wrote:
On 08/17/2018 11:27 AM, Steve Muckle wrote:
When rt_mutex_setprio changes a task's scheduling class to RT,
we're seeing cases where the task's vruntime is not updated
correctly upon return to the fair class.
Specifically, the fol
On 08/23/2018 11:54 PM, Juri Lelli wrote:
I tried to catch this issue on my Arm64 Juno board using pi_test (and a
slightly adapted pip_test (usleep_val = 1500 and keep low as cfs)) from
rt-tests but wasn't able to do so.
# pi_stress --inversions=1 --duration=1 --groups=1 --sched id=low,policy=cf
the vruntime inflation repeatedly doubled.
The change here is to detect when vruntime_normalized() is called
for a task that is waking, but waking into another scheduling class,
and to conclude that this is a case where vruntime has not
been normalized.
Signed-off-by: John Dias
Signed-off-by: Steve Muckl
On 08/07/2018 10:40 AM, 'Todd Kjos' via kernel-team wrote:
This issue was discovered on a 4.9-based android device, but the
relevant mainline code appears to be the same. The symptom is that
over time some workloads become sluggish, resulting in missed
frames or poor responsiveness. It appears to be
On 05/04/2018 06:13 PM, Shuah Khan (Samsung OSG) wrote:
When execveat test is skipped because of unmet dependencies and/or
unsupported configuration, it exits with error which is treated as
a fail by the Kselftest framework. This leads to false negative
result even when the test could not be run.
On 10/30/2017 12:02 PM, Joel Fernandes wrote:
Also, this looks more like a policy decision. Would it be better to
put that directly into schedutil? Like this:
if (cpu_idle())
"Don't change the freq";
Will something like that work?
I thought about this and I think it w
successfully execute the test binary/script may fail because of this.
To keep the semantics of the test the same, rework the relative pathname
part of the test to be relative to the root directory so it isn't
decreased by the length of the current working directory path.
Signed-off-by: Steve M
Thanks David for the review. Replies inline.
On 10/12/2017 05:22 AM, David Drysdale wrote:
Modulo the minor comment below:
Reviewed-by: David Drysdale
On Thu, Oct 12, 2017 at 1:40 AM, Steve Muckle wrote:
When creating a pathname close to PATH_MAX to test execveat, factor in
the current
On 09/19/2017 10:08 AM, John Stultz wrote:
So what I was thinking of to improve the developer usability might be
the following:
1) Leave the upstream configs in place. We can try to keep
maintaining them, but it's not something the Android team is likely to
care greatly about, and hopefully can
On 09/07/2017 09:14 AM, Joel Fernandes wrote:
I'm planning to rebase this series on Linus's master and post it
again, but just checking any thoughts about it?
Just to add more context, the reason for not updating the frequency:
- When a last dequeue of a sleeping task happens, it is sufficient
Hi Viresh,
On Tue, Nov 15, 2016 at 01:53:19PM +0530, Viresh Kumar wrote:
> This work was started by Steve Muckle, where he used a simple kthread
> instead of kthread-worker and that wasn't sufficient as some guarantees
> weren't met.
I know this has already gone in, but ca
On Fri, Nov 11, 2016 at 11:16:59PM +0100, Rafael J. Wysocki wrote:
> > + struct sched_param param = { .sched_priority = 50 };
>
> I'd define a symbol for the 50. It's just one extra line of code ...
A minor point for sure, but in general what's the motivation for
defining symbols for thing
On Sun, Nov 13, 2016 at 03:37:18PM +0100, Rafael J. Wysocki wrote:
> > Hold on a sec. I thought during LPC someone (Peter?) made a point that when
> > RT thread run, we should bump the frequency to max? So, schedutil is going
> > to trigger schedutil to bump up the frequency to max, right?
>
> No,
On Wed, Sep 07, 2016 at 05:35:50PM -0700, Srinivas Pandruvada wrote:
> Did you see any performance regression on Android workloads?
I did a few AnTuTU runs and did not observe a regression.
thanks,
Steve
On Sat, Sep 03, 2016 at 02:56:48AM +0200, Rafael J. Wysocki wrote:
> Please let me know what you think and if you can run some benchmarks you
> care about and see if the changes make any difference (this way or another),
> please do that and let me know what you've found.
LGTM (I just reviewed the
On Wed, Aug 31, 2016 at 06:00:02PM +0100, Juri Lelli wrote:
> > Another problem is that we have many semi related knobs; we have the
> > global RT runtime limit knob, but that doesn't affect cpufreq (maybe it
> > should)
>
> Maybe we could create this sort of link when using the cgroup RT
> thrott
On Wed, Aug 31, 2016 at 04:39:07PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 26, 2016 at 11:40:48AM -0700, Steve Muckle wrote:
> > A policy of going to fmax on any RT activity will be detrimental
> > for power on many platforms. Often RT accounts for only a small amount
> &g
On Wed, Aug 31, 2016 at 03:31:07AM +0200, Rafael J. Wysocki wrote:
> On Friday, August 26, 2016 11:40:48 AM Steve Muckle wrote:
> > A policy of going to fmax on any RT activity will be detrimental
> > for power on many platforms. Often RT accounts for only a small amount
> &g
instead use rt_avg as an estimate of
RT utilization of the CPU.
Based on previous work by Vincent Guittot .
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 26 +-
1 file changed, 17 insertions(+), 9 deletions(-)
diff --git a/kernel/sched
fmax during RT task
activity, instead using rt_avg as an estimate of RT utilization.
Steve Muckle (2):
sched: cpufreq: ignore SMT when determining max cpu capacity
sched: cpufreq: use rt_avg as estimate of required RT CPU capacity
kernel/sched/cpufreq_schedutil.c | 26
rig is 589 but
util_avg scales up to 1024. This means that a 50% utilized CPU will show
up in schedutil as ~86% busy.
Fix this by using the same CPU scaling value in schedutil as that which
is used by PELT.
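The arithmetic behind the ~86% figure can be checked directly, using the cpu_capacity_orig of 589 quoted above and SCHED_CAPACITY_SCALE of 1024 (the helper name here is illustrative, not a kernel function):

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024

/*
 * util_avg is tracked on the 0..1024 scale, but this rq's original
 * capacity is only cpu_capacity_orig, so a consumer comparing the two
 * directly perceives inflated utilization.
 */
static unsigned int apparent_busy_pct(unsigned int busy_pct,
				      unsigned int cpu_capacity_orig)
{
	unsigned int util = busy_pct * SCHED_CAPACITY_SCALE / 100;

	return util * 100 / cpu_capacity_orig;
}
```

A 50% busy CPU yields util = 512, and 512 / 589 lands at roughly 86%, matching the figure above.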
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 4 +++-
1 file chan
On Fri, Aug 19, 2016 at 04:00:57PM +0100, Dietmar Eggemann wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 61d485421bed..95d34b337152 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2731,7 +2731,7 @@ __update_load_avg(u64 now, int cpu, struct s
On Fri, Aug 19, 2016 at 04:30:39PM +0100, Morten Rasmussen wrote:
> Hi Steve,
>
> On Thu, Aug 18, 2016 at 06:55:41PM -0700, Steve Muckle wrote:
> > PELT scales its util_sum and util_avg values via
> > arch_scale_cpu_capacity(). If that function is passed the CPU's sch
On Fri, Aug 19, 2016 at 10:30:36AM +0800, Wanpeng Li wrote:
> 2016-08-19 9:55 GMT+08:00 Steve Muckle :
> > PELT scales its util_sum and util_avg values via
> > arch_scale_cpu_capacity(). If that function is passed the CPU's sched
> > domain then it will redu
acity, update_cpu_capacity(), does. This means
util_sum and util_avg scale beyond the CPU capacity on SMT.
On an Intel i7-3630QM for example rq->cpu_capacity_orig is 589 but
util_avg scales up to 1024.
Fix this by passing in the sd in __update_load_avg() as well.
Signed-off-by: Steve Muckle
---
kernel/sche
LGTM
On Fri, Aug 12, 2016 at 02:06:44AM +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> All of the callers of cpufreq_update_util() pass rq_clock(rq) to it
> as the time argument and some of them check whether or not cpu_of(rq)
> is equal to smp_processor_id() before calling it, so
LGTM
On Fri, Aug 12, 2016 at 02:04:42AM +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> It is useful to know the reason why cpufreq_update_util() has just
> been called and that can be passed as flags to cpufreq_update_util()
> and to the ->func() callback in struct update_util_dat
On Wed, Aug 10, 2016 at 03:11:17AM +0200, Rafael J. Wysocki wrote:
> Index: linux-pm/kernel/sched/fair.c
> ===
> --- linux-pm.orig/kernel/sched/fair.c
> +++ linux-pm/kernel/sched/fair.c
> @@ -2876,8 +2876,6 @@ static inline void update
On Thu, Aug 11, 2016 at 11:03:47AM -0700, Steve Muckle wrote:
> On Wed, Aug 10, 2016 at 03:49:07AM +0200, Rafael J. Wysocki wrote:
> > Index: linux-pm/kernel/sched/fair.c
> > ===
> > --- linux-pm.orig/kernel/sched
On Wed, Aug 10, 2016 at 03:49:07AM +0200, Rafael J. Wysocki wrote:
> Index: linux-pm/kernel/sched/fair.c
> ===
> --- linux-pm.orig/kernel/sched/fair.c
> +++ linux-pm/kernel/sched/fair.c
> @@ -2875,11 +2875,8 @@ static inline void updat
On Thu, Aug 04, 2016 at 11:19:00PM +0200, Rafael J. Wysocki wrote:
> On Wednesday, August 03, 2016 07:24:18 PM Steve Muckle wrote:
> > On Wed, Aug 03, 2016 at 12:38:20AM +0200, Rafael J. Wysocki wrote:
> > > On Wed, Aug 3, 2016 at 12:02 AM, Steve Muckle
> > > wrote:
&
On Wed, Aug 03, 2016 at 12:38:20AM +0200, Rafael J. Wysocki wrote:
> On Wed, Aug 3, 2016 at 12:02 AM, Steve Muckle wrote:
> > On Tue, Aug 02, 2016 at 03:37:02AM +0200, Rafael J. Wysocki wrote:
> >> On Tue, Aug 2, 2016 at 3:22 AM, Steve Muckle
> >> wrote:
> >
On Tue, Aug 02, 2016 at 03:37:02AM +0200, Rafael J. Wysocki wrote:
> On Tue, Aug 2, 2016 at 3:22 AM, Steve Muckle wrote:
> > On Mon, Aug 01, 2016 at 01:37:23AM +0200, Rafael J. Wysocki wrote:
> > ...
> >> For this purpose, define a new cpufreq_update_util() fla
On Tue, Aug 02, 2016 at 11:38:17AM +0100, Juri Lelli wrote:
> > > Anyway one way that my patch differed was that I had used the flags
> > > field to keep the behavior the same for both RT and DL.
>
> Do you mean "go to max" policy for both, until proper policies will be
> implemented in the future
On Tue, Aug 02, 2016 at 01:44:41AM +0200, Rafael J. Wysocki wrote:
> On Monday, August 01, 2016 12:59:30 PM Steve Muckle wrote:
> > On Mon, Aug 01, 2016 at 04:57:18PM +0200, Rafael J. Wysocki wrote:
> > > On Monday, August 01, 2016 09:33:12 AM Dominik Brodowski wrote:
> >
On Mon, Aug 01, 2016 at 01:37:59AM +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> Modify the schedutil cpufreq governor to boost the CPU frequency
> if the UUF_IO flag is passed to it via cpufreq_update_util().
>
> If that happens, the frequency is set to the maximum during
> the
On Mon, Aug 01, 2016 at 01:37:23AM +0200, Rafael J. Wysocki wrote:
...
> For this purpose, define a new cpufreq_update_util() flag
> UUF_IO and modify enqueue_task_fair() to pass that flag to
> cpufreq_update_util() in the in_iowait case. That generally
> requires cpufreq_update_util() to be calle
On Mon, Aug 01, 2016 at 04:57:18PM +0200, Rafael J. Wysocki wrote:
> On Monday, August 01, 2016 09:33:12 AM Dominik Brodowski wrote:
> > On Mon, Aug 01, 2016 at 01:36:46AM +0200, Rafael J. Wysocki wrote:
> > > +#define UUF_RT 0x01
> >
> > What does UUF stand for?
>
> "Utilization update flag".
On Mon, Aug 01, 2016 at 09:29:57AM +0200, Dominik Brodowski wrote:
> A small nitpick:
>
> On Mon, Aug 01, 2016 at 01:36:01AM +0200, Rafael J. Wysocki wrote:
> > --- linux-pm.orig/kernel/sched/sched.h
> > +++ linux-pm/kernel/sched/sched.h
> > @@ -1760,7 +1760,7 @@ DECLARE_PER_CPU(struct update_util
On Mon, Aug 01, 2016 at 01:34:36AM +0200, Rafael J. Wysocki wrote:
...
> Index: linux-pm/kernel/sched/cpufreq_schedutil.c
> ===
> --- linux-pm.orig/kernel/sched/cpufreq_schedutil.c
> +++ linux-pm/kernel/sched/cpufreq_schedutil.c
> @@ -
On Fri, Jul 22, 2016 at 08:16:42AM -0700, Viresh Kumar wrote:
> > Long term as I was mentioning in the other thread I think it'd be good
> > if the current target() drivers were modified to supply resolve_freq(),
> > and that cpufreq_register_driver() were again changed to require it for
> > those
On Fri, Jul 22, 2016 at 11:56:20AM +1000, Stephen Rothwell wrote:
> Hi Rafael,
>
> After merging the pm tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
>
> ERROR: "cpufreq_driver_resolve_freq" [kernel/sched/cpufreq_schedutil.ko]
> undefined!
>
> Caused by commit
>
>
Export cpufreq_driver_resolve_freq() since governors may be compiled as
modules.
Reported-by: Stephen Rothwell
Signed-off-by: Steve Muckle
---
drivers/cpufreq/cpufreq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index b696baeb249d
On Thu, Jul 21, 2016 at 04:36:48PM -0700, Steve Muckle wrote:
> As another alternative, this could be caught in cpufreq driver
> initialization? I believe you suggested that originally, but I avoided
> it as I didn't want to have to implement resolve_freq() for every
> target()
On Fri, Jul 22, 2016 at 02:18:54AM +0200, Rafael J. Wysocki wrote:
> > My thinking was that one of these two would be preferable:
> >
> > - Forcing ->target() drivers to install a ->resolve_freq callback,
> > enforcing this at cpufreq driver init time.
>
> That would have been possible, but your
On Fri, Jul 22, 2016 at 01:53:13AM +0200, Rafael J. Wysocki wrote:
> On Fri, Jul 22, 2016 at 1:45 AM, Steve Muckle wrote:
> > On Fri, Jul 22, 2016 at 01:32:00AM +0200, Rafael J. Wysocki wrote:
> >> On Fri, Jul 22, 2016 at 1:22 AM, Steve Muckle
> >> wrote:
> >
On Fri, Jul 22, 2016 at 01:32:00AM +0200, Rafael J. Wysocki wrote:
> On Fri, Jul 22, 2016 at 1:22 AM, Steve Muckle wrote:
> > On Fri, Jul 22, 2016 at 01:22:22AM +0200, Rafael J. Wysocki wrote:
> >> OK, applied.
> >
> > FWIW I do have a concern on this patc
On Thu, Jul 21, 2016 at 04:36:48PM -0700, Steve Muckle wrote:
> On Thu, Jul 21, 2016 at 04:30:03PM -0700, Viresh Kumar wrote:
> > On 21-07-16, 16:21, Steve Muckle wrote:
> > > On Thu, Jul 21, 2016 at 01:30:41PM -0700, Viresh Kumar wrote:
> > > > Okay, but in that
On Thu, Jul 21, 2016 at 04:30:03PM -0700, Viresh Kumar wrote:
> On 21-07-16, 16:21, Steve Muckle wrote:
> > On Thu, Jul 21, 2016 at 01:30:41PM -0700, Viresh Kumar wrote:
> > > Okay, but in that case shouldn't we do something like this:
> > >
> > > unsign
On Thu, Jul 21, 2016 at 04:21:31PM -0700, Steve Muckle wrote:
> On Thu, Jul 21, 2016 at 01:30:41PM -0700, Viresh Kumar wrote:
> > Okay, but in that case shouldn't we do something like this:
> >
> > unsigned int cpufreq_driver_resolve_freq(str
On Fri, Jul 22, 2016 at 01:22:22AM +0200, Rafael J. Wysocki wrote:
> OK, applied.
FWIW I do have a concern about this patch: I think it adds unnecessary
overhead.
On Thu, Jul 21, 2016 at 01:30:41PM -0700, Viresh Kumar wrote:
> Okay, but in that case shouldn't we do something like this:
>
> unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
> unsigned int target_freq)
> {
>target_freq = cla
On Thu, Jul 14, 2016 at 06:02:31PM +0800, Pingbo Wen wrote:
> > Steve Muckle (3):
> > cpufreq: add cpufreq_driver_resolve_freq()
> > cpufreq: schedutil: map raw required frequency to driver frequency
>
> Tested the first two patches on db410c, only waking up ir
only for ->target() style drivers, to use cpufreq's freq table operations,
and move freq mapping caching into cpufreq policy
Changes since v1:
- incorporated feedback from Rafael to avoid referencing freq_table from
schedutil by introducing a new cpufreq API
Steve Muckle (3)
req().
Suggested-by: Rafael J. Wysocki
Signed-off-by: Steve Muckle
---
drivers/cpufreq/cpufreq.c | 25 +
include/linux/cpufreq.h | 16
2 files changed, 41 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 118b4f30a
A call to cpufreq_driver_resolve_freq will cache the mapping from
the desired target frequency to the frequency table index. If there
is a mapping for the desired target frequency then use it instead of
looking up the mapping again.
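A minimal sketch of the caching described above, with an ascending array standing in for the driver's frequency table (struct and function names here are illustrative, not the acpi-cpufreq implementation):

```c
#include <assert.h>
#include <stddef.h>

struct freq_cache {
	unsigned int cached_target;	/* 0 means no entry cached */
	size_t cached_index;
};

/*
 * Resolve target to a table index, remembering the last mapping so a
 * repeated request for the same target skips the table walk.
 */
static size_t resolve_freq(struct freq_cache *c,
			   const unsigned int *table, size_t n,
			   unsigned int target)
{
	size_t i;

	if (c->cached_target == target)
		return c->cached_index;		/* cache hit: no walk */

	/* walk to the first entry >= target (table is ascending) */
	for (i = 0; i < n - 1; i++)
		if (table[i] >= target)
			break;

	c->cached_target = target;
	c->cached_index = i;
	return i;
}
```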
Signed-off-by: Steve Muckle
---
drivers/cpufreq/acpi-cpufreq.c
event that the new raw required
frequency matches the last one, assuming a frequency update has not been
forced due to limits changing (indicated by a next_freq value of
UINT_MAX, see sugov_should_update_freq).
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 31
On Fri, Jun 03, 2016 at 07:05:14PM +0530, Viresh Kumar wrote:
...
> @@ -468,20 +469,15 @@ unsigned int acpi_cpufreq_fast_switch(struct
> cpufreq_policy *policy,
> struct acpi_cpufreq_data *data = policy->driver_data;
> struct acpi_processor_performance *perf;
> struct cpufreq_fre
On Thu, Jun 02, 2016 at 06:59:04AM +0530, Viresh Kumar wrote:
> On 01-06-16, 12:46, Steve Muckle wrote:
> > > /*
> > >* Find the closest frequency above target_freq.
> > > - *
> > > - * The table is sorted in the reverse order with respect to the
>
On Sat, May 21, 2016 at 12:46:06PM -0700, Steve Muckle wrote:
> Hi Peter, Ingo,
Hi Peter/Ingo, I would appreciate any thoughts you may have on the
issue below.
thanks,
Steve
>
> On Thu, May 19, 2016 at 04:04:19PM -0700, Steve Muckle wrote:
> > On Thu, May 19, 2016 at 11:06:14PM
On Wed, Jun 01, 2016 at 04:09:55PM +0530, Viresh Kumar wrote:
> cpufreq core keeps another table of sorted frequencies now and that can
> be used to find a match quickly, instead of traversing the unsorted list
> in an inefficient way.
>
> Create helper routines for separate relation types to opti
On Tue, May 31, 2016 at 11:00:11AM +0530, Viresh Kumar wrote:
> On 30-05-16, 08:31, Steve Muckle wrote:
> > My goal here was to have the system operate in this case in a manner
> > that is obviously not optimized (running at fmax), so the platform owner
> > realizes that the c
On Tue, May 31, 2016 at 04:44:51PM +0530, Viresh Kumar wrote:
> On 25-05-16, 19:52, Steve Muckle wrote:
> > +unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
> > +unsigned int target_freq)
> > +{
> > + str
On Fri, May 27, 2016 at 01:41:02PM +0800, Wanpeng Li wrote:
> 2016-05-26 10:53 GMT+08:00 Steve Muckle :
> > The slow-path frequency transition path is relatively expensive as it
> > requires waking up a thread to do work. Should support be added for
> > remote CPU cpufreq
On Thu, May 26, 2016 at 12:46:29PM +0530, Viresh Kumar wrote:
> On 25-05-16, 19:53, Steve Muckle wrote:
> > The slow-path frequency transition path is relatively expensive as it
> > requires waking up a thread to do work. Should support be added for
> > remote CPU cpufreq
On Thu, May 26, 2016 at 12:13:41PM +0530, Viresh Kumar wrote:
> On 25-05-16, 19:53, Steve Muckle wrote:
> > Support the new resolve_freq cpufreq callback which resolves a target
> > frequency to a driver-supported frequency without actually setting it.
>
> And here is the fir
On Thu, May 26, 2016 at 11:55:14AM +0530, Viresh Kumar wrote:
> On 25-05-16, 19:52, Steve Muckle wrote:
> > Cpufreq governors may need to know what a particular target frequency
> > maps to in the driver without necessarily wanting to set the frequency.
> > Support this operat
Signed-off-by: Steve Muckle
---
drivers/cpufreq/cpufreq.c | 25 +
include/linux/cpufreq.h | 11 +++
2 files changed, 36 insertions(+)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 77d77a4e3b74..3b44f4bdc071 100644
--- a/drivers/cpufreq
it on an ensuing
fast_switch. This series implements that approach.
Given that this change is beneficial on its own I've split it out into its own
series from the remote callback support.
[0] https://lkml.org/lkml/2016/5/9/853
Steve Muckle (3):
cpufreq: add resolve_freq driver cal
the requested target frequency is the same.
Suggested-by: Rafael J. Wysocki
Signed-off-by: Steve Muckle
---
drivers/cpufreq/acpi-cpufreq.c | 56 --
1 file changed, 43 insertions(+), 13 deletions(-)
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers
the event that the new raw required
frequency matches the last one, assuming a frequency update has not been
forced due to limits changing (indicated by a next_freq value of
UINT_MAX, see sugov_should_update_freq).
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 30
On Sun, May 22, 2016 at 12:39:12PM +0200, Peter Zijlstra wrote:
> On Fri, May 20, 2016 at 05:53:41PM +0530, Shilpasri G Bhat wrote:
> >
> > Below are the comparisons by disabling watchdog.
> > Both schedutil and ondemand have a similar ramp-down trend. And in both the
> > cases I can see that freq
Hi Peter, Ingo,
On Thu, May 19, 2016 at 04:04:19PM -0700, Steve Muckle wrote:
> On Thu, May 19, 2016 at 11:06:14PM +0200, Rafael J. Wysocki wrote:
> > > In the case of a remote update the hook has to run (or not) after it is
> > > known whether preemption will occur so we d
On Fri, May 20, 2016 at 02:37:17AM +0200, Rafael J. Wysocki wrote:
> Also I think that it would be good to avoid walking the frequency
> table twice in case we end up wanting to update the frequency after
> all. With the [4/5] we'd do it once in get_next_freq() and then once
> more in cpufreq_driv
On Fri, May 20, 2016 at 02:24:19AM +0200, Rafael J. Wysocki wrote:
> On Fri, May 20, 2016 at 1:34 AM, Steve Muckle wrote:
> > On Thu, May 19, 2016 at 11:15:52PM +0200, Rafael J. Wysocki wrote:
> >> But anyway this change again seems to be an optimization that might be
>
On Thu, May 19, 2016 at 11:15:52PM +0200, Rafael J. Wysocki wrote:
> But anyway this change again seems to be an optimization that might be
> done later to me.
>
> I guess there are many things that might be optimized in schedutil,
> but I'd prefer to address one item at a time, maybe going after
On Thu, May 19, 2016 at 11:06:14PM +0200, Rafael J. Wysocki wrote:
> > In the case of a remote update the hook has to run (or not) after it is
> > known whether preemption will occur so we don't do needless work or
> > IPIs. If the policy CPUs aren't known in the scheduler then the early
> > hook w
On Thu, May 19, 2016 at 10:55:23PM +0200, Rafael J. Wysocki wrote:
> >> > +static inline bool sugov_queue_remote_callback(struct sugov_policy
> >> > *sg_policy,
> >> > +int cpu)
> >> > +{
> >> > + struct cpufreq_policy *policy = sg_policy->policy;
> >>
On Thu, May 19, 2016 at 01:44:36AM +0200, Rafael J. Wysocki wrote:
> On Mon, May 9, 2016 at 11:20 PM, Steve Muckle wrote:
> > The rate limit timestamp (last_freq_update_time) is currently advanced
> > anytime schedutil re-evaluates the policy regardless of whether the CPU
> >
On Thu, May 19, 2016 at 01:37:40AM +0200, Rafael J. Wysocki wrote:
> On Mon, May 9, 2016 at 11:20 PM, Steve Muckle wrote:
> > The mechanisms for remote CPU updates and slow-path frequency
> > transitions are relatively expensive - the former is an IPI while the
> > latter
On Thu, May 19, 2016 at 02:00:54PM +0200, Rafael J. Wysocki wrote:
> On Thu, May 19, 2016 at 1:33 AM, Rafael J. Wysocki wrote:
> > On Mon, May 9, 2016 at 11:20 PM, Steve Muckle
> > wrote:
> >> Without calling the cpufreq hook for a remote wakeup it is possible
>
On Thu, May 19, 2016 at 01:24:41AM +0200, Rafael J. Wysocki wrote:
> On Mon, May 9, 2016 at 11:20 PM, Steve Muckle wrote:
> > In preparation for the scheduler cpufreq callback happening on remote
> > CPUs, add support for this in schedutil.
> >
> > Schedutil currently
f update_load_avg()
> > > invoke cpufreq update hooks too.
> > >
> > > Fixes: 34e2c555f3e1 (cpufreq: Add mechanism for registering utilization
> > > update callbacks)
> > > Reported-by: Steve Muckle
> > > Signed-off-by: Rafael J. Wysocki
> >
01 seconds time elapsed ( +- 0.14% )
Steve Muckle (5):
sched: cpufreq: add cpu to update_util_data
cpufreq: schedutil: support scheduler cpufreq callbacks on remote CPUs
sched: cpufreq: call cpufreq hook from remote CPUs
cpufreq: schedutil: map raw required frequency to CPU-supported
freq
Upcoming support for scheduler cpufreq callbacks on remote wakeups
will require the client to know what the target CPU is that the
callback is being invoked for. Add this information into the callback
data structure.
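A sketch of the data-structure change described above: the per-CPU callback data carries the CPU it was registered for, so a hook invoked for a remote CPU still knows its target. The layout is illustrative only, not the exact mainline definition:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

struct update_util_data {
	void (*func)(struct update_util_data *data, u64 time);
	int cpu;			/* new: target CPU for this hook */
};

static int last_target_cpu = -1;

/* A client callback reads the target CPU from its own data. */
static void sample_hook(struct update_util_data *data, u64 time)
{
	(void)time;
	last_target_cpu = data->cpu;
}
```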
Signed-off-by: Steve Muckle
---
include/linux/sched.h | 1 +
kernel/sched
policy frequency.
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index e185075fcb5c..4d2907c8a142 100644
--- a/kernel/sched/cpufreq_schedutil.c
frequency required by the new utilization
value in schedutil. If it is the same as the previously requested
frequency then there is no need to continue with the update.
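The short-circuit described above can be sketched as follows (names are illustrative, not schedutil's; the real check also has to account for forced updates when limits change):

```c
#include <assert.h>
#include <stdbool.h>

struct sugov_state {
	unsigned int last_requested_freq;
	unsigned int updates_issued;
};

/*
 * Skip the expensive update path when the newly computed required
 * frequency matches the one last requested.
 */
static bool maybe_update(struct sugov_state *s, unsigned int new_freq)
{
	if (new_freq == s->last_requested_freq)
		return false;	/* nothing changed; bail out early */
	s->last_requested_freq = new_freq;
	s->updates_issued++;
	return true;
}
```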
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion
ULL as the new policy_cpus
parameter to cpufreq_add_update_util_hook(). Callbacks will only be
issued in this case when the target CPU and the current CPU are the
same. Otherwise policy_cpus is used to determine what is a local
vs. remote callback.
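The local-vs-remote decision described above can be sketched with an unsigned long bitmask standing in for a struct cpumask (a zero mask models the case where no policy_cpus was supplied, so only target == current counts as local; all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static bool callback_is_local(unsigned long policy_cpus_mask,
			      int target_cpu, int this_cpu)
{
	/* no mask supplied: only a same-CPU update is local */
	if (!policy_cpus_mask)
		return target_cpu == this_cpu;

	/* otherwise local means this CPU belongs to the target policy */
	return policy_cpus_mask & (1UL << this_cpu);
}
```

A local callback can update the frequency directly; a remote one needs an IPI or queued work.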
Signed-off-by: Steve Muckle
---
drive
is carried out on the local CPU.
Signed-off-by: Steve Muckle
---
kernel/sched/cpufreq_schedutil.c | 86 ++--
1 file changed, 65 insertions(+), 21 deletions(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 15
On Fri, Apr 29, 2016 at 01:21:24PM +0200, Rafael J. Wysocki wrote:
> On Friday, April 29, 2016 04:08:16 PM Viresh Kumar wrote:
...
> > Any clue, why we don't have a non-SMP version of irq_work_queue_on(), Which
> > can
> > simply call irq_work_queue() ?
>
> Because nobody else needs it?
>
> But
Looks good to me.
Also re-tested with intel_pstate on i7-3630QM !SMP, confirmed issue
is resolved. I didn't retest with ondemand because for some reason
that wasn't showing the problem before.
On Fri, May 06, 2016 at 02:09:07AM +0200, Rafael J. Wysocki wrote:
> In turn, schedutil should probably depend on CONFIG_SMP.
In the long term I wonder if it's worth putting PELT under its own
separate feature or just removing #ifdef CONFIG_SMP.
Aside from task migration CPU frequency updates the
While working on a few patches for schedutil I noticed that the CFS
cpufreq hooks depend on PELT, which depends on CONFIG_SMP.
I compiled and ran a UP kernel with intel_pstate. Running a cpu-bound
task did not result in the frequency increasing beyond fmin. For some reason
ondemand is working for