On 02/01/13 10:29, Vincent Guittot wrote:
On 2 January 2013 06:28, Viresh Kumar wrote:
On 20 December 2012 13:41, Vincent Guittot wrote:
On 19 December 2012 11:57, Morten Rasmussen wrote:
If I understand the new version of "sched: secure access to other CPU
statistics" correctly
On 19/12/12 09:34, Viresh Kumar wrote:
On 19 December 2012 14:53, Vincent Guittot wrote:
Le 19 déc. 2012 07:34, "Viresh Kumar" a écrit :
Can we resolve this issue now? I don't want anything during the release
period
this time.
The new version of the patchset should solve the concerns of everyone
On 07/12/12 14:54, Viresh Kumar wrote:
On 7 December 2012 18:43, Morten Rasmussen wrote:
I should have included the numbers in the cover letter. Here are
numbers for TC2.
sysbench (normalized execution time, lower is better)
threads 2 4 8
HMP 1.00 1.00 1.00
HMP+GB
On Dec 7, 2012 at 5:33 PM, Morten Rasmussen wrote:
> > Hi Viresh,
> >
> > Here is a patch that introduces global load balancing on top of the
> > existing HMP
> > patch set. It depends on the HMP patches already present in your
> > task-placement-v2
> >
is under-utilized.
Signed-off-by: Morten Rasmussen
---
kernel/sched/fair.c | 101 ++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 97 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1cfe112..7ac47c9 100644
--- a/kernel/sched/fair.c
the MP branch for the 12.12 release? Testing with sysbench and
coremark shows significant performance improvements for parallel
workloads, as all cpus can now be used for cpu-intensive tasks.
Thanks,
Morten
Morten Rasmussen (1):
sched: Basic global balancing support for HMP
kernel/sched/fair.c
On 05/12/12 11:35, Viresh Kumar wrote:
On 5 December 2012 16:58, Morten Rasmussen wrote:
I tested Vincent's fix ("sched: pack small tasks: fix update packing
domain") for the buddy selection some weeks ago and confirmed that it
works. So my quick fixes are no longer necessary.
On 05/12/12 11:01, Viresh Kumar wrote:
On 5 December 2012 16:28, Liviu Dudau wrote:
The revert request came at Morten's suggestion. He has comments on the
code and technical reasons why he believes that the approach is not the
best one, as well as some scenarios where possible race conditions
Hi Vincent,
On Mon, Nov 12, 2012 at 01:51:00PM +, Vincent Guittot wrote:
> On 9 November 2012 18:13, Morten Rasmussen wrote:
> > Hi Vincent,
> >
> > I have experienced suboptimal buddy selection on a dual cluster setup
> > (ARM TC2) if SD_SHARE_POWERLINE is enabled at MC level and disabled at
> > CPU level.
On 19/11/12 14:09, Vincent Guittot wrote:
On 19 November 2012 14:36, Morten Rasmussen wrote:
On 19/11/12 12:23, Vincent Guittot wrote:
On 19 November 2012 13:08, Morten Rasmussen
wrote:
Hi Vincent,
On 19/11/12 09:20, Vincent Guittot wrote:
Hi,
On 16 November 2012 19:32, Liviu Dudau wrote:
From: Morten Rasmussen
Re-enable SD_SHARE_POWERLINE to reflect the power domains of TC2.
---
arch/arm/kernel/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Hi Vincent,
I have experienced suboptimal buddy selection on a dual cluster setup
(ARM TC2) if SD_SHARE_POWERLINE is enabled at MC level and disabled at
CPU level. This seems to be the correct flag settings for a system with
only cluster level power gating.
To me it looks like update_packing_domain
On Fri, Nov 02, 2012 at 10:53:47AM +, Santosh Shilimkar wrote:
> On Monday 29 October 2012 06:42 PM, Vincent Guittot wrote:
> > On 24 October 2012 17:20, Santosh Shilimkar
> > wrote:
> >> Vincent,
> >>
> >> Few comments/questions.
> >>
> >>
> >> On Sunday 07 October 2012 01:13 PM, Vincent Guittot wrote:
On Fri, Oct 12, 2012 at 04:33:19PM +0100, Jon Medhurst (Tixy) wrote:
> On Fri, 2012-10-12 at 16:11 +0100, Morten Rasmussen wrote:
> > Hi Tixy,
> >
> > Thanks for the patch. I think this patch is the right way to solve this
> > issue.
> >
> > There is still
> > + cpumask_copy(&domain->cpus, &hmp_slow_cpu_mask);
> > + list_add(&domain->hmp_domains, hmp_domains_list);
> > + }
> > + domain = (struct hmp_domain *)
> > +         kmalloc(sizeof(struct hmp_domain), GFP_KERNEL);
> > + cpumask_copy(&domain->cpus, &hmp_fa
On Thu, Oct 04, 2012 at 07:58:45AM +0100, Viresh Kumar wrote:
> On 22 September 2012 00:02, wrote:
> > diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
>
> > +void __init arch_get_hmp_domains(struct list_head *hmp_domains_list)
> > +{
> > + struct cpumask hmp_fast_cpu_mask;
Hi Tixy,
Could you have a look at my code stealing patch below? Since it is
basically a trimmed version of one of your patches I would prefer to
put you as author and have your SOB on it. What is your opinion?
Thanks,
Morten
On Fri, Sep 21, 2012 at 07:32:21PM +0100, Morten Rasmussen wrote
On Thu, Oct 04, 2012 at 07:49:32AM +0100, Viresh Kumar wrote:
> On 22 September 2012 00:02, wrote:
> > From: Morten Rasmussen
> >
> > We can't rely on Kconfig options to set the fast and slow CPU lists for
> > HMP scheduling if we want a single kernel binary to support multiple
> > devices with different CPU topology.
On Thu, Oct 04, 2012 at 07:27:00AM +0100, Viresh Kumar wrote:
> On 22 September 2012 00:02, wrote:
>
> > +config SCHED_HMP_PRIO_FILTER
> > + bool "(EXPERIMENTAL) Filter HMP migrations by task priority"
> > + depends on SCHED_HMP
>
> Should it depend on EXPERIMENTAL?
>
> > + h
Hi Viresh,
On Thu, Oct 04, 2012 at 07:02:03AM +0100, Viresh Kumar wrote:
> Hi Morten,
>
> On 22 September 2012 00:02, wrote:
> > From: Morten Rasmussen
> >
> > This patch introduces the basic SCHED_HMP infrastructure. Each class of
> > cpus is represented by
From: Morten Rasmussen
This patch adds load_avg_ratio to each task. The load_avg_ratio is a
variant of load_avg_contrib which is not scaled by the task priority. It
is calculated like this:
runnable_avg_sum * NICE_0_LOAD / (runnable_avg_period + 1).
Signed-off-by: Morten Rasmussen
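The calculation above can be sketched in plain userspace C. This is an illustration of the described formula only, not the kernel code; `NICE_0_LOAD` is 1024 in the kernel, and the accumulator names follow the per-entity load-tracking patches.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of load_avg_ratio as described above: the runnable average
 * expressed relative to NICE_0_LOAD, deliberately NOT scaled by task
 * priority, so a fully runnable task approaches 1024 regardless of
 * its nice value. */
#define NICE_0_LOAD 1024

static uint32_t load_avg_ratio(uint32_t runnable_avg_sum,
                               uint32_t runnable_avg_period)
{
        /* +1 avoids division by zero for a freshly created task. */
        return (runnable_avg_sum * NICE_0_LOAD) / (runnable_avg_period + 1);
}
```

A task that was runnable for half of its tracked period gets a ratio near 512; an idle task gets 0.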
From: Morten Rasmussen
This patch introduces the basic SCHED_HMP infrastructure. Each class of
cpus is represented by a hmp_domain and tasks will only be moved between
these domains when their load profiles suggest it is beneficial.
SCHED_HMP relies heavily on the task load-tracking introduced
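The hmp_domain idea described above can be sketched in userspace C. The kernel version uses `struct cpumask` and `struct list_head`; here a plain bitmask and a fixed array stand in for them, and all names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: each CPU class (e.g. the big and LITTLE clusters) is one
 * domain holding a mask of its cpus. The list is ordered fastest
 * first, mirroring the SCHED_HMP ordering. */
struct hmp_domain_sketch {
        uint32_t cpus;          /* bit n set => cpu n belongs here */
};

static struct hmp_domain_sketch hmp_domains[2];

static void hmp_domains_init(uint32_t fast_mask, uint32_t slow_mask)
{
        hmp_domains[0].cpus = fast_mask;   /* e.g. Cortex-A15 cluster */
        hmp_domains[1].cpus = slow_mask;   /* e.g. Cortex-A7 cluster */
}

/* Which domain does a cpu belong to? Returns -1 if none. */
static int hmp_cpu_domain(int cpu)
{
        for (int i = 0; i < 2; i++)
                if (hmp_domains[i].cpus & (1u << cpu))
                        return i;
        return -1;
}
```

Task migration then means moving a task between domains when its load profile suggests the other class is a better fit.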
From: Morten Rasmussen
Introduces a priority threshold which prevents low priority tasks
from migrating to faster hmp_domains (cpus). This is useful for
user-space software which assigns lower task priority to background
tasks.
Signed-off-by: Morten Rasmussen
---
arch/arm/Kconfig | 13
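The filter described above reduces to a simple predicate. This sketch is illustrative: the threshold name and default value are assumptions, not taken from the actual Kconfig entry.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical cut-off: tasks nicer than this never migrate up to a
 * faster hmp_domain, so background work stays on the slow cluster. */
#define HMP_UP_PRIO_THRESHOLD 5

static bool hmp_up_migration_allowed(int task_nice)
{
        /* Higher nice value => lower priority => filtered out. */
        return task_nice <= HMP_UP_PRIO_THRESHOLD;
}
```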
From: Morten Rasmussen
We need a way to prevent tasks that are migrating up and down the
hmp_domains from migrating straight on through before the load has
adapted to the new compute capacity of the CPU on the new hmp_domain.
This patch adds a next up/down migration delay that prevents the task
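The delay described above can be sketched as a timestamp check. The delay value and field names here are illustrative assumptions, not the patch's actual tunables.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch: after a task moves between hmp_domains, refuse further
 * migrations until its load signal has had time to adapt to the new
 * cpu capacity. 100 ms is a hypothetical placeholder. */
#define HMP_MIGRATION_DELAY_NS (100ull * 1000 * 1000)

static bool hmp_migration_ok(uint64_t now_ns, uint64_t last_migration_ns)
{
        return now_ns - last_migration_ns >= HMP_MIGRATION_DELAY_NS;
}
```

Without such a delay a task could bounce between clusters before its tracked load reflects the capacity of the cpu it just landed on.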
From: Morten Rasmussen
Adds ftrace events for key variables related to the entity
load-tracking to help debugging scheduler behaviour. Allows tracing
of load contribution and runqueue residency ratio for both entities
and runqueues as well as entity CPU usage ratio.
Signed-off-by: Morten
From: Morten Rasmussen
Hi Paul, Paul, Peter, Suresh, linaro-sched-sig, and LKML,
As a follow-up on my Linux Plumbers Conference talk about my experiments with
scheduling on heterogeneous systems I'm posting a proof-of-concept patch set
with my modifications. The intention behind
From: Morten Rasmussen
SCHED_HMP requires the different cpu types to be represented by an
ordered list of hmp_domains. Each hmp_domain represents all cpus of
a particular type using a cpumask.
The list is platform specific and therefore must be generated by
platform code by implementing
From: Morten Rasmussen
We can't rely on Kconfig options to set the fast and slow CPU lists for
HMP scheduling if we want a single kernel binary to support multiple
devices with different CPU topology. E.g. TC2 (ARM's Test-Chip-2
big.LITTLE system), Fast Models, or even non-big.LITTLE
From: Morten Rasmussen
This patch introduces forced task migration for moving suitable
currently running tasks between hmp_domains. Task behaviour is likely
to change over time. Tasks running in a less capable hmp_domain may
change to become more demanding and should therefore be migrated up
From: Morten Rasmussen
Adds ftrace event for tracing task migrations using HMP
optimized scheduling.
Signed-off-by: Morten Rasmussen
---
include/trace/events/sched.h | 28 ++++++++++++++++++++++++++++
kernel/sched/fair.c          | 15 +++++++++++----
2 files changed, 39 insertions(+), 4 deletions(-)
From: Morten Rasmussen
Adds Kconfig entries to enable HMP scheduling on ARM platforms.
Currently, it disables CPU level sched_domain load-balacing in order
to simplify things. This needs fixing in a later revision. HMP
scheduling will do the load-balancing at this level instead.
Signed-off-by
Hi Viresh,
On Mon, Sep 03, 2012 at 06:21:26AM +0100, Viresh Kumar wrote:
> On 28 August 2012 10:37, Viresh Kumar wrote:
> > I have updated
> >
> > https://wiki.linaro.org/WorkingGroups/PowerManagement/Process/bigLittleMPTree
> >
> > as per our last discussion. Please see if I have missed something