> Should we take into consideration whether an idle_balance was
> successful or not?

I recently ran fserver on the 8-socket machine with HT enabled and found
that load balance was succeeding at a higher-than-average rate, but idle
balance was still lowering performance.
On Tue, 2013-07-23 at 14:05 +0200, Peter Zijlstra wrote:
> On Tue, Jul 23, 2013 at 04:36:46PM +0530, Srikar Dronamraju wrote:
> > Maybe the current max value is a limiting factor, but I think there
> > should be a limit to the maximum value. Peter and Ingo may help us
> > understand why they limited it to 1 ms. But I don't think we should
> > introduce a new
On Mon, Jul 22, 2013 at 11:57:47AM -0700, Jason Low wrote:
> On Mon, 2013-07-22 at 12:31 +0530, Srikar Dronamraju wrote:
A potential issue I have found with avg_idle is that it may sometimes be
not quite as accurate for the purposes of this patch, because it is
always given a max value (default is 1 ms). For example, a CPU
could have remained idle for 1 second and avg_idle would be set to 1
millisecond.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e8b3350..da2cb3e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1348,6 +1348,8 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
 	else
On Sun, 2013-07-21 at 23:02 +0530, Preeti U Murthy wrote:
Hi Jason,

With V2 of your patch, here are the results for the ebizzy run on
3.11-rc1 + patch on a 1-socket, 16-core powerpc machine. Each ebizzy
run was for 30 seconds.

Number_of_threads    %improvement_with_patch
 4                    8.63
 8                    1.29
12
On Fri, 2013-07-19 at 16:54 +0530, Preeti U Murthy wrote:
Hi Jason,

I ran the ebizzy and kernbench benchmarks on 3.11-rc1 + your "V1
patch" on a 1-socket, 16-core powerpc machine. I thought I would let you
know the results before I try your V2.

Ebizzy: 30 seconds run. The table below shows the improvement in the
number of records completed. I have
When idle balance is being used frequently, functions such as load_balance(),
update_sd_lb_stats(), tg_load_down(), and acquiring run queue locks can increase
time spent in the kernel by quite a bit. In addition to spending kernel time
in those functions, it may sometimes indirectly increase the %