Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-24 Thread Jason Low
> > > Should we take into consideration whether an idle_balance was successful or not? > > I recently ran fserver on the 8-socket machine with HT enabled and found that load balance was succeeding at a higher-than-average rate, but idle balance was still lowering performance
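One way that question could be folded into the heuristic, sketched purely as an illustration (the structure, field names, and thresholds below are invented for this example and are not from any posted patch): keep a running count of how often an idle-balance attempt actually pulled a task, and let a persistently low success rate argue against attempting it.

    #include <stdio.h>
    #include <stdbool.h>

    /*
     * Illustration only: track how often an idle-balance attempt actually
     * pulled a task, so a persistently unsuccessful CPU could back off.
     * Names and thresholds are invented for this sketch; they are not from
     * any posted patch.
     */

    struct balance_stats {
        unsigned long attempts;
        unsigned long pulls;    /* attempts that moved at least one task */
    };

    static void record_attempt(struct balance_stats *bs, bool pulled)
    {
        bs->attempts++;
        if (pulled)
            bs->pulls++;
    }

    static bool worth_trying(const struct balance_stats *bs)
    {
        if (bs->attempts < 16)
            return true;        /* not enough history yet */
        /* back off once fewer than 1 in 8 attempts succeeded */
        return bs->pulls * 8 >= bs->attempts;
    }

    int main(void)
    {
        struct balance_stats bs = { 0, 0 };
        int i;

        for (i = 0; i < 32; i++)
            record_attempt(&bs, i % 16 == 0);   /* roughly 1 success per 16 tries */

        printf("attempts=%lu pulls=%lu worth trying? %s\n",
               bs.attempts, bs.pulls, worth_trying(&bs) ? "yes" : "no");  /* prints "no" */
        return 0;
    }

A real implementation would live in kernel/sched/fair.c and need per-CPU or per-domain state plus some decay, but the toy shows the shape of the decision.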

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-23 Thread Jason Low
On Tue, 2013-07-23 at 16:36 +0530, Srikar Dronamraju wrote: > > A potential issue I have found with avg_idle is that it may sometimes be not quite as accurate for the purposes of this patch, because it is always given a max value (default is 10^6 ns). For example, a CPU could

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-23 Thread Mike Galbraith
On Tue, 2013-07-23 at 14:05 +0200, Peter Zijlstra wrote: > On Tue, Jul 23, 2013 at 04:36:46PM +0530, Srikar Dronamraju wrote: > > Maybe the current max value is a limiting factor, but I think there should be a limit to the maximum value. Peter and Ingo may help us understand why they

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-23 Thread Peter Zijlstra
On Tue, Jul 23, 2013 at 04:36:46PM +0530, Srikar Dronamraju wrote: > Maybe the current max value is a limiting factor, but I think there should be a limit to the maximum value. Peter and Ingo may help us understand why they limited it to 1 ms. But I don't think we should introduce a new

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-23 Thread Srikar Dronamraju
> A potential issue I have found with avg_idle is that it may sometimes be not quite as accurate for the purposes of this patch, because it is always given a max value (default is 10^6 ns). For example, a CPU could have remained idle for 1 second and avg_idle would be set to 1 millisecond.
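For context on where that cap comes from: in the 3.11-era wakeup path, rq->avg_idle is clamped to 2 * sysctl_sched_migration_cost (2 * 500000 ns by default, i.e. about 1 ms), so a CPU that sat idle for a full second still reports an avg_idle of only ~1 ms. The userspace toy below mirrors that bookkeeping (the constants and the 1/8-weight average follow the kernel's update_avg(); everything else is illustrative only):

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Userspace toy model of the scheduler's avg_idle bookkeeping (patterned
     * on the 3.11-era ttwu_do_wakeup()/update_avg() logic; the constants and
     * the 1/8 weight mirror the kernel, everything else is illustration only).
     */

    #define SCHED_MIGRATION_COST_NS 500000ULL               /* sysctl_sched_migration_cost default */
    #define AVG_IDLE_MAX_NS (2 * SCHED_MIGRATION_COST_NS)   /* the ~1 ms cap discussed here */

    static void update_avg(uint64_t *avg, uint64_t sample)
    {
        int64_t diff = sample - *avg;
        *avg += diff >> 3;                  /* exponential average with 1/8 weight */
    }

    static void account_idle_period(uint64_t *avg_idle, uint64_t idle_ns)
    {
        if (idle_ns > AVG_IDLE_MAX_NS)
            *avg_idle = AVG_IDLE_MAX_NS;    /* long idle periods are clamped, not averaged */
        else
            update_avg(avg_idle, idle_ns);
    }

    int main(void)
    {
        uint64_t avg_idle = 0;

        account_idle_period(&avg_idle, 1000000000ULL);       /* CPU was idle for a full second */
        printf("avg_idle after 1 s idle: %llu ns\n",
               (unsigned long long)avg_idle);                /* prints 1000000 ns (1 ms) */

        account_idle_period(&avg_idle, 200000ULL);            /* then a short 200 us idle period */
        printf("avg_idle after 200 us idle: %llu ns\n",
               (unsigned long long)avg_idle);                /* prints 900000 ns */
        return 0;
    }

Running it prints avg_idle = 1000000 ns after the 1-second idle period, which is exactly the accuracy concern raised in this subthread.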

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-23 Thread Peter Zijlstra
On Mon, Jul 22, 2013 at 11:57:47AM -0700, Jason Low wrote: > On Mon, 2013-07-22 at 12:31 +0530, Srikar Dronamraju wrote: > > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e8b3350..da2cb3e 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-22 Thread Jason Low
On Mon, 2013-07-22 at 12:31 +0530, Srikar Dronamraju wrote: > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e8b3350..da2cb3e 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1348,6 +1348,8 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p,

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-22 Thread Jason Low
On Sun, 2013-07-21 at 23:02 +0530, Preeti U Murthy wrote: > Hi Jason, > With V2 of your patch, here are the results for the ebizzy run on 3.11-rc1 + patch on a 1-socket, 16-core powerpc machine. Each ebizzy run was for 30 seconds. > Number_of_threads %improvement_with_patch

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-22 Thread Srikar Dronamraju
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c index e8b3350..da2cb3e 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1348,6 +1348,8 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags) else

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-21 Thread Preeti U Murthy
Hi Jason, with V2 of your patch here are the results for the ebizzy run on 3.11-rc1 + patch on a 1-socket, 16-core powerpc machine. Each ebizzy run was for 30 seconds.
Number_of_threads   %improvement_with_patch
4                   8.63
8                   1.29
12

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-19 Thread Jason Low
On Fri, 2013-07-19 at 16:54 +0530, Preeti U Murthy wrote: > Hi Jason, > I ran ebizzy and kernbench benchmarks on 3.11-rc1 + your "V1 patch" on a 1-socket, 16-core powerpc machine. I thought I would let you know the results before I try your V2. > Ebizzy: 30-second run. The

Re: [RFC PATCH v2] sched: Limit idle_balance()

2013-07-19 Thread Preeti U Murthy
Hi Jason, I ran ebizzy and kernbench benchmarks on 3.11-rc1 + your "V1 patch" on a 1-socket, 16-core powerpc machine. I thought I would let you know the results before I try your V2. Ebizzy: 30-second run. The table below shows the improvement in the number of records completed. I

[RFC PATCH v2] sched: Limit idle_balance()

2013-07-19 Thread Jason Low
When idle balance runs frequently, time spent in functions such as load_balance(), update_sd_lb_stats(), and tg_load_down(), along with acquiring run queue locks, can increase time spent in the kernel by quite a bit. In addition to spending kernel time in those functions, it may sometimes indirectly increase the %
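The gating idea described above, reduced to a standalone C sketch (the struct, helper name, and sample numbers are assumptions made for this illustration; they are not the posted patch): only attempt idle balancing when the CPU's average idle time exceeds the time a balance attempt is expected to cost, so the overhead listed above is not paid on a CPU that is about to become busy again.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Illustration of the gating heuristic proposed above: skip the
     * idle-balance attempt when the balance is expected to cost more time
     * than this CPU typically stays idle. The struct, helper, and sample
     * numbers are invented for this sketch; they are not the posted patch.
     */

    struct cpu_idle_stats {
        uint64_t avg_idle_ns;          /* smoothed idle duration for this CPU */
        uint64_t avg_balance_cost_ns;  /* smoothed cost of past idle-balance attempts */
    };

    static bool should_idle_balance(const struct cpu_idle_stats *st)
    {
        /* Balancing only pays off if the CPU is likely to stay idle
         * longer than the balance attempt itself takes. */
        return st->avg_idle_ns > st->avg_balance_cost_ns;
    }

    int main(void)
    {
        struct cpu_idle_stats frequently_woken = { .avg_idle_ns = 30000,
                                                   .avg_balance_cost_ns = 120000 };
        struct cpu_idle_stats mostly_idle      = { .avg_idle_ns = 900000,
                                                   .avg_balance_cost_ns = 120000 };

        printf("frequently woken CPU: balance? %s\n",
               should_idle_balance(&frequently_woken) ? "yes" : "no");  /* no  */
        printf("mostly idle CPU:      balance? %s\n",
               should_idle_balance(&mostly_idle) ? "yes" : "no");       /* yes */
        return 0;
    }

In the real scheduler the "expected cost" side of the comparison is the hard part, which is what the avg_idle accuracy discussion elsewhere in this thread is about.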
