>> Maybe we can skip local group since it's a bottom-up search, so we know
>> there's no idle cpu in the lower domain from the prior iteration.

I did this change, but the results seem worse on my machines; I guess
starting the idle-cpu search bottom up is a bad idea.
The following is the full version with
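The skip-local-group idea above can be sketched as follows. The domain layout, sizes, and names here are illustrative assumptions, not the kernel's actual data structures: the search walks domain levels bottom up, and at every level above the first it skips the local group, since the previous iteration already proved that group has no idle cpu.

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 8

/* idle[cpu] says whether that cpu is currently idle */
static bool idle[NR_CPUS];

/* One domain level per entry: groups of `group_size` consecutive cpus. */
static int find_idle_cpu(int waking_cpu)
{
    /* level 0: SMT pair, level 1: core cluster, level 2: whole package */
    static const int group_size[] = { 2, 4, 8 };
    int level, cpu;

    for (level = 0; level < 3; level++) {
        int size  = group_size[level];
        int first = (waking_cpu / size) * size;   /* local group start */

        for (cpu = first; cpu < first + size; cpu++) {
            /*
             * Above the lowest level, the local group of the prior
             * iteration is known to hold no idle cpu: skip it.
             */
            if (level > 0) {
                int prev = group_size[level - 1];
                if (cpu / prev == waking_cpu / prev)
                    continue;
            }
            if (idle[cpu])
                return cpu;
        }
    }
    return -1;   /* no idle cpu anywhere in the package */
}
```

The skip avoids rescanning cpus the lower level already rejected, at the cost of the staleness the thread is worried about: a cpu that went idle between iterations is missed.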
On 01/21/2013 10:40 AM, Preeti U Murthy wrote:
Hi Alex,
Thank you very much for running the below benchmark on
blocked_load+runnable_load :) Just a few queries.

How did you do the wake-up balancing? Did you iterate over the L3
package looking for an idle cpu? Or did you just query the L2 package
for an idle cpu?

I think when you are using
On 01/09/2013 11:14 AM, Preeti U Murthy wrote:
Here comes the point of making both load balancing and wake-up
balancing (select_idle_sibling) cooperative. How about we always schedule
the woken up task on the prev_cpu? This seems more sensible considering
load balancing considers blocked load as being a part of the load of cpu2.
The blocked load of a cluster will be high if the blocked tasks have
run recently. The contribution of a blocked task will be divided by 2
each 32ms, so it means that a high blocked load will be made of recent
running tasks and the long sleeping tasks will not influence the load
balancing.
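The halving rule described above can be sketched as follows. This is a simplified illustration that only halves once per whole 32 ms period; the kernel's per-entity load tracking actually decays once per 1 ms period with a factor y chosen so that y^32 = 1/2.

```c
/*
 * Sketch of the decay rule: a blocked task's load contribution is
 * halved for every full 32 ms it has been blocked. Illustrative only,
 * not the kernel's exact arithmetic.
 */
static unsigned long decayed_contrib(unsigned long contrib, unsigned int ms)
{
    for (unsigned int periods = ms / 32; periods; periods--)
        contrib /= 2;   /* one halving per elapsed 32 ms period */
    return contrib;
}
```

After ten periods (~320 ms) a contribution of 1024 is down to 1, which is why a high blocked load is dominated by recently running tasks and long sleepers stop influencing the balance.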
On 01/17/2013 01:17 PM, Namhyung Kim wrote:
> On Wed, 16 Jan 2013 22:08:21 +0800, Alex Shi wrote:

Hi Namhyung,
I've rewritten the patch as follows. hackbench/aim9 doesn't show a clean
performance change. Actually we can get some profit, though it will be
very slight. :)
BTW, it still needs another patch applied before this one; this is just to
show the logic.
===
From
Hi Alex,
On 01/16/2013 07:38 PM, Alex Shi wrote:
> On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
On 01/08/2013 04:41 PM, Preeti U Murthy wrote:
Hi Mike,
Thank you very much for such a clear and comprehensive explanation.
So when I put together the problem and the proposed solution pieces in the
current scheduler scalability, the following was what I found:

1. select_idle_sibling() is needed as an agent to correctly find the right cpu
On 8 January 2013 07:06, Preeti U Murthy wrote:
> On 01/07/2013 09:18 PM, Vincent Guittot wrote:
>> On 2 January 2013 05:22, Preeti U Murthy wrote:
Hi Preeti,
On 01/07/2013 09:18 PM, Vincent Guittot wrote:
> On 2 January 2013 05:22, Preeti U Murthy wrote:
On Mon, 2013-01-07 at 10:59 +0530, Preeti U Murthy wrote:
Hi Mike,
Thank you very much for your inputs. Just a few thoughts so that we are
clear about the problems so far in scheduler scalability and in what
direction we ought to move to correct them.

1. During fork or exec, the scheduler goes through find_idlest_group()
and find_idlest_cpu() in
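The fork/exec path mentioned above can be sketched roughly: pick the group with the lowest total load, then the least-loaded cpu within it. The load values and the two-group layout here are illustrative assumptions; the real find_idlest_group()/find_idlest_cpu() additionally weigh imbalance thresholds and per-cpu capacity.

```c
#define NR_CPUS   8
#define NR_GROUPS 2
#define GRP_SIZE  (NR_CPUS / NR_GROUPS)

static unsigned long cpu_load[NR_CPUS];

/* Pick the sched group with the lowest summed load. */
static int find_idlest_group(void)
{
    unsigned long best = ~0UL;
    int g, best_g = 0;

    for (g = 0; g < NR_GROUPS; g++) {
        unsigned long load = 0;
        for (int c = g * GRP_SIZE; c < (g + 1) * GRP_SIZE; c++)
            load += cpu_load[c];
        if (load < best) {
            best = load;
            best_g = g;
        }
    }
    return best_g;
}

/* Pick the least-loaded cpu within that group. */
static int find_idlest_cpu(int group)
{
    int c, best_c = group * GRP_SIZE;

    for (c = group * GRP_SIZE; c < (group + 1) * GRP_SIZE; c++)
        if (cpu_load[c] < cpu_load[best_c])
            best_c = c;
    return best_c;
}
```

Note the pitfall this two-step search has: an idle cpu sitting inside the busier group is never considered, since its group loses at the first step.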
On Sat, 2013-01-05 at 09:13 +0100, Mike Galbraith wrote:
I still have a 2.6-rt problem I need to find time to squabble with, but
maybe I'll soonish see if what you did plus what I did combined works
out on that 4x10 core box where current is _so_ unbelievably horrible.
Heck, it can't get any
On Thu, 2013-01-03 at 16:08 +0530, Preeti U Murthy wrote:
Subject: [PATCH] sched: Merge select_idle_sibling with the behaviour of
SD_BALANCE_WAKE

The function of select_idle_sibling() is to place the woken up task in the
vicinity of the waking cpu or on the previous cpu depending on what
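A minimal sketch of a select_idle_sibling()-style decision, assuming for simplicity a single cache domain shared by all cpus: prefer the waking cpu if idle, then the previous cpu, then any idle cpu in the shared-cache domain, and fall back to the cache-warm prev_cpu otherwise. This is an illustration of the placement idea, not the kernel's actual implementation.

```c
#include <stdbool.h>

#define NR_CPUS 4

static bool cpu_idle[NR_CPUS];

static int select_idle_sibling(int waking_cpu, int prev_cpu)
{
    if (cpu_idle[waking_cpu])
        return waking_cpu;            /* cheapest: wake locally */
    if (cpu_idle[prev_cpu])
        return prev_cpu;              /* cache-warm and idle */
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (cpu_idle[cpu])
            return cpu;               /* any idle cpu sharing the cache */
    return prev_cpu;                  /* nothing idle: keep warm caches */
}
```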
On Thu, 2013-01-03 at 16:08 +0530, Preeti U Murthy wrote:
Hi Mike,
Thank you very much for your feedback. Considering your suggestions, I have
posted a proposed solution to prevent select_idle_sibling() from becoming a
disadvantage to normal load balancing, rather aiding it.

**This patch is *without* the enablement of the per entity load tracking
On Wed, 2013-01-02 at 09:52 +0530, Preeti U Murthy wrote:
Hi everyone,
I have been looking at how different workloads react when the per entity
load tracking metric is integrated into the load balancer and what are
the possible reasons for it.

I had posted the integration patch earlier:
https://lkml.org/lkml/2012/11/15/391

Essentially what I am doing