On Sun, Apr 27, 2014 at 09:07:25PM +0100, Yuyang Du wrote:
> On Fri, Apr 25, 2014 at 03:53:34PM +0100, Morten Rasmussen wrote:
> > I fully agree. My point was that there is more to task consolidation
> > than just observing the degree of task parallelism. The system topology
> > has a lot to say when

I'm a bit confused: do you have one global CC that tracks the number of
tasks across all runqueues in the system, or one for each cpu? There
could be some contention when updating that value on larger systems if
it is one global CC. If they are separate, how do you then decide when to
On Fri, Apr 25, 2014 at 03:53:34PM +0100, Morten Rasmussen wrote:
> I fully agree. My point was that there is more to task consolidation
> than just observing the degree of task parallelism. The system topology
> has a lot to say when deciding whether or not to pack. That was the
> motivation for
On Friday, April 25, 2014 03:53:34 PM Morten Rasmussen wrote:
> On Fri, Apr 25, 2014 at 01:19:46PM +0100, Rafael J. Wysocki wrote:
[...]
> > So in my opinion we need to figure out how to measure workloads while they
> > are running and then use that information to make load balancing decisions.
Hi Yuyang,

On Thu, Apr 24, 2014 at 08:30:05PM +0100, Yuyang Du wrote:
> 1) Divide continuous time into periods of time, and average task concurrency
> in period, for tolerating the transient bursts:
> a = sum(concurrency * time) / period
> 2) Exponentially decay past periods, and synthesize
On Fri, Apr 25, 2014 at 10:00:02AM +0200, Vincent Guittot wrote:
> On 24 April 2014 21:30, Yuyang Du wrote:
> > Hi Ingo, PeterZ, and others,
> >
> > The current scheduler's load balancing is completely work-conserving. In
> > some
> > workload, generally low CPU utilization but immersed with CPU
On Fri, 2014-04-25 at 03:30 +0800, Yuyang Du wrote:
> To track CC, we intercept the scheduler in 1) enqueue, 2) dequeue, 3)
> scheduler tick, and 4) enter/exit idle.
Boo hiss to 1, 2 and 4. Less fastpath math would be better.
-Mike
Hi Ingo, PeterZ, and others,

The current scheduler's load balancing is completely work-conserving. In some
workloads, generally low CPU utilization but immersed with CPU bursts of
transient tasks, migrating tasks to engage all available CPUs for
work-conserving can lead to significant overhead: