On Thu 06-04-17 12:47:57, Peter Zijlstra wrote:
> On Thu, Apr 06, 2017 at 12:42:04PM +0200, Michal Hocko wrote:
>
> > Is this something dictated by usecases which rely on isolcpus or rather
> > nobody bothered to implement one scheduling domain?
>
> It's from the original use-case I suspect. It
On Thu, Apr 06, 2017 at 12:42:04PM +0200, Michal Hocko wrote:
> Is this something dictated by usecases which rely on isolcpus or rather
> nobody bothered to implement one scheduling domain?
It's from the original use-case, I suspect. It was done very much on
purpose.
If you want bigger partitions
On Thu 06-04-17 12:29:57, Peter Zijlstra wrote:
> On Thu, Apr 06, 2017 at 12:13:49PM +0200, Michal Hocko wrote:
> > On Thu 06-04-17 11:23:29, Peter Zijlstra wrote:
> > > On Thu, Apr 06, 2017 at 09:34:36AM +0200, Michal Hocko wrote:
> > [...]
> > > > I would really like to see it confirmed by the
On Thu, Apr 06, 2017 at 12:13:49PM +0200, Michal Hocko wrote:
> On Thu 06-04-17 11:23:29, Peter Zijlstra wrote:
> > On Thu, Apr 06, 2017 at 09:34:36AM +0200, Michal Hocko wrote:
> [...]
> > > I would really like to see it confirmed by the scheduler maintainers and
> > > documented properly as
On Thu 06-04-17 11:23:29, Peter Zijlstra wrote:
> On Thu, Apr 06, 2017 at 09:34:36AM +0200, Michal Hocko wrote:
[...]
> > I would really like to see it confirmed by the scheduler maintainers and
> > documented properly as well. What you are claiming here is rather
> > surprising to my
On Thu, Apr 06, 2017 at 09:34:36AM +0200, Michal Hocko wrote:
> On Thu 06-04-17 12:49:50, Srikar Dronamraju wrote:
> > This is similar to the example I gave in my reply to Mel.
> >
> > Let's consider a 2-node, 24-core system with 12 cores in each node:
> > cores 0-11 in one node and cores 12-23 in the other.
> >
On Tue, 2017-04-04 at 22:57 +0530, Srikar Dronamraju wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index f045a35..f853dc0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1666,6 +1666,10 @@ static void task_numa_find_cpu(struct task_numa_env *env,
> >
On Tue, Apr 04, 2017 at 10:57:28PM +0530, Srikar Dronamraju wrote:
> When performing load balancing, numabalancing only looks at
> task->cpus_allowed to see if the task can run on the target cpu. If
> isolcpus kernel parameter is set, then isolated cpus will not be part of
> mask
On Thu 06-04-17 12:49:50, Srikar Dronamraju wrote:
> > > > > The isolated cpus are part of the cpus allowed list. In the above case,
> > > > > numabalancing ends up scheduling some of these tasks on isolated cpus.
> > > >
> > > > Why is this bad? If the task is allowed to run on
> > > > The isolated cpus are part of the cpus allowed list. In the above case,
> > > > numabalancing ends up scheduling some of these tasks on isolated cpus.
> > >
> > > Why is this bad? If the task is allowed to run on isolated CPUs then why
> >
> > 1. kernel-parameters.txt states: isolcpus as
On Wed 05-04-17 20:52:15, Srikar Dronamraju wrote:
> * Michal Hocko [2017-04-05 14:57:43]:
>
> > On Tue 04-04-17 22:57:28, Srikar Dronamraju wrote:
> > [...]
> > > For example:
> > > perf bench numa mem --no-data_rand_walk -p 4 -t $THREADS -G 0 -P 3072 -T 0 -l 50 -c -s 1000
> > > would
* Michal Hocko [2017-04-05 14:57:43]:
> On Tue 04-04-17 22:57:28, Srikar Dronamraju wrote:
> [...]
> > For example:
> > perf bench numa mem --no-data_rand_walk -p 4 -t $THREADS -G 0 -P 3072 -T 0 -l 50 -c -s 1000
> > would call sched_setaffinity that resets the cpus_allowed mask.
> >
> >
On Tue 04-04-17 22:57:28, Srikar Dronamraju wrote:
[...]
> For example:
> perf bench numa mem --no-data_rand_walk -p 4 -t $THREADS -G 0 -P 3072 -T 0 -l 50 -c -s 1000
> would call sched_setaffinity that resets the cpus_allowed mask.
>
> Cpus_allowed_list:0-55,57-63,65-71,73-79,81-87,89-175
On Wed, Apr 05, 2017 at 07:20:06AM +0530, Srikar Dronamraju wrote:
> > >
> > > To avoid this, please check for isolated cpus before choosing a target
> > > cpu.
> > >
> >
> > Hmm, would this also prevent a task running inside a cgroup that is
> > allowed access to isolated CPUs from
> >
> > To avoid this, please check for isolated cpus before choosing a target
> > cpu.
> >
>
> Hmm, would this also prevent a task running inside a cgroup that is
> allowed access to isolated CPUs from balancing? I severely doubt it
The scheduler doesn't do any kind of load balancing for
On Tue, 2017-04-04 at 22:57 +0530, Srikar Dronamraju wrote:
>
> The isolated cpus are part of the cpus allowed list. In the above case,
> numabalancing ends up scheduling some of these tasks on isolated cpus.
>
> To avoid this, please check for isolated cpus before choosing a target
> cpu.
When performing load balancing, numabalancing only looks at
task->cpus_allowed to see if the task can run on the target cpu. If the
isolcpus kernel parameter is set, then isolated cpus will not be part of
the task->cpus_allowed mask.
For example (on a Power 8 box running in SMT 1 mode):
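For reference, the kind of check the patch description argues for might look roughly like this inside task_numa_find_cpu(). This is a non-compilable sketch against a v4.11-era tree, not the actual patch; it assumes cpu_isolated_map (which kernel/sched/sched.h exposed at the time) and elides the rest of the function:

```
/* Sketch only: skip CPUs carved out by isolcpus= when scanning the
 * destination node for a NUMA-balancing target. */
static void task_numa_find_cpu(struct task_numa_env *env,
			       long taskimp, long groupimp)
{
	int cpu;

	for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
		/* isolcpus= CPUs sit outside the scheduler domains and
		 * are never load-balanced; do not pick them as targets. */
		if (cpumask_test_cpu(cpu, cpu_isolated_map))
			continue;
		if (!cpumask_test_cpu(cpu, tsk_cpus_allowed(env->p)))
			continue;
		/* ... evaluate this cpu as a migration target as before ... */
	}
}
```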