* Peter Zijlstra [2018-10-26 10:41:05]:
> > #
> > # ./hist.sh
> > 10:35:21 IST   UID   TGID    TID   %usr  %system  %guest   %CPU  CPU  Command
> > 10:35:21 IST     0   2645      -   8.70     0.01    0.00   8.71    1  ebizzy
> > 10:35:21 IST     0      -   2645
On Thu, Oct 25, 2018 at 11:53:17PM +0530, Srikar Dronamraju wrote:
> >
> > You can create multiple partitions with cpusets but still have an
> > unbound task in the root cgroup. That would suffer the exact same
> > problems.
> >
>
> I probably don't understand this. Even if the child cgroups have
> cpu_exclusive or sched_load_balance reset, the tasks in the root cgroup
> have access to all the CPUs.
On Thu, Oct 25, 2018 at 11:00:58PM +0530, Srikar Dronamraju wrote:
> > But it doesn't solve the problem.
> >
> > You can create multiple partitions with cpusets but still have an
> > unbound task in the root cgroup. That would suffer the exact same
> > problems.
> >
> > Thing is, load-balancing,
>
> You can create multiple partitions with cpusets but still have an
> unbound task in the root cgroup. That would suffer the exact same
> problems.
>
I probably don't understand this. Even if the child cgroups have
cpu_exclusive or sched_load_balance reset, the tasks in the root cgroup
have access to all the CPUs.
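
As an illustration (a minimal userspace sketch of mine, not something
posted in this thread), an unbound task in the root cgroup can print its
own affinity mask; on a box booted with isolcpus=, it will report every
online CPU, isolated or not:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;
        int cpu;

        /* Affinity mask of the current (never explicitly bound) task. */
        if (sched_getaffinity(0, sizeof(set), &set))
                return 1;

        /* Prints all online CPUs, including the isolated ones. */
        for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
                if (CPU_ISSET(cpu, &set))
                        printf("%d ", cpu);
        printf("\n");

        return 0;
}

which is why such a task runs into the same problems the partitions were
meant to avoid.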
>
> That's completely broken. Nothing in the numa balancing path uses that
> variable and afaict preemption is actually enabled where that's used, so
> using that per-cpu variable at all is broken.
>
I can demonstrate that even without numa balancing, there is
inconsistent behaviour with isolcpus.
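
To make the objection concrete, the problematic pattern looks like this
in miniature (the per-CPU variable name is made up for illustration; it
is not the one from the patch):

/* Hypothetical per-CPU counter, for illustration only. */
DEFINE_PER_CPU(int, numa_stat);

static int broken_read(void)
{
        /*
         * BAD with preemption enabled: the task can migrate between
         * evaluating smp_processor_id() and using the result, so this
         * may read (or worse, update) another CPU's copy.
         */
        return per_cpu(numa_stat, smp_processor_id());
}

static int safe_read(void)
{
        int val;

        preempt_disable();              /* stay on this CPU */
        val = __this_cpu_read(numa_stat);
        preempt_enable();

        return val;
}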
On Wed, Oct 24, 2018 at 04:00:02PM +0530, Srikar Dronamraju wrote:
> * Peter Zijlstra [2018-10-24 12:03:23]:
>
> > It appears to me the for_each_online_node() iteration in
> > task_numa_migrate() needs an additional test to see if the selected node
> > has any CPUs in the relevant sched_domain _at_all_.
On Wed, Oct 24, 2018 at 04:11:24PM +0530, Srikar Dronamraju wrote:
> * Peter Zijlstra [2018-10-24 12:15:08]:
>
> > On Wed, Oct 24, 2018 at 03:16:46PM +0530, Srikar Dronamraju wrote:
> > > * Mel Gorman [2018-10-24 09:56:36]:
> > >
> > > > On Wed, Oct 24, 2018 at 08:32:49AM +0530, Srikar Dronamraju wrote:
* Peter Zijlstra [2018-10-24 12:15:08]:
> On Wed, Oct 24, 2018 at 03:16:46PM +0530, Srikar Dronamraju wrote:
> > * Mel Gorman [2018-10-24 09:56:36]:
> >
> > > On Wed, Oct 24, 2018 at 08:32:49AM +0530, Srikar Dronamraju wrote:
> > > It would certainly be a bit odd because the
> > > application is asking for some protection but no guarantees are given.
On Wed, Oct 24, 2018 at 03:16:46PM +0530, Srikar Dronamraju wrote:
> * Mel Gorman [2018-10-24 09:56:36]:
>
> > On Wed, Oct 24, 2018 at 08:32:49AM +0530, Srikar Dronamraju wrote:
> > It would certainly be a bit odd because the
> > application is asking for some protection but no guarantees are given
> > and the application is not made aware via an error code that there is
> > a problem.
* Peter Zijlstra [2018-10-24 12:03:23]:
> It appears to me the for_each_online_node() iteration in
> task_numa_migrate() needs an additional test to see if the selected node
> has any CPUs in the relevant sched_domain _at_all_.
>
Yes, this should work.
Yi Wang does this extra check a little differently.
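
Roughly, the suggested test would look like this (an untested sketch,
not Yi Wang's actual patch): in task_numa_migrate(), while walking the
candidate nodes, skip any node whose CPUs do not intersect the span of
the NUMA sched_domain found for the source CPU:

        for_each_online_node(nid) {
                if (nid == env.src_nid || nid == p->numa_preferred_nid)
                        continue;

                /*
                 * Skip nodes with no CPUs in the relevant sched_domain
                 * at all, e.g. nodes consisting only of isolated CPUs.
                 */
                if (!cpumask_intersects(cpumask_of_node(nid),
                                        sched_domain_span(sd)))
                        continue;

                /* ... existing node scoring and task_numa_find_cpu() ... */
        }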
On Wed, Oct 24, 2018 at 08:32:49AM +0530, Srikar Dronamraju wrote:
> Load balancer and NUMA balancer are not supposed to work on isolcpus.
>
> Currently when setting sched affinity, there are no checks to see if the
> requested cpumask has CPUs from both isolcpus and housekeeping CPUs.
>
> If user passes a mix of isolcpus and housekeeping CPUs, then
> NUMA balancer can pick an isolated CPU to run the task on.
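
For reference, the kind of validation the patch description asks for
could look roughly like this (my sketch against the 4.19-era isolation
API; it is not the posted diff):

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

static int validate_affinity_mask(const struct cpumask *new_mask)
{
        const struct cpumask *hk = housekeeping_cpumask(HK_FLAG_DOMAIN);

        /*
         * A mask that touches housekeeping CPUs but is not fully
         * contained in them mixes housekeeping and isolated CPUs;
         * reject it so the caller gets an explicit error instead of
         * silently inconsistent balancing.
         */
        if (cpumask_intersects(new_mask, hk) && !cpumask_subset(new_mask, hk))
                return -EINVAL;

        return 0;
}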
* Mel Gorman [2018-10-24 09:56:36]:
> On Wed, Oct 24, 2018 at 08:32:49AM +0530, Srikar Dronamraju wrote:
> It would certainly be a bit odd because the
> application is asking for some protection but no guarantees are given
> and the application is not made aware via an error code that there is
> a problem.
Load balancer and NUMA balancer are not supposed to work on isolcpus.
Currently when setting sched affinity, there are no checks to see if the
requested cpumask has CPUs from both isolcpus and housekeeping CPUs.
If user passes a mix of isolcpus and housekeeping CPUs, then
NUMA balancer can pick an isolated CPU to run the task on.
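
The current behaviour is easy to reproduce from userspace (an
illustrative snippet of mine; it assumes the box was booted with
isolcpus=1, so CPU 0 is a housekeeping CPU and CPU 1 is isolated):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);       /* housekeeping CPU */
        CPU_SET(1, &set);       /* isolated CPU, per isolcpus=1 */

        /* Today the kernel accepts this mixed mask without complaint. */
        if (sched_setaffinity(0, sizeof(set), &set))
                perror("sched_setaffinity");
        else
                printf("mixed isolcpus/housekeeping mask accepted silently\n");

        return 0;
}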