On Mon, Dec 14, 2020 at 12:36:32PM +0000, Mel Gorman wrote:
> As the merge window is open, it's inevitable that this will need to be
> evaluated against 5.11-rc1 when all the current batch of scheduler code
> has been merged. Do you mind splitting your prototype into three patches
> and slapping some
On Mon, Dec 14, 2020 at 10:18:16AM +0100, Vincent Guittot wrote:
> On Fri, 11 Dec 2020 at 18:45, Peter Zijlstra wrote:
> >
> > On Thu, Dec 10, 2020 at 12:58:33PM +0000, Mel Gorman wrote:
> > > The prerequisite patch to make that approach work was rejected though
> > > as on its own, it's not very
On Mon, Dec 14, 2020 at 10:31:22AM +0100, Peter Zijlstra wrote:
> On Mon, Dec 14, 2020 at 09:11:29AM +0100, Vincent Guittot wrote:
> > On Fri, 11 Dec 2020 at 23:50, Mel Gorman
> > wrote:
>
> > > I originally did something like that on purpose too but Vincent called
> > > it out so it is worth
On Mon, Dec 14, 2020 at 09:11:29AM +0100, Vincent Guittot wrote:
> On Fri, 11 Dec 2020 at 23:50, Mel Gorman wrote:
> > I originally did something like that on purpose too but Vincent called
> > it out so it is worth mentioning now to avoid surprises. That said, I'm
> > at the point where
On Fri, Dec 11, 2020 at 10:50:02PM +0000, Mel Gorman wrote:
> > > The third potential downside is that the SMT sibling is not guaranteed to
> > > be checked due to SIS_PROP throttling but in the old code, that would have
> > > been checked by select_idle_smt(). That might result in premature
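For reference, the guarantee being discussed looks roughly like the
following sketch (mainline helper names, but the exact form is an
assumption, not the patch under discussion): a dedicated sibling scan
runs regardless of whether SIS_PROP exhausted the wider LLC scan budget.

	static int select_idle_smt(struct task_struct *p, int target)
	{
		int cpu;

		/* Walk only target's SMT siblings that p may run on. */
		for_each_cpu_and(cpu, cpu_smt_mask(target), p->cpus_ptr) {
			if (cpu == target)
				continue;
			if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
				return cpu;
		}

		return -1;
	}

Because this loop is bounded by the SMT width, it is cheap enough to
run unconditionally, which is why folding it into the throttled scan
risks the premature behaviour described above.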
On Fri, 11 Dec 2020 at 18:45, Peter Zijlstra wrote:
>
> On Thu, Dec 10, 2020 at 12:58:33PM +0000, Mel Gorman wrote:
> > The prerequisite patch to make that approach work was rejected though
> > as on its own, it's not very helpful and Vincent didn't like that the
> > load_balance_mask was abused to
On Fri, 11 Dec 2020 at 23:50, Mel Gorman wrote:
>
> On Fri, Dec 11, 2020 at 11:19:05PM +0100, Peter Zijlstra wrote:
> > On Fri, Dec 11, 2020 at 08:43:37PM +0000, Mel Gorman wrote:
> > > One bug is in __select_idle_core() though. It's scanning the SMT mask,
> > > not select_idle_mask so it can
On 2020/12/10 19:34, Mel Gorman wrote:
> On Thu, Dec 10, 2020 at 04:23:47PM +0800, Li, Aubrey wrote:
>>> I ran this patch with tbench on top of the schedstat patches that
>>> track SIS efficiency. The tracking adds overhead so it's not a perfect
>>> performance comparison but the expectation
On Fri, Dec 11, 2020 at 11:19:05PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 11, 2020 at 08:43:37PM +0000, Mel Gorman wrote:
> > One bug is in __select_idle_core() though. It's scanning the SMT mask,
> > not select_idle_mask so it can return an idle candidate that is not in
> > p->cpus_ptr.
>
>
On Fri, Dec 11, 2020 at 08:43:37PM +0000, Mel Gorman wrote:
> One bug is in __select_idle_core() though. It's scanning the SMT mask,
> not select_idle_mask so it can return an idle candidate that is not in
> p->cpus_ptr.
D'oh.. luckily the benchmarks don't hit that :-)
> There are some other
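The fix implied above looks roughly like this sketch (not the actual
patch; the caller is assumed to pick 'core' from a mask already ANDed
with p->cpus_ptr, so only the fallback candidate needs filtering):

	static int __select_idle_core(struct task_struct *p, int core,
				      int *idle_cpu)
	{
		bool idle = true;
		int cpu;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			if (!available_idle_cpu(cpu)) {
				idle = false;
				continue;
			}
			/*
			 * The bug: without this cpumask_test_cpu() check,
			 * *idle_cpu can be set to an SMT sibling that is
			 * not in p->cpus_ptr.
			 */
			if (*idle_cpu == -1 &&
			    cpumask_test_cpu(cpu, p->cpus_ptr))
				*idle_cpu = cpu;
		}

		return idle ? core : -1;
	}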
On Fri, Dec 11, 2020 at 06:44:42PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 10, 2020 at 12:58:33PM +, Mel Gorman wrote:
> > The prerequisite patch to make that approach work was rejected though
> > as on its own, it's not very helpful and Vincent didn't like that the
> > load_balance_mask was
On Thu, Dec 10, 2020 at 12:58:33PM +0000, Mel Gorman wrote:
> The prerequisite patch to make that approach work was rejected though
> as on its own, it's not very helpful and Vincent didn't like that the
> load_balance_mask was abused to make it effective.
So last time I poked at all this, I found
On Thu, Dec 10, 2020 at 08:21:14PM +0800, Li, Aubrey wrote:
> >>>
> >>> The field I would expect to decrease is SIS Domain Scanned -- the number
> >>> of runqueues that were examined -- but it's actually worse, and graphing
> >>> over time shows it's worse for the client thread counts.
On 2020/12/10 19:34, Mel Gorman wrote:
> On Thu, Dec 10, 2020 at 04:23:47PM +0800, Li, Aubrey wrote:
>>> I ran this patch with tbench on top of the schedstat patches that
>>> track SIS efficiency. The tracking adds overhead so it's not a perfect
>>> performance comparison but the expectation
On Thu, Dec 10, 2020 at 04:23:47PM +0800, Li, Aubrey wrote:
> > I ran this patch with tbench on top of the schedstat patches that
> > track SIS efficiency. The tracking adds overhead so it's not a perfect
> > performance comparison but the expectation would be that the patch reduces
> > the
Hi Mel,
On 2020/12/9 22:36, Mel Gorman wrote:
> On Wed, Dec 09, 2020 at 02:24:04PM +0800, Aubrey Li wrote:
>> Add an idle cpumask to track idle CPUs in the sched domain. Every time
>> a CPU enters idle, it is set in the idle cpumask to be a wakeup
>> target. If the CPU is not idle, it is
On 2020/12/9 21:09, Vincent Guittot wrote:
> On Wed, 9 Dec 2020 at 11:58, Li, Aubrey wrote:
>>
>> On 2020/12/9 16:15, Vincent Guittot wrote:
>>> On Wednesday, Dec 09, 2020 at 14:24:04 (+0800), Aubrey Li wrote:
Add an idle cpumask to track idle CPUs in the sched domain. Every time
a CPU enters
On Wed, Dec 09, 2020 at 02:24:04PM +0800, Aubrey Li wrote:
> Add an idle cpumask to track idle CPUs in the sched domain. Every time
> a CPU enters idle, it is set in the idle cpumask to be a wakeup
> target. If the CPU is not idle, it is cleared from the idle
> cpumask during the scheduler tick to
On Wed, 9 Dec 2020 at 11:58, Li, Aubrey wrote:
>
> On 2020/12/9 16:15, Vincent Guittot wrote:
> > On Wednesday, Dec 09, 2020 at 14:24:04 (+0800), Aubrey Li wrote:
> >> Add an idle cpumask to track idle CPUs in the sched domain. Every time
> >> a CPU enters idle, it is set in the idle cpumask to be a
On 2020/12/9 16:15, Vincent Guittot wrote:
> On Wednesday, Dec 09, 2020 at 14:24:04 (+0800), Aubrey Li wrote:
>> Add an idle cpumask to track idle CPUs in the sched domain. Every time
>> a CPU enters idle, it is set in the idle cpumask to be a wakeup
>> target. If the CPU is not idle, it is
On Wednesday, Dec 09, 2020 at 14:24:04 (+0800), Aubrey Li wrote:
> Add an idle cpumask to track idle CPUs in the sched domain. Every time
> a CPU enters idle, it is set in the idle cpumask to be a wakeup
> target. If the CPU is not idle, it is cleared from the idle
> cpumask during the scheduler tick
Add an idle cpumask to track idle CPUs in the sched domain. Every time
a CPU enters idle, it is set in the idle cpumask to be a wakeup
target. If the CPU is not idle, it is cleared from the idle
cpumask during the scheduler tick to rate-limit idle cpumask updates.
When a task wakes up to select an idle
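To make the scheme concrete, a rough sketch (helper names here,
including sds_idle_cpus(), are illustrative assumptions rather than
necessarily the patch's own; callers hold rcu_read_lock()):

	/* Idle entry path: mark this CPU as a wakeup target. */
	static void idle_cpumask_set(int cpu)
	{
		struct sched_domain_shared *sds;

		sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
		if (sds)
			cpumask_set_cpu(cpu, sds_idle_cpus(sds));
	}

	/* scheduler_tick() on a busy CPU: lazy clear, once per tick. */
	static void idle_cpumask_clear(int cpu)
	{
		struct sched_domain_shared *sds;

		sds = rcu_dereference(per_cpu(sd_llc_shared, cpu));
		if (sds)
			cpumask_clear_cpu(cpu, sds_idle_cpus(sds));
	}

The wakeup side would then scan cpumask_and(cpus, sds_idle_cpus(sds),
p->cpus_ptr) instead of the whole LLC span, which is where the
scan-cost saving is expected to come from.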