Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-21 Thread Vineeth Remanan Pillai
On Mon, Oct 14, 2019 at 5:57 AM Aaron Lu wrote: > > I now remembered why I used max(). > > Assume rq1 and rq2's min_vruntime are both at 2000 and the core wide > min_vruntime is also 2000. Also assume both runqueues are empty at the > moment. Then task t1 is queued to rq1 and runs for a long time
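
Below is a minimal standalone sketch (not the actual patch; types and names are simplified stand-ins) of the max()-based core wide min_vruntime reasoning quoted above: once t1 has run on rq1 for a long time, taking the max of the siblings' min_vruntime keeps the still-idle rq2 from dragging the core wide value backwards.

/* Standalone illustration only -- simplified types, not the actual patch. */
#include <stdio.h>

typedef unsigned long long u64;

struct rq { u64 min_vruntime; };

static u64 max_vruntime(u64 a, u64 b) { return a > b ? a : b; }

/*
 * Take the core wide min_vruntime as the max of the siblings' values, so a
 * sibling that sat idle at an old, small min_vruntime cannot drag the core
 * wide value backwards once it becomes busy again.
 */
static u64 core_wide_min_vruntime(struct rq *rq1, struct rq *rq2)
{
	return max_vruntime(rq1->min_vruntime, rq2->min_vruntime);
}

int main(void)
{
	struct rq rq1 = { 2000 }, rq2 = { 2000 };

	rq1.min_vruntime = 500000;	/* t1 queued on rq1 runs for a long time */

	printf("core wide min_vruntime = %llu\n",
	       core_wide_min_vruntime(&rq1, &rq2));	/* 500000, not 2000 */
	return 0;
}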

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-14 Thread Aaron Lu
On Sun, Oct 13, 2019 at 08:44:32AM -0400, Vineeth Remanan Pillai wrote: > On Fri, Oct 11, 2019 at 11:55 PM Aaron Lu wrote: > > > > > I don't think we need to do the normalization afterwards and it appears > > we are on the same page regarding core wide vruntime. Should be "we are not on the same

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-13 Thread Vineeth Remanan Pillai
On Fri, Oct 11, 2019 at 11:55 PM Aaron Lu wrote: > > I don't think we need to do the normalization afterwards and it appears > we are on the same page regarding core wide vruntime. > > The intent of my patch is to treat all the root level sched entities of > the two siblings as if they are in a

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-11 Thread Aaron Lu
On Fri, Oct 11, 2019 at 08:10:30AM -0400, Vineeth Remanan Pillai wrote: > > Thanks for the clarification. > > > > Yes, this is the initialization issue I mentioned before when core > > scheduling is initially enabled. rq1's vruntime is bumped the first time > > update_core_cfs_rq_min_vruntime() is

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-11 Thread Vineeth Remanan Pillai
> Thanks for the clarification. > > Yes, this is the initialization issue I mentioned before when core > scheduling is initially enabled. rq1's vruntime is bumped the first time > update_core_cfs_rq_min_vruntime() is called and if there are already > some tasks queued, new tasks queued on rq1 will

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-11 Thread Aaron Lu
On Fri, Oct 11, 2019 at 07:32:48AM -0400, Vineeth Remanan Pillai wrote: > > > The reason we need to do this is because new tasks that get created will > > > have a vruntime based on the new min_vruntime and old tasks will have it > > > based on the old min_vruntime > > > > I think this is

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-11 Thread Vineeth Remanan Pillai
> > The reason we need to do this is because new tasks that get created will > > have a vruntime based on the new min_vruntime and old tasks will have it > > based on the old min_vruntime > > I think this is expected behaviour. > I don't think this is the expected behavior. If we hadn't changed

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-11 Thread Aaron Lu
On Thu, Oct 10, 2019 at 10:29:47AM -0400, Vineeth Remanan Pillai wrote: > > I didn't see why we need to do this. > > > > We only need to have the root level sched entities' vruntime become core > wide since we will compare vruntime for them across hyperthreads. For > sched entities on sub

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-10 Thread Vineeth Remanan Pillai
> I didn't see why we need to do this. > > We only need to have the root level sched entities' vruntime become core > wide since we will compare vruntime for them across hyperthreads. For > sched entities on sub cfs_rqs, we never (at least, not now) compare their > vruntime outside their cfs_rqs. >
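
As a rough standalone illustration of the comparison described above, the sketch below expresses each sibling's root level entity vruntime relative to a shared core wide min_vruntime so the two hyperthreads can be compared; entities on sub cfs_rqs are untouched. All names and fields are simplified assumptions, not the actual patch.

/* Illustrative only: compare two siblings' root level entities by a
 * core-wide-relative vruntime. Simplified types, not the kernel's. */
#include <stdbool.h>
#include <stdio.h>

typedef long long s64;
typedef unsigned long long u64;

struct sched_entity { u64 vruntime; };

struct cfs_rq {
	u64 min_vruntime;	/* this sibling's own min_vruntime */
	u64 core_min_vruntime;	/* the core wide min_vruntime (shared) */
};

/* Root level vruntime expressed relative to the core wide min_vruntime,
 * so the two hyperthreads' values become comparable. */
static s64 core_vruntime(struct sched_entity *se, struct cfs_rq *cfs_rq)
{
	return (s64)(se->vruntime - cfs_rq->core_min_vruntime);
}

static bool prio_less_root(struct sched_entity *a, struct cfs_rq *cfs_a,
			   struct sched_entity *b, struct cfs_rq *cfs_b)
{
	/* smaller core-relative vruntime means it should run first */
	return core_vruntime(a, cfs_a) < core_vruntime(b, cfs_b);
}

int main(void)
{
	struct cfs_rq cfs1 = { .min_vruntime = 3000, .core_min_vruntime = 2000 };
	struct cfs_rq cfs2 = { .min_vruntime = 2000, .core_min_vruntime = 2000 };
	struct sched_entity se1 = { .vruntime = 3100 };
	struct sched_entity se2 = { .vruntime = 2050 };

	printf("sibling 2's root entity runs first: %d\n",
	       prio_less_root(&se2, &cfs2, &se1, &cfs1));	/* prints 1 */
	return 0;
}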

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-10 Thread Aaron Lu
On Wed, Oct 02, 2019 at 04:48:14PM -0400, Vineeth Remanan Pillai wrote: > On Mon, Sep 30, 2019 at 7:53 AM Vineeth Remanan Pillai > wrote: > > > > > > > Sorry, I misunderstood the fix and I did not initially see the core wide > > min_vruntime that you tried to maintain in the rq->core. This

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-10-02 Thread Vineeth Remanan Pillai
On Mon, Sep 30, 2019 at 7:53 AM Vineeth Remanan Pillai wrote: > > > > Sorry, I misunderstood the fix and I did not initially see the core wide > min_vruntime that you tried to maintain in the rq->core. This approach > seems reasonable. I think we can fix the potential starvation that you >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-30 Thread Julien Desfossez
> I've made an attempt in the following two patches to address > the load balancing of mismatched load between the siblings. > > It is applied on top of Aaron's patches: > - sched: Fix incorrect rq tagged as forced idle > - wrapper for cfs_rq->min_vruntime >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-30 Thread Vineeth Remanan Pillai
On Wed, Sep 18, 2019 at 6:16 PM Aubrey Li wrote: > > On Thu, Sep 19, 2019 at 4:41 AM Tim Chen wrote: > > > > On 9/17/19 6:33 PM, Aubrey Li wrote: > > > On Sun, Sep 15, 2019 at 10:14 PM Aaron Lu > > > wrote: > > > > >> > > >> And I have pushed Tim's branch to: > > >>

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-30 Thread Vineeth Remanan Pillai
On Thu, Sep 12, 2019 at 8:35 AM Aaron Lu wrote: > > > > I think comparing parent's runtime also will have issues once > > the task group has a lot more threads with different running > > patterns. One example is a task group with lot of active threads > > and a thread with fairly less activity.

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-25 Thread Aubrey Li
On Thu, Sep 26, 2019 at 1:24 AM Tim Chen wrote: > > On 9/24/19 7:40 PM, Aubrey Li wrote: > > On Sat, Sep 7, 2019 at 2:30 AM Tim Chen wrote: > >> +static inline s64 core_sched_imbalance_delta(int src_cpu, int dst_cpu, > >> + int src_sibling, int dst_sibling, > >> +

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-25 Thread Tim Chen
On 9/24/19 7:40 PM, Aubrey Li wrote: > On Sat, Sep 7, 2019 at 2:30 AM Tim Chen wrote: >> +static inline s64 core_sched_imbalance_delta(int src_cpu, int dst_cpu, >> + int src_sibling, int dst_sibling, >> + struct task_group *tg, u64 task_load) >> +{ >> +

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-24 Thread Aubrey Li
On Sat, Sep 7, 2019 at 2:30 AM Tim Chen wrote: > +static inline s64 core_sched_imbalance_delta(int src_cpu, int dst_cpu, > + int src_sibling, int dst_sibling, > + struct task_group *tg, u64 task_load) > +{ > + struct sched_entity *se, *se_sibling,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-18 Thread Aubrey Li
On Thu, Sep 19, 2019 at 4:41 AM Tim Chen wrote: > > On 9/17/19 6:33 PM, Aubrey Li wrote: > > On Sun, Sep 15, 2019 at 10:14 PM Aaron Lu > > wrote: > > >> > >> And I have pushed Tim's branch to: > >> https://github.com/aaronlu/linux coresched-v3-v5.1.5-test-tim > >> > >> Mine: > >>

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-18 Thread Tim Chen
On 9/4/19 6:44 PM, Julien Desfossez wrote: > + > +static void coresched_idle_worker_fini(struct rq *rq) > +{ > + if (rq->core_idle_task) { > + kthread_stop(rq->core_idle_task); > + rq->core_idle_task = NULL; > + } During testing, I have found access of

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-18 Thread Tim Chen
On 9/10/19 7:27 AM, Julien Desfossez wrote: > On 29-Aug-2019 04:38:21 PM, Peter Zijlstra wrote: >> On Thu, Aug 29, 2019 at 10:30:51AM -0400, Phil Auld wrote: >>> I think, though, that you were basically agreeing with me that the current >>> core scheduler does not close the holes, or am I reading

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-18 Thread Tim Chen
On 9/17/19 6:33 PM, Aubrey Li wrote: > On Sun, Sep 15, 2019 at 10:14 PM Aaron Lu wrote: >> >> And I have pushed Tim's branch to: >> https://github.com/aaronlu/linux coresched-v3-v5.1.5-test-tim >> >> Mine: >> https://github.com/aaronlu/linux coresched-v3-v5.1.5-test-core_vruntime Aubrey,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-17 Thread Aubrey Li
On Sun, Sep 15, 2019 at 10:14 PM Aaron Lu wrote: > > On Fri, Sep 13, 2019 at 07:12:52AM +0800, Aubrey Li wrote: > > On Thu, Sep 12, 2019 at 8:04 PM Aaron Lu wrote: > > > > > > On Wed, Sep 11, 2019 at 09:19:02AM -0700, Tim Chen wrote: > > > > On 9/11/19 7:02 AM, Aaron Lu wrote: > > > > I think

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-15 Thread Aaron Lu
On Fri, Sep 13, 2019 at 07:12:52AM +0800, Aubrey Li wrote: > On Thu, Sep 12, 2019 at 8:04 PM Aaron Lu wrote: > > > > On Wed, Sep 11, 2019 at 09:19:02AM -0700, Tim Chen wrote: > > > On 9/11/19 7:02 AM, Aaron Lu wrote: > > > I think Julien's result show that my patches did not do as well as > > >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-13 Thread Tim Chen
On 9/13/19 7:15 AM, Aaron Lu wrote: > On Thu, Sep 12, 2019 at 10:29:13AM -0700, Tim Chen wrote: > >> The better thing to do is to move one task from cgroupA to another core, >> that has only one cgroupA task so it can be paired up >> with that lonely cgroupA task. This will eliminate the forced
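
The sketch below is a standalone illustration of the placement idea described above: look for another core that currently runs exactly one task of the same cgroup (cookie), so the migrated task can pair with it instead of forcing a sibling idle. The structures and the scan are assumptions for illustration, not the real load balancing code.

/* Illustrative only: find a core whose siblings run exactly one task with a
 * matching cookie, so a migrated task can pair up there. */
#include <stdio.h>

#define NR_CORES 4

struct core {
	unsigned long cookie_running;	/* cookie of the task(s) on the core, 0 if idle */
	int nr_running;			/* tasks running across both siblings */
};

/* Return the index of a core where a task with @cookie would pair with a
 * lonely same-cookie task (one task running, same cookie), or -1. */
static int find_pairing_core(struct core cores[], unsigned long cookie)
{
	for (int i = 0; i < NR_CORES; i++)
		if (cores[i].nr_running == 1 && cores[i].cookie_running == cookie)
			return i;
	return -1;
}

int main(void)
{
	struct core cores[NR_CORES] = {
		{ .cookie_running = 7, .nr_running = 2 },	/* full */
		{ .cookie_running = 9, .nr_running = 1 },	/* lonely, other cgroup */
		{ .cookie_running = 7, .nr_running = 1 },	/* lonely, same cgroup */
		{ .cookie_running = 0, .nr_running = 0 },	/* idle */
	};

	printf("pairing core for cookie 7: %d\n", find_pairing_core(cores, 7)); /* 2 */
	return 0;
}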

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-13 Thread Aaron Lu
On Thu, Sep 12, 2019 at 10:29:13AM -0700, Tim Chen wrote: > On 9/12/19 5:35 AM, Aaron Lu wrote: > > On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote: > > > > > core wide vruntime makes sense when there are multiple tasks of > > different cgroups queued on the same core.

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-13 Thread Aaron Lu
On Thu, Sep 12, 2019 at 10:05:43AM -0700, Tim Chen wrote: > On 9/12/19 5:04 AM, Aaron Lu wrote: > > > Well, I have done following tests: > > 1 Julien's test script: https://paste.debian.net/plainh/834cf45c > > 2 start two tagged will-it-scale/page_fault1, see how each performs; > > 3 Aubrey's

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-12 Thread Aubrey Li
On Thu, Sep 12, 2019 at 8:04 PM Aaron Lu wrote: > > On Wed, Sep 11, 2019 at 09:19:02AM -0700, Tim Chen wrote: > > On 9/11/19 7:02 AM, Aaron Lu wrote: > > I think Julien's result show that my patches did not do as well as > > your patches for fairness. Aubrey did some other testing with the same >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-12 Thread Tim Chen
On 9/12/19 5:35 AM, Aaron Lu wrote: > On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote: > > core wide vruntime makes sense when there are multiple tasks of > different cgroups queued on the same core. e.g. when there are two > tasks of cgroupA and one task of cgroupB are

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-12 Thread Tim Chen
On 9/12/19 5:04 AM, Aaron Lu wrote: > Well, I have done following tests: > 1 Julien's test script: https://paste.debian.net/plainh/834cf45c > 2 start two tagged will-it-scale/page_fault1, see how each performs; > 3 Aubrey's mysql test: https://github.com/aubreyli/coresched_bench.git > > They all

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-12 Thread Aaron Lu
On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote: > > > So both of you are working on top of my 2 patches that deal with the > > > fairness issue, but I had the feeling Tim's alternative patches[1] are > > > simpler than mine and achieves the same result(after the force idle

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-12 Thread Aaron Lu
On Wed, Sep 11, 2019 at 09:19:02AM -0700, Tim Chen wrote: > On 9/11/19 7:02 AM, Aaron Lu wrote: > > Hi Tim & Julien, > > > > On Fri, Sep 06, 2019 at 11:30:20AM -0700, Tim Chen wrote: > >> On 8/7/19 10:10 AM, Tim Chen wrote: > >> > >>> 3) Load balancing between CPU cores > >>>

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-11 Thread Vineeth Remanan Pillai
> > So both of you are working on top of my 2 patches that deal with the > > fairness issue, but I had the feeling Tim's alternative patches[1] are > > simpler than mine and achieves the same result(after the force idle tag > > I think Julien's result show that my patches did not do as well as >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-11 Thread Tim Chen
On 9/11/19 7:02 AM, Aaron Lu wrote: > Hi Tim & Julien, > > On Fri, Sep 06, 2019 at 11:30:20AM -0700, Tim Chen wrote: >> On 8/7/19 10:10 AM, Tim Chen wrote: >> >>> 3) Load balancing between CPU cores >>> --- >>> Say if one CPU core's sibling threads get forced idled

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-11 Thread Aaron Lu
Hi Tim & Julien, On Fri, Sep 06, 2019 at 11:30:20AM -0700, Tim Chen wrote: > On 8/7/19 10:10 AM, Tim Chen wrote: > > > 3) Load balancing between CPU cores > > --- > > Say if one CPU core's sibling threads get forced idled > > a lot as it has mostly incompatible

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-10 Thread Julien Desfossez
On 29-Aug-2019 04:38:21 PM, Peter Zijlstra wrote: > On Thu, Aug 29, 2019 at 10:30:51AM -0400, Phil Auld wrote: > > I think, though, that you were basically agreeing with me that the current > > core scheduler does not close the holes, or am I reading that wrong. > > Agreed; the missing bits for

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-06 Thread Tim Chen
On 9/4/19 6:44 PM, Julien Desfossez wrote: >@@ -3853,7 +3880,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, >struct rq_flags *rf) > goto done; > } > >- if (!is_idle_task(p)) >+ if

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-06 Thread Tim Chen
On 8/7/19 10:10 AM, Tim Chen wrote: > 3) Load balancing between CPU cores > --- > Say if one CPU core's sibling threads get forced idled > a lot as it has mostly incompatible tasks between the siblings, > moving the incompatible load to other cores and pulling >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-09-04 Thread Julien Desfossez
> 1) Unfairness between the sibling threads > - > One sibling thread could be suppressing and force idling > the sibling thread disproportionately, resulting in > the force idled CPU not getting to run and stalling tasks on > the suppressed CPU. > > Status: > i)

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-29 Thread Peter Zijlstra
On Thu, Aug 29, 2019 at 10:30:51AM -0400, Phil Auld wrote: > On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote: > > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: > > > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > > > > > > And given MDS, I'm still not

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-29 Thread Phil Auld
On Wed, Aug 28, 2019 at 06:01:14PM +0200 Peter Zijlstra wrote: > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: > > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > > > > And given MDS, I'm still not entirely convinced it all makes sense. If > > > it were just L1TF,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Tim Chen
On 8/28/19 9:01 AM, Peter Zijlstra wrote: > On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: >> On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > >> The current core scheduler implementation, I believe, still has >> (theoretical?) >> holes involving interrupts, once/if

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Peter Zijlstra
On Wed, Aug 28, 2019 at 08:59:21AM -0700, Tim Chen wrote: > On 8/27/19 2:50 PM, Peter Zijlstra wrote: > > On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote: > >> Apple have provided a sysctl that allows applications to indicate that > >> specific threads should make use of core

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Peter Zijlstra
On Wed, Aug 28, 2019 at 11:30:34AM -0400, Phil Auld wrote: > On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > > And given MDS, I'm still not entirely convinced it all makes sense. If > > it were just L1TF, then yes, but now... > > I was thinking MDS is really the reason for this.

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Tim Chen
On 8/27/19 2:50 PM, Peter Zijlstra wrote: > On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote: >> Apple have provided a sysctl that allows applications to indicate that >> specific threads should make use of core isolation while allowing >> the rest of the system to make use of

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-28 Thread Phil Auld
On Tue, Aug 27, 2019 at 11:50:35PM +0200 Peter Zijlstra wrote: > On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote: > > Apple have provided a sysctl that allows applications to indicate that > > specific threads should make use of core isolation while allowing > > the rest of the

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-27 Thread Aubrey Li
On Wed, Aug 28, 2019 at 5:14 AM Matthew Garrett wrote: > > Apple have provided a sysctl that allows applications to indicate that > specific threads should make use of core isolation while allowing > the rest of the system to make use of SMT, and browsers (Safari, Firefox > and Chrome, at least)

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-27 Thread Peter Zijlstra
On Tue, Aug 27, 2019 at 10:14:17PM +0100, Matthew Garrett wrote: > Apple have provided a sysctl that allows applications to indicate that > specific threads should make use of core isolation while allowing > the rest of the system to make use of SMT, and browsers (Safari, Firefox > and Chrome,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-27 Thread Matthew Garrett
Apple have provided a sysctl that allows applications to indicate that specific threads should make use of core isolation while allowing the rest of the system to make use of SMT, and browsers (Safari, Firefox and Chrome, at least) are now making use of this. Trying to do something similar

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-15 Thread Aaron Lu
On Thu, Aug 15, 2019 at 06:09:28PM +0200, Dario Faggioli wrote: > On Wed, 2019-08-07 at 10:10 -0700, Tim Chen wrote: > > On 8/7/19 1:58 AM, Dario Faggioli wrote: > > > > > Since I see that, in this thread, there are various patches being > > > proposed and discussed... should I rerun my

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-15 Thread Dario Faggioli
On Wed, 2019-08-07 at 10:10 -0700, Tim Chen wrote: > On 8/7/19 1:58 AM, Dario Faggioli wrote: > > > Since I see that, in this thread, there are various patches being > > proposed and discussed... should I rerun my benchmarks with them > > applied? If yes, which ones? And is there, by any chance,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-12 Thread Aaron Lu
On 2019/8/12 23:38, Vineeth Remanan Pillai wrote: >> I have two other small changes that I think are worth sending out. >> >> The first simplifies logic in pick_task() and the 2nd avoids task pick all >> over again when max is preempted. I also refined the previous hack patch to >> make schedule

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-12 Thread Vineeth Remanan Pillai
> I have two other small changes that I think are worth sending out. > > The first simplifies logic in pick_task() and the 2nd avoids task pick all > over again when max is preempted. I also refined the previous hack patch to > make schedule always happen only for root cfs rq. Please see below for >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-10 Thread Aaron Lu
On Thu, Aug 08, 2019 at 09:39:45AM -0700, Tim Chen wrote: > On 8/8/19 5:55 AM, Aaron Lu wrote: > > On Mon, Aug 05, 2019 at 08:55:28AM -0700, Tim Chen wrote: > >> On 8/2/19 8:37 AM, Julien Desfossez wrote: > >>> We tested both Aaron's and Tim's patches and here are our results. > > > > > diff

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-10 Thread Aaron Lu
On Thu, Aug 08, 2019 at 02:42:57PM -0700, Tim Chen wrote: > On 8/8/19 10:27 AM, Tim Chen wrote: > > On 8/7/19 11:47 PM, Aaron Lu wrote: > >> On Tue, Aug 06, 2019 at 02:19:57PM -0700, Tim Chen wrote: > >>> +void account_core_idletime(struct task_struct *p, u64 exec) > >>> +{ > >>> + const struct

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-08 Thread Tim Chen
On 8/8/19 10:27 AM, Tim Chen wrote: > On 8/7/19 11:47 PM, Aaron Lu wrote: >> On Tue, Aug 06, 2019 at 02:19:57PM -0700, Tim Chen wrote: >>> +void account_core_idletime(struct task_struct *p, u64 exec) >>> +{ >>> + const struct cpumask *smt_mask; >>> + struct rq *rq; >>> + bool force_idle,

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-08 Thread Tim Chen
On 8/7/19 11:47 PM, Aaron Lu wrote: > On Tue, Aug 06, 2019 at 02:19:57PM -0700, Tim Chen wrote: >> +void account_core_idletime(struct task_struct *p, u64 exec) >> +{ >> +const struct cpumask *smt_mask; >> +struct rq *rq; >> +bool force_idle, refill; >> +int i, cpu; >> + >> +rq

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-08 Thread Tim Chen
On 8/8/19 5:55 AM, Aaron Lu wrote: > On Mon, Aug 05, 2019 at 08:55:28AM -0700, Tim Chen wrote: >> On 8/2/19 8:37 AM, Julien Desfossez wrote: >>> We tested both Aaron's and Tim's patches and here are our results. > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c > index

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-08 Thread Aaron Lu
On Mon, Aug 05, 2019 at 08:55:28AM -0700, Tim Chen wrote: > On 8/2/19 8:37 AM, Julien Desfossez wrote: > > We tested both Aaron's and Tim's patches and here are our results. > > > > Test setup: > > - 2 1-thread sysbench, one running the cpu benchmark, the other one the > > mem benchmark > > -

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-08 Thread Aaron Lu
On Tue, Aug 06, 2019 at 02:19:57PM -0700, Tim Chen wrote: > +void account_core_idletime(struct task_struct *p, u64 exec) > +{ > + const struct cpumask *smt_mask; > + struct rq *rq; > + bool force_idle, refill; > + int i, cpu; > + > + rq = task_rq(p); > + if

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-07 Thread Tim Chen
On 8/7/19 1:58 AM, Dario Faggioli wrote: > So, here comes my question: I've done a benchmarking campaign (yes, > I'll post numbers soon) using this branch: > > https://github.com/digitalocean/linux-coresched.git > vpillai/coresched-v3-v5.1.5-test >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-07 Thread Dario Faggioli
Hello everyone, This is Dario, from SUSE. I'm also interested in core-scheduling, and using it in virtualization use cases. Just for context, I've been working in virt for a few years, mostly on Xen, but I've done Linux stuff before, and I am getting back into it. For now, I've been looking at the

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Tim Chen
On 8/6/19 10:12 AM, Peter Zijlstra wrote: >> I'm wondering if something simpler will work. It is easier to maintain >> fairness >> between the CPU threads. A simple scheme may be if the force idle deficit >> on a CPU thread exceeds a threshold compared to its sibling, we will >> bias in
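
A minimal standalone sketch of the simpler scheme described above: track the forced-idle time each hyperthread has accumulated and bias the next pick toward a thread whose deficit exceeds its sibling's by some threshold. The names and the threshold value are assumptions, not Tim's patch.

/* Illustrative only -- not the actual patch. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned long long u64;

#define FORCE_IDLE_BIAS_THRESHOLD_NS	4000000ULL	/* 4ms, arbitrary */

struct smt_thread {
	u64 force_idle_ns;	/* time this thread spent forced idle */
};

/* Bias selection toward @t when its forced-idle deficit over its sibling
 * exceeds the threshold, to claw back fairness between the two threads. */
static bool should_bias_toward(struct smt_thread *t, struct smt_thread *sibling)
{
	return t->force_idle_ns > sibling->force_idle_ns + FORCE_IDLE_BIAS_THRESHOLD_NS;
}

int main(void)
{
	struct smt_thread cpu0 = { .force_idle_ns = 10000000 };
	struct smt_thread cpu1 = { .force_idle_ns =  1000000 };

	printf("bias toward cpu0: %d\n", should_bias_toward(&cpu0, &cpu1)); /* 1 */
	printf("bias toward cpu1: %d\n", should_bias_toward(&cpu1, &cpu0)); /* 0 */
	return 0;
}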

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Peter Zijlstra
On Tue, Aug 06, 2019 at 10:03:29AM -0700, Tim Chen wrote: > On 8/5/19 8:24 PM, Aaron Lu wrote: > > > I've been thinking if we should consider core wide tenant fairness? > > > > Let's say there are 3 tasks on 2 threads' rq of the same core, 2 tasks > > (e.g. A1, A2) belong to tenant A and the 3rd

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Tim Chen
On 8/5/19 8:24 PM, Aaron Lu wrote: > I've been thinking if we should consider core wide tenant fairness? > > Let's say there are 3 tasks on 2 threads' rq of the same core, 2 tasks > (e.g. A1, A2) belong to tenant A and the 3rd B1 belongs to another tenant > B. Assume A1 and B1 are queued on the

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Vineeth Remanan Pillai
> I think tenant will have per core weight, similar to sched entity's per > cpu weight. The tenant's per core weight could derive from its > corresponding taskgroup's per cpu sched entities' weight(sum them up > perhaps). Tenant with higher weight will have its core wide vruntime > advance slower
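
A small standalone sketch of the idea quoted above, under its stated assumptions: a tenant's per core weight is derived by summing its task group's per cpu entity weights on that core, and its core wide vruntime advances more slowly the larger that weight is (in the spirit of calc_delta_fair()). Names and constants are illustrative only.

/* Illustrative only: weight-scaled advance of a tenant's core wide vruntime. */
#include <stdio.h>

typedef unsigned long long u64;

#define NICE_0_LOAD 1024ULL

struct tenant {
	u64 core_weight;	/* sum of the taskgroup's per cpu se weights on this core */
	u64 core_vruntime;	/* tenant's core wide vruntime */
};

/* Higher core weight => core wide vruntime advances slower for the same
 * amount of real execution time, so the heavier tenant gets more CPU. */
static void tenant_advance(struct tenant *t, u64 delta_exec)
{
	t->core_vruntime += delta_exec * NICE_0_LOAD / t->core_weight;
}

int main(void)
{
	struct tenant a = { .core_weight = 2048 };	/* e.g. two NICE_0 entities */
	struct tenant b = { .core_weight = 1024 };	/* one NICE_0 entity */

	tenant_advance(&a, 1000000);
	tenant_advance(&b, 1000000);

	printf("A: %llu  B: %llu\n", a.core_vruntime, b.core_vruntime);
	return 0;	/* A's vruntime advanced half as fast as B's */
}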

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Vineeth Remanan Pillai
> > What accounting in particular is upset? Is it things like > select_idle_sibling() that thinks the thread is idle and tries to place > tasks there? > The major issue that we saw was that certain workloads cause the idle cpu to never wake up and schedule again even when there are runnable threads in

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 10:41:25PM +0800 Aaron Lu wrote: > On 2019/8/6 22:17, Phil Auld wrote: > > On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote: > >> On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: > >>> Hi, > >>> > >>> On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Aaron Lu
On 2019/8/6 22:17, Phil Auld wrote: > On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote: >> On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: >>> Hi, >>> >>> On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: We tested both Aaron's and Tim's patches and here are

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Phil Auld
On Tue, Aug 06, 2019 at 09:54:01PM +0800 Aaron Lu wrote: > On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: > > Hi, > > > > On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: > > > We tested both Aaron's and Tim's patches and here are our results. > > > > > > Test setup:

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Peter Zijlstra
On Tue, Aug 06, 2019 at 08:24:17AM -0400, Vineeth Remanan Pillai wrote: > Peter's rebalance logic actually takes care of most of the runq > imbalance caused > due to cookie tagging. What we have found from our testing is that the fairness issue > is > caused mostly by a Hyperthread going idle and not

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Aaron Lu
On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote: > Hi, > > On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: > > We tested both Aaron's and Tim's patches and here are our results. > > > > Test setup: > > - 2 1-thread sysbench, one running the cpu benchmark, the other one

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Aaron Lu
On Tue, Aug 06, 2019 at 08:24:17AM -0400, Vineeth Remanan Pillai wrote: > > > > > > I also think a way to make fairness per cookie per core, is this what you > > > want to propose? > > > > Yes, that's what I meant. > > I think that would hurt some kind of workloads badly, especially if > one

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Vineeth Remanan Pillai
> > > > I also think a way to make fairness per cookie per core, is this what you > > want to propose? > > Yes, that's what I meant. I think that would hurt some kinds of workloads badly, especially if one tenant has way more tasks than the other. A tenant with more tasks on the same core might

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Aaron Lu
On 2019/8/6 14:56, Aubrey Li wrote: > On Tue, Aug 6, 2019 at 11:24 AM Aaron Lu wrote: >> I've been thinking if we should consider core wide tenant fairness? >> >> Let's say there are 3 tasks on 2 threads' rq of the same core, 2 tasks >> (e.g. A1, A2) belong to tenant A and the 3rd B1 belongs to

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-06 Thread Aubrey Li
On Tue, Aug 6, 2019 at 11:24 AM Aaron Lu wrote: > > On Mon, Aug 05, 2019 at 08:55:28AM -0700, Tim Chen wrote: > > On 8/2/19 8:37 AM, Julien Desfossez wrote: > > > We tested both Aaron's and Tim's patches and here are our results. > > > > > > Test setup: > > > - 2 1-thread sysbench, one running

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-05 Thread Aaron Lu
On Mon, Aug 05, 2019 at 08:55:28AM -0700, Tim Chen wrote: > On 8/2/19 8:37 AM, Julien Desfossez wrote: > > We tested both Aaron's and Tim's patches and here are our results. > > > > Test setup: > > - 2 1-thread sysbench, one running the cpu benchmark, the other one the > > mem benchmark > > -

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-05 Thread Phil Auld
Hi, On Fri, Aug 02, 2019 at 11:37:15AM -0400 Julien Desfossez wrote: > We tested both Aaron's and Tim's patches and here are our results. > > Test setup: > - 2 1-thread sysbench, one running the cpu benchmark, the other one the > mem benchmark > - both started at the same time > - both are

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-05 Thread Tim Chen
On 8/2/19 8:37 AM, Julien Desfossez wrote: > We tested both Aaron's and Tim's patches and here are our results. > > Test setup: > - 2 1-thread sysbench, one running the cpu benchmark, the other one the > mem benchmark > - both started at the same time > - both are pinned on the same core (2

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-08-02 Thread Julien Desfossez
We tested both Aaron's and Tim's patches and here are our results. Test setup: - 2 1-thread sysbench, one running the cpu benchmark, the other one the mem benchmark - both started at the same time - both are pinned on the same core (2 hardware threads) - 10 30-seconds runs - test script:

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-30 Thread Li, Aubrey
On 2019/7/26 23:21, Julien Desfossez wrote: > On 25-Jul-2019 10:30:03 PM, Aaron Lu wrote: >> >> I tried a different approach based on vruntime with 3 patches following. > [...] > > We have experimented with this new patchset and indeed the fairness is > now much better. Interactive tasks with v3

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-26 Thread Tim Chen
On 7/26/19 8:21 AM, Julien Desfossez wrote: > On 25-Jul-2019 10:30:03 PM, Aaron Lu wrote: >> >> I tried a different approach based on vruntime with 3 patches following. > [...] > > We have experimented with this new patchset and indeed the fairness is > now much better. Interactive tasks with v3

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-26 Thread Julien Desfossez
On 25-Jul-2019 10:30:03 PM, Aaron Lu wrote: > > I tried a different approach based on vruntime with 3 patches following. [...] We have experimented with this new patchset and indeed the fairness is now much better. Interactive tasks with v3 were completely starving when there were cpu-intensive

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-25 Thread Li, Aubrey
On 2019/7/25 22:30, Aaron Lu wrote: > On Mon, Jul 22, 2019 at 06:26:46PM +0800, Aubrey Li wrote: >> The granularity period of util_avg seems too large to decide task priority >> during pick_task(), at least it is in my case, cfs_prio_less() always picked >> core max task, so pick_task() eventually

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-25 Thread Aaron Lu
On Mon, Jul 22, 2019 at 06:26:46PM +0800, Aubrey Li wrote: > The granularity period of util_avg seems too large to decide task priority > during pick_task(), at least it is in my case, cfs_prio_less() always picked > core max task, so pick_task() eventually picked idle, which causes this change >

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-22 Thread Aubrey Li
On Mon, Jul 22, 2019 at 6:43 PM Aaron Lu wrote: > > On 2019/7/22 18:26, Aubrey Li wrote: > > The granularity period of util_avg seems too large to decide task priority > > during pick_task(), at least it is in my case, cfs_prio_less() always picked > > core max task, so pick_task() eventually

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-22 Thread Aaron Lu
On 2019/7/22 18:26, Aubrey Li wrote: > The granularity period of util_avg seems too large to decide task priority > during pick_task(), at least it is in my case, cfs_prio_less() always picked > core max task, so pick_task() eventually picked idle, which causes this change > not very helpful for

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-22 Thread Aubrey Li
On Thu, Jul 18, 2019 at 6:07 PM Aaron Lu wrote: > > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: > > On 17-Jun-2019 10:51:27 AM, Aubrey Li wrote: > > > The result looks still unfair, and particularly, the variance is too high, > > > > I just want to confirm that I am also

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-19 Thread Tim Chen
On 7/18/19 10:52 PM, Aaron Lu wrote: > On Thu, Jul 18, 2019 at 04:27:19PM -0700, Tim Chen wrote: >> >> >> On 7/18/19 3:07 AM, Aaron Lu wrote: >>> On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: >> >>> >>> With the below patch on top of v3 that makes use of util_avg to decide >>>

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-19 Thread Aubrey Li
On Fri, Jul 19, 2019 at 1:53 PM Aaron Lu wrote: > > On Thu, Jul 18, 2019 at 04:27:19PM -0700, Tim Chen wrote: > > > > > > On 7/18/19 3:07 AM, Aaron Lu wrote: > > > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: > > > > > > > > With the below patch on top of v3 that makes use of

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-18 Thread Aaron Lu
On Thu, Jul 18, 2019 at 04:27:19PM -0700, Tim Chen wrote: > > > On 7/18/19 3:07 AM, Aaron Lu wrote: > > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: > > > > > With the below patch on top of v3 that makes use of util_avg to decide > > which task win, I can do all 8 steps

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-18 Thread Tim Chen
On 7/18/19 3:07 AM, Aaron Lu wrote: > On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: > > With the below patch on top of v3 that makes use of util_avg to decide > which task win, I can do all 8 steps and the final scores of the 2 > workloads are: 1796191 and 2199586. The
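
One plausible reading of the util_avg-based comparison mentioned above, as a standalone sketch: the candidate with the larger PELT util_avg wins the core wide pick. The direction of the comparison and the names are assumptions, not the actual patch; the slow movement of util_avg is also what makes the comparison coarse, as discussed elsewhere in the thread.

/* Illustrative only: a cfs_prio_less()-style comparison based on util_avg.
 * The direction of the comparison is an assumption, not the actual patch. */
#include <stdbool.h>
#include <stdio.h>

struct sched_avg { unsigned long util_avg; };
struct task { struct sched_avg avg; };

/* Return true if @a should be considered lower priority than @b for the
 * core wide pick; here the task with less accumulated utilization loses.
 * Because util_avg moves slowly (PELT), this can keep picking the same
 * "max" task for long stretches. */
static bool cfs_prio_less_util(struct task *a, struct task *b)
{
	return a->avg.util_avg < b->avg.util_avg;
}

int main(void)
{
	struct task t1 = { { 700 } }, t2 = { { 300 } };

	printf("t2 loses to t1: %d\n", cfs_prio_less_util(&t2, &t1));	/* 1 */
	return 0;
}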

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-07-18 Thread Aaron Lu
On Wed, Jun 19, 2019 at 02:33:02PM -0400, Julien Desfossez wrote: > On 17-Jun-2019 10:51:27 AM, Aubrey Li wrote: > > The result looks still unfair, and particularly, the variance is too high, > > I just want to confirm that I am also seeing the same issue with a > similar setup. I also tried with

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-19 Thread Julien Desfossez
On 17-Jun-2019 10:51:27 AM, Aubrey Li wrote: > The result looks still unfair, and particularly, the variance is too high, I just want to confirm that I am also seeing the same issue with a similar setup. I also tried with the priority boost fix we previously posted, the results are slightly

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-16 Thread Aubrey Li
On Thu, Jun 13, 2019 at 11:22 AM Julien Desfossez wrote: > > On 12-Jun-2019 05:03:08 PM, Subhra Mazumdar wrote: > > > > On 6/12/19 9:33 AM, Julien Desfossez wrote: > > >After reading more traces and trying to understand why only untagged > > >tasks are starving when there are cpu-intensive tasks

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-13 Thread Subhra Mazumdar
On 6/12/19 9:33 AM, Julien Desfossez wrote: After reading more traces and trying to understand why only untagged tasks are starving when there are cpu-intensive tasks running on the same set of CPUs, we noticed a difference in behavior in ‘pick_task’. In the case where ‘core_cookie’ is 0, we

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-13 Thread Julien Desfossez
On 12-Jun-2019 05:03:08 PM, Subhra Mazumdar wrote: > > On 6/12/19 9:33 AM, Julien Desfossez wrote: > >After reading more traces and trying to understand why only untagged > >tasks are starving when there are cpu-intensive tasks running on the > >same set of CPUs, we noticed a difference in

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-12 Thread Julien Desfossez
After reading more traces and trying to understand why only untagged tasks are starving when there are cpu-intensive tasks running on the same set of CPUs, we noticed a difference in behavior in ‘pick_task’. In the case where ‘core_cookie’ is 0, we are supposed to only prefer the tagged task if
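
The sketch below is a heavily simplified standalone illustration of the cookie-matching rule behind the pick_task behaviour being traced above: a sibling may only run a task whose cookie matches the core wide cookie, otherwise it is forced idle. It is not the v3 pick_task() itself, and the names are illustrative.

/* Illustrative only: the basic cookie-matching rule core scheduling enforces
 * when picking for a sibling -- simplified, not the v3 pick_task() itself. */
#include <stdio.h>

typedef unsigned long cookie_t;

struct task { const char *comm; cookie_t core_cookie; };

static struct task idle_task = { "idle", 0 };

/* Only let @candidate run on the sibling if its cookie matches the cookie of
 * the task currently selected core wide (the "max"); otherwise force idle.
 * A cookie of 0 means the task is untagged. */
static struct task *sibling_pick(struct task *candidate, cookie_t core_cookie)
{
	if (candidate && candidate->core_cookie == core_cookie)
		return candidate;
	return &idle_task;
}

int main(void)
{
	struct task tagged   = { "tagged",   42 };
	struct task untagged = { "untagged",  0 };

	/* core wide cookie is 42 (a tagged task is the max): the untagged
	 * candidate is not compatible and the sibling is forced idle. */
	printf("%s\n", sibling_pick(&untagged, 42)->comm);	/* idle */
	printf("%s\n", sibling_pick(&tagged,   42)->comm);	/* tagged */
	return 0;
}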

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-12 Thread Julien Desfossez
> The data on my side looks good with CORESCHED_STALL_FIX = true. Thank you for testing this fix, I'm glad it works for this use-case as well. We will be posting another (simpler) version today, stay tuned :-) Julien

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-11 Thread Li, Aubrey
On 2019/6/6 23:26, Julien Desfossez wrote: > As mentioned above, we have come up with a fix for the long starvation > of untagged interactive threads competing for the same core with tagged > threads at the same priority. The idea is to detect the stall and boost > the stalling threads priority so

Re: [RFC PATCH v3 00/16] Core scheduling v3

2019-06-06 Thread Julien Desfossez
On 31-May-2019 05:08:16 PM, Julien Desfossez wrote: > > My first reaction is: when shell wakes up from sleep, it will > > fork date. If the script is untagged and those workloads are > > tagged and all available cores are already running workload > > threads, the forked date can lose to the
