On Wed, Sep 11, 2019 at 12:47:34PM -0400, Vineeth Remanan Pillai wrote:
> > > So both of you are working on top of my 2 patches that deal with the
> > > fairness issue, but I had the feeling Tim's alternative patches[1] are
> > > simpler than mine and achieves the same result(after the force idle tag
> >
> > I think Julien's result show that my patches did not do as well as
> > your patches for fairness. Aubrey did some other testing with the same
> > conclusion. So I think keeping the forced idle time balanced is not
> > enough for maintaining fairness.
> >
> There are two main issues - vruntime comparison issue and the
> forced idle issue. coresched_idle thread patch is addressing
> the forced idle issue as scheduler is no longer overloading idle
> thread for forcing idle. If I understand correctly, Tim's patch
> also tries to fix the forced idle issue. On top of fixing forced
Er... I don't think so. Tim's patch, like mine, is meant to solve the
fairness issue; it doesn't attempt to address the forced idle issue.

> idle issue, we also need to fix that vruntime comparison issue
> and I think thats where Aaron's patch helps.
>
> I think comparing parent's runtime also will have issues once
> the task group has a lot more threads with different running
> patterns. One example is a task group with lot of active threads
> and a thread with fairly less activity. So when this less active
> thread is competing with a thread in another group, there is a
> chance that it loses continuously for a while until the other
> group catches up on its vruntime.

I actually think this is expected behaviour. Without core scheduling,
when deciding which task to run, we first decide which "se" to run from
the CPU's root level cfs runqueue and then go downwards. Let's call the
chosen se on the root level cfs runqueue the winner se. Then with core
scheduling, we also need to compare the two winner "se"s of the two
hyperthreads and choose the core wide winner "se".

>
> As discussed during LPC, probably start thinking along the lines
> of global vruntime or core wide vruntime to fix the vruntime
> comparison issue?

Core wide vruntime makes sense when there are multiple tasks of
different cgroups queued on the same core. e.g. when two tasks of
cgroupA and one task of cgroupB are queued on the same core, assume one
of cgroupA's tasks is on one hyperthread and its other task is on the
other hyperthread together with cgroupB's task. With my current
implementation or Tim's, cgroupA will get more time than cgroupB. If we
maintain core wide vruntime for cgroupA and cgroupB, we should be able
to maintain fairness between cgroups on this core. Tim proposed to
solve this problem by doing some kind of load balancing, if I'm not
mistaken; I haven't taken a look at that yet.