On Mon, Sep 19, 2016 at 11:04:49AM -0700, Joonwoo Park wrote:
> On Mon, Sep 19, 2016 at 10:21:58AM +0200, Peter Zijlstra wrote:
> > On Fri, Sep 16, 2016 at 06:28:51PM -0700, Joonwoo Park wrote:
> > > From: Srivatsa Vaddagiri <va...@codeaurora.org>
> > > 
> > > SCHED_HRTICK feature is useful to preempt SCHED_FAIR tasks on-the-dot
> > 
> > Right, but I always found the overhead of the thing too high to be
> > really useful.
> > 
> > How come you're using this?
> 
> This patch has been in our internal tree for ages, so unfortunately I
> cannot find the actual use case or history.
> But I guess it was about excessive latency when there are a number of
> CPU-bound tasks running on one CPU but on different cfs_rqs, with
> CONFIG_HZ = 100.
> 
> Here is how I recreated it:
> 
> * run 4 CPU hogs in the same cgroup [1] :
>  dd-960   [000] d..3   110.651060: sched_switch: prev_comm=dd prev_pid=960 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=959 next_prio=120
>  dd-959   [000] d..3   110.652566: sched_switch: prev_comm=dd prev_pid=959 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=961 next_prio=120
>  dd-961   [000] d..3   110.654072: sched_switch: prev_comm=dd prev_pid=961 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=962 next_prio=120
>  dd-962   [000] d..3   110.655578: sched_switch: prev_comm=dd prev_pid=962 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=960 next_prio=120
>   Each task is preempted after a 1.5ms slice by the hrtick.
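(The 1.5ms slice presumably falls out of sched_slice(): assuming the
default sched_latency of 6ms and 4 runnable tasks on the one cfs_rq,
each task gets roughly 6ms / 4 = 1.5ms.  sched_latency is scaled up on
machines with more CPUs, so the exact figure can vary.)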
> 
> * run 4 CPU hogs in 4 different cgroups [2] :
>  dd-964   [000] d..3    24.169873: sched_switch: prev_comm=dd prev_pid=964 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=966 next_prio=120
>  dd-966   [000] d..3    24.179873: sched_switch: prev_comm=dd prev_pid=966 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=965 next_prio=120
>  dd-965   [000] d..3    24.189873: sched_switch: prev_comm=dd prev_pid=965 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=967 next_prio=120
>  dd-967   [000] d..3    24.199873: sched_switch: prev_comm=dd prev_pid=967 
> prev_prio=120 prev_state=R+ ==> next_comm=dd next_pid=964 next_prio=120
>   Each task is preempted only every 10ms by the scheduler tick, so
> all tasks suffer from 40ms preemption latency.
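(Rough arithmetic: with CONFIG_HZ = 100 the tick is 10ms, and each of
the 4 hogs runs for a full tick before being preempted, so the
round-robin period each task sees is 4 * 10ms = 40ms.)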
> 
> [1] : 
>  dd if=/dev/zero of=/dev/zero &
Ugh..  of=/dev/null instead.

>  dd if=/dev/zero of=/dev/zero &
>  dd if=/dev/zero of=/dev/zero &
>  dd if=/dev/zero of=/dev/zero &
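
Also note the traces above have everything on CPU 0, so on an SMP
machine the hogs presumably need to be pinned to a single CPU for
both [1] and the [2] recipe below to show the effect, e.g. something
like:

 taskset -c 0 dd if=/dev/zero of=/dev/null &

(taskset -c 0 is only an illustration; any affinity mechanism does.)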
> 
> [2] :
>  mount -t cgroup -o cpu cpu /sys/fs/cgroup
>  mkdir /sys/fs/cgroup/grp1
>  mkdir /sys/fs/cgroup/grp2
>  mkdir /sys/fs/cgroup/grp3
>  mkdir /sys/fs/cgroup/grp4
>  dd if=/dev/zero of=/dev/zero &
>  echo $! > /sys/fs/cgroup/grp1/tasks 
>  dd if=/dev/zero of=/dev/zero &
>  echo $! > /sys/fs/cgroup/grp2/tasks 
>  dd if=/dev/zero of=/dev/zero &
>  echo $! > /sys/fs/cgroup/grp3/tasks 
>  dd if=/dev/zero of=/dev/zero &
>  echo $! > /sys/fs/cgroup/grp4/tasks 
> 
> I could confirm this patch makes the latter behave the same as the
> former in terms of preemption latency.
> 
> > 
> > 
> > >  joonwoop: Do we also need to update or remove the if-statement
> > >  inside hrtick_update()?
> > 
> > >  I guess not, because hrtick_update() doesn't want to start the
> > >  hrtick when the cfs_rq has a large number of nr_running, where the
> > >  slice is longer than sched_latency.
> > 
> > Right, you want that to match with whatever sched_slice() does.
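
For reference (going from memory of the fair.c of that era, so treat
this as a sketch rather than verbatim source), the check in question
is roughly:

 static void hrtick_update(struct rq *rq)
 {
         struct task_struct *curr = rq->curr;

         /* hrtick only matters when enabled and for CFS tasks */
         if (!hrtick_enabled(rq) || curr->sched_class != &fair_sched_class)
                 return;

         /*
          * Only arm the hrtimer when few enough entities are runnable
          * that the slice stays below sched_latency; with more tasks
          * the regular tick is good enough.
          */
         if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency)
                 hrtick_start_fair(rq, curr);
 }

i.e. the check keys off the same per-cfs_rq nr_running that
sched_slice() looks at, which I read as the "match" above.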
> 
> Cool.  Thank you!
> 
> Thanks,
> Joonwoo
> 
> > 
> > > +++ b/kernel/sched/fair.c
> > > @@ -4458,7 +4458,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
> > >  
> > >   WARN_ON(task_rq(p) != rq);
> > >  
> > > - if (cfs_rq->nr_running > 1) {
> > > + if (rq->cfs.h_nr_running > 1) {
> > >           u64 slice = sched_slice(cfs_rq, se);
> > >           u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
> > >           s64 delta = slice - ran;
> > 
> > Yeah, that looks right. I don't think I've ever tried hrtick with
> > cgroups enabled...
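
For anyone else reading along, my understanding of the two counters
(from reading fair.c, so take it as a summary rather than gospel):

 /*
  * cfs_rq->nr_running: entities queued on that one cfs_rq only
  * (tasks plus child group entities at that level).  In case [2]
  * each dd is the sole task on its own group's cfs_rq, so the
  * check in hrtick_start_fair() saw nr_running == 1 and never
  * armed the hrtimer.
  *
  * rq->cfs.h_nr_running: runnable CFS tasks on the whole CPU,
  * accumulated across the cgroup hierarchy, so it is 4 in both
  * [1] and [2].
  */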
