On Wed, Dec 01, 2010 at 06:45:02PM +0100, Peter Zijlstra wrote:
> On Wed, 2010-12-01 at 22:59 +0530, Srivatsa Vaddagiri wrote:
> > 
> > yield_task_fair(...)
> > {
> > 
> > +       /* How much of the current slice is left unused? */
> > +       ideal_runtime = sched_slice(cfs_rq, curr);
> > +       delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> > +       rem_time_slice = ideal_runtime - delta_exec;
> > +
> > +       /* Bank it (capped) so it can be repaid on a later slice. */
> > +       current->donate_time += rem_time_slice > some_threshold ?
> > +                                some_threshold : rem_time_slice;
> > 
> >         ...
> > }
> > 
> > 
> > sched_slice(...)
> > {
> >         slice = ...
> > 
> > +       /* Extend the slice by any time previously given up on yield. */
> > +       slice += current->donate_time;
> > 
> > }
> > 
> > or something close to it. I am a bit reluctant to go that route myself,
> > unless the fairness issue with plain yield is quite bad.
> 
> That really won't do anything. You need to adjust both tasks'
> vruntimes.

We are dealing with just one task here (the task that is yielding).
After recording how much timeslice we are "giving up" in current->donate_time
(donate_time is perhaps not the right name to use), we adjust the yielding
task's vruntime as per existing logic (for example, to send it to the back of
the runqueue). When the yielding task gets to run again, the lock is hopefully
available for it to grab, and we let it run longer than the default
sched_slice() to compensate for the time it previously gave up to other
threads on the same runqueue. This ensures that yielding upon lock contention
does not leak bandwidth in favor of other guests. Again, I don't know how much
of a fairness issue this is in practice, so unless we see some numbers I'd
prefer sticking to plain yield() upon lock contention (for unmodified guests,
that is).
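
To make the intended bookkeeping concrete, here is a minimal userspace
sketch of the accounting (not real kernel code; the struct, the 2ms cap,
and clearing donate_time once it has been repaid are my assumptions,
keeping the names from the pseudocode above):

#include <stdio.h>

#define SOME_THRESHOLD 2000000ULL  /* cap on banked time: 2ms, assumed */

struct task {
	unsigned long long sum_exec_runtime;      /* total runtime so far */
	unsigned long long prev_sum_exec_runtime; /* runtime at last pick */
	unsigned long long donate_time;           /* slice given up on yield */
};

/* On yield due to lock contention: bank the unused part of the
 * current slice, capped at SOME_THRESHOLD. */
static void record_yield(struct task *t, unsigned long long ideal_runtime)
{
	unsigned long long delta_exec =
		t->sum_exec_runtime - t->prev_sum_exec_runtime;

	if (ideal_runtime > delta_exec) {
		unsigned long long rem = ideal_runtime - delta_exec;
		t->donate_time += rem > SOME_THRESHOLD ? SOME_THRESHOLD : rem;
	}
}

/* When computing the next slice: extend it by the banked time, then
 * clear the credit so it is repaid only once. */
static unsigned long long next_slice(struct task *t,
				     unsigned long long base_slice)
{
	unsigned long long slice = base_slice + t->donate_time;

	t->donate_time = 0;
	return slice;
}

int main(void)
{
	struct task t = { 1000000ULL, 0ULL, 0ULL };

	/* Yield 1ms into a 6ms slice: 5ms remain, capped to 2ms. */
	record_yield(&t, 6000000ULL);
	printf("banked: %llu ns\n", t.donate_time);

	/* Next slice grows from 6ms to 8ms, then the credit is cleared. */
	printf("next slice: %llu ns\n", next_slice(&t, 6000000ULL));
	return 0;
}

Whether donate_time should be cleared after a single repayment, decayed,
or spread over several slices is an open question in this sketch.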

> Also, I really wouldn't touch the yield() implementation, nor
> would I expose any such time donation crap to userspace.

- vatsa
