Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt) functionality.

2011-01-17 Thread Srivatsa Vaddagiri
On Fri, Jan 14, 2011 at 01:29:52PM -0500, Rik van Riel wrote:
 I am not sure whether we are meeting that objective via this patch, as
 lock-spinning vcpu would simply yield after setting next buddy to preferred
 vcpu on target pcpu, thereby leaking some amount of bandwidth on the pcpu
 where it is spinning.
 
 Have you read the patch?

Sorry had mis-read the patch!

On reviewing it further, I am wondering if we can optimize yield_to() further
for case when target and current are on same pcpu, by swapping vruntimes of two
tasks (to let target run in current's place - as we do in task_fork_fair()).

- vatsa
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt) functionality.

2011-01-14 Thread Peter Zijlstra
On Fri, 2011-01-14 at 03:03 -0500, Rik van Riel wrote:
 From: Mike Galbraith efa...@gmx.de
 
 Currently only implemented for fair class tasks.
 
 Add a yield_to_task method() to the fair scheduling class. allowing the
 caller of yield_to() to accelerate another thread in it's thread group,
 task group.
 
 Implemented via a scheduler hint, using cfs_rq-next to encourage the
 target being selected.  We can rely on pick_next_entity to keep things
 fair, so noone can accelerate a thread that has already used its fair
 share of CPU time.
 
 This also means callers should only call yield_to when they really
 mean it.  Calling it too often can result in the scheduler just
 ignoring the hint.
 
 Signed-off-by: Rik van Riel r...@redhat.com
 Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
 Signed-off-by: Mike Galbraith efa...@gmx.de 

Looks good to me, do you want me to merge this or will you merge it
through the kvm tree with all other patches?

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt) functionality.

2011-01-14 Thread Srivatsa Vaddagiri
On Fri, Jan 14, 2011 at 03:03:57AM -0500, Rik van Riel wrote:
 From: Mike Galbraith efa...@gmx.de
 
 Currently only implemented for fair class tasks.
 
 Add a yield_to_task method() to the fair scheduling class. allowing the
 caller of yield_to() to accelerate another thread in it's thread group,
 task group.
 
 Implemented via a scheduler hint, using cfs_rq-next to encourage the
 target being selected.  We can rely on pick_next_entity to keep things
 fair, so noone can accelerate a thread that has already used its fair
 share of CPU time.

If I recall correctly, one of the motivations for yield_to_task (rather than
a simple yield) was to avoid leaking bandwidth to other guests i.e we don't want
the remaining timeslice of spinning vcpu to be given away to other guests but
rather donate it to another (lock-holding) vcpu and thus retain the bandwidth
allocated to the guest.

I am not sure whether we are meeting that objective via this patch, as
lock-spinning vcpu would simply yield after setting next buddy to preferred
vcpu on target pcpu, thereby leaking some amount of bandwidth on the pcpu
where it is spinning. Would be nice to see what kind of fairness impact this 
has under heavy contention scenario.

- vatsa
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [RFC -v5 PATCH 2/4] sched: Add yield_to(task, preempt) functionality.

2011-01-14 Thread Rik van Riel

On 01/14/2011 12:47 PM, Srivatsa Vaddagiri wrote:


If I recall correctly, one of the motivations for yield_to_task (rather than
a simple yield) was to avoid leaking bandwidth to other guests i.e we don't want
the remaining timeslice of spinning vcpu to be given away to other guests but
rather donate it to another (lock-holding) vcpu and thus retain the bandwidth
allocated to the guest.


No, that was not the motivation.   The motivation was to try
and get the lock holder to run soon, so it can release the
lock.

What you describe is merely one of the mechanisms considered
for meeting that objective.


I am not sure whether we are meeting that objective via this patch, as
lock-spinning vcpu would simply yield after setting next buddy to preferred
vcpu on target pcpu, thereby leaking some amount of bandwidth on the pcpu
where it is spinning.


Have you read the patch?

--
All rights reversed
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html