On Thu, Dec 09, 2010 at 11:34:46PM -0500, Rik van Riel wrote:
On 12/03/2010 09:06 AM, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
before being interrupted), so the respective
On 12/10/2010 03:39 AM, Srivatsa Vaddagiri wrote:
On Thu, Dec 09, 2010 at 11:34:46PM -0500, Rik van Riel wrote:
On 12/03/2010 09:06 AM, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
On 12/03/2010 09:06 AM, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
before being interrupted), so the respective vruntimes will increase, at
some point they'll pass B0 and it'll get
On 12/03/2010 08:23 AM, Peter Zijlstra wrote:
On Thu, 2010-12-02 at 14:44 -0500, Rik van Riel wrote:
unsigned long clone_flags);
+
+#ifdef CONFIG_SCHED_HRTICK
+extern u64 slice_remain(struct task_struct *);
+extern void yield_to(struct task_struct *);
+#else
On Wed, 2010-12-08 at 12:55 -0500, Rik van Riel wrote:
Right, so another approach might be to simply swap the vruntime between
curr and p.
Doesn't that run into the same scale issue you described
above?
Not really, but its tricky on SMP because vruntime only has meaning
within a
On Wed, 2010-12-08 at 21:00 +0100, Peter Zijlstra wrote:
+ lag0 = avg_vruntime(cfs_rq_of(se));
+ p_lag0 = avg_vruntime(cfs_rq_of(p_se));
+
+ lag = se->vruntime - avg_vruntime(cfs_rq);
+ p_lag = p_se->vruntime - avg_vruntime(p_cfs_rq);
+
+ if (p_lag > lag) { /* if
On 12/08/2010 03:00 PM, Peter Zijlstra wrote:
Anyway, completely untested and such..
Looks very promising. I've been making a few changes in the same
direction (except for the fancy CFS bits) and have one way to solve
the one problem you pointed out in your patch.
+void yield_to(struct
On 12/03/2010 04:23 PM, Peter Zijlstra wrote:
On Fri, 2010-12-03 at 19:40 +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 07:36:07PM +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some
On Thu, 2010-12-02 at 14:44 -0500, Rik van Riel wrote:
unsigned long clone_flags);
+
+#ifdef CONFIG_SCHED_HRTICK
+extern u64 slice_remain(struct task_struct *);
+extern void yield_to(struct task_struct *);
+#else
+static inline void yield_to(struct task_struct
On Fri, Dec 03, 2010 at 02:23:39PM +0100, Peter Zijlstra wrote:
Right, so another approach might be to simply swap the vruntime between
curr and p.
Can't that cause others to starve? For example, consider a cpu p0 having these tasks:
p0 - A0 B0 A1
A0/A1 have entered some sort of AB-BA
On Fri, Dec 03, 2010 at 06:54:16AM +0100, Mike Galbraith wrote:
+void yield_to(struct task_struct *p)
+{
+ unsigned long flags;
+ struct sched_entity *se = &p->se;
+ struct rq *rq;
+ struct cfs_rq *cfs_rq;
+ u64 remain = slice_remain(current);
That slice remaining only
On Fri, 2010-12-03 at 19:00 +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 02:23:39PM +0100, Peter Zijlstra wrote:
Right, so another approach might be to simply swap the vruntime between
curr and p.
Can't that cause others to starve? For example, consider a cpu p0 having these
tasks:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
before being interrupted), so the respective vruntimes will increase, at
some point they'll pass B0 and it'll get scheduled.
Is that sufficient to ensure that B0
On Fri, Dec 03, 2010 at 07:36:07PM +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
before being interrupted), so the respective vruntimes will increase, at
some point they'll
On Fri, 2010-12-03 at 19:16 +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 06:54:16AM +0100, Mike Galbraith wrote:
+void yield_to(struct task_struct *p)
+{
+ unsigned long flags;
+ struct sched_entity *se = &p->se;
+ struct rq *rq;
+ struct cfs_rq *cfs_rq;
+ u64
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could matter which container the numbers live in as long as they
keep advancing, and stay in the same runqueue. (hm, task weights would
have to be the same too or
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could matter which container the numbers live in as long as they
keep advancing, and stay in the same
On 12/03/2010 10:09 AM, Mike Galbraith wrote:
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could matter which container the numbers live in as long as they
On Fri, Dec 03, 2010 at 10:35:25AM -0500, Rik van Riel wrote:
Do you have suggestions on what I should do to make
this yield_to functionality work?
Keeping in mind the complications of yield_to, I had suggested we do something
along the lines suggested below:
http://marc.info/?l=kvm&m=129122645006996&w=2
On 12/03/2010 11:20 AM, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 10:35:25AM -0500, Rik van Riel wrote:
Do you have suggestions on what I should do to make
this yield_to functionality work?
Keeping in mind the complications of yield_to, I had suggested we do something
along the lines suggested below:
On Fri, Dec 03, 2010 at 12:09:01PM -0500, Rik van Riel wrote:
I don't see how that is going to help get the lock
released, when the VCPU holding the lock is on another
CPU.
Even the directed yield() is not guaranteed to get the lock released, given it's
shooting in the dark?
Anyway, the
On 12/03/2010 12:29 PM, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 12:09:01PM -0500, Rik van Riel wrote:
I don't see how that is going to help get the lock
released, when the VCPU holding the lock is on another
CPU.
Even the directed yield() is not guaranteed to get the lock released,
On Fri, Dec 03, 2010 at 12:33:29PM -0500, Rik van Riel wrote:
Anyway, the intention of yield() proposed was not to get lock released
immediately (which will happen eventually), but rather to avoid inefficiency
associated with (long) spinning and at the same time make sure we are not
leaking
On 12/02/2010 07:50 PM, Chris Wright wrote:
+void requeue_task(struct rq *rq, struct task_struct *p)
+{
+ assert_spin_locked(&rq->lock);
+
+ if (!p->se.on_rq || task_running(rq, p) || task_has_rt_policy(p))
+ return;
already checked task_running(rq, p) ||
* Rik van Riel (r...@redhat.com) wrote:
On 12/02/2010 07:50 PM, Chris Wright wrote:
+/*
+ * Yield the CPU, giving the remainder of our time slice to task p.
+ * Typically used to hand CPU time to another thread inside the same
+ * process, eg. when p holds a resource other threads are waiting
On Fri, 2010-12-03 at 10:35 -0500, Rik van Riel wrote:
On 12/03/2010 10:09 AM, Mike Galbraith wrote:
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could
On Fri, 2010-12-03 at 16:09 +0100, Mike Galbraith wrote:
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could matter which container the numbers live in
On Fri, 2010-12-03 at 19:40 +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 07:36:07PM +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 03:03:30PM +0100, Peter Zijlstra wrote:
No, because they do receive service (they spend some time spinning
before being interrupted), so
On Fri, 2010-12-03 at 13:27 -0500, Rik van Riel wrote:
Should these details all be in sched_fair? Seems like the wrong layer
here. And would that condition go the other way? If new vruntime is
smaller than min, then it becomes new cfs_rq->min_vruntime?
That would be nice.
Add a yield_to function to the scheduler code, allowing us to
give the remainder of our timeslice to another thread.
We may want to use this to provide a sys_yield_to system call
one day.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff --git
* Rik van Riel (r...@redhat.com) wrote:
Add a yield_to function to the scheduler code, allowing us to
give the remainder of our timeslice to another thread.
We may want to use this to provide a sys_yield_to system call
one day.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by:
On Thu, 2010-12-02 at 14:44 -0500, Rik van Riel wrote:
+#ifdef CONFIG_SCHED_HRTICK
+/*
+ * Yield the CPU, giving the remainder of our time slice to task p.
+ * Typically used to hand CPU time to another thread inside the same
+ * process, eg. when p holds a resource other threads are waiting