On Sat, 2014-10-25 at 00:04 +0200, Peter Zijlstra wrote:
On Fri, Oct 24, 2014 at 04:53:27PM -0400, Waiman Long wrote:
The additional register pressure may just cause a few more register moves,
which should be negligible for overall performance. The additional
icache pressure, however,
On Thu, 2013-01-03 at 11:41 -0200, Marcelo Tosatti wrote:
Andy, Mike, can you confirm whether this fixes the percpu allocation
failures when loading kvm.ko? TIA
monteverdi:~/:[1]# dmesg|grep PERCPU
[0.00] PERCPU: Embedded 27 pages/cpu @88047f80 s80704 r8192 d21696 u262144
On Tue, 2011-04-05 at 10:48 +0200, Peter Zijlstra wrote:
On Tue, 2011-03-22 at 12:35 +0200, Avi Kivity wrote:
Here's top with 96 idle guests running:
On some hacked up 2.6.38 kernel...
Start of perf report -g
55.26%  kvm  [kernel.kallsyms]  [k] __ticket_spin_lock
On Wed, 2011-01-12 at 22:02 -0500, Rik van Riel wrote:
Cgroups only makes the matter worse - libvirt places
each KVM guest into its own cgroup, so a VCPU will
generally always be alone on its own per-cgroup, per-cpu
runqueue! That can lead to pulling a VCPU onto our local
CPU because we
On Wed, 2011-01-05 at 18:04 +0100, Peter Zijlstra wrote:
On Wed, 2011-01-05 at 17:57 +0100, Mike Galbraith wrote:
+	p_cfs_rq = cfs_rq_of(pse);
+	local = 1;
+	}
+#endif
+
+	/* Tell the scheduler that we'd really like pse to run next
...@redhat.com
Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
Signed-off-by: Mike Galbraith <efa...@gmx.de>
---
 include/linux/sched.h |    1
 kernel/sched.c        |   56 ++
 kernel/sched_fair.c   |   52
On Tue, 2011-01-04 at 11:14 +0200, Avi Kivity wrote:
Assuming there are no objections, Mike, can you get 2/3 into a
fast-forward-only branch of tip.git? I'll then merge it into kvm.git
and apply 1/3 and 2/3.
Wrong guy. Fast-forward is Peter's department.
-Mike
On Tue, 2011-01-04 at 11:09 +0200, Avi Kivity wrote:
On 01/04/2011 08:42 AM, Mike Galbraith wrote:
If I were to, say, run a 256 CPU VM on my quad, would this help me get
more hackbench or whatever oomph from my (256X80386/20:) box?
First of all, you can't run 256 guests on x86 kvm.
I
On Tue, 2011-01-04 at 19:04 +0100, Peter Zijlstra wrote:
+	p_cfs_rq = cfs_rq_of(pse);
+	yield = 1;
+	}
+#endif
+
+	if (yield)
+		clear_buddies(cfs_rq, se);
+	else if (preempt)
+		clear_buddies(p_cfs_rq, curr);
+
+	/* Tell the
On Mon, 2011-01-03 at 16:29 -0500, Rik van Riel wrote:
From: Mike Galbraith efa...@gmx.de
Add a yield_to function to the scheduler code, allowing us to
give enough of our timeslice to another thread to allow it to
run and release whatever resource we need it to release.
We may want to use
A couple questions.
On Mon, 2011-01-03 at 16:26 -0500, Rik van Riel wrote:
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something
On Mon, 2010-12-20 at 17:04 +0100, Mike Galbraith wrote:
On Mon, 2010-12-20 at 10:40 -0500, Rik van Riel wrote:
On 12/17/2010 02:15 AM, Mike Galbraith wrote:
BTW, with this vruntime donation thingy, what prevents a task from
forking off accomplices who do nothing but wait for a wakeup
On Sun, 2010-12-19 at 11:19 +0200, Avi Kivity wrote:
On 12/19/2010 12:05 PM, Mike Galbraith wrote:
We definitely want to maintain fairness. Both with a dedicated virt
host and with a mixed workload.
That makes it difficult to the point of impossible.
You want a specific task
On Mon, 2010-12-20 at 11:03 +0200, Avi Kivity wrote:
On 12/20/2010 10:55 AM, Mike Galbraith wrote:
I don't want it to run now. I want it to run before some other
task. I
don't care if N other tasks run before both. So no godlike powers
needed, simply
On Mon, 2010-12-20 at 11:46 +0200, Avi Kivity wrote:
On 12/20/2010 11:30 AM, Mike Galbraith wrote:
Because preempting a perfect stranger is not courteous, all tasks have
to play nice.
I don't want to preempt anybody, simply make the task run before me.
I thought you
On Mon, 2010-12-20 at 12:39 +0200, Avi Kivity wrote:
On 12/20/2010 12:33 PM, Mike Galbraith wrote:
Correct. I don't want the other task to run before me, I just don't
want to run before it.
OK, so what I gather is that if you can preempt another of your own
threads to get
On Mon, 2010-12-20 at 12:49 +0200, Avi Kivity wrote:
On 12/20/2010 12:46 PM, Mike Galbraith wrote:
However, if I'm all alone on my cpu, and the other task is runnable but
not running, behind some unrelated task, then I do need that task to be
preempted (or to move tasks around
On Mon, 2010-12-20 at 10:40 -0500, Rik van Riel wrote:
On 12/17/2010 02:15 AM, Mike Galbraith wrote:
BTW, with this vruntime donation thingy, what prevents a task from
forking off accomplices who do nothing but wait for a wakeup and
yield_to(exploit)?
Even swapping vruntimes
On Sun, 2010-12-19 at 08:21 +0200, Avi Kivity wrote:
On 12/18/2010 09:06 PM, Mike Galbraith wrote:
Hm, so it needs to be very cheap, and highly repeatable.
What if: so you're trying to get spinners out of the way right? You
somehow know they're spinning, so instead of trying to boost
On Sun, 2010-12-19 at 11:19 +0200, Avi Kivity wrote:
On 12/19/2010 12:05 PM, Mike Galbraith wrote:
That's why you'd drop lag, set to max(se->vruntime, cfs_rq->min_vruntime).
Internal scheduler terminology again, don't follow.
(distance to the fair stick, worthiness to receive cpu
On Sat, 2010-12-18 at 19:02 +0200, Avi Kivity wrote:
On 12/17/2010 09:51 PM, Mike Galbraith wrote:
On Fri, 2010-12-17 at 17:09 +0200, Avi Kivity wrote:
On 12/17/2010 08:56 AM, Mike Galbraith wrote:
Surely that makes it a reasonable idea to call yield, and
get one
On Sat, 2010-12-18 at 19:08 +0200, Avi Kivity wrote:
On 12/17/2010 09:15 AM, Mike Galbraith wrote:
BTW, with this vruntime donation thingy, what prevents a task from
forking off accomplices who do nothing but wait for a wakeup and
yield_to(exploit)?
What's the difference between
On Fri, 2010-12-17 at 17:09 +0200, Avi Kivity wrote:
On 12/17/2010 08:56 AM, Mike Galbraith wrote:
Surely that makes it a reasonable idea to call yield, and
get one of the other tasks on the current CPU running for
a bit?
There's nothing wrong with trying to give up the cpu. It's
On Thu, 2010-12-16 at 14:49 -0500, Rik van Riel wrote:
On 12/14/2010 01:08 AM, Mike Galbraith wrote:
+EXPORT_SYMBOL_GPL(yield_to);
That part looks ok, except for the yield cross cpu bit. Trying to yield
a resource you don't have doesn't make much sense to me.
The current task just
On Fri, 2010-12-17 at 07:57 +0100, Mike Galbraith wrote:
On Thu, 2010-12-16 at 14:49 -0500, Rik van Riel wrote:
+static void yield_to_fair(struct rq *rq, struct task_struct *p)
+{
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	u64
On Tue, 2010-12-14 at 15:54 +0530, Srivatsa Vaddagiri wrote:
On Tue, Dec 14, 2010 at 07:08:16AM +0100, Mike Galbraith wrote:
That part looks ok, except for the yield cross cpu bit. Trying to yield
a resource you don't have doesn't make much sense to me.
So another (crazy) idea is to move
On Tue, 2010-12-14 at 16:56 +0530, Srivatsa Vaddagiri wrote:
On Tue, Dec 14, 2010 at 12:03:58PM +0100, Mike Galbraith wrote:
On Tue, 2010-12-14 at 15:54 +0530, Srivatsa Vaddagiri wrote:
On Tue, Dec 14, 2010 at 07:08:16AM +0100, Mike Galbraith wrote:
That part looks ok, except
On Mon, 2010-12-13 at 22:46 -0500, Rik van Riel wrote:
diff --git a/kernel/sched.c b/kernel/sched.c
index dc91a4d..6399641 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5166,6 +5166,46 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
return ret;
}
On Fri, 2010-12-03 at 19:16 +0530, Srivatsa Vaddagiri wrote:
On Fri, Dec 03, 2010 at 06:54:16AM +0100, Mike Galbraith wrote:
+void yield_to(struct task_struct *p)
+{
+	unsigned long flags;
+	struct sched_entity *se = &p->se;
+	struct rq *rq;
+	struct cfs_rq *cfs_rq;
+	u64
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could matter which container the numbers live in as long as they
keep advancing, and stay in the same
On Fri, 2010-12-03 at 10:35 -0500, Rik van Riel wrote:
On 12/03/2010 10:09 AM, Mike Galbraith wrote:
On Fri, 2010-12-03 at 09:48 -0500, Rik van Riel wrote:
On 12/03/2010 09:45 AM, Mike Galbraith wrote:
I'll have to go back and re-read that. Off the top of my head, I see no
way it could
On Thu, 2010-12-02 at 14:44 -0500, Rik van Riel wrote:
+#ifdef CONFIG_SCHED_HRTICK
+/*
+ * Yield the CPU, giving the remainder of our time slice to task p.
+ * Typically used to hand CPU time to another thread inside the same
+ * process, eg. when p holds a resource other threads are waiting