On 12/02/2010 03:31 AM, Takuya Yoshikawa wrote:
Thanks for the answers Avi, Juan,
Some FYI, (not about the bottleneck)
On Wed, 01 Dec 2010 14:35:57 +0200
Avi Kivity <a...@redhat.com> wrote:
- how many dirty pages do we have to care about?
(default values, and assuming 1 Gigabit Ethernet)
On 12/01/2010 08:03 PM, Anthony Liguori wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct is to simply avoid trapping the HLT instruction which
lets the guest
On 12/01/2010 09:07 PM, Peter Zijlstra wrote:
The pause loop exiting directed yield patches I am working on
preserve inter-vcpu fairness by round robining among the vcpus
inside one KVM guest.
I don't necessarily think that's enough.
Suppose you've got 4 vcpus, one is holding a lock
On 12/01/2010 07:29 PM, Srivatsa Vaddagiri wrote:
A plain yield (ignoring no-opiness on Linux) will penalize the
running guest wrt other guests. We need to maintain fairness.
Avi, any idea how much penalty are we talking of here in using plain yield?
If that is acceptable in
On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
We are dealing with just one task here (the task that is yielding).
After recording how much timeslice we are giving up in current->donate_time
(donate_time is perhaps not the right name to use), we adjust the yielding
task's vruntime as per
It's the speculative path if 'no_apf == 1', and we will handle this
speculative path specially in a later patch, so 'prefault' better fits the sense
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |3 ++-
arch/x86/kvm/mmu.c |
Retry #PF is the speculative path, so don't set the accessed bit
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 10 ++
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 010736e..c6bb449
Retry #PF for softmmu only when the current vcpu has the same cr3 as at the
time the #PF occurred
Changelog:
Just compare the cr3 value, since it's harmless to instantiate an spte for an
unused translation (from Marcelo's comment)
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
The Buildbot has detected a new failure of disable_kvm_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/disable_kvm_x86_64_debian_5_0/builds/658
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of disable_kvm_i386_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/disable_kvm_i386_debian_5_0/builds/659
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of disable_kvm_x86_64_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/disable_kvm_x86_64_out_of_tree/builds/607
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of disable_kvm_i386_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/disable_kvm_i386_out_of_tree/builds/607
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/668
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
The Buildbot has detected a new failure of default_x86_64_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_out_of_tree/builds/609
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build:
The Buildbot has detected a new failure of default_i386_debian_5_0 on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_debian_5_0/builds/670
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
The Buildbot has detected a new failure of default_i386_out_of_tree on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_out_of_tree/builds/607
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
On Thu, Dec 02, 2010 at 09:13:28AM +0800, Yang, Sheng wrote:
On Wednesday 01 December 2010 22:03:58 Michael S. Tsirkin wrote:
On Wed, Dec 01, 2010 at 04:41:38PM +0800, lidong chen wrote:
I used SR-IOV, giving each VM 2 VFs.
After applying the patch, I found performance is the same.
Bugs item #1808970, was opened at 2007-10-07 16:42
Message generated for change (Comment added) made by jessorensen
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1808970&group_id=180599
Please note that this message will contain a full copy of the comment
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
We are dealing with just one task here (the task that is yielding).
After recording how much timeslice we are giving up in
current->donate_time
(donate_time is perhaps not the
On Thu, Dec 2, 2010 at 5:49 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Dec 02, 2010 at 09:13:28AM +0800, Yang, Sheng wrote:
On Wednesday 01 December 2010 22:03:58 Michael S. Tsirkin wrote:
On Wed, Dec 01, 2010 at 04:41:38PM +0800, lidong chen wrote:
I used sr-iov, give each vm 2
On Wed, Dec 01, 2010 at 05:03:43PM +0900, Yoshiaki Tamura wrote:
2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Sun, Nov 28, 2010 at 08:27:58PM +0900, Yoshiaki Tamura wrote:
2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Thu, Nov 25, 2010 at 03:06:44PM +0900, Yoshiaki Tamura wrote:
Hello,
I am having a little trouble figuring out which motherboards have support for
VT-d. I am following the link mentioned on the KVM site
http://wiki.xensource.com/xenwiki/VTdHowTo
The link says
Following desktop boards have the VT-d support
* Intel DQ35JO
* Intel DQ35MP
*
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target thread run. The
How would that work well with hard-limits? The target thread would have been
rate limited and no amount
On Thu, Dec 02, 2010 at 05:17:00PM +0530, Srivatsa Vaddagiri wrote:
Just was wondering how this would work in case of buggy guests. Let's say that
a
guest ran into an AB-BA deadlock. VCPU0 spins on lock B (held by VCPU1
currently), while VCPU1 spins on lock A (held by VCPU0 currently). Both keep
On Thu, Dec 02, 2010 at 07:52:00PM +0800, Sheng Yang wrote:
On Thu, Dec 2, 2010 at 5:49 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Dec 02, 2010 at 09:13:28AM +0800, Yang, Sheng wrote:
On Wednesday 01 December 2010 22:03:58 Michael S. Tsirkin wrote:
On Wed, Dec 01, 2010 at
On Wed, Dec 01, 2010 at 09:25:40PM -0500, Kevin O'Connor wrote:
On Wed, Dec 01, 2010 at 02:27:40PM +0200, Gleb Natapov wrote:
On Tue, Nov 30, 2010 at 09:53:32PM -0500, Kevin O'Connor wrote:
BTW, what's the plan for handling SCSI adapters? Let's say a user has
a scsi card with three drives
On 12/02/2010 01:47 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
We are dealing with just one task here (the task that is yielding).
After recording how much timeslice we are giving up in
On 12/02/2010 02:19 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target thread run. The
How would that work well with hard-limits? The
On 12/01/2010 04:36 AM, Yang, Sheng wrote:
On Tuesday 30 November 2010 22:15:29 Avi Kivity wrote:
On 11/26/2010 04:35 AM, Yang, Sheng wrote:
Shouldn't kvm also service reads from the pending bitmask?
Of course KVM should service reads from the pending bitmask. For
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target thread run.
I presume this requires the target vcpu to move left in the rb-tree to run
earlier than scheduled
On Thu, Dec 02, 2010 at 03:09:43PM +0200, Avi Kivity wrote:
On 12/01/2010 04:36 AM, Yang, Sheng wrote:
On Tuesday 30 November 2010 22:15:29 Avi Kivity wrote:
On 11/26/2010 04:35 AM, Yang, Sheng wrote:
Shouldn't kvm also service reads from the pending bitmask?
Of
On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target thread run.
I presume this requires the target vcpu to
On 24/11/10 18:10 +0200, Avi Kivity wrote:
On 11/24/2010 05:50 PM, Anthony Liguori wrote:
My answer is that C++ is the only language that allows you to evolve
away from C, with mixed C/C++ source (not just linkage level
compatibility). If there are others, I want to know about them.
On 12/02/2010 03:47 PM, Michael S. Tsirkin wrote:
Which case? the readl() doesn't need access to the routing table,
just the entry.
One thing that read should do is flush in the outstanding
interrupts and flush out the mask bit writes.
The mask bit writes are synchronous.
wrt
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct is to simply avoid trapping the HLT instruction which
lets the guest directly execute the instruction putting the
I applied the patch correctly.
The addr is not in the MMIO range, because kvm_io_bus_write tests the addr
for each device.
/* kvm_io_bus_write - called under kvm->slots_lock */
int kvm_io_bus_write(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
int len, const void *val)
{
int
On Thu, Dec 02, 2010 at 03:56:52PM +0200, Avi Kivity wrote:
On 12/02/2010 03:47 PM, Michael S. Tsirkin wrote:
Which case? the readl() doesn't need access to the routing table,
just the entry.
One thing that read should do is flush in the outstanding
interrupts and flush out the mask
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling.
I could not understand why this is needed. Can you explain in more detail?
Thanks.
2010/12/2 Anthony Liguori aligu...@us.ibm.com:
In certain use-cases, we want to allocate guests fixed
On Thu, Dec 2, 2010 at 10:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Dec 02, 2010 at 03:56:52PM +0200, Avi Kivity wrote:
On 12/02/2010 03:47 PM, Michael S. Tsirkin wrote:
Which case? the readl() doesn't need access to the routing table,
just the entry.
One thing that
Gleb Natapov wrote:
BBS specification is broken since it doesn't provide a way for
discovered boot method (BCV) to be linked back to a device it will
boot from. Nothing we can do to fix this except moving to EFI (and
hope the problem is fixed there).
There is that option, or there could be
On 12/02/2010 08:39 AM, lidong chen wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling.
I could not understand why this is needed. Can you explain in more detail?
If you run 4 guests on a CPU, and they're all trying to
On Thu, Dec 02, 2010 at 03:49:44PM +0200, Avi Kivity wrote:
On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
What I'd like to see in directed yield is donating exactly the
amount of vruntime that's needed to make the target
On 12/02/2010 05:27 PM, Srivatsa Vaddagiri wrote:
Even that would require some precaution in directed yield to ensure that it
doesn't unduly inflate vruntime of target, hurting fairness for other
guests on
same cpu as target (example guest code that can lead to this situation
below):
On Thu, Dec 02, 2010 at 05:33:40PM +0200, Avi Kivity wrote:
A0 and A1's vruntime will keep growing, eventually B will become
leftmost and become runnable (assuming leftmost == min vruntime, not
sure what the terminology is).
Donation (in directed yield) will cause vruntime to drop as well
Reboot with guests running, on Intel hosts, with a non-preemptible host
kernel is broken. This patchset fixes the issue.
Avi Kivity (2):
KVM: Don't spin on virt instruction faults during reboot
KVM: VMX: Return 0 from a failed VMREAD
arch/x86/include/asm/kvm_host.h |8 ++--
If we execute VMREAD during reboot we'll just skip over it. Instead of
returning garbage, return 0, which has a much smaller chance of confusing
the code. Otherwise we risk a flood of debug printk()s which block the
reboot process if a serial console or netconsole is enabled.
Signed-off-by: Avi
Since vmx blocks INIT signals, we disable virtualization extensions during
reboot. This leads to virtualization instructions faulting; we trap these
faults and spin while the reboot continues.
Unfortunately spinning on a non-preemptible kernel may block a task that
reboot depends on; this causes
Actually CCing Rik now!
On Thu, Dec 02, 2010 at 08:57:16PM +0530, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 03:49:44PM +0200, Avi Kivity wrote:
On 12/02/2010 03:13 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 02:41:35PM +0200, Avi Kivity wrote:
What I'd like to see in
On Tue, Nov 30, 2010 at 06:03:57PM +0100, Joerg Roedel wrote:
This patch wraps changes to the CRx intercepts of SVM into
separate functions to abstract nested-svm better and prepare
the implementation of the vmcb-clean-bits feature.
Signed-off-by: Joerg Roedel joerg.roe...@amd.com
---
On Tue, Nov 30, 2010 at 05:14:13PM +0900, Jin Dongming wrote:
When the following test case is injected with the mce command, the user may
not get the expected result.
DATA
command cpu bank status mcg_status addr misc
(qemu) mce 1 1
On Thu, Dec 02, 2010 at 10:54:24PM +0800, Sheng Yang wrote:
On Thu, Dec 2, 2010 at 10:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Dec 02, 2010 at 03:56:52PM +0200, Avi Kivity wrote:
On 12/02/2010 03:47 PM, Michael S. Tsirkin wrote:
Which case? the readl() doesn't need
On Thu, Dec 02, 2010 at 04:07:16PM +0100, Peter Stuge wrote:
Gleb Natapov wrote:
BBS specification is broken since it doesn't provide a way for
discovered boot method (BCV) to be linked back to a device it will
boot from. Nothing we can do to fix this except moving to EFI (an
hope the
On Thu, Dec 02, 2010 at 07:59:17AM -0600, Anthony Liguori wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct is to simply avoid trapping the HLT instruction
On 12/02/2010 11:37 AM, Marcelo Tosatti wrote:
On Thu, Dec 02, 2010 at 07:59:17AM -0600, Anthony Liguori wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct
On Thu, Dec 02, 2010 at 11:00:37AM -0800, Paul E. McKenney wrote:
On Mon, Nov 29, 2010 at 07:09:01PM +0200, Michael S. Tsirkin wrote:
This adds a test module for vhost infrastructure.
Intentionally not tied to kbuild to prevent people
from installing and loading it accidentally.
* Anthony Liguori (aligu...@us.ibm.com) wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct is to simply avoid trapping the HLT instruction which
lets the
On Mon, Nov 29, 2010 at 07:09:01PM +0200, Michael S. Tsirkin wrote:
This adds a test module for vhost infrastructure.
Intentionally not tied to kbuild to prevent people
from installing and loading it accidentally.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
On question below.
---
On Thu, Dec 02, 2010 at 09:11:30PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:00:37AM -0800, Paul E. McKenney wrote:
On Mon, Nov 29, 2010 at 07:09:01PM +0200, Michael S. Tsirkin wrote:
This adds a test module for vhost infrastructure.
Intentionally not tied to kbuild to
Instead of sleeping in kvm_vcpu_on_spin, which can cause gigantic
slowdowns of certain workloads, we instead use yield_to to hand
the rest of our timeslice to another vcpu in the same KVM guest.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff
Add a yield_to function to the scheduler code, allowing us to
give the remainder of our timeslice to another thread.
We may want to use this to provide a sys_yield_to system call
one day.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff --git
Keep track of which task is running a KVM vcpu. This helps us
figure out later what task to wake up if we want to boost a
vcpu that got preempted.
Unfortunately there are no guarantees that the same task
always keeps the same vcpu, so we can only track the task
across a single run of the vcpu.
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a feature that detects when a virtual
CPU is
On Thu, Dec 02, 2010 at 11:26:16AM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010 at 09:11:30PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:00:37AM -0800, Paul E. McKenney wrote:
On Mon, Nov 29, 2010 at 07:09:01PM +0200, Michael S. Tsirkin wrote:
This adds a test
On 12/02/2010 01:14 PM, Chris Wright wrote:
* Anthony Liguori (aligu...@us.ibm.com) wrote:
In certain use-cases, we want to allocate guests fixed time slices where idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct is to simply avoid
* Anthony Liguori (anth...@codemonkey.ws) wrote:
On 12/02/2010 01:14 PM, Chris Wright wrote:
* Anthony Liguori (aligu...@us.ibm.com) wrote:
In certain use-cases, we want to allocate guests fixed time slices where
idle
guest cycles leave the machine idling. There are many approaches to
opt = CPU_BASED_TPR_SHADOW |
CPU_BASED_USE_MSR_BITMAPS |
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
--
1.7.0.4
Breaks async PF (see checks on guest state),
Sorry, I don't follow what you mean here. Can you elaborate?
VCPU in HLT state only allows injection of
On Thu, Dec 02, 2010 at 11:14:16AM -0800, Chris Wright wrote:
* Anthony Liguori (aligu...@us.ibm.com) wrote:
In certain use-cases, we want to allocate guests fixed time slices where
idle
guest cycles leave the machine idling. There are many approaches to achieve
this but the most direct
On 12/02/2010 02:12 PM, Marcelo Tosatti wrote:
opt = CPU_BASED_TPR_SHADOW |
CPU_BASED_USE_MSR_BITMAPS |
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
--
1.7.0.4
Breaks async PF (see checks on guest state),
Sorry, I don't follow what you mean here.
* Marcelo Tosatti (mtosa...@redhat.com) wrote:
On Thu, Dec 02, 2010 at 11:14:16AM -0800, Chris Wright wrote:
* Anthony Liguori (aligu...@us.ibm.com) wrote:
In certain use-cases, we want to allocate guests fixed time slices where
idle
guest cycles leave the machine idling. There are
Gleb Natapov wrote:
How can we get to EDD info after device is mapped? Looking at Seabios
implementation it builds EDD table on the fly when int_1348 is called
and it does it only for internal devices. Can we use the disconnect vector
to connect the device temporarily, get EDD, and then disconnect?
On 12/02/2010 02:40 PM, Marcelo Tosatti wrote:
Consuming the timeslice outside guest mode is less intrusive and easier
to replace. Something like this should work?
if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED) {
while (!need_resched())
default_idle();
}
But you agree this is no
On 12/02/2010 03:07 PM, Chris Wright wrote:
But you agree this is no KVM business.
Like non-trapping hlt, that too will guarantee that the guest is preempted
by timeslice exhaustion (and is simpler than non-trapping hlt). So it
may well be the simplest for the case where we are perfectly
* Rik van Riel (r...@redhat.com) wrote:
When running SMP virtual machines, it is possible for one VCPU to be
spinning on a spinlock, while the VCPU that holds the spinlock is not
currently running, because the host scheduler preempted it to run
something else.
Both Intel and AMD CPUs have a
We are getting failures when executing apic.flat on our periodic upstream tests:
12/02 18:40:59 DEBUG|kvm_vm:0664| Running qemu command:
/usr/local/autotest/tests/kvm/qemu -name 'vm1' -monitor
unix:'/tmp/monitor-humanmonitor1-20101202-184059-9EnX',server,nowait -serial
unix:'/tmp/serial
On Thu, Dec 02, 2010 at 09:47:09PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:26:16AM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010 at 09:11:30PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:00:37AM -0800, Paul E. McKenney wrote:
On Mon, Nov 29, 2010
On Thu, Dec 02, 2010 at 03:13:03PM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010 at 09:47:09PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:26:16AM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010 at 09:11:30PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010
On Fri, Dec 03, 2010 at 01:18:18AM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 03:13:03PM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010 at 09:47:09PM +0200, Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 11:26:16AM -0800, Paul E. McKenney wrote:
On Thu, Dec 02, 2010
* Rik van Riel (r...@redhat.com) wrote:
Add a yield_to function to the scheduler code, allowing us to
give the remainder of our timeslice to another thread.
We may want to use this to provide a sys_yield_to system call
one day.
Signed-off-by: Rik van Riel r...@redhat.com
Signed-off-by:
* Rik van Riel (r...@redhat.com) wrote:
Keep track of which task is running a KVM vcpu. This helps us
figure out later what task to wake up if we want to boost a
vcpu that got preempted.
Unfortunately there are no guarantees that the same task
always keeps the same vcpu, so we can only
On Thu, Dec 02, 2010 at 02:30:42PM +0200, Gleb Natapov wrote:
On Wed, Dec 01, 2010 at 09:25:40PM -0500, Kevin O'Connor wrote:
You're thinking in terms of which device to boot, which does make this
difficult. However, it's equally valid to think in terms of which
boot method to invoke,
* Rik van Riel (r...@redhat.com) wrote:
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1880,18 +1880,53 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu)
+void kvm_vcpu_on_spin(struct kvm_vcpu *me)
{
-
* Anthony Liguori (anth...@codemonkey.ws) wrote:
On 12/02/2010 03:07 PM, Chris Wright wrote:
Like non-trapping hlt, that too will guarantee that the guest is preempted
by timeslice exhaustion (and is simpler than non-trapping hlt). So it
may well be the simplest for the case where we are
On Friday 03 December 2010 00:55:03 Michael S. Tsirkin wrote:
On Thu, Dec 02, 2010 at 10:54:24PM +0800, Sheng Yang wrote:
On Thu, Dec 2, 2010 at 10:26 PM, Michael S. Tsirkin m...@redhat.com wrote:
On Thu, Dec 02, 2010 at 03:56:52PM +0200, Avi Kivity wrote:
On 12/02/2010 03:47 PM, Michael
On 12/02/2010 08:42 PM, Chris Wright wrote:
OK, let's say a single PCPU == 12 Compute Units.
If the guest is the first to migrate to a newly added unused host, and
we are using either non-trapping hlt or Marcelo's non-yielding trapping
hlt, then that guest is going to get more CPU than it
* Anthony Liguori (anth...@codemonkey.ws) wrote:
On 12/02/2010 08:42 PM, Chris Wright wrote:
OK, let's say a single PCPU == 12 Compute Units.
If the guest is the first to migrate to a newly added unused host, and
we are using either non-trapping hlt or Marcelo's non-yielding trapping
hlt,
On Thu, 2010-12-02 at 14:44 -0500, Rik van Riel wrote:
+#ifdef CONFIG_SCHED_HRTICK
+/*
+ * Yield the CPU, giving the remainder of our time slice to task p.
+ * Typically used to hand CPU time to another thread inside the same
+ * process, eg. when p holds a resource other threads are waiting
On Thu, Dec 02, 2010 at 09:01:25PM -0500, Kevin O'Connor wrote:
On Thu, Dec 02, 2010 at 02:30:42PM +0200, Gleb Natapov wrote:
On Wed, Dec 01, 2010 at 09:25:40PM -0500, Kevin O'Connor wrote:
You're thinking in terms of which device to boot, which does make this
difficult. However, it's
2010/12/2 Michael S. Tsirkin m...@redhat.com:
On Wed, Dec 01, 2010 at 05:03:43PM +0900, Yoshiaki Tamura wrote:
2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Sun, Nov 28, 2010 at 08:27:58PM +0900, Yoshiaki Tamura wrote:
2010/11/28 Michael S. Tsirkin m...@redhat.com:
On Thu, Nov 25,