During the last kvm forum, I described a unit test framework that can
help test the kvm APIs. Briefly, it starts a process in host userspace,
which sets up a memory slot mapping gpa 0:3G to hva 0:3G. It then sets
up guest registers for unpaged protected mode (or paged protected mode
with
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM, Michael S. Tsirkin wrote:
This is using eventfd as well.
Sorry, I meant irqfd.
I've tried using irqfd in userspace. It hurts performance quite a bit
compared to doing an ioctl so I would suspect this too.
A
Please don't top post.
Sorry
Please use 'top' to find out which processes are busy, the aggregate
statistics don't help to find out what the problem is.
The thing is - all more or less active processes become busy, like
httpd, etc - I can't identify any single process that generates all
the
On 10/04/2010 03:04 AM, Avi Kivity wrote:
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM, Michael S. Tsirkin wrote:
This is using eventfd as well.
Sorry, I meant irqfd.
I've tried using irqfd in userspace. It hurts performance quite a
bit compared to doing an
On 10/03/2010 10:24 PM, Dmitry Golubev wrote:
So, I started anew. I decreased the memory allocated to each guest to
3500MB (from 3500MB as I said earlier), but have not decreased the number
of hugepages - it is still 3696.

Please don't top post.
Please use 'top' to find out which processes are
Hi Avi,
On Mon, Oct 04, 2010 at 11:35:28AM +0200, Avi Kivity wrote:
During the last kvm forum, I described a unit test framework that can
help test the kvm APIs. Briefly, it starts a process in host userspace,
which sets up a memory slot mapping gpa 0:3G to hva 0:3G. It then sets
up
Am 02.10.2010 19:25, Avi Kivity wrote:
On 10/01/2010 06:30 PM, Jan Kiszka wrote:
Hi,
for the past days I've been trying to understand a very strange hard
lock-up of some Intel i7 boxes when running our 16-bit guest OS under
KVM. After applying some instrumentation before and after the VM
On Sun, Oct 3, 2010 at 12:01 PM, Avi Kivity a...@redhat.com wrote:
On 09/30/2010 04:01 PM, Stefan Hajnoczi wrote:
Virtqueue notify is currently handled synchronously in userspace virtio.
This prevents the vcpu from executing guest code while hardware
emulation code handles the notify.
On
If the guest can detect that it runs in a non-preemptable context it can
handle async PFs at any time, so let the host know that it can send an
async PF even if the guest cpu is not in userspace.
Acked-by: Rik van Riel r...@redhat.com
Signed-off-by: Gleb Natapov g...@redhat.com
---
Documentation/kvm/msr.txt
When a page is swapped in it is mapped into guest memory only after the
guest tries to access it again and generates another fault. To save this
fault we can map it immediately, since we know the guest is going to access
the page. Do it only when tdp is enabled for now. The shadow paging case is
more
Async PF also needs to hook into smp_prepare_boot_cpu so move the hook
into generic code.
Acked-by: Rik van Riel r...@redhat.com
Signed-off-by: Gleb Natapov g...@redhat.com
---
arch/x86/include/asm/kvm_para.h |1 +
arch/x86/kernel/kvm.c | 11 +++
Keep track of memslot changes by keeping a generation number in the
memslots structure. Provide a kvm_write_guest_cached() function that skips
the gfn_to_hva() translation if the memslots have not changed since the
previous invocation.
Signed-off-by: Gleb Natapov g...@redhat.com
---
include/linux/kvm_host.h |7
Enable async PF in a guest if async PF capability is discovered.
Signed-off-by: Gleb Natapov g...@redhat.com
---
Documentation/kernel-parameters.txt |3 +
arch/x86/include/asm/kvm_para.h |5 ++
arch/x86/kernel/kvm.c | 92 +++
3 files
If a guest accesses swapped out memory do not swap it in from the vcpu
thread context. Schedule work to do the swapping and put the vcpu into a
halted state instead.
Interrupts will still be delivered to the guest, and if an interrupt causes
a reschedule the guest will continue to run another task.
Signed-off-by:
Send an async page fault to a PV guest if it accesses swapped out memory.
The guest will choose another task to run upon receiving the fault.
Allow async page fault injection only when the guest is in user mode, since
otherwise the guest may be in a non-sleepable context and will not be able
to reschedule.
Vcpu
KVM virtualizes guest memory by means of shadow pages or HW assistance
like NPT/EPT. Not all memory used by a guest is mapped into the guest
address space or even present in host memory at any given time.
When a vcpu tries to access a memory page that is not mapped into the guest
address space KVM
This patch adds a get_user_pages() variant that only succeeds if getting
a reference to a page doesn't require a major fault.
Reviewed-by: Rik van Riel r...@redhat.com
Signed-off-by: Gleb Natapov g...@redhat.com
---
fs/ncpfs/mmap.c|2 ++
include/linux/mm.h |5 +
mm/filemap.c |
When async PF capability is detected hook up a special page fault handler
that will handle async page fault events and pass other page faults on to
the regular page fault handler. Also add async PF handling to nested SVM
emulation. Async PF always generates an exit to L1, where the vcpu thread
will be scheduled
If an async page fault is received by the idle task or when preempt_count
is not zero the guest cannot reschedule, so do sti; hlt and wait for the
page to be ready. The vcpu can still process interrupts while it waits for
the page to be ready.
Acked-by: Rik van Riel r...@redhat.com
Signed-off-by: Gleb Natapov
Guest enables async PF vcpu functionality using this MSR.
Reviewed-by: Rik van Riel r...@redhat.com
Signed-off-by: Gleb Natapov g...@redhat.com
---
Documentation/kvm/cpuid.txt |3 +++
Documentation/kvm/msr.txt | 13 -
arch/x86/include/asm/kvm_host.h |2 ++
If the guest indicates that it can handle async PF in kernel mode too,
send it, but only if interrupts are enabled.
Signed-off-by: Gleb Natapov g...@redhat.com
---
arch/x86/kvm/x86.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
On Mon, Oct 04, 2010 at 09:01:14AM -0500, Anthony Liguori wrote:
On 10/04/2010 03:04 AM, Avi Kivity wrote:
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM, Michael S. Tsirkin wrote:
This is using eventfd as well.
Sorry, I meant irqfd.
I've tried using irqfd in
On 10/04/2010 11:12 AM, Michael S. Tsirkin wrote:
On Mon, Oct 04, 2010 at 09:01:14AM -0500, Anthony Liguori wrote:
On 10/04/2010 03:04 AM, Avi Kivity wrote:
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM, Michael S. Tsirkin wrote:
On Mon, Oct 04, 2010 at 11:20:19AM -0500, Anthony Liguori wrote:
On 10/04/2010 11:12 AM, Michael S. Tsirkin wrote:
On Mon, Oct 04, 2010 at 09:01:14AM -0500, Anthony Liguori wrote:
On 10/04/2010 03:04 AM, Avi Kivity wrote:
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM,
Zach,
vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
vcpu->hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset;
vcpu->last_kernel_ns = kernel_ns;   <= (1)
vcpu->last_guest_tsc = tsc_timestamp;
vcpu->hv_clock.flags = 0;
If I understand your intention
Please send in any agenda items you are interested in covering.
thanks,
-chris
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
From: Huang Ying ying.hu...@intel.com
In QEMU-KVM, physical address != RAM address, while MCE simulation needs
the physical address instead of the RAM address. So
kvm_physical_memory_addr_from_ram() is implemented to do the
conversion, and it is invoked before the address is filled into the
IA32_MCi_ADDR MSR.
To be used by next patches.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu/cpu-common.h
===
--- qemu.orig/cpu-common.h
+++ qemu/cpu-common.h
@@ -47,6 +47,7 @@ void qemu_ram_free(ram_addr_t addr);
/* This should only
commit ce6325ff1af34dbaee91c8d28e792277e43f1227
Author: Glauber Costa gco...@redhat.com
Date: Wed Mar 5 17:01:10 2008 -0300
Augment info cpus
This patch exposes the thread id associated with each
cpu through the already well known 'info cpus' interface.
Signed-off-by: Marcelo
Port qemu-kvm's
commit 1bab5d11545d8de5facf46c28630085a2f9651ae
Author: Huang Ying ying.hu...@intel.com
Date: Wed Mar 3 16:52:46 2010 +0800
Add savevm/loadvm support for MCE
MCE registers are saved/load into/from CPUState in
kvm_arch_save/load_regs. To simulate the MCG_STATUS
Port qemu-kvm's signalfd compat code.
commit 5a7fdd0abd7cd24dac205317a4195446ab8748b5
Author: Anthony Liguori aligu...@us.ibm.com
Date: Wed May 7 11:55:47 2008 -0500
Use signalfd() in io-thread
This patch reworks the IO thread to use signalfd() instead of sigtimedwait()
This
Block SIGALRM, SIGIO and consume them via signalfd.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu/cpus.c
===
--- qemu.orig/cpus.c
+++ qemu/cpus.c
@@ -33,6 +33,7 @@
#include exec-all.h
#include cpus.h
+#include
Port qemu-kvm's
commit 4b62fff1101a7ad77553147717a8bd3bf79df7ef
Author: Huang Ying ying.hu...@intel.com
Date: Mon Sep 21 10:43:25 2009 +0800
MCE: Relay UCR MCE to guest
UCR (uncorrected recovery) MCE is supported in recent Intel CPUs,
where some hardware error such as some
Port qemu-kvm's MCE support
commit c68b2374c9048812f488e00ffb95db66c0bc07a7
Author: Huang Ying ying.hu...@intel.com
Date: Mon Jul 20 10:00:53 2009 +0800
Add MCE simulation support to qemu/kvm
KVM ioctls are used to initialize MCE simulation and inject MCE. The
real MCE
Hi
I'm trying to figure out how the network bridging works, focusing on:
- How to create a network where the DomU can get a dhcp address from
the central network.
- How to set up the domain network so that the DomU eth0 and eth1 go
to the corresponding physical NICs, without any chance of
This cleans up device assignment option ROM support and allows
us to use romfile and rombar default PCI options. Thanks,
Alex
---
Alex Williamson (2):
device-assignment: Allow PCI to manage the option ROM
PCI: Export pci_map_option_rom()
hw/device-assignment.c | 155
Allow it to be referenced outside of hw/pci.c so we can register
option ROM BARs using the default mapping routine.
Signed-off-by: Alex Williamson alex.william...@redhat.com
---
hw/pci.c |2 +-
hw/pci.h |3 +++
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/hw/pci.c
We don't need to duplicate PCI code for mapping and managing the
option ROM for an assigned device. We're already using an in-memory
copy of the ROM, so we can simply fill the contents from the physical
device and pass the rest off to PCI. As a benefit, we can now make
use of the rombar and
On 10/04/2010 06:50 AM, Glauber Costa wrote:
Zach,
vcpu->hv_clock.tsc_timestamp = tsc_timestamp;
vcpu->hv_clock.system_time = kernel_ns + v->kvm->arch.kvmclock_offset;
vcpu->last_kernel_ns = kernel_ns;   <= (1)
vcpu->last_guest_tsc = tsc_timestamp;
On Mon, Oct 04, 2010 at 03:40:52PM +0200, Andrea Arcangeli wrote:
Hi Avi,
On Mon, Oct 04, 2010 at 11:35:28AM +0200, Avi Kivity wrote:
During the last kvm forum, I described a unit test framework that can
help test the kvm APIs. Briefly, it starts a process in host userspace,
which
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
If a guest accesses swapped out memory do not swap it in from vcpu thread
context. Schedule work to do swapping and put vcpu into halted state
instead.
Interrupts will still be delivered to the guest and if interrupt will
cause reschedule guest will
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
Keep track of memslots changes by keeping generation number in memslots
structure. Provide kvm_write_guest_cached() function that skips
gfn_to_hva() translation if memslots was not changed since previous
invocation.
Signed-off-by: Gleb
* Henry Pepper henryp...@gmail.com [2010-10-04 16:15]:
Hi
I'm trying to figure out how the network bridging works, focusing on:
- How to create a network where the DomU can get a dhcp address from
the central network.
- How to set up the domain network so that the DomU eth0 and eth1 go
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
Enable async PF in a guest if async PF capability is discovered.
Signed-off-by: Gleb Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
--
All rights reversed
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
Send async page fault to a PV guest if it accesses swapped out memory.
Guest will choose another task to run upon receiving the fault.
Allow async page fault injection only when guest is in user mode since
otherwise guest may be in non-sleepable
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
If guest indicates that it can handle async pf in kernel mode too send
it, but only if interrupts are enabled.
Signed-off-by: Gleb Natapov g...@redhat.com
Acked-by: Rik van Riel r...@redhat.com
--
All rights reversed
Hey all,
The other day I upgraded the kernel on one of my KVM hosts. I went
from 2.6.34.1 to 2.6.35.7, and immediately I noticed that my Windows
XP guests were now using significantly more CPU while idle, compared to
the 2.6.34.1 kernel. All the Windows XP guests are running with
-usbdevice
On 10/01/2010 12:07 AM, Alexander Graf wrote:
On 30.09.2010, at 21:28, Scott Wood wrote:
It is not legal to call mutex_lock() with interrupts disabled.
This will assert with debug checks enabled.
If there's a real need to disable interrupts here, it could be done
after the mutex is acquired
On 04.10.2010, at 07:22, Christian Ehrhardt wrote:
On 10/01/2010 12:07 AM, Alexander Graf wrote:
On 30.09.2010, at 21:28, Scott Wood wrote:
It is not legal to call mutex_lock() with interrupts disabled.
This will assert with debug checks enabled.
If there's a real need to disable