On Wed, Apr 21, 2010 at 05:14:04PM +0200, Jan Kiszka wrote:
No you don't. I was told that software should be prepared to handle an NMI
after MOV SS. What part of the SDM does this contradict? I found nothing in
the latest SDM.
[ updated to March 2010 version ]
To sum up the scenario again, I
On Sun, May 02, 2010 at 08:09:56AM -0700, K D wrote:
After I added code to raise ulimits for qemu, I don't see any memory-related
issue. I'm trying to spawn a VM on an embedded Linux with no window manager
etc. With '-curses' it goes into 'VGA Blank Mode' and it stops there. Not
sure what it
Michael Tokarev wrote:
02.05.2010 14:04, Avi Kivity wrote:
On 05/01/2010 12:40 AM, Michael Tokarev wrote:
01.05.2010 00:59, Michael Tokarev wrote:
Apparently with current kvm stable (0.12.3)
Windows NT 4.0 does not install anymore.
With default -cpu, it boots, displays the
Inspecting your
On 05/03/2010 11:24 AM, Andre Przywara wrote:
can you try -cpu kvm64? This should be somewhat in between -cpu host
and -cpu qemu64.
Also look in dmesg for uncaught rd/wrmsrs. In case you find something
there, please try:
# modprobe kvm ignore_msrs=1
(You have to unload the modules first)
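Assuming Intel hardware (substitute kvm_amd on AMD), the full sequence suggested above would look like the following; this is an illustrative root-shell fragment, not meant to be run verbatim.

```shell
dmesg | grep -i "unhandled.*msr"   # look for rd/wrmsr complaints first
modprobe -r kvm_intel              # unload the modules first, as noted
modprobe -r kvm
modprobe kvm ignore_msrs=1
modprobe kvm_intel
```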
Avi Kivity wrote:
On 05/03/2010 11:24 AM, Andre Przywara wrote:
can you try -cpu kvm64? This should be somewhat in between -cpu host
and -cpu qemu64.
Also look in dmesg for uncaught rd/wrmsrs. In case you find something
there, please try:
# modprobe kvm ignore_msrs=1
(You have to unload
2010/4/23 Avi Kivity a...@redhat.com:
On 04/23/2010 04:22 PM, Anthony Liguori wrote:
I currently don't have data, but I'll prepare it.
There were two things I wanted to avoid.
1. Pages to be copied to QEMUFile buf through qemu_put_buffer.
2. Calling write() every time even when we want to
On Wed, Apr 28, 2010 at 01:57:12PM -0700, David L Stevens wrote:
This patch adds mergeable receive buffer support to vhost_net.
Signed-off-by: David L Stevens dlstev...@us.ibm.com
I've been doing some more testing before sending out a pull
request, and I see a drastic performance degradation
This is a fix to a previous patch by me.
It's on 'next' branch, as of now.
commit 848bd0c89c83814023cf51c72effdbc7de0d18b7 causes the linker script
itself (flat.lds) to become part of the linked objects, which messed up
the output file; one such problem is that the symbol edata is not the last symbol
On 05/03/2010 04:32 AM, Yoshiaki Tamura wrote:
2010/4/23 Avi Kivitya...@redhat.com:
On 04/23/2010 04:22 PM, Anthony Liguori wrote:
I currently don't have data, but I'll prepare it.
There were two things I wanted to avoid.
1. Pages to be copied to QEMUFile buf through
Marcelo Tosatti wrote:
On Fri, Apr 23, 2010 at 01:58:22PM +0800, Gui Jianfeng wrote:
Currently, in kvm_mmu_change_mmu_pages(kvm, page), used_pages-- is
performed after calling
kvm_mmu_zap_page(), regardless of whether the page is actually reclaimed.
Because a root sp won't
be reclaimed by
KVM_REQ_KICK poisons vcpu->requests by having a bit set during normal
operation. This causes the fast path check for a clear vcpu->requests
to fail all the time, triggering tons of atomic operations.
Fix by replacing KVM_REQ_KICK with a vcpu->guest_mode atomic.
Signed-off-by: Avi Kivity
cr0.ts may change between entries, so we copy cr0 to HOST_CR0 before each
entry. That is slow, so instead, set HOST_CR0 to have TS set unconditionally
(which is a safe value), and issue a clts() just before exiting vcpu context
if the task indeed owns the fpu.
Saves ~50 cycles/exit.
2010/5/3 Anthony Liguori aligu...@linux.vnet.ibm.com:
On 05/03/2010 04:32 AM, Yoshiaki Tamura wrote:
2010/4/23 Avi Kivitya...@redhat.com:
On 04/23/2010 04:22 PM, Anthony Liguori wrote:
I currently don't have data, but I'll prepare it.
There were two things I wanted to avoid.
1. Pages
Michael S. Tsirkin m...@redhat.com wrote on 05/03/2010 03:34:11 AM:
On Wed, Apr 28, 2010 at 01:57:12PM -0700, David L Stevens wrote:
This patch adds mergeable receive buffer support to vhost_net.
Signed-off-by: David L Stevens dlstev...@us.ibm.com
I've been doing some more testing
based on 'next' branch.
Changed test-case stringio into C code and merged into emulator test-case.
Removed traces of stringio test-case.
Signed-off-by: Naphtali Sprei nsp...@redhat.com
---
kvm/user/config-x86-common.mak |2 --
kvm/user/config-x86_64.mak |4 ++--
Here is a new version of this series.
I am definitely leaving any warp calculations out, as Jeremy wisely
points out that Chuck Norris should perish before we warp.
Also, in this series, I am using KVM_GET_SUPPORTED_CPUID to export
our features to userspace, as Avi suggests (patch 4/7), and
Right now, we are using individual KVM_CAP entities to communicate to
userspace which cpuids we support. This is suboptimal, since it
generates a delay between the feature arriving in the host, and
being available at the guest.
A much better mechanism is to list para features in
This patch sets the flag that tells the guest that we'll inform it
about the tsc being trustworthy or not. For now, we also say
it is not.
---
arch/x86/kvm/x86.c |5 -
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index
If the HV told us we can fully trust the TSC, skip any
correction
Signed-off-by: Glauber Costa glom...@redhat.com
---
arch/x86/include/asm/kvm_para.h|5 +
arch/x86/include/asm/pvclock-abi.h |1 +
arch/x86/kernel/kvmclock.c |3 +++
arch/x86/kernel/pvclock.c |
We have now added a new set of clock-related MSRs to replace the old
ones. In theory, we could just try to use them and get a return value
indicating they do not exist, due to our use of kvm_write_msr_save.
However, kvm clock registration happens very early, and if we ever
try to write to a
Avi pointed out a while ago that those MSRs fall into the Pentium
PMU range. So the idea here is to add new ones, and after a while,
deprecate the old ones.
Signed-off-by: Glauber Costa glom...@redhat.com
---
arch/x86/include/asm/kvm_para.h |4
arch/x86/kvm/x86.c |7
In recent stress tests, it was found that pvclock-based systems
could seriously warp in SMP systems. Using Ingo's time-warp-test.c,
I could trigger a scenario as bad as 1.5 million warps a minute on some systems.
(to be fair, it wasn't that bad in most of them). Investigating further, I
found out that
This patch removes one padding byte and transforms it into a flags
field. New versions of guests using pvclock will query these flags
upon each read.
Flags, however, will only be interpreted when the guest decides to.
It uses the pvclock_valid_flags function to signal that a specific
set of flags
On Mon, May 03, 2010 at 08:39:08AM -0700, David Stevens wrote:
Michael S. Tsirkin m...@redhat.com wrote on 05/03/2010 03:34:11 AM:
On Wed, Apr 28, 2010 at 01:57:12PM -0700, David L Stevens wrote:
This patch adds mergeable receive buffer support to vhost_net.
Signed-off-by: David L
On 05/03/2010 10:36 AM, Yoshiaki Tamura wrote:
Great!
I also wanted to test with 10GE but I'm physically away from my office
now, and can't set up the test environment. I'll measure the numbers
w/ 10GE next week.
BTW, I was thinking of writing a patch to separate threads for both
sender and
Michael S. Tsirkin m...@redhat.com wrote on 05/03/2010 08:56:14 AM:
On Mon, May 03, 2010 at 08:39:08AM -0700, David Stevens wrote:
Michael S. Tsirkin m...@redhat.com wrote on 05/03/2010 03:34:11 AM:
On Wed, Apr 28, 2010 at 01:57:12PM -0700, David L Stevens wrote:
This patch adds
On Tue, Apr 27, 2010 at 03:58:36PM +0300, Avi Kivity wrote:
So we probably need to upgrade gva_t to a u64. Please send this as
a separate patch, and test on i386 hosts.
Are there _any_ regular tests of KVM on i386 hosts? For me this is
terribly broken (also after I fixed the issue which gave
On Sun, May 2, 2010 at 1:21 AM, Yuhong Bao yuhongbao_...@hotmail.com wrote:
What about details?
This old thread seems to have more details.
http://www.mail-archive.com/kvm@vger.kernel.org/msg11704.html
Hopefully they are of more use to you than me ;-)
Thanks,
Rusty
--
To unsubscribe from this
03.05.2010 12:24, Andre Przywara wrote:
Michael Tokarev wrote:
02.05.2010 14:04, Avi Kivity wrote:
On 05/01/2010 12:40 AM, Michael Tokarev wrote:
01.05.2010 00:59, Michael Tokarev wrote:
Apparently with current kvm stable (0.12.3)
Windows NT 4.0 does not install anymore.
With default -cpu,
03.05.2010 13:28, Andre Przywara wrote:
Avi Kivity wrote:
On 05/03/2010 11:24 AM, Andre Przywara wrote:
can you try -cpu kvm64? This should be somewhat in between -cpu host
and -cpu qemu64.
Also look in dmesg for uncaught rd/wrmsrs. In case you find
something there, please try:
# modprobe
On Thu, Apr 29, 2010 at 12:29:40PM -0700, Tom Lyon wrote:
Michael, et al - sorry for the delay, but I've been digesting the comments
and researching new approaches.
I think the plan for V4 will be to take things entirely out of the UIO
framework, and instead have a driver which supports
Avi, please apply to both master and uq/master.
---
Fallback to qemu_vmalloc in case file_ram_alloc fails.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff --git a/exec.c b/exec.c
index 467a0e7..7125a93 100644
--- a/exec.c
+++ b/exec.c
@@ -2821,8 +2821,12 @@ ram_addr_t
David,
The following tree includes a couple of enhancements that help vhost-net.
Please pull them for net-next. Another set of patches is under
debugging/testing and I hope to get them ready in time for 2.6.35,
so there may be another pull request later.
Thanks!
The following changes since
On 05/02/2010 10:44 AM, Avi Kivity wrote:
On 05/02/2010 08:38 PM, Brian Gerst wrote:
On Sun, May 2, 2010 at 10:53 AM, Avi Kivitya...@redhat.com wrote:
The fpu code currently uses current->thread_info->status & TS_XSAVE as
a way to distinguish between XSAVE capable processors and older
This module contains code to postprocess IOzone data
in a convenient way so we can generate performance graphs
and condensed data. The graph generation part depends
on gnuplot, but if the utility is not present,
functionality will gracefully degrade.
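A hypothetical helper illustrating the graceful-degradation idea (not the module's actual code): condensed data is always produced, and graph generation is skipped when the gnuplot binary is absent from PATH.

```python
import shutil


def gnuplot_available() -> bool:
    """True if the gnuplot binary can be found; callers skip graphs if not."""
    return shutil.which("gnuplot") is not None


def postprocess(results, plot=True):
    """Condense IOzone-style results; plot only when gnuplot is present."""
    condensed = sorted(results)           # always produce condensed data
    if plot and gnuplot_available():
        pass                              # graph generation would go here
    return condensed
```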
Use the postprocessing module introduced on
Hi Qemu/KVM Devel Team,
I'm using qemu-kvm 0.12.3 with the latest kernel, 2.6.33.3.
As backend we use open-iSCSI with dm-multipath.
Multipath is configured to queue i/o if no path is available.
If we create a failure on all paths, qemu starts to consume 100%
CPU due to i/o waits which is ok so far.
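The behaviour described (qemu blocking in iowait rather than seeing I/O errors) is what dm-multipath's queueing mode produces. A minimal sketch of the relevant multipath.conf fragment (section placement is an assumption):

```
defaults {
    # queue I/O indefinitely when no path is available; use
    # "no_path_retry fail" instead to surface errors to qemu
    features "1 queue_if_no_path"
}
```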
Following the new IOzone postprocessing changes, add a new
KVM subtest iozone_windows, which takes advantage of the
fact that there's a windows build for the test, so we can
ship it on winutils.iso and run it, providing this way
the ability to track IO performance for windows guests also.
The new
From: Michael S. Tsirkin m...@redhat.com
Date: Tue, 4 May 2010 00:32:45 +0300
The following tree includes a couple of enhancements that help vhost-net.
Please pull them for net-next. Another set of patches is under
debugging/testing and I hope to get them ready in time for 2.6.35,
so there
From: David Miller da...@davemloft.net
Date: Mon, 03 May 2010 15:07:29 -0700 (PDT)
From: Michael S. Tsirkin m...@redhat.com
Date: Tue, 4 May 2010 00:32:45 +0300
The following tree includes a couple of enhancements that help vhost-net.
Please pull them for net-next. Another set of patches is
On Mon, May 03, 2010 at 03:08:29PM -0700, David Miller wrote:
From: David Miller da...@davemloft.net
Date: Mon, 03 May 2010 15:07:29 -0700 (PDT)
From: Michael S. Tsirkin m...@redhat.com
Date: Tue, 4 May 2010 00:32:45 +0300
The following tree includes a couple of enhancements that help
Kernels 2.6.24 hardcoded slot 0 for the location of VMX rmode TSS
pages. Remove support for it.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm-memslot/qemu-kvm.c
===
--- qemu-kvm-memslot.orig/qemu-kvm.c
+++
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm-memslot/qemu-kvm.c
===
--- qemu-kvm-memslot.orig/qemu-kvm.c
+++ qemu-kvm-memslot/qemu-kvm.c
@@ -2154,7 +2154,6 @@ void kvm_set_phys_mem(target_phys_addr_t
* dirty
Which allows drivers to register an mmap region into ram block mappings.
To be used by device assignment driver.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm/cpu-common.h
===
--- qemu-kvm.orig/cpu-common.h
+++
Aliases were added to workaround kvm's inability to destroy
memory regions. This was fixed in 2.6.29, and advertised via
KVM_CAP_DESTROY_MEMORY_REGION_WORKS.
Also, alias support will be removed from the kernel in July 2010.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index:
And make kvm_unregister_memory_area static.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm-memslot/qemu-kvm.c
===
--- qemu-kvm-memslot.orig/qemu-kvm.c
+++ qemu-kvm-memslot/qemu-kvm.c
@@ -639,8 +639,8 @@ void
Drop qemu-kvm's implementation in favour of qemu's, they are
functionally equivalent.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
Index: qemu-kvm/qemu-kvm.c
===
--- qemu-kvm.orig/qemu-kvm.c
+++ qemu-kvm/qemu-kvm.c
@@ -76,21
See individual patches for details.
The only thing that strikes me is whether the gnuplot support
should be abstracted out a bit. See tko/plotgraph.py?
On Mon, May 3, 2010 at 2:52 PM, Lucas Meneghel Rodrigues l...@redhat.com
wrote:
This module contains code to postprocess IOzone data
in a convenient way so we can generate
On Mon, 2010-05-03 at 16:52 -0700, Martin Bligh wrote:
The only thing that strikes me is whether the gnuplot support
should be abstracted out a bit. See tko/plotgraph.py?
I thought about it. Ideally, we would do all the plotting using a python
library, such as matplotlib, which has a decent API.
The recent changes to emulate string instructions without entering guest
mode exposed a bug where pending interrupts are not properly reflected
in ready_for_interrupt_injection.
The result is that userspace overwrites a previously queued interrupt,
when irqchips are emulated in qemu.
Fix by
On Fri, 19 Feb 2010 08:52:20 am Michael S. Tsirkin wrote:
I took a stab at documenting CMD and FLUSH request types in virtio
block. Christoph, could you look over this please?
I note that the interface seems full of warts to me,
this might be a first step to cleaning them.
ISTR Christoph
yup, fair enough. Go ahead and check it in. If we end up doing
this in another test, we should make an abstraction
On Mon, May 3, 2010 at 5:39 PM, Lucas Meneghel Rodrigues l...@redhat.com
wrote:
On Mon, 2010-05-03 at 16:52 -0700, Martin Bligh wrote:
only thing that strikes me is whether the
Please send in any agenda items you are interested in covering.
If we have a lack of agenda items I'll cancel the week's call.
thanks,
-chris
Hi Peter,
On 03.05.2010 23:26, Peter Lieven wrote:
Hi Qemu/KVM Devel Team,
I'm using qemu-kvm 0.12.3 with the latest kernel, 2.6.33.3.
As backend we use open-iSCSI with dm-multipath.
Multipath is configured to queue i/o if no path is available.
If we create a failure on all paths, qemu starts to