> Since the problem you pinpointed does exist, I would suggest measuring
> the average load of the last,
> say, 10 iterations.
The "last 10 interation" does not define a fixed time. I guess it is much more
reasonable to measure the average of the last '10 seconds'.
But usually a migration only tak
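(For illustration only, not the actual patch; all names below are hypothetical.)
A minimal C sketch of averaging over a fixed time window rather than a fixed
number of iterations:

#include <stdint.h>

#define BW_SAMPLES   64
#define BW_WINDOW_NS (10ULL * 1000 * 1000 * 1000)   /* last 10 seconds */

struct bw_sample {
    uint64_t bytes;    /* bytes sent in this iteration */
    uint64_t when_ns;  /* timestamp of the iteration */
};

static struct bw_sample bw_samples[BW_SAMPLES];
static unsigned bw_head;

/* Record one iteration and return the average bytes per ns over the window. */
static double bw_avg_update(uint64_t bytes, uint64_t now_ns)
{
    uint64_t total = 0, oldest = now_ns;
    unsigned i;

    bw_samples[bw_head] = (struct bw_sample){ bytes, now_ns };
    bw_head = (bw_head + 1) % BW_SAMPLES;

    for (i = 0; i < BW_SAMPLES; i++) {
        if (bw_samples[i].when_ns == 0 ||
            now_ns - bw_samples[i].when_ns > BW_WINDOW_NS)
            continue;                    /* empty slot or too old */
        total += bw_samples[i].bytes;
        if (bw_samples[i].when_ns < oldest)
            oldest = bw_samples[i].when_ns;
    }
    return oldest == now_ns ? 0.0 : (double)total / (now_ns - oldest);
}

This way the reported bandwidth no longer depends on how long each individual
iteration happens to take.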
On 09/30/2009 01:54 AM, Anthony Liguori wrote:
Hi,
Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
I'd like to do a few things different this time around. I don't think
the -rc process went very well as I don't think we got more testing
out of it. I'd like to shorten
On 09/30/2009 03:01 AM, Zhai, Edwin wrote:
Avi,
I modified it according to your comments. The only thing I want to keep is
the module params ple_gap/window. Although they are not per-guest,
they can be used to find the right value, and to disable PLE for debugging
purposes.
Fair enough, ACK.
--
Do not m
On 09/29/2009 10:45 PM, Mark McLoughlin wrote:
On Tue, 2009-05-05 at 09:56 +0100, Mark McLoughlin wrote:
This commit:
commit 559a8f45f34cc50d1a60b4f67a06614d506b2e01
Subject: Remove stray GSO code from virtio_net (Mark McLoughlin)
Removed some GSO code from upstream qemu.git, but i
On 09/30/2009 07:09 AM, Nikola Ciprich wrote:
"The default, IDE, is highly supported by guests but may be slow, especially with
disk arrays. If your guest supports it, use the virtio interface:"
Avi,
what is the status of data integrity issues Chris Hellwig summarized some time
ago?
I don
On (Tue) Sep 29 2009 [18:54:53], Anthony Liguori wrote:
> Hi,
>
> Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
>
> I'd like to do a few things different this time around. I don't think
> the -rc process went very well as I don't think we got more testing out
> of it.
"The default, IDE, is highly supported by guests but may be slow, especially
with disk arrays. If your guest supports it, use the virtio interface:"
Avi,
what is the status of data integrity issues Chris Hellwig summarized some time
ago?
Is it safe to recommend virtio to newbies already? Shouldn'
On Tue, Sep 29, 2009 at 06:36:57PM +0200, Dietmar Maurer wrote:
> > Also, if this is really the case (buffered), then the bandwidth capping
> > part
> > of migration is also wrong.
> >
> > Have you compared the reported bandwidth to your actual bandwidth? I
> > suspect
> > the source of the proble
On Tue, Sep 29, 2009 at 06:54:53PM -0500, Anthony Liguori wrote:
> Hi,
>
> Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
>
> I'd like to do a few things different this time around. I don't think
> the -rc process went very well as I don't think we got more testing out
http://www.linux-kvm.org/page/HOWTO1 says that to build kvm I should get the
latest kvm-release.tar.gz.
http://www.linux-kvm.org/page/Downloads says "If you want to use the
latest version of KVM kernel modules and supporting userspace, you can
download the latest version from
http://sourceforge.net/pro
Dustin Kirkland wrote:
On Tue, Sep 29, 2009 at 6:54 PM, Anthony Liguori wrote:
Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
I'd like to do a few things different this time around. I don't think the
-rc process went very well as I don't think we got more testing o
Avi,
I modified it according to your comments. The only thing I want to keep is
the module params ple_gap/window. Although they are not per-guest, they
can be used to find the right value, and to disable PLE for debugging purposes.
Thanks,
Avi Kivity wrote:
On 09/28/2009 11:33 AM, Zhai, Edwin wrote:
On Tue, Sep 29, 2009 at 6:54 PM, Anthony Liguori wrote:
> Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
>
> I'd like to do a few things different this time around. I don't think the
> -rc process went very well as I don't think we got more testing out of it.
> I'd like
Hi,
Now that 0.11.0 is behind us, it's time to start thinking about 0.12.0.
I'd like to do a few things different this time around. I don't think
the -rc process went very well as I don't think we got more testing out
of it. I'd like to shorten the timeline for 0.12.0 a good bit. The
0.10
On Wed, 2009-09-30 at 01:48 +0400, Michael Tokarev wrote:
> Dustin Kirkland wrote:
> > On Sun, Sep 27, 2009 at 2:42 AM, Avi Kivity wrote:
> >> qemu-kvm-0.11.0 is now available. This release is based on the upstream
> >> qemu 0.11.0, plus kvm-specific enhancements.
> >
> > Thanks, Avi.
> >
>
The rest of the cases are already fixed in upstream qemu.
Signed-off-by: Juan Quintela
---
hw/device-assignment.c | 2 +-
qemu-kvm.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/device-assignment.c b/hw/device-assignment.c
index 46e6471..17d68be 100644
--- a/hw
Dustin Kirkland wrote:
On Sun, Sep 27, 2009 at 2:42 AM, Avi Kivity wrote:
qemu-kvm-0.11.0 is now available. This release is based on the upstream
qemu 0.11.0, plus kvm-specific enhancements.
Thanks, Avi.
We in Ubuntu have tracked each of the two previous RC's, and we will
have this GA ve
On Sun, Sep 27, 2009 at 2:42 AM, Avi Kivity wrote:
> qemu-kvm-0.11.0 is now available. This release is based on the upstream
> qemu 0.11.0, plus kvm-specific enhancements.
Thanks, Avi.
We in Ubuntu have tracked each of the two previous RC's, and we will
have this GA version in Karmic within
Both VMX and SVM require per-cpu memory allocation, which is done at module
init time, for only online cpus.
The backend was not allocating enough structures for all possible CPUs, so
new CPUs coming online could not be hardware enabled.
Signed-off-by: Zachary Amsden
---
arch/x86/kvm/svm.c |4 ++
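(Illustration of the general pattern only, not the actual svm.c/vmx.c change;
the struct and function names here are made up.) Sizing the per-cpu state for
every possible CPU, rather than only the CPUs online at module init, looks
roughly like this:

#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct hw_cpu_state {
    void *save_area;    /* placeholder for per-cpu hardware state */
};

static struct hw_cpu_state *cpu_state;

static int __init alloc_cpu_state(void)
{
    int cpu;

    /* nr_cpu_ids covers every possible CPU, including offline ones. */
    cpu_state = kcalloc(nr_cpu_ids, sizeof(*cpu_state), GFP_KERNEL);
    if (!cpu_state)
        return -ENOMEM;

    for_each_possible_cpu(cpu) {        /* not for_each_online_cpu() */
        cpu_state[cpu].save_area = kzalloc(PAGE_SIZE, GFP_KERNEL);
        if (!cpu_state[cpu].save_area)
            return -ENOMEM;             /* error unwinding omitted */
    }
    return 0;
}

With that, a CPU that comes online later already has its structure allocated
and can be hardware-enabled from the hotplug callback.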
Signed-off-by: Zachary Amsden
---
arch/x86/kvm/svm.c |5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9a4daca..d1036ce 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -330,13 +330,14 @@ static int svm_hardware_e
They are globals, not clearly protected by any ordering or locking, and
vulnerable to various startup races.
Instead, for variable TSC machines, register the cpufreq notifier and get
the TSC frequency directly from the cpufreq machinery. Not only is it
always right, it is also perfectly accurate,
Signed-off-by: Zachary Amsden
---
arch/x86/kvm/x86.c | 23 +++
1 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fedac9d..15d2ace 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3116,9 +3116,22 @@ stati
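(Sketch of the cpufreq-notifier mechanism only, not the actual x86.c patch;
the real change differs in detail.) Getting the frequency from cpufreq rather
than from ad-hoc globals looks roughly like this:

#include <linux/cpufreq.h>
#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, cpu_tsc_khz);

static int tsc_freq_notify(struct notifier_block *nb,
                           unsigned long event, void *data)
{
    struct cpufreq_freqs *freq = data;

    /* Record the new frequency for the CPU that just changed. */
    if (event == CPUFREQ_POSTCHANGE)
        per_cpu(cpu_tsc_khz, freq->cpu) = freq->new;
    return NOTIFY_OK;
}

static struct notifier_block tsc_freq_nb = {
    .notifier_call = tsc_freq_notify,
};

static int __init tsc_freq_init(void)
{
    /* Have cpufreq call us on every frequency transition. */
    return cpufreq_register_notifier(&tsc_freq_nb,
                                     CPUFREQ_TRANSITION_NOTIFIER);
}

The per-cpu value is then updated by the cpufreq core itself, so it tracks
frequency changes as they happen.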
Matthew Tippett wrote:
Okay, bringing the leaves of the discussions onto this thread.
As per
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=1&single=1
"The host OS (as well as the guest OS when testing under KVM) was
running an Ubuntu 9.10 daily snapshot with the Linu
On Tue, 2009-09-29 at 21:45 +0100, Mark McLoughlin wrote:
> On Tue, 2009-05-05 at 09:56 +0100, Mark McLoughlin wrote:
> > This commit:
> >
> >commit 559a8f45f34cc50d1a60b4f67a06614d506b2e01
> >Subject: Remove stray GSO code from virtio_net (Mark McLoughlin)
> >
> > Removed some GSO code f
On Tue, 2009-05-05 at 09:56 +0100, Mark McLoughlin wrote:
> This commit:
>
>commit 559a8f45f34cc50d1a60b4f67a06614d506b2e01
>Subject: Remove stray GSO code from virtio_net (Mark McLoughlin)
>
> Removed some GSO code from upstream qemu.git, but it needs to
> be re-instated in qemu-kvm.git.
On Tue, Sep 29, 2009 at 2:32 PM, Matthew Tippett wrote:
> I would prefer, rather than railing against Phoronix or the results as
> presented, to ask questions to seek further information about what was tested
> rather than writing off all of it as completely invalid.
Matthew-
If you could please prov
Currently read_nonblocking() is called repeatedly until a match is found.
This is fine as long as internal_timeout, the timeout parameter passed to
read_nonblocking(), is greater than zero. If it equals zero the loop will keep
the CPU busy and stress the host.
To avoid this, use select() to wait u
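(C-level illustration of the same idea; the actual change is in the Python
kvm-autotest code, and the helper name below is made up.) Instead of polling
in a tight loop, select() sleeps until data arrives or the timeout expires:

#include <sys/select.h>
#include <sys/time.h>

/* Wait up to timeout_ms for fd to become readable.
 * Returns 1 if readable, 0 on timeout, -1 on error. */
static int wait_readable(int fd, long timeout_ms)
{
    fd_set rfds;
    struct timeval tv = {
        .tv_sec  = timeout_ms / 1000,
        .tv_usec = (timeout_ms % 1000) * 1000,
    };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}

The read itself is only attempted when select() reports the descriptor as
readable, so a zero internal_timeout no longer turns into a busy loop.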
Add a few comments, group similar lines and replace some blocks with one-liners.
Note: the weird form
timeout_multiplier = float(params.get("timeout_multiplier") or 1)
allows the user to fall back to the default value by setting timeout_multiplier
to "".
The more common form
timeout_multiplier = f
This parameter multiplies the timeout values of all the barriers in a step file
test.
It is useful for slower hosts, for hosts under load (e.g. when executing
multiple tests in parallel), and for testing QEMU without KVM. In any of these
cases, the
multiplier should be greater than 1 in order to give the tes
The timeout of qemu-img commands is currently 30 seconds.
This may not suffice under heavy load (e.g. when multiple tests run in parallel
and use the same physical disk).
Signed-off-by: Michael Goldish
---
client/tests/kvm/kvm_vm.py |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
di
get_command_output() is safer because it waits for the prompt to return.
sendline() returns immediately, and the output generated in response to the
command can appear later and interfere with the test.
For example, the prompt can appear while the Autotest wrapper waits for an
Autotest test to comp
In get_command_status_output() and is_responsive() use read_nonblocking(0) to
read the unread output before sending input (e.g. a command).
The timeout is currently 0.1 because theoretically it should help if the guest
still produces output when the function is called, but in practice there's no
gu
Okay, bringing the leaves of the discussions onto this thread.
As per
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=1&single=1
"The host OS (as well as the guest OS when testing under KVM) was
running an Ubuntu 9.10 daily snapshot with the Linux 2.6.31 (final) kernel"
Matthew Tippett wrote:
I have created a launchpad bug against qemu-kvm in Ubuntu.
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/437473
Just reiterating, my concern isn't so much performance as the integrity
of stock KVM configurations with server or other workloads that expect
sync f
Matthew Tippett wrote:
First up, Phoronix hasn't tuned anything. It's observing the state delivered
by an OS vendor. I started with what I believe to be the starting
point - KVM.
So the position of KVM now is that it is either QEMU's
configuration or Ubuntu's configuration. No further guidance or
Avi Kivity wrote:
On 09/24/2009 10:49 PM, Matthew Tippett wrote:
The test itself is a simple usage of SQLite. It is stock KVM as
available in 2.6.31 on Ubuntu Karmic. So it would be the environment,
not the test.
So assuming that KVM upstream works as expected that would leave
either 2.6.31 h
Matthew Tippett wrote:
Hi,
I would like to call attention to the SQLite performance under KVM in
the current Ubuntu Alpha.
http://www.phoronix.com/scan.php?page=article&item=linux_2631_kvm&num=3
SQLite's benchmark as part of the Phoronix Test Suite is typically IO
limited and is affected by
Avi Kivity wrote:
but a newbie is born every minute.
Reporting in!
As a newbie in KVM, I really appreciate those efforts.
Promise to help on docs when I realize that I know what I am doing. :)
Thank you.
--
Leandro Quibem Magnabosco.
On Tue, Sep 29, 2009 at 08:56:24AM +0300, Priit Laes wrote:
> Hi!
>
> I've been getting some locking-related warnings when I fire up a KVM
> machine. This has been popping up since a 2.6.30 kernel:
>
> [snip]
> [119206.978058] BUG: MAX_LOCK_DEPTH too low!
> [119206.978062] turning off the locking
On 09/27/2009 04:53 PM, Joerg Roedel wrote:
Depends. If it's a global yield(), yes. If it's a local yield() that
doesn't rebalance the runqueues we might be left with the spinning task
re-running.
Only one runnable task on each cpu is unlikely in a situation of high
vcpu overcommit
> Also, if this is really the case (buffered), then the bandwidth capping
> part
> of migration is also wrong.
>
> Have you compared the reported bandwidth to your actual bandwidth? I
> suspect
> the source of the problem can be that we're currently ignoring the time
> we take
> to transfer the st
On Tue, Sep 29, 2009 at 10:39:57AM -0500, Anthony Liguori wrote:
> Dietmar Maurer wrote:
>> this patch solves the problem by calculating an average bandwidth.
>>
>
> Can you take a look Glauber?
>
> Regards,
>
> Anthony Liguori
>
>> - Dietmar
>>
>>
>>> -Original Message-
>>> From: kvm
On 09/28/2009 10:30 PM, Avi Kivity wrote:
On 09/29/2009 06:04 AM, Zachary Amsden wrote:
Both VMX and SVM require per-cpu memory allocation, which is done at
module
init time, for only online cpus.
The backend was not allocating enough structures for all possible CPUs, so
new CPUs coming online coul
On Fri, 2009-09-25 at 05:22 -0400, Jiri Zupka wrote:
> - "Dor Laor" wrote:
>
> > On 09/16/2009 04:09 PM, Jiri Zupka wrote:
> > >
> > > - "Dor Laor" wrote:
> > >
> > >> On 09/15/2009 09:58 PM, Jiri Zupka wrote:
> > After a quick review I have the following questions:
> > 1. Why
Dietmar Maurer wrote:
this patch solves the problem by calculating an average bandwidth.
Can you take a look Glauber?
Regards,
Anthony Liguori
- Dietmar
-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
Behalf Of Dietmar Maurer
Sent: Di
this patch solves the problem by calculating an average bandwidth.
- Dietmar
> -Original Message-
> From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
> Behalf Of Dietmar Maurer
> Sent: Tuesday, 29 September 2009 16:37
> To: kvm
> Subject: RE: migrate_set_downtime bug
It seems the bwidth calculation is the problem. The code simply does:
bwidth = (bytes_transferred - bytes_transferred_last) / timediff
but I assume network traffic is buffered, so the calculated bwidth is sometimes
much too high.
- Dietmar
> -Original Message-
> From: kvm-ow...@vger.kernel.o
Instead of using the unmaintained original CTCS suite,
use the CTCS2 project. Since it's not necessary to keep
the old version around, just bump the source version and
add patches to fix the build under 64-bit architectures.
New features of the test:
* Using a newer version compared to the existe
Bugs item #2868883, was opened at 2009-09-28 15:27
Message generated for change (Comment added) made by mdw21
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2868883&group_id=180599
Please note that this message will contain a full copy of the comment thr
Avi Kivity wrote:
On 09/29/2009 03:12 PM, Michael Tokarev wrote:
[]
The thing is that after some uptime, kvm
guest prints something like this:
hrtimer: interrupt too slow, forcing clock min delta to 461487495 ns
[]
What happens if you use hpet or pmtimer as guest clocksource?
For all the g
On 09/29/2009 03:12 PM, Michael Tokarev wrote:
Hello.
I'm having quite an... unusable system here.
It's not really a regression with 0.11.0,
it was something similar before, but with
0.11.0 and/or 2.6.31 it has become much worse.
The thing is that after some uptime, kvm
guest prints something like
On 09/28/2009 11:33 AM, Zhai, Edwin wrote:
Avi Kivity wrote:
+#define KVM_VMX_DEFAULT_PLE_GAP 41
+#define KVM_VMX_DEFAULT_PLE_WINDOW 4096
+static int __read_mostly ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
+module_param(ple_gap, int, S_IRUGO);
+
+static int __read_mostly ple_window = KVM_VMX_DEFAUL
Hello.
I'm having quite an... unusable system here.
It's not really a regression with 0.11.0,
it was something similar before, but with
0.11.0 and/or 2.6.31 it has become much worse.
The thing is that after some uptime, kvm
guest prints something like this:
hrtimer: interrupt too slow, forcing cloc
Using 0.11.0, live migration works as expected, but max downtime does not seem
to work. For example:
# migrate_set_downtime 1
After that, TCP migration has much longer downtimes (up to 20 seconds).
Also, it seems that the 'monitor' is locked (it takes up to 10 seconds until I
get a monitor prompt).
Bugs item #2868883, was opened at 2009-09-28 16:27
Message generated for change (Comment added) made by yanv
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2868883&group_id=180599
Please note that this message will contain a full copy of the comment thre
Bugs item #2869748, was opened at 2009-09-29 14:47
Message generated for change (Tracker Item Submitted) made by dietmarmaurer
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2869748&group_id=180599
Please note that this message will contain a full copy o
Avi,
Any comments on this new patch?
Thanks,
Zhai, Edwin wrote:
Avi Kivity wrote:
+#define KVM_VMX_DEFAULT_PLE_GAP 41
+#define KVM_VMX_DEFAULT_PLE_WINDOW 4096
+static int __read_mostly ple_gap = KVM_VMX_DEFAULT_PLE_GAP;
+module_param(ple_gap, int, S_IRUGO);
+
+static int __read_mostly p
Resending with a couple of extra CCs.
The only extra details, I guess, are Debian stable with kvm, which reports itself as:
QEMU PC emulator version 0.9.1 (kvm-72), Copyright (c) 2003-2008 Fabrice
Bellard
Note that I get this stack corruption *every time*, and it must be with
the -no-kvm option.
Con
Hi,
Actually I am using the 2.6.21 kernel and inserting the kvm-76 module.
It is fine when I am using CPU affinity (like "taskset 1
./qemu-system-x86-64 -hda image name").
I have a quad-core Intel Xeon processor.
But without CPU affinity it hangs randomly. (./qemu-system-x86-64
-
Bugs item #2868883, was opened at 2009-09-28 16:27
Message generated for change (Comment added) made by yanv
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2868883&group_id=180599
Please note that this message will contain a full copy of the comment thre
On 09/29/2009 12:18 PM, Con Kolivas wrote:
Resending with a couple of extra CCs.
The only extra details, I guess, are Debian stable with kvm, which reports itself as:
QEMU PC emulator version 0.9.1 (kvm-72), Copyright (c) 2003-2008 Fabrice
Bellard
Note that I get this stack corruption *every time*, an
On 09/28/2009 01:22 PM, Marcelo Tosatti wrote:
So one can measure SMP overhead.
+
+ for (n = cpu_count(); n > 0; n--)
+ on_cpu(n-1, do_tests, 0, 0);
This should be done inside do_test(), so we can start the measurement on all
cpus at the same time (right now, if some cpus c
On 09/28/2009 01:22 PM, Marcelo Tosatti wrote:
To determine whether to wait for IPI to finish on remote cpu.
Signed-off-by: Marcelo Tosatti
Index: qemu-kvm/kvm/user/test/lib/x86/smp.c
===
--- qemu-kvm.orig/kvm/user/test/lib/x86/smp
On 09/29/2009 06:04 AM, Zachary Amsden wrote:
Both VMX and SVM require per-cpu memory allocation, which is done at module
init time, for only online cpus.
The backend was not allocating enough structures for all possible CPUs, so
new CPUs coming online could not be hardware enabled.
diff --git a/vir
On 09/29/2009 06:04 AM, Zachary Amsden wrote:
They are globals, not clearly protected by any ordering or locking, and
vulnerable to various startup races.
Instead, for variable TSC machines, register the cpufreq notifier and get
the TSC frequency directly from the cpufreq machinery. Not only is