I used lmbench, and even a simple test shows the difference here. Simple
reads/writes (but not open/close?) seem to take much longer. Perhaps this
oversimplifies things, but it's repeatable. Any thoughts? Other tests that might
be interesting?
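For anyone who wants to reproduce the shape of this measurement without lmbench installed, here is a minimal Python sketch in the spirit of `lat_syscall read` and `lat_syscall open`: it times 1-byte reads from /dev/zero against open+close pairs. The iteration count and the use of /dev/zero are my choices, not from the thread; absolute numbers will not match lmbench, but the read-vs-open/close comparison before and after migration is the point.

```python
import os
import time

def time_op(op, iters=50_000):
    """Return the mean latency of op() in microseconds."""
    start = time.perf_counter()
    for _ in range(iters):
        op()
    return (time.perf_counter() - start) / iters * 1e6

# read latency: 1-byte reads from /dev/zero (rough analogue of `lat_syscall read`)
fd = os.open("/dev/zero", os.O_RDONLY)
read_us = time_op(lambda: os.read(fd, 1))
os.close(fd)

# open+close latency (rough analogue of `lat_syscall open`)
open_close_us = time_op(lambda: os.close(os.open("/dev/zero", os.O_RDONLY)))

print(f"read: {read_us:.2f} us/op, open+close: {open_close_us:.2f} us/op")
```

Running this inside the guest before and after a live migration should show whether plain read() latency regresses while open/close stays flat, as described above.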
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org; shouta.ueh...@jp.yokogawa.com
Subject: RE: Guest performance is reduced after live migration
I believe I disabled huge pages on the guest and host previously, but I'll test
a few scenarios and look at transparent hugepage usage specifically.
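For checking those scenarios: the active THP mode is the bracketed token in /sys/kernel/mm/transparent_hugepage/enabled (that sysfs path is standard; the parser below is my own sketch, run here against a sample string rather than the live file):

```python
def thp_mode(enabled_text: str) -> str:
    """Return the active THP mode, i.e. the bracketed token from
    /sys/kernel/mm/transparent_hugepage/enabled."""
    for token in enabled_text.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active mode found")

# Typical file contents; on a live host you would read the sysfs file itself,
# e.g.: open("/sys/kernel/mm/transparent_hugepage/enabled").read()
sample = "always madvise [never]"
print(thp_mode(sample))  # -> never
```

Checking this on both source and destination hosts (and in the guest) before each run makes it unambiguous which THP configuration a given benchmark result belongs to.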
Re: Guest performance is reduced after live migration
On Wed, Jan 02, 2013 at 11:56:11PM +, Mark Petersen wrote:
> I don't think it's related to huge pages...
>
> I was using phoronix-test-suite to run benchmarks. The 'batch/compilation'
> group shows the slowdown for all tests, the 'batch/computation' group shows
> some performance degradation, but not nearly as significant.
I don't think it's related to huge pages...
I was using phoronix-test-suite to run benchmarks. The 'batch/compilation'
group shows the slowdown for all tests, the 'batch/computation' group shows
some performance degradation, but not nearly as significant.
You could probably easily test this way without …
Can you describe the test you are performing in more detail?
If transparent hugepages are being used, then there is the possibility
that khugepaged has not yet had time to back guest memory with huge
pages on the destination (I don't recall the interface for retrieving
the number of hugepages …
Hello KVM,
I'm seeing something similar to this
(http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well when
doing live migrations on Ubuntu 12.04 (host and guest) with a backported
libvirt 1.0 and qemu-kvm 1.2 (improved performance for live migrations on
guests with large memory) …
On 11/21/2012 09:25 AM, shouta.ueh...@jp.yokogawa.com wrote:
> Dear all,
>
> I am continuing to watch the mailing list for reports of a similar problem,
> since the problem does not seem to be happening to others.
> Any information, however small, would be appreciated.
>
I am digging into it, but did not …
> Sent: Friday, November 09, 2012 6:52 PM
> To: 'kvm@vger.kernel.org'; 'Xiao Guangrong (xiaoguangr...@linux.vnet.ibm.com)'
> Subject: RE: Guest performance is reduced after live migration
>
> I've analysed the problem with migration using perf-events, and …
Sent: Thursday, November 01, 2012 1:45 PM
> To: Uehara, Shouta (shouta.ueh...@jp.yokogawa.com)
> Cc: kvm@vger.kernel.org
> Subject: Re: Guest performance is reduced after live migration
>
> Shouta,
>
> Can it be reproduced if thp/hugetlbfs is disabled on both source and
> destination?
Shouta,
Can it be reproduced if thp/hugetlbfs is disabled on both source and
destination?
On 11/01/2012 08:12 AM, shouta.ueh...@jp.yokogawa.com wrote:
> Hello.
>
> I have a problem with the performance of the guest Linux after live migration.
> When I analyze the file I/O latency of the guest using LMbench3, the latency …
Hello.
I have a problem with the performance of the guest Linux after live migration.
When I analyze the file I/O latency of the guest using LMbench3, the latency of
the guest on the destination host is about 2 times bigger than the guest on the
source host. From what I have investigated, this …