Re: [Qemu-devel] [PATCH 1/2] target-i386: kvm: -cpu host: use GET_SUPPORTED_CPUID for SVM features

2013-01-02 Thread Igor Mammedov
On Fri, 28 Dec 2012 16:37:33 -0200
Eduardo Habkost ehabk...@redhat.com wrote:

 The existing -cpu host code simply set every bit inside svm_features
 (initializing it to -1), and that makes it impossible to make the
 enforce/check options work properly when the user asks for SVM features
 explicitly in the command-line.
 
 So, instead of initializing svm_features to -1, use GET_SUPPORTED_CPUID
 to fill only the bits that are supported by the host (just like we do
 for all other CPUID feature words inside kvm_cpu_fill_host()).
 
 This will keep the existing behavior (as filter_features_for_kvm()
 already uses GET_SUPPORTED_CPUID to filter svm_features), but will allow
 us to properly check for KVM features inside
 kvm_check_features_against_host() later.
 
 For example, we will be able to make this:
 
   $ qemu-system-x86_64 -cpu ...,+pfthreshold,enforce
 
 refuse to start if the SVM pfthreshold feature is not supported by the
 host (after we fix kvm_check_features_against_host() to check SVM flags
 as well).
 
 Signed-off-by: Eduardo Habkost ehabk...@redhat.com
Reviewed-By: Igor Mammedov imamm...@redhat.com

 ---
  target-i386/cpu.c | 11 ++++-------
  1 file changed, 4 insertions(+), 7 deletions(-)
 
 diff --git a/target-i386/cpu.c b/target-i386/cpu.c
 index 3cd1cee..6e2d32d 100644
 --- a/target-i386/cpu.c
 +++ b/target-i386/cpu.c
 @@ -897,13 +897,10 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
  }
  }
  
 -/*
 - * Every SVM feature requires emulation support in KVM - so we can't
 just
 - * read the host features here. KVM might even support SVM features not
 - * available on the host hardware. Just set all bits and mask out the
 - * unsupported ones later.
 - */
 -x86_cpu_def->svm_features = -1;
 +/* Other KVM-specific feature fields: */
 +x86_cpu_def->svm_features =
 +kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);
 +
  #endif /* CONFIG_KVM */
  }
  

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Qemu-devel] [PATCH 1/2] target-i386: kvm: -cpu host: use GET_SUPPORTED_CPUID for SVM features

2013-01-02 Thread Andreas Färber
On 28.12.2012 19:37, Eduardo Habkost wrote:
 The existing -cpu host code simply set every bit inside svm_features
 (initializing it to -1), and that makes it impossible to make the
 enforce/check options work properly when the user asks for SVM features
 explicitly in the command-line.
 
 So, instead of initializing svm_features to -1, use GET_SUPPORTED_CPUID
 to fill only the bits that are supported by the host (just like we do
 for all other CPUID feature words inside kvm_cpu_fill_host()).
 
 This will keep the existing behavior (as filter_features_for_kvm()
 already uses GET_SUPPORTED_CPUID to filter svm_features), but will allow
 us to properly check for KVM features inside
 kvm_check_features_against_host() later.
 
 For example, we will be able to make this:
 
   $ qemu-system-x86_64 -cpu ...,+pfthreshold,enforce
 
 refuse to start if the SVM pfthreshold feature is not supported by the
 host (after we fix kvm_check_features_against_host() to check SVM flags
 as well).
 
 Signed-off-by: Eduardo Habkost ehabk...@redhat.com
 ---
  target-i386/cpu.c | 11 ++++-------
  1 file changed, 4 insertions(+), 7 deletions(-)
 
 diff --git a/target-i386/cpu.c b/target-i386/cpu.c
 index 3cd1cee..6e2d32d 100644
 --- a/target-i386/cpu.c
 +++ b/target-i386/cpu.c
 @@ -897,13 +897,10 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
  }
  }
  
 -/*
 - * Every SVM feature requires emulation support in KVM - so we can't just
 - * read the host features here. KVM might even support SVM features not
 - * available on the host hardware. Just set all bits and mask out the
 - * unsupported ones later.
 - */
  -x86_cpu_def->svm_features = -1;
  +/* Other KVM-specific feature fields: */
  +x86_cpu_def->svm_features =
  +kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);

Is there no #define for this, similar to KVM_CPUID_FEATURES in 2/2?
FWIW indentation looks odd.

Andreas

 +
  #endif /* CONFIG_KVM */
  }
  
 


-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


Re: [Qemu-devel] [PATCH 0/2] Fixes for -cpu host KVM/SVM feature initialization

2013-01-02 Thread Andreas Färber
On 28.12.2012 19:37, Eduardo Habkost wrote:
 This series has two very similar fixes for feature initialization for -cpu
 host. This should allow us to make the check/enforce code check for host
 support of KVM and SVM features later.

I am out of my field here to verify whether this is semantically correct
and whether any fallback code may be needed. However, this will conflict
with X86CPU subclasses, so I'd be interested in taking this through my
qom-cpu queue if there are acks from the KVM folks.

Regards,
Andreas

 Eduardo Habkost (2):
   target-i386: kvm: -cpu host: use GET_SUPPORTED_CPUID for SVM features
   target-i386: kvm: enable all supported KVM features for -cpu host
 
   target-i386/cpu.c | 13 ++++++-------
  1 file changed, 6 insertions(+), 7 deletions(-)



Re: [Qemu-devel] [PATCH 2/2] target-i386: kvm: enable all supported KVM features for -cpu host

2013-01-02 Thread Igor Mammedov
On Fri, 28 Dec 2012 16:37:34 -0200
Eduardo Habkost ehabk...@redhat.com wrote:

 When using -cpu host, we don't need to use the kvm_default_features
 variable, as the user is explicitly asking QEMU to enable all feature
 supported by the host.
 
 This changes the kvm_cpu_fill_host() code to use GET_SUPPORTED_CPUID to
 initialize the kvm_features field, so we get all host KVM features
 enabled.

The pc-1.2 and pc-1.3 compat machines differ on the pv_eoi flag; with this
patch, pc-1.2 might have it set.
Is that OK from the compat-machines point of view?

 
 This will also allow us to properly check/enforce KVM features inside
 kvm_check_features_against_host() later. For example, we will be able to
 make this:
 
   $ qemu-system-x86_64 -cpu ...,+kvm_pv_eoi,enforce
 
 refuse to start if kvm_pv_eoi is not supported by the host (after we fix
 kvm_check_features_against_host() to check KVM flags as well).
It would be nice to have the kvm_check_features_against_host() patch in this
series, to verify that this patch and the previous one work as expected.

 
 Signed-off-by: Eduardo Habkost ehabk...@redhat.com
 ---
  target-i386/cpu.c | 2 ++
  1 file changed, 2 insertions(+)
 
 diff --git a/target-i386/cpu.c b/target-i386/cpu.c
 index 6e2d32d..76f19f0 100644
 --- a/target-i386/cpu.c
 +++ b/target-i386/cpu.c
 @@ -900,6 +900,8 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
  /* Other KVM-specific feature fields: */
   x86_cpu_def->svm_features =
   kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);
  +x86_cpu_def->kvm_features =
  +kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
  #endif /* CONFIG_KVM */
  }



Re: [Qemu-devel] [PATCH 1/2] target-i386: kvm: -cpu host: use GET_SUPPORTED_CPUID for SVM features

2013-01-02 Thread Eduardo Habkost
On Wed, Jan 02, 2013 at 03:39:03PM +0100, Andreas Färber wrote:
 On 28.12.2012 19:37, Eduardo Habkost wrote:
  The existing -cpu host code simply set every bit inside svm_features
  (initializing it to -1), and that makes it impossible to make the
  enforce/check options work properly when the user asks for SVM features
  explicitly in the command-line.
  
  So, instead of initializing svm_features to -1, use GET_SUPPORTED_CPUID
  to fill only the bits that are supported by the host (just like we do
  for all other CPUID feature words inside kvm_cpu_fill_host()).
  
  This will keep the existing behavior (as filter_features_for_kvm()
  already uses GET_SUPPORTED_CPUID to filter svm_features), but will allow
  us to properly check for KVM features inside
  kvm_check_features_against_host() later.
  
  For example, we will be able to make this:
  
$ qemu-system-x86_64 -cpu ...,+pfthreshold,enforce
  
  refuse to start if the SVM pfthreshold feature is not supported by the
  host (after we fix kvm_check_features_against_host() to check SVM flags
  as well).
  
  Signed-off-by: Eduardo Habkost ehabk...@redhat.com
  ---
    target-i386/cpu.c | 11 ++++-------
   1 file changed, 4 insertions(+), 7 deletions(-)
  
  diff --git a/target-i386/cpu.c b/target-i386/cpu.c
  index 3cd1cee..6e2d32d 100644
  --- a/target-i386/cpu.c
  +++ b/target-i386/cpu.c
  @@ -897,13 +897,10 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
   }
   }
   
  -/*
  - * Every SVM feature requires emulation support in KVM - so we can't 
  just
  - * read the host features here. KVM might even support SVM features not
  - * available on the host hardware. Just set all bits and mask out the
  - * unsupported ones later.
  - */
   -x86_cpu_def->svm_features = -1;
   +/* Other KVM-specific feature fields: */
   +x86_cpu_def->svm_features =
   +kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);
 
 Is there no #define for this, similar to KVM_CPUID_FEATURES in 2/2?

I believe KVM_CPUID_FEATURES is the exception; all the other leaves have
their numbers hardcoded in the code.

(The way I plan to fix this is to introduce the feature-word array and a
CPUID leaf/register table for it, so kvm_cpu_fill_host(),
filter_features_for_kvm(), kvm_check_features_against_host() and similar
functions would always handle exactly the same set of CPUID leaves, by
simply looking at the table.)


 FWIW indentation looks odd.

Oops, I intended to follow the existing style used for ext2_features and
ext3_features and use 8 spaces instead of 12. I will resubmit.

-- 
Eduardo


Re: [Qemu-devel] [PATCH 2/2] target-i386: kvm: enable all supported KVM features for -cpu host

2013-01-02 Thread Eduardo Habkost
On Wed, Jan 02, 2013 at 03:52:45PM +0100, Igor Mammedov wrote:
 On Fri, 28 Dec 2012 16:37:34 -0200
 Eduardo Habkost ehabk...@redhat.com wrote:
 
  When using -cpu host, we don't need to use the kvm_default_features
  variable, as the user is explicitly asking QEMU to enable all feature
  supported by the host.
  
  This changes the kvm_cpu_fill_host() code to use GET_SUPPORTED_CPUID to
  initialize the kvm_features field, so we get all host KVM features
  enabled.
 
 The pc-1.2 and pc-1.3 compat machines differ on the pv_eoi flag; with this
 patch, pc-1.2 might have it set.
 Is that OK from the compat-machines point of view?

-cpu host is completely dependent on host hardware and kernel version,
there are no compatibility expectations.

 
  
  This will also allow us to properly check/enforce KVM features inside
  kvm_check_features_against_host() later. For example, we will be able to
  make this:
  
$ qemu-system-x86_64 -cpu ...,+kvm_pv_eoi,enforce
  
  refuse to start if kvm_pv_eoi is not supported by the host (after we fix
  kvm_check_features_against_host() to check KVM flags as well).
 It would be nice to have the kvm_check_features_against_host() patch in this
 series, to verify that this patch and the previous one work as expected.

The kvm_check_features_against_host() change would be a user-visible
behavior change, and I wanted to keep the changes minimal for now. (The
main reason I submitted this earlier is to make it easier to clean up
the init code for CPU subclasses.)

I was planning to introduce those behavior changes only after
introducing the feature-word array, so the kvm_check_features_against_host()
code can become simpler and easier to review (instead of adding 4
additional items to the messy struct model_features_t array). But if you
think we can introduce those changes now, I will be happy to send a
series that changes that code as well.

 
  
  Signed-off-by: Eduardo Habkost ehabk...@redhat.com
  ---
   target-i386/cpu.c | 2 ++
   1 file changed, 2 insertions(+)
  
  diff --git a/target-i386/cpu.c b/target-i386/cpu.c
  index 6e2d32d..76f19f0 100644
  --- a/target-i386/cpu.c
  +++ b/target-i386/cpu.c
  @@ -900,6 +900,8 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
   /* Other KVM-specific feature fields: */
   x86_cpu_def-svm_features =
   kvm_arch_get_supported_cpuid(s, 0x800A, 0, R_EDX);
  +x86_cpu_def-kvm_features =
  +kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0,
  R_EAX); 
   #endif /* CONFIG_KVM */
   }
 

-- 
Eduardo


Re: [PATCH 0/2] [PULL] qemu-kvm.git uq/master queue

2013-01-02 Thread Anthony Liguori
Gleb Natapov g...@redhat.com writes:

 The following changes since commit e376a788ae130454ad5e797f60cb70d0308babb6:

   Merge remote-tracking branch 'kwolf/for-anthony' into staging (2012-12-13 
 14:32:28 -0600)

 are available in the git repository at:


   git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master

 for you to fetch changes up to 0a2a59d35cbabf63c91340a1c62038e3e60538c1:

   qemu-kvm/pci-assign: 64 bits bar emulation (2012-12-25 14:37:52 +0200)


Pulled. Thanks.

Regards,

Anthony Liguori

 
 Will Auld (1):
   target-i386: Enabling IA32_TSC_ADJUST for QEMU KVM guest VMs

 Xudong Hao (1):
   qemu-kvm/pci-assign: 64 bits bar emulation

  hw/kvm/pci-assign.c   |   14 ++
  target-i386/cpu.h |2 ++
  target-i386/kvm.c |   14 ++
  target-i386/machine.c |   21 +
  4 files changed, 47 insertions(+), 4 deletions(-)



Re: [Qemu-devel] [PATCH 2/2] target-i386: kvm: enable all supported KVM features for -cpu host

2013-01-02 Thread Igor Mammedov
On Wed, 2 Jan 2013 13:29:10 -0200
Eduardo Habkost ehabk...@redhat.com wrote:

 On Wed, Jan 02, 2013 at 03:52:45PM +0100, Igor Mammedov wrote:
  On Fri, 28 Dec 2012 16:37:34 -0200
  Eduardo Habkost ehabk...@redhat.com wrote:
  
   When using -cpu host, we don't need to use the kvm_default_features
   variable, as the user is explicitly asking QEMU to enable all feature
   supported by the host.
   
   This changes the kvm_cpu_fill_host() code to use GET_SUPPORTED_CPUID to
   initialize the kvm_features field, so we get all host KVM features
   enabled.
  
  The pc-1.2 and pc-1.3 compat machines differ on the pv_eoi flag; with this
  patch, pc-1.2 might have it set.
  Is that OK from the compat-machines point of view?
 
 -cpu host is completely dependent on host hardware and kernel version,
 there are no compatibility expectations.

It's still kind of an unpleasant surprise if, on the same host,
qemu-1.3 -cpu host -machine pc-1.2 and qemu-1.3+ -cpu host -machine pc-1.2
would give different pv_eoi defaults, where pv_eoi should be available
only after pc-1.2 by default.

 
  
   
   This will also allow us to properly check/enforce KVM features inside
   kvm_check_features_against_host() later. For example, we will be able to
   make this:
   
 $ qemu-system-x86_64 -cpu ...,+kvm_pv_eoi,enforce
   
   refuse to start if kvm_pv_eoi is not supported by the host (after we fix
   kvm_check_features_against_host() to check KVM flags as well).
  It would be nice to have the kvm_check_features_against_host() patch in this
  series, to verify that this patch and the previous one work as expected.
 
 The kvm_check_features_against_host() change would be a user-visible
 behavior change, and I wanted to keep the changes minimal for now. (The
 main reason I submitted this earlier is to make it easier to clean up
 the init code for CPU subclasses.)
 
 I was planning to introduce those behavior changes only after
 introducing the feature-word array, so the kvm_check_features_against_host()
 code can become simpler and easier to review (instead of adding 4
 additional items to the messy struct model_features_t array). But if you
 think we can introduce those changes now, I will be happy to send a
 series that changes that code as well.
It would be better if it and the kvm_check_features_against_host()
simplification were in here together.

 
  
   
   Signed-off-by: Eduardo Habkost ehabk...@redhat.com
   ---
target-i386/cpu.c | 2 ++
1 file changed, 2 insertions(+)
   
   diff --git a/target-i386/cpu.c b/target-i386/cpu.c
   index 6e2d32d..76f19f0 100644
   --- a/target-i386/cpu.c
   +++ b/target-i386/cpu.c
   @@ -900,6 +900,8 @@ static void kvm_cpu_fill_host(x86_def_t *x86_cpu_def)
/* Other KVM-specific feature fields: */
     x86_cpu_def->svm_features =
     kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);
    +x86_cpu_def->kvm_features =
    +kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
#endif /* CONFIG_KVM */
}
  
 
 -- 
 Eduardo
 


-- 
Regards,
  Igor


Re: [Qemu-devel] [PATCH 2/2] target-i386: kvm: enable all supported KVM features for -cpu host

2013-01-02 Thread Eduardo Habkost
On Wed, Jan 02, 2013 at 09:30:20PM +0100, Igor Mammedov wrote:
 On Wed, 2 Jan 2013 13:29:10 -0200
 Eduardo Habkost ehabk...@redhat.com wrote:
 
  On Wed, Jan 02, 2013 at 03:52:45PM +0100, Igor Mammedov wrote:
   On Fri, 28 Dec 2012 16:37:34 -0200
   Eduardo Habkost ehabk...@redhat.com wrote:
   
When using -cpu host, we don't need to use the kvm_default_features
variable, as the user is explicitly asking QEMU to enable all feature
supported by the host.

This changes the kvm_cpu_fill_host() code to use GET_SUPPORTED_CPUID to
initialize the kvm_features field, so we get all host KVM features
enabled.
   
    The pc-1.2 and pc-1.3 compat machines differ on the pv_eoi flag; with this
    patch, pc-1.2 might have it set.
    Is that OK from the compat-machines point of view?
  
  -cpu host is completely dependent on host hardware and kernel version,
  there are no compatibility expectations.
 
  It's still kind of an unpleasant surprise if, on the same host,
  qemu-1.3 -cpu host -machine pc-1.2 and qemu-1.3+ -cpu host -machine pc-1.2
  would give different pv_eoi defaults, where pv_eoi should be available
  only after pc-1.2 by default.

Just like you may end up getting new features enabled by -cpu host after
upgrading the kernel, you may end up getting new features enabled by
-cpu host after upgrading qemu. If you don't like surprises, don't use
-cpu host.  ;-)

I don't think machine-types exist to avoid user surprise, they exist to
keep compatibility (which is not expected to happen when using -cpu
host). Keeping compatibility is hard enough in the cases where we really
need it, and I don't think it is worth the extra work and complexity for a
use case where compatibility is not expected.

 
  
   

     This will also allow us to properly check/enforce KVM features inside
kvm_check_features_against_host() later. For example, we will be able to
make this:

  $ qemu-system-x86_64 -cpu ...,+kvm_pv_eoi,enforce

refuse to start if kvm_pv_eoi is not supported by the host (after we fix
kvm_check_features_against_host() to check KVM flags as well).
    It would be nice to have the kvm_check_features_against_host() patch in this
    series, to verify that this patch and the previous one work as expected.
  
   The kvm_check_features_against_host() change would be a user-visible
   behavior change, and I wanted to keep the changes minimal for now. (The
   main reason I submitted this earlier is to make it easier to clean up
   the init code for CPU subclasses.)
  
  I was planning to introduce those behavior changes only after
  introducing the feature-word array, so the kvm_check_features_against_host()
  code can become simpler and easier to review (instead of adding 4
  additional items to the messy struct model_features_t array). But if you
  think we can introduce those changes now, I will be happy to send a
  series that changes that code as well.
  It would be better if it and the kvm_check_features_against_host()
  simplification were in here together.

The best way I see to simplify kvm_check_features_against_host()
requires the feature words array patch, which touches _lots_ of code. I
wanted to avoid adding such an intrusive patch as a dependency.

 
  
   

Signed-off-by: Eduardo Habkost ehabk...@redhat.com
---
 target-i386/cpu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 6e2d32d..76f19f0 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -900,6 +900,8 @@ static void kvm_cpu_fill_host(x86_def_t 
*x86_cpu_def)
 /* Other KVM-specific feature fields: */
 x86_cpu_def->svm_features =
 kvm_arch_get_supported_cpuid(s, 0x8000000A, 0, R_EDX);
+x86_cpu_def->kvm_features =
+kvm_arch_get_supported_cpuid(s, KVM_CPUID_FEATURES, 0, R_EAX);
 #endif /* CONFIG_KVM */
 }
   
  
  -- 
  Eduardo
  
 
 
 -- 
 Regards,
   Igor

-- 
Eduardo


Re: Guest performance is reduced after live migration

2013-01-02 Thread Marcelo Tosatti

Can you describe more details of the test you are performing? 

If transparent hugepages are being used then there is the possibility
that there has been no time for khugepaged to back guest memory
with huge pages, in the destination (don't recall the interface for
retrieving number of hugepages for a given process, probably somewhere
in /proc/pid/).

On Wed, Dec 19, 2012 at 12:43:37AM +, Mark Petersen wrote:
 Hello KVM,
 
 I'm seeing something similar to this 
 (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well when 
 doing live migrations on Ubuntu 12.04 (Host and Guest) with a backported 
 libvirt 1.0 and qemu-kvm 1.2 (the improved performance for live migrations of 
 guests with large memory is great!)  The default libvirt 0.9.8 and 
 qemu-kvm 1.0 have the same issue.
 
 Kernel is 3.2.0-34-generic and eglibc 2.15 on both host/guest.  I'm seeing 
 similar issues with both virtio and ide bus.  Hugetlbfs is not used, but 
 transparent hugepages are.  Host machines are dual core Xeon E5-2660 
 processors.  I tried disabling EPT but that doesn't seem to make a difference 
 so I don't think it's a requirement to reproduce.
 
 If I use Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels I don't 
 seem to have the issue:
 
 linux-image-2.6.32-32-server - 2.6.32-32.62
 linux-image-2.6.32-38-server - 2.6.32-38.83
 linux-image-2.6.32-43-server - 2.6.32-43.97
 linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1
 linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1
 linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1 
 linux-image-3.2-5 - mainline 3.2.5 kernel
 
 I'm guessing it's a libc issue (or at least a libc change causing the issue) as 
 it doesn't seem to be kernel-related.
 
 I'll try other distributions as a guest (probably Debian/Ubuntu) with newer 
 libc's and see if I can pinpoint the issue to a libc version.  Any other 
 ideas?
 
 Shared disk backend is clvm/LV via FC to EMC SAN, not sure what else might be 
 relevant.
 
 Thanks,
 Mark
 
 
 __
 
 See http://www.peak6.com/email_disclaimer/ for terms and conditions related 
 to this email


Re: [PATCH][TRIVIAL] kvm_para: fix typo in hypercall comments

2013-01-02 Thread Marcelo Tosatti
On Mon, Dec 10, 2012 at 03:31:51PM -0600, Jesse Larrew wrote:
 
 Correct a typo in the comment explaining hypercalls.
 
 Signed-off-by: Jesse Larrew jlar...@linux.vnet.ibm.com
 ---
  arch/x86/include/asm/kvm_para.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

Applied, thanks.

--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Guest performance is reduced after live migration

2013-01-02 Thread Mark Petersen
I don't think it's related to huge pages...

I was using phoronix-test-suite to run benchmarks.  The 'batch/compilation' 
group shows the slowdown for all tests, the 'batch/computation' show some 
performance degradation, but not nearly as significant.

You could probably easily test this way without phoronix -  Start a VM with 
almost nothing running.  Download mainline Linux kernel, compile.  This takes 
about 45 seconds in my case (72GB memory, 16 virtual CPUs, idle physical host 
running this VM.)  Run as many times as you want, still takes ~45 seconds.

Migrate to a new idle host, kernel compile now takes ~90 seconds, wait 3 hours 
(should give khugepaged a chance to do its thing I imagine), kernel compiles 
still take 90 seconds.  Reboot virtual machine (run 'shutdown -r now', reboot, 
whatever.)  First compile will take ~45 seconds after reboot.  You don't even 
need to reset/destroy/shutdown the VM, just a reboot in the guest fixes the 
issue.

I'm going to test more with qemu-kvm 1.3 tomorrow as I have a new/dedicated lab 
setup and recently built the 1.3 code base.  I'd be happy to run any test that 
would help in diagnosing the real issue here, I'm just not sure how to best 
diagnose this issue.

Thanks,
Mark
 
-Original Message-

Can you describe more details of the test you are performing? 

If transparent hugepages are being used then there is the possibility that 
there has been no time for khugepaged to back guest memory with huge pages, in 
the destination (don't recall the interface for retrieving number of hugepages 
for a given process, probably somewhere in /proc/pid/).

On Wed, Dec 19, 2012 at 12:43:37AM +, Mark Petersen wrote:
 Hello KVM,
 
 I'm seeing something similar to this 
 (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well when 
 doing live migrations on Ubuntu 12.04 (Host and Guest) with a backported 
 libvirt 1.0 and qemu-kvm 1.2 (the improved performance for live migrations of 
 guests with large memory is great!)  The default libvirt 0.9.8 and 
 qemu-kvm 1.0 have the same issue.
 
 Kernel is 3.2.0-34-generic and eglibc 2.15 on both host/guest.  I'm seeing 
 similar issues with both virtio and ide bus.  Hugetlbfs is not used, but 
 transparent hugepages are.  Host machines are dual core Xeon E5-2660 
 processors.  I tried disabling EPT but that doesn't seem to make a difference 
 so I don't think it's a requirement to reproduce.
 
 If I use Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels I don't 
 seem to have the issue:
 
 linux-image-2.6.32-32-server - 2.6.32-32.62 
 linux-image-2.6.32-38-server - 2.6.32-38.83 
 linux-image-2.6.32-43-server - 2.6.32-43.97 
 linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1 
 linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1 
 linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1
 linux-image-3.2-5 - mainline 3.2.5 kernel
 
 I'm guessing it's a libc issue (or at least a libc change causing the issue) as 
 it doesn't seem to be kernel-related.
 
 I'll try other distributions as a guest (probably Debian/Ubuntu) with newer 
 libc's and see if I can pinpoint the issue to a libc version.  Any other 
 ideas?
 
 Shared disk backend is clvm/LV via FC to EMC SAN, not sure what else might be 
 relevant.
 
 Thanks,
 Mark
 
 


Re: Guest performance is reduced after live migration

2013-01-02 Thread Marcelo Tosatti
On Wed, Jan 02, 2013 at 11:56:11PM +, Mark Petersen wrote:
 I don't think it's related to huge pages...
 
 I was using phoronix-test-suite to run benchmarks.  The 'batch/compilation' 
 group shows the slowdown for all tests, the 'batch/computation' show some 
 performance degradation, but not nearly as significant.

Huge pages in the host, for the qemu-kvm process, I mean.
Without huge pages backing guest memory in the host, 4k EPT TLB entries
will be used instead of 2MB EPT TLB entries.

 You could probably easily test this way without phoronix -  Start a VM with 
 almost nothing running.  Download mainline Linux kernel, compile.  This takes 
 about 45 seconds in my case (72GB memory, 16 virtual CPUs, idle physical host 
 running this VM.)  Run as many times as you want, still takes ~45 seconds.
 
 Migrate to a new idle host, kernel compile now takes ~90 seconds, wait 3 
 hours (should give khugepaged a chance to do its thing I imagine),

Please verify it's the case (by checking how much memory is backed by
hugepages).

http://www.mjmwired.net/kernel/Documentation/vm/transhuge.txt
Monitoring Usage.


 kernel compiles still take 90 seconds.  Reboot virtual machine (run 'shutdown 
 -r now', reboot, whatever.)  First compile will take ~45 seconds after 
 reboot.  You don't even need to reset/destroy/shutdown the VM, just a reboot 
 in the guest fixes the issue.

What is the qemu command line?

 I'm going to test more with qemu-kvm 1.3 tomorrow as I have a new/dedicated 
 lab setup and recently built the 1.3 code base.  I'd be happy to run any test 
 that would help in diagnosing the real issue here, I'm just not sure how to 
 best diagnose this issue.
 
 Thanks,
 Mark
  
 -Original Message-
 
 Can you describe more details of the test you are performing? 
 
 If transparent hugepages are being used then there is the possibility that 
 there has been no time for khugepaged to back guest memory with huge pages, 
 in the destination (don't recall the interface for retrieving number of 
 hugepages for a given process, probably somewhere in /proc/pid/).
 
 On Wed, Dec 19, 2012 at 12:43:37AM +, Mark Petersen wrote:
  Hello KVM,
  
  I'm seeing something similar to this 
  (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well 
  when doing live migrations on Ubuntu 12.04 (Host and Guest) with a 
  backported libvirt 1.0 and qemu-kvm 1.2 (the improved performance for live 
  migrations of guests with large memory is great!)  The default 
  libvirt 0.9.8 and qemu-kvm 1.0 have the same issue.
  
  Kernel is 3.2.0-34-generic and eglibc 2.15 on both host/guest.  I'm seeing 
  similar issues with both virtio and ide bus.  Hugetlbfs is not used, but 
  transparent hugepages are.  Host machines are dual core Xeon E5-2660 
  processors.  I tried disabling EPT but that doesn't seem to make a 
  difference so I don't think it's a requirement to reproduce.
  
  If I use an Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels, I 
  don't seem to have the issue:
  
  linux-image-2.6.32-32-server - 2.6.32-32.62 
  linux-image-2.6.32-38-server - 2.6.32-38.83 
  linux-image-2.6.32-43-server - 2.6.32-43.97 
  linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1 
  linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1 
  linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1
  linux-image-3.2-5 - mainline 3.2.5 kernel
  
  I'm guessing it's a libc issue (or at least a libc change causing the issue), 
  as it doesn't seem to be kernel-related.
  
  I'll try other distributions as a guest (probably Debian/Ubuntu) with newer 
  libc's and see if I can pinpoint the issue to a libc version.  Any other 
  ideas?
  
  Shared disk backend is clvm/LV via FC to EMC SAN, not sure what else might 
  be relevant.
  
  Thanks,
  Mark
  
  
  __
  
  See http://www.peak6.com/email_disclaimer/ for terms and conditions 
  related to this email
  --
  To unsubscribe from this list: send the line unsubscribe kvm in the 
  body of a message to majord...@vger.kernel.org More majordomo info at  
  http://vger.kernel.org/majordomo-info.html
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Guest performance is reduced after live migration

2013-01-02 Thread Mark Petersen
I believe I disabled huge pages on the guest and host previously, but I'll test 
a few scenarios and look at transparent hugepage usage specifically again over 
the next couple days and report back.
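For reference, a quick way to snapshot the relevant knobs and counters on both host and guest before re-testing (a sketch; paths assume the standard sysfs/procfs THP interface of these 3.2-era kernels):

```shell
#!/bin/sh
# Show transparent hugepage configuration plus khugepaged/THP activity.
for f in /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
    if [ -r "$f" ]; then
        echo "$f: $(cat "$f")"
    fi
done
# thp_* counters are present when THP is compiled into the kernel.
grep '^thp_' /proc/vmstat || echo "no thp_ counters in /proc/vmstat"
```

Comparing the thp_collapse_alloc counter before and after migration on the destination host shows whether khugepaged is actually recollapsing guest memory.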


Below is a command line used for testing.

/usr/bin/kvm -> qemu-x86_64

/usr/bin/kvm -name one-483 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 \
    -smp 16,sockets=2,cores=8,threads=1 \
    -uuid a844146a-0d72-a661-fe6c-cb6b7a4ba240 \
    -no-user-config -nodefaults \
    -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-483.monitor,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=utc -no-shutdown \
    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
    -drive file=/var/lib/one//datastores/0/483/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -drive file=/var/lib/one//datastores/0/483/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
    -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
    -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:02:02:4b,bus=pci.0,addr=0x3 \
    -vnc 0.0.0.0 -vga cirrus \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


-Original Message-
From: Marcelo Tosatti [mailto:mtosa...@redhat.com] 
Sent: Wednesday, January 02, 2013 6:49 PM
To: Mark Petersen
Cc: kvm@vger.kernel.org; shouta.ueh...@jp.yokogawa.com
Subject: Re: Guest performance is reduced after live migration

On Wed, Jan 02, 2013 at 11:56:11PM +, Mark Petersen wrote:
 I don't think it's related to huge pages...
 
 I was using phoronix-test-suite to run benchmarks.  The 'batch/compilation' 
 group shows the slowdown for all tests; the 'batch/computation' group shows 
 some performance degradation, but not nearly as significant.

Huge pages in the host, for the qemu-kvm process, I mean.
Without huge pages backing guest memory in the host, 4k EPT TLB entries will be 
used instead of 2MB EPT TLB entries.
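Back-of-the-envelope arithmetic for the 73728 MB guest in this thread makes the gap concrete (my numbers, not from the thread):

```shell
#!/bin/sh
# Number of page mappings needed to cover 73728 MB of guest RAM
# with 4 KB pages versus 2 MB huge pages.
guest_kb=$((73728 * 1024))
echo "4 KB pages: $((guest_kb / 4)) mappings"
echo "2 MB pages: $((guest_kb / 2048)) mappings"
```

That is roughly 18.9 million 4 KB mappings against about 37 thousand 2 MB mappings, so until khugepaged collapses the memory back into huge pages, TLB misses (each costing an EPT walk) are far more frequent.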

 You could probably easily test this way without phoronix -  Start a VM with 
 almost nothing running.  Download mainline Linux kernel, compile.  This takes 
 about 45 seconds in my case (72GB memory, 16 virtual CPUs, idle physical host 
 running this VM.)  Run as many times as you want, still takes ~45 seconds.
 
 Migrate to a new idle host, kernel compile now takes ~90 seconds, wait 
 3 hours (should give khugepaged a chance to do its thing, I imagine),

Please verify that this is the case (by checking how much memory is backed by hugepages).

http://www.mjmwired.net/kernel/Documentation/vm/transhuge.txt
Monitoring Usage.
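Per-process, that boils down to summing the AnonHugePages fields in /proc/<pid>/smaps, as the "Monitoring usage" section of that document describes. A sketch (it defaults to the current shell's PID for demonstration; point it at the qemu-kvm PID on the host instead):

```shell
#!/bin/sh
# Report how much of a process's anonymous memory is currently backed by
# transparent hugepages. Usage: sh thp-usage.sh [pid]   (script name is
# just an example)
pid=${1:-$$}
awk '/^AnonHugePages:/ { sum += $2 } END { printf "%d kB in THP\n", sum }' \
    "/proc/$pid/smaps"
# System-wide counter for comparison:
grep '^AnonHugePages' /proc/meminfo || true
```

Running it against the qemu-kvm process right after migration and again later should show the number climbing as khugepaged recollapses guest memory.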


 kernel compiles still take 90 seconds.  Reboot virtual machine (run 'shutdown 
 -r now', reboot, whatever.)  First compile will take ~45 seconds after 
 reboot.  You don't even need to reset/destroy/shutdown the VM, just a reboot 
 in the guest fixes the issue.

What is the qemu command line?

 I'm going to test more with qemu-kvm 1.3 tomorrow as I have a new/dedicated 
 lab setup and recently built the 1.3 code base.  I'd be happy to run any test 
 that would help in diagnosing the real issue here, I'm just not sure how to 
 best diagnose this issue.
 
 Thanks,
 Mark
  
 -Original Message-
 
 Can you describe more details of the test you are performing? 
 
 If transparent hugepages are being used then there is the possibility that 
 there has been no time for khugepaged to back guest memory with huge pages, 
 in the destination (don't recall the interface for retrieving number of 
 hugepages for a given process, probably somewhere in /proc/pid/).
 
 On Wed, Dec 19, 2012 at 12:43:37AM +, Mark Petersen wrote:
  Hello KVM,
  
  I'm seeing something similar to this 
  (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well 
  when doing live migrations on Ubuntu 12.04 (Host and Guest) with a 
  backported libvirt 1.0 and qemu-kvm 1.2 (the improved performance for live 
  migrations of large-memory guests is great!).  The default 
  libvirt 0.9.8 and qemu-kvm 1.0 have the same issue.
  
  Kernel is 3.2.0-34-generic and eglibc 2.15 on both host and guest.  I'm seeing 
  similar issues with both virtio and ide buses.  Hugetlbfs is not used, but 
  transparent hugepages are.  Host machines are dual-socket Xeon E5-2660 
  processors.  I tried disabling EPT, but that doesn't seem to make a 
  difference, so I don't think it's a requirement to reproduce.
  
  If I use an Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels, I 
  don't seem to have the issue:
  
  linux-image-2.6.32-32-server - 2.6.32-32.62 
  linux-image-2.6.32-38-server - 2.6.32-38.83 
  linux-image-2.6.32-43-server - 2.6.32-43.97 
  linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1 
  linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1 
  linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1
  linux-image-3.2-5 - mainline 3.2.5 kernel
  
  I'm guessing it's a libc issue (or at least a libc change causing the issue)