Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Ryan Harper
* Alexander Graf ag...@suse.de [2012-01-08 17:53]:
 When running QEMU without -cpu parameter, the user usually wants a sane
 default. So far, we're using the qemu64/qemu32 CPU type, which basically
 means the maximum TCG can emulate.

it also means we allow the maximum possible set of migration targets.  Have
you given any thought to migration with -cpu best?

 
 That's a really good default when using TCG, but when running with KVM
 we would much rather have a default meaning "the maximum performance I can get".
 
 Fortunately we just added an option that gives us the best performance
 while still staying safe on the testability side of things: -cpu best.
 So all we need to do is make -cpu best the default when the user doesn't
 define any.
 
 This fixes a lot of subtle breakage in the GNU toolchain (libgmp), which
 hiccups on QEMU's non-existent CPU models.
 
 This patch also adds a new pc-1.1 machine type to stay backwards compatible
 with older versions of QEMU.
 
 Signed-off-by: Alexander Graf ag...@suse.de
 ---
  hw/pc_piix.c |   42 ++
  1 files changed, 34 insertions(+), 8 deletions(-)
 
 diff --git a/hw/pc_piix.c b/hw/pc_piix.c
 index 00f525e..3d78ccb 100644
 --- a/hw/pc_piix.c
 +++ b/hw/pc_piix.c
 @@ -79,7 +79,8 @@ static void pc_init1(MemoryRegion *system_memory,
   const char *initrd_filename,
   const char *cpu_model,
   int pci_enabled,
 - int kvmclock_enabled)
 + int kvmclock_enabled,
 + int may_cpu_best)
  {
  int i;
  ram_addr_t below_4g_mem_size, above_4g_mem_size;
 @@ -102,6 +103,9 @@ static void pc_init1(MemoryRegion *system_memory,
  MemoryRegion *rom_memory;
  DeviceState *dev;
 
 +if (!cpu_model && kvm_enabled() && may_cpu_best) {
 +cpu_model = "best";
 +}
  pc_cpus_init(cpu_model);
 
  if (kvmclock_enabled) {
 @@ -263,7 +267,21 @@ static void pc_init_pci(ram_addr_t ram_size,
   get_system_io(),
   ram_size, boot_device,
   kernel_filename, kernel_cmdline,
 - initrd_filename, cpu_model, 1, 1);
 + initrd_filename, cpu_model, 1, 1, 1);
 +}
 +
 +static void pc_init_pci_oldcpu(ram_addr_t ram_size,
 +   const char *boot_device,
 +   const char *kernel_filename,
 +   const char *kernel_cmdline,
 +   const char *initrd_filename,
 +   const char *cpu_model)
 +{
 +pc_init1(get_system_memory(),
 + get_system_io(),
 + ram_size, boot_device,
 + kernel_filename, kernel_cmdline,
 + initrd_filename, cpu_model, 1, 1, 0);
  }
 
  static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
 @@ -277,7 +295,7 @@ static void pc_init_pci_no_kvmclock(ram_addr_t ram_size,
   get_system_io(),
   ram_size, boot_device,
   kernel_filename, kernel_cmdline,
 - initrd_filename, cpu_model, 1, 0);
 + initrd_filename, cpu_model, 1, 0, 0);
  }
 
  static void pc_init_isa(ram_addr_t ram_size,
 @@ -293,7 +311,7 @@ static void pc_init_isa(ram_addr_t ram_size,
   get_system_io(),
   ram_size, boot_device,
   kernel_filename, kernel_cmdline,
 - initrd_filename, cpu_model, 0, 1);
 + initrd_filename, cpu_model, 0, 1, 0);
  }
 
  #ifdef CONFIG_XEN
 @@ -314,8 +332,8 @@ static void pc_xen_hvm_init(ram_addr_t ram_size,
  }
  #endif
 
 -static QEMUMachine pc_machine_v1_0 = {
 -.name = "pc-1.0",
 +static QEMUMachine pc_machine_v1_1 = {
 +.name = "pc-1.1",
  .alias = "pc",
  .desc = "Standard PC",
  .init = pc_init_pci,
 @@ -323,17 +341,24 @@ static QEMUMachine pc_machine_v1_0 = {
  .is_default = 1,
  };
 
 +static QEMUMachine pc_machine_v1_0 = {
 +.name = "pc-1.0",
 +.desc = "Standard PC",
 +.init = pc_init_pci_oldcpu,
 +.max_cpus = 255,
 +};
 +
  static QEMUMachine pc_machine_v0_15 = {
  .name = "pc-0.15",
  .desc = "Standard PC",
 -.init = pc_init_pci,
 +.init = pc_init_pci_oldcpu,
  .max_cpus = 255,
  };
 
  static QEMUMachine pc_machine_v0_14 = {
  .name = "pc-0.14",
  .desc = "Standard PC",
 -.init = pc_init_pci,
 +.init = pc_init_pci_oldcpu,
  .max_cpus = 255,
  .compat_props = (GlobalProperty[]) {
  {
 @@ -612,6 +637,7 @@ static QEMUMachine xenfv_machine = {
 
  static void pc_machine_init(void)
  {
 +qemu_register_machine(pc_machine_v1_1);
  qemu_register_machine(pc_machine_v1_0);
  qemu_register_machine(pc_machine_v0_15);
  qemu_register_machine(pc_machine_v0_14);
 -- 
 1.6.0.2
 

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Alexander Graf

On 16.01.2012, at 20:30, Ryan Harper wrote:

 * Alexander Graf ag...@suse.de [2012-01-08 17:53]:
 When running QEMU without -cpu parameter, the user usually wants a sane
 default. So far, we're using the qemu64/qemu32 CPU type, which basically
 means the maximum TCG can emulate.
 
 it also means we allow the maximum possible set of migration targets.  Have
 you given any thought to migration with -cpu best?

If you have the same boxes in your cluster, migration just works. If you don't, 
you usually use a specific CPU model that is the least common denominator 
between your boxes either way.

The current kvm64 type is broken. Libgmp just abort()s when we pass it in. So 
anything is better than what we do today on AMD hosts :).


Alex



Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Ryan Harper
* Alexander Graf ag...@suse.de [2012-01-16 13:37]:
 
 On 16.01.2012, at 20:30, Ryan Harper wrote:
 
  * Alexander Graf ag...@suse.de [2012-01-08 17:53]:
  When running QEMU without -cpu parameter, the user usually wants a sane
  default. So far, we're using the qemu64/qemu32 CPU type, which basically
  means the maximum TCG can emulate.
  
  it also means we allow the maximum possible set of migration targets.  Have
  you given any thought to migration with -cpu best?
 
 If you have the same boxes in your cluster, migration just works. If
 you don't, you usually use a specific CPU model that is the least
 common denominator between your boxes either way.

Sure, but the idea behind -cpu best is to not have to figure that out;
you had suggested that qemu64/qemu32 were just related to TCG, and
what I'm suggesting is that they're also the most compatible w.r.t.
migration.

it sounds like if migration is a requirement, then -cpu best probably
isn't something that would be used.  I suppose I'm OK with that, or at
least I don't have a better suggestion on how to carefully push up the
capabilities without at some point breaking migration.

 
 The current kvm64 type is broken. Libgmp just abort()s when we pass it
 in. So anything is better than what we do today on AMD hosts :).

I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
detection (mplayer).


 
 
 Alex

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com



Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Ryan Harper
* Alexander Graf ag...@suse.de [2012-01-16 13:52]:
 
 On 16.01.2012, at 20:46, Ryan Harper wrote:
 
  * Alexander Graf ag...@suse.de [2012-01-16 13:37]:
  
  On 16.01.2012, at 20:30, Ryan Harper wrote:
  
  * Alexander Graf ag...@suse.de [2012-01-08 17:53]:
  When running QEMU without -cpu parameter, the user usually wants a sane
  default. So far, we're using the qemu64/qemu32 CPU type, which basically
  means the maximum TCG can emulate.
  
   it also means we allow the maximum possible set of migration targets.  Have
   you given any thought to migration with -cpu best?
  
   If you have the same boxes in your cluster, migration just works. If
   you don't, you usually use a specific CPU model that is the least
   common denominator between your boxes either way.
  
  Sure, but the idea behind -cpu best is to not have to figure that out;
  you had suggested that the qemu64/qemu32 were just related to TCG, and
  what I'm suggesting is that it's also the most compatible w.r.t
  migration.  
 
 Then the most compatible wrt migration is -cpu kvm64 / kvm32.
 
  it sounds like if migration is a requirement, then -cpu best probably
  isn't something that would be used.  I suppose I'm OK with that, or at
  least I don't have a better suggestion on how to carefully push up the
  capabilities without at some point breaking migration.
 
 Yes, if you're interested in migration, then you're almost guaranteed to have 
 a toolstack on top that has knowledge of your whole cluster and can do the 
 least common denominator detection over all of your nodes. On the QEMU level we don't 
 know anything about other machines.
 
  
  
  The current kvm64 type is broken. Libgmp just abort()s when we pass it
  in. So anything is better than what we do today on AMD hosts :).
  
  I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
  detection (mplayer).
 
 It probably does :). But then again those don't do KVM, do they?

not following; mplayer issues SSE2, 3 and 4 instructions to see what
works in order to figure out how to optimize; it doesn't care if the cpu is
called QEMU64 or Cyrix or AMD.  I'm not saying that we can't do better
than qemu64 w.r.t. the best cpu to select by default, but there are plenty of
applications that want to optimize their code based on what's available,
and this is done via code execution instead of string comparison.

 
 
 Alex
 

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com



Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Alexander Graf

On 16.01.2012, at 21:13, Ryan Harper wrote:

 * Alexander Graf ag...@suse.de [2012-01-16 13:52]:
 
 On 16.01.2012, at 20:46, Ryan Harper wrote:
 
 * Alexander Graf ag...@suse.de [2012-01-16 13:37]:
 
 On 16.01.2012, at 20:30, Ryan Harper wrote:
 
 * Alexander Graf ag...@suse.de [2012-01-08 17:53]:
 When running QEMU without -cpu parameter, the user usually wants a sane
 default. So far, we're using the qemu64/qemu32 CPU type, which basically
 means the maximum TCG can emulate.
 
 it also means we allow the maximum possible set of migration targets.  Have
 you given any thought to migration with -cpu best?
 
 If you have the same boxes in your cluster, migration just works. If
 you don't, you usually use a specific CPU model that is the least
 common denominator between your boxes either way.
 
 Sure, but the idea behind -cpu best is to not have to figure that out;
 you had suggested that the qemu64/qemu32 were just related to TCG, and
 what I'm suggesting is that it's also the most compatible w.r.t
 migration.  
 
 Then the most compatible wrt migration is -cpu kvm64 / kvm32.
 
 it sounds like if migration is a requirement, then -cpu best probably
 isn't something that would be used.  I suppose I'm OK with that, or at
 least I don't have a better suggestion on how to carefully push up the
 capabilities without at some point breaking migration.
 
 Yes, if you're interested in migration, then you're almost guaranteed to 
 have a toolstack on top that has knowledge of your whole cluster and can do 
  the least common denominator detection over all of your nodes. On the QEMU level we 
 don't know anything about other machines.
 
 
 
 The current kvm64 type is broken. Libgmp just abort()s when we pass it
 in. So anything is better than what we do today on AMD hosts :).
 
 I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
 detection (mplayer).
 
 It probably does :). But then again those don't do KVM, do they?
 
 not following; mplayer issues SSE2, 3 and 4 instructions to see what
 works in order to figure out how to optimize; it doesn't care if the cpu is
 called QEMU64 or Cyrix or AMD.  I'm not saying that we can't do better
 than qemu64 w.r.t. the best cpu to select by default, but there are plenty of
 applications that want to optimize their code based on what's available,
 and this is done via code execution instead of string comparison.

The problem with -cpu kvm64 is that we choose a family/model that doesn't exist 
in the real world, and then glue AuthenticAMD or GenuineIntel into the vendor 
string. Libgmp checks for existing CPUs, finds that this CPU doesn't match any 
real-world IDs, and abort()s.

The problem is that there is not a single CPU on this planet, in silicon, that 
has the same model+family numbers but exists in AuthenticAMD _and_ 
GenuineIntel flavors. We need to pass the host vendor in though, because the 
guest uses it to detect whether it should execute SYSCALL or SYSENTER, because 
Intel and AMD screwed up heavily on that one.

It's not about feature flags which is what mplayer uses. Those are fine :).

Alex



Re: [Qemu-devel] [PATCH 2/2] KVM: Use -cpu best as default on x86

2012-01-16 Thread Ryan Harper
* Alexander Graf ag...@suse.de [2012-01-16 14:52]:
 
 On 16.01.2012, at 21:13, Ryan Harper wrote:
 
  * Alexander Graf ag...@suse.de [2012-01-16 13:52]:
  
  On 16.01.2012, at 20:46, Ryan Harper wrote:
  
  * Alexander Graf ag...@suse.de [2012-01-16 13:37]:
  
  On 16.01.2012, at 20:30, Ryan Harper wrote:
  
  * Alexander Graf ag...@suse.de [2012-01-08 17:53]:
  When running QEMU without -cpu parameter, the user usually wants a sane
  default. So far, we're using the qemu64/qemu32 CPU type, which 
  basically
  means the maximum TCG can emulate.
  
   it also means we allow the maximum possible set of migration targets.  Have
   you given any thought to migration with -cpu best?
  
   If you have the same boxes in your cluster, migration just works. If
   you don't, you usually use a specific CPU model that is the least
   common denominator between your boxes either way.
  
  Sure, but the idea behind -cpu best is to not have to figure that out;
  you had suggested that the qemu64/qemu32 were just related to TCG, and
  what I'm suggesting is that it's also the most compatible w.r.t
  migration.  
  
   Then the most compatible wrt migration is -cpu kvm64 / kvm32.
  
  it sounds like if migration is a requirement, then -cpu best probably
  isn't something that would be used.  I suppose I'm OK with that, or at
  least I don't have a better suggestion on how to carefully push up the
  capabilities without at some point breaking migration.
  
  Yes, if you're interested in migration, then you're almost guaranteed to 
  have a toolstack on top that has knowledge of your whole cluster and can 
   do the least common denominator detection over all of your nodes. On the QEMU level 
  we don't know anything about other machines.
  
  
  
  The current kvm64 type is broken. Libgmp just abort()s when we pass it
  in. So anything is better than what we do today on AMD hosts :).
  
   I wonder if it breaks with Cyrix CPUs... other tools tend to do runtime
   detection (mplayer).
  
  It probably does :). But then again those don't do KVM, do they?
  
   not following; mplayer issues SSE2, 3 and 4 instructions to see what
   works in order to figure out how to optimize; it doesn't care if the cpu is
   called QEMU64 or Cyrix or AMD.  I'm not saying that we can't do better
   than qemu64 w.r.t. the best cpu to select by default, but there are plenty of
   applications that want to optimize their code based on what's available,
   and this is done via code execution instead of string comparison.
 
 The problem with -cpu kvm64 is that we choose a family/model that
 doesn't exist in the real world, and then glue AuthenticAMD or
 GenuineIntel in the vendor string. Libgmp checks for existing CPUs,
 finds that this CPU doesn't match any real world IDs and abort()s.
 
 The problem is that there is not a single CPU on this planet in
 silicon that has the same model+family numbers, but exists in
 AuthenticAMD _and_ GenuineIntel flavors. We need to pass the host
 vendor in though, because the guest uses it to detect if it should
 execute SYSCALL or SYSENTER, because intel and amd screwed up heavily
 on that one.

I forgot about this one.  =(


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com
