Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-11 Thread Markus Armbruster
Avi Kivity a...@redhat.com writes:

 On 01/07/2010 02:33 PM, Anthony Liguori wrote:

 There's another option.

 Make cpuid information part of live migration protocol, and then
 support something like -cpu Xeon-3550.  We would remember the exact
 cpuid mask we present to the guest and then we could validate that
 we can obtain the same mask on the destination.

 Currently, our policy is to only migrate dynamic (from the guest's
 point of view) state, and specify static state on the command line
 [1].

 I think your suggestion makes a lot of sense, but I'd like to expand
 it to move all guest state, whether dynamic or static.  So '-m 1G'
 would be migrated as well (but not -mem-path).  Similarly, in -drive
 file=...,if=ide,index=1, everything but file=... would be migrated.

This becomes a bit clearer with the new way to configure things:

  -drive if=none,id=DRIVE-ID,...        --- host, don't migrate
  -device ide-drive,drive=DRIVE-ID,...  --- guest, do migrate
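
A full invocation in that style might look like the following sketch; the image path, the qdev name ide-drive, and the bus placement are illustrative assumptions, not from the thread:

  qemu -drive file=guest.img,if=none,id=disk0 \
       -device ide-drive,drive=disk0,bus=ide.0,unit=1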

[...]


Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/06/2010 05:16 PM, Anthony Liguori wrote:

On 01/06/2010 08:48 AM, Dor Laor wrote:

On 01/06/2010 04:32 PM, Avi Kivity wrote:

On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:

We can probably default -enable-kvm to -cpu host, as long as we
explain
very carefully that if users wish to preserve cpu features across
upgrades, they can't depend on the default.

Hardware upgrades or software upgrades?


Yes.



I just want to remind everyone that the main motivation for using -cpu
realModelThatWasOnceShipped is to provide correct cpu emulation for the
guest. Using an arbitrary qemu|kvm64+flag1-flag2 combination might really
cause trouble for the guest OS or guest apps.

On top of -cpu nehalem we can always add fancy features like x2apic, etc.


I think it boils down to, how are people going to use this.

For individuals, code names like Nehalem are too obscure. From my own
personal experience, even power users often have no clue whether their
processor is a Nehalem or not.

For management tools, Nehalem is a somewhat imprecise target because it
covers a wide range of potential processors. In general, I think what we
really need to do is simplify the process of going from "here's the
output of /proc/cpuinfo for 100 nodes" to "what do I need to pass to qemu
so that migration always works for these systems".

I don't think -cpu nehalem really helps with that problem. -cpu none
helps a bit, but I hope we can find something nicer.
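
A minimal sketch of that process, assuming a nodes.txt listing the hosts and passwordless ssh to each (the file names are hypothetical):

  # gather each node's feature flags, one flag per line, sorted
  for h in $(cat nodes.txt); do
      ssh "$h" "grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 \
                | tr ' ' '\n' | sed '/^$/d' | sort -u" > "flags.$h"
  done
  # intersect them; the surviving flags are safe to pass to -cpu
  cp "flags.$(head -n1 nodes.txt)" common.flags
  for h in $(cat nodes.txt); do
      comm -12 common.flags "flags.$h" > tmp && mv tmp common.flags
  done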


We can debate the exact name/model to represent the Nehalem
family; I don't have an issue with that, and actually Intel and AMD
should define it.


There are two main motivations behind the above approach:
1. Sound guest cpu definition.
   Using a predefined model should automatically set all the relevant
   vendor/stepping/cpuid flags/cache sizes/etc.
   We just can't let every management application deal with it; getting
   it wrong breaks guest OS/apps. For instance, MSI support in Windows
   guests relies on the stepping.

2. Simplifying end user and mgmt tools.
   qemu/kvm have the best knowledge about these low levels. If we push
   it up the stack, it eventually reaches the user. The end user,
   not a 'qemu-devel user', who is actually far more capable than the
   average user.

   This means that such users will have to know what popcnt is and
   whether or not adding sse4.2 will limit migration to a single host.

This is exactly what VMware is doing:
 - Intel CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
 - AMD CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992

Why should we reinvent the wheel (qemu64..)? Let's learn from their
experience.


This is the test description from the original patch by John:


# Intel
# -

# Management layers remove pentium3 by default.
# It primarily remains here for testing of 32-bit migration.
#
[0:Pentium 3 Intel
:vmx
:pentium3;]

# Core 2, 65nm
# possible option sets: (+nx,+cx16), (+nx,+cx16,+ssse3)
#
1:Merom
:vmx,sse2
:qemu64,-nx,+sse2;

# Core2 45nm
#
2:Penryn
:vmx,sse2,nx,cx16,ssse3,sse4_1
:qemu64,+sse2,+cx16,+ssse3,+sse4_1;

# Core i7 45/32nm
#
3:Nehalem
:vmx,sse2,nx,cx16,ssse3,sse4_1,sse4_2,popcnt
:qemu64,+sse2,+cx16,+ssse3,+sse4_1,+sse4_2,+popcnt;


# AMD
# ---

# Management layers remove pentium3 by default.
# It primarily remains here for testing of 32-bit migration.
#
[0:Pentium 3 AMD
:svm
:pentium3;]

# Opteron 90nm stepping E1/E4/E6
# possible option sets: (-nx) for 130nm
#
1:Opteron G1
:svm,sse2,nx
:qemu64,+sse2;

# Opteron 90nm stepping F2/F3
#
2:Opteron G2
:svm,sse2,nx,cx16,rdtscp
:qemu64,+sse2,+cx16,+rdtscp;

# Opteron 65/45nm
#
3:Opteron G3
:svm,sse2,nx,cx16,sse4a,misalignsse,popcnt,abm
:qemu64,+sse2,+cx16,+sse4a,+misalignsse,+popcnt,+abm;
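
Taking the third Intel entry above as an example, the Nehalem row would expand to an invocation along these lines (a sketch; only the -cpu string comes from the table):

  qemu -enable-kvm -cpu qemu64,+sse2,+cx16,+ssse3,+sse4_1,+sse4_2,+popcnt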





Regards,

Anthony Liguori






Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Avi Kivity

On 01/07/2010 10:03 AM, Dor Laor wrote:


We can debate the exact name/model to represent the Nehalem
family; I don't have an issue with that, and actually Intel and AMD
should define it.


AMD and Intel already defined their names (see cat /proc/cpuinfo). They
don't define families; the whole idea is to segment the market.




There are two main motivations behind the above approach:
1. Sound guest cpu definition.
   Using a predefined model should automatically set all the relevant
   vendor/stepping/cpuid flags/cache sizes/etc.
   We just can't let every management application deal with it; getting
   it wrong breaks guest OS/apps. For instance, MSI support in Windows
   guests relies on the stepping.

2. Simplifying end user and mgmt tools.
   qemu/kvm have the best knowledge about these low levels. If we push
   it up the stack, it eventually reaches the user. The end user,
   not a 'qemu-devel user', who is actually far more capable than the
   average user.

   This means that such users will have to know what popcnt is and
   whether or not adding sse4.2 will limit migration to a single host.

This is exactly what VMware is doing:
 - Intel CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
 - AMD CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992



They don't have to deal with different qemu and kvm versions.



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Daniel P. Berrange
On Thu, Jan 07, 2010 at 10:03:28AM +0200, Dor Laor wrote:
 On 01/06/2010 05:16 PM, Anthony Liguori wrote:
 On 01/06/2010 08:48 AM, Dor Laor wrote:
 On 01/06/2010 04:32 PM, Avi Kivity wrote:
 On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
 We can probably default -enable-kvm to -cpu host, as long as we
 explain
 very carefully that if users wish to preserve cpu features across
 upgrades, they can't depend on the default.
 Hardware upgrades or software upgrades?
 
 Yes.
 
 
 I just want to remind everyone that the main motivation for using -cpu
 realModelThatWasOnceShipped is to provide correct cpu emulation for the
 guest. Using an arbitrary qemu|kvm64+flag1-flag2 combination might really
 cause trouble for the guest OS or guest apps.
 
 On top of -cpu nehalem we can always add fancy features like x2apic, etc.
 
 I think it boils down to, how are people going to use this.
 
 For individuals, code names like Nehalem are too obscure. From my own
 personal experience, even power users often have no clue whether their
 processor is a Nehalem or not.
 
 For management tools, Nehalem is a somewhat imprecise target because it
 covers a wide range of potential processors. In general, I think what we
 really need to do is simplify the process of going from "here's the
 output of /proc/cpuinfo for 100 nodes" to "what do I need to pass to qemu
 so that migration always works for these systems".
 
 I don't think -cpu nehalem really helps with that problem. -cpu none
 helps a bit, but I hope we can find something nicer.
 
 We can debate the exact name/model to represent the Nehalem
 family; I don't have an issue with that, and actually Intel and AMD
 should define it.
 
 There are two main motivations behind the above approach:
 1. Sound guest cpu definition.
    Using a predefined model should automatically set all the relevant
    vendor/stepping/cpuid flags/cache sizes/etc.
    We just can't let every management application deal with it; getting
    it wrong breaks guest OS/apps. For instance, MSI support in Windows
    guests relies on the stepping.
 
 2. Simplifying end user and mgmt tools.
    qemu/kvm have the best knowledge about these low levels. If we push
    it up the stack, it eventually reaches the user. The end user,
    not a 'qemu-devel user', who is actually far more capable than the
    average user.
 
    This means that such users will have to know what popcnt is and
    whether or not adding sse4.2 will limit migration to a single host.
 
 This is exactly what VMware is doing:
  - Intel CPUs :
 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
  - AMD CPUs :
 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
 
 Why should we reinvent the wheel (qemu64..)? Let's learn from their
 experience.

NB, be careful to distinguish the different levels of VMware's mgmt stack. In
terms of guest configuration, the VMware ESX APIs require the management app
to specify the raw CPUID masks. With VirtualCenter VMotion they defined a
handful of common Intel/AMD CPU sets, and will automatically classify hosts
into one of these sets and use that to specify a default CPUID mask, in
case the guest does not have an explicit one in its config. This gives
them good default, out-of-the-box behaviour, while also allowing mgmt apps
100% control over each guest's CPUID should they want it.

Regards,
Daniel


Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/07/2010 10:18 AM, Avi Kivity wrote:

On 01/07/2010 10:03 AM, Dor Laor wrote:


We can debate the exact name/model to represent the Nehalem
family; I don't have an issue with that, and actually Intel and AMD
should define it.


AMD and Intel already defined their names (see cat /proc/cpuinfo). They
don't define families; the whole idea is to segment the market.


The idea here is to minimize the number of models; we would have the
following range for Intel, for example:

  pentium3 - merom - penryn - Nehalem - host - kvm/qemu64

So we're supplying a wide range of cpus: p3 for maximum flexibility and
migration, nehalem for performance and migration, host for maximum
performance, and qemu/kvm64 for custom-made setups.






There are two main motivations behind the above approach:
1. Sound guest cpu definition.
Using a predefined model should automatically set all the relevant
vendor/stepping/cpuid flags/cache sizes/etc.
We just can't let every management application deal with it; getting
it wrong breaks guest OS/apps. For instance, MSI support in Windows
guests relies on the stepping.

2. Simplifying end user and mgmt tools.
qemu/kvm have the best knowledge about these low levels. If we push
it up the stack, it eventually reaches the user. The end user,
not a 'qemu-devel user', who is actually far more capable than the
average user.

This means that such users will have to know what popcnt is and
whether or not adding sse4.2 will limit migration to a single host.

This is exactly what VMware is doing:
- Intel CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
- AMD CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992



They don't have to deal with different qemu and kvm versions.



Both are our customers, the end users; it's not their problem.
IMO what's missing today is a safe and sound cpu emulation that is
simple and friendly to represent. qemu64,+popcount is not simple for the
end user. There is no reason to throw it onto higher level mgmt.



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/07/2010 11:24 AM, Avi Kivity wrote:

On 01/07/2010 11:11 AM, Dor Laor wrote:

On 01/07/2010 10:18 AM, Avi Kivity wrote:

On 01/07/2010 10:03 AM, Dor Laor wrote:


We can debate the exact name/model to represent the Nehalem
family; I don't have an issue with that, and actually Intel and AMD
should define it.


AMD and Intel already defined their names (see cat /proc/cpuinfo). They
don't define families; the whole idea is to segment the market.


The idea here is to minimize the number of models; we would have the
following range for Intel, for example:
pentium3 - merom - penryn - Nehalem - host - kvm/qemu64
So we're supplying a wide range of cpus: p3 for maximum flexibility and
migration, nehalem for performance and migration, host for maximum
performance, and qemu/kvm64 for custom-made setups.


There's no such thing as Nehalem.


Intel were ok with it. Again, you can name it corei7 or xeon34234234234,
I don't care; the principle remains the same.






This is exactly what VMware is doing:
- Intel CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991

- AMD CPUs :
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992




They don't have to deal with different qemu and kvm versions.



Both are our customers, the end users; it's not their problem.
IMO what's missing today is a safe and sound cpu emulation that is
simple and friendly to represent. qemu64,+popcount is not simple for
the end user. There is no reason to throw it onto higher level mgmt.


There's no simple solution except to restrict features to what was
available on the first processors.


What's not simple about the above 4 options?
What's a better alternative (one that ensures users understand it and use it,
and that keeps guest MSI and even a Skype application happy)?





Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Anthony Liguori

On 01/07/2010 03:40 AM, Dor Laor wrote:

There's no simple solution except to restrict features to what was
available on the first processors.


What's not simple about the above 4 options?
What's a better alternative (one that ensures users understand it and use
it, and that keeps guest MSI and even a Skype application happy)?


Even if you have -cpu Nehalem, different versions of the KVM kernel 
module may additionally filter cpuid flags.


So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary 
to say:


(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem

In order to be compatible.

Regards,

Anthony Liguori



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/07/2010 01:39 PM, Anthony Liguori wrote:

On 01/07/2010 03:40 AM, Dor Laor wrote:

There's no simple solution except to restrict features to what was
available on the first processors.


What's not simple about the above 4 options?
What's a better alternative (one that ensures users understand it and use
it, and that keeps guest MSI and even a Skype application happy)?


Even if you have -cpu Nehalem, different versions of the KVM kernel
module may additionally filter cpuid flags.

So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
to say:

(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem


Or let qemu do it automatically for you.



In order to be compatible.

Regards,

Anthony Liguori





Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Avi Kivity

On 01/07/2010 11:40 AM, Dor Laor wrote:

There's no such thing as Nehalem.



Intel were ok with it. Again, you can name it corei7 or
xeon34234234234, I don't care; the principle remains the same.




There are several processors belonging to the Nehalem family, and each
has different features.




What's not simple about the above 4 options?


If a qemu/kvm/processor combo doesn't support a feature (say, nx) we 
have to remove it from the migration pool even if the Nehalem processor 
class says it's included.  Or else not admit that combination into the 
migration pool in the first place.
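
A pool-admission check of that kind could be as small as the following sketch (the flag list is the Nehalem row from John's table; the script itself is hypothetical):

  # refuse this node entry into the pool if any promised flag is missing
  for f in sse2 cx16 ssse3 sse4_1 sse4_2 popcnt; do
      grep -qw "$f" /proc/cpuinfo || { echo "missing $f, not admitting"; exit 1; }
  done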


What's a better alternative (one that ensures users understand it and use
it, and that keeps guest MSI and even a Skype application happy)?




Have management scan new nodes and classify them.



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Avi Kivity

On 01/07/2010 01:44 PM, Dor Laor wrote:

So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
to say:

(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem



Or let qemu do it automatically for you.


qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on 
another node.




Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/07/2010 02:00 PM, Avi Kivity wrote:

On 01/07/2010 01:44 PM, Dor Laor wrote:

So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
to say:

(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem



Or let qemu do it automatically for you.


qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on
another node.



We can live with it: either have qemu derive the kernel version from
another existing feature, or query uname.


Alternatively, the matching libvirt package can be the one adding or 
removing it in the right distribution.
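
A wrapper along those lines might look like this sketch (the kernel versions and the -syscall workaround are taken from Anthony's example above; the script is hypothetical):

  #!/bin/sh
  # mask syscall everywhere: 2.6.18-era kvm filters it out anyway,
  # so the guest sees the same cpuid on both kernels
  case "$(uname -r)" in
      2.6.18*) exec qemu -enable-kvm -cpu Nehalem "$@" ;;
      *)       exec qemu -enable-kvm -cpu Nehalem,-syscall "$@" ;;
  esac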



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Anthony Liguori

On 01/07/2010 06:20 AM, Dor Laor wrote:

On 01/07/2010 02:00 PM, Avi Kivity wrote:

On 01/07/2010 01:44 PM, Dor Laor wrote:

So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
to say:

(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem



Or let qemu do it automatically for you.


qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on
another node.



We can live with it: either have qemu derive the kernel version
from another existing feature, or query uname.


Alternatively, the matching libvirt package can be the one adding or 
removing it in the right distribution.


There's another option.

Make cpuid information part of live migration protocol, and then support 
something like -cpu Xeon-3550.  We would remember the exact cpuid mask 
we present to the guest and then we could validate that we can obtain 
the same mask on the destination.
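
On the destination, that validation boils down to a set comparison; a sketch, assuming the recorded mask and the locally obtainable mask have been dumped as sorted flag lists (guest.flags and dest.flags are hypothetical names):

  # flags the guest was given but this host cannot provide
  comm -23 guest.flags dest.flags > missing.flags
  if [ -s missing.flags ]; then
      echo "refusing migration, missing: $(tr '\n' ' ' < missing.flags)"
      exit 1
  fi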


Regards,

Anthony Liguori




Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Avi Kivity

On 01/07/2010 02:33 PM, Anthony Liguori wrote:


There's another option.

Make cpuid information part of live migration protocol, and then 
support something like -cpu Xeon-3550.  We would remember the exact 
cpuid mask we present to the guest and then we could validate that we 
can obtain the same mask on the destination.


Currently, our policy is to only migrate dynamic (from the guest's point 
of view) state, and specify static state on the command line [1].


I think your suggestion makes a lot of sense, but I'd like to expand it 
to move all guest state, whether dynamic or static.  So '-m 1G' would be 
migrated as well (but not -mem-path).  Similarly, in -drive 
file=...,if=ide,index=1, everything but file=... would be migrated.


This has an advantage wrt hotplug: since qemu is responsible for 
migrating all guest visible information, the migrator is no longer 
responsible for replaying hotplug events in the exact sequence they 
happened.


In short, I think we should apply your suggestion as broadly as possible.

[1] cpuid state is actually dynamic; repeated cpuid instruction 
execution with the same operands can return different results.  kvm 
supports querying and setting this state.




Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Daniel P. Berrange
On Thu, Jan 07, 2010 at 02:40:34PM +0200, Avi Kivity wrote:
 On 01/07/2010 02:33 PM, Anthony Liguori wrote:
 
 There's another option.
 
 Make cpuid information part of live migration protocol, and then 
 support something like -cpu Xeon-3550.  We would remember the exact 
 cpuid mask we present to the guest and then we could validate that we 
 can obtain the same mask on the destination.
 
 Currently, our policy is to only migrate dynamic (from the guest's point 
 of view) state, and specify static state on the command line [1].
 
 I think your suggestion makes a lot of sense, but I'd like to expand it 
 to move all guest state, whether dynamic or static.  So '-m 1G' would be 
 migrated as well (but not -mem-path).  Similarly, in -drive 
 file=...,if=ide,index=1, everything but file=... would be migrated.
 
 This has an advantage wrt hotplug: since qemu is responsible for 
 migrating all guest visible information, the migrator is no longer 
 responsible for replaying hotplug events in the exact sequence they 
 happened.

With the introduction of the new -device support, there's no need to
replay hotplug events in order any more. Instead just use static
PCI addresses when starting the guest, and the same addresses after
migration. You could argue that QEMU should preserve the addressing
automatically during migration, but apps need to do it manually
already to keep addresses stable across power-offs, so doing it manually
across migration too is no extra burden.

Regards,
Daniel


Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Avi Kivity

On 01/07/2010 02:47 PM, Daniel P. Berrange wrote:


With the introduction of the new -device support, there's no need to
replay hotplug events in order any more. Instead just use static
PCI addresses when starting the guest, and the same addresses after
migration. You could argue that QEMU should preserve the addressing
automatically during migration, but apps need to do it manually
already to keep addresses stable across power-offs, so doing it manually
across migration too is no extra burden.

   


That's true - shutdown and startup are an equivalent problem to live 
migration from that point of view.




Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Anthony Liguori

On 01/07/2010 06:40 AM, Avi Kivity wrote:

On 01/07/2010 02:33 PM, Anthony Liguori wrote:


There's another option.

Make cpuid information part of live migration protocol, and then 
support something like -cpu Xeon-3550.  We would remember the exact 
cpuid mask we present to the guest and then we could validate that we 
can obtain the same mask on the destination.


Currently, our policy is to only migrate dynamic (from the guest's 
point of view) state, and specify static state on the command line [1].


I think your suggestion makes a lot of sense, but I'd like to expand 
it to move all guest state, whether dynamic or static.  So '-m 1G' 
would be migrated as well (but not -mem-path).  Similarly, in -drive 
file=...,if=ide,index=1, everything but file=... would be migrated.


Yes, I agree with this and it should be in the form of an fdt.  This 
means we need full qdev conversion.


But I think cpuid is somewhere in the middle with respect to static vs. 
dynamic.  For instance, -cpu host is very dynamic in that you get very 
different results on different systems.  Likewise, because of kvm 
filtering, even -cpu qemu64 can be dynamic.


So if we didn't have filtering and -cpu host, I'd agree that it's 
totally static but I think in the current state, it's dynamic.


This has an advantage wrt hotplug: since qemu is responsible for 
migrating all guest visible information, the migrator is no longer 
responsible for replaying hotplug events in the exact sequence they 
happened.


Yup, 100% in agreement as a long term goal.


In short, I think we should apply your suggestion as broadly as possible.

[1] cpuid state is actually dynamic; repeated cpuid instruction 
execution with the same operands can return different results.  kvm 
supports querying and setting this state.


Yes, and we save some cpuid state in cpu.  We just don't save all of it.

Regards,

Anthony Liguori



Re: [Qemu-devel] cpuid problem in upstream qemu with kvm

2010-01-07 Thread Dor Laor

On 01/07/2010 03:14 PM, Anthony Liguori wrote:

On 01/07/2010 06:40 AM, Avi Kivity wrote:

On 01/07/2010 02:33 PM, Anthony Liguori wrote:


There's another option.

Make cpuid information part of live migration protocol, and then
support something like -cpu Xeon-3550. We would remember the exact
cpuid mask we present to the guest and then we could validate that we
can obtain the same mask on the destination.


It solves controlling the destination qemu execution all right, but it does
not change the initial spawning of the original guest, i.e. knowing whether
',-syscall' is needed or not.


Anyway, I'm in favor of it too.



Currently, our policy is to only migrate dynamic (from the guest's
point of view) state, and specify static state on the command line [1].

I think your suggestion makes a lot of sense, but I'd like to expand
it to move all guest state, whether dynamic or static. So '-m 1G'
would be migrated as well (but not -mem-path). Similarly, in -drive
file=...,if=ide,index=1, everything but file=... would be migrated.


Yes, I agree with this and it should be in the form of an fdt. This
means we need full qdev conversion.

But I think cpuid is somewhere in the middle with respect to static vs.
dynamic. For instance, -cpu host is very dynamic in that you get very
different results on different systems. Likewise, because of kvm
filtering, even -cpu qemu64 can be dynamic.

So if we didn't have filtering and -cpu host, I'd agree that it's
totally static but I think in the current state, it's dynamic.


This has an advantage wrt hotplug: since qemu is responsible for
migrating all guest visible information, the migrator is no longer
responsible for replaying hotplug events in the exact sequence they
happened.


Yup, 100% in agreement as a long term goal.


In short, I think we should apply your suggestion as broadly as possible.

[1] cpuid state is actually dynamic; repeated cpuid instruction
execution with the same operands can return different results. kvm
supports querying and setting this state.


Yes, and we save some cpuid state in cpu. We just don't save all of it.

Regards,

Anthony Liguori


