On Thu, Jan 07, 2010 at 10:03:28AM +0200, Dor Laor wrote:
> On 01/06/2010 05:16 PM, Anthony Liguori wrote:
> >On 01/06/2010 08:48 AM, Dor Laor wrote:
> >>On 01/06/2010 04:32 PM, Avi Kivity wrote:
> >>>On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
> >>>>>We can probably default -enable-kvm to -cpu host, as long as we
> >>>>>explain
> >>>>>very carefully that if users wish to preserve cpu features across
> >>>>>upgrades, they can't depend on the default.
> >>>>Hardware upgrades or software upgrades?
> >>>
> >>>Yes.
> >>>
> >>
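(For reference, the practical difference is something like this; the
invocations below are illustrative, not taken from this thread:

    # current implicit default with -enable-kvm
    qemu-system-x86_64 -enable-kvm -cpu qemu64 disk.img
    # proposed default: expose the host CPU's feature set to the guest
    qemu-system-x86_64 -enable-kvm -cpu host disk.img

With '-cpu host' the guest CPUID tracks whatever the host happens to
support, so it is stable across neither hardware nor software upgrades,
hence the "Yes" above.)
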
> >>I just want to remind everyone that the main motivation for using -cpu
> >>realModelThatWasOnceShipped is to provide correct cpu emulation for
> >>the guest. Using an arbitrary qemu64|kvm64+flag1-flag2 combination can
> >>really cause trouble for the guest OS or guest apps.
> >>
> >>On top of -cpu nehalem we can always add fancy features like x2apic, etc.
> >
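(Illustrative of what Dor describes, assuming a predefined Nehalem-class
model existed; the model name is hypothetical here, the +flag syntax is
qemu's usual one:

    qemu-system-x86_64 -enable-kvm -cpu Nehalem,+x2apic disk.img

The model supplies the baseline vendor/family/model/stepping and feature
flags, and '+x2apic' layers a single extra feature on top.)
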
> >I think it boils down to how people are going to use this.
> >
> >For individuals, code names like Nehalem are too obscure. From my own
> >personal experience, even power users often have no clue whether their
> >processor is a Nehalem or not.
> >
> >For management tools, Nehalem is a somewhat imprecise target because it
> >covers a wide range of potential processors. In general, I think what we
> >really need to do is simplify the process of going from "here is the
> >output of /proc/cpuinfo for 100 nodes" to "what do I need to pass to
> >qemu so that migration always works across these systems".
> >
> >I don't think -cpu nehalem really helps with that problem. -cpu none
> >helps a bit, but I hope we can find something nicer.
> 
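(A rough sketch of that workflow, assuming the common denominator is
simply the intersection of the 'flags' lines; host names are made up:

    # collect the normalized flag set of each host
    for h in host1 host2 host3; do
        ssh "$h" "grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2" \
            | tr ' ' '\n' | sed '/^$/d' | sort -u > "flags.$h"
    done
    # keep only the flags present on all three hosts
    sort flags.host1 flags.host2 flags.host3 | uniq -c \
        | awk '$1 == 3 { print $2 }'

Turning the resulting set back into a -cpu model+flags string that qemu
accepts is the step nobody should have to do by hand.)
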
> We can debate the exact name/model used to represent the Nehalem
> family; I don't have an issue with that, and actually Intel and AMD
> should define it.
> 
> There are two main motivations behind the above approach:
> 1. Sound guest cpu definition.
>    Using a predefined model should automatically set all the relevant
>    vendor/stepping/cpuid flags/cache sizes/etc.
>    We can't just leave every management application to deal with this;
>    getting it wrong breaks guest OSes/apps. For instance, MSI support
>    in Windows guests relies on the stepping (see the sketch after this
>    list).
> 
> 2. Simplifying things for end users and mgmt tools.
>    qemu/kvm has the best knowledge of these low-level details. If we
>    push the problem up the stack, it eventually reaches the user: the
>    end user, not a 'qemu-devel user', who is far more capable than the
>    average user.
> 
>    This means such users would have to know what popcount is, and
>    whether adding sse4.2 limits which hosts a guest can migrate to
>    (also sketched below).
> 
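(To make point 1 concrete, a predefined model would carry roughly this
kind of data. The stanza below is only a sketch in the spirit of qemu's
cpu definitions; field names and values are illustrative:

    [cpudef]
       name = "Nehalem"
       vendor = "GenuineIntel"
       family = 6
       model = 26
       stepping = 3
       feature_ecx = "popcnt sse4.2 sse4.1 ssse3 sse3"
       model_id = "Intel Core i7 9xx (Nehalem class)"

And for point 2, the failure mode to spare users is hand-crafting lines
like:

    qemu-system-x86_64 -enable-kvm -cpu qemu64,+sse4.1,+sse4.2,+popcnt disk.img

without knowing which of those flags quietly pins the guest to a subset
of the hosts in the pool.)
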
> This is exactly what VMware is doing:
>  - Intel CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>  - AMD CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
> 
> Why should we reinvent the wheel (qemu64..)? Let's learn from their
> experience.

NB, be careful to distinguish the different levels of VMware's mgmt stack. In
terms of guest configuration, the VMware ESX APIs require the management app
to specify the raw CPUID masks. For VirtualCenter VMotion they defined a
handful of common Intel/AMD CPU sets, and will automatically classify hosts
into one of these sets, using that to pick a default CPUID mask when the
guest does not have an explicit one in its config. This gives them good
out-of-the-box default behaviour, while still allowing mgmt apps 100%
control over each guest's CPUID should they want it.
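
(For anyone who has not read those KB articles: the per-guest override
ends up as per-bit mask strings in the guest's .vmx config, along these
lines; the exact register and value below are illustrative:

    cpuid.1.ecx = "----:----:----:----:----:----:---0:----"

Each character controls one CPUID bit: '-' passes the host/default value
through, while '0' or '1' forces the bit.)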

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

