Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Thu, 14 Feb 2013 17:56:58 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
On Thu, Feb 14, 2013 at 08:16:32PM +0100, Igor Mammedov wrote:
[...]

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+    x86_def_t *def = data;
+    int i;
+    static const char *versioned_models[] = { "qemu32", "qemu64", "athlon" };
+
+    memcpy(&xcc->info, def, sizeof(x86_def_t));
+    xcc->info.ext_features |= CPUID_EXT_HYPERVISOR;

If this is TCG-specific now, we could make the class match reality and mask TCG_FEATURES/TCG_EXT_FEATURES/etc. here. That's what happens in practice; e.g. "-cpu SandyBridge" in TCG mode is a completely different CPU today.

Well, this function is shared between TCG and KVM; it's common code for both. Which asks for one more TCG class_init function for TCG-specific behavior.

Having both TCG-specific and KVM-specific subclasses, instead of making the KVM class inherit from the TCG class, would make sense to me, as we may have TCG-specific behavior in other places too. Or we could do the memcpy() again in the KVM class_init function. Or we could set a different void* pointer on the TCG class, with an already-filtered x86_def_t object. There are multiple possible solutions.

But could it be a separate patch (i.e. fixing TCG filtering)? I think just moving TCG filtering might cause a regression, and it needs to be worked on separately.

I'm OK with doing that in a separate patch, as it is not a bug fix or behavior change. But it would be nice to do it before we introduce the feature properties, to make the reported defaults right from the beginning.

It's a behavior change: if we move filtering from realizefn to class_init, the user would be able to add features that are not visible to the guest now.

The ordering now is: class_init, initfn, setting defaults, custom features, realizefn (filtering). The ordering with static properties would be: class_init, static property defaults, global properties (custom features), x86_cpu_initfn, realizefn. In that scenario filtering comes before custom features, which is not necessarily bad, and it may be nice to consciously add/test features before enabling them in the filter for everyone, but it's a behavior change.

We should keep the filtering on realize because of custom -cpu strings, but if we filter on class_init too, we will make the class introspection reflect reality more closely. Wasn't this one of the main reasons you argued (and convinced me) for having separate subclasses? Also, if we do this we will be able to make "enforce" work for TCG too. For example: if "enforce" worked for TCG today, people who want to work around the existing n270 MOVBE bug could use "-cpu n270,+movbe,enforce" to make sure the QEMU version they are using really accepts movbe.

I thought you proposed to move it; if we duplicate it, then it should be fine.

[...]

+#ifdef CONFIG_KVM
+static void x86_cpu_kvm_def_class_init(ObjectClass *oc, void *data)
+{
+    uint32_t eax, ebx, ecx, edx;
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+
+    x86_cpu_def_class_init(oc, data);
+    /* sysenter isn't supported in compatibility mode on AMD,
+     * syscall isn't supported in compatibility mode on Intel.
+     * Normally we advertise the actual CPU vendor, but you can
+     * override this using the 'vendor' property if you want to use
+     * KVM's sysenter/syscall emulation in compatibility mode and
+     * when doing cross vendor migration
+     */
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);

Cool, this is exactly what I was going to suggest, to avoid depending on the "host" CPU class and KVM initialization.
I see two options when making the vendor property static, and I don't know which one is preferable. One solution is the one in this patch: call host_cpuid() here in class_init, and have a different property default depending on the host CPU.

I prefer this one ^^^.

Another solution is to default to vendor=host, and have instance_init understand vendor=host as "use the host CPU vendor". This would make the property default really static (not depending on the host CPU). I am inclined toward the latter, because I am assuming that the QOM design expects class_init results to be completely static. This is not as bad as depending on KVM initialization, but it may be an unexpected surprise, or even something not allowed by QOM.

That's where I have to disagree :) You might have a point with "static" if our goal is for source-level introspection to work. But we have a number of issues with that:
* model_id is not static for some CPUs; 'vendor' is the same, just for a different set of classes. [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
I'm happy because the replies are getting shorter! :-) (But I'm unhappy because my questions about QOM design requirements are not being answered yet.)

On Fri, Feb 15, 2013 at 12:49:46PM +0100, Igor Mammedov wrote:
On Thu, 14 Feb 2013 17:56:58 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
On Thu, Feb 14, 2013 at 08:16:32PM +0100, Igor Mammedov wrote:
[...]

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+    x86_def_t *def = data;
+    int i;
+    static const char *versioned_models[] = { "qemu32", "qemu64", "athlon" };
+
+    memcpy(&xcc->info, def, sizeof(x86_def_t));
+    xcc->info.ext_features |= CPUID_EXT_HYPERVISOR;

If this is TCG-specific now, we could make the class match reality and mask TCG_FEATURES/TCG_EXT_FEATURES/etc. here. That's what happens in practice; e.g. "-cpu SandyBridge" in TCG mode is a completely different CPU today.

Well, this function is shared between TCG and KVM; it's common code for both. Which asks for one more TCG class_init function for TCG-specific behavior.

Having both TCG-specific and KVM-specific subclasses, instead of making the KVM class inherit from the TCG class, would make sense to me, as we may have TCG-specific behavior in other places too. Or we could do the memcpy() again in the KVM class_init function. Or we could set a different void* pointer on the TCG class, with an already-filtered x86_def_t object. There are multiple possible solutions.

But could it be a separate patch (i.e. fixing TCG filtering)? I think just moving TCG filtering might cause a regression, and it needs to be worked on separately.

I'm OK with doing that in a separate patch, as it is not a bug fix or behavior change. But it would be nice to do it before we introduce the feature properties, to make the reported defaults right from the beginning.

It's a behavior change: if we move filtering from realizefn to class_init, the user would be able to add features that are not visible to the guest now.

The ordering now is: class_init, initfn, setting defaults, custom features, realizefn (filtering). The ordering with static properties would be: class_init, static property defaults, global properties (custom features), x86_cpu_initfn, realizefn. In that scenario filtering comes before custom features, which is not necessarily bad, and it may be nice to consciously add/test features before enabling them in the filter for everyone, but it's a behavior change.

We should keep the filtering on realize because of custom -cpu strings, but if we filter on class_init too, we will make the class introspection reflect reality more closely. Wasn't this one of the main reasons you argued (and convinced me) for having separate subclasses? Also, if we do this we will be able to make "enforce" work for TCG too. For example: if "enforce" worked for TCG today, people who want to work around the existing n270 MOVBE bug could use "-cpu n270,+movbe,enforce" to make sure the QEMU version they are using really accepts movbe.

I thought you proposed to move it; if we duplicate it, then it should be fine.

I was, but then you pointed out that it would change behavior. :-)

[...]

+#ifdef CONFIG_KVM
+static void x86_cpu_kvm_def_class_init(ObjectClass *oc, void *data)
+{
+    uint32_t eax, ebx, ecx, edx;
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+
+    x86_cpu_def_class_init(oc, data);
+    /* sysenter isn't supported in compatibility mode on AMD,
+     * syscall isn't supported in compatibility mode on Intel.
+     * Normally we advertise the actual CPU vendor, but you can
+     * override this using the 'vendor' property if you want to use
+     * KVM's sysenter/syscall emulation in compatibility mode and
+     * when doing cross vendor migration
+     */
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);

Cool, this is exactly what I was going to suggest, to avoid depending on the "host" CPU class and KVM initialization.

I see two options when making the vendor property static, and I don't know which one is preferable. One solution is the one in this patch: call host_cpuid() here in class_init, and have a different property default depending on the host CPU.

I prefer this one ^^^.

Another solution is to default to vendor=host, and have instance_init understand vendor=host as "use the host CPU vendor". This would make the property default really static (not depending on the host CPU). I am inclined toward the latter, because I am assuming that the QOM design expects class_init results to be completely static. [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Wed, Feb 13, 2013 at 04:33:33PM +0100, Igor Mammedov wrote:

From: Andreas Färber <afaer...@suse.de>

Depends on http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg00677.html

Move the x86_def_t definition to a header and embed it into X86CPUClass. Register types per built-in model definition. Move version initialization from x86_cpudef_setup() to class_init(). Move the default setting of CPUID_EXT_HYPERVISOR to class_init(). Move KVM-specific overrides of built-in CPU defaults into a KVM-specific x86_cpu_kvm_def_class_init(), and select the TCG vs. KVM class of CPU to create at runtime in x86_cpu_class_by_name(), depending on kvm_enabled().

Inline cpu_x86_register() into the X86CPU initfn. Since instance_init cannot report errors, die there if some of the default values are incorrect, instead of ignoring errors.

Replace cpu_x86_find_by_name() with x86_cpu_class_by_name(). Move handling of the KVM host vendor override from cpu_x86_find_by_name() to kvm_arch_init() and class_init(). Use the TYPE_X86_CPU class to communicate KVM-specific defaults to other subclasses.

Register the host-kvm-{i386,x86_64}-cpu type from KVM code, to avoid #ifdefs, and only when KVM is enabled, to avoid workarounds in the name-to-class lookup code in x86_cpu_class_by_name(). Make kvm_cpu_fill_host() into a host-specific class_init and inline cpu_x86_fill_model_id(). Let kvm_check_features_against_host() obtain host-kvm-{i386,x86_64}-cpu for comparison.

Signed-off-by: Andreas Färber <afaer...@suse.de>
Signed-off-by: Igor Mammedov <imamm...@redhat.com>
---
v5:
 * remove the special case for the 'host' CPU check in x86_cpu_class_by_name(): the 'host' CPU will not find anything when not in KVM mode, and will return the 'host' CPU class in KVM mode; i.e. treat it as a regular CPU.
 * register KVM-specific subclasses for built-in CPU models.
 * abort() in instance_init() if a property setter fails to set a default value.

v4:
 * set an error if the CPU model is not found, and goto out;
 * copy the vendor override from the 'host' CPU class in a subclass's class_init() if the 'host' CPU class is available.
 * register the type TYPE_HOST_X86_CPU in kvm_arch_init(); this type should be available only in KVM mode, and we haven't printed it in "-cpu ?" output so far, so we can continue doing so. It's not really confusing to show the 'host' CPU (even if we do) when KVM is not enabled.
---
 target-i386/cpu-qom.h |  24 +
 target-i386/cpu.c     | 348 +++--
 target-i386/cpu.h     |   5 +-
 target-i386/kvm.c     |  72 ++
 4 files changed, 232 insertions(+), 217 deletions(-)

[...]

diff --git a/target-i386/cpu-qom.h b/target-i386/cpu-qom.h
index 48e6b54..c8f320d 100644
--- a/target-i386/cpu-qom.h
+++ b/target-i386/cpu-qom.h
@@ -30,6 +30,27 @@
 #define TYPE_X86_CPU "i386-cpu"
 #endif

+#define TYPE_HOST_X86_CPU "host-kvm-" TYPE_X86_CPU
+
[...]
     if (name == NULL) {
-        return -1;
-    }
-    if (kvm_enabled() && strcmp(name, "host") == 0) {
-        kvm_cpu_fill_host(x86_cpu_def);
-        return 0;
+        return NULL;
     }
-    for (i = 0; i < ARRAY_SIZE(builtin_x86_defs); i++) {
-        def = &builtin_x86_defs[i];
-        if (strcmp(name, def->name) == 0) {
-            memcpy(x86_cpu_def, def, sizeof(*def));
-            /* sysenter isn't supported in compatibility mode on AMD,
-             * syscall isn't supported in compatibility mode on Intel.
-             * Normally we advertise the actual CPU vendor, but you can
-             * override this using the 'vendor' property if you want to use
-             * KVM's sysenter/syscall emulation in compatibility mode and
-             * when doing cross vendor migration
-             */
-            if (kvm_enabled()) {
-                uint32_t ebx = 0, ecx = 0, edx = 0;
-                host_cpuid(0, 0, NULL, &ebx, &ecx, &edx);
-                x86_cpu_vendor_words2str(x86_cpu_def->vendor, ebx, edx, ecx);
-            }
-            return 0;
-        }
+    if (kvm_enabled()) {
+        typename = g_strdup_printf("%s-kvm-" TYPE_X86_CPU, name);

* Could we use the -tcg suffix for the non-KVM mode?
* Can we make the code fall back to no-suffix CPU names?

We need to add the -kvm suffix (and maybe a -tcg suffix) to keep compatibility with existing CPU models, but maybe we should deprecate the automatic -tcg/-kvm suffixing and ask users to specify the full CPU name.

[...]

@@ -2234,7 +2075,57 @@ static void x86_cpu_initfn(Object *obj)
     cpu_set_debug_excp_handler(breakpoint_handler);
 #endif
 }
+
+    if (error) {
+        fprintf(stderr, "%s\n", error_get_pretty(error));
+        error_free(error);
+        abort();
+    }

Good idea, we should have done that a long time ago, to avoid surprises if one day the property setting fails.

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc); [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Thu, 14 Feb 2013 09:44:59 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
On Wed, Feb 13, 2013 at 04:33:33PM +0100, Igor Mammedov wrote:

From: Andreas Färber <afaer...@suse.de>

Depends on http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg00677.html

Move the x86_def_t definition to a header and embed it into X86CPUClass. Register types per built-in model definition. Move version initialization from x86_cpudef_setup() to class_init(). Move the default setting of CPUID_EXT_HYPERVISOR to class_init(). Move KVM-specific overrides of built-in CPU defaults into a KVM-specific x86_cpu_kvm_def_class_init(), and select the TCG vs. KVM class of CPU to create at runtime in x86_cpu_class_by_name(), depending on kvm_enabled().

Inline cpu_x86_register() into the X86CPU initfn. Since instance_init cannot report errors, die there if some of the default values are incorrect, instead of ignoring errors.

Replace cpu_x86_find_by_name() with x86_cpu_class_by_name(). Move handling of the KVM host vendor override from cpu_x86_find_by_name() to kvm_arch_init() and class_init(). Use the TYPE_X86_CPU class to communicate KVM-specific defaults to other subclasses.

Register the host-kvm-{i386,x86_64}-cpu type from KVM code, to avoid #ifdefs, and only when KVM is enabled, to avoid workarounds in the name-to-class lookup code in x86_cpu_class_by_name(). Make kvm_cpu_fill_host() into a host-specific class_init and inline cpu_x86_fill_model_id(). Let kvm_check_features_against_host() obtain host-kvm-{i386,x86_64}-cpu for comparison.

Signed-off-by: Andreas Färber <afaer...@suse.de>
Signed-off-by: Igor Mammedov <imamm...@redhat.com>
---
v5:
 * remove the special case for the 'host' CPU check in x86_cpu_class_by_name(): the 'host' CPU will not find anything when not in KVM mode, and will return the 'host' CPU class in KVM mode; i.e. treat it as a regular CPU.
 * register KVM-specific subclasses for built-in CPU models.
 * abort() in instance_init() if a property setter fails to set a default value.

v4:
 * set an error if the CPU model is not found, and goto out;
 * copy the vendor override from the 'host' CPU class in a subclass's class_init() if the 'host' CPU class is available.
 * register the type TYPE_HOST_X86_CPU in kvm_arch_init(); this type should be available only in KVM mode, and we haven't printed it in "-cpu ?" output so far, so we can continue doing so. It's not really confusing to show the 'host' CPU (even if we do) when KVM is not enabled.
---
 target-i386/cpu-qom.h |  24 +
 target-i386/cpu.c     | 348 +++--
 target-i386/cpu.h     |   5 +-
 target-i386/kvm.c     |  72 ++
 4 files changed, 232 insertions(+), 217 deletions(-)

[...]

diff --git a/target-i386/cpu-qom.h b/target-i386/cpu-qom.h
index 48e6b54..c8f320d 100644
--- a/target-i386/cpu-qom.h
+++ b/target-i386/cpu-qom.h
@@ -30,6 +30,27 @@
 #define TYPE_X86_CPU "i386-cpu"
 #endif

+#define TYPE_HOST_X86_CPU "host-kvm-" TYPE_X86_CPU
+
[...]
     if (name == NULL) {
-        return -1;
-    }
-    if (kvm_enabled() && strcmp(name, "host") == 0) {
-        kvm_cpu_fill_host(x86_cpu_def);
-        return 0;
+        return NULL;
     }
-    for (i = 0; i < ARRAY_SIZE(builtin_x86_defs); i++) {
-        def = &builtin_x86_defs[i];
-        if (strcmp(name, def->name) == 0) {
-            memcpy(x86_cpu_def, def, sizeof(*def));
-            /* sysenter isn't supported in compatibility mode on AMD,
-             * syscall isn't supported in compatibility mode on Intel.
-             * Normally we advertise the actual CPU vendor, but you can
-             * override this using the 'vendor' property if you want to use
-             * KVM's sysenter/syscall emulation in compatibility mode and
-             * when doing cross vendor migration
-             */
-            if (kvm_enabled()) {
-                uint32_t ebx = 0, ecx = 0, edx = 0;
-                host_cpuid(0, 0, NULL, &ebx, &ecx, &edx);
-                x86_cpu_vendor_words2str(x86_cpu_def->vendor, ebx, edx, ecx);
-            }
-            return 0;
-        }
+    if (kvm_enabled()) {
+        typename = g_strdup_printf("%s-kvm-" TYPE_X86_CPU, name);

* Could we use the -tcg suffix for the non-KVM mode?

Sure, I'll add it for the next re-spin.

* Can we make the code fall back to no-suffix CPU names?

We need to add the -kvm suffix (and maybe a -tcg suffix) to keep compatibility with existing CPU models, but maybe we should deprecate the automatic -tcg/-kvm suffixing and ask users to specify the full CPU name.

I'm not sure that I got you right. Did you mean something like this:

    if (kvm) {
        typename = name-kvm-...
    } else {
        typename = name-tcg-...
    }
    oc = object_class_by_name(typename);
    if (!oc) {
        oc = object_class_by_name(name);
    }

[...]

@@ -2234,7 +2075,57 @@ static void [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Thu, Feb 14, 2013 at 04:13:22PM +0100, Igor Mammedov wrote:
On Thu, 14 Feb 2013 09:44:59 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
[...]

+    if (kvm_enabled()) {
+        typename = g_strdup_printf("%s-kvm-" TYPE_X86_CPU, name);

* Could we use the -tcg suffix for the non-KVM mode?

Sure, I'll add it for the next re-spin.

* Can we make the code fall back to no-suffix CPU names?

We need to add the -kvm suffix (and maybe a -tcg suffix) to keep compatibility with existing CPU models, but maybe we should deprecate the automatic -tcg/-kvm suffixing and ask users to specify the full CPU name.

I'm not sure that I got you right. Did you mean something like this:

    if (kvm) {
        typename = name-kvm-...
    } else {
        typename = name-tcg-...
    }
    oc = object_class_by_name(typename);
    if (!oc) {
        oc = object_class_by_name(name);
    }

Yes, exactly.

[...]

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+    x86_def_t *def = data;
+    int i;
+    static const char *versioned_models[] = { "qemu32", "qemu64", "athlon" };
+
+    memcpy(&xcc->info, def, sizeof(x86_def_t));
+    xcc->info.ext_features |= CPUID_EXT_HYPERVISOR;

If this is TCG-specific now, we could make the class match reality and mask TCG_FEATURES/TCG_EXT_FEATURES/etc. here. That's what happens in practice; e.g. "-cpu SandyBridge" in TCG mode is a completely different CPU today.

Well, this function is shared between TCG and KVM; it's common code for both. Which asks for one more TCG class_init function for TCG-specific behavior.

Having both TCG-specific and KVM-specific subclasses, instead of making the KVM class inherit from the TCG class, would make sense to me, as we may have TCG-specific behavior in other places too. Or we could do the memcpy() again in the KVM class_init function. Or we could set a different void* pointer on the TCG class, with an already-filtered x86_def_t object. There are multiple possible solutions.

But could it be a separate patch (i.e. fixing TCG filtering)? I think just moving TCG filtering might cause a regression, and it needs to be worked on separately.

I'm OK with doing that in a separate patch, as it is not a bug fix or behavior change. But it would be nice to do it before we introduce the feature properties, to make the reported defaults right from the beginning.

[...]

+#ifdef CONFIG_KVM
+static void x86_cpu_kvm_def_class_init(ObjectClass *oc, void *data)
+{
+    uint32_t eax, ebx, ecx, edx;
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+
+    x86_cpu_def_class_init(oc, data);
+    /* sysenter isn't supported in compatibility mode on AMD,
+     * syscall isn't supported in compatibility mode on Intel.
+     * Normally we advertise the actual CPU vendor, but you can
+     * override this using the 'vendor' property if you want to use
+     * KVM's sysenter/syscall emulation in compatibility mode and
+     * when doing cross vendor migration
+     */
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);

Cool, this is exactly what I was going to suggest, to avoid depending on the "host" CPU class and KVM initialization.

I see two options when making the vendor property static, and I don't know which one is preferable. One solution is the one in this patch: call host_cpuid() here in class_init, and have a different property default depending on the host CPU.

I prefer this one ^^^.

Another solution is to default to vendor=host, and have instance_init understand vendor=host as "use the host CPU vendor". This would make the property default really static (not depending on the host CPU). I am inclined toward the latter, because I am assuming that the QOM design expects class_init results to be completely static. This is not as bad as depending on KVM initialization, but it may be an unexpected surprise, or even something not allowed by QOM.

That's where I have to disagree :) You might have a point with "static" if our goal is for source-level introspection to work. But we have a number of issues with that:
* model_id is not static for some CPUs; 'vendor' is the same, just for a different set of classes.

model_id is static for a given QEMU version, isn't it? This may be OK (but I am _also_ not sure about this case).

* we generate subclasses dynamically at runtime, which makes source-level introspection impossible for them.

It's not source-level introspection I am looking for, but runtime introspection that won't surprise other components in the system by changing unexpectedly (e.g. when the host CPU changes).

I think QOM is aiming towards run-time introspection of classes/objects, and that allows us to define defaults (even generate them dynamically) at class_init() time; they don't have to be static constants.

Yes, QOM introspection is run-time based. [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Thu, 14 Feb 2013 13:52:59 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
On Thu, Feb 14, 2013 at 04:13:22PM +0100, Igor Mammedov wrote:
On Thu, 14 Feb 2013 09:44:59 -0200 Eduardo Habkost <ehabk...@redhat.com> wrote:
[...]

+    if (kvm_enabled()) {
+        typename = g_strdup_printf("%s-kvm-" TYPE_X86_CPU, name);

* Could we use the -tcg suffix for the non-KVM mode?

Sure, I'll add it for the next re-spin.

* Can we make the code fall back to no-suffix CPU names?

We need to add the -kvm suffix (and maybe a -tcg suffix) to keep compatibility with existing CPU models, but maybe we should deprecate the automatic -tcg/-kvm suffixing and ask users to specify the full CPU name.

I'm not sure that I got you right. Did you mean something like this:

    if (kvm) {
        typename = name-kvm-...
    } else {
        typename = name-tcg-...
    }
    oc = object_class_by_name(typename);
    if (!oc) {
        oc = object_class_by_name(name);
    }

Yes, exactly.

[...]

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+    x86_def_t *def = data;
+    int i;
+    static const char *versioned_models[] = { "qemu32", "qemu64", "athlon" };
+
+    memcpy(&xcc->info, def, sizeof(x86_def_t));
+    xcc->info.ext_features |= CPUID_EXT_HYPERVISOR;

If this is TCG-specific now, we could make the class match reality and mask TCG_FEATURES/TCG_EXT_FEATURES/etc. here. That's what happens in practice; e.g. "-cpu SandyBridge" in TCG mode is a completely different CPU today.

Well, this function is shared between TCG and KVM; it's common code for both. Which asks for one more TCG class_init function for TCG-specific behavior.

Having both TCG-specific and KVM-specific subclasses, instead of making the KVM class inherit from the TCG class, would make sense to me, as we may have TCG-specific behavior in other places too. Or we could do the memcpy() again in the KVM class_init function. Or we could set a different void* pointer on the TCG class, with an already-filtered x86_def_t object. There are multiple possible solutions.

But could it be a separate patch (i.e. fixing TCG filtering)? I think just moving TCG filtering might cause a regression, and it needs to be worked on separately.

I'm OK with doing that in a separate patch, as it is not a bug fix or behavior change. But it would be nice to do it before we introduce the feature properties, to make the reported defaults right from the beginning.

It's a behavior change: if we move filtering from realizefn to class_init, the user would be able to add features that are not visible to the guest now.

The ordering now is: class_init, initfn, setting defaults, custom features, realizefn (filtering). The ordering with static properties would be: class_init, static property defaults, global properties (custom features), x86_cpu_initfn, realizefn. In that scenario filtering comes before custom features, which is not necessarily bad, and it may be nice to consciously add/test features before enabling them in the filter for everyone, but it's a behavior change.

[...]

+#ifdef CONFIG_KVM
+static void x86_cpu_kvm_def_class_init(ObjectClass *oc, void *data)
+{
+    uint32_t eax, ebx, ecx, edx;
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+
+    x86_cpu_def_class_init(oc, data);
+    /* sysenter isn't supported in compatibility mode on AMD,
+     * syscall isn't supported in compatibility mode on Intel.
+     * Normally we advertise the actual CPU vendor, but you can
+     * override this using the 'vendor' property if you want to use
+     * KVM's sysenter/syscall emulation in compatibility mode and
+     * when doing cross vendor migration
+     */
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);

Cool, this is exactly what I was going to suggest, to avoid depending on the "host" CPU class and KVM initialization.

I see two options when making the vendor property static, and I don't know which one is preferable. One solution is the one in this patch: call host_cpuid() here in class_init, and have a different property default depending on the host CPU.

I prefer this one ^^^.

Another solution is to default to vendor=host, and have instance_init understand vendor=host as "use the host CPU vendor". This would make the property default really static (not depending on the host CPU). I am inclined toward the latter, because I am assuming that the QOM design expects class_init results to be completely static. This is not as bad as depending on KVM initialization, but it may be an unexpected surprise, or even something not allowed by QOM.

That's where I have to disagree :) You might have a point with "static" if our goal is for source-level introspection to work. But we have a number of issues with that:
* model_id is not static for some CPUs; 'vendor' is the same, just for a different set of classes. [...]
Re: [Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
On Thu, Feb 14, 2013 at 08:16:32PM +0100, Igor Mammedov wrote:
[...]

+}
+
+static void x86_cpu_def_class_init(ObjectClass *oc, void *data)
+{
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+    x86_def_t *def = data;
+    int i;
+    static const char *versioned_models[] = { "qemu32", "qemu64", "athlon" };
+
+    memcpy(&xcc->info, def, sizeof(x86_def_t));
+    xcc->info.ext_features |= CPUID_EXT_HYPERVISOR;

If this is TCG-specific now, we could make the class match reality and mask TCG_FEATURES/TCG_EXT_FEATURES/etc. here. That's what happens in practice; e.g. "-cpu SandyBridge" in TCG mode is a completely different CPU today.

Well, this function is shared between TCG and KVM; it's common code for both. Which asks for one more TCG class_init function for TCG-specific behavior.

Having both TCG-specific and KVM-specific subclasses, instead of making the KVM class inherit from the TCG class, would make sense to me, as we may have TCG-specific behavior in other places too. Or we could do the memcpy() again in the KVM class_init function. Or we could set a different void* pointer on the TCG class, with an already-filtered x86_def_t object. There are multiple possible solutions.

But could it be a separate patch (i.e. fixing TCG filtering)? I think just moving TCG filtering might cause a regression, and it needs to be worked on separately.

I'm OK with doing that in a separate patch, as it is not a bug fix or behavior change. But it would be nice to do it before we introduce the feature properties, to make the reported defaults right from the beginning.

It's a behavior change: if we move filtering from realizefn to class_init, the user would be able to add features that are not visible to the guest now.

The ordering now is: class_init, initfn, setting defaults, custom features, realizefn (filtering). The ordering with static properties would be: class_init, static property defaults, global properties (custom features), x86_cpu_initfn, realizefn. In that scenario filtering comes before custom features, which is not necessarily bad, and it may be nice to consciously add/test features before enabling them in the filter for everyone, but it's a behavior change.

We should keep the filtering on realize because of custom -cpu strings, but if we filter on class_init too, we will make the class introspection reflect reality more closely. Wasn't this one of the main reasons you argued (and convinced me) for having separate subclasses? Also, if we do this we will be able to make "enforce" work for TCG too. For example: if "enforce" worked for TCG today, people who want to work around the existing n270 MOVBE bug could use "-cpu n270,+movbe,enforce" to make sure the QEMU version they are using really accepts movbe.

[...]

+#ifdef CONFIG_KVM
+static void x86_cpu_kvm_def_class_init(ObjectClass *oc, void *data)
+{
+    uint32_t eax, ebx, ecx, edx;
+    X86CPUClass *xcc = X86_CPU_CLASS(oc);
+
+    x86_cpu_def_class_init(oc, data);
+    /* sysenter isn't supported in compatibility mode on AMD,
+     * syscall isn't supported in compatibility mode on Intel.
+     * Normally we advertise the actual CPU vendor, but you can
+     * override this using the 'vendor' property if you want to use
+     * KVM's sysenter/syscall emulation in compatibility mode and
+     * when doing cross vendor migration
+     */
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);

Cool, this is exactly what I was going to suggest, to avoid depending on the "host" CPU class and KVM initialization.

I see two options when making the vendor property static, and I don't know which one is preferable. One solution is the one in this patch: call host_cpuid() here in class_init, and have a different property default depending on the host CPU.

I prefer this one ^^^.

Another solution is to default to vendor=host, and have instance_init understand vendor=host as "use the host CPU vendor". This would make the property default really static (not depending on the host CPU). I am inclined toward the latter, because I am assuming that the QOM design expects class_init results to be completely static. This is not as bad as depending on KVM initialization, but it may be an unexpected surprise, or even something not allowed by QOM.

That's where I have to disagree :) You might have a point with "static" if our goal is for source-level introspection to work. But we have a number of issues with that:
* model_id is not static for some CPUs; 'vendor' is the same, just for a different set of classes.

model_id is static for a given QEMU version, isn't it? This may be OK (but I am _also_ not sure about this case).

I guess you mean "constant" when using "static".

Yes, I mean constant, if by constant you [...]
[Qemu-devel] [RFC v5] target-i386: Slim conversion to X86CPU subclasses + KVM subclasses
From: Andreas Färber afaer...@suse.de

Depends on http://lists.gnu.org/archive/html/qemu-devel/2013-02/msg00677.html

Move the x86_def_t definition to a header and embed it into X86CPUClass. Register types per built-in model definition. Move version initialization from x86_cpudef_setup() to class_init(). Move the default setting of CPUID_EXT_HYPERVISOR to class_init(). Move KVM-specific overrides of built-in CPU defaults into a KVM-specific x86_cpu_kvm_def_class_init(), and select the TCG vs. KVM class of CPU to create at runtime in x86_cpu_class_by_name() when kvm_enabled() is true.

Inline cpu_x86_register() into the X86CPU initfn. Since instance_init cannot report errors, die there if some of the default values are incorrect, instead of ignoring errors.

Replace cpu_x86_find_by_name() with x86_cpu_class_by_name(). Move handling of the KVM host vendor override from cpu_x86_find_by_name() to kvm_arch_init() and class_init(). Use the TYPE_X86_CPU class to communicate KVM-specific defaults to the other subclasses.

Register the host-kvm-{i386,x86_64}-cpu type from KVM code to avoid #ifdefs, and only when KVM is enabled, to avoid workarounds in the name-to-class lookup code in x86_cpu_class_by_name(). Make kvm_cpu_fill_host() into a host-specific class_init and inline cpu_x86_fill_model_id(). Let kvm_check_features_against_host() obtain host-kvm-{i386,x86_64}-cpu for comparison.

Signed-off-by: Andreas Färber afaer...@suse.de
Signed-off-by: Igor Mammedov imamm...@redhat.com
---
v5:
 * remove the special case for the 'host' CPU check in x86_cpu_class_by_name(): the 'host' CPU will not find anything if not in KVM mode, and will return the 'host' CPU class in KVM mode, i.e. it is treated as a regular CPU.
 * register KVM-specific subclasses for built-in CPU models.
 * abort() in instance_init() if a property setter fails to set a default value.
v4:
 * set error if cpu model is not found and goto out;
 * copy vendor override from the 'host' CPU class in a subclass's class_init() if the 'host' CPU class is available.
 * register type TYPE_HOST_X86_CPU in kvm_arch_init(); this type should be available only in KVM mode, and we haven't printed it in the -cpu ? output so far, so we can continue doing so. It's not really confusing to show the 'host' cpu (even if we do) when KVM is not enabled.
---
 target-i386/cpu-qom.h |  24
 target-i386/cpu.c     | 348 +++--
 target-i386/cpu.h     |   5 +-
 target-i386/kvm.c     |  72 ++
 4 files changed, 232 insertions(+), 217 deletions(-)

diff --git a/target-i386/cpu-qom.h b/target-i386/cpu-qom.h
index 48e6b54..c8f320d 100644
--- a/target-i386/cpu-qom.h
+++ b/target-i386/cpu-qom.h
@@ -30,6 +30,27 @@
 #define TYPE_X86_CPU "i386-cpu"
 #endif
 
+#define TYPE_HOST_X86_CPU "host-kvm-" TYPE_X86_CPU
+
+typedef struct x86_def_t {
+    const char *name;
+    uint32_t level;
+    /* vendor is zero-terminated, 12 character ASCII string */
+    char vendor[CPUID_VENDOR_SZ + 1];
+    int family;
+    int model;
+    int stepping;
+    uint32_t features, ext_features, ext2_features, ext3_features;
+    uint32_t kvm_features, svm_features;
+    uint32_t xlevel;
+    char model_id[48];
+    /* Store the results of Centaur's CPUID instructions */
+    uint32_t ext4_features;
+    uint32_t xlevel2;
+    /* The feature bits on CPUID[EAX=7,ECX=0].EBX */
+    uint32_t cpuid_7_0_ebx_features;
+} x86_def_t;
+
 #define X86_CPU_CLASS(klass) \
     OBJECT_CLASS_CHECK(X86CPUClass, (klass), TYPE_X86_CPU)
 #define X86_CPU(obj) \
@@ -41,6 +62,7 @@
  * X86CPUClass:
  * @parent_realize: The parent class' realize handler.
  * @parent_reset: The parent class' reset handler.
+ * @info: Model-specific data.
  *
  * An x86 CPU model or family.
  */
@@ -51,6 +73,8 @@ typedef struct X86CPUClass {
 
     DeviceRealize parent_realize;
     void (*parent_reset)(CPUState *cpu);
+
+    x86_def_t info;
 } X86CPUClass;
 
 /**
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 1aee097..b786a57 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -47,8 +47,8 @@
 #include "hw/apic_internal.h"
 #endif
 
-static void x86_cpu_vendor_words2str(char *dst, uint32_t vendor1,
-                                     uint32_t vendor2, uint32_t vendor3)
+void x86_cpu_vendor_words2str(char *dst, uint32_t vendor1,
+                              uint32_t vendor2, uint32_t vendor3)
 {
     int i;
     for (i = 0; i < 4; i++) {
@@ -346,25 +346,6 @@ static void add_flagname_to_bitmaps(const char *flagname,
     }
 }
 
-typedef struct x86_def_t {
-    const char *name;
-    uint32_t level;
-    /* vendor is zero-terminated, 12 character ASCII string */
-    char vendor[CPUID_VENDOR_SZ + 1];
-    int family;
-    int model;
-    int stepping;
-    uint32_t features, ext_features, ext2_features, ext3_features;
-    uint32_t kvm_features, svm_features;
-    uint32_t xlevel;
-    char model_id[48];
-    /* Store the results of Centaur's CPUID instructions */
-    uint32_t ext4_features;
-    uint32_t xlevel2;
-    /* The