Re: [PATCH v2 1/3] KVM: Add new -cpu best

2012-07-09 Thread Alexander Graf

On 02.07.2012, at 16:25, Avi Kivity wrote:

> On 06/26/2012 07:39 PM, Alexander Graf wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the thought, because it potentially opens a big can
>> of worms for bugs. But if I already am so opposed to it for SLE, how can it
>> possibly be reasonable to default to -cpu host in upstream QEMU? And what
>> would a sane default look like?
>> 
>> So I had this idea of looping through all available CPU definitions. We can
>> tell pretty reliably whether our host is able to execute any of them by
>> checking the respective flags and seeing if our host has all the features
>> the CPU definition requires. With that, we can create a -cpu type that falls
>> back to the "best known CPU definition" our host can fulfill. On my Phenom II
>> system, for example, that would be -cpu phenom.
>> 
>> With this approach we can test and verify that CPU types actually work on
>> any random user setup, because we can always verify that all the -cpu types
>> we ship actually work. We then only default to a clever mechanism that
>> chooses one of these.
>> 
>> 
>> +/* Are all guest feature bits present on the host? */
>> +static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
>> +{
>> +    int i;
>> +
>> +    for (i = 0; i < 32; i++) {
>> +        uint32_t mask = 1 << i;
>> +        if ((guest & mask) && !(host & mask)) {
>> +            return false;
>> +        }
>> +    }
>> +
>> +    return true;
> 
> return !(guest & ~host);

I guess it helps to think :).
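
For reference, a self-contained sketch of the simplified check Avi suggests. The
standalone helper name and the little test program below are illustrative only,
not part of the patch:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same idea as the suggested one-liner: the guest's feature word fits
 * the host iff no bit is set in guest that is clear in host. */
static bool feature_subset(uint32_t host, uint32_t guest)
{
    return !(guest & ~host);
}

int main(void)
{
    /* host advertises bits 0, 1 and 3 */
    uint32_t host = 0x0000000b;

    assert(feature_subset(host, 0x00000009));   /* bits 0, 3: fits      */
    assert(!feature_subset(host, 0x0000000d));  /* bit 2 missing: fails */
    printf("subset check behaves as expected\n");
    return 0;
}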

> 
> 
>> +}
> 
> 
> 
>> +
>> +
>> +
>> +static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
>> +{
>> +    x86_def_t *def;
>> +
>> +    x86_cpu_def->family = 0;
>> +    x86_cpu_def->model = 0;
>> +    for (def = x86_defs; def; def = def->next) {
>> +        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
>> +            memcpy(x86_cpu_def, def, sizeof(*def));
>> +        }
>              *x86_cpu_def = *def;
>> +    }
>> +
>> +    if (!x86_cpu_def->family && !x86_cpu_def->model) {
>> +        fprintf(stderr, "No fitting CPU model found!\n");
>> +        exit(1);
>> +    }
>> +}
>> +
>>  static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
>>  {
>>      int i;
>> @@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
>>              break;
>>      if (kvm_enabled() && name && strcmp(name, "host") == 0) {
>>          cpu_x86_fill_host(x86_cpu_def);
>> +    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
>> +        cpu_x86_fill_best(x86_cpu_def);
>>      } else if (!def) {
>>          goto error;
>>      } else {
>> 
> 
> Should we copy the cache size etc. from the host?

I don't think so. We should rather make sure we always have CPU descriptions
available that are close to what people out there actually use.


Alex



Re: [PATCH v2 1/3] KVM: Add new -cpu best

2012-07-02 Thread Avi Kivity
On 06/26/2012 07:39 PM, Alexander Graf wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the thought, because it potentially opens a big can
> of worms for bugs. But if I already am so opposed to it for SLE, how can it
> possibly be reasonable to default to -cpu host in upstream QEMU? And what
> would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> tell pretty reliably whether our host is able to execute any of them by
> checking the respective flags and seeing if our host has all the features
> the CPU definition requires. With that, we can create a -cpu type that falls
> back to the "best known CPU definition" our host can fulfill. On my Phenom II
> system, for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work on
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. We then only default to a clever mechanism that
> chooses one of these.
> 
>  
> +/* Are all guest feature bits present on the host? */
> +static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
> +{
> +    int i;
> +
> +    for (i = 0; i < 32; i++) {
> +        uint32_t mask = 1 << i;
> +        if ((guest & mask) && !(host & mask)) {
> +            return false;
> +        }
> +    }
> +
> +    return true;

return !(guest & ~host);


> +}



> +
> +
> +
> +static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
> +{
> +    x86_def_t *def;
> +
> +    x86_cpu_def->family = 0;
> +    x86_cpu_def->model = 0;
> +    for (def = x86_defs; def; def = def->next) {
> +        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
> +            memcpy(x86_cpu_def, def, sizeof(*def));
> +        }
             *x86_cpu_def = *def;
> +    }
> +
> +    if (!x86_cpu_def->family && !x86_cpu_def->model) {
> +        fprintf(stderr, "No fitting CPU model found!\n");
> +        exit(1);
> +    }
> +}
> +
>  static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
>  {
>      int i;
> @@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
>              break;
>      if (kvm_enabled() && name && strcmp(name, "host") == 0) {
>          cpu_x86_fill_host(x86_cpu_def);
> +    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
> +        cpu_x86_fill_best(x86_cpu_def);
>      } else if (!def) {
>          goto error;
>      } else {
> 

Should we copy the cache size etc. from the host?


-- 
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best

2012-07-02 Thread Andreas Färber
On 26.06.2012 18:39, Alexander Graf wrote:
> During discussions on whether to make -cpu host the default in SLE, I found

s/make -cpu host the default/support/?

> myself disagreeing with the thought, because it potentially opens a big can
> of worms for bugs. But if I already am so opposed to it for SLE, how can it
> possibly be reasonable to default to -cpu host in upstream QEMU? And what
> would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> tell pretty reliably whether our host is able to execute any of them by
> checking the respective flags and seeing if our host has all the features
> the CPU definition requires. With that, we can create a -cpu type that falls
> back to the "best known CPU definition" our host can fulfill. On my Phenom II
> system, for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work on
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. We then only default to a clever mechanism that
> chooses one of these.
> 
> Signed-off-by: Alexander Graf 

Despite the long commit message a cover letter would've been nice. ;)

Anything that operates on x86_def_t will obviously need to be refactored
when we agree on the course for x86 CPU subclasses.
But no objection to getting it done some way that works today.

Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


Re: [Qemu-devel] [PATCH v2 1/3] KVM: Add new -cpu best

2012-07-02 Thread Alexander Graf

On 26.06.2012, at 18:39, Alexander Graf wrote:

> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the thought, because it potentially opens a big can
> of worms for bugs. But if I already am so opposed to it for SLE, how can it
> possibly be reasonable to default to -cpu host in upstream QEMU? And what
> would a sane default look like?
> 
> So I had this idea of looping through all available CPU definitions. We can
> tell pretty reliably whether our host is able to execute any of them by
> checking the respective flags and seeing if our host has all the features
> the CPU definition requires. With that, we can create a -cpu type that falls
> back to the "best known CPU definition" our host can fulfill. On my Phenom II
> system, for example, that would be -cpu phenom.
> 
> With this approach we can test and verify that CPU types actually work on
> any random user setup, because we can always verify that all the -cpu types
> we ship actually work. We then only default to a clever mechanism that
> chooses one of these.
> 
> Signed-off-by: Alexander Graf 

Ping :)


Alex



[PATCH v2 1/3] KVM: Add new -cpu best

2012-06-26 Thread Alexander Graf
During discussions on whether to make -cpu host the default in SLE, I found
myself disagreeing with the thought, because it potentially opens a big can
of worms for bugs. But if I already am so opposed to it for SLE, how can it
possibly be reasonable to default to -cpu host in upstream QEMU? And what
would a sane default look like?

So I had this idea of looping through all available CPU definitions. We can
tell pretty reliably whether our host is able to execute any of them by
checking the respective flags and seeing if our host has all the features
the CPU definition requires. With that, we can create a -cpu type that falls
back to the "best known CPU definition" our host can fulfill. On my Phenom II
system, for example, that would be -cpu phenom.

With this approach we can test and verify that CPU types actually work on
any random user setup, because we can always verify that all the -cpu types
we ship actually work. We then only default to a clever mechanism that
chooses one of these.

Signed-off-by: Alexander Graf 
---
 target-i386/cpu.c |   81 +
 1 files changed, 81 insertions(+), 0 deletions(-)

diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index fdd95be..98cc1ec 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -558,6 +558,85 @@ static int cpu_x86_fill_host(x86_def_t *x86_cpu_def)
     return 0;
 }
 
+/* Are all guest feature bits present on the host? */
+static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
+{
+    int i;
+
+    for (i = 0; i < 32; i++) {
+        uint32_t mask = 1 << i;
+        if ((guest & mask) && !(host & mask)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+/* Does the host support all the features of the CPU definition? */
+static bool cpu_x86_fits_host(x86_def_t *x86_cpu_def)
+{
+    uint32_t eax = 0, ebx = 0, ecx = 0, edx = 0;
+
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->level > eax) {
+        return false;
+    }
+    if ((x86_cpu_def->vendor1 != ebx) ||
+        (x86_cpu_def->vendor2 != edx) ||
+        (x86_cpu_def->vendor3 != ecx)) {
+        return false;
+    }
+
+    host_cpuid(0x1, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(ecx, x86_cpu_def->ext_features) ||
+        !cpu_x86_feature_subset(edx, x86_cpu_def->features)) {
+        return false;
+    }
+
+    host_cpuid(0x80000000, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->xlevel > eax) {
+        return false;
+    }
+
+    host_cpuid(0x80000001, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(edx, x86_cpu_def->ext2_features) ||
+        !cpu_x86_feature_subset(ecx, x86_cpu_def->ext3_features)) {
+        return false;
+    }
+
+    return true;
+}
+
+/* Returns true when new_def is higher versioned than old_def */
+static int cpu_x86_fits_higher(x86_def_t *new_def, x86_def_t *old_def)
+{
+    int old_fammod = (old_def->family << 24) | (old_def->model << 8)
+                   | (old_def->stepping);
+    int new_fammod = (new_def->family << 24) | (new_def->model << 8)
+                   | (new_def->stepping);
+
+    return new_fammod > old_fammod;
+}
+
+static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
+{
+    x86_def_t *def;
+
+    x86_cpu_def->family = 0;
+    x86_cpu_def->model = 0;
+    for (def = x86_defs; def; def = def->next) {
+        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
+            memcpy(x86_cpu_def, def, sizeof(*def));
+        }
+    }
+
+    if (!x86_cpu_def->family && !x86_cpu_def->model) {
+        fprintf(stderr, "No fitting CPU model found!\n");
+        exit(1);
+    }
+}
+
 static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
 {
     int i;
@@ -878,6 +957,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
             break;
     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
         cpu_x86_fill_host(x86_cpu_def);
+    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
+        cpu_x86_fill_best(x86_cpu_def);
     } else if (!def) {
         goto error;
     } else {
-- 
1.6.0.2
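
A side note on cpu_x86_fits_higher() above: packing family/model/stepping as
(family << 24) | (model << 8) | stepping compares definitions lexicographically,
family first, then model, then stepping. A small standalone illustration, with
made-up values; the fammod() helper below is not part of the patch:

#include <assert.h>
#include <stdio.h>

/* Same packing as cpu_x86_fits_higher() uses for comparison. */
static int fammod(int family, int model, int stepping)
{
    return (family << 24) | (model << 8) | stepping;
}

int main(void)
{
    /* A family-16 definition outranks any family-6 one, regardless of
     * model/stepping; within one family, the higher model wins. */
    assert(fammod(16, 2, 3) > fammod(6, 15, 11));
    assert(fammod(6, 23, 1) > fammod(6, 15, 11));
    printf("0x%08x > 0x%08x\n", fammod(16, 2, 3), fammod(6, 15, 11));
    return 0;
}

With the patch applied, the new model is requested the same way as -cpu host,
i.e. by passing -cpu best on the QEMU command line.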
