Re: [libvirt] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Fri, Mar 09, 2012 at 03:15:26PM -0600, Anthony Liguori wrote:
 On 03/09/2012 03:04 PM, Daniel P. Berrange wrote:
 On Fri, Mar 09, 2012 at 05:56:52PM -0300, Eduardo Habkost wrote:
 Resurrecting an old thread:
 
 I didn't see any clear conclusion in this thread (this is why I am
 resurrecting it), except that many were arguing that libvirt should
 simply copy and/or generate the CPU model definitions from Qemu. I
 really don't think it's reasonable to expect that.
 
 On Thu, Dec 15, 2011 at 03:54:15PM +0100, Jiri Denemark wrote:
 Hi,
 
 Recently I realized that all modern CPU models defined in
 /etc/qemu/target-x86_64.conf are useless when qemu is used through
 libvirt. That's because we start qemu with -nodefconfig, which results
 in qemu ignoring that file with CPU model definitions. We have a very
 good reason for using -nodefconfig: we need to control the ABI
 presented to a guest OS, and we don't want any configuration file that
 can contain lots of things, including device definitions, to be read
 by qemu. However, we would really like the new CPU models to be
 understood by qemu even if used through libvirt. What would be the
 best way to solve this?
 
 I suspect this could have been already discussed in the past, but
 obviously a workable solution was either not found or just not
 implemented.
 
 So, our problem today is basically:
 
 A) libvirt uses -nodefconfig;
 B) -nodefconfig makes Qemu not load the config file containing the CPU
 model definitions; and
 C) libvirt expects the full CPU model list from Qemu to be available.
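 
 For reference, the definitions at stake are the cpudef stanzas in
 qemu's config file.  Abridged, one looks roughly like this (exact
 fields and values vary by qemu version; feature lists shortened here):
 
     [cpudef]
        name = "Nehalem"
        level = "2"
        vendor = "GenuineIntel"
        family = "6"
        model = "26"
        stepping = "3"
        feature_edx = "sse2 sse fxsr mmx pat cmov mca pge mtrr sep apic ..."
        feature_ecx = "popcnt sse4.2 sse4.1 cx16 ssse3 sse3"
        extfeature_edx = "i64 syscall xd"
        extfeature_ecx = "lahf_lm"
        xlevel = "0x8000000A"
        model_id = "Intel Core i7 9xx (Nehalem Class Core i7)"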
 
 I could have sworn we had this discussion a year ago or so, and had decided
 that the default CPU models would be in something like 
 /usr/share/qemu/cpu-x86_64.conf
 and loaded regardless of the -nodefconfig setting. 
 /etc/qemu/target-x86_64.conf
 would be solely for end user configuration changes, not for QEMU builtin
 defaults.
 
 But looking at the code in QEMU, it doesn't seem we ever implemented this?
 
 I don't remember that discussion and really don't think I agree with the 
 conclusion.
 
 If libvirt wants to define CPU models on its own, it can.  If
It can't without knowing qemu/host cpu/host kernel capabilities and
knowing the logic that qemu uses to combine them.

 libvirt wants to use the user's definitions, don't use -nodefconfig.
 
 CPU models aren't a QEMU concept.  The reason it's in the
I do not know what you mean by that, but CPU capabilities (a CPU
model is only a name for a group of them) are a KVM/TCG concept and,
by inclusion, a QEMU concept. If QEMU does not have built-in support
for CPU models (as names for groups of CPU capabilities), then how do
you start a guest without specifying the full set of CPU capabilities
on the command line?
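
To make that concrete: without named models, every invocation would have
to spell the model out by hand; an abridged, illustrative sketch (flag
list shortened, exact property names vary by qemu version):

    qemu-system-x86_64 -cpu "qemu64,vendor=GenuineIntel,family=6,model=26,stepping=3,level=11,+ssse3,+sse4.1,+sse4.2,+popcnt,+aes" ...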

 configuration file is to allow a user to add their own as they see
 fit.  There are no right model names.  It's strictly policy.
 
So you think it should be the user's responsibility to check what his
qemu/host cpu/host kernel combo can support?

--
Gleb.



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sat, Mar 10, 2012 at 12:58:43PM -0300, Eduardo Habkost wrote:
 On Sat, Mar 10, 2012 at 12:42:46PM +, Daniel P. Berrange wrote:
   
   I could have sworn we had this discussion a year ago or so, and had
   decided that the default CPU models would be in something like
   /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the
   -nodefconfig setting.  /etc/qemu/target-x86_64.conf would be solely
   for end user configuration changes, not for QEMU builtin defaults.
   
   But looking at the code in QEMU, it doesn't seem we ever implemented
   this?
  
  Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs but,
  contrary to our normal RHEL development practice, it was not based on
  a cherry-pick of an upstream patch :-(
  
  For sake of reference, I'm attaching the two patches from the RHEL6 source
  RPM that do what I'm describing.
  
  NB, I'm not necessarily advocating these patches for upstream. I still
  maintain that libvirt should write out a config file containing the
  exact CPU model description it desires and specify that with -readconfig.
  The end result would be identical from QEMU's POV and it would avoid
  playing games with QEMU's config loading code.
 
 I agree that libvirt should just write the config somewhere. The problem
 here is to define: 1) what information should be mandatory in that
 config data; 2) who should be responsible for testing and maintaining
 sane defaults (and where they should be maintained).
 
 The current cpudef definitions are simply too low-level to require them
 to be written from scratch. Lots of testing has to be done to make sure
 we have working combinations of CPUID bits defined, so they can be used
 as defaults or templates. Not facilitating reuse of those tested
 defaults/templates by libvirt is duplication of effort.
 
 Really, if we expect libvirt to define all the CPU bits from scratch in
 a config file, we could as well just expect libvirt to open /dev/kvm
 itself and call all the CPUID setup ioctl()s itself. That's how
 low-level some of the cpudef bits are.
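 
 To illustrate how low-level that is, a minimal C sketch of the
 /dev/kvm path (error handling omitted; the ioctls and structs are
 from <linux/kvm.h>, and the per-bit masking that a management layer
 would have to get right is left as a stub):
 
     #include <string.h>
     #include <sys/ioctl.h>
     #include <linux/kvm.h>
 
     static void set_vcpu_cpuid(int kvm_fd, int vcpu_fd)
     {
         struct {
             struct kvm_cpuid2 hdr;
             struct kvm_cpuid_entry2 entries[64];
         } cpuid;
 
         memset(&cpuid, 0, sizeof(cpuid));
         cpuid.hdr.nent = 64;
 
         /* Ask this kernel which CPUID leaves kvm can virtualize. */
         ioctl(kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid.hdr);
 
         /* ...every leaf/bit would be masked by hand here to build
          * the desired model... */
 
         /* Program the vcpu with the resulting table. */
         ioctl(vcpu_fd, KVM_SET_CPUID2, &cpuid.hdr);
     }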
 
s/some/all

If libvirt assumes anything about what kvm actually supports it is
working only by sheer luck.

 (Also, there are additional low-level bits that really have to be
 maintained somewhere, just to have sane defaults. Currently many CPUID
 leafs are exposed to the guest without letting the user control them,
 and worse: without keeping stability of guest-visible bits when
 upgrading Qemu or the host kernel. And that's what machine-types are
 for: to have sane defaults to be used as base.)
 
 Let me give you a practical example: I had a bug report about improper
 CPU topology information[1]. After investigating it, I have found out
 that the level cpudef field is too low; CPU core topology information
 is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu
 have level=2 today (I don't know why). So, Qemu is responsible for
 exposing CPU topology information set using '-smp' to the guest OS, but
 libvirt would have to be responsible for choosing a proper level value
 that makes that information visible to the guest. We can _allow_ libvirt
 to fiddle with these low-level bits, of course, but requiring every
 management layer to build this low-level information from scratch is
 just a recipe to waste developer time.
And QEMU becomes even less usable from the command line. One more point
for kvm-tool, I guess.
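
For context on the 'level' example above: a well-behaved guest only
looks at CPUID leaf 4 (where Intel core topology lives) if the vcpu's
maximum basic leaf, which is exactly what the cpudef 'level' field
sets, is at least 4. A rough sketch of the guest-side probe, using
GCC's <cpuid.h> helpers:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx, cores = 1;

        /* Leaf 0, EAX: the maximum basic leaf, i.e. cpudef 'level'. */
        unsigned int max_leaf = __get_cpuid_max(0, NULL);

        if (max_leaf >= 4) {
            /* Leaf 4, EAX[31:26] + 1: addressable core IDs/package. */
            __cpuid_count(4, 0, eax, ebx, ecx, edx);
            cores = ((eax >> 26) & 0x3f) + 1;
        }
        printf("max_leaf=%u cores=%u\n", max_leaf, cores);
        return 0;
    }

With level=2 the branch is never taken, so '-smp' topology stays
invisible to the guest.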

 
 (And I really hope that there's no plan to require all those low-level
 bits to appear as-is on the libvirt XML definitions. Because that would
 require users to read the Intel 64 and IA-32 Architectures Software
 Developer's Manual, or the AMD64 Architecture Programmer's Manual and
 BIOS and Kernel Developer's Guides, just to understand why something is
 not working on his Virtual Machine.)
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=689665
 
 -- 
 Eduardo

--
Gleb.



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote:
 Let's step back here.
 
 Why are you writing these patches?  It's probably not because you
 have a desire to say -cpu Westmere when you run QEMU on your laptop.
 I'd wager to say that no human has ever done that or that if they
 had, they did so by accident because they read documentation and
 thought they had to.
 
I'd be glad if QEMU chose -cpu Westmere for me as a default when it
detects a Westmere host CPU.

 Humans probably do one of two things: 1) no cpu option or 2) -cpu host.
 
Neither is optimal; actually, both are bad. The first because the
default cpu is very conservative, and the second because there is no
guarantee that a guest will continue to work after a qemu or kernel
upgrade.

Let me elaborate on the latter. Suppose the host CPU has a kill_guest
feature that was not yet implemented by kvm at the time a guest was
installed. Since it was not implemented by kvm, it was not present in
the vcpu during installation, so the guest didn't install the
workaround kill_guest module. Now an unsuspecting user upgrades the
kernel, tries to restart the guest, and fails. He writes an angry
letter to qemu-devel and is asked to reinstall his guest and move
along.

 So then why are you introducing -cpu Westmere?  Because ovirt-engine
 has a concept of datacenters and the entire datacenter has to use a
 compatible CPU model to allow migration compatibility.  Today, the
 interface that ovirt-engine exposes is based on CPU codenames.
 Presumably ovirt-engine wants to add a Westmere CPU group and as
 such has levied a requirement down the stack to QEMU.
 
First of all, this is not only about live migration. The guest-visible
vcpu should not change after a guest reboot (or hibernate/resume)
either. And second, this concept applies even to your laptop with a
single guest on it. There are three inputs into a CPU model: 1) the
host cpu, 2) qemu capabilities, 3) kvm capabilities. In the datacenter
scenario all three can change; on your laptop only the last two can
change (the first can change too, when you get a new laptop), but the
net result is that the guest-visible cpuid can change, and it
shouldn't. This is the goal of introducing -cpu Westmere: to prevent
that from happening.

 But there's no intrinsic reason why it uses CPU model names.  VMware
 doesn't do this.  It has a concept of compatibility groups[1].
 
As Andrew noted, not any more. There is no intrinsic reason, but people
are more familiar with Intel terminology than random hypervisor
terminology.

 oVirt could just as well define compatibility groups like GroupA,
 GroupB, GroupC, etc. and then the -cpu option we would be discussing
 would be -cpu GroupA.
It could, but I can't see why this is less confusing. 

 
 This is why it's a configuration option and not built into QEMU.
 It's a user interface and, as such, should be defined at a higher
 level.
This is not the only configuration that is built into QEMU. As it
stands now, QEMU does not even allow configuring cpuid enough to define
those compatibility groups outside of QEMU. And after the work is done
to allow enough configurability, there is not much left to do to
provide compatibility groups in QEMU itself.

 
 Perhaps it really should be VDSM that is providing the model info to
 libvirt? Then they can add whatever groups they want whenever they
 want as long as we have the appropriate feature bits.
 
 P.S. I spent 30 minutes the other day helping a user who was
 attempting to figure out whether his processor was a Conroe, Penryn,
 etc.  Making this determination is fairly difficult and it makes me
 wonder whether having CPU code names is even the best interface for
 oVirt.
 
 [1] 
 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
 
 Regards,
 
 Anthony Liguori
 
 
 (Also, there are additional low-level bits that really have to be
 maintained somewhere, just to have sane defaults. Currently many CPUID
 leafs are exposed to the guest without letting the user control them,
 and worse: without keeping stability of guest-visible bits when
 upgrading Qemu or the host kernel. And that's what machine-types are
 for: to have sane defaults to be used as base.)
 
 Let me give you a practical example: I had a bug report about improper
 CPU topology information[1]. After investigating it, I have found out
 that the level cpudef field is too low; CPU core topology information
 is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu
 have level=2 today (I don't know why). So, Qemu is responsible for
 exposing CPU topology information set using '-smp' to the guest OS, but
 libvirt would have to be responsible for choosing a proper level value
 that makes that information visible to the guest. We can _allow_ libvirt
 to fiddle with these low-level bits, of course, but requiring every
 management layer to build this low-level information from scratch is
 just a recipe to waste developer time.
 
 (And I really hope that there's no 

Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Anthony Liguori

On 03/11/2012 08:27 AM, Gleb Natapov wrote:

On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote:

Let's step back here.

Why are you writing these patches?  It's probably not because you
have a desire to say -cpu Westmere when you run QEMU on your laptop.
I'd wager to say that no human has ever done that or that if they
had, they did so by accident because they read documentation and
thought they had to.


I'd be glad if QEMU chose -cpu Westmere for me as a default when it
detects a Westmere host CPU.


This is -cpu best that Alex proposed FWIW.


Humans probably do one of two things: 1) no cpu option or 2) -cpu host.


Neither is optimal; actually, both are bad. The first because the
default cpu is very conservative, and the second because there is no
guarantee that a guest will continue to work after a qemu or kernel
upgrade.

Let me elaborate on the latter. Suppose the host CPU has a kill_guest
feature that was not yet implemented by kvm at the time a guest was
installed. Since it was not implemented by kvm, it was not present in
the vcpu during installation, so the guest didn't install the
workaround kill_guest module. Now an unsuspecting user upgrades the
kernel, tries to restart the guest, and fails. He writes an angry
letter to qemu-devel and is asked to reinstall his guest and move
along.


-cpu best wouldn't solve this.  You need a read/write configuration file where 
QEMU probes the available CPU and records it to be used for the lifetime of the VM.



So then why are you introducing -cpu Westmere?  Because ovirt-engine
has a concept of datacenters and the entire datacenter has to use a
compatible CPU model to allow migration compatibility.  Today, the
interface that ovirt-engine exposes is based on CPU codenames.
Presumably ovirt-engine wants to add a Westmere CPU group and as
such has levied a requirement down the stack to QEMU.


First of all, this is not only about live migration. The guest-visible
vcpu should not change after a guest reboot (or hibernate/resume)
either. And second, this concept applies even to your laptop with a
single guest on it. There are three inputs into a CPU model: 1) the
host cpu, 2) qemu capabilities, 3) kvm capabilities. In the datacenter
scenario all three can change; on your laptop only the last two can
change (the first can change too, when you get a new laptop), but the
net result is that the guest-visible cpuid can change, and it
shouldn't. This is the goal of introducing -cpu Westmere: to prevent
that from happening.


This discussion isn't about whether QEMU should have a Westmere processor 
definition.  In fact, I think I already applied that patch.


It's a discussion about how we handle this up and down the stack.

The question is who should define and manage CPU compatibility.  Right now
QEMU does to a certain degree, libvirt discards this and does its own thing,
and VDSM/ovirt-engine assumes that we're providing something and has built a
UI around it.


What I'm proposing we consider: have VDSM manage CPU definitions in order to 
provide a specific user experience in ovirt-engine.


We would continue to have Westmere/etc in QEMU exposed as part of the user 
configuration.  But I don't think it makes a lot of sense to have to modify QEMU 
any time a new CPU comes out.


Regards,

Anthony Liguori



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Anthony Liguori

On 03/11/2012 07:41 AM, Gleb Natapov wrote:

On Sat, Mar 10, 2012 at 12:58:43PM -0300, Eduardo Habkost wrote:

On Sat, Mar 10, 2012 at 12:42:46PM +, Daniel P. Berrange wrote:


I could have sworn we had this discussion a year ago or so, and had decided
that the default CPU models would be in something like
/usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig
setting. /etc/qemu/target-x86_64.conf would be solely for end user
configuration changes, not for QEMU builtin defaults.

But looking at the code in QEMU, it doesn't seem we ever implemented this?


Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs but,
contrary to our normal RHEL development practice, it was not based on
a cherry-pick of an upstream patch :-(

For sake of reference, I'm attaching the two patches from the RHEL6 source
RPM that do what I'm describing.

NB, I'm not necessarily advocating these patches for upstream. I still
maintain that libvirt should write out a config file containing the
exact CPU model description it desires and specify that with -readconfig.
The end result would be identical from QEMU's POV and it would avoid
playing games with QEMU's config loading code.


I agree that libvirt should just write the config somewhere. The problem
here is to define: 1) what information should be mandatory in that
config data; 2) who should be responsible for testing and maintaining
sane defaults (and where they should be maintained).

The current cpudef definitions are simply too low-level to require them
to be written from scratch. Lots of testing has to be done to make sure
we have working combinations of CPUID bits defined, so they can be used
as defaults or templates. Not facilitating reuse of those tested
defaults/templates by libvirt is duplication of effort.

Really, if we expect libvirt to define all the CPU bits from scratch in
a config file, we could as well just expect libvirt to open /dev/kvm
itself and call all the CPUID setup ioctl()s itself. That's how
low-level some of the cpudef bits are.


s/some/all

If libvirt assumes anything about what kvm actually supports it is
working only by sheer luck.


Well the simple answer for libvirt is don't use -nodefconfig and then it can 
reuse the CPU definitions (including any that the user adds).


Really, what's the point of having a layer of management if we're saying that 
doing policy management is too complicated for that layer?  What does that layer 
exist to provide then?



(Also, there are additional low-level bits that really have to be
maintained somewhere, just to have sane defaults. Currently many CPUID
leafs are exposed to the guest without letting the user control them,
and worse: without keeping stability of guest-visible bits when
upgrading Qemu or the host kernel. And that's what machine-types are
for: to have sane defaults to be used as base.)

Let me give you a practical example: I had a bug report about improper
CPU topology information[1]. After investigating it, I have found out
that the level cpudef field is too low; CPU core topology information
is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu
have level=2 today (I don't know why). So, Qemu is responsible for
exposing CPU topology information set using '-smp' to the guest OS, but
libvirt would have to be responsible for choosing a proper level value
that makes that information visible to the guest. We can _allow_ libvirt
to fiddle with these low-level bits, of course, but requiring every
management layer to build this low-level information from scratch is
just a recipe to waste developer time.

And QEMU becomes even less usable from the command line. One more point
for kvm-tool, I guess.


I'm not sure what your point is.  We're talking about an option that humans 
don't use.  How is this a discussion about QEMU usability?


Regards,

Anthony Liguori



Re: [libvirt] [PATCH 8/9] qemu: Update domain status to paused while suspend event is emitted

2012-03-11 Thread Paolo Bonzini
On 05/03/2012 11:25, Osier Yang wrote:
 This introduces a new paused reason, VIR_DOMAIN_PAUSED_SUSPEND,
 and a new suspend event type, VIR_DOMAIN_EVENT_SUSPENDED_SUSPEND.
 
 When a SUSPEND event occurs, the running domain's status will be
 changed to paused with reason VIR_DOMAIN_PAUSED_SUSPEND, and a new
 domain lifecycle event emitted with type
 VIR_DOMAIN_EVENT_SUSPENDED_SUSPEND.
 ---

Does virsh resume correctly wake up such a domain?  If not, perhaps a
different state should be added so that virsh resume can look at the
state and issue the appropriate monitor command (or alternatively, it
could be considered a QEMU bug).

Paolo



Re: [libvirt] [PATCH 4/9] qemu: Do not start with source for removable disks if tray is open

2012-03-11 Thread Paolo Bonzini
On 05/03/2012 11:25, Osier Yang wrote:
 This is similar to the physical world: one would be surprised if the
 box started with a medium present while the tray is open.
 
 New tests are added: tests disk-{cdrom,floppy}-tray are for qemu that
 supports the -device flag, and disk-{cdrom,floppy}-no-device-cap are
 for old qemu, i.e. which doesn't support the -device flag.

If the disk type is block, and the source drive is a CD-ROM, the
virtual tray state should be tied to the physical tray, even though this
isn't always the case due to QEMU bugs.

Perhaps you should fail creation if the tray attribute is open in the
above circumstances.  Another possibility is to forbid specifying the
tray attribute completely.

Or you can just drop this patch, and only print the tray state in the
virsh dumpxml output.  There are other attributes that are already
handled this way.

Paolo



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:
 On 03/11/2012 08:27 AM, Gleb Natapov wrote:
 On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote:
 Let's step back here.
 
 Why are you writing these patches?  It's probably not because you
 have a desire to say -cpu Westmere when you run QEMU on your laptop.
 I'd wager to say that no human has ever done that or that if they
 had, they did so by accident because they read documentation and
 thought they had to.
 
 I'd be glad if QEMU chose -cpu Westmere for me as a default when it
 detects a Westmere host CPU.
 
 This is -cpu best that Alex proposed FWIW.
 
I didn't look at the exact implementation, but I doubt it does exactly
what we need, because currently we do not have the infrastructure for
that. If qemu is upgraded with support for new cpuid bits and -cpu best
passes them to a guest on the next boot, then this is not the same.
-cpu Westmere can mean different things for different machine types
with the proper infrastructure in place.

 Humans probably do one of two things: 1) no cpu option or 2) -cpu host.
 
 Neither is optimal; actually, both are bad. The first because the
 default cpu is very conservative, and the second because there is no
 guarantee that a guest will continue to work after a qemu or kernel
 upgrade.
 
 Let me elaborate on the latter. Suppose the host CPU has a kill_guest
 feature that was not yet implemented by kvm at the time a guest was
 installed. Since it was not implemented by kvm, it was not present in
 the vcpu during installation, so the guest didn't install the
 workaround kill_guest module. Now an unsuspecting user upgrades the
 kernel, tries to restart the guest, and fails. He writes an angry
 letter to qemu-devel and is asked to reinstall his guest and move
 along.
 
 -cpu best wouldn't solve this.  You need a read/write configuration
 file where QEMU probes the available CPU and records it to be used
 for the lifetime of the VM.
That's what I thought too, but this shouldn't be the case (Avi's idea).
We need two things: 1) the CPU model config should be per machine type;
2) QEMU should refuse to start if it cannot create the cpu exactly as
specified by the model config.

With the two conditions above, if a user creates a VM with qemu 1.0 and
cpu model Westmere, which has no kill_guest feature, he will still be
able to run it on QEMU 1.1 (where kill_guest is added to the Westmere
model) and a new kvm that supports kill_guest, by providing the -M
pc-1.0 flag (the old definition of Westmere will be used). If a user
tries to create a VM with QEMU 1.1 on a kernel that does not support
kill_guest, QEMU will refuse to start.
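
To make the mechanism concrete, a sketch of the resulting invocations
(illustrative only; unrelated flags elided):

    # guest created under qemu-1.0: keeps the pc-1.0 Westmere forever
    qemu-system-x86_64 -M pc-1.0 -cpu Westmere ...

    # guest created under qemu-1.1: gets the pc-1.1 Westmere; qemu
    # refuses to start if the host kernel cannot provide every feature
    # that definition requires
    qemu-system-x86_64 -M pc-1.1 -cpu Westmere ...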

 
 So then why are you introducing -cpu Westmere?  Because ovirt-engine
 has a concept of datacenters and the entire datacenter has to use a
 compatible CPU model to allow migration compatibility.  Today, the
 interface that ovirt-engine exposes is based on CPU codenames.
 Presumably ovirt-engine wants to add a Westmere CPU group and as
 such has levied a requirement down the stack to QEMU.
 
 First of all, this is not only about live migration. The guest-visible
 vcpu should not change after a guest reboot (or hibernate/resume)
 either. And second, this concept applies even to your laptop with a
 single guest on it. There are three inputs into a CPU model: 1) the
 host cpu, 2) qemu capabilities, 3) kvm capabilities. In the datacenter
 scenario all three can change; on your laptop only the last two can
 change (the first can change too, when you get a new laptop), but the
 net result is that the guest-visible cpuid can change, and it
 shouldn't. This is the goal of introducing -cpu Westmere: to prevent
 that from happening.
 
 This discussion isn't about whether QEMU should have a Westmere
 processor definition.  In fact, I think I already applied that
 patch.
 
 It's a discussion about how we handle this up and down the stack.
 
 The question is who should define and manage CPU compatibility.
 Right now QEMU does to a certain degree, libvirt discards this and
 does its own thing, and VDSM/ovirt-engine assumes that we're
 providing something and has built a UI around it.
If we want QEMU to be usable without a management layer then QEMU
should provide stable CPU models, stable in the sense that a qemu,
kernel, or CPU upgrade does not change what the guest sees. If libvirt
wants to override QEMU we should have a way to allow that, but then
compatibility becomes libvirt's problem. Figuring out the minimal CPU
model that can be used across a cluster of different machines should be
ovirt's task.

 
 What I'm proposing we consider: have VDSM manage CPU definitions in
 order to provide a specific user experience in ovirt-engine.
 
 We would continue to have Westmere/etc in QEMU exposed as part of
 the user configuration.  But I don't think it makes a lot of sense
 to have to modify QEMU any time a new CPU comes out.
 
If a new cpu does not provide any new instruction set or capability
that can be passed to a guest, then there is no point in creating a CPU
model for it in QEMU. If it does, it is just a 

Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote:
 If libvirt assumes anything about what kvm actually supports it is
 working only by sheer luck.
 
 Well the simple answer for libvirt is don't use -nodefconfig and
 then it can reuse the CPU definitions (including any that the user
 adds).
CPU models should be usable even with -nodefconfig. A CPU model is more
like a device. By -cpu Nehalem I am saying I want a Nehalem device in
my machine.

 
 Really, what's the point of having a layer of management if we're
 saying that doing policy management is too complicated for that
 layer?  What does that layer exist to provide then?
 
I was always against libvirt configuring low-level details of the CPU.
What it should do, IMO, is choose the best CPU model for the host cpu
(one can argue that fiddling with /proc/cpuinfo is not QEMU's
business).

 (Also, there are additional low-level bits that really have to be
 maintained somewhere, just to have sane defaults. Currently many CPUID
 leafs are exposed to the guest without letting the user control them,
 and worse: without keeping stability of guest-visible bits when
 upgrading Qemu or the host kernel. And that's what machine-types are
 for: to have sane defaults to be used as base.)
 
 Let me give you a practical example: I had a bug report about improper
 CPU topology information[1]. After investigating it, I have found out
 that the level cpudef field is too low; CPU core topology information
 is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu
 have level=2 today (I don't know why). So, Qemu is responsible for
 exposing CPU topology information set using '-smp' to the guest OS, but
 libvirt would have to be responsible for choosing a proper level value
 that makes that information visible to the guest. We can _allow_ libvirt
 to fiddle with these low-level bits, of course, but requiring every
 management layer to build this low-level information from scratch is
 just a recipe to waste developer time.
 And QEMU becomes even less usable from the command line. One more
 point for kvm-tool, I guess.
 
 I'm not sure what your point is.  We're talking about an option that
 humans don't use.  How is this a discussion about QEMU usability?
 
If, for a user to have a stable guest environment, we require the use
of libvirt, then QEMU by itself is less usable. We have machine types
in QEMU to expose a stable machine to a guest. CPU models should be
part of that.

--
Gleb.



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Anthony Liguori

On 03/11/2012 09:56 AM, Gleb Natapov wrote:

On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:

-cpu best wouldn't solve this.  You need a read/write configuration
file where QEMU probes the available CPU and records it to be used
for the lifetime of the VM.

That's what I thought too, but this shouldn't be the case (Avi's idea).
We need two things: 1) the CPU model config should be per machine type;
2) QEMU should refuse to start if it cannot create the cpu exactly as
specified by the model config.


This would either mean:

A. pc-1.1 uses -cpu best with a fixed mask for 1.1

B. pc-1.1 hardcodes Westmere or some other family

(A) would imply a different CPU if you moved the machine from one system to 
another.  I would think this would be very problematic from a user's perspective.


(B) would imply that we had to choose the least common denominator which is 
essentially what we do today with qemu64.  If you want to just switch qemu64 to 
Conroe, I don't think that's a huge difference from what we have today.



It's a discussion about how we handle this up and down the stack.

The question is who should define and manage CPU compatibility.
Right now QEMU does to a certain degree, libvirt discards this and
does its own thing, and VDSM/ovirt-engine assumes that we're
providing something and has built a UI around it.

If we want QEMU to be usable without a management layer then QEMU
should provide stable CPU models, stable in the sense that a qemu,
kernel, or CPU upgrade does not change what the guest sees.


We do this today by exposing -cpu qemu64 by default.  If all you're advocating 
is doing -cpu Conroe by default, that's fine.


But I fail to see where this fits into the larger discussion here.  The problem 
to solve is: I want to use the largest possible subset of CPU features available 
uniformly throughout my datacenter.


QEMU and libvirt have single node views so they cannot solve this problem on 
their own.  Whether that subset is a generic Westmere-like processor that never 
existed IRL or a specific Westmere processor seems like a decision that should 
be made by the datacenter level manager with the node level view.


If I have a homogeneous environment of Xeon 7540, I would probably like to see 
a Xeon 7540 in my guest.  Doesn't it make sense to enable the management tool to 
make this decision?


Regards,

Anthony Liguori



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Anthony Liguori

On 03/11/2012 10:12 AM, Gleb Natapov wrote:

On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote:

If libvirt assumes anything about what kvm actually supports it is
working only by sheer luck.


Well the simple answer for libvirt is don't use -nodefconfig and
then it can reuse the CPU definitions (including any that the user
adds).

CPU models should be usable even with -nodefconfig. A CPU model is more
like a device. By -cpu Nehalem I am saying I want a Nehalem device in my
machine.


Let's say we moved CPU definitions to /usr/share/qemu/cpu-models.xml.

Obviously, we'd want a command line option to be able to change that location so 
we'd introduce -cpu-models PATH.


But we want all of our command line options to be settable by the global
configuration file, so we would have a cpu-model=PATH entry in the
configuration file.


But why hard code a path when we can just set the default path in the
configuration file?  So let's avoid hard coding and just put
cpu-models=/usr/share/qemu/cpu-models.xml in the default configuration file.


But now when libvirt uses -nodefconfig, those models go away.  -nodefconfig 
means start QEMU in the most minimal state possible.  You get what you pay for 
if you use it.


We'll have the same problem with machine configuration files.  At some point in 
time, -nodefconfig will make machine models disappear.


Regards,

Anthony Liguori



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sun, Mar 11, 2012 at 10:33:15AM -0500, Anthony Liguori wrote:
 On 03/11/2012 09:56 AM, Gleb Natapov wrote:
 On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:
 -cpu best wouldn't solve this.  You need a read/write configuration
 file where QEMU probes the available CPU and records it to be used
 for the lifetime of the VM.
 That's what I thought too, but this shouldn't be the case (Avi's idea).
 We need two things: 1) the CPU model config should be per machine type;
 2) QEMU should refuse to start if it cannot create the cpu exactly as
 specified by the model config.
 
 This would either mean:
 
 A. pc-1.1 uses -cpu best with a fixed mask for 1.1
 
 B. pc-1.1 hardcodes Westmere or some other family
 
This would mean neither A nor B. Maybe it wasn't clear, but I wasn't
talking about -cpu best above. I am talking about any CPU model with a
fixed meaning (not host or best, which are host-cpu dependent). Let's
take Nehalem for example (just to move on from Westmere :)). Currently
it has level=2. Eduardo wants to fix it to be 11, but old guests,
installed with -cpu Nehalem, should see exactly the same CPU. How do
you do that? Have different Nehalem definitions for pc-1.0 (with
level=2) and pc-1.1 (with level=11). Let's get back to Westmere. It
actually has level=11, but that only exposes another problem: the
kernel 3.3 and qemu-1.1 combo will support the architectural PMU,
which is exposed in cpuid leaf 10. We do not want guests installed
with -cpu Westmere and qemu-1.0 to see the architectural PMU after an
upgrade. How do you do that? Have different Westmere definitions for
pc-1.0 (does not report the PMU) and pc-1.1 (reports the PMU). What
happens if you try to run qemu-1.1 -cpu Westmere on a kernel < 3.3
(without PMU support)? Qemu will fail to start.


 (A) would imply a different CPU if you moved the machine from one
 system to another.  I would think this would be very problematic
 from a user's perspective.
 
 (B) would imply that we had to choose the least common denominator
 which is essentially what we do today with qemu64.  If you want to
 just switch qemu64 to Conroe, I don't think that's a huge difference
 from what we have today.
 
 It's a discussion about how we handle this up and down the stack.
 
 The question is who should define and manage CPU compatibility.
 Right now QEMU does to a certain degree, libvirt discards this and
 does its own thing, and VDSM/ovirt-engine assumes that we're
 providing something and has built a UI around it.
 If we want QEMU to be usable without a management layer then QEMU
 should provide stable CPU models, stable in the sense that a qemu,
 kernel, or CPU upgrade does not change what the guest sees.
 
 We do this today by exposing -cpu qemu64 by default.  If all you're
 advocating is doing -cpu Conroe by default, that's fine.
I am not advocating that. I am saying we should be able to amend the
qemu64 definition without breaking older guests that use it.

 
 But I fail to see where this fits into the larger discussion here.
 The problem to solve is: I want to use the largest possible subset
 of CPU features available uniformly throughout my datacenter.
 
 QEMU and libvirt have single node views so they cannot solve this
 problem on their own.  Whether that subset is a generic
 Westmere-like processor that never existed IRL or a specific
 Westmere processor seems like a decision that should be made by the
 datacenter level manager with the node level view.
 
 If I have a homogeneous environment of Xeon 7540, I would probably
 like to see a Xeon 7540 in my guest.  Doesn't it make sense to
 enable the management tool to make this decision?
 
Of course neither QEMU nor libvirt can make a cluster-wide decision.
If QEMU provides sane CPU model definitions (usable even with
-nodefconfig), it would always be possible to find the model that fits
best. If the oldest CPU in the data center is a Nehalem then probably
-cpu Nehalem will do. But our CPU model definitions have a lot of
shortcomings, and we were talking with Eduardo about how to fix them
when he brought this thread back to life, so maybe I steered the
discussion a little bit in the wrong direction, but I do think these
things are connected. If QEMU CPU model definitions are not stable
across upgrades, how can we tell management that it is safe to use
them? Instead they insist on reimplementing the same logic in the
management layer and do it badly (because of the lack of info).

--
Gleb.



Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

2012-03-11 Thread Gleb Natapov
On Sun, Mar 11, 2012 at 10:41:32AM -0500, Anthony Liguori wrote:
 On 03/11/2012 10:12 AM, Gleb Natapov wrote:
 On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote:
 If libvirt assumes anything about what kvm actually supports it is
 working only by sheer luck.
 
 Well the simple answer for libvirt is don't use -nodefconfig and
 then it can reuse the CPU definitions (including any that the user
 adds).
 CPU models should be usable even with -nodefconfig. A CPU model is
 more like a device. By -cpu Nehalem I am saying I want a Nehalem
 device in my machine.
 
 Let's say we moved CPU definitions to /usr/share/qemu/cpu-models.xml.
 
 Obviously, we'd want a command line option to be able to change that
 location so we'd introduce -cpu-models PATH.
 
 But we want all of our command line options to be settable by the
 global configuration file, so we would have a cpu-model=PATH entry in
 the configuration file.
 
 But why hard code a path when we can just set the default path in
 the configuration file?  So let's avoid hard coding and just put
 cpu-models=/usr/share/qemu/cpu-models.xml in the default
 configuration file.
 
We have two places where we define cpu models: hardcoded in
target-i386/cpuid.c and in target-x86_64.conf. We moved them out to a
conf file because this way it is easier to add, update, examine, and
compare CPU models. But they should still be treated as an essential
part of qemu. Given this, I do not see the step above as a logical one.
CPU models are not part of the machine config.
-cpu Nehalem,-sse,level=3,model=5 is part of the machine config.

What if we introduce a way to write devices in Lua? Should -nodefconfig
drop devices implemented as Lua scripts too?

 But now when libvirt uses -nodefconfig, those models go away.
 -nodefconfig means start QEMU in the most minimal state possible.
 You get what you pay for if you use it.
 
 We'll have the same problem with machine configuration files.  At
 some point in time, -nodefconfig will make machine models disappear.
 
--
Gleb.



[libvirt] [PATCH RFC 1/8] add virCapabilitiesDupMachines

2012-03-11 Thread Lee Schermerhorn
Add support to duplicate a virCapsGuestMachine object -- e.g., from
cached emulator information.

---
 src/conf/capabilities.c  |   34 ++
 src/conf/capabilities.h  |3 +++
 src/libvirt_private.syms |1 +
 3 files changed, 38 insertions(+)

Index: libvirt-0.9.10/src/conf/capabilities.h
===
--- libvirt-0.9.10.orig/src/conf/capabilities.h
+++ libvirt-0.9.10/src/conf/capabilities.h
@@ -208,6 +208,9 @@ virCapabilitiesSetHostCPU(virCapsPtr cap
 extern virCapsGuestMachinePtr *
 virCapabilitiesAllocMachines(const char *const *names,
  int nnames);
+extern virCapsGuestMachinePtr *
+virCapabilitiesDupMachines(const virCapsGuestMachinePtr *smachines,
+                           int nmachines);
 extern void
 virCapabilitiesFreeMachines(virCapsGuestMachinePtr *machines,
 int nmachines);
Index: libvirt-0.9.10/src/conf/capabilities.c
===
--- libvirt-0.9.10.orig/src/conf/capabilities.c
+++ libvirt-0.9.10/src/conf/capabilities.c
@@ -325,6 +325,40 @@ virCapabilitiesAllocMachines(const char
 }
 
 /**
+ * virCapabilitiesDupMachines:
+ * @smachines: table of virCapsGuestMachinePtr
+ * @nmachines: number of machine variants in table
+ *
+ * Allocate a table of virCapsGuestMachinePtr from the supplied table
+ * of virCapsGuestMachinePtr
+ */
+virCapsGuestMachinePtr *
+virCapabilitiesDupMachines(const virCapsGuestMachinePtr *smachines,
+                           int nmachines)
+{
+    virCapsGuestMachinePtr *machines;
+    int i;
+
+    if (VIR_ALLOC_N(machines, nmachines) < 0)
+        return NULL;
+
+    for (i = 0; i < nmachines; i++) {
+        if (VIR_ALLOC(machines[i]) < 0 ||
+            !(machines[i]->name = strdup(smachines[i]->name)))
+            goto error;
+        if (smachines[i]->canonical &&
+            !(machines[i]->canonical = strdup(smachines[i]->canonical)))
+            goto error;
+    }
+
+    return machines;
+
+error:
+    virCapabilitiesFreeMachines(machines, nmachines);
+    return NULL;
+}
+
+/**
  * virCapabilitiesFreeMachines:
  * @machines: table of vircapsGuestMachinePtr
  *
Index: libvirt-0.9.10/src/libvirt_private.syms
===
--- libvirt-0.9.10.orig/src/libvirt_private.syms
+++ libvirt-0.9.10/src/libvirt_private.syms
@@ -49,6 +49,7 @@ virCapabilitiesAllocMachines;
 virCapabilitiesDefaultGuestArch;
 virCapabilitiesDefaultGuestEmulator;
 virCapabilitiesDefaultGuestMachine;
+virCapabilitiesDupMachines;
 virCapabilitiesFormatXML;
 virCapabilitiesFree;
 virCapabilitiesFreeMachines;



[libvirt] [PATCH RFC 2/8] qemu: add VirBitmapDup

2012-03-11 Thread Lee Schermerhorn
Add a function to duplicate a virBitmap, for returning
cached qemuCaps[Flags] from qemuCapsExtractVersionInfo.

TODO:  could factor the size calculation into a macro or inline
function as it's used by both Dup and Alloc.
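
One possible shape for that factored-out helper (a sketch; the macro
name is hypothetical, the arithmetic mirrors virBitmapDup/virBitmapAlloc):

    #define VIR_BITMAP_UNITS(size) \
        (((size) + VIR_BITMAP_BITS_PER_UNIT - 1) / VIR_BITMAP_BITS_PER_UNIT)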

---
 src/libvirt_private.syms |1 +
 src/util/bitmap.c|   24 
 src/util/bitmap.h|6 ++
 3 files changed, 31 insertions(+)

Index: libvirt-0.9.10/src/util/bitmap.c
===
--- libvirt-0.9.10.orig/src/util/bitmap.c
+++ libvirt-0.9.10/src/util/bitmap.c
@@ -80,6 +80,30 @@ virBitmapPtr virBitmapAlloc(size_t size)
 }
 
 /**
+ * virBitmapDup:
+ * @sbm: source bitmap to duplicate
+ *
+ * Duplicate the source bitmap.
+ *
+ * Returns a pointer to the duplicated bitmap or NULL if
+ * memory cannot be allocated.
+ */
+virBitmapPtr virBitmapDup(virBitmapPtr sbm)
+{
+    virBitmapPtr bitmap;
+
+    bitmap = virBitmapAlloc(sbm->size);
+    if (bitmap) {
+        size_t sz = (sbm->size + VIR_BITMAP_BITS_PER_UNIT - 1) /
+                    VIR_BITMAP_BITS_PER_UNIT;
+        int i;
+        for (i = 0; i < sz; ++i)
+            bitmap->map[i] = sbm->map[i];
+    }
+    return bitmap;
+}
+
+/**
  * virBitmapFree:
  * @bitmap: previously allocated bitmap
  *
Index: libvirt-0.9.10/src/util/bitmap.h
===
--- libvirt-0.9.10.orig/src/util/bitmap.h
+++ libvirt-0.9.10/src/util/bitmap.h
@@ -37,6 +37,12 @@ typedef virBitmap *virBitmapPtr;
 virBitmapPtr virBitmapAlloc(size_t size) ATTRIBUTE_RETURN_CHECK;
 
 /*
+ * Duplicate the specified virBitmap
+ */
+virBitmapPtr virBitmapDup(virBitmapPtr src)
+ATTRIBUTE_NONNULL(1) ATTRIBUTE_RETURN_CHECK;
+
+/*
  * Free previously allocated bitmap
  */
 void virBitmapFree(virBitmapPtr bitmap);
Index: libvirt-0.9.10/src/libvirt_private.syms
===
--- libvirt-0.9.10.orig/src/libvirt_private.syms
+++ libvirt-0.9.10/src/libvirt_private.syms
@@ -13,6 +13,7 @@ virRequestUsername;
 # bitmap.h
 virBitmapAlloc;
 virBitmapClearBit;
+virBitmapDup;
 virBitmapFree;
 virBitmapGetBit;
 virBitmapSetBit;



[libvirt] [PATCH RFC 0/8] qemu: Cache results of parsing qemu help output

2012-03-11 Thread Lee Schermerhorn
Stracing libvirtd shows that the qemu driver is executing 2 different
qemu binaries 3 times each to fetch the version, capabilities [supported
devices], and cpu models each time VM state is queried.  E.g., [lines wrapped]:

6471  17:15:26.561890 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-cpu", "?"], [/* 2 vars */]) = 0
6472  17:15:26.626668 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-help"], [/* 2 vars */]) = 0
6473  17:15:26.698104 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-device", "?", "-device", "pci-assign,?",
"-device", "virtio-blk-pci,?"], [/* 2 vars */]) = 0
6484  17:15:27.267770 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-cpu", "?"],
[/* 2 vars */]) = 0
6492  17:15:27.333177 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-help"],
[/* 2 vars */]) = 0
6496  17:15:27.402280 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-device", "?", "-device",
"pci-assign,?", "-device", "virtio-blk-pci,?"],
[/* 2 vars */]) = 0

~1sec per libvirt api call.  Not a killer, but on a heavily loaded
host -- several 10s of VMs -- a periodic query of all VM state, such
as from a cloud compute manager, can take a couple of minutes to
complete.

Because the qemu binaries on the host do not change all that often,
the results of parsing the qemu help output from the exec's above
can be cached.  The qemu driver already does some caching of
capabilities, but it does not prevent the execs above.

This series is a work in progress.  I'm submitting it as an RFC because I
saw Eric mention the frequent execing of qemu binaries and I have been
working on this to eliminate the overhead shown above.

The series caches the parse results of:

+ qemuCapsExtractVersionInfo
+ qemuCapsProbeMachineTypes
+ qemuCapsProbeCPUModels

by splitting these functions into two parts.  The existing function
name fetches the cached parse results for the specified binary and returns
them.  The other half, named qemuCapsCacheX, where X is one of
ExtractVersionInfo, ProbeMachineTypes, and ProbeCPUModels, exec's the
emulator binary and caches the results.  The act of fetching the
cached results will fill or refresh the cache as necessary in a new
function qemuCapsCachedInfoGet().  A few auxilliary function have been
added -- e.g., virCapabilitiesDupMachines() to duplicate a cached list
of machine types and virBitmapDup() to duplicate cached capabilities
flags.

The series does not attempt to integrate with nor remove the existing
capabilities caching.  TBD.

The series was developed and tested in the context of the Ubuntu 11.04 natty
libvirt_0.8.8-1ubuntu6.7 package using quilt to manage patches in the
debian/patches directory.  In that context, it builds, passes all
make check tests [under pbuilder] and some fairly heavy, overlapping VM
launch tests where it does eliminate all but a few initial exec's of the
various qemu* and kvm binaries.

The version here, rebased to libvirt-0.9.10, builds cleanly under mock on
Fedora 16 in the context of a modified libvirt-0.9.10-1.fc16 source package.
I.e., no errors and warning-for-warning compatible with build of the
libvirt-0.9.10 fc16 srpm downloaded from libvirt.org.  I placed the modified
spec file [applies the patches] and the build logs at:

http://downloads.linux.hp.com/~lts/Libvirt/

I have installed the patched libvirt on a fedora 16 system and successfully
defined and launched a vm.  Testing in progress.  I'll place an annotated
test log on the site above when complete.

I also need to rebase atop the current mainline sources, but I wanted to get
this series out for review to see if the overall approach would be acceptable.

Comments?



[libvirt] [PATCH RFC 4/8] qemu: hook up qemuCapsExtractVersionInfo

2012-03-11 Thread Lee Schermerhorn
Hook up qemuCapsExtractVersionInfo capabilities api to the emulator
cache framework by splitting into two parts:

  - qemuCapsExtractVersionInfo() looks up emulator in cache and
returns the cached version and flags.  Cache look up may fill or
refresh the cache.
  - wrap the part of the original qemuCapsExtractVersionInfo() with
qemuCapsCacheVersionInfo() to run the specified binary and update
the cached information for this binary.
---
 src/qemu/qemu_capabilities.c |   51 ++-
 1 file changed, 32 insertions(+), 19 deletions(-)

Index: libvirt-0.9.10/src/qemu/qemu_capabilities.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.c
@@ -1498,16 +1498,37 @@ int qemuCapsExtractVersionInfo(const cha
unsigned int *retversion,
virBitmapPtr *retflags)
 {
+    qemuEmulatorCachePtr emulator;
+    int ret = -1;
+
+    emulator = qemuEmulatorCachedInfoGet(QEMU_PROBE_VERSION_INFO, qemu, arch);
+    if (emulator) {
+        if (retflags)
+            *retflags = virBitmapDup(emulator->caps);
+        if (retversion)
+            *retversion = emulator->version;
+        ret = 0;
+    } else {
+        if (retflags)
+            *retflags = 0;
+        if (retversion)
+            *retversion = 0;
+    }
+
+    qemuEmulatorCachedInfoRelease(emulator);
+    return ret;
+}
+
+static int
+qemuCapsCacheVersionInfo(qemuEmulatorCachePtr emulator)
+{
 int ret = -1;
 unsigned int version, is_kvm, kvm_version;
 virBitmapPtr flags = NULL;
 char *help = NULL;
+    char *qemu = emulator->path, *arch = emulator->arch;
 virCommandPtr cmd;
-
-    if (retflags)
-        *retflags = NULL;
-    if (retversion)
-        *retversion = 0;
+    VIR_DEBUG("Caching Version Info for %s - %s", qemu, arch ?: "no-arch");
 
 /* Make sure the binary we are about to try exec'ing exists.
  * Technically we could catch the exec() failure, but that's
@@ -1532,8 +1553,8 @@ int qemuCapsExtractVersionInfo(const cha
 goto cleanup;
 
 /* Currently only x86_64 and i686 support PCI-multibus. */
-    if (STREQLEN(arch, "x86_64", 6) ||
-        STREQLEN(arch, "i686", 4)) {
+    if (arch && (STREQLEN(arch, "x86_64", 6) ||
+                 STREQLEN(arch, "i686", 4))) {
 qemuCapsSet(flags, QEMU_CAPS_PCI_MULTIBUS);
 }
 
@@ -1544,12 +1565,10 @@ int qemuCapsExtractVersionInfo(const cha
        qemuCapsExtractDeviceStr(qemu, flags) < 0)
 goto cleanup;
 
-    if (retversion)
-        *retversion = version;
-    if (retflags) {
-        *retflags = flags;
-        flags = NULL;
-    }
+    emulator->version = version;
+    qemuCapsFree(emulator->caps);    /* for possible refresh */
+    emulator->caps = flags;
+    flags = NULL;
 
 ret = 0;
 
@@ -1561,12 +1580,6 @@ cleanup:
 return ret;
 }
 
-static int
-qemuCapsCacheVersionInfo(qemuEmulatorCachePtr emulator)
-{
-   return emulator ? 0 : 1;
-}
-
 static void
 uname_normalize (struct utsname *ut)
 {



[libvirt] [PATCH RFC 3/8] qemu: add qemu emulator cache framework

2012-03-11 Thread Lee Schermerhorn
Define a qemu emulator cache structure and functions to look up,
create, and refresh emulator cache objects.  The cache tags are
the paths to the emulator binaries.  E.g., /usr/bin/qemu

Subsequent patches will hook up the various extract/probe info
functions to consult the cache.

Notes/questions:
 1) qemuCapsProbes converted to bitmap along with capabilities flags
as part of rebase.  Overkill?
 2) Is it OK for the root of the cache and the nEmulators to be statically
defined in qemu_capabilities.c as opposed to a field in the driver struct?
It is private to that source file and I don't see an easy way to get
a handle on the driver struct therein.

---
 src/qemu/qemu_capabilities.c |  166 +++
 1 file changed, 166 insertions(+)

Index: libvirt-0.9.10/src/qemu/qemu_capabilities.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.c
@@ -170,6 +170,46 @@ struct qemu_arch_info {
 int nflags;
 };
 
+/*
+ * Flags to record which probes have been cached
+ */
+enum qemuCapsProbes {
+    QEMU_PROBE_VERSION_INFO   = 0,
+    QEMU_PROBE_MACHINE_TYPES  = 1,
+    QEMU_PROBE_CPU_MODELS     = 2,
+    QEMU_PROBE_SIZE
+};
+
+typedef struct _qemuEmulatorCache qemuEmulatorCache;
+typedef qemuEmulatorCache* qemuEmulatorCachePtr;
+struct _qemuEmulatorCache {
+    char                    *path;
+    char                    *arch;
+    time_t                   mtime;
+    virBitmapPtr             cachedProbes;
+
+    unsigned int             version;
+    virBitmapPtr             caps;
+
+    virCapsGuestMachinePtr  *machines;
+    int                      nmachines;
+
+    char                   **cpus;
+    unsigned int             ncpus;
+};
+
+static qemuEmulatorCachePtr *emulatorCache = NULL;
+static int nEmulators;
+
+static qemuEmulatorCachePtr
+qemuEmulatorCachedInfoGet(enum qemuCapsProbes probe,
+                          const char *binary,
+                          const char *arch);
+
+static void
+qemuEmulatorCachedInfoRelease(qemuEmulatorCachePtr emulator ATTRIBUTE_UNUSED)
+{ }
+
 /* Feature flags for the architecture info */
 static const struct qemu_feature_flags const arch_info_i686_flags [] = {
 { pae,  1, 0 },
@@ -319,6 +359,12 @@ cleanup:
 }
 
 static int
+qemuCapsCacheMachineTypes(qemuEmulatorCachePtr emulator)
+{
+   return emulator ? 0 : 1;
+}
+
+static int
 qemuCapsGetOldMachinesFromInfo(virCapsGuestDomainInfoPtr info,
const char *emulator,
time_t emulator_mtime,
@@ -612,6 +658,11 @@ cleanup:
 return ret;
 }
 
+static int
+qemuCapsCacheCPUModels(qemuEmulatorCachePtr emulator)
+{
+   return emulator ? 0 : 1;
+}
 
 static int
 qemuCapsInitGuest(virCapsPtr caps,
@@ -1510,6 +1561,12 @@ cleanup:
 return ret;
 }
 
+static int
+qemuCapsCacheVersionInfo(qemuEmulatorCachePtr emulator)
+{
+   return emulator ? 0 : 1;
+}
+
 static void
 uname_normalize (struct utsname *ut)
 {
@@ -1610,3 +1667,112 @@ qemuCapsGet(virBitmapPtr caps,
 else
 return b;
 }
+
+static qemuEmulatorCachePtr
+qemuEmulatorCachedInfoGet(enum qemuCapsProbes probe,
+                          const char *binary,
+                          const char *arch)
+{
+    qemuEmulatorCachePtr emulator = NULL;
+    struct stat st;
+    bool alreadyCached;
+    int i;
+
+    if (stat(binary, &st) != 0) {
+        char ebuf[1024];
+        VIR_INFO("Failed to stat emulator %s : %s",
+                 binary, virStrerror(errno, ebuf, sizeof(ebuf)));
+        goto error;
+    }
+
+    for (i = 0; i < nEmulators; ++i) {
+        emulator = emulatorCache[i];
+
+        if (!STREQ(binary, emulator->path))
+            continue;
+
+        if (arch && !emulator->arch) {
+            if (!(emulator->arch = strdup(arch)))
+                goto no_memory;
+            /*
+             * We have an 'arch' now, where we didn't before.
+             * So, even if we've already cached this probe,
+             * refresh the cache with the specified arch.
+             */
+            break;
+        }
+
+        if (st.st_mtime != emulator->mtime)
+            break;  /* binary changed, refresh cache */
+
+        if (virBitmapGetBit(emulator->cachedProbes, probe, &alreadyCached) < 0) {
+            VIR_ERROR(_("Unrecognized probe id '%d'"), probe);
+            goto error;
+        }
+        if (!alreadyCached)
+            break;  /* do it now */
+
+        return emulator;
+    }
+
+    if (i == nEmulators) {
+        if (VIR_REALLOC_N(emulatorCache, nEmulators + 1) < 0)
+            goto no_memory;
+        if (VIR_ALLOC(emulator) < 0)
+            goto no_memory;
+        if (!(emulator->path = strdup(binary)))
+            goto no_memory_free_emulator;
+        if (arch && !(emulator->arch = strdup(arch)))
+            goto no_memory_free_emulator;
+        if (!(emulator->cachedProbes = virBitmapAlloc(QEMU_PROBE_SIZE)))

[libvirt] [PATCH RFC 5/8] qemu: hook up qemuCapsProbeMachineTypes

2012-03-11 Thread Lee Schermerhorn
Hook up qemuCapsProbeMachineTypes capabilities api to the emulator
cache framework:

  - qemuCapsProbeMachineTypes() looks up the emulator in the cache and
    returns the cached machine types.
  - wrap the part of the original qemuCapsProbeMachineTypes() with
qemuCapsCacheMachineTypes() to run the specified binary and update
the cached machine types supported by this binary.
---
 src/qemu/qemu_capabilities.c |   39 ---
 1 file changed, 32 insertions(+), 7 deletions(-)

Index: libvirt-0.9.10/src/qemu/qemu_capabilities.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.c
@@ -307,6 +307,11 @@ qemuCapsParseMachineTypesStr(const char
 }
 } while ((p = next));
 
+/*
+ * Free in case of possible cache refresh
+ */
+virCapabilitiesFreeMachines(*machines, *nmachines);
+
 *machines = list;
 *nmachines = nitems;
 
@@ -323,10 +328,36 @@ qemuCapsProbeMachineTypes(const char *bi
   virCapsGuestMachinePtr **machines,
   int *nmachines)
 {
+    qemuEmulatorCachePtr emulator;
+    int ret = -1;
+
+    emulator = qemuEmulatorCachedInfoGet(QEMU_PROBE_MACHINE_TYPES, binary, NULL);
+    if (emulator) {
+        ret = 0;
+        if (machines)
+            *machines = virCapabilitiesDupMachines(emulator->machines,
+                                                   emulator->nmachines);
+        if (nmachines)
+            *nmachines = emulator->nmachines;
+    } else {
+        if (machines)
+            *machines = NULL;
+        if (nmachines)
+            *nmachines = 0;
+    }
+
+    qemuEmulatorCachedInfoRelease(emulator);
+    return ret;
+}
+
+static int
+qemuCapsCacheMachineTypes(qemuEmulatorCachePtr emulator)
+{
+    char *binary = emulator->path;
 char *output;
 int ret = -1;
 virCommandPtr cmd;
 int status;
+    VIR_DEBUG("Caching Machine Types for %s", binary);
 
 /* Make sure the binary we are about to try exec'ing exists.
  * Technically we could catch the exec() failure, but that's
@@ -346,7 +377,7 @@ qemuCapsProbeMachineTypes(const char *bi
     if (virCommandRun(cmd, &status) < 0)
 goto cleanup;
 
-    if (qemuCapsParseMachineTypesStr(output, machines, nmachines) < 0)
+    if (qemuCapsParseMachineTypesStr(output, &emulator->machines,
+                                     &emulator->nmachines) < 0)
 goto cleanup;
 
 ret = 0;
@@ -359,12 +390,6 @@ cleanup:
 }
 
 static int
-qemuCapsCacheMachineTypes(qemuEmulatorCachePtr emulator)
-{
-   return emulator ? 0 : 1;
-}
-
-static int
 qemuCapsGetOldMachinesFromInfo(virCapsGuestDomainInfoPtr info,
const char *emulator,
time_t emulator_mtime,

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] Restart: [PATCH RFC 0/8] qemu: Cache results of parsing qemu help output

2012-03-11 Thread Lee Schermerhorn
Send of series failed at patch 6.  Attempting to restart here
rather than resend entire series.

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH RFC 7/8] use qemuCapsFreeCPUModels in qemuBuildCpuArgString

2012-03-11 Thread Lee Schermerhorn
Use qemuCapsFreeCPUModels() in qemuBuildCpuArgStr().
Because it's there.
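
For context, the helper being called here frees a counted string list
through pointers and resets the caller's variables. A stand-alone sketch
of that shape (freeStringList is a made-up stand-in for
qemuCapsFreeCPUModels(), whose real signature takes unsigned int *ncpus
and const char ***cpus; see patch 6/8):

#include <stdlib.h>

/* Free each string, then the array, and zero the caller's count and
 * pointer so the list cannot be freed twice. */
static void
freeStringList(unsigned int *count, char ***list)
{
    unsigned int i;

    if (!list || !*list)
        return;
    for (i = 0; i < *count; i++)
        free((*list)[i]);
    free(*list);
    *list = NULL;
    *count = 0;
}

With such a helper, the open-coded loop below collapses to a single
freeStringList(&ncpus, &cpus)-style call, and a stray double free becomes
impossible because the caller's variables are reset.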


---
 src/qemu/qemu_command.c |6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

Index: libvirt-0.9.10/src/qemu/qemu_command.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_command.c
+++ libvirt-0.9.10/src/qemu/qemu_command.c
@@ -3651,11 +3651,7 @@ cleanup:
 virCPUDefFree(guest);
 virCPUDefFree(cpu);
 
-    if (cpus) {
-        for (i = 0; i < ncpus; i++)
-            VIR_FREE(cpus[i]);
-        VIR_FREE(cpus);
-    }
+    qemuCapsFreeCPUModels(&ncpus, &cpus);
 
 return ret;
 

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH RFC 6/8] qemu: hook up qemuCapsProbeCPUModels

2012-03-11 Thread Lee Schermerhorn
Hook up qemuCapsProbeCPUModels capabilities api to the emulator
cache framework:

  - qemuCapsProbeCPUModels() looks up the emulator in the cache and
    returns the cached CPU models.
  - wrap the probing part of the original qemuCapsProbeCPUModels() in
    qemuCapsCacheCPUModels(), which runs the specified binary and updates
    the cached CPU models supported by that binary.
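
Because the cached list stays owned by the cache entry, the probe hands a
deep copy back to the caller. A minimal sketch of that copy step, with a
made-up name (dupStringList) rather than the patch's inline loop:

#include <stdlib.h>
#include <string.h>

/* Duplicate a counted string list so the caller owns the copy and the
 * cached original remains valid. Allocates n + 1 slots so the result is
 * NULL-terminated and non-NULL even for an empty list. */
static char **
dupStringList(char **src, unsigned int n)
{
    char **dst;
    unsigned int i;

    if (!(dst = calloc(n + 1, sizeof(*dst))))
        return NULL;
    for (i = 0; i < n; i++) {
        if (!(dst[i] = strdup(src[i]))) {
            while (i--)
                free(dst[i]);
            free(dst);
            return NULL;
        }
    }
    return dst;
}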
---
 src/qemu/qemu_capabilities.c |  123 ++-
 src/qemu/qemu_capabilities.h |3 +
 2 files changed, 91 insertions(+), 35 deletions(-)

Index: libvirt-0.9.10/src/qemu/qemu_capabilities.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.c
@@ -194,7 +194,7 @@ struct _qemuEmulatorCache {
 virCapsGuestMachinePtr  *machines;
 int nmachines;
 
-    char                    **cpus;
+    const char              **cpus;
     unsigned int            ncpus;
 };
 
@@ -480,6 +480,21 @@ qemuCapsGetOldMachines(const char *ostyp
 return 0;
 }
 
+/*
+ * Free list of cpus returned by qemuCapsParseCPUModels()
+ */
+void
+qemuCapsFreeCPUModels(unsigned int *ncpus, const char ***cpus)
+{
+    int i;
+
+    if (!cpus || !*cpus)
+        return;
+    for (i = 0; i < *ncpus; i++)
+        VIR_FREE((*cpus)[i]);
+    VIR_FREE(*cpus);
+    *ncpus = 0;
+}
 
 typedef int
 (*qemuCapsParseCPUModels)(const char *output,
@@ -500,7 +515,6 @@ qemuCapsParseX86Models(const char *outpu
 const char *next;
 unsigned int count = 0;
 const char **cpus = NULL;
-int i;
 
 do {
 const char *t;
@@ -547,6 +561,11 @@ qemuCapsParseX86Models(const char *outpu
 count++;
 } while ((p = next));
 
+/*
+ * Free any cached cpu models in case of possible cache refresh
+ */
+qemuCapsFreeCPUModels(retcount, retcpus);
+
 if (retcount)
 *retcount = count;
 if (retcpus)
@@ -555,11 +574,7 @@ qemuCapsParseX86Models(const char *outpu
 return 0;
 
 error:
-if (cpus) {
-for (i = 0; i  count; i++)
-VIR_FREE(cpus[i]);
-}
-VIR_FREE(cpus);
+qemuCapsFreeCPUModels(count, cpus);
 
 return -1;
 }
@@ -576,7 +591,7 @@ qemuCapsParsePPCModels(const char *outpu
 const char *next;
 unsigned int count = 0;
 const char **cpus = NULL;
-int i, ret = -1;
+int ret = -1;
 
 do {
 const char *t;
@@ -618,6 +633,11 @@ qemuCapsParsePPCModels(const char *outpu
 count++;
 } while ((p = next));
 
+/*
+ * Free any cached cpu models in case of possible cache refresh
+ */
+qemuCapsFreeCPUModels(retcount, retcpus);
+
 if (retcount)
 *retcount = count;
 if (retcpus) {
@@ -627,42 +647,81 @@ qemuCapsParsePPCModels(const char *outpu
 ret = 0;
 
 cleanup:
-    if (cpus) {
-        for (i = 0; i < count; i++)
-            VIR_FREE(cpus[i]);
-        VIR_FREE(cpus);
-    }
+    qemuCapsFreeCPUModels(&count, &cpus);
 return ret;
 }
 
 int
 qemuCapsProbeCPUModels(const char *qemu,
-   virBitmapPtr qemuCaps,
+   virBitmapPtr qemuCaps ATTRIBUTE_UNUSED,
const char *arch,
unsigned int *count,
-   const char ***cpus)
+   const char ***retcpus)
 {
-char *output = NULL;
-int ret = -1;
-qemuCapsParseCPUModels parse;
-virCommandPtr cmd;
+qemuEmulatorCachePtr emulator;
+const char **cpus = NULL;
+int i, ret = -1;
+int ncpus = 0;
 
 if (count)
 *count = 0;
-if (cpus)
-*cpus = NULL;
+if (retcpus)
+*retcpus = NULL;
 
-    if (STREQ(arch, "i686") || STREQ(arch, "x86_64"))
-        parse = qemuCapsParseX86Models;
-    else if (STREQ(arch, "ppc64"))
-        parse = qemuCapsParsePPCModels;
-    else {
-        VIR_DEBUG("don't know how to parse %s CPU models", arch);
+    emulator = qemuEmulatorCachedInfoGet(QEMU_PROBE_CPU_MODELS, qemu, arch);
+    if (emulator) {
+        if (retcpus) {
+            ncpus = emulator->ncpus;
+            if (VIR_ALLOC_N(cpus, ncpus) < 0)
+                goto no_memory;
+            for (i = 0; i < ncpus; ++i) {
+                if (!(cpus[i] = strdup(emulator->cpus[i])))
+                    goto no_memory;
+            }
+            *retcpus = cpus;
+        }
+        if (count)
+            *count = emulator->ncpus;
+        ret = 0;
+    }
+    goto release;
+
+no_memory:
+    if (cpus) {
+        for (i = 0; i < ncpus; i++)
+            VIR_FREE(cpus[i]);
+        VIR_FREE(cpus);
+    }
+
+release:
+    qemuEmulatorCachedInfoRelease(emulator);
+    return ret;
+}
+
+static int
+qemuCapsCacheCPUModels(qemuEmulatorCachePtr emulator)
+{
+    char *output = NULL;
+    char *arch = emulator->arch, *qemu = emulator->path;
+    int ret = -1;
+    qemuCapsParseCPUModels parse = NULL;
+    virCommandPtr cmd;
+    VIR_DEBUG("Caching CPU Models for %s - %s", qemu, arch ?: "no-arch");
+
+if (arch) {
+ 

[libvirt] Restart2: [PATCH RFC 0/8] qemu: Cache results of parsing qemu help output

2012-03-11 Thread Lee Schermerhorn
Restart of send failed at patch 8.  Not sure that one is needed,
but try, try again.

FYI:  failure is:  
smtplib.SMTPSenderRefused: (452, '4.3.1 Insufficient system storage', ...

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] [PATCH RFC 8/8] add qemu cache mutex

2012-03-11 Thread Lee Schermerhorn
Add a mutex for access to the qemu emulator cache.  Not clear that
this is actually needed -- driver should be locked across calls [?].
The patch can be dropped if not needed.
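
For reference, the locking pattern used below is: the cache lookup returns
with the mutex held (whether or not an entry was found), and
qemuEmulatorCachedInfoRelease() drops it, so an entry cannot change while a
caller copies data out of it. A self-contained sketch of that pairing in
plain pthreads -- hypothetical names and a fixed table, since libvirt wraps
this in virMutex:

#include <pthread.h>
#include <string.h>

struct cacheEntry { const char *path; };

static pthread_mutex_t cacheMutex = PTHREAD_MUTEX_INITIALIZER;
static struct cacheEntry table[] = { { "/usr/bin/qemu-kvm" } };

/* Get returns with the lock held -- found or not -- so every call site
 * can unconditionally pair it with cacheRelease(). */
static struct cacheEntry *
cacheGet(const char *path)
{
    size_t i;

    pthread_mutex_lock(&cacheMutex);
    for (i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(table[i].path, path) == 0)
            return &table[i];
    return NULL;    /* lock intentionally still held on failure, too */
}

static void
cacheRelease(struct cacheEntry *entry)
{
    (void)entry;    /* entries are static; releasing only drops the lock */
    pthread_mutex_unlock(&cacheMutex);
}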
---
 src/qemu/qemu_capabilities.c |   18 +-
 src/qemu/qemu_capabilities.h |2 ++
 src/qemu/qemu_driver.c   |3 +++
 3 files changed, 22 insertions(+), 1 deletion(-)

Index: libvirt-0.9.10/src/qemu/qemu_capabilities.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.c
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.c
@@ -27,6 +27,7 @@
 #include "memory.h"
 #include "logging.h"
 #include "virterror_internal.h"
+#include "threads.h"
 #include "util.h"
 #include "virfile.h"
 #include "nodeinfo.h"
@@ -180,6 +181,11 @@ enum qemuCapsProbes {
 QEMU_PROBE_SIZE
 };
 
+/*
+ * Use static initializer for tests
+ */
+static virMutex qemuEmulatorCacheMutex = { PTHREAD_MUTEX_INITIALIZER };
+
 typedef struct _qemuEmulatorCache qemuEmulatorCache;
 typedef qemuEmulatorCache* qemuEmulatorCachePtr;
 struct _qemuEmulatorCache {
@@ -206,9 +212,17 @@ qemuEmulatorCachedInfoGet(enum qemuCapsP
   const char *binary,
   const char *arch);
 
+int
+qemuCapsCacheInit(void)
+{
+    return virMutexInit(&qemuEmulatorCacheMutex);
+}
+
 static void
 qemuEmulatorCachedInfoRelease(qemuEmulatorCachePtr emulator ATTRIBUTE_UNUSED)
-{ }
+{
+    virMutexUnlock(&qemuEmulatorCacheMutex);
+}
 
 /* Feature flags for the architecture info */
 static const struct qemu_feature_flags const arch_info_i686_flags [] = {
@@ -1769,6 +1783,8 @@ qemuEmulatorCachedInfoGet(enum qemuCapsP
 bool alreadyCached;
 int i;
 
+    virMutexLock(&qemuEmulatorCacheMutex);
+
     if (stat(binary, &st) != 0) {
         char ebuf[1024];
         VIR_INFO("Failed to stat emulator %s : %s",
Index: libvirt-0.9.10/src/qemu/qemu_driver.c
===
--- libvirt-0.9.10.orig/src/qemu/qemu_driver.c
+++ libvirt-0.9.10/src/qemu/qemu_driver.c
@@ -585,6 +585,9 @@ qemudStartup(int privileged) {
     if (qemuSecurityInit(qemu_driver) < 0)
 goto error;
 
+    if (qemuCapsCacheInit() < 0)
+        goto error;
+
     if ((qemu_driver->caps = qemuCreateCapabilities(NULL,
                                                     qemu_driver)) == NULL)
 goto error;
Index: libvirt-0.9.10/src/qemu/qemu_capabilities.h
===
--- libvirt-0.9.10.orig/src/qemu/qemu_capabilities.h
+++ libvirt-0.9.10/src/qemu/qemu_capabilities.h
@@ -139,6 +139,8 @@ void qemuCapsClear(virBitmapPtr caps,
 bool qemuCapsGet(virBitmapPtr caps,
  enum qemuCapsFlags flag);
 
+int qemuCapsCacheInit(void);
+
 virCapsPtr qemuCapsInit(virCapsPtr old_caps);
 
 int qemuCapsProbeMachineTypes(const char *binary,

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 4/9] qemu: Do not start with source for removable disks if tray is open

2012-03-11 Thread Osier Yang

On 03/11/2012 10:44 PM, Paolo Bonzini wrote:

On 05/03/2012 11:25, Osier Yang wrote:

This is similar to the physical world: one would be surprised if the
box started up with a medium present while the tray is open.

New tests are added: disk-{cdrom,floppy}-tray cover qemu binaries that
support the -device flag, and disk-{cdrom,floppy}-no-device-cap cover
old qemu binaries that do not.


If the disk type is block, and the source drive is a CD-ROM, the
virtual tray state should be tied to the physical tray, even though this
isn't always the case due to QEMU bugs.

Perhaps you should fail creation if the tray attribute is open in the
above circumstances.  Another possibility is to forbid specifying the
tray attribute completely.


What problem would there be if the tray is open and we simply don't
put the physical CD-ROM source on the QEMU command line?



Or you can just drop this patch, and only print the tray state in the
virsh dumpxml output.  There are other attributes that are already
handled this way.



No. Printing the tray status only as internal XML is the opposite of
the purpose of these patches: we don't want a migrated (or saved +
restored) domain to see the medium still present in the guest when it
was ejected before. So the tray status needs to live in the persistent
XML, where we can look it up while starting the domain. Neither
forbidding the tray attribute entirely nor printing the XML only
internally works.

The only remaining choice is to fail creation if the tray status
is open. That is the straightforward logic, and we can do it that way,
but I would first like to understand why we have to, :-)
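
For concreteness, the tray state being discussed is an attribute on the
disk's target element, along these lines (an illustrative snippet
assuming the syntax this series proposes; attribute placement may differ
in the final patches):

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/boot.iso'/>
  <target dev='hdc' bus='ide' tray='open'/>
  <readonly/>
</disk>

Failing creation would then mean rejecting a domain whose persistent XML
carries tray='open' together with a source medium.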

Thanks for the input, Paolo.

Osier

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH 8/9] qemu: Update domain status to paused while suspend event is emitted

2012-03-11 Thread Osier Yang

On 03/11/2012 10:37 PM, Paolo Bonzini wrote:

On 05/03/2012 11:25, Osier Yang wrote:

This introduces a new paused reason, VIR_DOMAIN_PAUSED_SUSPEND,
and a new suspended event type, VIR_DOMAIN_EVENT_SUSPENDED_SUSPEND.

When a SUSPEND event occurs, the running domain's status will be
changed to paused with reason VIR_DOMAIN_PAUSED_SUSPEND,
and a new domain lifecycle event emitted with type
VIR_DOMAIN_EVENT_SUSPENDED_SUSPEND.
---


Does virsh resume correctly wake up such a domain?


Ah, yes, I prohibited waking up a domain that wasn't paused by a
SUSPEND event, but I forgot to handle the opposite case here.

If not, perhaps a different state should be added so that virsh resume
can look at the state and issue the appropriate monitor command (or
alternatively, it could be considered a QEMU bug).


We have the paused reason for that. The patch that should be
squashed in is in the attachment.

But I'm interested in trying whether virsh resume wakes up a
domain paused by a SUSPEND event.

Osier
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7b6e747..594c774 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1480,6 +1480,7 @@ static int qemudDomainResume(virDomainPtr dom) {
 virDomainObjPtr vm;
 int ret = -1;
 virDomainEventPtr event = NULL;
+int reason;
 
 qemuDriverLock(driver);
     vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -1500,18 +1501,25 @@ static int qemudDomainResume(virDomainPtr dom) {
                         "%s", _("domain is not running"));
 goto endjob;
 }
-    if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
-        if (qemuProcessStartCPUs(driver, vm, dom->conn,
-                                 VIR_DOMAIN_RUNNING_UNPAUSED,
-                                 QEMU_ASYNC_JOB_NONE) < 0) {
-            if (virGetLastError() == NULL)
-                qemuReportError(VIR_ERR_OPERATION_FAILED,
-                                "%s", _("resume operation failed"));
+    if (virDomainObjGetState(vm, &reason) == VIR_DOMAIN_PAUSED) {
+        if (reason != VIR_DOMAIN_PAUSED_SUSPEND) {
+            if (qemuProcessStartCPUs(driver, vm, dom->conn,
+                                     VIR_DOMAIN_RUNNING_UNPAUSED,
+                                     QEMU_ASYNC_JOB_NONE) < 0) {
+                if (virGetLastError() == NULL)
+                    qemuReportError(VIR_ERR_OPERATION_FAILED,
+                                    "%s", _("resume operation failed"));
+                goto endjob;
+            }
+            event = virDomainEventNewFromObj(vm,
+                                             VIR_DOMAIN_EVENT_RESUMED,
+                                             VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);
+        } else {
+            qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                            _("domain paused from guest side can't be resumed, "
+                              "you might want to wake it up"));
             goto endjob;
         }
-        event = virDomainEventNewFromObj(vm,
-                                         VIR_DOMAIN_EVENT_RESUMED,
-                                         VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);
     }
     if (virDomainSaveStatus(driver->caps, driver->stateDir, vm) < 0)
 goto endjob;
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list