[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Daniel P. Berrange
On Mon, Apr 05, 2010 at 11:11:48PM +0200, Alexander Graf wrote:
 Howdy,
 
 I've been thinking a bit further on the whole issue around 
 libvirt and why the situation as is isn't satisfying. I came
 to the following points that currently hurt building ease of
 use for KVM:
 
 1) Brand
 
 This is one of the major issues we have ourselves when it comes
 to appliances. We can ship appliances built for VMware. We can 
 ship appliances built for Xen. But we can't ship appliances 
 built for KVM, because there is no single management app we could
 target. That destroys the KVM brand IMHO.

With appliances there are two core aspects

 1. The description of VM hardware requirements
 2. The disk format

Traditionally VMware appliances have shipped a VMX file for 1. and
a VMDK file for 2. 

Shipping the native config file format with an appliance though is
the wrong thing to be doing. The native config format describes the
configuration for a VM for a specific deployment. This is not the
same as describing the hardware requirements of an appliance. As
the most simple example, a native config would have hardcoded disk
paths, or a specific choice of host network connectivity. Neither
of these things have any business being in the appliance config.

For this reason, there are now specific appliance formats. Libvirt
has long had its own appliance format (virt-image) which is separate
from the main XML format, so it avoids hardcoding deployment specific
options. There is also the vendor neutral OVF format which is widely 
supported by many mgmt tools. 

If people want to ship QEMU appliances I don't think libvirt is 
causing any problems here. Simply ship an OVF description + either
a raw or qcow2 disk image. Any app, libvirt or not, could work with
that.
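
As a rough sketch, such an appliance could be nothing more than a
directory (or tarball) like the following - the file names here are made up:

  myappliance/
    myappliance.ovf     OVF XML: CPU/RAM requirements, disk and NIC declarations
    myappliance.qcow2   the disk image referenced from the .ovf
    myappliance.mf      optional manifest with checksums

The deploying tool reads the .ovf and decides locally how to instantiate
the VM (disk paths, host networking, and so on).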

 2) Machine description
 
 If we build an appliance, we also create a configuration file that
  describes the VM. We can create .vmx files, we can create xen config
 files. We can not create KVM config files. There are none. We could
 create shell scripts, but would that help?

As described above, appliances really don't want to be using the 
native configuration formats, they want a higher level format like
OVF. The only reason so many people ship .vmx files is that this
predates the OVF format's existence.

With qdev you can load most options from a config file using the new
'-readconfig file' arg, but I guess there's more to be included
there still.
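
As a sketch of what such a config file can look like today (key names
follow the -readconfig/-writeconfig syntax, but the exact set of supported
options varies with the QEMU version, and the path below is made up):

  # vm.cfg -- load with: qemu -readconfig vm.cfg
  [drive "hd0"]
    file = "/var/lib/images/demo.qcow2"
    format = "qcow2"
    cache = "none"

Running an existing command line with -writeconfig dumps the options it
understands into this format, which is a convenient starting point.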

 3) Configuration conversion
 
 Partly due to qemu not having a configuration format, partly due to 
 libvirt's ambivalent approach, there is always conversion in 
 configuration formats involved. I think this is the main reason for
 the feature lag. If there wasn't a conversion step, there wouldn't 
 be lag. You could just hand edit the config file and be good.

[snip]

 Point 3 is the really tough one. It's the very basis of libvirt. And 
 it's plain wrong IMHO. I hate XML. I hate duplicated efforts. The 
 current conversion involves both. Every option added to qemu needs to 
 be added to libvirt. In XML. Bleks.

In the previous thread on this topic, I've already stated that we're
interested in providing a means to pass QEMU config options from
libvirt prior to their full modelling in the XML, to reduce, or completely 
eliminate any time-lag in using new features.


 Reading on IRC I seem to not be the only person thinking that, just 
 the first one mentioning this aloud I suppose. But that whole XML mess
 really hurts us too. Nobody wants to edit XML files. Nobody wants to 
 have two separate syntaxes to describe the same thing. It complicates
 everything without a clear benefit. And it puts me in a position where 
 I can't help people because I don't know the XML format. That should 
 never happen.

Even if an app was using QEMU directly, you can't presume that the app 
will use QEMU's config file as its native format. Many apps will store
configs in their own custom format (or in a database) and simply generate
the QEMU config data on the fly when starting a VM. In the same way libvirt
will generate QEMU config data on the fly when starting a VM. Having many
config formats & conversion / generation on the fly is a fact of life no 
matter what mgmt system you use.

The key point is that it needs to be really easy to get at the generated
QEMU config data to see what is actually being run. libvirt will always
save the exact QEMU config data it generates into a log file for this
exact purpose (/var/log/libvirt/qemu/$VMNAME.log).  The complexity here
is that you can't directly run this if TAP devices are in use for the
networking since it'll expect TAP FDs to be passed down. Although you 
could allow QEMU to open the TAP devices, this is not good  for security 
separation of QEMU from the host OS, so I'm not sure it can be easily
avoided. 

One attempt to make life easier was adding a 'virsh domxml-to-native'
command, which, given a libvirt XML config file, returns the corresponding
QEMU command line arguments.
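
For example (the guest name 'demo' is made up):

  # see the exact QEMU command line libvirt generated for a guest:
  less /var/log/libvirt/qemu/demo.log

  # convert a libvirt XML definition to QEMU arguments without running it:
  virsh domxml-to-native qemu-argv /etc/libvirt/qemu/demo.xml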

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Daniel P. Berrange
On Tue, Apr 06, 2010 at 01:14:36AM +0300, Avi Kivity wrote:
 On 04/06/2010 12:11 AM, Alexander Graf wrote:
 
 I can imagine 1) going away if we would set libvirt + virt-manager as 
 _the_ front-end and have everyone focus on it. I suppose it would also 
 help to rebrand it by then, but I'm not 100% sure about that. Either way, 
 there would have to be a definite statement that libvirt is the solution 
 to use. And _everyone_ would have to agree on that. Sounds like a hard 
 task. And by then we still don't really have a branded product stack.
 
 Point 3 is the really tough one. It's the very basis of libvirt. And it's 
 plain wrong IMHO. I hate XML. I hate duplicated efforts. The current 
 conversion involves both. Every option added to qemu needs to be added to 
 libvirt.
 
 Not just libvirt, virt-manager as well.  And that is typically more 
 difficult technically (though probably takes a lot less time).
 
 In XML. Bleks.

 
 Yeah.

Whether XML is a problem or not really depends on what kind of stack you
are looking at, and what group of users you're considering. 

 1. virsh -> QEMU 

This is the lowest level in libvirt, so XML is exposed to people
directly. We're really not expecting people to use this for 
creating new VMs though, precisely because people don't like XML;
see the next option instead. You can hot-plug/unplug devices without
knowing XML though.

 2. virt-install -> QEMU

Instead of XML this takes simple command line args to describe the
VM configuration, avoiding the need to know XML at all. It also automates
many other aspects like creation of storage, fetching of install
media, etc.

 3. virt-manager -> libvirt -> QEMU

A GUI, so XML is not exposed to users at all

 4. ovirt/rhev-m -> libvirt -> QEMU

Configuration is stored in a custom database schema. XML is merely
generated on the fly when spawning VMs.

 5. CIM/DMTF -> libvirt -> QEMU

Configuration is described in terms of the DMTF schema, translated
on the fly to libvirt XML. Apps using CIM likely don't use the
DMTF schema directly either, having their own format.

With the exception of the lowest level, virsh, XML is just an intermediate
interchange format, not the format that is directly exposed to users.
You can get at the raw QEMU level config that results in all cases.

There is a gap in this though, for people who don't want to use any kind
of management tool at all, but rather just script the low level bits 
directly. For them, virt-install may not be flexible enough, but virsh
is too raw, forcing knowledge of the XML format.
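
To make the contrast concrete, a sketch of the two entry points (names and
paths invented; virt-install option names vary a little between releases):

  # virt-install: no XML knowledge needed
  virt-install --name demo --ram 1024 --vcpus 2 \
      --disk path=/var/lib/libvirt/images/demo.img,size=8 \
      --cdrom /isos/distro.iso --network network=default

  # virsh: you write the <domain> XML yourself, then
  virsh define demo.xml
  virsh start demo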

 Sure, for libvirt it makes sense to be hypervisor-agnostic. For qemu it 
 doesn't. We want to be _the_ hypervisor. Setting our default front-end to 
 something that is agnostic weakens our point. And it slows down 
 development. And it hurts integration. And thus usability, thus adoption. 
 It hurts us.

 
 It doesn't make sense for libvirt to be hypervisor agnostic.  If it is, 
 people who want to use one hypervisor's advanced features are forced to 
 work around it.  Anthony wants multiple monitors for this, but that's a 
 bad workaround.  libvirt is placing developers using it in an impossible 
 situation - the developers want to use kvm-specific features and libvirt 
 is in the way.

I have proposed a couple of extensions to address this problem of feature
lag:

 - Provide a way to pass extra command line args to QEMU via libvirt
 - Provide a way to send/receive monitor commands via libvirt

This would give access to nearly all of QEMU's features.
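
As a sketch only (this is the rough shape such pass-through extensions
could take, not a final syntax), the XML side might look like:

  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    ...
    <qemu:commandline>
      <qemu:arg value='-some-brand-new-qemu-flag'/>
    </qemu:commandline>
  </domain>

and the monitor passthrough could be exposed along the lines of:

  virsh qemu-monitor-command demo --hmp 'info blockstats'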

Regards,
Daniel
-- 
|: Red Hat, Engineering, London-o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org-o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|




[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Avi Kivity

On 04/06/2010 01:29 AM, Alexander Graf wrote:



Well, I did suggest (and then withdraw) qemud.  The problem is that to get 
something working we'd duplicate all the work that's gone into libvirt - 
storage pools, svirt, network setup, etc.
 

That's infrastructure that should probably go along with qemu then. Why should 
other UIs not benefit from secure VMs? Why should other UIs not benefit from 
device passthrough cleverness? Why should other UIs not benefit from easier 
network setup?
   


You're right.  So we should move all the setup code from libvirt to 
qemud, and have libvirt just do the hypervisor-agnostic ABI conversion.


Note things like network setup are a bottomless pit.  Pretty soon you 
need to setup vlans and bonding etc.  If a user needs one of these and 
qemud doesn't provide it, then qemud becomes useless to them.  But the 
same problem applies to libvirt.



Take a look at our competition (vmware / vbox). They do the full stack. That's 
what users want. They want to do something easily. And I do too :-).
   


Well, let's resurrect qemud, populate it with code from libvirt (though 
I'm not sure C is the best language for it), and have libvirt talk to 
qemud.  That's what it does for esx anyway.
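
One way to picture that proposed split (purely a sketch of the layering
being discussed here):

  virt-manager / RHEV-M / custom UI    policy, GUI, cluster view
                |
             libvirt                   hypervisor-agnostic API + XML
                |
              qemud                    KVM-specific plumbing: storage pools,
                |                      sVirt, network setup, VM lifecycle
        qemu/kvm processes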


--
error compiling committee.c: too many arguments to function





[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Avi Kivity wrote:
 On 04/06/2010 01:29 AM, Alexander Graf wrote:

 Well, I did suggest (and then withdraw) qemud.  The problem is that
 to get something working we'd duplicate all the work that's gone
 into libvirt - storage pools, svirt, network setup, etc.
  
 That's infrastructure that should probably go along with qemu then.
 Why should other UIs not benefit from secure VMs? Why should other
 UIs not benefit from device passthrough cleverness? Why should other
 UIs not benefit from easier network setup?


 You're right.  So we should move all the setup code from libvirt to
 qemud, and have libvirt just do the hypervisor-agnostic ABI conversion.

I believe that's the right way to go, yes.


 Note things like network setup are a bottomless pit.  Pretty soon you
 need to setup vlans and bonding etc.  If a user needs one of these and
 qemud doesn't provide it, then qemud becomes useless to them.  But the
 same problem applies to libvirt.

If they are a bottomless pit then they are a bottomless pit. There's
nothing we can do about it. This pit needs to be dug either way, whether
it's in libvirt or in qemud.


 Take a look at our competition (vmware / vbox). They do the full
 stack. That's what users want. They want to do something easily. And
 I do too :-).


 Well, let's resurrect qemud, populate it with code from libvirt
 (though I'm not sure C is the best language for it), and have libvirt
 talk to qemud.  That's what it does for esx anyway.


I'm unsure what the right language would be. C probably is not. But
having VM management be done by something qemu'ish sounds like a good idea.


Alex




[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Avi Kivity

On 04/06/2010 03:28 PM, Alexander Graf wrote:

Note things like network setup are a bottomless pit.  Pretty soon you
need to setup vlans and bonding etc.  If a user needs one of these and
qemud doesn't provide it, then qemud becomes useless to them.  But the
same problem applies to libvirt.
 

If they are a bottomless pit then they are a bottomless pit. There's
nothing we can do about it. This pit needs to be dug either way, whether
it's in libvirt or in qemud.
   


Agreed.  The only difference is who's doing the digging.

One way to avoid it is to have a rich plugin API, so if someone needs 
to, say, set up traffic control on the interface, they can write a 
plugin to do that.


--
error compiling committee.c: too many arguments to function





[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Daniel P. Berrange wrote:
 On Mon, Apr 05, 2010 at 11:11:48PM +0200, Alexander Graf wrote:
   
 Howdy,

 I've been thinking a bit further on the whole issue around 
 libvirt and why the situation as is isn't satisfying. I came
 to the following points that currently hurt building ease of
 use for KVM:

 1) Brand

 This is one of the major issues we have ourselves when it comes
 to appliances. We can ship appliances built for VMware. We can 
 ship appliances built for Xen. But we can't ship appliances 
 built for KVM, because there is no single management app we could
 target. That destroys the KVM brand IMHO.
 

 With appliances there are two core aspects

  1. The description of VM hardware requirements
  2. The disk format

 Traditionally VMware appliances have shipped a VMX file for 1. and
 a VMDK file for 2. 

 Shipping the native config file format with an appliance though is
 the wrong thing to be doing. The native config format describes the
 configuration for a VM for a specific deployment. This is not the
 same as describing the hardware requirements of an appliance. As
 the most simple example, a native config would have hardcoded disk
 paths, or a specific choice of host network connectivity. Neither
 of these things have any business being in the appliance config.

 For this reason, there are now specific appliance formats. Libvirt
 has long had its own appliance format (virt-image) which is separate
 from the main XML format, so it avoids hardcoding deployment specific
 options. There is also the vendor neutral OVF format which is widely 
 supported by many mgmt tools. 

 If people want to ship QEMU appliances I don't think libvirt is 
 causing any problems here. Simply ship a OVF description + either
 raw or qcow2 disk image. Any app, libvirt, or not could work with
 that.
   

Does VMware Player support OVF?
Does VMware Workstation support OVF?
Does VMware Server support OVF?
Do older VMware ESX versions support OVF?
Does it make sense to build an OVF with a Xen PV image?

We need to deliver vendor specific configs anyways. Of course we could
ship a VMware type, a Xen type and an OVF type. But that would certainly
not help KVM's awareness because it's hidden underneath the OVF type.

It's also hard to tell people what to use. People know KVM. But people
don't know what UI KVM does have. Because there is none. I think we're
losing quite a bit of traction due to that.

   
 2) Machine description

 If we build an appliance, we also create a configuration file that
  describes the VM. We can create .vmx files, we can create xen config
 files. We can not create KVM config files. There are none. We could
 create shell scripts, but would that help?
 

 As described above, appliances really don't want to be using the 
 native configuration formats, they want a higher level format like
 OVF. The only reason soo many people ship .vmx files is that this
 predates the OVF format's existance.

 With qdev you can load most options from a config file using the new
 '-readconfig file' arg, but i guess there's more to be included
 there still. 
   

Getting to a full machine config file is still some way to go, yes. And
as I stated before - I'd love to see that being the default format for
VM storage. If you like, management apps could then import and export
those files, but it would still be the point of knowledge. If someone
knows how to hack that description file they'd know how to do it for
every single management app out there. Worst case they'd have to export
and import again.

   
 3) Configuration conversion

 Party due to qemu not having a configuration format, partly due to 
 libvirt's ambivalent approach, there is always conversion in 
 configuration formats involved. I think this is the main reason for
 the feature lag. If there wasn't a conversion step, there wouldn't 
 be lag. You could just hand edit the config file and be good.
 

 [snip]

   
 Point 3 is the really tough one. It's the very basis of libvirt. And 
 it's plain wrong IMHO. I hate XML. I hate duplicated efforts. The 
 current conversion involves both. Every option added to qemu needs to 
 be added to libvirt. In XML. Bleks.
 

 In the previous thread on this topic, I've already stated that we're
 interested in providing a means to pass QEMU config options from
 libvirt prior to their full modelling in the XML, to reduce, or completely 
 eliminate any time-lag in using new features.
   

That would cover new features and would be really good to have
nevertheless. It still doesn't cover the difference in configuration for
native tags. Imagine you'd want to enable cache=none. I'd know how to do
it in qemu, but I'd be lost in the libvirt XML. If I'd be a person
knowledgeable in libvirt, I'd know my way around the XML tags but
wouldn't know what they'd mean in plain qemu syntax. So I couldn't tell
people willing to help me what's going wrong even if I wanted to.

If instead there was a common machine description file that everyone
knows, there'd be a single point of knowledge. A RHEL-V admin could work
on plain qemu. A qemu developer would feel right at home with virt-manager.

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Daniel P. Berrange wrote:
 On Tue, Apr 06, 2010 at 01:14:36AM +0300, Avi Kivity wrote:
   
 On 04/06/2010 12:11 AM, Alexander Graf wrote:

 
 I can imagine 1) going away if we would set libvirt + virt-manager as 
 _the_ front-end and have everyone focus on it. I suppose it would also 
 help to rebrand it by then, but I'm not 100% sure about that. Either way, 
 there would have to be a definite statement that libvirt is the solution 
 to use. And _everyone_ would have to agree on that. Sounds like a hard 
 task. And by then we still don't really have a branded product stack.

 Point 3 is the really tough one. It's the very basis of libvirt. And it's 
 plain wrong IMHO. I hate XML. I hate duplicated efforts. The current 
 conversion involves both. Every option added to qemu needs to be added to 
 libvirt.
   
 Not just libvirt, virt-manager as well.  And that is typically more 
 difficult technically (though probably takes a lot less time).

 
 In XML. Bleks.
   
   
 Yeah.
 

 Whether XML is a problem or not really depends on what kind of stack you
 are looking at, and what group of users you're considering. 

  1. virsh - QEMU 

 This is the lowest level in libvirt, so XML is exposed to people
 directly. We're really not expecting people to use this for 
 creating new VMs though precisely because people don't like XML,
 instead see next option. You can hot-plug/unplug devices without
 knowing XML though.

  2. virt-install - QEMU

 Instead of XML this takes simple command line args to describe the
 VM configuration, avoiding need to know XML at all. it also automates
 many other aspects like creation of storage, fetching of install
 media, etc.

  2. virt-manager - libvirt - QEMU

 Not a GUI, so XML is not exposed to users at all

  3. ovirt/rhev-m - libvirt - QEMU

 Configuration is stored in a custom database schema. XML is merely
 generated on the fly when spawning VMs.

  4. CIM/DMTF - libvirt - QEMU

 Configuration is described in terms of DMTF schema, translated
 on the fly to libvirt XML. Apps using CIM likely don't use the
 DMTF schema directly either, having their own format.

 With exception of the lowest level virsh, XML is just an intermediate
 interchange format, not the format that is directly exposed to users.
 You can get at the raw QEMU level config that results in all cases.

 There is a gap in this though, for people who don't want to use any kind
 of management tool at all, but rather just script the low level bits 
 directly. For them, virt-install may not be flexible enough, but virsh
 is too raw forcing knowledge of the XML format. 
   

Yikes. So that means people do one more conversion step? That sounds
like the worst thing possible. It sounds like a BASIC -> Fortran -> C
converter. That's prone to fail and I'm sure a serious headache for
everyone involved. There's no way people could easily debug things on
such a complex stack anymore.

   
 Sure, for libvirt it makes sense to be hypervisor-agnostic. For qemu it 
 doesn't. We want to be _the_ hypervisor. Setting our default front-end to 
 something that is agnostic weakens our point. And it slows down 
 development. And it hurts integration. And thus usability, thus adoption. 
 It hurts us.
   
   
 It doesn't make sense for libvirt to be hypervisor agnostic.  If it is, 
 people who want to use one hypervisor's advanced features are forced to 
 work around it.  Anthony wants multiple monitors for this, but that's a 
 bad workaround.  libvirt is placing developers using it in an impossible 
 situation - the developers want to use kvm-specific features and libvirt 
 is in the way.
 

 I have proposed a couple of extensions to address this problem of feature
 lag

  - Provide a way to pass extra command line args to QEMU via libvirt
  - Provide a way to send/receive monitor commands via libvirt

 This would give access to nearly all of QEMU's features.
   

It's more than just feature lag. Anthony is the one caring about feature
lag. I care about too many levels of abstraction and conversion. Try to
think as if you were a sysadmin trying to create VMs. You would have to
learn two different languages (libvirt-xml and qemu syntax) to be able
to really work with the whole stack. Because the stack consists of both.
You also need to go back and forth between the two at times. So you
really do end up having to learn both, which is bad.

What I was trying to point out is that we should make things easier for
users by keeping things consistent and always the same syntax-wise. That
makes everyone's life a lot easier.

If you try to disagree with me, try switching to csh from bash. It can
do the same thing with the same applications your bash calls. It's
merely a different syntax. Now try to be productive with it :-).


Alex




[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Avi Kivity wrote:
 On 04/06/2010 03:28 PM, Alexander Graf wrote:
 Note things like network setup are a bottomless pit.  Pretty soon you
 need to setup vlans and bonding etc.  If a user needs one of these and
 qemud doesn't provide it, then qemud becomes useless to them.  But the
 same problem applies to libvirt.
  
 If they are a bottomless pit then they are a bottomless pit. There's
 nothing we can do about it. This pit needs to be dug either way, whether
 it's in libvirt or in qemud.


 Agreed.  The only difference is who's doing the digging.

 One way to avoid it is to have a rich plugin API so if some needs some
 to, say, set up traffic control on the interface, they can write a
 plugin to do that.

Another way would be to have an active open source community that just
writes the support for traffic control upstream if they need it. I
actually prefer that to a plugin API.


Alex





[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Avi Kivity

On 04/06/2010 03:43 PM, Alexander Graf wrote:


Does VMware Player support OVF?
Does VMware Workstation support OVF?
Does VMware Server support OVF?
Do older VMware ESX versions support OVF?
Does it make sense to build an OVF with a Xen PV image?

We need to deliver vendor specific configs anyways. Of course we could
ship a VMware type, a Xen type and an OVF type. But that would certainly
not help KVM's awareness because it's hidden underneath the OVF type.
   


Adding yet another format into the mix isn't helping people who create 
appliances.



It's also hard to tell people what to use. People know KVM. But people
don't know what UI KVM does have. Because there is none. I think we're
losing quite a bit of traction due to that.
   


Of course there is a UI: RHEV-M, proxmox, virt-manager, others.  
virt-manager is special in that it also manages other hypervisors.


Note the esx UI is not called esx, it's called vCenter or something.

--
error compiling committee.c: too many arguments to function





[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Daniel P. Berrange
On Tue, Apr 06, 2010 at 02:49:23PM +0200, Alexander Graf wrote:
 Daniel P. Berrange wrote:
  On Tue, Apr 06, 2010 at 01:14:36AM +0300, Avi Kivity wrote:

  On 04/06/2010 12:11 AM, Alexander Graf wrote:
 
  
  I can imagine 1) going away if we would set libvirt + virt-manager as 
  _the_ front-end and have everyone focus on it. I suppose it would also 
  help to rebrand it by then, but I'm not 100% sure about that. Either way, 
  there would have to be a definite statement that libvirt is the solution 
  to use. And _everyone_ would have to agree on that. Sounds like a hard 
  task. And by then we still don't really have a branded product stack.
 
  Point 3 is the really tough one. It's the very basis of libvirt. And it's 
  plain wrong IMHO. I hate XML. I hate duplicated efforts. The current 
  conversion involves both. Every option added to qemu needs to be added to 
  libvirt.

  Not just libvirt, virt-manager as well.  And that is typically more 
  difficult technically (though probably takes a lot less time).
 
  
  In XML. Bleks.


  Yeah.
  
 
  Whether XML is a problem or not really depends on what kind of stack you
  are looking at, and what group of users you're considering. 
 
   1. virsh - QEMU 
 
  This is the lowest level in libvirt, so XML is exposed to people
  directly. We're really not expecting people to use this for 
  creating new VMs though precisely because people don't like XML,
  instead see next option. You can hot-plug/unplug devices without
  knowing XML though.
 
   2. virt-install - QEMU
 
  Instead of XML this takes simple command line args to describe the
  VM configuration, avoiding need to know XML at all. it also automates
  many other aspects like creation of storage, fetching of install
  media, etc.
 
   2. virt-manager - libvirt - QEMU
 
  Not a GUI, so XML is not exposed to users at all
 
   3. ovirt/rhev-m - libvirt - QEMU
 
  Configuration is stored in a custom database schema. XML is merely
  generated on the fly when spawning VMs.
 
   4. CIM/DMTF - libvirt - QEMU
 
  Configuration is described in terms of DMTF schema, translated
  on the fly to libvirt XML. Apps using CIM likely don't use the
  DMTF schema directly either, having their own format.
 
  With exception of the lowest level virsh, XML is just an intermediate
  interchange format, not the format that is directly exposed to users.
  You can get at the raw QEMU level config that results in all cases.
 
  There is a gap in this though, for people who don't want to use any kind
  of management tool at all, but rather just script the low level bits 
  directly. For them, virt-install may not be flexible enough, but virsh
  is too raw forcing knowledge of the XML format. 

 
 Yikes. So that means people do one more conversion step? That sounds
 like the worst thing possible. It sounds like a basic - fortran - C
 converter. That's prone to fail and I'm sure a serious headache for
 everyone involved. There's no way people could easily debug things on
 such a complex stack anymore.

The different formats are serving different needs really. People use the
libvirt XML format because they want a representation that works across
multiple hypervisors. There is a CIM/DMTF mapping because apps using that
system want to take advantage of the libvirt representation. Apps like
ovirt/rhev-m have their own master representation because the other formats
are far too low level for their needs.  The higher up the stack you go, the
less likely people are to want to use the low level config format directly.

 It's more than just feature lag. Anthony is the one caring about feature
 lag. I care about too many levels of abstraction and conversion. Try to
 think as if you were a sysadmin trying to create VMs. You would have to
 learn two different languages (libvirt-xml and qemu syntax) to be able
 to really work with the whole stack. Because the stack consists of both.
 You also need to go back and forth between the two at times. So you
 really do end up having to learn both, which is bad.

That really depends on the target audience. Most end users won't 
see or care about either the QEMU format, or the libvirt XML format. 
Most of the time the libvirt format will give everything you need and so
you don't care about the QEMU format either.

You only need to care about multiple formats if you're trying to do things
at multiple levels of the stack at once, which should always be a minority
usecase/scenario.

 What I was trying to point out is that we should make things easier for
 users by keeping things consistent and always the same syntax-wise. That
 makes everyone's life a lot easier.
 
 If you try to disagree with me, try switching to csh from bash. It can
 do the same thing with the same applications your bash calls. It's
 merely a different syntax. Now try to be productive with it :-).

This is a false analogy. csh & bash are two different implementations at
the same level in the stack. Compare libX11 against libgtk if you want a
more sensible comparison. libgtk provides 99% of the features you need. In
rare cases where it doesn't, you can get access to libX11 APIs directly,
but that doesn't imply that everyone using GTK needs to know X11. Your
argument against libvirt is akin to saying that since GTK can't ever
support 100% of the X11 functionality, people shouldn't use GTK and apps
should work against X11 directly.

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Daniel P. Berrange wrote:
 On Tue, Apr 06, 2010 at 02:49:23PM +0200, Alexander Graf wrote:
   
 Daniel P. Berrange wrote:
 
 On Tue, Apr 06, 2010 at 01:14:36AM +0300, Avi Kivity wrote:
   
   
 On 04/06/2010 12:11 AM, Alexander Graf wrote:

 
 
 I can imagine 1) going away if we would set libvirt + virt-manager as 
 _the_ front-end and have everyone focus on it. I suppose it would also 
 help to rebrand it by then, but I'm not 100% sure about that. Either way, 
 there would have to be a definite statement that libvirt is the solution 
 to use. And _everyone_ would have to agree on that. Sounds like a hard 
 task. And by then we still don't really have a branded product stack.

 Point 3 is the really tough one. It's the very basis of libvirt. And it's 
 plain wrong IMHO. I hate XML. I hate duplicated efforts. The current 
 conversion involves both. Every option added to qemu needs to be added to 
 libvirt.
   
   
 Not just libvirt, virt-manager as well.  And that is typically more 
 difficult technically (though probably takes a lot less time).

 
 
 In XML. Bleks.
   
   
   
 Yeah.
 
 
 Whether XML is a problem or not really depends on what kind of stack you
 are looking at, and what group of users you're considering. 

  1. virsh - QEMU 

 This is the lowest level in libvirt, so XML is exposed to people
 directly. We're really not expecting people to use this for 
 creating new VMs though precisely because people don't like XML,
 instead see next option. You can hot-plug/unplug devices without
 knowing XML though.

  2. virt-install - QEMU

 Instead of XML this takes simple command line args to describe the
 VM configuration, avoiding need to know XML at all. it also automates
 many other aspects like creation of storage, fetching of install
 media, etc.

  2. virt-manager - libvirt - QEMU

 Not a GUI, so XML is not exposed to users at all

  3. ovirt/rhev-m - libvirt - QEMU

 Configuration is stored in a custom database schema. XML is merely
 generated on the fly when spawning VMs.

  4. CIM/DMTF - libvirt - QEMU

 Configuration is described in terms of DMTF schema, translated
 on the fly to libvirt XML. Apps using CIM likely don't use the
 DMTF schema directly either, having their own format.

 With exception of the lowest level virsh, XML is just an intermediate
 interchange format, not the format that is directly exposed to users.
 You can get at the raw QEMU level config that results in all cases.

 There is a gap in this though, for people who don't want to use any kind
 of management tool at all, but rather just script the low level bits 
 directly. For them, virt-install may not be flexible enough, but virsh
 is too raw forcing knowledge of the XML format. 
   
   
 Yikes. So that means people do one more conversion step? That sounds
 like the worst thing possible. It sounds like a basic - fortran - C
 converter. That's prone to fail and I'm sure a serious headache for
 everyone involved. There's no way people could easily debug things on
 such a complex stack anymore.
 

 The different formats are serving different needs really. People use the
 libvirt XML format because they want a representation that works across
 multiple hypervisors. There is a CIM/DMTF mapping because apps using that
 system want to take advtange of the libvirt representation. Apps like
 ovirt/rhev-m have their own master representation because the other formats
 are far too low level for their needs.  They higher up the stack you go the
 less likely people are to want to use the low level config format directly.
   

I'm fairly sure it's true that most people don't want to use low level
config formats. But as soon as you start debugging you will have to go
through the full stack. And so you'll need to know every single protocol
and conversion. As every one of the layers can fail.

   
 It's more than just feature lag. Anthony is the one caring about feature
 lag. I care about too many levels of abstraction and conversion. Try to
 think as if you were a sysadmin trying to create VMs. You would have to
 learn two different languages (libvirt-xml and qemu syntax) to be able
 to really work with the whole stack. Because the stack consists of both.
 You also need to go back and forth between the two at times. So you
 really do end up having to learn both, which is bad.
 

 That really depends on the target audience. Most end users people won't 
 see or care about either the QEMU format, or the libvirt XML format. 
 Most of the time the libvirt format will give everything you need and so
 you don't care about the QEMU format either.

 You only need to care about multiple formats if you're trying todo things
 at multiple levels of the stack at once, which should always be a minority
 usecase/scenario.
   

Debugging requires you to traverse the full stack. Developing does too.

What I'm 

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Daniel P. Berrange
On Tue, Apr 06, 2010 at 02:43:47PM +0200, Alexander Graf wrote:
 Daniel P. Berrange wrote:
  With appliances there are two core aspects
 
   1. The description of VM hardware requirements
   2. The disk format
 
  Traditionally VMware appliances have shipped a VMX file for 1. and
  a VMDK file for 2. 
 
  Shipping the native config file format with an appliance though is
  the wrong thing to be doing. The native config format describes the
  configuration for a VM for a specific deployment. This is not the
  same as describing the hardware requirements of an appliance. As
  the most simple example, a native config would have hardcoded disk
  paths, or a specific choice of host network connectivity. Neither
  of these things have any business being in the appliance config.
 
  For this reason, there are now specific appliance formats. Libvirt
  has long had its own appliance format (virt-image) which is separate
  from the main XML format, so it avoids hardcoding deployment specific
  options. There is also the vendor neutral OVF format which is widely 
  supported by many mgmt tools. 
 
  If people want to ship QEMU appliances I don't think libvirt is 
  causing any problems here. Simply ship a OVF description + either
  raw or qcow2 disk image. Any app, libvirt, or not could work with
  that.

 
 Does VMware Player support OVF?
 Does VMware Workstation support OVF?
 Does VMware Server support OVF?

I've no idea if they've added support to those. There's no technical
reason why not, but being closed source software they may have 
artificially restricted functionality to force you to get VCenter.

 Do older VMware ESX versions support OVF?

I don't know off hand what version it was introduced in

 Does it make sense to build an OVF with a Xen PV image?

Yes, in fact it is beneficial. If you shipped a PV-ops enabled
appliance image with a Xen config file, the distributor would
have to ship several configs, one for PV mode, one for HVM, since
it hardcodes the type of guest & disk drivers. If you ship an 
OVF file, then the tool deploying the appliance can decide whether
to deploy it in PV or HVM mode. This same appliance can then even
work for KVM too.

 We need to deliver vendor specific configs anyways. Of course we could
 ship a VMware type, a Xen type and an OVF type. But that would certainly
 not help KVM's awareness because it's hidden underneath the OVF type.

The whole point of the OVF format/project is that it is not explicitly
targeted at one vendor's technology. This isn't the right place to be
raising awareness of KVM.

 

  3) Configuration conversion
 
  Party due to qemu not having a configuration format, partly due to 
  libvirt's ambivalent approach, there is always conversion in 
  configuration formats involved. I think this is the main reason for
  the feature lag. If there wasn't a conversion step, there wouldn't 
  be lag. You could just hand edit the config file and be good.
  
 
  [snip]
 

  Point 3 is the really tough one. It's the very basis of libvirt. And 
  it's plain wrong IMHO. I hate XML. I hate duplicated efforts. The 
  current conversion involves both. Every option added to qemu needs to 
  be added to libvirt. In XML. Bleks.
  
 
  In the previous thread on this topic, I've already stated that we're
  interested in providing a means to pass QEMU config options from
  libvirt prior to their full modelling in the XML, to reduce, or completely 
  eliminate any time-lag in using new features.

 
 That would cover new features and would be really good to have
 nevertheless. It still doesn't cover the difference in configuration for
 native tags. Imagine you'd want to enable cache=none. I'd know how to do
 it in qemu, but I'd be lost in the libvirt XML. If I'd be a person
 knowledgeable in libvirt, I'd know my way around the XML tags but
 wouldn't know what they'd mean in plain qemu syntax. So I couldn't tell
 people willing to help me what's going wrong even if I wanted to.

Going from the XML to QEMU config and vice versa is not rocket science.
You can trivially see the QEMU config generated for any libvirt VM in
the logs. There are also commands for doing the conversion in both
directions, though I admit the QEMU -> XML conversion is not as complete
as the XML -> QEMU conversion.
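
To stay with the cache=none example (paths invented), the two spellings
map onto each other fairly directly:

  # raw QEMU:
  qemu-kvm -drive file=/var/lib/libvirt/images/demo.img,if=virtio,cache=none

  # libvirt XML, inside <devices>:
  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source file='/var/lib/libvirt/images/demo.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>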

 If instead there was a common machine description file that everyone
 knows, there'd be a single point of knowledge. A RHEL-V admin could work
 on plain qemu. A qemu developer would feel right at home with virt-manager.

This isn't solving the problem. If you see a problem in the QEMU config
used by a high level tool like RHEV/oVirt, you still aren't going to 
know what config change you need to make in those apps. They are
never going to work with the QEMU config as their master data format.
It is just something they generate on the fly at runtime, from their
SQL databases, because they want to model concepts at a high level.
A VM as represented in RHEV/oVirt does not have a single QEMU or libvirt
config file description - the low level config can potentially vary each
time the guest is started on a host(s).

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Alexander Graf
Daniel P. Berrange wrote:
 On Tue, Apr 06, 2010 at 02:43:47PM +0200, Alexander Graf wrote:
   
 Daniel P. Berrange wrote:
 
 With appliances there are two core aspects

  1. The description of VM hardware requirements
  2. The disk format

 Traditionally VMware appliances have shipped a VMX file for 1. and
 a VMDK file for 2. 

 Shipping the native config file format with an appliance though is
 the wrong thing to be doing. The native config format describes the
 configuration for a VM for a specific deployment. This is not the
 same as describing the hardware requirements of an appliance. As
 the most simple example, a native config would have hardcoded disk
 paths, or a specific choice of host network connectivity. Neither
 of these things have any business being in the appliance config.

 For this reason, there are now specific appliance formats. Libvirt
 has long had its own appliance format (virt-image) which is separate
 from the main XML format, so it avoids hardcoding deployment specific
 options. There is also the vendor neutral OVF format which is widely 
 supported by many mgmt tools. 

 If people want to ship QEMU appliances I don't think libvirt is 
 causing any problems here. Simply ship a OVF description + either
 raw or qcow2 disk image. Any app, libvirt, or not could work with
 that.
   
   
 Does VMware Player support OVF?
 Does VMware Workstation support OVF?
 Does VMware Server support OVF?
 

 I've no idea if they're added support to those. There's no technical
 reason why not, but being closed source software they may have 
 artificially restricted functionality to force you to get VCenter.
   

Theoretically everything's possible. From an appliance delivery point of
view, this is an important question. People use those VMMs.

   
 Do older VMware ESX versions support OVF?
 

 I don't know off hand what version it was introduced in
   

Pretty sure it only got in with more recent versions.

   
 Does it make sense to build an OVF with a Xen PV image?
 

 Yes, in fact it is beneficial. If you shipped a PV ops enabled
 appliance image with a Xen config file, the distributor would
 have to ship several configs one for PV mode, one for HVM, since
 it hardcodes the type of guest  disk drivers. If you ship an 
 OVF file, then the tool deploying the appliance can decide whether
 to deploy it in PV or HVM mode. This same appliance can then even
 work for KVM too.
   

SLES Xen is PV only. And the non-PV kernel isn't Xen enabled. So it's
rather useless ;-).

   
 We need to deliver vendor specific configs anyways. Of course we could
 ship a VMware type, a Xen type and an OVF type. But that would certainly
 not help KVM's awareness because it's hidden underneath the OVF type.
 

 The whole point of the OVF format/project is that it is not explicitly
 targetted at one vendor's technology. This isn't the right place to be
 raising awareness of KVM.
   

So we're raising awareness for VMware, because they don't always support
OVF. We raise awareness for Xen, because PV only appliances needs to be
built differently. But we don't raise awareness for KVM because we
support OVF? I'm not a PR guy, but that sounds like an odd move.

   
   
   
 3) Configuration conversion

 Party due to qemu not having a configuration format, partly due to 
 libvirt's ambivalent approach, there is always conversion in 
 configuration formats involved. I think this is the main reason for
 the feature lag. If there wasn't a conversion step, there wouldn't 
 be lag. You could just hand edit the config file and be good.
 
 
 [snip]

   
   
 Point 3 is the really tough one. It's the very basis of libvirt. And 
 it's plain wrong IMHO. I hate XML. I hate duplicated efforts. The 
 current conversion involves both. Every option added to qemu needs to 
 be added to libvirt. In XML. Bleks.
 
 
 In the previous thread on this topic, I've already stated that we're
 interested in providing a means to pass QEMU config options from
 libvirt prior to their full modelling in the XML, to reduce, or completely 
 eliminate any time-lag in using new features.
   
   
 That would cover new features and would be really good to have
 nevertheless. It still doesn't cover the difference in configuration for
 native tags. Imagine you'd want to enable cache=none. I'd know how to do
 it in qemu, but I'd be lost in the libvirt XML. If I'd be a person
 knowledgeable in libvirt, I'd know my way around the XML tags but
 wouldn't know what they'd mean in plain qemu syntax. So I couldn't tell
 people willing to help me what's going wrong even if I wanted to.
 

 Going from the XML to QEMU config and vica-verca is not rocket science.
 You can trivially see the QEMU config generated for any libvirt VM in
 the logs. There are also commands for doing the conversion in both
 directions, though I admit the QEMU - XML conversion is not as complete
 as the XML - QEMU conversion.
   

None of that 

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Daniel P. Berrange
On Tue, Apr 06, 2010 at 03:53:16PM +0200, Alexander Graf wrote:
 Daniel P. Berrange wrote:
  If instead there was a common machine description file that everyone
  knows, there'd be a single point of knowledge. A RHEL-V admin could work
  on plain qemu. A qemu developer would feel right at home with virt-manager.
  
 
  This isn't solving the problem. If you see a problem in the QEMU config
  uses by a high level tool like RHEV/oVirt, you still aren't going to 
  know what the config change you need to make in those apps. They are
  never going to work with the QEMU config as their master data format.
  It is just something they generate on the fly at runtime, from their
  SQL databases, because they want to model concepts at a high level.
  A VM as represented in RHEV/oVirt does not have a single QEMU or libvirt
  config file description - the low level config can potentially vary each
  time the guest is started on a host(s).

 
 So we could still make it transparent to the user, no? RHEV could import
 a KVM machine description as well as it could export one. So the
 internal representation is transparent to the user. That would also ease
 going from RHEV to other management apps. Or the other way around.
 

  Even if an app was using QEMU directly, you can't presume that the app 
  will use QEMU's config file as its native format. Many apps will store
  configs in their own custom format (or in a database) and simply generate
  the QEMU config data on the fly when starting a VM. In the same way 
  libvirt
  will  generate QEMU config data on the fly when starting a VM. Having many
  config formats  conversion / generation of the fly is a fact of life no 
  matter what mgmt system you use.


  I don't see why we shouldn't try to change that. Why not generate a
  common machine description file in qemu for all qemu VMs? Think of word
  documents. Everyone knows how to read and write .doc files. Why
  shouldn't VM description files be the same? It's really the best case
  for the user if there's a single type of configuration.
  
 
  The raw QEMU config for a disk device is specified in terms of the
  file path for the storage.  A management app using QEMU / libvirt is
  not going to store its config for the guest in this way. They will
  have some model of storage and an association between a storage volume
  and a virtual machine. The actual file path for this is only relevant
  at the time the VM is actually started & may be different on every host
  the VM is run on. eg if you've associated a VM with a LUN, it may
  be /dev/sda when run on host A and /dev/sdz on host B. The mgmt app is
  going to use a mapping based on the WWID, not paths. 

 
 Sounds like somebody didn't understand the concept of persistent device
 names here. The device names should be /dev/disk/by-wwid/... then.

To find out either the /dev/sdXX or /dev/disk/by-XXX paths you need to
setup the storage on one of the hosts. At the time the VM is being
configured in the app you can't presume that the storage is visible on
any of the hosts. The /dev/disk/by-XXX paths are also only stable for a
given type of physical storage. Modelling the VM <-> storage association based 
on any kind of file path is fundamentally the wrong level of representation
for high level apps. By modelling based on an application specific logical
association, the storage can be moved between filesystems, moved from a
file to an LVM lv, to a SAN etc, without ever breaking the association at
an application level. 
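
For illustration (the WWN and paths are invented), the stable-path lookup
is itself a host-local, start-time step, which is exactly what a mgmt app
does when it resolves its logical volume record into a path:

  # resolve the app's logical volume to whatever this host calls it today:
  DISK=$(readlink -f /dev/disk/by-id/wwn-0x600508b1001c0000)
  qemu-kvm ... -drive file=$DISK,if=virtio,cache=none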

Fundamentally, a QEMU level configuration is a description of a specific
instantiation of a VM. An application level configuration is a description
of a VM that can be instantiated in many ways. There's a 1 : M relation
between the application level config description & the QEMU level config files.
Thus in many cases a QEMU config will not be usable as an application's
master config format.


Regards,
Daniel
-- 
|: Red Hat, Engineering, London-o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org-o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|




Re: [Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Jamie Lokier
Alexander Graf wrote:
 So what if you had a special section that gives you the necessary
 information to do that mapping? A vendor specific section so to say.
 That would make it a perfect master config format, right?

XML with XML Namespaces is quite good for mixing data intended for
different applications.  In some ways it's better than a .ini style
file with an extra section, because you can easily associate the extra
data wherever - for example in this case putting management disk info
close to the qemu configuration for the guest disk.
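
A sketch of the idea (the mgmt namespace here is entirely made up):

  <disk type='file' device='disk'>
    <source file='/var/lib/images/guest.img'/>
    <target dev='vda' bus='virtio'/>
    <mgmt:volume xmlns:mgmt='http://example.org/mgmt/1.0'
                 pool='san-pool-1' wwid='0x600508b1001c0000'/>
  </disk>

An application that doesn't know the mgmt namespace can simply ignore
that element.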

Then again there's something to be said for explicitly naming things
like disk 1 etc. and referring to extra data out of line - making
the linkages more explicit.

I wonder, if libvirt's configs were in YAML or something like samba's
ini style, instead of XML, and if the guest machine config part was
expressive enough to accommodate all of qemu's device attributes, would
that work as a happy medium?

That sounds not unlike machine configs already discussed, except being
a bit more explicit to make the syntax human friendly and to
accommodate libvirt/other management config on an equal footing with
guest machine config.

-- Jamie




Re: [Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Jamie Lokier
Daniel P. Berrange wrote:
 The different formats are serving different needs really. People use the
 libvirt XML format because they want a representation that works across
 multiple hypervisors.

Then my use-case is being missed.

I tried using the libvirt XML format because I want to use the nice
virt-manager GUI to remote-control and view my qemu/kvm-based VMs.

Unfortunately I found it insufficiently expressive for my guests (I
needed particular steps to hand-hold old OS installs, for example),
not to mention the documentation was only online at the time and I wasn't.

Also the user-friendly image making tool lacked almost all the options
I needed to use.  (Think of things like -win2k-hack, clock=vm, and
having to use a specific version of kvm, or sometimes even disabling
kernel-kvm due to incompatibilities).

It's fine that I didn't use the libvirt config format - it wasn't
intended for my needs and that's ok.

The big lost opportunity was having to throw out the baby, towels,
nappies and all, with the bathwater: I couldn't use virt-manager's
useful facilities like the GUI, remote management,
instantiation/stopping/starting/migration when I needed to, and
resource monitoring (balloon etc.)

So I had to write some annoyingly hairy scripts and still have only a
half-baked solution.

Obvious solution here is for libvirt to be able to manage a VM but have
the *option* to get out of the way when it comes to configuring and/or
scripting that VM.  Or get out of the way for part of it.

That would make libvirt and its tools *much* more useful imho.

 are far too low level for their needs.  They higher up the stack you go the
 less likely people are to want to use the low level config format directly.

But what about people who want to use the high level tools for the
*management* aspect, but their guests or use scenarios need low level
config and control?

Users aren't exclusively one or the other.

 This is a false analogy. csh & bash are two different implementations at the
 same level in the stack.  Compare libX11 against libgtk if you want a more
 sensible comparison. libgtk provides 99% of the features you need. In rare
 cases where it doesn't, you can get access to libX11 APIs directly, but that
 doesn't imply that everyone using GTK needs to know X11.  Your argument
 against libvirt is akin to saying that since GTK can't ever support 100% of
 the X11 functionality, people shouldn't use GTK and apps should work against
 X11 directly.

When I had a go with libvirt/virt-manager, it wasn't missing just 1%
of the functionality.  Quite a lot wasn't available (qemu options
needed for particular guests, scriptable control during installs), or
worked in an unsuitable way (the networking didn't fit my needs
either, but I think that's more unusual).

-- Jamie




Re: [Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-06 Thread Jamie Lokier
Alexander Graf wrote:
  One way to avoid it is to have a rich plugin API so if some needs some
  to, say, set up traffic control on the interface, they can write a
  plugin to do that.
 
 Another way would be to have an active open source community that just
 writes the support for traffic control upstream if they need it. I
 actually prefer that to a plugin API.

Not every local config quirk should go upstream, and if someone has
to edit the C source just to configure their tap interface a bit
differently, that's not a good sign.

Qemu already has a passable API for this in the form of the network-up
and network-down scripts.  IMHO much more ability to hook into many
parts of device setup that way would be good.  You don't need a rich
internal API then.
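
A sketch of that kind of hook (script paths and tc parameters invented;
qemu passes the tap interface name to the script as its first argument):

  qemu-kvm ... -net nic,model=virtio \
      -net tap,script=/etc/qemu/ifup-custom,downscript=/etc/qemu/ifdown-custom

  # /etc/qemu/ifup-custom
  #!/bin/sh
  ip link set "$1" up
  brctl addif br0 "$1"
  tc qdisc add dev "$1" root tbf rate 10mbit burst 32kbit latency 400ms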

Even better if callout scripts are allowed to connect back to QMP and
tell Qemu what to do during machine setup and interesting events.

-- Jamie




[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-05 Thread Avi Kivity

On 04/06/2010 12:11 AM, Alexander Graf wrote:

Howdy,

I've been thinking a bit further on the whole issue around libvirt and why the 
situation as is isn't satisfying. I came to the following points that currently 
hurt building ease of use for KVM:

1) Brand

This is one of the major issues we have ourselves when it comes to appliances. 
We can ship appliances built for VMware. We can ship appliances built for Xen. 
But we can't ship appliances built for KVM, because there is no single 
management app we could target.


There are already at least three management apps for kvm:  virt-manager, 
proxmox, and RHEV-M (my personal favorite).  If we define our own format 
then we need those management apps to understand it.  That means we 
either include only simple features, or we wait until the management 
apps catch up to all the features we provide.  Otherwise those 
appliances aren't universal.


An additional problem is that our format will exclude metadata that the 
management app may want to add.



That destroys the KVM brand IMHO.
   


That's because  kvm is infrastructure instead of a complete stack.  I 
agree it's a problem but I see no way around it.



2) Machine description

If we build an appliance, we also create a configuration file that describes 
the VM. We can create .vmx files, we can create xen config files. We can not 
create KVM config files. There are none. We could create shell scripts, but 
would that help?
   


It's not enough for qemu to be able to read the configuration file.  The 
management app needs to read it as well, to understand how much memory 
and cpu the guest needs (so it can schedule it on the cluster), what 
kind of network connectivity it needs (how many interfaces, what 
networks those interfaces connect to, does it need firewall ports 
open).  An appliance configuration is more than a vm configuration, and 
again, the management app needs to be able to understand all of it.



3) Configuration conversion

Party due to qemu not having a configuration format, partly due to libvirt's 
ambivalent approach, there is always conversion in configuration formats 
involved. I think this is the main reason for the feature lag. If there wasn't 
a conversion step, there wouldn't be lag. You could just hand edit the config 
file and be good.
   


There will always be a lag, since management apps (at least the 
non-trivial ones) want to display the configuration in a GUI, allow 
users to edit it, and want to understand it.  It's not just conversion, 
it's plumbing down the whole stack.



Point 2 needs to be solved anyways. We need a machine config format for qemu. 
For general -M description as well as for specific VM description. The command 
line options just become too complicated and too hard to reproduce and save. 
Just think of live migration with hot-plugged devices. Or safe savevm + loadvm. 
The current logic ends there.
   


I don't think the management apps will want to use it.  They will need 
to parse it (currently they only need to write it, which is simpler).  
Things like 'query all smp guests with 4GB memory' become complicated 
instead of just a database query.
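
For instance, with the configuration modelled in the management app's
own database (hypothetical schema), that question is a one-liner:

  -- hypothetical schema: guests(name, vcpus, memory_mb)
  SELECT name FROM guests WHERE vcpus > 1 AND memory_mb = 4096;

whereas with opaque per-guest config files every file has to be fetched
and parsed first.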


For managed guests, I think we want to get rid of the command line 
altogether.  Start the guest with just a case and cold-plug the motherboard, 
processors, memory, cards.  Migration starts with a replay of these 
(including any hotplugged cards added while the guest is running).  
Hotplugs during migration are relayed to the other side over the wire.
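
A rough sketch of that flavour over QMP, using the commands that exist
today for cards (netdev_add/device_add); cold-plugging processors and
memory the same way is not something qemu can do at this point:

  { "execute": "qmp_capabilities" }
  { "execute": "netdev_add", "arguments": { "type": "tap", "id": "net0" } }
  { "execute": "device_add",
    "arguments": { "driver": "virtio-net-pci", "netdev": "net0", "id": "nic0" } }

On migration the destination would be started empty and the same
sequence replayed there before the state transfer.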



I can imagine 1) going away if we set libvirt + virt-manager as _the_ 
front-end and have everyone focus on it. I suppose it would also help to 
rebrand it by then, but I'm not 100% sure about that. Either way, there would 
have to be a definite statement that libvirt is the solution to use. And 
_everyone_ would have to agree on that. Sounds like a hard task. And by then we 
still don't really have a branded product stack.

Point 3 is the really tough one. It's the very basis of libvirt. And it's plain 
wrong IMHO. I hate XML. I hate duplicated efforts. The current conversion 
involves both. Every option added to qemu needs to be added to libvirt.


Not just libvirt, virt-manager as well.  And that is typically more 
difficult technically (though probably takes a lot less time).



In XML. Bleks.
   


Yeah.


Reading on IRC I seem to not be the only person thinking that, just the first 
one mentioning this aloud I suppose. But that whole XML mess really hurts us 
too. Nobody wants to edit XML files. Nobody wants to have two separate syntaxes 
to describe the same thing. It complicates everything without a clear benefit. 
And it puts me in a position where I can't help people because I don't know the 
XML format. That should never happen.
   




Sure, for libvirt it makes sense to be hypervisor-agnostic. For qemu it 
doesn't. We want to be _the_ hypervisor. Setting our default front-end to 
something that is agnostic weakens our point. And it 

[Qemu-devel] Re: libvirt vs. in-qemu management

2010-04-05 Thread Alexander Graf

On 06.04.2010, at 00:14, Avi Kivity wrote:

 On 04/06/2010 12:11 AM, Alexander Graf wrote:
 Howdy,
 
 I've been thinking a bit further on the whole issue around libvirt and why 
 the situation as is isn't satisfying. I came to the following points that 
 currently hurt building ease of use for KVM:
 
 1) Brand
 
 This is one of the major issues we have ourselves when it comes to 
 appliances. We can ship appliances built for VMware. We can ship appliances 
 built for Xen. But we can't ship appliances built for KVM, because there is 
 no single management app we could target.
 
 There are already at least three management apps for kvm:  virt-manager, 
 proxmox, and RHEV-M (my personal favorite).  If we define our own format then 
 we need those management apps to understand it.  That means we either include 
 only simple features, or we wait until the management apps catch up to all 
 the features we provide.  Otherwise those appliances aren't universal.
 
 An additional problem is that our format will exclude metadata that the 
 management app may want to add.

Do they have to be? There could always be a metadata section for different 
management apps. If the app understands the metadata, fine. If it doesn't, you 
can still run your VM. You could even ship an appliance with multiple metadata 
sections so multiple management stacks understand it.
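
As a made-up example of what that could look like (nothing below is an
existing format), the common part describes the hardware and each
management stack gets its own opaque section:

  [metadata "rhev"]
    cluster_hint = "..."     # only RHEV-M would read this
  [metadata "virt-manager"]
    autostart = "yes"        # only virt-manager would read this

A stack that doesn't recognise a section simply ignores it and still
gets a runnable VM.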

 
 That destroys the KVM brand IMHO.
   
 
 That's because  kvm is infrastructure instead of a complete stack.  I agree 
 it's a problem but I see no way around it.

I believe we need to (at least partially) get rid of that separation. It's 
really hard to survive as infrastructure in a land of stacks.

 
 2) Machine description
 
 If we build an appliance, we also create a configuration file that describes 
 the VM. We can create .vmx files, we can create xen config files. We can not 
 create KVM config files. There are none. We could create shell scripts, but 
 would that help?
   
 
 It's not enough for qemu to be able to read the configuration file.  The 
 management app needs to read it as well, to understand how much memory and 
 cpu the guest needs (so it can schedule it on the cluster), what kind of 
 network connectivity it needs (how many interfaces, what networks those 
 interfaces connect to, does it need firewall ports open).  An appliance 
 configuration is more than a vm configuration, and again, the management app 
 needs to be able to understand all of it.

Well of course. That's the point. There's a machine config format that both the 
management app and qemu understand. That format is also human readable (XML is 
not) and can thus be hand-edited if necessary. If you want to copy your VM, 
just copy the disk image and that config file.
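
qemu's -readconfig/-writeconfig format already goes in that direction.
A minimal sketch (file and image names are illustrative, and the exact
section/key names should be treated as approximate):

  [netdev "net0"]
    type = "user"

  [device "nic0"]
    driver = "virtio-net-pci"
    netdev = "net0"

  [drive "hd0"]
    file = "appliance.qcow2"
    if = "virtio"

started with something like "qemu -readconfig vm.cfg" plus whatever the
config file cannot express yet (-m, -smp, ...).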


 
 3) Configuration conversion
 
 Partly due to qemu not having a configuration format, partly due to libvirt's 
 ambivalent approach, there is always conversion in configuration formats 
 involved. I think this is the main reason for the feature lag. If there 
 wasn't a conversion step, there wouldn't be lag. You could just hand edit 
 the config file and be good.
   
 
 There will always be a lag, since management apps (at least the non-trivial 
 ones) want to display the configuration in a GUI, allow users to edit it, and 
 want to understand it.  It's not just conversion, it's plumbing down the 
 whole stack.

... which involves conversion from a management-specific format to some random 
mix of qemu formats (cmdline options, monitor commands, etc.).

 
 Point 2 needs to be solved anyways. We need a machine config format for 
 qemu. For general -M description as well as for specific VM description. The 
 command line options just become too complicated and too hard to reproduce 
 and save. Just think of live migration with hot-plugged devices. Or safe 
 savevm + loadvm. The current logic ends there.
   
 
 I don't think the management apps will want to use it.  They will need to 
 parse it (currently they only need to write it, which is simpler).  Things 
 like 'query all smp guests with 4GB memory' become complicated instead of just 
 a database query.

That's the only way to get VMs compatible. They have to. We have to force them 
to do it.

 
 For managed guests, I think we want to get rid of the command line altogether.  
 Start the guest with just a case and cold-plug the motherboard, processors, 
 memory, cards.  Migration starts with a replay of these (including any 
 hotplugged cards added while the guest is running).  Hotplugs during 
 migration are relayed to the other side over the wire.

While I like the idea, I think that belongs in qemu internals. There should 
still be a config file that you pass to libqemu which then creates the machine 
for you.

 
 I can imagine 1) going away if we set libvirt + virt-manager as _the_ 
 front-end and have everyone focus on it. I suppose it would also help to 
 rebrand it by then, but I'm not 100% sure about that. Either way, there 
 would have to be a definite