Re: [Openstack] How to set vcpupin in guest XML?

2013-07-24 Thread Daniel P. Berrange
On Wed, Jul 24, 2013 at 04:57:07PM +0800, Peeyush Gupta wrote:
 Hi Daniel,
 
 Thanks for your suggestion. I made some changes in the config and it
 works fine. One more thing I wanted to ask: I want to add an
 attribute to the XML, but I am not able to figure out how exactly
 libvirt defines the XML. I understand that the config can only add
 attributes that are already defined in the XML, so how do I add a
 new attribute?

Presumably when you say libvirt here, you mean the libvirt library
itself, rather than the Nova libvirt driver.

If you want libvirt to understand new attributes, then you need to
write a patch for libvirt that updates its XML parsers / generator,
updates its RNG schema, the schema docs, and finally the code to
convert from XML to QEMU command line arguments (or equivalent
for non-QEMU based virt drivers in libvirt). If you need help with
adding features to core libvirt, it is best to ask on the main libvirt
mailing lists.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to set vcpupin in guest XML?

2013-07-23 Thread Daniel P. Berrange
On Tue, Jul 23, 2013 at 05:31:25PM +0800, Peeyush Gupta wrote:
 Hi all,
 
 I am working with openstack and I want to pin vcpus to pcpus in
 guest xml. Now, the pinning operation can be done using
 virsh vcpupin guestname vcpu pcpu
 But I want to do it using the Python API. I investigated the OpenStack
 code and found that the get_guest_config function in libvirt/driver.py
 is responsible for generating the guest XML file. I tried to set a
 vcpupin attribute there via guest.vcpupin or guest.cputune_vcpupin, but
 neither of them works. Any idea what I am doing wrong? Or is this
 functionality not available in OpenStack?

The get_guest_config() function returns an instance of the
nova.virt.libvirt.config.LibvirtConfigGuest object. That class
has defined attributes for setting various parts of the XML
config. You can't simply set arbitrary attributes and expect
it to work - you'll need to define how it is exposed in the
LibvirtConfigGuest object (or one of the related classes
that it uses) and then implement the code to generate the XML
DOM from that attribute.
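
As a purely illustrative sketch (not the actual Nova code, though the
format_dom() convention mimics nova/virt/libvirt/config.py), the kind of
class that would have to be added looks roughly like this:

  from lxml import etree

  class LibvirtConfigGuestCPUTune(object):
      """Illustrative only: holds vCPU->pCPU pins and emits <cputune>."""

      def __init__(self):
          # list of (vcpu, cpuset) pairs, e.g. [(0, "4"), (1, "5")]
          self.vcpupin = []

      def format_dom(self):
          cputune = etree.Element("cputune")
          for vcpu, cpuset in self.vcpupin:
              pin = etree.SubElement(cputune, "vcpupin")
              pin.set("vcpu", str(vcpu))
              pin.set("cpuset", cpuset)
          return cputune

  cfg = LibvirtConfigGuestCPUTune()
  cfg.vcpupin = [(0, "4"), (1, "5")]
  print(etree.tostring(cfg.format_dom(), pretty_print=True).decode())

get_guest_config() would then have to attach the resulting <cputune>
element to the guest XML it builds.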

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] vga video setting for kvm/libvirt virtual machine on openstack folsom

2013-07-19 Thread Daniel P. Berrange
On Fri, Jul 19, 2013 at 11:35:27AM +0200, Jacques LANDRU wrote:
 Hi all,
 
 I didn't find a solution using Google, so I'm trying the OpenStack
 mailing list. I hope this is the right place to post.
 
 I'm using Folsom with a KVM/libvirt environment.
 
 Is there a simple way to force video -vga std instead of -vga cirrus
 when launching a VM using the nova boot command?
 
 As there is no longer a libvirt.xml.template, I couldn't find how to set
 hardware properties for a virtual machine.
 
 I saw I can set the vif to e1000 when registering/updating an image in
 glance with --property hw_vif_model=e1000.
 
 Is there an equivalent hw_video property to set the VGA video
 characteristics?

There is no facility to change the video card directly at this time. The
only time it will change is if you enable spice, in which case it'll use
qxl instead of cirrus.

Adding a hw_video_model=x image property would be a reasonable addition
if you want to file a feature request, and/or supply a patch.
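
For reference, the element such a property would ultimately control is
libvirt's <video> device; a hypothetical sketch (hw_video_model did not
exist at the time of this reply) of generating it with lxml, in the same
style Nova's config classes build XML:

  from lxml import etree

  def video_xml(model):
      # "model" would come from the proposed hw_video_model image property,
      # e.g. "cirrus", "vga" or "qxl"
      video = etree.Element("video")
      etree.SubElement(video, "model").set("type", model)
      return etree.tostring(video).decode()

  print(video_xml("vga"))   # <video><model type="vga"/></video>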

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] chardev: opening backend file failed: Permission denied

2013-06-21 Thread Daniel P. Berrange
On Fri, Jun 21, 2013 at 01:57:44PM -0400, Samuel Winchenbach wrote:
 Here is some more information.  I am using Ubuntu 12.04 LTS, I have turned
 off apparmor.
 
 Here is my mount: dedup:/big_pool/os-grizzly on /os-grizzly type nfs
 (rw,noatime,nolock,tcp,bg,intr,hard,addr=10.54.90.10)
 
 root@test1:/# find /os-grizzly -type d | xargs ls -l -d
 drwxr-xr-x 4 root   root   4 Jun 21 13:39 /os-grizzly
 drwxr-xr-x 3 glance glance 3 Jun  3 10:25 /os-grizzly/glance
 drwxr-x--- 2 glance glance 4 Jun 19 15:41 /os-grizzly/glance/images
 drwxr-xr-x 3 nova   nova   3 Jun 21 13:39 /os-grizzly/nova
 drwxr-xr-x 4 nova   nova   5 Jun 21 13:13 /os-grizzly/nova/instances
 drwxr-xr-x 2 nova   nova   4 Jun 19 15:42 /os-grizzly/nova/instances/_base
 drwxr-xr-x 2 nova   nova   6 Jun 19 15:42 /os-grizzly/nova/instances/locks
 
 root@test1:/# grep -RE ^[^#] /etc/libvirt/*.conf
 /etc/libvirt/libvirtd.conf:listen_tls = 0
 /etc/libvirt/libvirtd.conf:listen_tcp = 1
 /etc/libvirt/libvirtd.conf:unix_sock_group = libvirtd
 /etc/libvirt/libvirtd.conf:unix_sock_rw_perms = 0770
 /etc/libvirt/libvirtd.conf:auth_unix_ro = none
 /etc/libvirt/libvirtd.conf:auth_unix_rw = none
 /etc/libvirt/libvirtd.conf:auth_tcp = none
 /etc/libvirt/qemu.conf:dynamic_ownership = 0
 
 
 I am stumped.  It was working fine before I made the changes
 (dynamic_ownership, listen_tls, listen_tcp, auth_tcp) and started up the
 other compute nodes.  :/

Do not set 'dynamic_ownership' to 0. If you do this, you are required
to have a mgmt app which knows how to set ownership on all resources
used by QEMU. That option was added as a special hack for oVirt which
can do that, but OpenStack does not support this.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-25 Thread Daniel P. Berrange
On Wed, Apr 24, 2013 at 05:23:11PM -0400, Lorin Hochstein wrote:
 On Wed, Apr 24, 2013 at 11:59 AM, Daniel P. Berrange d...@berrange.com wrote:
 
  On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
   In the docs, we describe how to configure KVM block-based live migration,
   and it has the advantage of avoiding the need for shared storage of
   instances.
  
   However, there's this email from Daniel Berrangé from back in Aug 2012:
   http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
  
   Block migration is a part of KVM that none of the upstream developers
   really like, is not entirely reliable, and most distros typically do not
   want to support it due to its poor design (eg not supported in RHEL).
  
   It is quite likely that it will be removed in favour of an alternative
   implementation. What that alternative impl will be, and when it will
   arrive, I can't say right now.
  
   Based on this info, the OpenStack Ops guide currently recommends against
   using block-based live migration, but the Compute Admin guide has no
   warnings about this.
  
   I wanted to sanity-check against the mailing list to verify that this was
   still the case. What's the state of block-based live migration with KVM?
   Should we be dissuading people from using it, or is it reasonable for
   people to use it?
 
  What I wrote above about the existing impl is still accurate. The new
  block migration code is now merged into libvirt and makes use of an
  NBD server built in to the QEMU process to do block migration. API
  wise it should actually work in the same way as the existing deprecated
  block migration code.  So if you have new enough libvirt and new enough
  KVM, it probably ought to 'just work' with openstack without needing
  any code changes in nova. I have not actually tested this myself
  though.
 
  So we can probably update the docs - but we'd want to check out just
  what precise versions of libvirt + qemu are needed, and have someone
  check that it does in fact work.
 
 
 Thanks, Daniel. I can update the docs accordingly. How can I find out
 what minimum versions of libvirt and qemu are needed?
 
 Also, I noticed you said qemu and not kvm, and I see that
 http://wiki.qemu.org/KVM says that qemu-kvm fork for x86 is deprecated,
 use upstream QEMU now.  Is it the case now that when using KVM as the
 hypervisor for a host, an admin will just install a qemu package instead
 of a qemu-kvm package to get the userspace stuff?

It depends on the distro to be honest, e.g. on Fedora you'd use qemu-kvm,
which is a virtual package that will pull in qemu-system-$ARCH for your
particular host.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Daniel P. Berrange
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
 In the docs, we describe how to configure KVM block-based live migration,
 and it has the advantage of avoiding the need for shared storage of
 instances.
 
 However, there's this email from Daniel Berrangé from back in Aug 2012:
 http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
 
 Block migration is a part of KVM that none of the upstream developers
 really like, is not entirely reliable, and most distros typically do not
 want to support it due to its poor design (eg not supported in RHEL).
 
 It is quite likely that it will be removed in favour of an alternative
 implementation. What that alternative impl will be, and when it will
 arrive, I can't say right now.
 
 Based on this info, the OpenStack Ops guide currently recommends against
 using block-based live migration, but the Compute Admin guide has no
 warnings about this.
 
 I wanted to sanity-check against the mailing list to verify that this was
 still the case. What's the state of block-based live migration with KVM?
 Should we be dissuading people from using it, or is it reasonable for
 people to use it?

What I wrote above about the existing impl is still accurate. The new
block migration code is now merged into libvirt and makes use of an
NBD server built in to the QEMU process to do block migration. API
wise it should actually work in the same way as the existing deprecated
block migration code.  So if you have new enough libvirt and new enough
KVM, it probably ought to 'just work' with openstack without needing
any code changes in nova. I have not actually tested this myself
though.

So we can probably update the docs - but we'd want to check out just
what precise versions of libvirt + qemu are needed, and have someone
check that it does in fact work.
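
For anyone wanting to check their own hosts, an illustrative way to read
the libvirt and hypervisor versions on a compute node with the libvirt
Python binding (the precise minimum versions for the NBD-based path still
need to be confirmed, as noted above):

  import libvirt

  def fmt(v):
      # libvirt encodes versions as major * 1000000 + minor * 1000 + release
      return "%d.%d.%d" % (v // 1000000, (v % 1000000) // 1000, v % 1000)

  conn = libvirt.open("qemu:///system")
  print("libvirt:", fmt(conn.getLibVersion()))
  print("hypervisor:", fmt(conn.getVersion()))
  conn.close()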

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Local storage and Xen with Libxl

2013-04-19 Thread Daniel P. Berrange
On Fri, Apr 19, 2013 at 01:43:23PM +0300, Cristian Tomoiaga wrote:
 As for the compute part, I may need to work with libvirt but I want to
 avoid that if possible. Libxl was meant for stacks, right? Again, this may
 not be acceptable and I would like to know.

Nova already has two drivers which support Xen, one using XenAPI and
the other using libvirt. Libvirt itself will either use the legacy
XenD/XenStore APIs, or on new enough Xen will use libxl.

libxl is a pretty low level interface, not really targeted for direct
application usage, but rather for building management APIs like libvirt
or XCP. IMHO it would not really be appropriate for OpenStack to directly
use libxl. Given that Nova already has two virt drivers which can work
with Xen, I also don't really think there's a need to add a 3rd using
libxl.

 Regarding KVM, I did not use it until now. I don't like the fact that
 security issues pop up more often than I would like (I may be wrong?).
 There are other reasons but they are not important in my decision.

Having worked with both Xen & KVM for 8 years now, I don't see that
either of them is really winning in terms of security issues in the
hypervisor or userspace. Both of them have had their fair share of
vulnerabilities. In terms of the device model, they both share use
of the QEMU codebase, so many vulnerabilities detected with KVM will
also apply to Xen and vice-versa. So I don't think your assertion
that KVM suffers more issues is really accurate.

 Should I go with libxl or stick to libvirt? Should I start to work on
 local storage, or has someone already started whom I should contact?

As far as Nova virt drivers for Xen are concerned, you should either
use the XenAPI driver, or the libvirt driver.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Gerrit Review + SSH

2013-04-04 Thread Daniel P. Berrange
On Thu, Apr 04, 2013 at 10:51:20AM -0700, Ronak Shah wrote:
 Hi,
 
 As the OpenStack dev cycle involves the Gerrit review tool, which
 requires SSH into the Gerrit server, I was wondering whether any of you
 face problems where your company/org does not allow SSH to external hosts.
 
 In general, what is the best practice in terms of environment for
 generating code review?

The traditional workaround when companies have insane firewalls blocking
SSH is to run an SSH server on port 443, since firewalls typically
allow through any traffic on the HTTPS port, even if it isn't using the
HTTPS protocol :-) This workaround only fails if your company is also
doing a man-in-the-middle attack on HTTPS traffic[1].

GitHub actually have an SSH server on port 443 for exactly this reason

   https://help.github.com/articles/using-ssh-over-the-https-port

I don't know how hard it would be for OpenStack Infrastructure team
to officially make Gerrit available via port 443, in addition to the
normal SSH port.

Regards,
Daniel

[1] Yes some companies really do MITM attack all HTTPS connections
their employees make :-(
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] CY13-Q1 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack

2013-04-03 Thread Daniel P. Berrange
On Wed, Apr 03, 2013 at 12:15:21PM +0200, Thierry Carrez wrote:
 Qingye Jiang (John) wrote:
  I saw Jay's suggestion on removing review.openstack.org from the git domain 
  analysis. Can you shed some light on how this system works? Is this system 
  shadowing more real code contributors?
 
 Merge commits are created in git history when branches are merged.
 They appear as having two parent commits. In OpenStack, our Gerrit
 review system automatically creates them when merging into master, so
 jenk...@review.openstack.org appears as the author of all of them.

NB you don't need to exclude based on author name. You can simply ask
git for the history without merges, using 'git log --no-merges'.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSG] Security Note: Selecting LXC as Nova Virtualization Driver can lead to data compromise.

2013-03-19 Thread Daniel P. Berrange
On Tue, Mar 19, 2013 at 01:38:42PM +, Clark, Robert Graham wrote:
 Daniel,
 
 I agree with your modification and have made a note of it on the bug page.
 I'll make sure to change it when we have a sensible place to publish all
 of our OSSNs.
 
 Thanks for engaging on this issue, we now have an OSSG mailing list and
 will be ramping up a number of efforts on there, having people with your
 expertise on board is pivotal to improving openstack security overall.

Which is the mailing list we're using for OSSG, so I can make sure I'm
signed up?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSG] Security Note: Selecting LXC as Nova Virtualization Driver can lead to data compromise.

2013-03-15 Thread Daniel P. Berrange
On Fri, Mar 15, 2013 at 10:44:40AM +, Clark, Robert Graham wrote:
 The following is the first of a series of OpenStack Security Notes that will 
 be issued by the OpenStack Security Group. Security notes are similar to 
 advisories; they address vulnerabilities in 3rd party tools typically used 
 within OpenStack deployments and provide guidance on common configuration 
 mistakes that can result in an insecure operating environment. 
 
 Selecting LXC as Nova Virtualization Driver can lead to data compromise.
 --
 
 ### Summary ###
 LXC does not provide the same level of separation as hypervisors when chosen 
 as the Nova 'virtualization driver'. Attempting to use LXC as a drop in 
 replacement for a hypervisor can result in data exposure between tenants.
 
 ### Affected Services / Software ###
 Nova, LXC, Libvirt, 'Virtualization Driver'
 
 ### Discussion ###

 The quality of container isolation in LXC heavily depends on the
 implementation. While pure LXC is generally well-isolated through various
 mechanisms (for example AppArmor in Ubuntu), LXC through libvirt is not.
 A guest who operates within one container is able to affect another
 container's CPU share, memory limit and block devices, among other
 issues.

This is really wrong / misleading. Libvirt with LXC is perfectly capable
of using mandatory access control frameworks like SELinux / AppArmor to
isolate LXC containers from each other. The issue is that such use of MAC,
whether with libvirt LXC or other LXC impls, is not practical when you want
to be able to run full OS installs in LXC. As such it is not possible to
have OpenStack make use of it.

I'd like this paragraph to be re-written to something like this:


  ### Discussion ###

  The Libvirt LXC functionality exposed by OpenStack is built on the kernel
  namespace & cgroup technologies. Until Linux 3.8, there has been no support
  for separate user namespaces in the kernel. As such, there has been no way
  to securely isolate containers from each other or the host environment using
  DAC (discretionary access control). For example, they can escape their
  resource constraints by modifying cgroups settings, or attack the host via
  various files in the proc and sysfs filesystems. The use of MAC (mandatory
  access control) technologies like SELinux or AppArmor can mitigate these
  problems, but it is not practical to write MAC policies that would allow
  running full OS installs in LXC under OpenStack.

  Although initial user namespace support was merged in Linux 3.8, it is not
  yet complete, or mature enough to be considered secure. Work is ongoing to
  finish the kernel namespace support and enhance libvirt LXC to take advantage
  of it.

 For more information on the effects of this issue see this bug:
 https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1088295
 
 ### Recommended Actions ###
 The OSSG advises that anyone deploying Nova in environments that require any 
 level of separation use a hypervisor such as Xen, KVM, VMware or Hyper-V.
 
 LXC security pivots on a system known as DAC (discretionary access control) 
 which is not currently capable of providing strong isolation of guests. Work 
 is underway to improve DAC but it's not ready for production use at this time.
 
 The OSSG recommends against using LXC for enforcing secure separation of
 guests, even with appropriate AppArmor policies applied.
 
 ### Contacts / References ###
 Nova : http://docs.openstack.org/developer/nova/
 LXC : http://lxc.sourceforge.net/
 Libvirt : http://libvirt.org/
 KVM : http://www.linux-kvm.org/page/Main_Page
 Xen: http://xen.org/products/xenhyp.html
 LXC DAC : https://wiki.ubuntu.com/UserNamespace
 LXC LibVirt Discussion : 
 https://www.berrange.com/posts/2011/09/27/getting-started-with-lxc-using-libvirt/
 OpenStack Security Group : https://launchpad.net/~openstack-ossg
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSG] Security Note: Selecting LXC as Nova Virtualization Driver can lead to data compromise.

2013-03-15 Thread Daniel P. Berrange
On Fri, Mar 15, 2013 at 09:05:30AM -0700, Bryan D. Payne wrote:
  The quality of container isolation in LXC heavily depends on 
  implementation. While
  pure LXC is generally well-isolated through various mechanisms (for 
  example AppArmor
  in Ubuntu), LXC through libvirt is not. A guest who operates within one 
  container is
  able to affect another containers cpu share, memory limit and block 
  devices among other
  issues.
 
   This is really wrong / misleading. <snip>
 
Although initial user namespace support was merged in Linux 3.8, it is not
yet complete, or mature enough to be considered secure. Work is ongoing to
finish the kernel namespace support and enhance libvirt LXC to take 
  advantage
of it.
 
 Point taken and thank you for the clarification.  As you note, doing
 lxc securely is basically not possible on a current OpenStack
 deployment.  This was the main take home point of the security note.
 I'm happy to see that work is ongoing to help improve this feature,
 and look forward to reviewing it when it is stable.
 
 If you'd like to help with the wording of future notes, I encourage
 you to take part in the weekly OSSG meetings:

Where/when was this wording discussed though? I don't see anything about
LXC mentioned in the logs of the last two meetings in March. While IRC
may be a good place for ad-hoc discussions around an issue, I don't really
think it is a good forum for reviewing these final notices prior to an
announcement. Due to its real-time nature, IRC hits timezone problems
which can prevent relevant people from attending. A posting to an email
list gives time for all relevant parties to provide feedback.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Guest OS shows just one CPU core instead of two

2013-03-05 Thread Daniel P. Berrange
On Tue, Mar 05, 2013 at 04:00:06PM +0530, Balamurugan V G wrote:
 Hi,
 
 I am running Folsom 2.2 with a KVM compute node. When I launch a Windows
 instance with a flavor that has 2 VCPUs and 2GB RAM, the guest sees the
 RAM fine but not the 2 CPUs; it reports only one virtual processor. When
 I look at the command line options with which KVM has launched the
 instance, I see that in the -smp argument, sockets is set to 2 and cores
 is set to 1. How can I get cores to be 2 so that the guest OS can see them?

There's nothing wrong with what the command line sets here. It is
simply configuring 2 vCPUs, where each vCPU is represented as a
separate socket to the guest, rather than what you describe, which
would be 1 socket with dual cores.

You don't mention what Windows version you are using, but some versions
are not clever enough to switch between a uniprocessor config and an SMP
config. So if you installed those versions on a 1 CPU guest, then even
if you later boot the image on a 2 CPU guest, it won't see the extra CPUs.
There are hacks you can do to upgrade a Windows install from the
uniprocessor to the multiprocessor HAL - see the Microsoft knowledge base
for more info.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] spice in devstack no working

2013-02-28 Thread Daniel P. Berrange
On Fri, Mar 01, 2013 at 02:33:32AM +0800, Shake Chen wrote:
 Hi
 
 I tried to enable SPICE in devstack, but when I create a VM it reports an
 error. If I do not enable SPICE, it works well.

If you want help, you're going to have to tell us much more than just
"would report error".

What OS distro are you using? What version of QEMU? What version
of libvirt? What is the full error (+ stack trace if any) you get?
What tool did you see the error from?

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] python-novaclient 2.11.0 release

2013-02-13 Thread Daniel P. Berrange
On Tue, Feb 12, 2013 at 09:41:11PM -0800, Vishvananda Ishaya wrote:
 Hello Everyone,
 
 I just pushed version 2.11.0 of python-novaclient to Pypi. There are a lot of 
 fixes and features in this release. Here is a brief overview:
 
 Bug Fixes
 -
 
 simplified keyring support

Sigh, this bug fix actually creates worse bugs & we didn't appear to get
the fix for it merged before you cut the release :-( With this change I
have been unable to get the nova client to work at all outside of an X
session when the gnome-keyring package is installed, which is basically
any default Fedora / GNOME install:

  https://bugs.launchpad.net/python-novaclient/+bug/1116302
  https://review.openstack.org/#/c/21690/

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-dev]Where is libvirt library packages in Openstack Nova branch

2013-01-21 Thread Daniel P. Berrange
On Sun, Jan 20, 2013 at 01:46:58PM +0800, harryxiyou wrote:
 Hi all,
 
 I read the source code of the OpenStack Nova branch but I cannot find
 the standard libvirt library packages. I think Nova uses interfaces from
 the standard libvirt library to attach Sheepdog (or other) volumes to
 QEMU. If I add a new block storage driver to standard libvirt, I would
 have to update these libvirt library packages in the OpenStack Nova
 branch. Could anyone give me some suggestions? Thanks in advance ;-)

The nova driver for talking to libvirt is in nova/virt/libvirt/

The libvirt code itself is not part of OpenStack, and is available from
our website http://libvirt.org

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-dev]Where is libvirt library packages in Openstack Nova branch

2013-01-21 Thread Daniel P. Berrange
On Tue, Jan 22, 2013 at 12:13:57AM +0800, harryxiyou wrote:
 On Mon, Jan 21, 2013 at 8:14 PM, Daniel P. Berrange berra...@redhat.com 
 wrote:
 [...]
  The nova driver for talking to libvirt is in nova/virt/libvirt/
 
 Yup, I think so. Therefore, I also think the nova driver in
 nova/virt/libvirt has some relationship with the libvirt code itself,
 right? The Nova driver sends parameters to the libvirt client (in the
 Nova branch), then the libvirt client sends these parameters to the
 libvirt server (the libvirt code itself). How do they (libvirt client and
 libvirt server) communicate with each other in detail? I wonder if they
 are one and the same in the libvirt code itself, and whether OpenStack
 just calls libvirt client interfaces, i.e. packages the libvirt code
 itself as a library to call.

Nova simply uses the standard libvirt python module, which is a thin
python wrapper around the libvirt.so C library. This library talks to
the libvirtd server via a private RPC service. Nova only includes its
own custom libvirt integration code; the libvirt python module isn't
part of Nova.
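
For clarity, a minimal example of that standard libvirt python module in
use - this is the thin wrapper over libvirt.so, entirely separate from
Nova's own code:

  import libvirt

  # open() connects to the local libvirtd daemon over its private RPC protocol
  conn = libvirt.open("qemu:///system")
  for dom in conn.listAllDomains():
      print(dom.name(), "active" if dom.isActive() else "inactive")
  conn.close()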

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack-dev]Where is libvirt library packages in Openstack Nova branch

2013-01-21 Thread Daniel P. Berrange
On Tue, Jan 22, 2013 at 01:08:23AM +0800, harryxiyou wrote:
 On Tue, Jan 22, 2013 at 12:20 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
 [...]
  Nova simply uses the standard libvirt python module, which is a thin
  python wrapper around the libvirt.so C library.
 
 I think so, but I wonder if the standard libvirt python module is
 nova/virt/libvirt/driver.py?

That isn't the libvirt python module. That is Nova's libvirt integration
driver code.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova root wrapper understanding

2013-01-11 Thread Daniel P. Berrange
On Fri, Jan 11, 2013 at 11:32:08AM +0100, Thierry Carrez wrote:
 Kun Huang wrote:
  In this wiki, http://wiki.openstack.org/Nova/Rootwrap, the part on the
  security model concludes with "This chain ensures that the nova user
  itself is not in control of the configuration or modules used by the
  nova-rootwrap executable." I understand that chain but I'm confused by
  this conclusion.
  
  That chain means that the nova-rootwrap executable runs safely under
  root control. In other words, the program nova-rootwrap runs is
  protected by root, and it cannot be influenced by other users. But that
  conclusion implies that the insecure model is one where the /nova/ user
  is controlled by someone. This is what I'm confused by.
 
 The goal of the rootwrap (used by Nova, but also Cinder and Quantum) is
 to allow OpenStack services to run some commands as root, while running
 the rest of the code as an unprivileged user. This limits the
 amount of damage that a security hole in openstack may inflict on the
 system.
 
 Imagine a security hole in Nova that enabled an attacker to execute
 arbitrary code. That code would be executed as the nova user, which
 limits the amount of damage on the rest of the system. The trick is to
 allow the nova user to run /some/ code as root, without letting it
 execute /anything/ as root.
 
 Rootwrap is just a mechanism to provide limited privilege escalation.
 The rootwrap chain mentioned in the wiki ensures that only the root user
 controls what the nova user can execute as root.  If you let the nova
 user itself in control of what the nova user can execute as root, you
 end up with around the same security as if you were running as the root
 user directly.
 
 Now, the privilege escalation limitation is only as good as the
 precision of the rootwrap filters you use. It's pretty good for nova-api
 nodes, which are basically not allowing anything to be run as root. It's
 not nearly as good on nova-compute nodes, where we still use pretty
 liberal filters that could use some work. We could also get rid of the
 code that requires most of the run-as-root stuff, in particular the
 pre-boot file injection mechanisms (evil! use boot-time customization
 instead!).

FWIW, if you've got libguestfs available, the file injection code does
not require any rootwrap usage. Ironically the config drive stuff now
does require root if you configure it to use FAT instead of ISO9660 :-(

I have a general desire to make it such that you can run with KVM and
Nova without requiring rootwrap for anything. Last time I looked, the
three general areas where we required root wrap were networking, storage
and file injection. My recent refactoring of file injection addressed
the latter by using libguestfs APIs instead of libguestfs FUSE. Networking
is mostly solved if using newest libvirt + Quantum instead of Nova's
own networking. Storage is something that can be addressed by using
libvirt's storage APIs instead of running commands directly.
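
As an illustration of that last point, a hedged sketch of creating a
volume through libvirt's storage pool APIs rather than shelling out under
rootwrap; the pool name and volume details are assumptions for the example:

  import libvirt

  vol_xml = """
  <volume>
    <name>demo.qcow2</name>
    <capacity unit='G'>1</capacity>
    <target><format type='qcow2'/></target>
  </volume>
  """

  conn = libvirt.open("qemu:///system")
  pool = conn.storagePoolLookupByName("default")  # assumed pool name
  vol = pool.createXML(vol_xml, 0)  # libvirtd creates the volume; no rootwrap needed
  print("created", vol.path())
  conn.close()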

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Audio card for libvirt/kvm in folsom

2013-01-11 Thread Daniel P. Berrange
On Fri, Jan 11, 2013 at 12:29:35PM +0100, Davide Guerri wrote:
 Hi all,
 Is it possible to add an audio card to the domain definition when using
 libvirt/kvm? If yes, how can it be done?
 
 I'm using the Folsom release.

No, there isn't any support for audio devices in Nova / libvirt at this
time. How were you anticipating using it? Outputting via the host sound
card doesn't make sense, and I don't believe noVNC supports the VNC
audio extension. AFAIK, gtk-vnc is the only common VNC client supporting
audio, and that doesn't support the websockets tunnelling.

So even if we allowed audio devices, I'm not clear how you'd make use of it.
If you can show how it'd be useful, adding audio devices is an easy RFE to
take care of.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Audio card for libvirt/kvm in folsom

2013-01-11 Thread Daniel P. Berrange
On Fri, Jan 11, 2013 at 02:56:23PM +0100, Davide Guerri wrote:
 Daniel, 
 let me explain what I'm trying to do.
 I'm trying to set up a simple virtual desktop infrastructure on
 top of OpenStack using both Windows (7 only atm) and Linux guests.
 
 On Linux the missing audio board wouldn't be a problem since I'm
 planning to use freeNX or x2go, which in turn use an esound remote
 tcp connection through ssh.
 For Windows guests (which I'm not an expert on) without an audio
 card installed I'm facing the following problem: for some remote
 desktop clients (CORD on OS X and Mocha RDP on iOS, for instance)
 there seems to be no way to enable sound if the guest isn't configured
 with an audio card. Other RDP solutions (like the Microsoft RDC
 on OS X) use a remote audio device that makes audio output work,
 but there is no way to enable audio recording.
 A sound board like ich6 makes audio work as expected.
 
 In order to have a working simple VD solution I'll try to hack
 the code of nova to add a sound card.

Ok, your use case sounds more reasonable now. To add sound support
to nova there's a couple of things you'll need to do:

 1. In nova/virt/libvirt/config.py define a new class for
    describing the libvirt sound XML config, and test it in
    nova/tests/test_libvirt_config.py (see the sketch after
    this list).

 2. In nova/virt/libvirt/driver.py add a new config option
    libvirt_sound_driver=XXX, defaulting to None, but allowing
    strings like 'ich6' or 'ac97', etc. which describe a sound
    card model.

 3. In the same file, modify get_guest_config() to add the
    sound device using the APIs you added in config.py.
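
A rough, purely illustrative sketch of what step 1 might look like (the
class name and conventions are modelled on nova/virt/libvirt/config.py,
not taken from actual Nova code):

  from lxml import etree

  class LibvirtConfigGuestSound(object):
      """Illustrative only: emits a <sound model='...'/> element."""

      def __init__(self, model="ich6"):
          self.model = model

      def format_dom(self):
          sound = etree.Element("sound")
          sound.set("model", self.model)
          return sound

  print(etree.tostring(LibvirtConfigGuestSound("ac97").format_dom()).decode())
  # prints: <sound model="ac97"/>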

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Audio card for libvirt/kvm in folsom

2013-01-11 Thread Daniel P. Berrange
On Fri, Jan 11, 2013 at 10:53:25PM +0800, 孙玉新 wrote:
 Davide,
 
 If you use KVM, it's possible to enable audio.
 Please refer  http://libvirt.org/formatdomain.html#elementsSound
 
 Here is some information about how to enable it in nova:
  http://www.gossamer-threads.com/lists/openstack/operators/21302
 
 Haven't tested it, but I think the steps should be:
 1. cp /usr/lib/python2.7/dist-packages/nova/virt/libvirt.xml.template
 /etc/nova/
 2. add the next line to nova.conf
 --libvirt_xml_template=/etc/nova/libvirt.xml.template
 3. Edit /etc/nova/libvirt.xml.template, adding the following lines:
 
  <devices>
    <sound model='ich6'>
      <codec type='micro'/>
    </sound>
  </devices>
 
 Hope this is helpful.


That won't work on Folsom since the libvirt.xml.template no longer
exists.  See my reply elsewhere in this thread for how to do it in
the Nova code.  If you want to just hack it in without modifying
Nova though, it is possible to use a libvirt hook script:

  http://libvirt.org/hooks.html

this is invoked by libvirt immediately prior to launching a guest,
allowing you to modify the XML that is used.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Wiki content imported into MediaWiki - please check

2012-12-19 Thread Daniel P. Berrange
On Mon, Dec 17, 2012 at 06:51:23PM -0800, Ryan Lane wrote:
 I've just finished importing the content from the MoinMoin wiki into the
 MediaWiki instance. Please check the content:
 
 https://wiki-staging.openstack.org/wiki/Main_Page
 
 We're using a self-signed certificate for now. We are ordering a proper
 certificate, but even that cert will still appear invalid until we've
 switched to the correct URL.
 
 Also note that the migration script doesn't map from MoinMoin to MediaWiki
 perfectly and we'll need to clean up some of the content manually. We'll
 need to create some templates for missing features too (like columned
 layout).
 
 We're going to leave the wiki up for a couple days in this state. If the
 content is mostly agreeable and we decide to press forward, I'll migrate
 the data again and we'll replace MoinMoin.

The migration has not handled the '<<BR>>' syntax that Moin uses to
insert line breaks. This causes a bit of a mess, eg look at
the "Things to avoid when creating commits" section here

  https://wiki-staging.openstack.org/wiki/GitCommitMessages

which is full of stray '<<' and '>>' characters.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Scheduler issues in folsom

2012-10-31 Thread Daniel P. Berrange
On Wed, Oct 31, 2012 at 10:40:57AM +0800, Huang Zhiteng wrote:
 On Wed, Oct 31, 2012 at 10:07 AM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 
  On Oct 30, 2012, at 7:01 PM, Huang Zhiteng winsto...@gmail.com wrote:
 
  I'd suggest the same ratio too.  But besides memory overcommitment, I
  suspect this issue is also related to how KVM does memory allocation (it
  doesn't do actual allocation of the entire memory for the guest when
  booting). I've seen a compute node report more memory than it should
  have (e.g. a 4GB node has two 1GB instances running but still reports 3GB
  free memory) because the libvirt driver calculates free memory simply
  based on /proc/meminfo, which doesn't reflect how much memory guests
  are intended to use.
 
  Ah interesting, if this is true then this is a bug we should try to fix.
  I was under the impression that it allocated all of the memory unless
  you were using virtio_balloon, but I haven't verified.

 I'm pretty sure about this.  Can anyone from RedHat confirm this is
 how KVM works?

Yes, that is correct. KVM only allocates memory when the guest actually
touches each page. For Linux guests this means that when the guest boots
up very little memory is actually allocated on the host. For Windows guests
you would typically see all memory allocated immediately, since the Windows
kernel will memset() the entire RAM to zero on startup.

Also if you are using explicit huge pages for KVM it will allocate all
RAM upfront, but this is not the default - we use transparent/automatic
huge pages normally which still allocates on demand. Finally if you have
passed any PCI devices through to the guest, all guest RAM will be mlock()d
on the host so it can't even overcommit to swap.
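
To make the gap described above concrete, an illustrative sketch comparing
what the running guests are configured to use against what /proc/meminfo
reports as free (connection URI and field handling kept deliberately
simple):

  import libvirt

  conn = libvirt.open("qemu:///system")
  # dom.info()[1] is the guest's configured maximum memory, in KiB
  configured_kib = sum(d.info()[1] for d in conn.listAllDomains() if d.isActive())
  conn.close()

  with open("/proc/meminfo") as f:
      meminfo = dict(line.split(":", 1) for line in f)
  free_kib = int(meminfo["MemFree"].split()[0])

  print("guests configured for %d MiB, host reports %d MiB free"
        % (configured_kib // 1024, free_kib // 1024))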

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] openstack libvirt lxc

2012-08-21 Thread Daniel P. Berrange
On Tue, Aug 21, 2012 at 10:19:34AM +0800, 廖南海 wrote:
 Who uses the LXC virtual machine?
 Please give me some advice?

My advice would be not to use LXC since, as it exists today, it is not
secure, i.e. root within the container can break out & compromise the
entire host. This is not really the fault of OpenStack, but rather the
fact that the Linux kernel container support is still under development
and does not provide all the pieces required to form a secure solution.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova] Disk attachment consistency

2012-08-15 Thread Daniel P. Berrange
On Wed, Aug 15, 2012 at 03:49:45PM +0100, John Garbutt wrote:
 You can see what XenAPI exposes here:
  http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/api/?c=VBD
 
 I think the only thing you can influence when plugging in the disk is the 
 “userdevice”
 which is the disk position: 0,1,2…  When you have attached the disk you can 
 find out
 the “device” name, such as /dev/xvda
 
 I don't know about Xen with libvirt. But from the previous discussion it 
 seems using
 the disk position would also work with KVM?

No, this doesn't really work in general. Virtio disks get assigned SCSI device
numbers on a first-come first served basis. In the configuration you only have
control over the PCI device slot/function. You might assume that your disks
are probed in PCI device order, and thus get SCSI device numbers in that same
order. This is not really safe though. Furthermore, if the guest has any
other kinds of devices, e.g. perhaps they logged into an iSCSI target, then all
bets are off for what SCSI device you get assigned.

All the host can safely say is

  - Virtio-blk disks get PCI address domain:bus:slot:function
  - Virtio-SCSI disks get SCSI address A.B.C.D
  - Disks have an unique serial string ZZZ

As a guest OS admin you can use this info to get reliable disk names
in /dev/disk/by-{path,id}.

If your disk has a filesystem on it, you can also get a unique UUID
and/or filesystem label, which means you can refer to the device
from /dev/disk/by-{uuid,label} too.

Relying on /dev/sdXXX is doomed to failure in the long term, even on
bare metal, and should be avoided wherever possible.
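
Purely for illustration, listing those stable names from inside a guest
(the directories are the standard udev locations; output will vary per
guest):

  import os

  for d in ("/dev/disk/by-id", "/dev/disk/by-path", "/dev/disk/by-uuid"):
      if os.path.isdir(d):
          for link in sorted(os.listdir(d)):
              print(link, "->", os.path.realpath(os.path.join(d, link)))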

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-14 Thread Daniel P. Berrange
On Tue, Aug 14, 2012 at 11:30:29AM -0700, Matt Joyce wrote:
 I have to ask.  Wasn't FUSE designed to do a lot of this stuff?  It is
 userspace and it doesn't do nasty stuff to file systems.  Why aren't we
 going that route?

If you read earlier in this thread, you'll see that FUSE is what Nova
already uses, and is why we have this CVE.  From a non-security POV,
FUSE is actually quite inefficient since its operations have to map
strictly to POSIX compliant filesystem APIs. Using the libguestfs API
directly gives you better performance and more flexible APIs for
accomplishing many tasks.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-09 Thread Daniel P. Berrange
On Thu, Aug 09, 2012 at 07:10:17AM -0700, Vishvananda Ishaya wrote:
 
 On Aug 9, 2012, at 1:03 AM, Blair Bethwaite blair.bethwa...@gmail.com wrote:
 
  Hi Daniel,
  
  Thanks for following this up!
  
  On 8 August 2012 19:53, Daniel P. Berrange berra...@redhat.com wrote:
  not tune this downtime setting, I don't see how you'd see 4 mins
   downtime unless it was not truly live migration, or there was
  
   Yes, quite right. It turns out Nova is not passing/setting libvirt's
   VIR_MIGRATE_LIVE flag when it is asked to live-migrate a guest, so it is
   not proper live migration. That is the default behaviour unless the
   flag is added to the migrate flags in nova.conf; unfortunately that
   flag isn't currently mentioned in the OpenStack docs either.
 
 Can you file a bug on this to change the default? I don't see any
 reason why this should be off.

With non-live migration, the migration operation is guaranteed to
complete. With live migration, you can get into a non-convergence
scenario where the guest is dirtying data faster than it can be
migrated. With the way Nova currently works the live migration
will just run forever with no way to stop it. So if you want to
enable live migration by default, we'll need to do more than
simply set the flag. Nova will need to be able to monitor the
migration, and either cancel it after some time, or tune the
max allowed downtime to let it complete.
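
For reference, the tunable in question is exposed through the libvirt API;
a hedged sketch (the domain name and the 2 second value are illustrative
only) of raising the allowed downtime on an in-progress live migration:

  import libvirt

  conn = libvirt.open("qemu:///system")
  dom = conn.lookupByName("instance-00000001")  # illustrative domain name
  # allow up to 2000 ms of pause so a busy guest's migration can converge
  dom.migrateSetMaxDowntime(2000, 0)
  conn.close()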


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 09:50:20AM +0800, Huang Zhiteng wrote:
  But to the contrary. I tested live-migrate (without block migrate)
  last night using a guest with 8GB RAM (almost fully committed) and
  lost any access/contact with the guest for over 4 minutes - it was
  paused for the duration. Not something I'd want to do to a user's
  web-server on a regular basis...
 
 4 minutes of pause (down time)?  That's way too long.  Even if there was
 a crazy memory-intensive workload inside the VM being migrated, the
 worst case is KVM has to pause the VM and transmit all 8 GB of memory (all
 memory is dirty, which is very rare).  If you have a 1GbE link between
 the two hosts, that worst-case pause period (down time) is less than 2
 minutes.  My previous experience is: the down time for migrating one
 idle (almost no memory access) 8GB VM via 1GbE is less than 1 second;
 the down time for migrating an 8 GB VM whose pages get dirty really
 quickly is 60 seconds.  FYI.

KVM has a tunable setting for the maximum allowable live migration
downtime, which IIRC defaults to something very small like 250ms.

If the migration can't be completed within this downtime limit,
KVM will simply never complete migration. Given that Nova does
not tune this downtime setting, I don't see how you'd see 4 mins
downtime unless it was not truly live migration, or there was
something else broken (e.g. the network bridge device had a delay
inserted by the STP protocol which made the VM /appear/ to be
unresponsive on the network even though it was running fine).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-08 Thread Daniel P. Berrange
On Tue, Aug 07, 2012 at 04:13:22PM -0400, Jay Pipes wrote:
 On 08/07/2012 08:57 AM, Blair Bethwaite wrote:
  I also feel a little concern about this statement:
 
   It don't work so well, it complicates migration code, and we are building
  a replacement that works.
 
 
  I have to go further with my tests, maybe we could share some ideas, use
  case etc...
  
  I think it may be worth asking about this on the KVM lists, unless
  anyone here has further insights...?
  
  I grabbed the KVM 1.0 source from Ubuntu Precise and vanilla KVM 1.1.1
  from Sourceforge, block migration appears to remain in place despite
  those (sparse) comments from the KVM meeting minutes (though I am
  naive to the source layout and project structure, so could have easily
  missed something). In any case, it seems unlikely Precise would see a
  forced update to the 1.1.x series.
 
 cc'd Daniel Berrange, who seems to be keyed in on upstream KVM/Qemu
 activity. Perhaps Daniel could shed some light.

Block migration is a part of KVM that none of the upstream developers
really like, is not entirely reliable, and most distros typically do not
want to support it due to its poor design (eg not supported in RHEL).

It is quite likely that it will be removed in favour of an alternative
implementation. What that alternative impl will be, and when it will
arrive, I can't say right now. A lot of the work (possibly all) will
probably be pushed up into libvirt, or even the higher level mgmt apps
using libvirt. It could well involve the mgmt app having to setup an
NBD or iSCSI server on the source host, and then launching QEMU on the
destination host configured to stream the data across from the NBD/iSCSI
server in parallel with the migration stream. But this is all just talk
for now, no firm decisions have been made, beyond a general desire to
kill the current block migration code.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 12:33:57AM -0400, Eric Windisch wrote:
 
 
  What's the security vulnerability here? Its writing to something which
  might be a symlink to somewhere special, right?
 
 
 Mounting filesystems tends to be a source of vulnerabilities in and of
 itself. There are userspace tools as an alternative, but a standard OS
 mount is clearly not secure. While libguestfs is such a userspace
 alternative, and guestmount is in some ways safer than a standard mount, it
 is not used by Nova in a way that has any clear advantage to a standard
 mount as it runs as root.
 
 As this CVE indicates, injecting data into a mounted filesystem has its own
 problems, whether or not that filesystem is mounted directly in-kernel or
 via FUSE. There are also solutions here, some very complex, few if any are
 foolproof.
 
 The solution here may be to use libguestfs, which seems to be a modern
 alternative to mtools, but to use it as a non-privileged user and to forego
 any illusions of mounting the filesystem anywhere via the kernel or FUSE.

Yes, ideally Nova would use the libguestfs API directly to inject files
and stop using guestmount, at which point things are strongly confined,
since everything takes place inside a VM which can only see the guest FS.
All files from the host are uploaded into the guest FS using an RPC
mechanism.  Even using the libguestfs API though, applications need
to be somewhat careful about what they do. The libguestfs manpage
highlights important security considerations:

  http://libguestfs.org/guestfs.3.html#security
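
To illustrate the kind of direct API usage I mean, a rough sketch of
injecting a file with the libguestfs Python bindings (the image path,
mount point and file contents here are illustrative only, not what Nova
actually does):

  import guestfs

  g = guestfs.GuestFS()
  g.add_drive_opts("/var/lib/nova/instances/instance-000e/disk", readonly=0)
  g.launch()                       # boots the libguestfs appliance VM

  roots = g.inspect_os()           # locate the guest's root filesystem
  g.mount(roots[0], "/")
  g.write("/etc/injected.conf", "injected-by-nova\n")

  g.umount_all()
  g.close()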

Also note that current work is being done to make libguestfs use
libvirt to launch its appliance VMs, at which point libguestfs VMs
will be strongly confined by sVirt (SELinux/AppArmour), and also
able to run as a separate user ID.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-011] Compute node filesystem injection/corruption (CVE-2012-3447)

2012-08-08 Thread Daniel P. Berrange
On Wed, Aug 08, 2012 at 02:17:30PM +0200, Thierry Carrez wrote:
 Eric Windisch wrote:
  Unfortunately, this won't be the end of vulnerabilities coming from this 
  feature.
 
 Indeed. I would like to see evil file injection die, and be replaced by
 cloud-init / config-drive. That's the safest way.
 
 If we can't totally get rid of file injection, I'd like it to be a clear
 second-class citizen that you should enable only if you absolutely need it.

If we used the libguestfs APIs instead of the guestmount program, then the
security characteristics of file injection would be pretty much equivalent
to config drive IMHO. In both cases you would be primarily relying on
the containment of the QEMU process for security.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Default reply to behavior for mailing list

2012-07-31 Thread Daniel P. Berrange
On Tue, Jul 31, 2012 at 10:50:02AM -0700, Bhuvaneswaran A wrote:
 Stefano,
 
 If a subscriber replies to a mailing list message, it's sent to the
 author only. Each subscriber should use Reply to All every time to
 post a reply to the mailing list.
 
 Can you please configure the mailing list and set reply-to header as
 mailing list address, openstack@lists.launchpad.net. With this setup,
 if the user click reply in his email client, the message is sent to
 mailing list, instead of the author.

This discussion invariably turns up on most open source mailing lists
from time to time. People never agree on the best setting. Asking
for this reply-to setting to be changed is merely shifting the pain
away from one set of users (which includes you) onto other set of
users (which includes me). There's no net gain here. Just shifting
the pain.  As such IMHO we should leave it as it is.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Inaugurating the new Development list

2012-07-10 Thread Daniel P. Berrange
On Mon, Jul 09, 2012 at 01:56:25PM -0700, Stefano Maffulli wrote:
 On Mon 09 Jul 2012 01:48:25 PM PDT, Atul Jha wrote:
  And what happens to openstack@lists.launchpad.net then?
 
 good question: at the moment nothing happens, this list will remain 
 active. According to the new mailing list layout[1], it will be named 
 'General' but it will remain as it is for the time being.
 
 there is a general agreement though that in the future all the lists 
 hosted on launchpad should migrate to the new mailing list server. I'd 
 say lets stabilize the new development list and start thinking about 
 moving stuff around soon after that's done.

I assume that no matter what happens, we will ensure that all the
historical archives are kept permanently available online ?

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to speed-up removal of a volume in Openstack Essex

2012-07-09 Thread Daniel P. Berrange
On Mon, Jul 09, 2012 at 11:17:13AM +0200, Heber Dijks wrote:
 When terminating a volume, openstack by default overwrites the complete volume
 with zeros for security reasons. This can take a long time, especially
 with large volumes.
 
 If security isn’t an issue in your environment, you can speed-up deletion
 to only overwrite the first 1GB with zeros, which will then delete only
 the MBR, partition table and the first part of the filesystem.
 
 See this post (
 https://dijks.wordpress.com/2012/07/09/how-to-speedup-removal-of-a-volume/)
 for a brief tutorial to speed-up removal of a volume if security isn't an
 issue.

On the flipside, if security /is/ your concern, then you may well consider
fill-with-zeros to be insufficient.  The ability to invoke the 'scrub'
command would be quite desirable. It sounds like Nova really ought to
have this all be configurable, to choose between none, zeros, or one of
the many 'scrub' algorithms.
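
A rough sketch of what that configurable behaviour could look like (the
option values and the scrub invocation below are assumptions for
illustration, not existing Nova code):

  from nova import utils

  def clear_volume(dev_path, method='zero', size_mb=None):
      if method == 'none':
          return
      if method == 'zero':
          args = ['dd', 'if=/dev/zero', 'of=%s' % dev_path, 'bs=1M']
          if size_mb:                  # optionally wipe only the start
              args.append('count=%d' % size_mb)
          utils.execute(*args, run_as_root=True)
      elif method == 'scrub':
          # 'dod' is one of scrub(1)'s multi-pass patterns
          utils.execute('scrub', '-p', 'dod', dev_path, run_as_root=True)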

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Libvirt LXC with volume-attach broken ?

2012-07-06 Thread Daniel P. Berrange
On Thu, Jul 05, 2012 at 06:49:06PM -0700, Eric W. Biederman wrote:
 Serge Hallyn serge.hal...@canonical.com writes:
 
  Quoting Daniel P. Berrange (berra...@redhat.com):
  On Thu, Jul 05, 2012 at 03:00:26PM +0100, Daniel P. Berrange wrote:
   Now, when using 'nova volume-attach':
   
 # nova volume-attach 05eb16df-03b8-451b-85c1-b838a8757736 
   a5ad1d37-aed0-4bf6-8c6e-c28543cd38ac /dev/sdf
   
   nova will import an iSCSI LUN from the nova volume service, on the 
   compute
   node. The kernel will assign it the next free SCSI drive letter, in my
   case '/dev/sdc'.
   
   The libvirt nova driver will then do a mknod, using the volume name
   passed to 'nova volume-attach'.
   eg it will do
   
 mknod  /var/lib/nova/instances/instance-000e/rootfs/dev/sdf
  
  Opps, I'm slightly wrong here. What it actually does is
  
mount --bind /dev/sdc 
  /var/lib/nova/instances/instance-000e/rootfs/dev/sdf
  
  so you get a 'sdf' device, but with the major/minor number of the 'sdc'
  device. I can't say I particularly like this approach. Ultimately I
  think we need the kernel support to make this work correctly. In any
 
  Yes, that's what the 'devices namespace' is meant to address.  I'm hoping
  we can have some serious design discussion on that in the next few months.
 
 This is not the device namespace problem.
 
 This is the setns problem for mount namespaces, and the unprivilged
 mount problem.
 
 There may be a notification issue so user space can perform actions
 in a container when a device shows up.
 
 But it should be very possible on the host to call.
 setns(containers_mount_namespace);
 mknod(/dev/foo);
 chown(/dev/foo, CONTAINER_ROOT_UID, CONTAINER_ROOT_GID);
 
 And then from inside the container especially when I get the rest of
 the user namespace merged it should be very possible to manipulate
 the block device because you have permission, and to mount the
 partitions of the block device, because you are root in your container.
 
 But until the user namespace is merged you really are root so you can
 mount whatever.
 
 Daniel does that sound like the support you are looking for?

Yes, the setns(mnt) approach you describe above is exactly what I'd
like to be able to do, to solve the first half of the problem.
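
A rough sketch of that setns(mnt) + mknod sequence driven from the host
(paths, device numbers and ownership values are illustrative; this needs
CAP_SYS_ADMIN and a kernel/glibc new enough to expose setns):

  import ctypes
  import os
  import stat

  libc = ctypes.CDLL("libc.so.6", use_errno=True)

  def mknod_in_container(init_pid, dev_path, major, minor, uid, gid):
      ns_fd = os.open("/proc/%d/ns/mnt" % init_pid, os.O_RDONLY)
      try:
          if libc.setns(ns_fd, 0) != 0:     # join the container's mount namespace
              raise OSError(ctypes.get_errno(), "setns failed")
          os.mknod(dev_path, 0o660 | stat.S_IFBLK, os.makedev(major, minor))
          os.chown(dev_path, uid, gid)      # hand it to the container's root
      finally:
          os.close(ns_fd)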

The other part of the problem is that I have a /dev/sdf, or even a
/dev/volgroup00/logvol3 in the host (with whatever major:minor
number that implies), and I want to be able to make it always
appear as /dev/sda in the container (with the correspondingly
different major:minor number).  I'm guessing this is what Serge
was referring to as the 'device' namespace problem.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Libvirt LXC with volume-attach broken ?

2012-07-06 Thread Daniel P. Berrange
On Fri, Jul 06, 2012 at 02:35:14AM -0700, Eric W. Biederman wrote:
 Daniel P. Berrange berra...@redhat.com writes:
  The part of the problem is that I have a /dev/sdf, or even a
  /dev/volgroup00/logvol3 in the host (with whatever major:minor
  number that implies), and I want to be able to make it always
  appear as /dev/sda  in the container (with the correspondingly
  different major:minor number).  I'm guessing this is what Serge
  was refering to as the 'device' namespace problem
 
 Getting the device to always appear with the name /dev/sda is easy.
 
 Where does the need to have a specific device come from?  I would have
 thought by now that hotplug had been around long enough that in general
 user space would not care.
 
 The only case that I know of where keeping the same device number seems
 reasonable is in the case of live migrating an application, in order to
 avoid issues with stat changing for the same file over the transition,
 and I think a synthesized hotplug event could probably handle that case.
 
 Is there another case besides buggy applications that have hard
 coded device numbers that need specific device numbers?

There isn't any particular buggy application we're trying to avoid
here. We're just trying to provide a piece of OpenStack functionality
to LXC in the same way as it's provided to KVM.

With a basic OpenStack instance, you just get the root filesystem
from the image you booted, whose contents are transient (ie thrown
away on shutdown). It is possible to tell OpenStack to attach one
or more block devices to a running instance, which give you some
persistent storage.

The end user API for this lets the host admin specify the device
name that the block device will appear as inside the instance.

eg, with KVM you'd invoke:

 # nova volume-attach myguest  mystoragevol1 /dev/vdb
 # nova volume-attach myguest  mystoragevol2 /dev/vdc

Obviously with KVM this just works, because you have a level of
indirection between host & guest device names via virtio-blk.

The desire is to be able to wire up LXC in a similar way

 # nova volume-attach myguest  mystoragevol1 /dev/sdb
 # nova volume-attach myguest  mystoragevol2 /dev/sdc

So it is really the host admin specifying that they want to provide
the container with a '/dev/sdb' device, regardless of what the actual
device node on the host is (it could be an iSCSI LUN, multipath LUN,
LVM volume, or whatever). So I'm really looking to have the container-visible
device name be independent of the host device name.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Help with libvirt unit-test - get_diagnostics command

2012-07-06 Thread Daniel P. Berrange
On Fri, Jul 06, 2012 at 11:59:51AM +0100, Leander Bessa Beernaert wrote:
 Hello,
 
 I've been working on implementing the diagnostics command for libvirt -
 https://review.openstack.org/#/c/8839/ . Now i need to create the unit test
 for this new operation. I've been looking at the code to try and figure out
 an easy way to replicate this, but i'm a bit lost.
 
 What i need to do is simulate a connection to libvirt, create a fake
 instance with a predefined characteristics, retrieve the virtdomain and
 then verify the results from the get_diagnostics command. I'm more at loss
 as to how exactly do i setup a fake connection a create a fake instance? I
 thought about creating a dummy implementation as i've seen being used
 around the tests, however that doesn't give me any access to real method.
 Do note that I'm relatively new to python world, so there's a lot of things
 i can't grasp yet :s.

As you say, figuring out the libvirt Nova test framework is not entirely
easy, particularly if you're new to Python / Nova. So to help you get
your patch through review, here is one suitable test case you can add to
test_libvirt.py for exercising the get_diagnostics API:

diff --git a/nova/tests/test_libvirt.py b/nova/tests/test_libvirt.py
index eed43b8..f3fa5ff 100644
--- a/nova/tests/test_libvirt.py
+++ b/nova/tests/test_libvirt.py
@@ -1938,6 +1938,91 @@ class LibvirtConnTestCase(test.TestCase):
         got = jsonutils.loads(conn.get_cpu_info())
         self.assertEqual(want, got)
 
+    def test_diagnostic_full(self):
+        xml = """
+                <domain type='kvm'>
+                    <devices>
+                        <disk type='file'>
+                            <source file='filename'/>
+                            <target dev='vda' bus='virtio'/>
+                        </disk>
+                        <disk type='block'>
+                            <source dev='/path/to/dev/1'/>
+                            <target dev='vdb' bus='virtio'/>
+                        </disk>
+                        <interface type='network'>
+                            <mac address='52:54:00:a4:38:38'/>
+                            <source network='default'/>
+                            <target dev='vnet0'/>
+                        </interface>
+                    </devices>
+                </domain>
+            """
+
+        class DiagFakeDomain(FakeVirtDomain):
+
+            def __init__(self):
+                super(DiagFakeDomain, self).__init__(fake_xml=xml)
+
+            def vcpus(self):
+                return ([(0, 1, 1534000L, 0),
+                         (1, 1, 164000L, 0),
+                         (2, 1, 304000L, 0),
+                         (3, 1, 142000L, 0)],
+                        [(True, False),
+                         (True, False),
+                         (True, False),
+                         (True, False)])
+
+            def blockStats(self, path):
+                return (169L, 688640L, 0L, 0L, -1L)
+
+            def interfaceStats(self, path):
+                return (4408L, 82L, 0L, 0L, 0L, 0L, 0L, 0L)
+
+            def memoryStats(self):
+                return {'actual': 220160L, 'rss': 200164L}
+
+            def maxMemory(self):
+                return 280160L
+
+        def fake_lookup_name(name):
+            return DiagFakeDomain()
+
+        self.mox.StubOutWithMock(libvirt_driver.LibvirtDriver, '_conn')
+        libvirt_driver.LibvirtDriver._conn.lookupByName = fake_lookup_name
+
+        conn = libvirt_driver.LibvirtDriver(False)
+        actual = conn.get_diagnostics({"name": "testvirt"})
+        expect = {'cpu0_time': 1534000L,
+                  'cpu1_time': 164000L,
+                  'cpu2_time': 304000L,
+                  'cpu3_time': 142000L,
+                  'vda_read': 688640L,
+                  'vda_read_req': 169L,
+                  'vda_write': 0L,
+                  'vda_write_req': 0L,
+                  'vda_errors': -1L,
+                  'vdb_read': 688640L,
+                  'vdb_read_req': 169L,
+                  'vdb_write': 0L,
+                  'vdb_write_req': 0L,
+                  'vdb_errors': -1L,
+                  'memory': 280160L,
+                  'memory-actual': 220160L,
+                  'memory-rss': 200164L,
+                  'vnet0_rx': 4408L,
+                  'vnet0_rx_drop': 0L,
+                  'vnet0_rx_errors': 0L,
+                  'vnet0_rx_packets': 82L,
+                  'vnet0_tx': 0L,
+                  'vnet0_tx_drop': 0L,
+                  'vnet0_tx_errors': 0L,
+                  'vnet0_tx_packets': 0L,
+                  }
+
+        self.assertEqual(actual, expect)
+
 
 class HostStateTestCase(test.TestCase):


A description of what I'm doing here:

 * First we mock up an XML document that describes our test
   guest, with 2 disks and 1 NIC.

 * We create the DiagFakeDomain() class (a FakeVirtDomain subclass) which
   provides a stub implementation of the various libvirt APIs that the
   get_diagnostics method will call.


[Openstack] Libvirt LXC with volume-attach broken ?

2012-07-05 Thread Daniel P. Berrange
In the Libvirt driver there is special-case code for LXC to deal with
the volume-attach functionality, since there is no block device attach
functionality in libvirt for LXC. The code in question was added in

  commit e40b659d320b3c6894862b87adf1011e31cbf8fc
  Author: Chuck Short chuck.sh...@canonical.com
  Date:   Tue Jan 31 20:53:24 2012 -0500

Add support for LXC volumes.

This introduces volume support for LXC containers in Nova.
The way that this works is that when a device is attached to an
LXC container is that, the xml is parsed to find out which device to
connect to the LXC container, binds the device to the LXC container,
and allow the device through cgroups.

This bug fixes LP: #924601.

Change-Id: I00b41426ae8354b3cd4212655ecb48319a63aa9b
Signed-off-by: Chuck Short chuck.sh...@canonical.com

First a little background

The way LXC works with Nova is that the image file assigned to the instance
eg 

  /var/lib/nova/instances/instance-000e/disk

is exported via qemu-nbd, and then mounted on the host at

  /var/lib/nova/instances/instance-000e/rootfs


When libvirt starts the container it uses that directory as the root
filesystem. libvirt will *also* mount a private /dev, /dev/pts, /proc
and /sys for the container. This is all fine

Now, when using 'nova volume-attach':

  # nova volume-attach 05eb16df-03b8-451b-85c1-b838a8757736 
a5ad1d37-aed0-4bf6-8c6e-c28543cd38ac /dev/sdf

nova will import an iSCSI LUN from the nova volume service, on the compute
node. The kernel will assign it the next free SCSI drive letter, in my
case '/dev/sdc'.

The libvirt nova driver will then do a mknod, using the volume name
passed to 'nova volume-attach'.
eg it will do

  mknod  /var/lib/nova/instances/instance-000e/rootfs/dev/sdf

this is where it has all gone horribly wrong...

  * The iSCSI LUN is completely randomly allocated, and unrelated to the
block device name the user will give to 'nova volume-attach'. So there
is no association between the /dev/sdf in the container and the
/dev/sdc in the host, and you can't expect the caller of 'volume-attach'
to be able to predict what the next assigned LUN will be on the host.

  * The  /var/lib/nova/instances/instance-000e/rootfs/dev/ directory
where nova did the mknod is a completely different filesystem to
the one seen by the container. The /dev in the container is a tmpfs
that is never visible to the host, so a mknod in the host won't
appear to the container.

AFAIK, there is no way to resolve either of these problems given the
current level kernel support for LXC, which is why libvirt has never
implemented block volume attach itself.

Thus I'm wondering how this LXC volume-attach code in Nova has ever
worked, or was tested ? My testing of Nova shows no sign of it working
today. Unless someone can demonstrate a flaw in my logic, I'm inclined
to simply revert this whole commit from Nova.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Libvirt LXC with volume-attach broken ?

2012-07-05 Thread Daniel P. Berrange
On Thu, Jul 05, 2012 at 03:00:26PM +0100, Daniel P. Berrange wrote:
 Now, when using 'nova volume-attach':
 
   # nova volume-attach 05eb16df-03b8-451b-85c1-b838a8757736 
 a5ad1d37-aed0-4bf6-8c6e-c28543cd38ac /dev/sdf
 
 nova will import an iSCSI LUN from the nova volume service, on the compute
 node. The kernel will assign it the next free SCSI drive letter, in my
 case '/dev/sdc'.
 
 The libvirt nova driver will then do a mknod, using the volume name
 passed to 'nova volume-attach'.
 eg it will do
 
   mknod  /var/lib/nova/instances/instance-000e/rootfs/dev/sdf

Oops, I'm slightly wrong here. What it actually does is

  mount --bind /dev/sdc /var/lib/nova/instances/instance-000e/rootfs/dev/sdf

so you get a 'sdf' device, but with the major/minor number of the 'sdc'
device. I can't say I particularly like this approach. Ultimately I
think we need kernel support to make this work correctly. In any
case, even using mount --bind doesn't deal with the fact that the guest's
/dev is not visible from the host.

 this is where it has all gone horribly wrong...
 
   * The iSCSI LUN is completely randomly allocated, and unrelated to the
 block device name the user will give to 'nova volume-attach'. So there
 is no association between the /dev/sdf in the container and the
 /dev/sdc in the host, and you can't expect the caller of 'volume-attach'
 to be able to predict what the next assigned LUN will be on the host.
 
   * The  /var/lib/nova/instances/instance-000e/rootfs/dev/ directory
 where nova did the mknod is a completely different filesystem to
 the one seen by the container. The /dev in the container is a tmpfs
 that is never visible to the host, so a mknod in the host won't
 appear to the container.
 
 AFAIK, there is no way to resolve either of these problems given the
 current level kernel support for LXC, which is why libvirt has never
 implemented block volume attach itself.
 
 Thus I'm wondering how this LXC volume-attach code in Nova has ever
 worked, or was tested ? My testing of Nova shows no sign of it working
 today. Unless someone can demonstrate a flaw in my logic, I'm inclined
 to simply revert this whole commit from Nova.
 

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [CI] Retriggering Jenkins from Gerrit

2012-07-04 Thread Daniel P. Berrange
On Tue, Jul 03, 2012 at 04:48:32PM -0700, James E. Blair wrote:
 Hi,
 
 As mentioned in the thread Jenkins and transient failures, we've had
 an unusually high number of transient failures in Jenkins lately.  We've
 done several things in response to that:
 
 1) Monty identified a problem with our pypi mirror which was the cause
 of many of the errors, and corrected it.
 
 2) Monty is continuing to work on the single dependency list which
 should allow us to switch to using our local pypi mirror exclusively,
 further reducing transient network errors, as well as significantly
 speeding up test run time.
 
 3) Several transient errors were caused by failed fetches from Gerrit.
 While consulting with the Gerrit authors about tuning, they discovered a
 bug in Gerrit where a 5 minute timeout was being interpreted as a 5
 millisecond timeout.  I have updated our gerrit configuration to work
 around that.
 
 4) Clark Boylan implemented automatic retrying for the git fetches that
 we use with Jenkins.
 
 
 I hope that we'll get to the point where we have almost no transient
 network errors when testing, but we know it will never be perfect, so at
 the CI meeting we discussed how best to implement retriggering with
 Zuul.  Clark added a comment filter that will retrigger Jenkins if you
 leave a comment that matches a regex.
 
 We currently run two kinds of jobs in Jenkins, the check job and the
 gate job.  The check jobs run immediately when a patchset is uploaded
 and vote +/-1.  The gate jobs run on approval, queue up across all
 projects and vote +/-2 (if they fail, jobs behind them in the gate
 pipeline may need to run again).
 
 
 To retrigger the initial Jenkins check job, just leave a comment on the
 review in Gerrit with only the text recheck.
 
 To retrigger the Jenkins merge gate job, leave a comment with only the
 text reverify, or if you are a core reviewer, just leave another
 Approved vote.  (Don't leave a reverify comment if the change hasn't
 been approved yet, it still won't be merged and will slow Jenkins down.)

Thanks to the team who worked on resolving the problems from last week
and also adding this retrigger support. The latter will bring a nice
improvement to productivity should there be problems in the future.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenStack G naming poll

2012-07-04 Thread Daniel P. Berrange
On Wed, Jul 04, 2012 at 10:02:46AM +0200, Thierry Carrez wrote:
 Brian Waldon wrote:
  On Jul 3, 2012, at 5:21 PM, Monty Taylor wrote:
  At the g summit, we'd tell everyone where the next summit is:
  At the g summit, we'd vote and announce the name of h
  We wouldn't have to spend half the cycle saying h, or whatever when we
  mean we're going to defer that crazy idea until next time
  I wouldn't have had to use the letter g by itself twice just above here.
  
  Fantastic idea. 
  
  I haven't been involved in choosing the next location, so I'm not sure how 
  hard it would be to choose it that far in advance. Maybe somebody can 
  comment on how doable this is?
 
 It's definitely doable (and desirable). Actually that's the plan for the
 next one: close the date and location deal before the summit so that we
 can announce the next one during the summit itself.
 
 So we would definitely vote in person (like we did for Cactus at the
 Bexar summit), so much funnier than a Launchpad poll.

I'd be against doing a vote in person at the summit on the basis that
this would exclude a large portion of the community who are unable to
attend :-( An online poll is fully inclusive of the OpenStack community.

I think the same point applies more broadly to any important decisions
related to the project, for which the community is ultimately responsible.
Meeting people face-to-face for technical presentations / discussions
/ debates is a productive way of examining difficult issues, but those
who are unfortunate enough to not attend summits can feel very isolated.
This leads to people feeling there is a 2-tier community. To counter this,
IMHO, it is important to communicate important decisions resulting from
f-2-f meetings to the community & provide a means for them to add any
relevant feedback & influence decisions.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [RFC] Add more host checks to the compute filter

2012-07-04 Thread Daniel P. Berrange
On Tue, Jul 03, 2012 at 04:07:36PM -0600, Jim Fehlig wrote:
 Hi Daniel,
 
 Attached is a patch that implements filtering on (architecture,
 hypervisor_type, vm_mode) tuple as was discussed in this previous patch
 
 https://review.openstack.org/#/c/9110/
 
 CC'ing Chuck since he is the author of the ArchFilter patch.
 
 Feedback appreciated before sending this off to gerrit.

AFAICT, this only allows you to tag each node with a single triplet
(arch, hv_type, vm_mode). We in fact need to expose a list of triplets.
Basically if you look at the libvirt capabilities XML virsh capabilities
you'll see a list of guest elements in there. Each one of those guest
definitions corresponds to a triplet, so we need to expose all of them.

For example a KVM x86-64 node can provide

  (x86_64, kvm, hvm)
  (i686, kvm, hvm)

A XenAPI x86-64 node can provide

  (x86_64, xen, hvm)
  (i686, xen, hvm)
  (x86_64, xen, xen)
  (i686, xen, xen)

A QEMU node can provide

  (x86_64, qemu, hvm)
  (i686, qemu, hvm)
  (ppc, qemu, hvm)
  (ppc64, qemu, hvm)
  (sparc64, qemu, hvm)
  (arm7, qemu, hvm)

And many more arbitrary combinations
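
As a rough illustration of where those triplets come from, the <guest>
elements of the capabilities XML can be walked like this (a sketch only,
not the proposed Nova filter code):

  import xml.etree.ElementTree as etree
  import libvirt

  def guest_triplets(uri="qemu:///system"):
      conn = libvirt.open(uri)
      caps = etree.fromstring(conn.getCapabilities())
      triplets = set()
      for guest in caps.findall("guest"):
          vm_mode = guest.findtext("os_type")      # 'hvm', 'xen', 'exe', ...
          arch = guest.find("arch")
          for dom in arch.findall("domain"):
              # hv_type comes from the <domain type='...'> attribute
              triplets.add((arch.get("name"), dom.get("type"), vm_mode))
      return sorted(triplets)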

Meanwhile, bearing in mind the availability of paravirt_ops in all
modern Linux hosts, a VM image can be capable of being run on multiple
different triplets.

Rather than tagging VM images with multiple triplets though, we probably
want to record a single (arch, hv_type, vm_mode) triplet against instance
types. If the user wants to use the same image with multiple different
triplets, then they can just choose a different instance type.

I'd say it is worth mocking up the changes to the libvirt driver.py
so as to validate the design of the scheduler filters. When the arch
filter was added, the list of guest arches was added in the get_cpu_info()
method of the libvirt driver.py. I tend to think that this was the wrong
place to add it, since get_cpu_info() is referring to the host CPU,
whereas these triplets are related to guest CPUs, which are not
actually required to match host CPUs.  So I'd probably remove the
permitted_instance_types data from libvirt's get_cpu_info() method
and enhance the libvirt.driver.HostState class' update_status()
method to return the list of (arch, hv_type, vm_mode) triplets.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Time for a UK Openstack User Group meeting ?

2012-07-04 Thread Daniel P. Berrange
On Wed, Jul 04, 2012 at 04:38:28PM +0100, Day, Phil wrote:
 Hi All,
 
 I'm thinking it's about time we had an OpenStack User Group meeting
 in the UK , and would be interested in hearing from anyone interested
 in attending, presenting, helping to organise, etc.

I can do presentations about libvirt with a focus on KVM/LXC, and
SELinux / security / sandboxing. I could probably manage something
semi-intelligent about libvirt integration with Nova too, depending
on intended audience interests

 London would seem the obvious choice, but we could also host here in
 HP Bristol if that works for people.

I work in London, so if something was organized in the area & I was free,
I'd hope to attend.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Daniel P. Berrange
On Mon, Jul 02, 2012 at 12:09:55PM -0700, Johannes Erdfelt wrote:
 On Mon, Jul 02, 2012, Daniel P. Berrange berra...@redhat.com wrote:
  On Mon, Jul 02, 2012 at 08:17:08AM -0700, Johannes Erdfelt wrote:
   Not using /tmp for large files is a good reason for practical reasons
   (distributions moving to ramfs for /tmp).
   
   But please don't start throwing around warnings that all uses of /tmp
   are a security risk without backing that up.
  
  I stand by my point that in general usage of /tmp is a risk because
  for every experienced developer who can get things right, there are
  hordes of others who get it wrong & eventually one such bug will
  slip through the review net. Since there are rarely compelling reasons
  for the use of /tmp, avoiding it by default is a good defensive choice.
 
 So your argument isn't that using /tmp is inherently insecure, it's that
 using something not shared is safer?
 
 It seems to me that we're just as likely to have a review slip through
 that uses /tmp insecurely as a review slipping through that uses /tmp at
 all.

We already run a bunch of PEP8 checks across the code on every
commit. It ought to be within the realm of practicality to add a
rule that blacklists any use of mkdtemp() which does not pass
an explicit directory. Most places in Nova don't actually use
it directly, but instead call nova.utils.tempdir() which could
again be made to default to '/var/lib/nova/tmp' or equivalent.
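
As an illustration, the helper itself can be pinned to a private scratch
area so callers cannot fall back to the shared /tmp by accident; a minimal
sketch (the default path is an assumption, not Nova's current behaviour,
and the directory is assumed to already exist):

  import contextlib
  import shutil
  import tempfile

  @contextlib.contextmanager
  def tempdir(base_dir="/var/lib/nova/tmp", **kwargs):
      path = tempfile.mkdtemp(dir=base_dir, **kwargs)
      try:
          yield path
      finally:
          shutil.rmtree(path, ignore_errors=True)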


 Ultimately, the most compelling reason for using /tmp is that it's easy,
 it's standard and developers have been trained to use it for a long
 time.

These are all reasons against use of /tmp - precisely because it is
so convenient/easy, developers use it without ever thinking about the
possible consequences of accidental misuse.

 There is no well-defined alternative, either in LSB or in practice (or
 in either that blog post or your email).

It is fairly common for apps to use /var/cache/appname or
/var/lib/appname.

 Since we can't trust developers to use /tmp securely, or avoid using
 /tmp at all, then why not use filesystem namespaces to setup a process
 specific non-shared /tmp?

That is possible, but I simply disagree with your point that we
can't stop using /tmp. It is entirely possible to stop using it
IMHO.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-03 Thread Daniel P. Berrange
On Tue, Jul 03, 2012 at 11:01:11AM +0100, John Garbutt wrote:
 Sorry to go back in the tread, but just wanted to ask a possibly dumb 
 question.
 
  Daniel P. Berrange wrote:
  In the particular case of the qemu-img command described in earlier in this
  thread, I'm not convinced we need a new option. Instead of using /tmp
  when extracting a snapshot from an existing disk image, it could just use 
  the
  path where the source image already resides. ie the existing
  FLAGS.instances_path directory, which can be assumed to be a suitably large
  persistent data store.
 
 Would that not be a bad idea for those having FLAGS.instances_path on
 a shared file system, like gluster?

Well it would mean more I/O to that filesystem, yes. Whether this is bad or
not depends on whether there is an alternative. If users of gluster also
have a large local scratch space area, then we could make it possible to
use that; if they don't have a local scratch space, then this is a
reasonable usage.

This would suggest there's a potential use case for a new config parameter
FLAGS.local_scratch_path, whose default value matches FLAGS.instances_path
if not set.
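
Expressed as a config option, that suggestion might look roughly like this
(the option name, default and cfg module path are assumptions, not an
existing Nova flag):

  from nova.openstack.common import cfg

  local_scratch_opt = cfg.StrOpt(
      'local_scratch_path',
      default='$instances_path',    # fall back to the instances directory
      help='Directory used for large scratch files (e.g. qemu-img output)')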

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-07-02 Thread Daniel P. Berrange
On Fri, Jun 29, 2012 at 03:27:25PM -0500, Andrew Bogott wrote:
 On 6/27/12 8:40 AM, Daniel P. Berrange wrote:
 On Wed, Jun 27, 2012 at 03:24:21PM +0200, Vincent Untz wrote:
 Hi,
 
 
 It'd be really great if we could first improve Gerrit to handle the
 patch series workflow in a better way. Without such a change, pushing
 patch series to Gerrit is really no fun for anyone :/
 
 
 Yep, no argument that Gerrit could do with some improvements, but having
 submitted a number of non-trivial patch series to Nova, I don't think
 current Gerrit UI is a complete blocker to adoption. It is not ideal,
 but it isn't too painful if you're aware of what to look for. I think
 the main problem is that since the patch dependancies are not obvious
 in the UI, reviewers tend to miss the fact that they're reviewing a
 patch that's part of a series.
 
 I agree that patchsets are better than monolithic patches.  Today,
 though, I am working on a 3-patch set and the process is driving me
 crazy.
 
 a)  Any time Jenkins has a hiccup, I have to resubmit the entire
 patchset.  This obscures any reviews or votes that might be attached
 to other patches in the set.

Yeah, this is a major PITA. We need an easy way for patch authors
to retrigger Jenkins without re-submitting each time.

 b) Similarly, any time I change a single patch in the set, I have to
 resubmit the whole set, which causes review history to be obscured,
 even for those patches which have not changed at all.

Gerrit will look at the GIT changeset hashes and notice if only
2 out of the 5 patches have actually changed. The trouble is that
'git review' with no args will always rebase your patch series
against master before pushing. So even if you only modified the
last patch in your series, this will make Gerrit see all the patches
as new :-(

I'm getting into the habit of always running 'git review --no-rebase'
to get around this behaviour.

 Case b) would be entirely solved via a fix to  this:
 http://code.google.com/p/gerrit/issues/detail?id=71.  That would
 also help with a) but not resolve it entirely... the best solution
 to a) would be a 'retrigger' button in Jenkins or a 'prompt Jenkins
 to re-review' button in Gerrit.  The fact that people (including me)
 are submitting trivial edits to patches only in order to nudge
 Jenkins is pretty stupid.

I must say that this has been driving me mad this last week. IIUC, only
members of the core review team have permission to retrigger Jenkins,
but I feel it is putting too much burden on them to have to track every
patch with a bogus Jenkins failure. If we can't get Jenkins to be more
reliable, then can we see about letting patch submitters retrigger
Jenkins builds for their own patches?


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-02 Thread Daniel P. Berrange
On Sat, Jun 30, 2012 at 09:25:10PM -0400, Lars Kellogg-Stedman wrote:
  So, maybe setting any of this environment variables for nova-compute
  to desired value sholuld help.
 
 Yeah, I was expecting that.
 
 Given that this could easily take out a compute host I'd like to see
 it get an explicit configuration value (or default to instance_dir, I
 guess).

In Fedora 18, /tmp is going to be a RAM filesystem, so we absolutely
must not create any sizeable files on /tmp.

In addition from a security POV, we must aim to *never* use /tmp for
anything at all

  http://danwalsh.livejournal.com/11467.html

It would be good to do a thorough audit of the code to make sure
nothing is using the tmpfile functions without explicitly specifying
a directory path that is private to the OpenStack daemon in question.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Jenkins and transient failures

2012-07-02 Thread Daniel P. Berrange
On Sun, Jul 01, 2012 at 08:40:36AM -0700, James E. Blair wrote:
[snip]

 So with all that background, I think we should discuss the following at
 the CI team meeting on Tuesday:

[snip]

 3) Decide on a course of action to mitigate failures from transient
 gerrit errors (but continue to work on eliminating them in the first
 place).
 
 4) Decide how to implement retriggering with Zuul.

Can you expand on what you mean by this 4th point ? Is this a way to
allow individual patch submitters to re-trigger builds on their own
patches ?

IIUC, currently only core reviewers can directly retrigger builds. It
seems patch submitters are working around this restriction by simply
doing no-op rebases & re-uploading their patches again and again until
Jenkins passes.  If there's an easy way to allow re-triggering of failed
builds by any reviewer, it seems that would mitigate the pain of
these failures, because we won't have to rely on a small pool of
people to notice bogus failures.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Nova] Issues with run_tests.sh, no tests are run when import libvirt is present

2012-07-02 Thread Daniel P. Berrange
On Mon, Jul 02, 2012 at 01:43:31PM +0100, Leander Bessa Beernaert wrote:
 So, if no system packages can be imported, how do you test the connection
 class for the libvirt driver?
 
 How does that particular test case wrap around the fact that it requires
 the libvirt module? The only thing i could find are these lines of code in
 the driver's __init__ method. Do these somehow detect if this is a unit
 test environment and import the fakelibvirt driver instead? I'm no expert
 in python so i'm not sure what's happening there :s
 
  global libvirt
  if libvirt is None:
  libvirt = __import__('libvirt')

If you have installed all the necessary python packages on your
local host, then it is entirely possible to run the Nova test
suites without using virtualenv. You just need to pass the '-N'
arg to the run_tests.sh script, eg on my Fedora 17 host, I can
run

   ./run_tests.sh -N nova.tests.test_libvirt
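
On the fake-connection part of the question: because the driver only
imports libvirt lazily (the __import__('libvirt') shown above), a test can
install a stand-in module under that name first. A bare-bones sketch of
the idea (names are illustrative, not Nova's actual fakelibvirt):

  import sys
  import types

  fake = types.ModuleType("libvirt")
  fake.openAuth = lambda uri, auth, flags=0: object()   # stub connection factory
  sys.modules["libvirt"] = fake    # __import__('libvirt') now returns the fake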

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-02 Thread Daniel P. Berrange
On Mon, Jul 02, 2012 at 10:24:02AM -0700, Matt Joyce wrote:
 I like the idea of making this a flagfile option.

In the particular case of the qemu-img command described
earlier in this thread, I'm not convinced we need a
new option. Instead of using /tmp when extracting a snapshot
from an existing disk image, it could just use the path
where the source image already resides. ie the existing
FLAGS.instances_path directory, which can be assumed to
be a suitably large persistent data store.

Other uses of temporary files should be analysed on a case-by-case
basis to figure out a suitable storage location.
This might perhaps identify a need for a generic temp
file location for nova, such as /var/run/nova/ or
/var/cache/nova or both (depending on use case).

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How do I stop image-create from using /tmp?

2012-07-02 Thread Daniel P. Berrange
On Mon, Jul 02, 2012 at 08:17:08AM -0700, Johannes Erdfelt wrote:
 On Mon, Jul 02, 2012, Daniel P. Berrange berra...@redhat.com wrote:
  In Fedora 18, /tmp is going to be a RAM filesystem, so we absolutely
  must not create any sizeable files on /tmp.
  
  In addition from a security POV, we must aim to *never* use /tmp for
  anything at all
  
http://danwalsh.livejournal.com/11467.html
 
 I take exception to that. Saying *never* is incorrect.
 
 You (and that blog post) say that we should *never* use /tmp for
 security reasons, but don't go on to explain why using mkstemp or
 mkdtemp is a security problem.
 
 Even the glibc documentation says they are safe wrt to security issues:
 
 http://www.gnu.org/software/libc/manual/html_node/Temporary-Files.html

NB, I never said that mkstemp/mkdtemp are unsafe. I said that
in general usage of /tmp is a bad idea. It is possible to use
/tmp safely, but historical security records across the entire
software industry show that developers routinely screw up with
their use of /tmp. Since /tmp is a globally accessible directory,
the consequences of such screw ups can be very severe. The globally
writable nature of /tmp also makes it hard for mandatory access
control systems like SELinux / AppArmour to ensure that a daemon's
temporary files are protected against these screw ups.

As the blog post says, /tmp is a reasonable place for end users
to have temporary files. Daemons needing to create
their own private temporary files should use a private directory
location accessible only to themselves, so that in the event of
a screw up the damage is more limited. There are very few
compelling reasons why something like Nova should ever need to
use a globally writable directory for its temp files / directories.

  It would be good to do a thorough audit of the code to make sure
  nothing is using the tmpfile functions without explicitly specifying
  a directory path that is private to the OpenStack daemon in question.
 
 Not using /tmp for large files is a good reason for practical reasons
 (distributions moving to ramfs for /tmp).
 
 But please don't start throwing around warnings that all uses of /tmp
 are a security risk without backing that up.

I stand by my point that in general usage of /tmp is a risk because
for every experienced developer who can get things right, there are
hordes of others who get it wrong & eventually one such bug will
slip through the review net. Since there are rarely compelling reasons
for the use of /tmp, avoiding it by default is a good defensive choice.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-06-29 Thread Daniel P. Berrange
On Fri, Jun 29, 2012 at 04:57:06AM +, Vaze, Mandar wrote:
  I particularly hate the single-line Fixes bug 1234566-type commit 
  messages.
 
 I assume your concern was regarding commits where Fixes bug 1234566 is the 
 first and ONLY line.
 
 Fixes bug 1234566 comes from Wiki. 
 
 Plus there is restriction on how long the first line of the
 commit message can be. Not everyone is able to describe their
 change in one short sentence.

At the very least it is always possible to describe what area
of the code is being changed, so that you alert the reviewers
who are familiar with that area.

 So typically *I* put Fixes bug 1234567 on the *first* line followed by 
 additional lines describing the change.

IMHO that is one of the most unhelpful things you can do. If you are
a reviewer scanning through your email for patches to review and you
see a subject line "Fixes bug 123456", you are given no useful information.
Few people will bother to go & find out what 'bug 123456' is, when
there are plenty of other patches pending with useful subject lines.

It is also pretty useless for people skimming through the online
patch summaries in GIT history.  As in my first mail in this thread,
the bug number should be just a line item at the end of the commit
message, and the commit message's first line should be a complete
self-contained description.

 http://wiki.openstack.org/GerritWorkflow#Committing_Changes should be updated 
 when this discussion is concluded.

Yep, totally agreed.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HVM + Xen Hypervisor via libvirt possible?

2012-06-28 Thread Daniel P. Berrange
On Wed, Jun 27, 2012 at 02:47:59PM -0600, Jim Fehlig wrote:
 Daniel P. Berrange wrote:
  On Fri, Jun 22, 2012 at 12:17:11AM +0800, Huang Zhiteng wrote:

  Of course it is possible.  What kind of issue did you run into?
 
  On Thu, Jun 21, 2012 at 5:52 PM, Wang Li fox...@gmail.com wrote:
  
  hi,all
 
  I need to run virtual machines on Xen Hypervisor in HVM mode,
  is it possible when using libvirt?

 
  Actually, this is not currently possible. For reasons I don't
  know, the libvirt driver currently hardcodes use of paravirtualized
  guests when connected to Xen hosts. It does not allow use of HVM
  guests.  There's no particularly good technical reason why it can't
  be made to work.
 
 Right.  I've been working on this in my few spare cycles and hope to
 post some patches soon.
 
   There'd need to be a way to tag instance types
  with HVM vs paravirt, in addition to their architecture.
 
 I was hoping to just use the vm_mode image property that the XenServer
 folks use.  See option 2 in this mail
 
 https://lists.launchpad.net/openstack/msg11507.html
 
   The
  libvirt driver would have to expose whether each host supports
  paravirt or HVM or both. The scheduler would then have to take
  this into account when placing guests.

 
 I posted an RFC patch here that filters hosts based on the vm_mode image
 property and the additional_compute_capabilities flag.  I haven't
 received any comments, so should probably just push this to gerrit.

I'll happily review any patches you send to Gerrit for this.


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-06-28 Thread Daniel P. Berrange
On Thu, Jun 28, 2012 at 12:01:10PM +0200, Thierry Carrez wrote:
 Daniel P. Berrange wrote:
  [...]
  In other words, when reviewing a change in Gerrit, do not simply look at
  the correctness of the code. Review the commit message itself and request
  improvements to its content. Look out for commits which are mixing multiple
  logical changes and require the submitter to split them into separate 
  commits.
  Ensure whitespace changes are not mixed in with functional changes. Ensure
  no-op code refactoring is done separately from functional changes. And so
  on.
  [...]
 
 Nice work, and agreed on all points ! I particularly hate the
 single-line Fixes bug 1234566-type commit messages.
 
 Is there a way a concise version of this advice could find its way into
 HACKING.rst ? And/Or into http://wiki.openstack.org/ReviewChecklist ?

Sure, MarkMc suggested to me that I put this doc up on the wiki somewhere.
I'll do that and then submit a concise version for HACKING.rst and
the ReviewChecklist page, with a cross-reference to the full thing.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LibVirt Error

2012-06-28 Thread Daniel P. Berrange
On Thu, Jun 28, 2012 at 04:26:15PM +0530, Trinath Somanchi wrote:
 2012-06-28 16:24:00 TRACE nova.compute.manager [instance:
 7741f67f-ad78-4777-a5a0-6636eb8b460e] libvirtError: Unable to read from
 monitor: Connection reset by peer

This looks like the interesting error messages from that huge log. What
this is saying is that libvirt was talking to QEMU over the monitor
socket when the socket closed unexpectedly. This means that
QEMU has quit, or more likely crashed.

There could be a number of reasons for this. As a first step try and
find the /var/log/libvirt/qemu/$GUESTNAME.log file and see if there
are any messages from QEMU. If your host has any MAC system like
SELinux or AppArmor, temporarily try switching it into
permissive mode to see if that fixes things.
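
As a concrete illustration of that first step, here is a minimal Python
sketch that just tails the per-guest QEMU log libvirt keeps (the guest
name below is only a placeholder):

  import os

  def qemu_log_tail(guest_name, lines=20):
      # libvirt writes one log file per guest under /var/log/libvirt/qemu
      path = "/var/log/libvirt/qemu/%s.log" % guest_name
      if not os.path.exists(path):
          return "no QEMU log found at %s" % path
      with open(path) as f:
          return "".join(f.readlines()[-lines:])

  # print(qemu_log_tail("instance-00000001"))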

Also see if there are newer QEMU packages available from your distro
vendor.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Jenkins vs SmokeStack tests Gerrit merge blockers

2012-06-28 Thread Daniel P. Berrange
Today we face a situation where Nova GIT master fails to pass all
the libvirt test cases. This regression was accidentally introduced
by the following changeset

   https://review.openstack.org/#/c/8778/

If you look at the history of that, the first SmokeStack test run
failed with some (presumably) transient errors, and added negative
karma to the change against patchset 2. If it were not for this
transient failure, it should have shown the regression in the
libvirt test case. The libvirt test case in question was one that
is skipped, unless libvirt is actually present on the host running
the tests. SmokeStack had made sure the tests would run on such a
host.

There were then further patchsets uploaded, and patchset 4 was
approved for merge. Jenkins ran its gate jobs and these all passed
successfully. I am told that Jenkins will actually run the unittests
that are included in Nova, so I would have expected it to see the
flawed libvirt test case, but it didn't. I presume, therefore, that
Jenkins is not running on a libvirt enabled host.

The end result was that the broken changeset was merged to master,
which in turn means any other developers submitting changes
touching the libvirt area will get test failures reported that
are not actually their own fault.

This leaves me with the following questions...

 1. Why was the recorded failure from SmokeStack not considered
to be a blocker for the merge of the commit by Gerrit or
Jenkins or any of the reviewers ?

 2. Why did SmokeStack not get re-triggered for the later patch
set revisions, before it was merged ?

 3. Why did Jenkins not ensure that the tests were run on a libvirt
enabled host ?


Obviously this was all made worse by the transient problems we've had
with the test suite infrastructure these past 2 days, but regardless
it seems like we have a gap in our merge approval procedures here.

IMHO, either SmokeStack needs to be made compulsory, or Jenkins needs
to ensure tests are run on suitable hosts like SmokeStack does, or
both.

Regards,
Daniel

-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Jenkins vs SmokeStack tests Gerrit merge blockers

2012-06-28 Thread Daniel P. Berrange
On Thu, Jun 28, 2012 at 08:13:28AM -0700, Monty Taylor wrote:
 On 06/28/2012 07:32 AM, Daniel P. Berrange wrote:
  This leaves me with the following questions...
  
   1. Why was the recorded failure from SmokeStack not considered
  to be a blocker for the merge of the commit by Gerrit or
  Jenkins or any of the reviewers ?
 
   2. Why did SmokeStack not get re-triggered for the later patch
  set revisions, before it was merged ?
 
 The answer to 1 and 2 is largely the same - SmokeStack is a community
 contributed resources and is not managed by the CI team. Dan Prince does
 a great job with it, but it's not a resource that we have the ability to
 fix should it start messing up, so we have not granted it the
 permissions to file blocking votes.
 
 The tests that smokestack runs could all be written such that they are
 run by jenkins. The repos that run the jenkins tests are all in git and
 managed by openstack's gerrit. If there are testing profiles that it
 runs that we as a community value and want to see part of the gate,
 anyone is welcome to port them.

Ok, this makes sense to me now. I had assumed SmokeStack was a core
part of the infrastructure.

   3. Why did Jenkins not ensure that the tests were run on a libvirt
  enabled host ?
 
 This is a different, and slightly more complex. We run tests in
 virtualenvs so that the process used to test the code can be
 consistently duplicated by all of the developers in the project. This is
 the reason that we no longer do ubuntu package creation as part of the
 gate - turns out that's really hard for a developer running on OSX to do
 locally on their laptop - and if Jenkins reports an blocking error in a
 patch, we want a developer to be able to reproduce the problem locally
 so that they can have a chance at fixing it.
 
 Problem arise in paradise though. libvirt being one of them. It's not
 possible to install libvirt into a virtualenv, because it's a swig-based
 module built as part of the libvirt source itself. One of the solutions
 to this is to allow the testing virtual environments to use packages
 installed at the system level. We suggested this a little while ago, but
 this was rejected by the nova team who valued the benefit of having a
 restricted test run so that we know we've got all of the depends
 properly specified.
 
 To that end, after chatting with Brian Waldon, I put this up as a
 possible next try:
 
 https://review.openstack.org/#/c/8949/
 
 Which adds an additional testing environment that has system software
 enabled and also installs additional optional things. With that
 environment, we should be able to run a jenkins gate that tests things
 with full libvirt, and also tests the mysql upgrade paths, without
 screwing our fine friends who run OSX.
 
 Fundamentally though - we're at a point of trying to have our cake and
 eat it too. Either we want comprehensive testing of all of the unit
 tests, or we want to be careful about not making the test environment to
 hard for a developer to exactly mimic.
 
 I'm obviously on the side of having us have gating tests that some devs
 might not be able to do on their laptops - such as  running the libvirt
 tests properly. We're working on cloud software - worst case scenario if
 there's an intractable problem, as dev can always spin up an ubuntu
 image somewhere.

I think I agree with you, since in practice I believe the additional
requirements on developers are not unreasonable in general. Taking the
libvirt example (though it applies to other examples like MySQL too)...

If a developer is submitting changes that touches a part of OpenStack
unrelated to the virt drivers, then there's low likelihood that they'll
cause libvirt test failures. This means they won't need to have libvirt
available themselves in the common case, and thus there's no new onerous
requirement placed on them. A similar scenario occurs if they are touching
non-libvirt drivers (eg XenAPI, VMWare).

Only if they touch the libvirt driver itself, or things that it derives
/from/ (eg the base virt driver code), are they likely to need to run
the tests locally to troubleshoot. In such a case, it is not unreasonable
that they be prepared to set up local installs for troubleshooting purposes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-06-28 Thread Daniel P. Berrange
On Thu, Jun 28, 2012 at 09:21:20AM -0700, Johannes Erdfelt wrote:
 First off, I wanted to say I think these are a great set of
 recommendations.
 
 On Wed, Jun 27, 2012, Daniel P. Berrange berra...@redhat.com wrote:
  Fixes: bug #1003373
  Implements: blueprint libvirt-xml-cpu-model
  Change-Id: I4946a16d27f712ae2adf8441ce78e6c0bb0bb657
  Signed-off-by: Daniel P. Berrange berra...@redhat.com
 
  As well as the 'Signed-off-by' tag, there are various other ad-hoc
  tags that can be used to credit other people involved in a patch
  who aren't the author.
 
 What is the Signed-off-by tag used for?
 
 Your examples have yourself, but isn't that kind of implied by
 submitting the patch for review in the first place?

Yes, you are technically correct in this respect.

It is an idiom originating from the Linux Kernel community, which is
auto-added by GIT if you pass the '-s' arg to 'git commit'. Basically it
is a statement that you have read & are complying with the project's
contributor guidelines (eg Developer's Certificate of Origin[1] in
the kernel), or a more formal contributor license agreement such as
that used by OpenStack.

OpenStack obviously has a formal CLA which all contributors *must* agree
with prior to submitting patches, which serves the same purpose. Thus
using a Signed-off-by: line is pretty much redundant for any contributions
to OpenStack, since you must have signed the OpenStack CLA before you
can even get access to post patches to Gerrit.

The only case where I see that it might be considered relevant is if
the person submitting the patch to OpenStack is not the same as the
person who wrote the patch. For example, if someone in Red Hat's QA
team (who isn't an OpenStack contributor) writes a patch for OpenStack
& gives it to me, then they'd typically include their own email addr
in a 'Signed-off-by' tag to indicate that they are the author & they
understand the contribution requirements of OpenStack. This indicates
to me that I can trust their patch & thus I'd be happy to add my own
email 'Signed-off-by' line & submit it to OpenStack in my role as
someone who has agreed to the formal CLA.

Since this tagging is a standard feature of GIT, it is quite typical
for people to add Signed-off-by tags on all their commits, to any
project, regardless of whether the project actually mandates this
as their submission policy. I certainly just do it out of habit
for all projects I contribute to.

So in summary, you are perfectly ok to just ignore the whole
Signed-off-by concept in OpenStack, given the formal CLA required
for contributors already.

Regards,
Daniel

[1] https://www.kernel.org/doc/Documentation/SubmittingPatches
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-06-27 Thread Daniel P. Berrange
   messages, gitk viewer annotations, merge commit messages and many
   more places where space is at a premium. As well as summarising
   the change itself, it should take care to detail what part of the
   code is affected. eg if it affects the libvirt driver, mention
   'libvirt' somewhere in the first line.

 * Describe any limitations of the current code

   If the code being changed still has future scope for improvements, or
   any known limitations, then mention these in the commit message. This
   demonstrates to the reviewer that the broader picture has been considered
   and what tradeoffs have been made in terms of short term goals vs long
   term wishes.

The basic rule to follow is

   The commit message must contain all the information required to fully
   understand & review the patch for correctness. Less is /not/ more.
   More is more.


Including external references
------------------------------

The commit message is primarily targeted towards human interpretation,
but there is always some metadata provided for machine use. In the case
of OpenStack this includes at least the 'Change-id', but also optional
bug ID references and blueprint name references. Although GIT records
the author & committer of a patch, it is common practice across many
open source projects to include a Signed-off-by tag. Though OpenStack
does not mandate its use, the latter is still useful to include if a patch
is a combination of work by many different developers, since GIT only
records a single author. All machine-targeted metadata, however, is
of secondary consequence to humans and thus it should all be grouped
together at the end of the commit message. For example:


Switch libvirt get_cpu_info method over to use config APIs

The get_cpu_info method in the libvirt driver currently uses
XPath queries to extract information from the capabilities
XML document. Switch this over to use the new config class
LibvirtConfigCaps. Also provide a test case to validate
the data being returned

Fixes: bug #1003373
Implements: blueprint libvirt-xml-cpu-model
Change-Id: I4946a16d27f712ae2adf8441ce78e6c0bb0bb657
Signed-off-by: Daniel P. Berrange berra...@redhat.com

As well as the 'Signed-off-by' tag, there are various other ad-hoc
tags that can be used to credit other people involved in a patch
who aren't the author.

 - 'Reviewed-by: ...some name.. ...email...'

   Although Gerrit tracks formal review by project members, some
   patches have been reviewed by people outside the community
   prior to submission

 - 'Suggested-by: ...some name.. ...email...'

   If a person other than the patch author suggested the code
   enhancement / influenced the design

 - 'Reported-by:  ...some name.. ...email...'

   If a person reported the bug / problem being fixed but did
   not otherwise file a launchpad bug report.

...invent other tags as relevant to credit other contributions


Some examples of bad practice
-----------------------------

Now for some illustrations from Nova history, again with authors' names
removed since no one person is to blame for these.

Example 1:

commit 468e64d019f51d364afb30b0eed2ad09483e0b98
Author: [removed]
Date:   Mon Jun 18 16:07:37 2012 -0400

  Fix missing import in compute/utils.py

  Fixes bug 1014829

Problem: this does not mention what imports were missing and why
they were needed. This info was actually in the bug tracker, and
should have been copied into the commit message, so it provides a
self-contained description. eg:

 Add missing import of 'exception' in compute/utils.py

  nova/compute/utils.py makes a reference to exception.NotFound,
  however exception has not been imported.

Example 2:

   commit 2020fba6731634319a0d541168fbf45138825357
   Author: [removed]
   Date:   Fri Jun 15 11:12:45 2012 -0600

Present correct ec2id format for volumes and snaps

Fixes bug 1013765
* Add template argument to ec2utils.id_to_ec2_id() calls

Change-Id: I5e574f8e60d091ef8862ad814e2c8ab993daa366


Problem: this does not mention what the current (broken) format
is, nor what the new fixed format is. Again this info was available
in the bug tracker and should have been included in the commit message.
Furthermore, this bug was fixing a regression caused by an earlier
change, but there is no mention of what the earlier change was.
eg

Present correct ec2id format for volumes and snaps

During the volume uuid migration, done by changeset XXX,
ec2 id formats for volumes and snapshots was dropped and is
now using the default instance format (i-x). These need
to be changed back to vol-xxx and snap-.

Adds a template argument to ec2utils.id_to_ec2_id() calls

Fixes bug 1013765


Example 3:

  commit f28731c1941e57b776b519783b0337e52e1484ab
  Author: [removed]
  Date:   Wed Jun 13 10:11:04 2012 -0400

Add libvirt min version check.

Fixes LP Bug #1012689.

Change-Id

Re: [Openstack] RFC: Thoughts on improving OpenStack GIT commit practice/history

2012-06-27 Thread Daniel P. Berrange
On Wed, Jun 27, 2012 at 03:24:21PM +0200, Vincent Untz wrote:
 Hi,
 
 As a recent contributor to OpenStack, but with experience in other
 projects, I think moving in the directions you document would be good.
 And as you wrote, it's common practice in many many projects, which is
 another argument for this :-)
 
 However, one comment:
 
 On Wednesday 27 June 2012, at 11:52 +0100, Daniel P. Berrange wrote:
  It might be mentioned that Gerrit's handling of patch series is not entirely
  perfect. This is a not a valid reason to avoid creating patch series.
 
 It'd be really great if we could first improve Gerrit to handle the
 patch series workflow in a better way. Without such a change, pushing
 patch series to Gerrit is really no fun for anyone :/
 
 I've no idea if this is currently being worked on (at least, I don't
 really se an issue reported in Gerrit's issue tracker). Maybe we should
 sit down and at least document how we'd like to improve this specific
 workflow?

Yep, no argument that Gerrit could do with some improvements, but having
submitted a number of non-trivial patch series to Nova, I don't think
the current Gerrit UI is a complete blocker to adoption. It is not ideal,
but it isn't too painful if you're aware of what to look for. I think
the main problem is that since the patch dependencies are not obvious
in the UI, reviewers tend to miss the fact that they're reviewing a
patch that's part of a series.

I submitted one bug against Gerrit already to improve the way it
deals with bug resolution wrt patch series

  https://bugs.launchpad.net/openstack-ci/+bug/1018013

One thing people might not be aware of is that if you create a patch
series on a branch and push that branch using 'git review', then the
branch name becomes the Gerrit topic which provides an easy way to
see the entire series. Alternatively the blueprint or bug IDs might
be chosen to form the topic. For example my most recent series:

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bug/1003373,n,z


I think the main patch display UI though needs to make it much more
obvious that a particular patch is part of a series so that reviewers
know to review the work as a whole. The UI does display dependencies.
eg see the 'depends on' and 'needed by' links:

  https://review.openstack.org/#/c/8694/

but I'd suggest that UI be changed so that instead of showing only the
previous and next, it displays all patches in the series at once.

Gerrit could also be less stupid about repeatedly trying & failing to
merge a patch due to missing dependent patches. In fact I'll file a
bug about that flaw now too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New mailing list server [status update]

2012-06-26 Thread Daniel P. Berrange
On Mon, Jun 25, 2012 at 04:39:58PM -0700, Stefano Maffulli wrote:
 Hello folks,
 
 we're getting closer to being able to have a new mailing list manager.
 Duncan and the infra team have a new machine running, with mailman
 installed. We tested also the migration of the archives, successfully.
 
 You can have a look at the result on the staging server:
 
 http://stagelists.openstack.org/cgi-bin/mailman/listinfo/
 
 Notice the visual appearance of the individual lists, like
 
 http://stagelists.openstack.org/cgi-bin/mailman/listinfo/foundation
 
 and of the archives (only visible at the url below):
 
 http://stagelists.openstack.org/pipermail/openstack-dev/
 
 I think it's good to give users landing on the archives an easy way to
 navigate to openstack.org and to our others site, like wiki and docs to
 get more information about openstack.
 
 Please give the site a fast spin when you have time and let us know if
 it doesn't work for you.

I think the site is looking pretty good - I like the styling you
managed to inflict on Mailman !

A couple of small points / suggestions (that you might already be working on):

 * Missing 1 line descriptions for 4 of the lists here:

   http://stagelists.openstack.org/cgi-bin/mailman/listinfo/


 * On the listinfo page, the About {LISTNAME} section of the page
   merely contains a link to the archives:

   http://stagelists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

To see the collection of prior postings to the list, visit the
 OpenStack-Dev Archives[link].

   I would suggest that you might want to put a more expansive
   description on each one. I'm sure you'll probably describe
   the purpose/audience of each list somewhere on the main
   OpenStack website/wiki, but you'll find that users often
   land on the listinfo pages directly, so it is worth
   giving them full info about the list there.

   As an example, with libvirt, we put the following text in the
   About section:

  This list is a place for discussions about the development of
  libvirt. Topics for discussion include

   * New features for libvirt
   * Bug fixing of libvirt
   * New hypervisor drivers
   * Development of language bindings for libvirt API
   * Testing and documentation of libvirt

  For discussion involving users of libvirt, please go to the
  users[link] mailing list

  You can learn more about libvirt on the project web pages
  at http://libvirt.org

  To see the collection of prior postings to the list, visit
  the libvir-list Archives[link].

https://www.redhat.com/mailman/listinfo/libvir-list

   And see corresponding text of the end-users list:

https://www.redhat.com/mailman/listinfo/libvirt-users


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HVM + Xen Hypervisor via libvirt possible?

2012-06-22 Thread Daniel P. Berrange
On Fri, Jun 22, 2012 at 11:22:13AM +0800, Li Wang wrote:
 Thanks all for replying.
 
 We want to stick on to the Xen Hypervisor for some reason.
 
 1. Does the community plan to support this feature?

I'd like to see it supported by Nova, because it would improve the
libvirt driver in general, but I don't think I'll have time to
work on it in the near future.

 2. Could I submit this request to the blueprint? My team would like to
 contribute on it if necessary.

Sounds like a good idea to submit a blueprint, or alternatively
file a bug, or both. I'm happy to review any code submissions
for this.

 3. or some good reasons to migrate from Xen to KVM?

I'd favour KVM for a variety of reasons, but let's not turn this into a
bikeshed discussion about which is best ;-P

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [devstack] Easing maintenance of list of distro packages to install

2012-06-21 Thread Daniel P. Berrange
On Wed, Jun 20, 2012 at 11:02:23AM -0700, Joshua Harlow wrote:
 Everyone should really check out...
 
 https://github.com/yahoo/Openstack-Anvil/tree/master/conf/distros
 
 It is nice to have a standard yaml format that isn't a new 
 micro-custom-format that we have to figure out how to parse.

Yes, that looks like a fairly comprehensive data set there.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HVM + Xen Hypervisor via libvirt possible?

2012-06-21 Thread Daniel P. Berrange
On Fri, Jun 22, 2012 at 12:17:11AM +0800, Huang Zhiteng wrote:
 Of course it is possible.  What kind of issue did you run into?
 
 On Thu, Jun 21, 2012 at 5:52 PM, Wang Li fox...@gmail.com wrote:
  hi,all
 
  I need to run virtual machines on Xen Hypervisor in HVM mode,
  is it possible when using libvirt?

Actually, this is not currently possible. For reasons I don't
know, the libvirt driver currently hardcodes use of paravirtualized
guests when connected to Xen hosts. It does not allow use of HVM
guests.  There's no particularly good technical reason why it can't
be made to work. There'd need to be a way to tag instance types
with HVM vs paravirt, in addition to their architecture. The
libvirt driver would have to expose whether each host supports
paravirt or HVM or both. The scheduler would then have to take
this into account when placing guests.
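
To illustrate the scheduling side, here is a purely hypothetical sketch
(not the actual Nova scheduler filter API) of the kind of check a host
filter would need to make, assuming a hypothetical 'vm_mode' style tag
on the image or instance type:

  SUPPORTED_MODES = set(['pv', 'hvm'])

  def host_passes(host_capabilities, image_properties):
      # Return True if the host can run the image's requested vm_mode
      wanted = image_properties.get('vm_mode')
      if wanted is None:
          return True   # image does not care, any host will do
      if wanted not in SUPPORTED_MODES:
          return False  # unknown mode, refuse rather than guess
      return wanted in host_capabilities.get('vm_modes', set())

  # A Xen host advertising both modes accepts an HVM image:
  print(host_passes({'vm_modes': set(['pv', 'hvm'])}, {'vm_mode': 'hvm'}))  # True
  print(host_passes({'vm_modes': set(['pv'])}, {'vm_mode': 'hvm'}))         # False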

Until this is done, if you really need to run HVM guests, then
you'll have to use KVM instead of Xen.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thoughts on client library releasing

2012-06-20 Thread Daniel P. Berrange
On Tue, Jun 19, 2012 at 11:07:05AM -0700, Monty Taylor wrote:
 I'm going to top-post, because there is a whole other thing which is not
 a response to points below. Basically, this is yet-another-instance of
 two competing and partially contradictory sets of use cases and usage
 patterns that we're trying to find the right compromise for.
 
 Tying client libs to server releases is really handy for the distros. It
 is terrible for the public cloud implementations who are doing rolling
 releases - not in as much as it effects the public cloud's ability to
 use the client libs - they can obviously pull trunk client lib from git
 and use that for intra-server communication just fine as part of their
 deployment. Rather, it is a terrible experience for the end-users of
 OpenStack public clouds.

With my distro hat on, I don't actually think tying client libs to
server releases is a particularly big factor. The absolutely
critical factor for distros, particularly enterprise distros, is
that /any/ client lib release be capable of talking to /any/ server
release. Mandating matched versions of (client, server) is really
undesirable because it is inevitable that at some point deployments
will end up with a mix of OpenStack server releases.

So if the client libs are on a more frequent release cycle than the
server releases, that is fine from a distro POV. Though I'd expect
you'd want the client libs' release cycle to allow it to periodically
line up with the major server releases for convenience.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [devstack] Easing maintenance of list of distro packages to install

2012-06-20 Thread Daniel P. Berrange
On Wed, Jun 20, 2012 at 12:06:46PM +0200, Vincent Untz wrote:
 Hi,
 
 In devstack, we currently have two separate lists of packages to
 install: one for Ubuntu (in files/apts/) and one for Fedora (in
 files/rpms/).
 
 This has two issues:
 
  - this leads to incomplete updates for dependencies. It happens that
someone updates the apts files but not the rpms ones. (shameless
plug: https://review.openstack.org/#/c/8475/ needs some review love)
 
  - this just doesn't scale when adding support for another distro,
especially as rpm-based distros don't all share the same package
names (hence files/rpms/ cannot really be shared).
 
 I'd like us to move to a new scheme where we have one list of packages
 (say the Ubuntu one, for instance) and instead of adding another one
 Fedora, openSUSE, etc., we have translation tables to map package names
 from Ubuntu to other distros.
 
 Supporting a new distro is then a matter of adding a translation table
 (+ hacking the code to change the right config files, obviously), and we
 can easily add tests to make sure the translation tables contain a
 mapping for each package (and therefore fix the first issue).
 
 I already have some working code for that, but I want to check if people
 are fine with the idea before submitting it for review.

I've nothing against your proposal & certainly the motivation is
good, but thought I'd throw out an alternative idea just in case.

Instead of having one sub-dir per package/distro, just have a
single (CSV/JSON/XML/whatever) file listing all distros in the
same place

eg a CSV file where the first column is the generic name, and
other columns are the distro-specific names (if required). The
distro specific column would be empty if the generic name applied
without change, or '-' if the package was not applicable to the
distro at all.

  # cat nova.csv
  Package,Ubuntu,Fedora,RHEL,SUSE
  python-devel,,,,
  libvirt,libvirt-bin,,,
  dnsmasq-base,,-,-,
  dnsmasq-utils,,,,

Hmm, using JSON would actually be a bit more readable.
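
As a rough sketch of that JSON alternative (the exact layout here is
only an assumption for illustration), the mapping could be resolved
with a few lines of Python:

  import json

  PACKAGES_JSON = """
  {
    "python-devel":  {},
    "libvirt":       {"Ubuntu": "libvirt-bin"},
    "dnsmasq-base":  {"Fedora": null, "RHEL": null},
    "dnsmasq-utils": {}
  }
  """

  def resolve(packages, distro):
      # Each entry maps generic name -> per-distro overrides; a null
      # override means the package is not applicable on that distro.
      result = []
      for generic, overrides in sorted(packages.items()):
          if distro in overrides:
              if overrides[distro] is None:
                  continue                    # not applicable here
              result.append(overrides[distro])
          else:
              result.append(generic)          # generic name applies unchanged
      return result

  print(resolve(json.loads(PACKAGES_JSON), "Ubuntu"))
  # ['dnsmasq-base', 'dnsmasq-utils', 'libvirt-bin', 'python-devel']
  print(resolve(json.loads(PACKAGES_JSON), "Fedora"))
  # ['dnsmasq-utils', 'libvirt', 'python-devel']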

The core idea is to have all the data, both the master list and
distro mappings in one clear place.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] List of glance image metadata tags used by nova ?

2012-06-14 Thread Daniel P. Berrange
On Wed, Jun 13, 2012 at 07:59:59AM -0700, Brian Waldon wrote:
 There hasn't been much process around choosing what metadata keys to
 use from Nova, and the best way to figure out what we do use is to
 trace through the code. Doing that, I found instance_uuid, user_id,
 image_type, backup_type, kernel_id, ramdisk_id, architecture, mappings,
 block_device_mapping, image_state, image_location and root_device_name.
 There may be more that I missed.

I was afraid that 'grepping the source' would be the answer :-)

 With the v2 Images API, we will be able to much better control this
 integration point between Nova and Glance. We'll be able to publish
 schemas that detail what these attributes are and enforce contracts
 for each individual attribute.

This sounds like a very good improvement. Is there any documentation
or blueprint about the v2 images API that I can read up on?

In the meantime, I'll just carry on with the current ad-hoc practice
for parameters I need to add.

 On Jun 13, 2012, at 2:34 AM, Daniel P. Berrange wrote:
 
  I was recently pointed at this changeset which adds CPU arch filtering
  to the Nova schedular.
  
   https://review.openstack.org/#/c/8267/
  
  IIUC, this relies on any disk images registered with glance having a
  'architecture' metadata tag assigned.
  
  Some of the plans I have for improving the Libvirt driver for Nova
  will be greatly helped by having other various metadata tags against
  disk images.
  
  This leads me to wonder if there is any defined list of glance image
  metadata used/supported by Nova (possibly with common semantics across
  all hypervisor drivers) ? Or are developers just making up metadata tag
  names as we go along ?
  
  Following on, is there any place where this info is to be included in
  end user documentation, so people deploying openstack know what tags
  they should use when registering images ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Errors running individual tests that call into the database

2012-06-11 Thread Daniel P. Berrange
On Mon, Jun 11, 2012 at 05:04:51PM +0100, John Garbutt wrote:
 Hi,
 
 I am trying to run tests like test_xenapi and test_libvirt by
 themselves do things like:
nosetests test_xenapi
 But it does not work; I get DB errors relating to missing tables.
 However, I can successfully run all the tests.
 
 The way I understand it:
  - nova.tests.__init__.py setup() does the database setup
  - nova.test.py TestCase.setUp() does the resetting of the db
  It is almost like doing nosetests test_asdf skips the database
 setup in nova.tests.__init__.py, is that correct?
 
 Any ideas on how to run tests individually, but still get the
 database correctly initialized? Am I just calling the tests incorrectly?

I don't know the answer to your question, but I found this regression in
functionality was caused by the following commit:

  commit cf31b789927cedfd08c67dcf207b4a10ce2b1db6
  Author: Monty Taylor mord...@inaugust.com
  Date:   Sun Jun 3 13:03:21 2012 -0400

Finalize tox config.

Shrink tox.ini to the new short version.
Fix the test cases to be able to be run in nosetets plus the
openstack.nose_plugin, which finally removes the need for
nova/testing/runner.py
Also, now we'll just output directly to stdout, which will
make nose collect the trace logging directly and either output
it at the end of the run, or inject it into the xunit output
appropriately.

Change-Id: I1456e18a11a840145492038108bdfe812c8230d1

Before that commit, it was possible to just invoke something like

   ./run_tests.sh -N -P test_libvirt

from the nova top level GIT directory to run an individual test suite.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Errors running individual tests that call into the database

2012-06-11 Thread Daniel P. Berrange
On Mon, Jun 11, 2012 at 05:51:44PM +0100, Daniel P. Berrange wrote:
 On Mon, Jun 11, 2012 at 05:04:51PM +0100, John Garbutt wrote:
  Hi,
  
  I am trying to run tests like test_xenapi and test_libvirt by
  themselves do things like:
 nosetests test_xenapi
  But it does not work; I get DB errors relating to missing tables.
  However, I can successfully run all the tests.
  
  The way I understand it:
   - nova.tests.__init__.py setup() does the database setup
   - nova.test.py TestCase.setUp() does the resetting of the db
   It is almost like doing nosetests test_asdf skips the database
  setup in nova.tests.__init__.py, is that correct?
  
  Any ideas on how to run tests individually, but still get the
  database correctly initialized? Am I just calling the tests incorrectly?
 
 I don't know the answer to your question, but I found this regression in
 functionality was caused by the following commit:
 
   commit cf31b789927cedfd08c67dcf207b4a10ce2b1db6
   Author: Monty Taylor mord...@inaugust.com
   Date:   Sun Jun 3 13:03:21 2012 -0400
 
 Finalize tox config.
 
 Shrink tox.ini to the new short version.
 Fix the test cases to be able to be run in nosetets plus the
 openstack.nose_plugin, which finally removes the need for
 nova/testing/runner.py
 Also, now we'll just output directly to stdout, which will
 make nose collect the trace logging directly and either output
 it at the end of the run, or inject it into the xunit output
 appropriately.
 
 Change-Id: I1456e18a11a840145492038108bdfe812c8230d1
 
 Before that commit, it was possible to just invoke something like
 
./run_tests.sh -N -P test_libvirt
 
 from the nova top level GIT directory to run an individual test suite.

After examining that commit I discovered that the old testing/runner.py
file had a bit of magic to auto-prefix nova.tests onto any args:

 # If any argument looks like a test name but doesn't have nova.tests in
 # front of it, automatically add that so we don't have to type as much
 for i, arg in enumerate(argv):
 if arg.startswith('test_'):
 argv[i] = 'nova.tests.%s' % arg

Since we lost this, you now have to fully specify the names of the
individual tests you want to run. eg this works for me:

  ./run_tests.sh -N -P nova.tests.test_libvirt


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Random libvirt hangs

2012-06-08 Thread Daniel P. Berrange
On Thu, May 31, 2012 at 08:19:47AM +0200, Christian Wittwer wrote:
 Hi Daniel,
 
  I'd file a bug against libvirt in Oneiric, requesting that they
  backport the 4 changesets mentioned in
 
 Do you know if that bug is now fixed in Oneiric?

No idea I'm afraid, I only maintain libvirt upstream or in Fedora/RHEL,
so don't track Ubuntu bugs.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New layout of the OpenStack mailing lists

2012-05-25 Thread Daniel P. Berrange
On Thu, May 24, 2012 at 05:02:25PM -0700, Stefano Maffulli wrote:
 Hello folks,
 
 we're working on a new mailing list server to host our discussions. The
 main factor behind the move was described in this message by ttx:
 
 http://openstack.markmail.org/thread/ybwazse63sgxozh2
 
 
 The current layout is drafted on
 http://etherpad.openstack.org/newmlist-layout

I'm surprised to see the announce list being removed. What is the
rationale for that? Most large projects have a very low traffic
list (~1 msg per day) dedicated to project announcements of new
releases, security notices, important community messages etc, that
is explicitly separate from the general users / developers lists
which are high traffic.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] libvirt.xml.template

2012-05-23 Thread Daniel P. Berrange
On Wed, May 23, 2012 at 03:18:44PM +0800, William Herry wrote:
 Hi,
 
 I can't find this libvirt.xml.template file with git install, I change this
 file to make my vm show real cpu rather than QEMU, now, I can't find that
 file, some one know where it is?
 
 I use the recent git packages which is 2012.2

In the latest Nova GIT, there is no template file any more. The libvirt
configuration is generated programmatically.

We should add explicit support to Nova to allow exposing specific CPU
models to the guest, including the host CPU native model.
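
As a rough illustration only (this is not the Nova config classes), the
libvirt <cpu> element that such support would need to emit looks like
the following, either mirroring the host CPU or naming a specific model:

  import xml.etree.ElementTree as ET

  def cpu_element(model=None):
      # No model given: ask libvirt to mirror the host CPU (host-model).
      # Explicit model given: pin the guest to that named CPU model.
      cpu = ET.Element("cpu")
      if model is None:
          cpu.set("mode", "host-model")
      else:
          cpu.set("match", "exact")
          ET.SubElement(cpu, "model").text = model
      return ET.tostring(cpu).decode()

  print(cpu_element())                    # <cpu mode="host-model" />
  print(cpu_element(model="SandyBridge")) # <cpu match="exact"><model>...</model></cpu>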

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] libvirt.xml.template

2012-05-23 Thread Daniel P. Berrange
On Wed, May 23, 2012 at 11:09:12AM +0100, Daniel P. Berrange wrote:
 On Wed, May 23, 2012 at 03:18:44PM +0800, William Herry wrote:
  Hi,
  
  I can't find this libvirt.xml.template file with git install, I change this
  file to make my vm show real cpu rather than QEMU, now, I can't find that
  file, some one know where it is?
  
  I use the recent git packages which is 2012.2
 
 In the latest Nova GIT, there is no template file any more. The libvirt
 configuration is generated programatically.
 
 We should add explicit support to Nova to allow exposing specific CPU
 models to the guest, including the host CPU native model.

BTW, if you file a bug against nova requesting this feature, I'll put it
on my todo list for the folsom release, unless someone else beats me to
it.

Also feel free to file bugs requesting any other libvirt config options
you desire that are not already usable with nova.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Improving Xen support in the libvirt driver

2012-05-10 Thread Daniel P. Berrange
On Wed, May 09, 2012 at 11:08:13PM -0600, Jim Fehlig wrote:
 Hi,
 
 I've been tinkering with improving Xen support in the libvirt driver and
 wanted to discuss a few issues before submitting patches.
 
 Even the latest upstream release of Xen (4.1.x) contains a rather old
 qemu, version 0.10.2, which rejects qcow2 images with cluster size >
 64K.  The libvirt driver creates the COW image with cluster size of 2M. 
 Is this for performance reasons?  Any objections to removing that option
 and going with 'qemu-img create' default of 64K?

In general a larger cluster size does improve the performance of
qcow2. I'm not sure how much of a delta we get by going from
64k to 2M though. If there's any doubt then I guess it could be
made into a configuration parameter.

 In a setup with both Xen and KVM compute nodes, I've found a few options
 for controlling scheduling of an instance to the correct node.  One
 option uses availability zones, e.g.
 
 # nova.conf on Xen compute nodes
 node_availability_zone=xen-hosts
 
 # launching a Xen PV instance
 nova boot --image xen-pv-image --availability_zone xen-hosts ...
 
 The other involves a recent commit adding additional capabilities for
 compute nodes [1] and the vm_mode image property [2] used by the
 XenServer driver to distinguish HVM vs PV images.  E.g.
 
 # nova.conf on Xen compute nodes
 additional_compute_capabilities=pv,hvm
 
 # Set vm_mode property on Xen image
 glance update image-uuid vm_mode=pv
 
 I prefer that latter approach since vm_mode will be needed in the
 libvirt driver anyhow to create proper config for PV vs HVM instances. 
 Currently, the driver creates usable config for PV instances, but needs
 some adjustments for HVM.

Yes, tagging the image with details of its required guest ABI does
seem like something we need to do to be able to properly support
a choice between PV & HVM images. It is not very good the way we
currently just hardcode PV only for Xen usage in the libvirt driver.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Improving Xen support in the libvirt driver

2012-05-10 Thread Daniel P. Berrange
On Thu, May 10, 2012 at 09:06:58AM +0100, Daniel P. Berrange wrote:
 On Wed, May 09, 2012 at 11:08:13PM -0600, Jim Fehlig wrote:
  Hi,
  
  I've been tinkering with improving Xen support in the libvirt driver and
  wanted to discuss a few issues before submitting patches.
  
  Even the latest upstream release of Xen (4.1.x) contains a rather old
  qemu, version 0.10.2, which rejects qcow2 images with cluster size >
  64K.  The libvirt driver creates the COW image with cluster size of 2M. 
  Is this for performance reasons?  Any objections to removing that option
  and going with 'qemu-img create' default of 64K?
 
 In general larger cluster size does improve the performance of
 qcow2. I'm not sure how much of a delta we get by going from
 64k to 2M though. If there's any doubt then I guess it could be
 made into a configuration parameter.

I had a quick chat with Kevin Wolf, who's the upstream QEMU qcow2 maintainer,
and he said that 64k is the current recommended cluster size for qcow2.
Above this size, the cost of COW becomes higher, causing an overall
drop in performance.

Looking at GIT history, Nova has used cluster_size=2M since Vish first
added qcow2 support, and there's no mention of why in the commit message.
So unless further info comes to light, I'd say we ought to just switch
to use qemu-img's default setting of 64K for both Xen and KVM.
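
For reference, a minimal sketch (not Nova's actual image-creation code)
of the qemu-img invocation in question, with and without the explicit
cluster_size option:

  import subprocess

  def create_cow(path, backing_file, cluster_size=None):
      # qemu-img defaults to 64K clusters when cluster_size is omitted
      opts = "backing_file=%s" % backing_file
      if cluster_size:
          opts += ",cluster_size=%s" % cluster_size   # e.g. "2M"
      subprocess.check_call(
          ["qemu-img", "create", "-f", "qcow2", "-o", opts, path])

  # create_cow("disk.qcow2", "base.img", cluster_size="2M")  # current behaviour
  # create_cow("disk.qcow2", "base.img")                     # 64K default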

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Improving Xen support in the libvirt driver

2012-05-10 Thread Daniel P. Berrange
On Thu, May 10, 2012 at 03:17:59PM +0200, Muriel wrote:
 Il 10/05/2012 11:48, Alvaro Lopez ha scritto:
 On Thu 10 May 2012 (10:41), Muriel wrote:
 If I remember correctly, the qcow images are not the only problem
 with xen, but I'm far from the code for too long time. In the past
 (diablo), the method for counting the ram (and cpu perhaps?) did not
 work with xen and this affected the choices of the scheduler. I have
 no idea if this happens in essex/folsom.
 I've sent to review some code [1] that tries to fix this issue [2].
 
 [1] https://review.openstack.org/#/c/7296/
 [2] https://bugs.launchpad.net/nova/+bug/997014
 
 Regards,
 Great! But there is a reason if are you using /proc/meminfo instead
 of getInfo when calculating the memory used?
 You know if there is a way to get, using libvirt, the reserved
 memory for dom0? Or the only solution is to read the configuration
 file of xen?

Dom0 appears as just another guest in Xen/libvirt, so you can query
its memory allocation using normal libvirt APIs.
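
For example, a minimal sketch with the libvirt Python bindings (the
memory values come back from info() in KiB):

  import libvirt

  conn = libvirt.openReadOnly("xen:///")
  dom0 = conn.lookupByName("Domain-0")   # Dom0 is just another domain
  state, max_kib, mem_kib, vcpus, cpu_time = dom0.info()
  print("Dom0 memory: %d MiB of %d MiB max" % (mem_kib // 1024, max_kib // 1024))
  conn.close()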

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack Essex - Guide for Ubuntu 12.04

2012-04-30 Thread Daniel P. Berrange
On Mon, Apr 30, 2012 at 07:26:17AM -0500, Anne Gentle wrote:
 Hi Emilien -
 Ideally Martin's guide and your guide would be part of the OpenStack
 documentation - your licensing would work within our framework for docs.
 
 Martin, how is progress going on submitting your Quick Start guide through
 the Gerrit review process?

Your suggestion implies that OpenStack upstream is OK with having downstream
distro-specific setup docs. If that's the case, we have an equivalent getting
started guide for Essex on Fedora 17:

  https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17

This wiki content is under Creative Commons Attribution-Share Alike License
3.0 Unported [1]

Regards,
Daniel

[1] https://fedoraproject.org/wiki/Legal/Licenses#This_Website
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] URL Scheme for deploying Openstack in HTTPD

2012-04-30 Thread Daniel P. Berrange
On Mon, Apr 30, 2012 at 01:58:24PM -0500, Dolph Mathews wrote:
 I very much like the idea that we should have a well documented
 recommendation on this topic.
 
 My only criticism is that the API/service names should be used in place of
 project names, e.g. https://hostname/identity, https://hostname/compute,
 etc.

Why do you think that is better ? I've been switching back & forth on
this topic, unable to figure out which is a better long term bet. Trying
to think of downsides, I come up with the thought of what happens if we
want to support multiple different compute or identity APIs on the
same host. From a namespacing POV it seems better to use the names based
off the OpenStack component name, rather than the generic logical function,
which has higher potential for clashing. This leads me to prefer what
Adam proposes below.

 On Mon, Apr 30, 2012 at 11:34 AM, Adam Young ayo...@redhat.com wrote:
 
   A production configuration of Openstack should be able to run in HTTPD
  using SSL.  I'd like to propose the following URL scheme for the web Apps
  so that the various pieces can be installed on a single machine without
  conflict.
 
  All pieces will be served on port 443 using the https protocol.

I think this is tangential to the main point of the proposal. Even if
every service was on its own plain HTTP port, I would still suggest
that this namespace proposal be followed by them to give consistency.

  I've written these up to use the project names.  Enough documentation and
  information around the projects has circulated such that replacing, say,
  Keystone with identity would cause more confusion than it would remove.
 
 
  #Web UI
  #If and only if this is installed,  we should put in a forward from / to
  /dashboard for browser clients.
  https://hostname/dashboard
 
 
  #identity
  https://hostname/keystone/main
  https://hostname/keystone/admin
 
  #image
  https://hostname/glance/api
  https://hostname/glance/registry
 
  #compute.  Not sure if all of these are required
  https://hostname/nova/api
  https://hostname/nova/crt
  https://hostname/nova/object
  https://hostname/nova/cpu
  https://hostname/nova/network
  https://hostname/nova/volume
  https://hostname/nova/schedule
  https://hostname/nova/novnc
  https://hostname/nova/vncx
  https://hostname/nova/cauth
 
  #network
  https://hostname/quantum/api https://hostname/quantum/service
  #if we had an API for the agent it would be
  https://hostname/quantum/agent https://hostname/quantum/service
 
 
  There was an attempt to make Swift also fit into this scheme.  However,
  Swift URLs fall into a scheme of their own,  and won't likely be colocated
  with the admin pieces outside of development.  Here they are for
  completeness.
 
  #storage
  https://hostname/swift/account
  https://hostname/swift/object
  https://hostname/swift/container

In general I think this proposal is sound. Having clearly distinct
namespaces for each component's API(s) is generally good practice,
to allow arbitrary co-location of services on the same host/port.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Mailing-list split

2012-04-27 Thread Daniel P. Berrange
On Fri, Apr 27, 2012 at 12:04:34PM +0200, Thierry Carrez wrote:
 To avoid Launchpad list slowness, we would run the new openstack-dev
 list off lists.openstack.org. Given the potential hassle of dealing with
 spam and delivery issues on mission-critical MLs, we are looking into
 the possibility of outsourcing the maintenance of lists.openstack.org to
 a group with established expertise running mailman instances. Please let
 us know ASAP if you could offer such services. We are not married to
 mailman either -- if an alternative service offers good performance and
 better integration (like OpenID-based subscription to integrate with our
 SSO), we would definitely consider it.

FYI for libvirt mailing lists we are using the GNU project's spam
filter:

  https://savannah.gnu.org/maintenance/ListHelperAntiSpam

Thanks to this we get essentially zero spam on our mailing lists,
with practically no burden on / work required by our list admins.
Jim Meyering (CC'd) set it up for us originally & may have some
recommendations or tips if OpenStack wants to make use of it too.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How does everyone build OpenStack disk images?

2012-04-26 Thread Daniel P. Berrange
On Wed, Apr 25, 2012 at 06:14:22PM -0700, Justin Santa Barbara wrote:
 How does everyone build OpenStack disk images?  The official documentation
 describes a manual process (boot VM with ISO), which is sub-optimal in
 terms of repeatability / automation / etc.  I'm hoping we can do better!
 
 I posted how I do it on my blog, here:
 http://blog.justinsb.com/blog/2012/04/25/creating-an-openstack-image/
 
 Please let me know the many ways in which I'm doing it wrong :-)
 
 I'm thinking we can have a discussion here, and then I can then compile the
 responses into a wiki page and/or a nice script...

If you have a KVM enabled machine, then 'Oz' has the ability to create
JeOS images for all the common distros you'll find. It is a very simple
command line tool that just focuses on image building and image customization
(adding more packages to an existing JeOS image).

 http://aeolusproject.org/oz.html

Yes, it is on the Aeolus project website, but it has no external
dependencies on the rest of Aeolus - it just wants kvm, libvirt and a
few commonly available python modules. I've often thought that it
would be desirable to have Oz integrated into OpenStack to provide a
native image building capability. Given their common Python heritage
I think it would work quite well.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Encrypted virtual machines

2012-04-26 Thread Daniel P. Berrange
On Thu, Apr 26, 2012 at 09:05:41AM -0700, Matt Joyce wrote:
 From a security stand point I am curious what you see the benefit as?

Consider that you might have separate people in your data center
managing the virtualization hosts, vs the storage hosts vs the
network. As it stands today, any of those groups of people can
compromise data stored in a VM disk image (assuming a network based
filesystem).

First you encrypt the disk image, so that a person with access
to the storage hosts, or sniffing the network, can't read any data. Then
you have a central key server that only gives out the decryption key
to Nova compute nodes when they have been explicitly authorized to
run an instance of that VM.

So now people with access to the storage hosts cannot compromise
any data. People with access to the virtualization hosts can only
compromise data if the host has been authorized to use that disk
image

You would need to compromise the precise host the VM disk is being
used on, or compromise the key server or the management service
that schedules VMs (thus authorizing key usage on a node).

NB this is better than relying on the guest OS to do encryption,
since you can do stricter decryption key management from the
host side.

 On Thu, Apr 26, 2012 at 8:53 AM, Michael Grosser d...@seetheprogress.net 
 wrote:
  Hey,
 
  I'm following the openstack development for some time now and I was
  wondering if there was a solution to spin up encrypted virtual machines by
  default and if it would be a huge performance blow.
 
  Any ideas?

I would like to extend the libvirt driver in Nova to make use of the qcow2
encryption capabilities between libvirt and QEMU which I describe here:

  
http://berrange.com/posts/2009/12/02/using-qcow2-disk-encryption-with-libvirt-in-fedora-12/
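
For reference, a minimal sketch (not Nova code) of the legacy qcow2
encryption flow that post describes; the image path and secret UUID are
placeholders, and the passphrase itself would live in a libvirt secret
that the key server hands out only to authorized compute nodes:

    import subprocess

    IMAGE = '/var/lib/libvirt/images/encrypted.qcow2'
    SECRET_UUID = '00000000-0000-0000-0000-000000000000'  # placeholder

    # Create a qcow2 image with the built-in (legacy) AES encryption flag set
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-o', 'encryption=on', IMAGE, '10G'])

    # The disk element in the guest XML then references the libvirt secret
    # holding the decryption passphrase
    disk_xml = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='%s'/>
      <target dev='vda' bus='virtio'/>
      <encryption format='qcow'>
        <secret type='passphrase' uuid='%s'/>
      </encryption>
    </disk>
    """ % (IMAGE, SECRET_UUID)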

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] raw or qcow2

2012-04-17 Thread Daniel P. Berrange
On Tue, Apr 17, 2012 at 04:23:04PM +0800, William Herry wrote:
 Hi all
 
 we plan to use openstack on our production,
 we are not sure which disk type will be the better choice
 
 I did a little test on qcow2 and it's performance looks good when I use
 cache=writeback
 
 can someone give us some advice, or some article,
 because such a common topic must have been discussed before

Raw files or block devices will always have some performance advantage
over qcow2, though I don't have figures to tell you just how much of
a difference it will be. The performance gap is certainly much smaller
than it used to be a few years back.

The more important question is probably, do you actually need any of
the other features that qcow2 gives over raw ? eg internal snapshots,
external backing files, encryption, compression, etc ?  If you don't
need any of these features, then there is no real point in choosing
to use qcow2 over raw.
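
For a quick local comparison before committing to a format, something
along these lines (file names and sizes are arbitrary) makes the
difference easy to see; 'qemu-img info' reports virtual size vs actual
disk usage for each image:

    import subprocess

    # Create one image of each format
    subprocess.check_call(['qemu-img', 'create', '-f', 'raw',
                           'test-raw.img', '10G'])
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           'test.qcow2', '10G'])

    # Inspect them; the qcow2 file grows on demand, while the raw file
    # is allocated sparsely by the filesystem
    print(subprocess.check_output(['qemu-img', 'info', 'test-raw.img']).decode())
    print(subprocess.check_output(['qemu-img', 'info', 'test.qcow2']).decode())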

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OVF vs. bare container formats for qcow2 images

2012-03-29 Thread Daniel P. Berrange
On Wed, Mar 28, 2012 at 04:41:28PM -0400, Lorin Hochstein wrote:
 All:
 
 Given that I have a qcow2 image from somewhere (e.g., downloaded
 it from a uec-images.ubuntu.com, created one from a raw image using
 qemu-img) that i want to add to glance:
 
 1. How can I tell whether it's an ovf or bare container format?

You are mixing up terminology here. Disk image formats are things like
raw, qcow2, vmdk, etc.

OVF refers to the format of a metadata file provided alongside the
disk image, which describes various requirements for running the
image.

The two are not tied together at all, merely complementary to
each other.

 2. Why does it matter?

OVF provides metadata that is useful to virt/cloud mgmt applications
when deploying a prebuilt disk image.  I've no idea what use OpenStack
makes of the OVF metadata though.

 Whenever I add a qcow2 image to glance, I always choose ovf,
 even though it's probably bare, because I saw an example
 somewhere, and it just works, so I keep doing it. But I don't
 know how to inspect a binary file to determine what its container
 is (if file image.qcow2 says it's a QEMU QCOW2 Image (v2), does
 that mean it's bare?). In particular, why does the user need to
 specify this information?

If you simply have a single someimage.qcow2 file, then all you
have is a disk image. Thus there is no OVF metadata involved at all.

eg, this is the (qcow2) disk image:

  
http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

While this is an OVF metadata file that optionally accompanies the disk image

  http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.ovf


Sometimes, people may create a zip/tar.gz file that contains both the
disk image and OVF file in one convenient download.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OSSA 2012-002] Extremely long passwords can crash Keystone (CVE-2012-1572)

2012-03-28 Thread Daniel P. Berrange
On Tue, Mar 27, 2012 at 02:56:42PM -0400, Russell Bryant wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 OpenStack Security Advisory: 2012-002
 CVE: CVE-2012-1572
 Date: March 27, 2012
 Title: Extremely long passwords can crash Keystone
 Impact: High
 Reporter: Dan Prince dpri...@redhat.com
 Products: Keystone
 Affects: All versions
 
 Description:
 Dan Prince reported a vulnerability in Keystone. He discovered that
 you can remotely trigger a crash in Keystone by sending an extremely
 long password. When Keystone is validating the password, glibc
 allocates space on the stack for the entire password. If the password
 is long enough, stack space can be exhausted, resulting in a crash.
 This vulnerability is mitigated by a patch to impose a reasonable
 limit on password length (4 kB).

What about raising an exception back to the callers, rather than silently
accepting it with truncation?
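
Roughly what that alternative looks like (a sketch only, not Keystone's
actual code):

    MAX_PASSWORD_LENGTH = 4096  # the 4 kB limit mentioned above

    def validate_password(password):
        # Reject over-long passwords outright instead of silently
        # truncating them
        if len(password) > MAX_PASSWORD_LENGTH:
            raise ValueError('password exceeds maximum allowed length')
        return password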

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Networking guru needed: problem with FlatManager ARP when guest and bridge MACs the same

2012-03-14 Thread Daniel P. Berrange
On Wed, Mar 14, 2012 at 10:50:28AM -0700, Justin Santa Barbara wrote:
 We recently changed the MAC address assigned to guests so that they started
 with 0xfe, in the hope of avoiding (theoretical?) issues with MAC addresses
 changing on the bridge device as machines are shut down (because supposedly
 the bridge grabs the lowest MAC address numerically):
 https://bugs.launchpad.net/nova/+bug/921838
 
 However, it looks we bumped into some similar behavior done by libvirt: It
 also sets the first byte to 0xfe for the host network device, in the hope
 of avoiding the same bug.  Thus, with the patch, the host vnetX and the
 guest eth0 have the same MAC address.  I think this breaks FlatManager, but
 I don't know why, and I really don't know why it wouldn't break other
 modes, and I'm hoping a network guru can explain/confirm.

I don't really know why either - all I know is that the host side must
be different from the guest side.

 When they have the same MAC address, ARP resolution isn't working: the
 guest issues an ARP request for the gateway, on the host I can see the ARP
 request and response, but the guest doesn't appear to see/accept the ARP
 response and so it just keeps retrying.
 
 This message appears in dmesg:
 [ 2199.836114] br100: received packet on vnet1 with own address as source
 address
 
 I'm guessing that 'address' means 'MAC address', and this is why ARP is
 failing, it sounds like the bridge might be dropping the packet.
 
 Changing to 0x02, or 0xfc does fix it (although my arithmetic was wrong,
 and vishy points out we should use 0xfa instead of 0xfc).
 
 Networking guru questions:
 
- Does this explanation make sense?
- Why didn't other networking modes break?
- Should we simply revert the change and go back to 0x02?
- Should we switch to 0xfa to try to avoid the bridge interface
problems?  Or does it simply not matter if libvirt is changing the MAC for
us?

Hmm, I guess I mis-read the original patch vish submitted. I thought it
was only changing the MAC address of host TAP devices that Nova created
itself, and not the guest MAC address sent in the XML.

The MAC address sent in the libvirt XML (which is the guest visible MAC)
should not be using 0xfX at all - ideally it should just use the standard
MAC prefix for the hypervisor in question. eg for Xen, use 00:16:3E and
for LXC/KVM use 52:54:00

If libvirt is creating the TAP device itself, (eg interface with
type=bridge|direct), then Nova should not do anything special with
the MAC.

If Nova is pre-creating a TAP device (eg for use with interface
type=ethernet), then Nova should set the top byte to 0xfe (because
libvirt won't be doing so with pre-created TAP devices).
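
For illustration only (not Nova's actual helper), the split described
above might look something like this:

    import random

    def generate_guest_mac():
        # 52:54:00 is the conventional KVM/QEMU OUI for guest MACs
        return ':'.join(['52', '54', '00'] +
                        ['%02x' % random.randint(0x00, 0xff)
                         for _ in range(3)])

    def host_tap_mac(guest_mac):
        # Only needed when Nova pre-creates the TAP device itself; set the
        # first byte to 0xfe so the host side differs from the guest MAC
        # and the bridge never adopts it as its own address
        return 'fe' + guest_mac[2:]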

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Random libvirt hangs

2012-03-12 Thread Daniel P. Berrange
On Mon, Mar 12, 2012 at 02:17:49PM -0400, David Kranz wrote:
 In the spirit of Jay's message, we have a long-running cluster
 (diablo/kvm) where about once every 3-4 weeks a user will complain
 that she cannot connect to a vm. Examining the compute node shows
 that libvirt-bin is hung. Sometimes restarting this process fixes
 the problem. Sometimes it does not, but rebooting the compute node
 and then the vm does. I just heard from people in my company
 operating another cluster (essex/kvm) that they have also seen this.
 I filed a bug about a month ago
 
 https://bugs.launchpad.net/nova/+bug/931540
 
 Has any one been running a kvm cluster for a long time with real
 users and never seen this issue?

There have been various scenarios which can cause libvirtd to hang in
the past, but that bug report doesn't have enough useful data to
diagnose the issue. If libvirtd itself is hanging, then you need to
attach to the daemon with GDB, and run 'thread apply all bt' to collect
stack traces across all threads. Make sure you have debug symbols
available when you do this.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Reply: RFC: Rewritten libvirt driver XML generation

2012-03-09 Thread Daniel P. Berrange
On Fri, Mar 09, 2012 at 09:52:21AM +0800, wangsuyi640 wrote:
 Hi all:
 I tried kvm on my openstack on ubuntu11.10  with the libvirt.xml file as
 follows:

 However, I want to change the remote access method to spice, so I simply changed
 the libvirt.xml as follows:
 <domain type='kvm'>

[snip]

 <graphics type='spice' port='-1' autoport='yes' keymap='en-us'
 listen='0.0.0.0'/>
 
 </devices>
 </domain>
 
 As you can see, I just changed <graphics type='vnc' port='-1' autoport='yes'
 keymap='en-us' listen='0.0.0.0'/> to <graphics type='spice' port='-1'
 autoport='yes' keymap='en-us' listen='0.0.0.0'/>, but it produced the
 error as follows:
 
 libvirtError: internal error Process exited while reading console log
 output: char device redirected to /dev/pts/12
 TRACE: do_spice_init: starting 0.8.1
 TRACE: do_spice_init: statistics shm_open failed, Permission denied
 
 I wish someone can give me some help! Thanks!

This problem is unrelated to the changes I made. If I had to guess
I'd say perhaps the AppArmor profile is not allowing QEMU to use SHM.
This is something you should probably report to the Ubuntu bug tracker
for QEMU

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Libvirt Snapshots

2012-03-09 Thread Daniel P. Berrange
On Fri, Mar 09, 2012 at 10:43:35AM -0600, rb...@hexagrid.com wrote:
 Even though it's more of a libvirt question since the topic of snapshot
 is being discussed, thought of asking it. Does libvirt 0.95 uses the
 backing file concept? or is that the same thing that Vish mentioned 
 as option 1

The latest snapshot APIs in libvirt are broadly configurable by passing
in suitable XML. So if you want to take snapshots on the SAN, or using
LVM or backing files, they can all be made to fit in with libvirt's
new APIs. I'm not entirely familiar with how to use it, so if you want
fine details head over to the libvirt mailing lists where the authors
of the libvirt snapshot code will be able to assist.
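
As a starting point, a bare-bones example of driving that API from the
libvirt Python bindings (the domain name and snapshot XML here are
illustrative; the richer disk/LVM/backing-file behaviour is selected by
extending the XML passed in):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # Snapshot behaviour is driven entirely by the XML document
    snapshot_xml = """
    <domainsnapshot>
      <name>snap1</name>
      <description>example snapshot</description>
    </domainsnapshot>
    """
    dom.snapshotCreateXML(snapshot_xml, 0)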

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] RFC: Rewritten libvirt driver XML generation

2012-03-08 Thread Daniel P. Berrange
Back in January Joshua Harlow raised the question of whether we should
replace the usage of Cheetah templates for generating XML in the libvirt
driver:

  https://lists.launchpad.net/openstack/msg06481.html

Since then I have had some time to work on this idea and now have a
working branch available for testing. I don't want to push this to
Gerrit right now, since it isn't really material suitable for the
Essex release, and AFAICT we don't have a separate review/GIT branch
for non-Essex feature dev work.

Thus for now I have pushed it to a private branch here:

  https://github.com/berrange/nova/tree/libvirt-xml-config-v1

The foundation for the work is early in the series, where I create
a new nova/virt/libvirt/config.py module with a set of classes for
representing the aspects of libvirt configuration that Nova is interested
in. Each of the config classes implement a format_dom() method for
serializing themselves to an lxml.etree.Element DOM instance.

Currently these objects can be used to generate XML, but in the future
they will also be able to parse the XML. For this they will implement
a parse_dom() method which will deserialize the xml.etree.Element
DOM.
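
A simplified sketch of that pattern (not the actual classes in the
branch) for a single disk device:

    from lxml import etree

    class LibvirtConfigGuestDisk(object):
        def __init__(self, source_path, target_dev, target_bus='virtio'):
            self.source_path = source_path
            self.target_dev = target_dev
            self.target_bus = target_bus

        def format_dom(self):
            # Serialize this config object to an etree element; attribute
            # values are escaped automatically by the XML library
            disk = etree.Element('disk', type='file', device='disk')
            etree.SubElement(disk, 'source', file=self.source_path)
            etree.SubElement(disk, 'target',
                             dev=self.target_dev, bus=self.target_bus)
            return disk

        def to_xml(self):
            return etree.tostring(self.format_dom(), pretty_print=True)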

Joshua's original posting had talked about having separate layers
for the config objects vs the serialization. IMHO this would be
overkill, just adding abstraction for little real world gain. We
don't need to have pluggable XML serialization impls, one good one
is sufficient.

The rest of the series is simply a piece-by-piece conversion of the
template code to the new object based APIs. I did it in a great many
steps, to make it easier to review and test the changes.

As well as the guest config creation, I also took the opportunity to
change two other places where we generate XML. The host CPU comparison
code and the domain snapshot creation. There is still one place left
to fix, the firewall filter generator.

By the end of the series we have the following benefits

 - No code anywhere outside config.py ever needs to know about XML
   documents

 - We actually have proper XML escaping, making us safe from potential
   exploits in that area

 - There is clean separation of the logic for constructing the
   guest config, from the logic for generating XML.


My next step following on from this is to actually start making the
config generation more flexible, removing a lot of the hardcoding it
currently does (eg the horrible global virtio on/off switch). This will
entail tagging images on import with an operating system identifier,
and then using libosinfo to query exactly what hardware devices the
OS supports and picking the optimal ones.

I tested this on a KVM host and verified the XML generated for the
guest before/after was the same. I've not tested all the possible
block / network driver combinations though, so might have broken
something not covered by the test suite.

Diffstat for the whole patch series

 b/nova/tests/fakelibvirt.py |   11 
 b/nova/tests/test_libvirt.py|   67 +++--
 b/nova/tests/test_libvirt_config.py |  448 +
 b/nova/tests/test_libvirt_vif.py|   54 +---
 b/nova/virt/libvirt/config.py   |  420 +++
 b/nova/virt/libvirt/connection.py   |  476 ++--
 b/nova/virt/libvirt/vif.py  |  102 ---
 b/nova/virt/libvirt/volume.py   |   52 ++-
 nova/virt/cpuinfo.xml.template  |9 
 nova/virt/libvirt.xml.template  |  188 --
 10 files changed, 1323 insertions(+), 504 deletions(-)


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Nova] Essex dead wood cutting

2012-02-06 Thread Daniel P. Berrange
On Thu, Feb 02, 2012 at 11:52:17AM +, Armando Migliaccio wrote:
 To the best of my knowledge, the ESXi support is up to date. There may be 
 bugs, but which virt driver is perfect ;)?
 
 Sateesh may know more, because he is the main contributor/maintainer from 
 Citrix.
 
 However, as Vish pointed out in a previous email, any driver is doomed to rot 
 if:
 
 a) no one is deploying OpenStack using the specific driver, thus unveiling 
 potential problems;
 b) a pool of developers (not necessarily the first committer) keep the code 
 up to date, increase functionality and test coverage (both unit and 
 functional);
 
 Clearly both xenapi and libvirt are actively developed and deployed. How 
 about vmwareapi? Anyone?

 Let's make sure that vmwareapi is not going to be the next one to bite the 
 dust.

FWIW, libvirt has pretty reasonable abilities to manage VMWare ESX servers,
and some very basic support for Hyper-V. It would be interesting to see if
the OpenStack libvirt driver can be developed to support these targets too.
If the libvirt VMWare/HyperV drivers are not currently good enough for
OpenStack's needs, IMHO, it would be worth putting effort into improving
libvirt. It seems like a needless duplicated effort to have the libvirt
 OpenStack communities both trying to write hypervisor portability
layers.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova is considering Xen Domain-0 as instance

2012-01-16 Thread Daniel P. Berrange
On Mon, Jan 16, 2012 at 10:29:19AM -0200, Rogério Vinhal Nunes wrote:
 As Daniel suggested, I just ignored the ID == 0 and it seems to work fine
 now. The resulting code is even simpler than suggested by Vish:
 
 def list_instances(self):
 return [self._conn.lookupByID(x).name()
 for x in self._conn.listDomainsID()
 if x != 0]
 
 this is more of a design decision. So is this the correct approach to
  correct this bug or, for the record, should it be done in another way?

Speaking as a libvirt maintainer, checking for Domain ID == 0 as you do
here is the best recommendation we have.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Nova is considering Xen Domain-0 as instance

2012-01-12 Thread Daniel P. Berrange
On Thu, Jan 12, 2012 at 07:36:59PM -0200, Rogério Vinhal Nunes wrote:
 I really need some help in getting this to work. This seems pretty simple,
 just tell nova-compute to ignore any instance named Domain-0 (actually it
 could ignore any instance not named 'instance-'). As there is a
 libvirt type to connect to xen, it is in openstack interest to fix this. As
 I did make it work with a flawed old libvirt in Ubuntu 10.04, this seems
 close to working.

To be generally applicable to any libvirt driver, you should check for
domain ID == 0.  Libvirt reserves the domain ID 0 to refer to the VM
representing the host OS, if any. This is why all LXC/KVM guests start
from number 1 instead.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Tempita usage?

2012-01-06 Thread Daniel P. Berrange
On Thu, Jan 05, 2012 at 10:33:00AM -0800, Joshua Harlow wrote:
 Hmmm, so the RNG schemas aren't stable? Is that basically
 the problem there (or part of it)? That seems not so good,
 since I thought the whole point of publishing schemas was
 for people to use them, darn :(

Well there are two different things here to be considered:

 1. The XML document described by the schema
 2. The XML schema itself

Item 1 is long term stable, item 2 is not stable.

 This libvirt-gconfig does sound good though, is there anyway
 we can get the dependencies relaxed to versions that other
 distributions can actually handle (without having more
 dependencies that need to be custom built). Is there a need
 for the glib dependency to be that recent (or is the gobject
 introspection-stuff just that new?).

Unfortunately the only way would be to custom compile a new
glib + gobject introspection stuff for old distros. While the
introspection stuff has been in development for a year or two,
it was only declared stable at approx the same time GNOME 3
was released.

 Still though I think what the above does is still just provide the lower 
 level or the idea I was thinking:
 
 The three levels were:
 
 
  1.  Object format that contains methods/properties for exactly what we use 
 with libvirt
 *   Not connected to #2 or #3 in any way
  2.  Formatter layer that takes in #1 and outputs a string/file (or something 
 similar) using various #3 lower level formats
 *   One formatter could be a TempitaLibvirtFormatter
 *   Second could be RngLibvirtFormatter (or GconfigLibvirtFormatter when 
 that happens...)
  3.  Lower level objects/libraries
 *   This would be where RNG-python objects would live or the 
 libvirt-gconfig objects
 *   This could also use a tempita library
 
 Right now basically there is libvirt/connection.py which
 interacts with #3 (tempita), instead of interacting with #1.
 So this could be phased, get #1, #2, #3 working with the current
 stuff (actually a simplified tempita since I really want to get
 rid of the usage of tempita as a mini-scripting language, since
 the last time I checked we are in python to begin with). #3 could
 then use this simplified tempita template, until this libvirt-gconfig
 comes along (is there a timeline for that?).
 
 Thoughts?

This sounds like a good abstraction idea, since it is cleanly separating
out domain configuration from the LibvirtConnection class. So IIUC, the
'to_xml' method in LibvirtConnection would get an instance of the
abstract LibvirtDomainFormatter class, and call a to_xml() like method
on that todo the formatting. We could start with a simple subclass called
LibvirtTempitaDomaiNFormatter which just contains the current code from
_prepare_xml_info. Then in the future we would then introduce new subclasses
like LibvirtGConfigDomainFormatter, or whatever else we like.
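
Roughly the shape being discussed (class and method names here are
illustrative, not taken from an actual patch):

    class LibvirtDomainFormatter(object):
        """Turns an instance + network info into libvirt domain XML."""

        def to_xml(self, instance, network_info):
            raise NotImplementedError()

    class LibvirtTempitaDomainFormatter(LibvirtDomainFormatter):
        """First implementation: wraps the existing template-based code."""

        def to_xml(self, instance, network_info):
            # Would build the same dict the current _prepare_xml_info()
            # code produces and render the existing Tempita template,
            # keeping today's behaviour unchanged
            pass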

WRT to libvirt-gconfig, my intention was to start experimenting on a
Nova impl using that towards the end of January / early Feb.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Tempita usage?

2012-01-06 Thread Daniel P. Berrange
On Fri, Jan 06, 2012 at 10:36:58AM -0800, Joshua Harlow wrote:
 Cool,
 
 Maybe I can get a branch out there that u can start hooking in by early feb.
 
 That would seem like a good use of time :-)

Great, sounds like a good plan. 

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Tempita usage?

2012-01-05 Thread Daniel P. Berrange
On Tue, Jan 03, 2012 at 11:17:45AM -0800, Joshua Harlow wrote:
 I was wondering if there has been any thought or consideration of removing 
 tempita and replacing it with just python.
 Personally the current tempita usage (libvirt.xml.template) seems to be 
 heading down a hairy path and I wanted to see others opinions on say 
 replacing this with something that doesn't require a whole templating 
 language to use. Some of this may just be my bias against templating 
 languages from experience in different projects @ yahoo (they always start to 
 get hairy, especially when u start to code in them).
 
 Some thoughts:
 
 
  1.  Assuming we can get a libvirt.domain.xsd (?) we can use a xsd-object 
 model
  utility to transform that xsd into a python object model (there seem to be a 
 couple of these?)
 *   http://www.rexx.com/~dkuhlman/generateDS.html or http://pyxsd.org/ 
 (or something else?)

libvirt uses RNG for describing its schemas [1] rather than XSD. That aside 
though,
I'm not really convinced that this is a good idea. Regardless of which schema
language is used, there are always multiple different ways to describe the
same overall concept. In libvirt we have often re-structured our schemas to
express things in a different way. If you are generating APIs / object models
from the schema then, AFAICT, your generated API is liable to change in a non
backwards compatible manner.

  2.  Create a exposed tree representation of the sections of the libvirt 
 domain
  xml that we are interested in (and only those that we are interested in) as 
 python
 objects and have current code create these objects (which right now is 
 basically a
 set of python hashes getting sent to the tempita library)
  3.  Pass the root element of this exposed tree representation to a 
 formatter class (which itself could use pyxsd objects, or tempita - for 
 backward compatibility, or something else, but I have a strong preference for 
 keeping a single language in use, instead of a tempita language and a python 
 language).
  4.  Write output created by formatter class to domain.xml file (and continue 
 as normal).
  5.  Profit!
 
 Some of the benefits I think exist with this:
 
 
  1.  XML escaping will actually happen (does this happen right now?)

AFAICT, no, and this is a security exploit just waiting to happen, if
indeed the code isn't already vulnerable.

  2.  We can have a underlying object layer which comes directly from the 
 libvirt.domain.xsd (and possibly have versions of this to work with different 
 libvirt versions)
  3.  We can have an exposed object layer which will attempt to be version 
 independent of the underlying layer and only contain methods/properties that 
 we will use with libvirt (ie the xsd will have many properties/fields we will 
 not use, thus we should not expose them).
  4.  We can have a formatter layer that will know how to use this exposed 
 layer and return a object that can convert the exposed layer into a string, 
 thus allowing for different implementations (or at least a separation of what 
 is exposed, how its formatted and what the formatter internally uses).
  5.  We can have the if statements and loops and such that are starting to 
 get put in the template code in python code (thus u don't have to context 
 switch into a templating language to make changes, thus making it easier to 
 work with libvirt).
  6.  Possible remove a dependency (always good).
 
 Thoughts?

In general, I agree with your suggestion that Nova should not generate
XML docs directly, but have an object based API to ensure correctly structured
and escaped XML.

The upstream libvirt community is working on a new library (libvirt-gconfig [2])
which directly exposes libvirt XML formats via an object oriented API. This
allows apps to read  write libvirt XML configuration documents, without
having to know anything about XML.

The library is written in C, and uses the GObject library as its base. Via
the GObject introspection support, this trivially provides access to the
API from Python, Perl, Php, JavaScript, Java, Vala, and many more, without
having to manually write bindings for each language. It does not directly
depend on the libvirt library itself, so you can use libvirt-gconfig even
in situations where you don't have a connection to libvirt. eg, if you're
using libvirt indirectly via CIM, SNMP or AMQP, you can still use the
API to deal with libvirt XML documents.
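
A very rough illustration of what that looks like from Python via
introspection (the class and method names are assumptions based on the
C API naming, so treat this as a sketch rather than the final bindings):

    from gi.repository import LibvirtGConfig

    domain = LibvirtGConfig.Domain.new()
    domain.set_name('demo')
    domain.set_memory(1024 * 1024)   # value in KiB
    domain.set_vcpus(2)

    # No XML knowledge needed anywhere above; serialization happens here
    print(domain.to_xml())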

We're using this API in libvirt-sandbox, GNOME Boxes, and plan to eventually
port virt-install and virt-manager too. Our intent is that any app which deals
with libvirt that wants to read/write XML can use this API.

I was in fact planning to suggest use of libvirt-gconfig for Nova in the
near future. The main unknown here is around dependencies.

The GObject introspection code, required to use libvirt-gconfig from python,
requires a fairly new version of GLib, which most enterprise distros will
not have yet.  For Fedora this would be Fedora >= 15.

Regards,

Re: [Openstack] HPC with Openstack?

2011-12-06 Thread Daniel P. Berrange
On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
 
 
 On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
 
  2011/12/4 Lorin Hochstein lo...@isi.edu:
  Some of the LXC-related issues we've run into:
  
  - The CPU affinity issue on LXC you mention. Running LXC with OpenStack, 
  you
  don't get proper space sharing out of the box, each instance actually 
  sees
  all of the available CPUs. It's possible to restrict this, but that
  functionality doesn't seem to be exposed through libvirt, so it would have
  to be implemented in nova.

I recently added support for CPU affinity to the libvirt LXC driver. It will
be in libvirt 0.9.8. I also wired up various other cgroups tunables including
NUMA memory binding, block I/O tuning and CPU quota/period caps.
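
For reference, a sketch of the sort of domain XML those tunables map to
(element names follow the libvirt domain schema; the values are
arbitrary examples, not recommendations):

    lxc_tuning_xml = """
      <vcpu cpuset='0-3'>4</vcpu>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <cputune>
        <period>100000</period>
        <quota>50000</quota>
      </cputune>
      <blkiotune>
        <weight>500</weight>
      </blkiotune>
    """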

  - LXC doesn't currently support volume attachment through libvirt. We were
  able to implement a workaround by invoking lxc-attach inside of OpenStack
  instead  (e.g., see
  https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482.
  But to be able to use lxc-attach, we had to upgrade the Linux kernel in
  RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
  means that we aren't able to load the SGI numa-related kernel modules.

Can you clarify what you mean by volume attachment?

Are you talking about passing through host block devices, or hotplug of
further filesystems for the container ?

  Why not address these couple of issues in libvirt itself?

If you let me know what issues you have with libvirt + LXC in OpenStack,
I'll put them on my todo list.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [DODCS] HPC with Openstack?

2011-12-06 Thread Daniel P. Berrange
On Tue, Dec 06, 2011 at 12:04:53PM -0800, Dong-In David Kang wrote:
 
 
 - Original Message -
  On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
  
  
   On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
  
2011/12/4 Lorin Hochstein lo...@isi.edu:
Some of the LXC-related issues we've run into:
   
- The CPU affinity issue on LXC you mention. Running LXC with
OpenStack, you
don't get proper space sharing out of the box, each instance
actually sees
all of the available CPUs. It's possible to restrict this, but
that
functionality doesn't seem to be exposed through libvirt, so it
would have
to be implemented in nova.
  
  I recently added support for CPU affinity to the libvirt LXC driver.
  It will
  be in libvirt 0.9.8. I also wired up various other cgroups tunables
  including
  NUMA memory binding, block I/O tuning and CPU quota/period caps.
 
   Great news! 
  We are also looking forward to seeing SElinux 'sVirt' support for
 LXC by libvirt.
 When do you think it will be available? 
 In libvirt-0.9.8?

0.9.8 is due out any day now, so not that. My goal is to get it
done by the Fedora 17 development freeze, so hopefully 0.9.9,
or 0.9.10 worst case.

  By volume attachment, yes, we mean passing through host block devices that
 are dynamically created by
 the nova-volume service (using iscsi).
 
 
Why not address these couple of issues in libvirt itself?
  
  If you let me know what issues you have with libvirt + LXC in
  OpenStack,
  I'll put them on my todo list.
  
 
  As Lorin said we implemented it using lxc-attach. 
 With lxc-attach we could pass the major/minor number of the (dynamically 
 created) devices to the LXC instance.
 And with lxc-attach we could do mknod inside of the LXC instance.
 I think supporting that by libvirt would be very useful.
 However, it needs lxc-attach working for the Linux kernel. 
 We had to upgrade and patch Linux kernel for that purpose.
 If there is a better way, it would be wonderful.
 But I don't know if there is a way other than using lxc-attach.

Yeah, I don't see any practical way to do hotplug with LXC without
having the kernel support merged for attaching to all types of
namespace. Once that's available it will be simple to do it via
libvirt.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp