[Openstack] VMware ESXi plugin for Quantum
Hi, do any of you know if there is anybody working on a Quantum plugin for VMware ESXi? And if so, when might it be available? Regards, Balu ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
[Openstack] nova-volume communication public eth0 change to private eth1
Hi all: I have an issue with nova-volume networking (I want its traffic moved from public eth0 to private eth1). I am using the OpenStack Essex release.

My environment (multi-host network):
- Public IPs: 192.168.1.0/24; private IPs: 192.168.2.0/24
- Controller (has the nova-volume partition): public 192.168.1.50, private 192.168.2.2
  - Created Volume-1 (10 GB)
  - Created VM-1: public 192.168.1.151, private 192.168.2.50
- Compute node (does not have a nova-volume partition): public 192.168.1.51, private 192.168.2.3
  - Created VM-2: public 192.168.1.152, private 192.168.2.51

I attached Volume-1 to VM-1 and exported it over NFS; VM-2 mounts the NFS share at /mnt/ to transfer files. The issue is that files are always transferred over the public IPs (eth0), not the private IPs (eth1). Can I switch this from the public IP to the private IP (eth1)? Otherwise the public interface's bandwidth gets used up. Does anyone have an idea, or a hint for me? Thanks a lot, Edward
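A minimal sketch of keeping the NFS traffic on the private network, assuming the attached volume is formatted and mounted at /export inside VM-1 (the path and export options are illustrative, not from the original post):

```shell
# On VM-1 (the NFS server): export the volume's filesystem only to the
# private subnet, so clients must connect over 192.168.2.0/24
echo "/export 192.168.2.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra

# On VM-2 (the NFS client): mount using VM-1's *private* address;
# the kernel will then route the NFS traffic out eth1
mount -t nfs 192.168.2.50:/export /mnt

# Confirm which interface and source address are used for that destination
ip route get 192.168.2.50
```

If `ip route get` still shows eth0, the guest may need an explicit subnet route for 192.168.2.0/24 via eth1.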
Re: [Openstack] Agent out of sync with plugin!
On 03/12/2013 12:13 AM, Greg Chavez wrote: So I'm setting up Folsom on Ubuntu 12.10, using the Github Folsom Install Guide: https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst After following the steps to instantiate the network node, I'm left with 3 new but downed OVS bridges, and three error-filled logs for ovs-plugin, dhcp-agent, and l3-agent. I rectify it with this: http://pastebin.com/L43d9q8a When I restart the networking and then restart the plugin and agents, everything seems to work, but I'm getting this strange INFO message in /var/log/quantum/openvswitch-agent.log: 2013-03-11 17:48:02 INFO [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Agent out of sync with plugin! I traced it to this .py file: https://github.com/openstack/quantum/blob/master/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py Now I realize that this is an INFO message, not an error. But I would still like to know what it means. Thanks!

The Quantum agents need to retrieve the port data from the Quantum service. When the agents start, this is done automatically (hence the message that you have seen). This can also happen if there is an exception in the agent, or if the agent is unable to communicate with the service - for example, a problem with the connection between the agent and the service (link down, etc.).
Re: [Openstack] nova-manage service list issues
Does anybody have any pointers in this regard? Thank you,

On Fri, Mar 8, 2013 at 10:09 PM, Ashutosh Narayan aashutoshnara...@gmail.com wrote: Hi folks, I am following these instructions http://docs.openstack.org/trunk/openstack-compute/install/yum/content/compute-verifying-install.html but the State column of *nova-manage service list* shows XXX, which means there is some issue with NTP time synchronization. All the services are running on the *same* host, which is actually a virtual machine running on a Xen Server. Here is the output of nova-manage service list:

[root@RLD1OPST01 ~]# nova-manage service list
Binary           Host        Zone  Status   State  Updated_At
nova-scheduler   RLD1OPST01  nova  enabled  XXX    None
nova-cert        RLD1OPST01  nova  enabled  XXX    None
nova-network     RLD1OPST01  nova  enabled  XXX    None
nova-compute     RLD1OPST01  nova  enabled  XXX    None
nova-console     RLD1OPST01  nova  enabled  XXX    None

ntpq reports that the system is synchronized with the local clock:

[root@RLD1OPST01 ~]# ntpq -pn
     remote          refid          st t when poll reach   delay  offset jitter
==============================================================================
 192.168.105.61  .INIT.          16 u    -   64     0    0.000   0.000  0.000
 120.88.47.10    193.79.237.142     u   61   64    17   33.933  19.644  1.869
 202.71.140.36   62.245.153.203     u    -   64    37   31.628  24.737  3.634
*127.127.1.0     .LOCL.          10 l    -   64    37    0.000   0.000  0.001

I can't boot an image with compute until the services' state is synchronized. Please suggest. Thank you, -- Ashutosh Narayan http://ashutoshn.wordpress.com/
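Since the only synchronized peer in the ntpq output above is the local clock (*127.127.1.0), one way to test is to force a one-shot sync against a reachable upstream and re-check. A hedged sketch for a RHEL-style host (the server address comes from the ntpq output above and may not be the right upstream for your site):

```shell
# Stop ntpd so ntpdate can bind the NTP port
service ntpd stop

# One-shot sync against an upstream server (replace with one that is reachable;
# the .INIT. refid above means 192.168.105.61 never answered)
ntpdate -u 120.88.47.10

service ntpd start

# After a few poll intervals, the selected peer should be marked with '*'
# and it should NOT be 127.127.1.0 (the local clock)
ntpq -pn
```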
Re: [Openstack] [QUANTUM] (Bug ?) L3 routing not correctly fragmenting packets ?
Only changing the VM MTU to 1454 does the trick ('ifconfig eth0 mtu 1454'). I think this is the same issue: https://bugs.launchpad.net/quantum/+bug/1075336 So instead of decreasing the MTU on the physical interface you could also increase it on the openvswitch port. Cheers, Robert
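The two options can be sketched as follows (a hedged example: 1454 comes from the thread, while the raised host-side value is an illustrative guess that must cover your tunnel overhead):

```shell
# Option 1: shrink the MTU inside each guest (what the thread reports works)
ifconfig eth0 mtu 1454

# Option 2: instead raise the MTU on the host-side/OVS interfaces so guests
# can keep 1500; the exact value depends on your encapsulation overhead
ip link set dev eth1 mtu 1546
ip link show eth1    # confirm the new MTU
```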
[Openstack] about brctl meltdown on RHEL 6.3
Hi. I'm sorry I cannot reply to the original message. openvswitch and brctl can run together normally on RHEL 6.3.

Error message:
ProcessExecutionError: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf brctl addbr qbr2218b8c4-7d
Exit code: 1
Stdout: ''
Stderr: 'add bridge failed: Package not installed\n'

Solution:
1. Make sure the openvswitch process is running in the background.
2. modprobe brcompat_mod
3. ovs-brcompatd --pidfile --log-file --detach
4. ovs-vsctl add-br br-int ... add-port ... (if you have already done this, there is no need to redo it)
5. Log in to the dashboard and launch an instance.
6. Check the instance status; it is OK.

NOTE:
1. The ovs-brcompatd process is shut down when you run 'service openvswitch stop'; start it again manually afterwards.
2. 'brctl show' will print errors like '/sys/class/net/br-eth1/bridge: No such file or directory' for OVS bridges such as br-eth1; ignore these messages and continue.
Re: [Openstack] the ip_forward is enabled when using vlan + multi_host on compute node
There may have been some mistake on my side. I have just found that the vlan setup works as expected.

On Tue, Mar 12, 2013 at 12:02 PM, Lei Zhang zhang.lei@gmail.com wrote: Hi all, I am testing nova-network + vlan + multi_host, but I found that ip_forward is enabled automatically when launching new instances. You can check the code: https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L770 I found a serious issue when ip_forward=1 on the compute node. Here is my testing setup.

Controller:
[root@openstack-controller conf.d]# ip a
1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: p3p1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:b1:1c:0d:87:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.10/24 brd 192.168.3.255 scope global p3p1
    inet6 fe80::92b1:1cff:fe0d:8779/64 scope link
       valid_lft forever preferred_lft forever
3: em1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:b1:1c:0d:87:7a brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.10/24 brd 172.16.0.255 scope global em1
    inet6 fe80::92b1:1cff:fe0d:877a/64 scope link
       valid_lft forever preferred_lft forever

Compute node:
[root@openstack-node2 vlan]# ip a
2: em1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:b1:1c:0d:73:ea brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.12/24 brd 172.16.0.255 scope global em1
    inet6 fe80::92b1:1cff:fe0d:73ea/64 scope link
       valid_lft forever preferred_lft forever
4: p3p1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:10:18:f7:4a:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.12/24 brd 192.168.3.255 scope global p3p1
    inet 192.168.3.33/32 scope global p3p1
    inet6 fe80::210:18ff:fef7:4a34/64 scope link
       valid_lft forever preferred_lft forever
9: vlan102@em1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UP
    link/ether fa:16:3e:54:ea:11 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f816:3eff:fe54:ea11/64 scope link
       valid_lft forever preferred_lft forever
10: br102: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:54:ea:11 brd ff:ff:ff:ff:ff:ff
    inet 10.0.102.4/24 brd 10.0.102.255 scope global br102
    inet6 fe80::2816:24ff:feb5:5770/64 scope link
       valid_lft forever preferred_lft forever
11: vlan103@em1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UP
    link/ether fa:16:3e:3a:a0:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f816:3eff:fe3a:a020/64 scope link
       valid_lft forever preferred_lft forever
12: br103: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:3a:a0:20 brd ff:ff:ff:ff:ff:ff
    inet 10.0.103.4/24 brd 10.0.103.255 scope global br103
    inet6 fe80::480c:f2ff:fe9b:a600/64 scope link
       valid_lft forever preferred_lft forever
13: vnet0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:16:3e:0c:65:73 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe0c:6573/64 scope link
       valid_lft forever preferred_lft forever
15: vnet1: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:16:3e:7f:a2:d5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe7f:a2d5/64 scope link
       valid_lft forever preferred_lft forever
16: vnet2: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:16:3e:31:8f:7c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe31:8f7c/64 scope link
       valid_lft forever preferred_lft forever
17: vnet3: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:16:3e:63:8c:e2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe63:8ce2/64 scope link
       valid_lft forever preferred_lft forever

[root@openstack-node2 vlan]# brctl show
bridge name  bridge id          STP enabled  interfaces
br102        8000.fa163e54ea11  no           vlan102
                                             vnet0
                                             vnet1
                                             vnet2
br103        8000.fa163e3aa020  no           vlan103
                                             vnet3
virbr0       8000.525400aaa1b5  yes          virbr0-nic

If ip_forward=1, then vm1 (vnet1) can ping vm2 (vnet4), and the controller can ping both vm1 (vnet1) and vm2 (vnet4); this should not be possible. Has anybody met this error? And how can it be fixed, other than by changing the code? -- Lei Zhang Blog: http://jeffrey4l.github.com twitter/weibo: @jeffrey4l
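One option besides patching linux_net.py: keep ip_forward=1 (nova-network relies on it for NAT and floating IPs) but filter inter-bridge forwarding with iptables. The rules below are an illustrative sketch using the bridge names from the output above, not a tested nova-network configuration:

```shell
# Check the current forwarding state
sysctl net.ipv4.ip_forward

# Drop forwarding between the two tenant bridges so VLAN isolation holds
# even with forwarding globally enabled
iptables -A FORWARD -i br102 -o br103 -j DROP
iptables -A FORWARD -i br103 -o br102 -j DROP
```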
[Openstack] Dashboard login page doesn't show up
Hi folks, I am trying to install the dashboard as per the instructions mentioned here: http://docs.openstack.org/trunk/openstack-compute/install/yum/content/installing-openstack-dashboard.html First of all, /etc/openstack-dashboard/local_settings.py does not exist; instead, /etc/openstack-dashboard/local_settings is present. Second, /etc/sysconfig/memcached.conf is not present; instead I see /etc/sysconfig/memcached. I edited the local_settings file with the minimal requirements, but the dashboard page doesn't show up. I have restarted the relevant services too. Where am I going wrong? Please suggest. Thank you, -- Ashutosh Narayan http://ashutoshn.wordpress.com/
[Openstack] Entering DST madness zone again
North America entered DST last Sunday (Europe will do the same on March 31), so we are entering DST / UTC time confusion again. Remember that all our OpenStack online meetings are expressed in UTC time (which does not have DST nonsense), and double-check what that means for you using tools like: http://www.timeanddate.com/worldclock/fixedtime.html?hour=21&min=0&sec=0 For our North American friends, that probably means meetings are occurring one hour later than last week. Regards, -- Thierry Carrez (ttx) Release Manager, OpenStack
Re: [Openstack] Dashboard login page doesn't show up
On 03/12/2013 11:20 AM, Ashutosh Narayan wrote: Hi folks, I am trying to install the dashboard as per the instructions mentioned here: http://docs.openstack.org/trunk/openstack-compute/install/yum/content/installing-openstack-dashboard.html First of all, /etc/openstack-dashboard/local_settings.py does not exist; instead, /etc/openstack-dashboard/local_settings is present. Second, /etc/sysconfig/memcached.conf is not present; instead I see /etc/sysconfig/memcached.

Indeed, local_settings is the right file; that's a typo in the docs. What is shown? Do you see an error? Is the webserver running? lsof -i is your friend. What about firewall rules? Did you enable access, if not running on localhost? In the docs, the hint to visit http://192.168.206.130/horizon on RHEL is plainly wrong. The dashboard can be found under http://ip-address/dashboard Matthias
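The checks Matthias suggests can be run roughly like this on a RHEL-style box (a sketch, not a definitive procedure; log path per the Apache defaults):

```shell
# Is the webserver actually listening on port 80?
lsof -i :80

# Any firewall rule that would block remote clients?
iptables -L -n | grep -i -E 'reject|drop'

# Watch the error log while reloading http://ip-address/dashboard
tail -f /var/log/httpd/error_log
```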
Re: [Openstack] [QUANTUM] (Bug ?) L3 routing not correctly fragmenting packets ?
On 12/03/2013 09:20, Robert van Leeuwen wrote: Only changing the VM MTU to 1454 does the trick ('ifconfig eth0 mtu 1454'). I think this is the same issue: https://bugs.launchpad.net/quantum/+bug/1075336 So instead of decreasing the MTU on the physical interface you could also increase it on the openvswitch port. Cheers, Robert

I thought about it, but have not tried it yet. Which OVS port would you recommend increasing the MTU on? On the network node (br-ex or qg-), or on the compute node (br-int)? Thanks, -Sylvain
[Openstack] question on the GRE Communication
Hi, I am trying to configure OpenStack with one controller, one network node, and two compute nodes. I am not able to understand how communication happens between VMs that belong to the same tenant, with the same IP range, but on different compute hosts. Please help me understand how GRE communication happens. Regards, Arumon
Re: [Openstack] Dashboard login page doesn't show up
On 03/12/2013 11:46 AM, Ashutosh Narayan wrote: Hi Matthias, Thanks for pointing that out. I now get a login page, but it doesn't log me in with the admin credentials. Here is a snippet of the error_log:

== /var/log/httpd/error_log ==
[Tue Mar 12 10:40:16 2013] [error] unable to retrieve service catalog with token
[Tue Mar 12 10:40:16 2013] [error] Traceback (most recent call last):
[Tue Mar 12 10:40:16 2013] [error]   File /usr/lib/python2.6/site-packages/keystoneclient/v2_0/client.py, line 135, in _extract_service_catalog
[Tue Mar 12 10:40:16 2013] [error]     endpoint_type='adminURL')
[Tue Mar 12 10:40:16 2013] [error]   File /usr/lib/python2.6/site-packages/keystoneclient/service_catalog.py, line 73, in url_for
[Tue Mar 12 10:40:16 2013] [error]     raise exceptions.EndpointNotFound('Endpoint not found.')
[Tue Mar 12 10:40:16 2013] [error] EndpointNotFound: Endpoint not found.

keystone endpoint-list shows this output:

That is a known issue in keystone, fixed in Grizzly. So: what do you see when you try to log in as admin? Are you taking the credentials from your keystone? A side note: the config steps in the docs are not applicable at all.
Matthias

[root@RLD1OPST01 ~]# keystone endpoint-list

All endpoints are in region RegionOne:

id 50bb93035d9a4a18aacafdd895906f10 (service_id 76772ef3f79b4648981f19de39d4cab1)
  publicurl:   http://192.168.105.61:/v1/AUTH_%(tenant_id)s
  internalurl: http://192.168.105.61:/v1/AUTH_%(tenant_id)s
  adminurl:    http://192.168.105.61:/v1

id 749974748ce8482bb2339954631baea5 (service_id 75748b8502964bf8aab214c074058a08)
  publicurl:   http://192.168.105.61:8773/services/Cloud
  internalurl: http://192.168.105.61:8773/services/Cloud
  adminurl:    http://192.168.105.61:8773/services/Admin

id 80a940e76e314148bc349578ce8eadf7 (service_id a5da6ff105184cbda9a3619d41d297f6)
  publicurl:   http://192.168.105.61:8776/v1/%(tenant_id)s
  internalurl: http://192.168.105.61:8776/v1/%(tenant_id)s
  adminurl:    http://192.168.105.61:8776/v1/%(tenant_id)s

id 897375e2d299416db3281d1b8baeabc1 (service_id 5d144f9df18d4b35810afb4199b1a321)
  publicurl:   http://192.168.105.61:8774/v2/%(tenant_id)s
  internalurl: http://192.168.105.61:8774/v2/%(tenant_id)s
  adminurl:    http://192.168.105.61:8774/v2/%(tenant_id)s

id cc2fd4c2b1314cfb958e8702139a960c (service_id d307d545040e4bda8ace9a7fd3581cb2)
  publicurl:   http://192.168.105.61:5000/v2.0
  internalurl: http://192.168.105.61:5000/v2.0
  adminurl:    http://192.168.105.61:35357/v2.0

id e2d7819ff84746dc8e79c76bf308f2cd (service_id 96299197a84a43d7a3932c5c6ae53ca0)
  publicurl:   http://192.168.105.61:9292
  internalurl: http://192.168.105.61:9292
  adminurl:    http://192.168.105.61:9292

Thank you,
Re: [Openstack] Entering DST madness zone again
On Tue, Mar 12, 2013 at 11:24 AM, Thierry Carrez thie...@openstack.org wrote: Remember that all our OpenStack online meetings are expressed in UTC time (which does not have DST nonsense), and double-check what that means for you using tools like:

Or, for the lazy ones (i.e., most of us) who don't want to switch away and open a browser tab while on the command line:

$ TZ=UTC date

Chmouel.
Re: [Openstack] Dashboard login page doesn't show up
On 03/12/2013 12:02 PM, Ashutosh Narayan wrote: The web page keeps on waiting, and error_log shows me what I posted earlier. Are you taking the credentials from your keystone? Yes, I am taking the credentials from keystone.

Please verify your keystone settings in /etc/openstack-dashboard/local_settings:

grep OPENSTACK_HOST /etc/openstack-dashboard/local_settings

(Should point to your keystone.)

A side note: the config steps in the docs are not applicable at all. Is there a work around? Around that additional config step? No! It's not required. Matthias
Re: [Openstack] Agent out of sync with plugin!
Logan, thanks for your reply. I've been very conscientious about NTP, so I'm quite confident that that is not the issue. Gary: so in this case the agent = quantum-plugin-openvswitch-agent, and the plugin = quantum-server. That's confusing. And what you're saying is that the ovs plugin/agent - whatever it is - is simply stating that it assumes it's out of sync since it's starting up, and it's going to phone home. Is that right? Thanks!

On Tue, Mar 12, 2013 at 3:18 AM, Gary Kotton gkot...@redhat.com wrote: The Quantum agents need to retrieve the port data from the Quantum service. When the agents start, this is done automatically (hence the message that you have seen). This can also happen if there is an exception in the agent or if the agent is unable to communicate with the service - for example, a problem with the connection between the agent and the service (link down, etc.).
-- \*..+.- --Greg Chavez +//..;};
Re: [Openstack] Dashboard login page doesn't show up
On Tue, Mar 12, 2013 at 4:46 PM, Matthias Runge mru...@redhat.com wrote: Please verify your keystone settings in /etc/openstack-dashboard/local_settings: grep OPENSTACK_HOST /etc/openstack-dashboard/local_settings

It points to the keystone IP, which in my case is 192.168.105.61, as seen below:

[root@] openstack-dashboard]# grep OPENSTACK_HOST local_settings
OPENSTACK_HOST = "192.168.105.61"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST

(Should point to your keystone.) It's pointing to the keystone.

A side note: the config steps in the docs are not applicable at all. Is there a work around? Around that additional config step? No! It's not required.

I am now able to log in to the dashboard after restarting the nova-* services. But yes, I still have to troubleshoot some steps to get all components working.
Re: [Openstack] [QUANTUM] (Bug ?) L3 routing not correctly fragmenting packets ?
I thought about it, but have not tried it yet. Which OVS port would you recommend to increase the MTU on? On the network node (br-ex or qg-), or on the compute node (br-int)?

You need to set it on the compute nodes (int-br-ethX) and possibly an extra port on the routing node. (We use a bridge-mapped network to connect to the outside world, and phy-br-eth1 needs to be set.) Cheers, Robert
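Based on Robert's answer, a hedged sketch of the commands involved (the interface names follow his int-br-ethX / phy-br-eth1 naming, and the MTU value is illustrative; it must cover your GRE/VLAN overhead):

```shell
# On each compute node: raise the MTU on the OVS port facing the physical NIC
ip link set dev int-br-eth1 mtu 1546

# On the routing/network node with a bridge-mapped external network
ip link set dev phy-br-eth1 mtu 1546

# Confirm the new MTU
ip link show int-br-eth1
```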
[Openstack] Swift with Keystone problem
I'm having trouble with Swift, using Keystone auth, on Folsom. When I try something simple like 'swift stat', there are two errors. First, a logging error:

proxy-server STDOUT: No handlers could be found for logger keystone.middleware.auth_token

More importantly, the authorization fails:

Account HEAD failed: http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 401 Unauthorized

With SWIFTCLIENT_DEBUG set, I can see there are two separate requests:

curl -i http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 -X HEAD -H "X-Auth-Token: da38c4407cff40b69f236ef0da9d73e8"

and two instances of:

curl -i http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 -X HEAD -H "X-Auth-Token: 0fc76ee28c2e43f0929c7c3ef158830d"

The proxy-server log for these requests is:

proxy-server Authorizing as anonymous

which is puzzling. The keystone log shows that real local credentials are being sent:

2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] REQUEST BODY
2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] {"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}

then

2013-03-12 12:46:11 WARNING [keystone.common.wsgi] Authorization failed. Invalid user / password from ip
2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}

Keystone auth works for all the other services. Any suggestions appreciated. Adam
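One way to narrow this down is to replay the same credentials straight against Keystone, then against Swift with explicit v2.0 auth options. A sketch using the placeholder values from the logs above ('ip', 'admin'/'password' are from the post, not real credentials):

```shell
# Ask Keystone directly for a token with the credentials the proxy logged;
# if this also returns 401, the problem is the user/tenant, not Swift
curl -s -H "Content-Type: application/json" \
     -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}' \
     http://ip:5000/v2.0/tokens

# Then retry swift with explicit Keystone v2 settings rather than env vars
swift --auth-version 2 \
      --os-auth-url http://ip:5000/v2.0 \
      --os-username admin --os-password password --os-tenant-name admin \
      stat
```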
Re: [Openstack] question on the GRE Communication
For Quantum GRE tunneling, the network node and compute nodes each need a NIC on your data network. You assign each of those NICs an IP (for instance, 192.168.1.1-3). Then (assuming you are using openvswitch with GRE tunneling) you set up your quantum configs; look at the Quantum administration guide for an example of GRE tunneling with openvswitch. After that you just let it work its magic. All VM traffic is encapsulated inside GRE packets traveling between the nodes; it will all look like packets on that 192.168.1.x network. Once a packet reaches its destination node, the GRE encapsulation is removed and the VM packet is read.
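The data path can be inspected on any node to confirm the description above. A hedged sketch (the bridge and NIC names are common OVS-plugin defaults, not taken from Arumon's setup):

```shell
# The OVS agent builds a mesh of GRE ports on the tunnel bridge, one per peer
ovs-vsctl show            # look for gre-<n> ports on br-tun

# On the data-network NIC, inter-VM traffic shows up as GRE (IP protocol 47)
# between the node addresses, e.g. 192.168.1.2 <-> 192.168.1.3
tcpdump -n -i eth1 'ip proto 47'
```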
[Openstack] Instance snapshotting error
Hi all, I use openstack (folsom release) + XCP + ubuntu 12.04. I tried to create a snapshot of an ubuntu instance. Unfortunately, I get this error. Any ideas how to deal with this error? - 2013-03-12 12:54:02 AUDIT nova.compute.manager [req-39dc25ff-686d-46b8-98ac-29b3b08dbe94 admin demo] [instance: 945a44a0-2001-4165-a66c-a40ae552ab12] instance snapshotting 2013-03-12 12:54:36 ERROR nova.openstack.common.rpc.amqp [req-39dc25ff-686d-46b8-98ac-29b3b08dbe94 admin demo] Exception during message handling 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last): 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 393, in _process_data 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp rval = self.proxy.dispatch(ctxt, version, method, **args) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 133, in dispatch 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/exception.py, line 117, in wrapped 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp temp_level, payload) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/exception.py, line 94, in wrapped 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/compute/manager.py, line 210, in decorated_function 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp pass 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, 
in __exit__ 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/compute/manager.py, line 196, in decorated_function 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/compute/manager.py, line 238, in decorated_function 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info()) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 24, in __exit__ 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/compute/manager.py, line 225, in decorated_function 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/compute/manager.py, line 1651, in snapshot_instance 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp self.driver.snapshot(context, instance, image_id, update_task_state) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/virt/xenapi/driver.py, line 194, in snapshot 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp self._vmops.snapshot(context, instance, image_id, update_task_state) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/virt/xenapi/vmops.py, line 712, in snapshot 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp update_task_state) as vdi_uuids: 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /usr/lib/python2.7/contextlib.py, line 17, in __enter__ 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp return self.gen.next() 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 627, in snapshot_attached_here 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp original_parent_uuid) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp File /opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1831, in _wait_for_vhd_coalesce 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp raise exception.NovaException(msg) 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp NovaException: VHD coalesce attempts exceeded (5), giving up... 2013-03-12 12:54:36 TRACE nova.openstack.common.rpc.amqp 2013-03-12 12:55:03 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources --- Thank you, Afef ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] Instance snapshotting error
It seems as though the plugin is waiting for a coalesce to happen after the snapshot - and timing out. Could you confirm if the coalesce actually happened on the XenServer host, or if there were errors when attempting to coalesce? (probably from /var/log/SMlog) It might be worth checking the integrity of the SR by running an sr-scan on the storage repository in question and checking the output of /var/log/SMlog. Thanks, bob From: openstack-bounces+bob.ball=citrix@lists.launchpad.net [mailto:openstack-bounces+bob.ball=citrix@lists.launchpad.net] On Behalf Of Afef MDHAFFAR Sent: 12 March 2013 13:02 To: openstack@lists.launchpad.net Subject: [Openstack] Instance snapshotting error [snip - quoted traceback identical to the original message above]
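The sr-scan suggested in the reply above is run from the XenServer/XCP host's CLI. A sketch of the commands follows; the SR name-label is an assumption - pick the SR that actually backs the instance's disks:

```
# List SRs and grab the uuid of the one backing the instance's VDIs.
xe sr-list name-label="Local storage" --minimal
# Force a scan, which also kicks off garbage collection / coalesce.
xe sr-scan uuid=<sr-uuid>
# Then watch the storage manager log for coalesce progress or errors.
tail -f /var/log/SMlog | grep -iE 'coalesce|error'
```

If SMlog shows repeated coalesce exceptions for the same VHD chain, that matches the "VHD coalesce attempts exceeded (5)" NovaException in the traceback.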
Re: [Openstack] nova-manage service list issues
We'd hope that there was something output to one of the log files in this case. Is there anything that seems suspicious? Also, is this through a manual setup, or are you running devstack? From: openstack-bounces+bob.ball=citrix@lists.launchpad.net [mailto:openstack-bounces+bob.ball=citrix@lists.launchpad.net] On Behalf Of Ashutosh Narayan Sent: 12 March 2013 07:41 To: OpenStack Subject: Re: [Openstack] nova-manage service list issues Does anybody have any pointers in this regard? Thank you, On Fri, Mar 8, 2013 at 10:09 PM, Ashutosh Narayan aashutoshnara...@gmail.com wrote: Hi folks, I am following these instructions http://docs.openstack.org/trunk/openstack-compute/install/yum/content/compute-verifying-install.html but the State column of nova-manage service list shows XXX, which means there is some issue with time synchronization via NTP. All the services are running on the same host, which is actually a virtual machine running on a Xen Server. Here is the output of nova-manage service list: [root@RLD1OPST01 ~]# nova-manage service list Binary Host Zone Status State Updated_At nova-scheduler RLD1OPST01 nova enabled XXX None nova-cert RLD1OPST01 nova enabled XXX None nova-network RLD1OPST01 nova enabled XXX None nova-compute RLD1OPST01 nova enabled XXX None nova-console RLD1OPST01 nova enabled XXX None ntpq outputs that the system is synchronized with the local clock: [root@RLD1OPST01 ~]# ntpq -pn remote refid st t when poll reach delay offset jitter == 192.168.105.61 .INIT. 16 u - 64 0 0.000 0.000 0.000 120.88.47.10 193.79.237.14 2 u 61 64 17 33.933 19.644 1.869 202.71.140.36 62.245.153.20 3 u - 64 37 31.628 24.737 3.634 *127.127.1.0 .LOCL. 10 l - 64 37 0.000 0.000 0.001 I can't boot an image with compute until the state of the services is synchronized. Please suggest.
Thank you, -- Ashutosh Narayan http://ashutoshn.wordpress.com/
[Openstack] Keystone - Domain admin role policies?
Hi, While studying keystone v3 and the domains feature, I realized that the current policy.json file has no domain_admin role as I was expecting. I wonder if this role will be defined in the Grizzly timeframe, or how do you envision domain_admin role enforcement. Thanks in advance! Glaucimar Aguiar
[Openstack] Tempest for Folsom with QUANTUM
Hi All, I am using the https://github.com/openstack/tempest/tree/stable/folsom release of Tempest. I ran the tests under /tempest-stable-folsom/tempest/ using: nosetests tests/ It worked fine for most of the test cases. After this I tried to run: nosetests tests/network/ It gives me an error every time; please find the attached file for that. I suspect my tempest.conf file is not correct, or there is some version problem. I just copied the tempest-stable-folsom/etc/tempest.conf.sample file and made changes. Please go through the attached conf file once. Does this configuration file contain sufficient fields to support Quantum (in Folsom) testing? Please find tempest.conf too in the attachments. I ran this command to see the versions of all services running on my setup: pip freeze | grep python- Warning: cannot find svn location for distribute==0.6.28dev-r0 python-apt==0.8.7ubuntu4 python-cinderclient==1.0.0 python-cloudfiles==1.7.9.2 python-daemon==1.5.5 python-dateutil==1.5 python-debian==0.1.21-nmu2ubuntu1 python-gflags==1.5.1 python-glanceclient==0.5.1 python-keystoneclient==0.1.3 python-ldap==2.4.10 python-memcached==1.48 python-novaclient==2.9.0 python-novnc==2012.1-e3 python-openid==2.2.5 python-quantumclient==2.1 python-swiftclient==1.2.0 I installed the Folsom setup from this guide: https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst Can I use the tempest/tests/network test cases if I am running Folsom with Quantum? All other tests run fine. Also, will these tempest/tests/network test cases run only on Essex? If so, can I add my own tests to Tempest to work with Folsom with Quantum, and what would be required to do so? I want to understand the existing network tests and then add some of my own. It would be very helpful if someone could guide me in tracing the flow of the code, advise what to add in which directory, or share some docs for this. Thanks and Regards, Girija Sharan Singh [attachment: network-test-errors] [attachment: tempest.conf]
Re: [Openstack] nova-manage service list issues
Hi Bob, No, it was a manual setup; I was following the instructions mentioned here: http://docs.openstack.org/trunk/openstack-compute/install/yum/content/compute-verifying-install.html . I am running all the services on a single host, which is actually a virtual machine running on a Xen Server. The time is synchronized to the local clock. Whenever I run ntpstat it says time out, but ntpq -pn gives this output: [root@test keystone]# ntpq -pn remote refid st t when poll reach delay offset jitter == 192.168.105.61 .INIT. 16 u - 1024 0 0.000 0.000 0.000 +123.108.225.6 209.51.161.238 2 u 206 1024 377 30.935 33.969 24.089 *120.88.47.10 193.67.79.202 2 u 264 1024 377 56.993 -20.153 45.821 127.127.1.0 .LOCL. 10 l 5 64 377 0.000 0.000 0.001 Here there is too much jitter with the server bearing IP 120.88.47.10. What if I set localhost as the only source of time in ntp.conf? Won't this help all the nova-* services synchronize, since I am running all services on a single host? I guess this is the reason why I can't boot an image using nova boot. On Tue, Mar 12, 2013 at 6:54 PM, Bob Ball bob.b...@citrix.com wrote: [snip - quoted message identical to Bob's reply above] -- Ashutosh Narayan http://ashutoshn.wordpress.com/
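On the "local clock only" idea above: ntpd will serve the undisciplined local clock as a last-resort reference, which is usually enough for all nova-* services on a single host to agree on time. A sketch of the relevant /etc/ntp.conf lines follows; the upstream pool server is an example, not taken from the thread:

```
# Local clock as a low-priority fallback so the host always has a source.
server 127.127.1.0
fudge  127.127.1.0 stratum 10
# Keep a real upstream if the VM has network access; drop high-jitter peers.
server 0.pool.ntp.org iburst
```

After restarting ntpd, the reach column in ntpq -pn should climb toward 377, and nova-manage service list should flip from XXX to :-) once the service heartbeats are fresh again.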
Re: [Openstack] Keystone - Domain admin role policies?
Please note that I know one can edit policy.json to define domain-admin role permissions, but with the implementation of domains it seems that domain-admin role permissions should come predefined, the same way the admin role permissions are. Thanks in advance, Glaucimar Aguiar -Original Message- From: openstack-bounces+glaucimar.aguiar=hp@lists.launchpad.net [mailto:openstack-bounces+glaucimar.aguiar=hp@lists.launchpad.net] On Behalf Of Aguiar, Glaucimar (Brazil RD-ECL) Sent: Tuesday, 12 March 2013 10:34 To: openstack@lists.launchpad.net Subject: [Openstack] Keystone - Domain admin role policies? [snip - quoted message identical to the original above]
Re: [Openstack] Agent out of sync with plugin!
On 03/12/2013 01:35 PM, Greg Chavez wrote: Logan, thanks for your reply. I've been very conscientious about NTP, so I'm confident that is not the issue. Gary: so in this case the agent == quantum-plugin-openvswitch-agent? [Gary: yes] And the plugin == quantum-server? [Gary: yes, the Quantum service runs the plugin.] That's confusing. And what you're saying is that the ovs plugin/agent - whatever it is - simply assumes it's out of sync when it starts up, and phones home. Is that right? Thanks! Yes, that is correct. Even the first time that it starts it is out of sync :) On Tue, Mar 12, 2013 at 3:18 AM, Gary Kotton gkot...@redhat.com wrote: [snip - quoted message identical to the earlier thread above] -- \*..+.- --Greg Chavez +//..;};
Re: [Openstack] [QUANTUM] (Bug ?) L3 routing not correctly fragmenting packets ?
On 12/03/2013 13:12, Robert van Leeuwen wrote: I thought about it, but have not yet tried. Which OVS port would you recommend increasing the MTU on? On the network node (br-ex or qg-), or on the compute node (br-int)? You need to set it on the compute nodes (int-br-ethX) and possibly an extra port on the routing node. (We use a bridge-mapped network to connect to the outside world, and phy-br-eth1 needs to be set.) Cheers, Robert I got it: my issue was isolated to the TCP responses on the backend GRE tunnel. I then only increased the MTU to 1546 on the physical ethernet device (here, eth0), so that the 46-byte encapsulation does not cause fragmentation. Here is the tcpdump trace, where we can see the GRE header overhead: 15:01:34.322883 IP (tos 0x0, ttl 64, id 36074, offset 0, flags [DF], proto GRE (47), length 1546) 172.16.0.2 172.16.0.4: GREv0, Flags [key present], key=0x1, length 1526 IP (tos 0x0, ttl 48, id 26871, offset 0, flags [DF], proto TCP (6), length 1500) X.X.X.X.80 10.0.0.4.41507: Flags [P.], cksum 0x6a90 (correct), seq 1420:2880, ack 412, win 3456, length 1460 Here, 172.16.0.2 is the Quantum network node and 172.16.0.4 is the compute node (internal IPs). As the packet was being fragmented due to the GRE encapsulation, I only changed eth0 on the network node to get things done: ip link set eth0 mtu 1546 Now it works. I assume it's not 100% perfect, being an ugly hack, but it works around the GRE header overhead when Path MTU discovery fails. Thanks all for your help, -Sylvain
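The 46-byte figure and the 1546 MTU in this message can be sanity-checked with simple arithmetic. A sketch, assuming the usual header sizes for an OVS GRE tunnel carrying a VLAN-tagged Ethernet frame:

```python
# Per-packet overhead of GRE-encapsulating a tagged Ethernet frame,
# matching the 1546-byte outer packets in the tcpdump trace above.
OUTER_IPV4 = 20      # outer IPv4 header
GRE_WITH_KEY = 8     # 4-byte GRE base header + 4-byte key (OVS sets a key)
INNER_ETHERNET = 14  # encapsulated Ethernet header
VLAN_TAG = 4         # 802.1Q tag on the inner frame

overhead = OUTER_IPV4 + GRE_WITH_KEY + INNER_ETHERNET + VLAN_TAG
print(overhead)         # 46
print(1500 + overhead)  # 1546 - the MTU set on eth0 in the thread
```

This matches the capture: the outer IP datagram is 1546 bytes while the inner IP packet is the standard 1500, so raising the physical NIC's MTU by 46 bytes avoids fragmenting the tunnel packets.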
[Openstack] Call for help on Grizzly documentation
Hi all, You all did great with DocImpact, but now that we're less than a month from release, the tiny doc team is facing a long list of doc bugs that won't be done by April 4th, many generated by DocImpact flags. We typically release the docs about a month after the actual release date, to ensure packages are available and to get our doc-bug backlog down to a manageable level. As you can see from our backlog for operator docs in openstack-manuals [1] and API docs in api-site [2], there are over 50 confirmed doc bugs for Grizzly operator and admin docs and fewer than 20 for API docs. With those numbers we need all the help we can get. Please dive in; the patch process is just like code and fully documented. [3] We're on IRC in #openstack-doc and can answer any questions you have as you go. Thanks! Anne, Tom, Diane, Laura, Emilien, Daisy, and all the other doc peeps 1. https://launchpad.net/openstack-manuals/+milestone/grizzly 2. https://launchpad.net/openstack-api-site/+milestone/grizzly 3. http://wiki.openstack.org/Documentation/HowTo
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
On Tue, Mar 12, 2013 at 8:01 PM, Jay Pipes jaypi...@gmail.com wrote: On 03/12/2013 12:55 AM, Girija Sharan wrote: Hi Masayuki, Thanks a lot for your early response. I am using Openstack Folsom and not Devstack. I am not getting how to integrate Tempest for Folsom. As you mentioned there is tempest.conf.sample file but in that there is a field which requires path to nova source directory (source_dir = /opt/stack/nova). What path I have to give for Folsom setup? That part of the configuration file is for whitebox tests. If you are running Tempest on a controller node that has the Nova API service installed on it, set the value to the location of the Nova source code. If you leave it as-is the whitebox tests will simply be skipped. Is this https://github.com/openstack/tempest/blob/stable/folsom/etc/tempest.conf.samplefile for Folsom ? Yes, whatever is in the folsom stable branch is intended for execution against Folsom OpenStack deployments. But the tests in *tempest-stable-folsom/tempest/tests/network *are not running in Folsom with Quantum. All other tests are running fine. Someone said that this stable-folsom release of tempest is not for testing Quantum in Folsom. Is it true ? If yes then how do I test my Quantum in Folsom deployment using Tempest ? Thanks and Regards, Girija Sharan Singh Best, -jay It will be very helpful for me if you can provide little more explanation of the respective fields or if you can suggest any document for that. Thanks and Regards, Girija Sharan Singh On Tue, Mar 12, 2013 at 7:05 AM, Masayuki Igawa masayuki.ig...@gmail.com mailto:masayuki.ig...@gmail.com wrote: Hi, On Tue, Mar 12, 2013 at 1:14 AM, Girija Sharan girijasharansi...@gmail.com mailto:girijasharansi...@gmail.com wrote: Hi All, I am trying to do Integration testing of Openstack environmment (Folsom) using Tempest. On controller node I am having my Tempest root directory and I have also installed Coverage, Tissue and Openstack-nose as it was mentioned in the setup.cfg file. 1. 
Do I have to use this release https://github.com/openstack/tempest/tree/stable/folsom? I think Yes, essex or master is not suitable for your environmment(Folsom). 2. I am not getting how to configure tempest.conf for Folsom set up. For Essex we were having localrc file but here in Folsom we are not having any such thing. Is there any way to know what values to pass to respective fields in tempest.conf ? tempest.conf.sample has the explanation of the respective fields. https://github.com/openstack/tempest/blob/stable/folsom/etc/tempest.conf.sample Is this not good enough? By the way, do you use devstack?(I think localrc is devstack's file.) If so, localrc is present in your devstack's directory, and tempest.conf is set up in your $TEMPEST_INSTALL_DIR/etc/tempest.conf automatically. Any help will be highly appreciated. Thanks and Regards, Girija Sharan Singh ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net mailto:openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp Regards, -- Masayuki Igawa ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] Dashboard login page doesn't show up
Doc bug logged https://bugs.launchpad.net/openstack-manuals/+bug/1154141about the Dashboard (Horizon) outdated instructions. Feel free to pick it up and fix. Anne On Tue, Mar 12, 2013 at 7:01 AM, Ashutosh Narayan aashutoshnara...@gmail.com wrote: On Tue, Mar 12, 2013 at 4:46 PM, Matthias Runge mru...@redhat.com wrote: On 03/12/2013 12:02 PM, Ashutosh Narayan wrote: The web page keeps on waiting and error_logs shows me what I posted earlier. Are you taking the credentials from your keystone? Yes, I am taking credentials from keystone Please verify your keystone settings in /etc/openstack-dashboard/local_settings: grep OPENSTACK_HOST /etc/openstack_dashboard/local_settings It points to keystone IP which in my case is 192.168.105.61 so I have edited it to one as seen below :- [root@] openstack-dashboard]# grep OPENSTACK_HOST local_settings OPENSTACK_HOST = 192.168.105.61 OPENSTACK_KEYSTONE_URL = http://%s:5000/v2.0; % OPENSTACK_HOST (Should point to your keystone) It's pointing to the keystone. A side note: the config steps in the docs are not applicable at all. Is there are work around ? Around that additional config step? No! it's not required. I am now able to login to dashboard after restarting nova-* services. But yes, I have to troubleshoot some steps to get all components working. 
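The keystone pointer being grepped in this thread is plain Python, so the URL is derived from OPENSTACK_HOST at import time. A minimal sketch of the relevant /etc/openstack-dashboard/local_settings lines, using the address from the thread (the default-role line is a common Folsom-era default, not quoted in the thread):

```python
# local_settings is executed as Python; the keystone URL is built
# from the host address below.
OPENSTACK_HOST = "192.168.105.61"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

print(OPENSTACK_KEYSTONE_URL)  # http://192.168.105.61:5000/v2.0
```

Since the file is executed, a syntax error or a stale OPENSTACK_HOST here shows up only when the webserver reloads the dashboard, which is why restarting services after editing it matters.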
Matthias Matthias [root@RLD1OPST01 ~]# keystone endpoint-list +--+---+--+--+-+--+ |id| region | publicurl | internalurl | adminurl | service_id| +--+---+--+--+-+--+ | 50bb93035d9a4a18aacafdd895906f10 | RegionOne | http://192.168.105.61:/v1/AUTH_%(tenant_id)s | http://192.168.105.61:/v1/AUTH_%(tenant_id)s | http://192.168.105.61:/v1| 76772ef3f79b4648981f19de39d4cab1 | | 749974748ce8482bb2339954631baea5 | RegionOne | http://192.168.105.61:8773/services/Cloud | http://192.168.105.61:8773/services/Cloud | http://192.168.105.61:8773/services/Admin | 75748b8502964bf8aab214c074058a08 | | 80a940e76e314148bc349578ce8eadf7 | RegionOne | http://192.168.105.61:8776/v1/%(tenant_id)s| http://192.168.105.61:8776/v1/%(tenant_id)s| http://192.168.105.61:8776/v1/%(tenant_id)s | a5da6ff105184cbda9a3619d41d297f6 | | 897375e2d299416db3281d1b8baeabc1 | RegionOne | http://192.168.105.61:8774/v2/%(tenant_id)s| http://192.168.105.61:8774/v2/%(tenant_id)s| http://192.168.105.61:8774/v2/%(tenant_id)s | 5d144f9df18d4b35810afb4199b1a321 | | cc2fd4c2b1314cfb958e8702139a960c | RegionOne | http://192.168.105.61:5000/v2.0 | http://192.168.105.61:5000/v2.0 | http://192.168.105.61:35357/v2.0 | d307d545040e4bda8ace9a7fd3581cb2 | | e2d7819ff84746dc8e79c76bf308f2cd | RegionOne | http://192.168.105.61:9292| http://192.168.105.61:9292| http://192.168.105.61:9292 | 96299197a84a43d7a3932c5c6ae53ca0 | +--+---+--+--+-+--+ Thank you, On Tue, Mar 12, 2013 at 3:59 PM, Matthias Runge mru...@redhat.com mailto:mru...@redhat.com mailto:mru...@redhat.com mailto:mru...@redhat.com wrote: On 03/12/2013 11:20 AM, Ashutosh Narayan wrote: Hi folks, I am trying to install dashboard as per instructions mentionedhere http://docs.openstack.org/trunk/openstack-compute/install/yum/content/installing-openstack-dashboard.html . 
First of all, the /etc/openstack-dashboard/local_settings.py file doesn't exist; instead, /etc/openstack-dashboard/local_settings is present. Second, the file /etc/sysconfig/memcached.conf is not present; instead I see the /etc/sysconfig/memcached file. Indeed, local_settings is the right file; that's a typo in the docs. What is shown? Do you see an error? Is the webserver running? lsof -i is
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
On 03/12/2013 11:14 AM, Girija Sharan wrote: But the tests in tempest-stable-folsom/tempest/tests/network are not running in Folsom with Quantum. All other tests are running fine. Someone said that this stable-folsom release of tempest is not for testing Quantum in Folsom. Is it true? If yes, then how do I test my Quantum-in-Folsom deployment using Tempest? I'm sorry, I don't know how to answer your question without seeing the errors you are getting when running Tempest. -jay
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
Hi, Thanks again. I am getting this error whenever I run the tempest-stable-folsom/tempest/tests/network tests:

ERROR: test suite for <class 'tempest.tests.network.test_networks.NetworksTest'>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
    self.setUp()
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
    self.setupContext(ancestor)
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in setupContext
    try_run(context, names)
  File "/usr/lib/python2.7/dist-packages/nose/util.py", line 478, in try_run
    return func()
  File "/home/controller/tempest-stable-folsom/tempest/tests/network/test_networks.py", line 28, in setUpClass
    super(NetworksTest, cls).setUpClass()
  File "/home/controller/tempest-stable-folsom/tempest/tests/network/base.py", line 39, in setUpClass
    client.list_networks()
  File "/home/controller/tempest-stable-folsom/tempest/services/network/json/network_client.py", line 13, in list_networks
    resp, body = self.get('networks')
  File "/home/controller/tempest-stable-folsom/tempest/common/rest_client.py", line 166, in get
    return self.request('GET', url, headers)
  File "/home/controller/tempest-stable-folsom/tempest/common/rest_client.py", line 203, in request
    raise exceptions.NotFound(resp_body)
NotFound: Object not found
Details: Object not found
Details: 404 Not Found
The resource could not be found.

begin captured logging
tempest.config: INFO: Using tempest config file /home/controller/tempest-stable-folsom/etc/tempest.conf
tempest.common.rest_client: ERROR: Request URL: http://192.168.2.170:9696/v2.0/tenants/09b32430cf8548ec8472d29a79fe2ddd/networks
tempest.common.rest_client: ERROR: Request Body: None
tempest.common.rest_client: ERROR: Response Headers: {'date': 'Tue, 12 Mar 2013 11:47:32 GMT', 'status': '404', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'}
tempest.common.rest_client: ERROR: Response Body: 404 Not Found The resource could not be found. 
end captured logging

Ran 0 tests in 0.186s
FAILED (errors=1)

Please help me out. Thanks and Regards, Girija Sharan Singh
Re: [Openstack] Swift with Keystone problem
Give your whole proxy.conf here. On Tue, Mar 12, 2013 at 8:54 PM, Adam Huffman adam.huff...@gmail.com wrote: I'm having trouble with Swift, using Keystone auth, on Folsom. When I try something simple like 'swift stat', there are two errors. Firstly a logging error:

<147>proxy-server STDOUT: No handlers could be found for logger "keystone.middleware.auth_token"

More importantly, the authorization fails:

Account HEAD failed: http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 401 Unauthorized

With SWIFTCLIENT_DEBUG set, I can see there are two separate requests:

curl -i http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 -X HEAD -H "X-Auth-Token: da38c4407cff40b69f236ef0da9d73e8"

and two instances of:

curl -i http://ip:8080/v1/AUTH_dfb9c6d687be4d34bceee256cc3cb123 -X HEAD -H "X-Auth-Token: 0fc76ee28c2e43f0929c7c3ef158830d"

The proxy-server log for these requests is "proxy-server Authorizing as anonymous", which is puzzling. The keystone log shows that real local credentials are being sent:

2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] REQUEST BODY
2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] {"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}

then

2013-03-12 12:46:11 WARNING [keystone.common.wsgi] Authorization failed. Invalid user / password from ip
2013-03-12 12:46:11 DEBUG [keystone.common.wsgi] {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}

Keystone auth works for all the other services. Any suggestions appreciated. 
Adam -- Gareth, Cloud Computing, Openstack, Fitness, Basketball. Novice Openstack contributor. My promise: if you find any spelling or grammar mistake in my email from Mar 1 2013, notice me and I'll donate 1$ or 1¥ to an open organization specified by you.
Re: [Openstack] Swift with Keystone problem
[DEFAULT]
bind_port = 8080
bind_ip = ip
workers = 24
user = swift
set log_level = DEBUG
log_facility = LOG_LOCAL2

[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
signing_dir = /etc/swift
auth_host = ip
auth_port = 35357
auth_protocol = http
auth_uri = http://ip:5000
# if its defined
admin_tenant_name = services
admin_user = swift
admin_password = password
delay_auth_decision = 1

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:ratelimit]
use = egg:swift#ratelimit
clock_accuracy = 1000
max_sleep_time_seconds = 60
log_sleep_time_seconds = 0
rate_buffer_seconds = 5
account_ratelimit = 0

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without access_ will be used
access_log_name = swift
access_log_facility = LOG_LOCAL2
access_log_level = DEBUG

On Tue, Mar 12, 2013 at 4:15 PM, Gareth academicgar...@gmail.com wrote: Give your whole proxy.conf here. On Tue, Mar 12, 2013 at 8:54 PM, Adam Huffman adam.huff...@gmail.com wrote: I'm having trouble with Swift, using Keystone auth, on Folsom. 
[Openstack] Zenoss + Openstack Folson
The Zenoss information form is asking for an API Key and I do not know how to enable it. I am OK with the other information it asks for: Username, Project ID, Auth URL, Region Name. Alex Vitola @alexvitola
[Openstack] swift containers panel permissions?
Can someone point me to docs describing how to add/modify/delete permissions for a Horizon panel? I want a non-admin user to be able to access the Swift object-store containers panel in Horizon. Currently, the containers panel.py has the permissions set to: permissions = ('openstack.services.object-store',) Only users with the Admin role seem to have access to this panel. Can this be changed, and if so, where do I look to make the changes? Also, in general, it's pretty ugly for the WSGI server to barf up an Internal Server Error for a simple permissions issue. Has anyone considered making Nova/Horizon fail a little more gracefully in the face of errors rather than the current HTTP 500 status messages? thanks, Wyllys Ingersoll eVault
Re: [Openstack] Configuring More-than-One Cinder Node
Logan, Thank you for the response! The high availability configuration documentation will be useful. The immediate problem is less about high availability than performance enhancement. Using the nova-controller and -compute model, I have one cinder node running the api, scheduler, and volume services and the others running only the volume service. The error in my configuration was that the iscsi_ip_address value was set incorrectly; it pointed to the IP address of the controller node. Changing it to the IP address of the host itself solved the problem. As a recap, this is what I think it all means. To run cinder services for a cluster on multiple nodes, pick one to be the controller and run: openstack-cinder-api, openstack-cinder-scheduler, openstack-cinder-volume, tgtd. On the rest of the cinder nodes, run: openstack-cinder-volume, tgtd. I expect this to change for a high availability configuration. Is there, however, anything particularly wrong with the configuration I described? Thanks, Craig On 03/11/2013 06:15 PM, Logan McNaughton wrote: You'll want to look here: http://docs.openstack.org/trunk/openstack-ha/content/s-cinder-api.html You'll basically need to create a virtual IP and load balance between the nodes running cinder-api and cinder-scheduler. If you want multiple nodes running cinder-volume, you can add them regularly, like you would with a nova-compute node. On Mar 11, 2013 6:51 PM, Debashis Kundu (dkundu) dku...@cisco.com wrote: - Original Message - From: Craig E. Ward [mailto:cw...@isi.edu] Sent: Monday, March 11, 2013 05:23 PM To: openstack@lists.launchpad.net Subject: [Openstack] Configuring More-than-One Cinder Node I have an installation that wants to deploy two or more cinder nodes within an OpenStack (Folsom) cluster. All of the hits I find on Google for configuring cinder only describe how to configure the software for a single node. Is it even possible to have more than one node running the cinder services in a cluster? 
The setup I have has one of the cinder nodes identified as the cinder node to the compute and other node types. A second node was installed and the cinder services started. On the nova controller node, however, while a new volume could be created that was listed in the MySQL database as on the second node, all attempts to attach that volume to an instance silently failed. The nova volume-attach command would come back with an id and mapping of instance to volume, but the very next nova volume-list command continued to show the volume in question as available. If the second cinder node had the cinder-volume service running, volumes on that node could be deleted. If cinder-volume was not running, the delete would go on forever. Everything works as expected with only the cinder node configured in nova.conf running, i.e. as a single cinder node installation. Volumes can be created, attached, used, detached, and deleted. Are there some extra parameters that should be set in either nova.conf or cinder.conf to indicate that the cinder services are available on more than one node? Or is what we're trying to do something unexpected and not supported? Thanks, Craig -- Craig E. Ward USC Information Sciences Institute 310-448-8271 cw...@isi.edu
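Craig's recap above can be captured as a per-node config fragment. The following is only a sketch - the IP addresses, password, and volume group name are placeholders - but it shows the one option that caused the silent attach failures: iscsi_ip_address must be each volume node's own address, not the controller's.

```ini
# /etc/cinder/cinder.conf on a secondary (volume-only) node -- sketch, placeholder values
[DEFAULT]
# shared services live on the controller (192.168.1.50 is a placeholder)
sql_connection = mysql://cinder:password@192.168.1.50/cinder
rabbit_host = 192.168.1.50
# advertise THIS node's own IP for iSCSI targets; pointing this at the
# controller was the misconfiguration described above
iscsi_ip_address = 192.168.1.51
iscsi_helper = tgtadm
volume_group = cinder-volumes
```

Only the controller additionally runs cinder-api and cinder-scheduler; the volume-only nodes just need cinder-volume and tgtd against a config like this.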
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
On 03/12/2013 05:26 PM, Girija Sharan wrote: Hi, Thanks again. I am getting this error whenever I run the tempest-stable-folsom/tempest/tests/network tests. [traceback snipped] 
Regarding your question about tempest.conf: in order to work with Folsom OpenStack, you just need to make sure you have 2 non-admin users and tenants and 2 active images, then copy etc/tempest.conf.sample to etc/tempest.conf and change the following: uri; username, password, tenant_name; alt_username, alt_password, alt_tenant_name; admin_username, admin_password; image_ref, image_ref_alt. It will work for most of the tests. If you're working with Quantum, catalog_type under the [network] section should be set to network; for nova-network you should *probably* set it to compute. Regarding the Quantum support in tempest - I'm not familiar with that (at least not yet; anyone else can help here?) 
but if you would like to perform the same operation with nova-network, the networks list currently won't work, since the JSON/XML clients (which should call the os-networks API entry point as described in http://api.openstack.org/api-ref.html) are not implemented - they return a 404 page too. But in case it is implemented, I think the result you get in the above exception can also indicate that the tenant you refer to does not exist and should be configured in tempest.conf. -- Thanks, Rami Vaknin, QE @ Red Hat, TLV, IL.
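Rami's checklist above maps onto a tempest.conf roughly like the following. This is a sketch with placeholder credentials and image UUIDs; the section placement follows the Folsom-era etc/tempest.conf.sample, so verify the option names against your copy of the sample file.

```ini
# etc/tempest.conf -- sketch; all values are placeholders
[identity]
uri = http://192.168.2.170:5000/v2.0/

[compute]
username = demo1
password = secret
tenant_name = demo1
alt_username = demo2
alt_password = secret
alt_tenant_name = demo2
image_ref = <uuid-of-first-active-image>
image_ref_alt = <uuid-of-second-active-image>

[compute-admin]
username = admin
password = secret

[network]
# "network" when Quantum serves the network API; "compute" for nova-network
catalog_type = network
```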
Re: [Openstack] swift containers panel permissions?
Hi Wyllys, On 13 March 2013 04:19, Wyllys Ingersoll wyllys.ingers...@evault.com wrote: Can someone point me to docs describing how to add/modify/delete permissions for a Horizon panel? I want a non-admin user to be able to access the Swift object-store containers panel in Horizon. Currently, the containers panel.py has the permissions set to: permissions = ('openstack.services.object-store',) Only users with the Admin role seem to have access to this panel. Can this be changed, and if so, where do I look to make the changes? This permission comes from your keystone service catalog. If you have an object-store entry in your catalog, then all users should see this. Also, in general, it's pretty ugly for the WSGI server to barf up an Internal Server Error for a simple permissions issue. Has anyone considered making Nova/Horizon fail a little more gracefully in the face of errors rather than the current HTTP 500 status messages? It generally does. I suspect there's something else going on in your case, possibly a configuration issue with keystone or swift itself. Paste the error here and we might be able to work out what's going on: http://paste.openstack.org/ Cheers, Kieran
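Kieran's point - that the panel permission is derived from the keystone service catalog rather than from a role - can be illustrated with a small sketch. The catalog data below is hypothetical, and this is not Horizon's actual code; Horizon builds 'openstack.services.<type>' strings from the service types in the user's token.

```python
# Sketch: derive Horizon-style service permissions from a keystone
# service catalog (hypothetical example data, not Horizon's real implementation).
def service_permissions(catalog):
    """Return the 'openstack.services.<type>' string for each catalog entry."""
    return {"openstack.services.%s" % entry["type"] for entry in catalog}

catalog = [
    {"type": "object-store", "name": "swift"},
    {"type": "compute", "name": "nova"},
]

perms = service_permissions(catalog)
# A user whose catalog lacks an object-store endpoint never receives this
# permission, so the containers panel stays hidden regardless of role.
print("openstack.services.object-store" in perms)  # True
```

So the thing to check is whether the non-admin user's tenant actually gets the swift endpoint in its service catalog, not the user's role.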
[Openstack] Can't register for forums.openstack.org
If anybody from the forums is on here, or can forward this message to the forum admins: I have been trying for weeks to register, from multiple IP address ranges (work, home, and cellular). I keep getting the following error: Your IP 96.60.255.159 has been blocked because it is blacklisted. For details please see http://search.atlbl.com/search.php?q=96.60.255.159. { IP_BLACKLISTED_INFO } search.atlbl.com goes to a parked website, so I'm guessing the registration blacklist is actually broken.
[Openstack] Help with simplest Quantum setup possible...
Hi! Sorry about the double posting... I need help! :-P I'm trying, without any kind of success, to deploy OpenStack with Quantum in its simplest scenario, I think, which is `Single Flat' with the `Linux Bridge' plugin. My topology is: 1 firewall with 2 ethX (eth0 public, eth1 10.32.14.1 and 10.33.14.1); 1 controller with 1 eth0 (10.32.14.232/24, gateway 10.32.14.1); 1 node with 1 eth0 (10.32.14.234/24, gateway 10.32.14.1). Instances network: 10.33.14.0/24 (the instances' gateway must be 10.33.14.1, the same router as the physical servers above, NOT the instance's own host hypervisor). I'm trying this: http://docs.openstack.org/trunk/openstack-network/admin/content/demo_flat_installions.html - it doesn't work... even enabling OpenvSwitch (but I don't want it for now, only Quantum instead of nova-network, with Linux Bridge for the sake of simplicity). The following guide helped me a lot (with nova-network everything is fine): http://openstack-folsom-install-guide.readthedocs.org/en/latest/ - I'm trying to follow it, replacing the nova-network instructions with Quantum instructions, but it doesn't work... Any docs or tips? NOTE: I do not want any kind of NAT (like nova-network multi=true) or `Floating IPs' within my Cloud Computing environment. Thanks! Thiago
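For reference, the Single Flat / Linux Bridge combination being attempted here normally needs a plugin config along these lines. This is a sketch: the section names follow the Folsom linuxbridge_conf.ini, while 'physnet1', the eth0 mapping, and the database URL are assumptions matching the topology above - double-check them against the installed sample file.

```ini
# /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini -- sketch
[VLANS]
tenant_network_type = flat
# provider network name usable in net-create calls
network_vlan_ranges = physnet1

[LINUX_BRIDGE]
# map the provider network name to the NIC carrying 10.33.14.0/24
physical_interface_mappings = physnet1:eth0

[DATABASE]
sql_connection = mysql://quantum:password@10.32.14.232/quantum
```

The flat network itself would then be created with provider attributes (e.g. quantum net-create with --provider:network_type flat --provider:physical_network physnet1) plus a 10.33.14.0/24 subnet whose gateway is 10.33.14.1.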
[Openstack] Ubuntu archive no longer resolves on UEC images
Hi, When booting up a UEC image on our cloud, cloud-init writes the apt sources file with: http://availability-zone.clouds.archive.ubuntu.com/ubuntu Today, all of a sudden, this doesn't resolve to anything. I note that http://availability-zone.cloud.archive.ubuntu.com does (cloud, not clouds). I'm guessing there has been some change in the Ubuntu DNS servers that has broken this, and it will be affecting all UEC images that use cloud-init in any cloud. Anyone from Ubuntu know what's up? Cheers, Sam
Re: [Openstack] [Openstack-operators] Help with simplest Quantum setup possible...
Make sure the quantum-dhcp and l3 agents are running and properly configured. It sounds like either the q-dhcp agent is not functioning or connectivity between the dhcp agent and the vm is not functioning. If using GRE tunnels, test connectivity between the tunnel endpoints. You should also see the IPs of your tunnel peers in ovs-vsctl show. If your instance spawns successfully, console into it, manually assign an IP, and ping the q-l3-agent and q-dhcp agent. You can follow this guide for deploying Quantum with OpenvSwitch using GRE tunnels: http://docwiki.cisco.com/wiki/Cisco_OpenStack_Edition:_Folsom_Manual_Install Regards, Daneyon Hansen From: Martinx - ジェームズ thiagocmarti...@gmail.com Date: Tuesday, March 12, 2013 8:14 PM To: openstack@lists.launchpad.net, openstack-operat...@lists.openstack.org Subject: Re: [Openstack-operators] Help with simplest Quantum setup possible... Well, just for the record, I'll stick with Quantum + OpenvSwitch... Someone on IRC told me that it is the best way to go with Quantum. I'm still needing help to set it up (Single Flat / multi=false, still the same simplest topology). OpenvSwitch is already working, bridges `br-int' and `br-eth0' created... My main problem, I think, is that my instances don't get an IP (they're supposed to be at 10.33.14.X/24). Everything else seems to be working as expected, no apparent errors in the logs... quantum net-create / subnet-create worked... I appreciate any help, tips or docs! Best! Thiago On 12 March 2013 23:34, Martinx - ジェームズ thiagocmarti...@gmail.com wrote: Hi! Sorry about the double posting... I need help! 
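Daneyon's tunnel checks assume OVS settings roughly like the following on each node. This is a sketch of the Folsom openvswitch plugin options; local_ip is a placeholder that must be each node's own tunnel-endpoint address, which is why the tunnel peers then show up in ovs-vsctl show.

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini -- sketch
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
# this node's own IP on the data network (placeholder)
local_ip = 10.32.14.234
```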
Re: [Openstack] [Openstack-operators] Help with simplest Quantum setup possible...
Daneyon, Thank you for your time! I'll check it! I read that guide from Cisco once... too complex. I also tried the following guides, which appear to be like the one from Cisco: https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/stable/GRE/OpenStack_Folsom_Install_Guide_WebVersion.rst and: https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/GRE/2NICs/OpenStack_Folsom_Install_Guide_WebVersion.rst Anyway, at first, I do not want L3 or GRE. Only the most basic setup (Flat / L2). Do you know if my `controller+network node' server eth0 needs to be promisc? Tks! Thiago On 13 March 2013 00:49, Daneyon Hansen (danehans) daneh...@cisco.com wrote: Make sure the quantum-dhcp and l3 agents are running and properly configured. It sounds like either the q-dhcp agent is not functioning or connectivity between the dhcp agent and vm is not functioning. If using GRE tunnels, test connectivity between the tunnel endpoints. You should also see the IP's of your tunnel peers in ovs-vsctl show. If your instance spawns successfully, console into it and manually assign an IP and ping the q-l3-agent and q-dhcp agent. You can follow this guide for deploying Quantum with OpenvSwitch using GRE tunnels: http://docwiki.cisco.com/wiki/Cisco_OpenStack_Edition:_Folsom_Manual_Install Regards, Daneyon Hansen
Re: [Openstack] Issues with latests trunk
Hi, Is the MySQL adapter for Python installed on your machine? The error corresponds to the unavailability of the MySQLdb adapter. Kindly have a look into that. Regards, Avinash From: openstack-bounces+avinash.prasad=nttdata@lists.launchpad.net on behalf of Tyler North [tyl...@pistoncloud.com] Sent: Tuesday, March 12, 2013 6:05 PM To: openstack@lists.launchpad.net Subject: [Openstack] Issues with latest trunk Hey everyone, I'm trying to run the latest version of devstack trunk (latest commit on git log is 87387596631602b5f676eae65823b4f0c5c71e66a). I'm running it currently on Ubuntu 12.04 and whenever I run ./stack.sh I get the following error:

Unable to communicate with identity service: {"error": {"message": "An unexpected error prevented the server from fulfilling your request. No module named MySQLdb", "code": 500, "title": "Internal Server Error"}}. (HTTP 500)
+ KEYSTONE_SERVICE=
+ keystone endpoint-create --region RegionOne --service_id --publicurl http://10.1.10.231:5000/v2.0 --adminurl http://10.1.10.231:35357/v2.0 --internalurl http://10.1.10.231:5000/v2.0
usage: keystone endpoint-create [--region <endpoint-region>] --service-id <service-id> [--publicurl <public-url>] [--adminurl <admin-url>] [--internalurl <internal-url>]
keystone endpoint-create: error: argument --service-id/--service_id: expected one argument

Any help as to where to look to solve the problem would be appreciated. Thanks, Tyler
If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
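[Editor's note] The quickest check for the root cause above is to try importing the module keystone needs. A small hedged sketch (the package name `python-mysqldb` is an Ubuntu 12.04-era assumption):

```python
import importlib


def mysql_driver_status():
    """Return a short status line for the Python MySQL adapter keystone needs."""
    try:
        importlib.import_module("MySQLdb")
        return "MySQLdb available"
    except ImportError:
        # On Ubuntu 12.04 this typically means the driver package is absent.
        return "MySQLdb missing: try 'apt-get install python-mysqldb'"


print(mysql_driver_status())
```

Re-running ./stack.sh after installing the adapter should also let the KEYSTONE_SERVICE variable get populated, which in turn fixes the follow-on `endpoint-create: expected one argument` error.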
Re: [Openstack] Tempest for Integration testing of Openstack (FOLSOM)
On Wed, Mar 13, 2013 at 2:14 AM, Rami Vaknin rvak...@redhat.com wrote: On 03/12/2013 05:26 PM, Girija Sharan wrote: Hi, Thanks again. I am getting this error whenever I run the tempest-stable-folsom/tempest/tests/network tests:

== ERROR: test suite for class 'tempest.tests.network.test_networks.NetworksTest' --
Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 208, in run self.setUp()
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 291, in setUp self.setupContext(ancestor)
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 314, in setupContext try_run(context, names)
File /usr/lib/python2.7/dist-packages/nose/util.py, line 478, in try_run return func()
File /home/controller/tempest-stable-folsom/tempest/tests/network/test_networks.py, line 28, in setUpClass super(NetworksTest, cls).setUpClass()
File /home/controller/tempest-stable-folsom/tempest/tests/network/base.py, line 39, in setUpClass client.list_networks()
File /home/controller/tempest-stable-folsom/tempest/services/network/json/network_client.py, line 13, in list_networks resp, body = self.get('networks')
File /home/controller/tempest-stable-folsom/tempest/common/rest_client.py, line 166, in get return self.request('GET', url, headers)
File /home/controller/tempest-stable-folsom/tempest/common/rest_client.py, line 203, in request raise exceptions.NotFound(resp_body)
NotFound: Object not found Details: Object not found Details: 404 Not Found The resource could not be found. 
begin captured logging tempest.config: INFO: Using tempest config file /home/controller/tempest-stable-folsom/etc/tempest.conf tempest.common.rest_client: ERROR: Request URL: http://192.168.2.170:9696/v2.0/tenants/09b32430cf8548ec8472d29a79fe2ddd/networks tempest.common.rest_client: ERROR: Request Body: None tempest.common.rest_client: ERROR: Response Headers: {'date': 'Tue, 12 Mar 2013 11:47:32 GMT', 'status': '404', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'} tempest.common.rest_client: ERROR: Response Body: 404 Not Found The resource could not be found. - end captured logging - -- Ran 0 tests in 0.186s FAILED (errors=1)

Please help me out. Thanks and Regards, Girija Sharan Singh

On Tue, Mar 12, 2013 at 8:50 PM, Jay Pipes jaypi...@gmail.com wrote: On 03/12/2013 11:14 AM, Girija Sharan wrote: But the tests in tempest-stable-folsom/tempest/tests/network are not running in Folsom with Quantum. All other tests are running fine. Someone said that this stable-folsom release of Tempest is not for testing Quantum in Folsom. Is that true? If so, how do I test my Quantum-in-Folsom deployment using Tempest?

Regarding your question about tempest.conf: in order to work with Folsom OpenStack, you just need to make sure you have 2 non-admin users and tenants and 2 active images, then copy etc/tempest.conf.sample to etc/tempest.conf and change the following: uri; username, password, tenant_name; alt_username, alt_password, alt_tenant_name; admin_username, admin_password; image_ref, image_ref_alt. That will work for most of the tests. If you're working with Quantum, catalog_type under the [network] section should be set to network; for nova-network you should *probably* set it to compute.

Yes, as you say, I already had catalog_type under the [network] section set to network. That means it should work for Quantum. But I want to know whether it works for Quantum under the Folsom release or not. 
Thanks and Regards, Girija Regarding the Quantum support in tempest - I'm not familiar with that (at least not yet; anyone else who can help here?), but if you would like to perform the same operation with nova-network, listing networks currently won't work, since the JSON/XML clients (which should call the os-networks API entry point as described in http://api.openstack.org/api-ref.html) are not implemented - they return a 404 page too. But in case it is implemented, I think the result you get in the above exception can also indicate that the tenant you refer to does not exist and should be configured in tempest.conf. I'm sorry, I don't know how to answer your question without seeing the errors you are getting when running Tempest. -jay ___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp -- Thanks, Rami Vaknin, QE @ Red Hat, TLV, IL.
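[Editor's note] The settings Jay lists map onto tempest.conf roughly as below. Every value is a placeholder, and each option should be placed under the same section where it appears in etc/tempest.conf.sample (section placement varied between Tempest releases, so treat this as a sketch, not authoritative):

```ini
; tempest.conf sketch -- all values are placeholders; put each option under
; the section where it appears in etc/tempest.conf.sample.
uri = http://127.0.0.1:5000/v2.0/

username = demo1
password = secret
tenant_name = demo-tenant1
alt_username = demo2
alt_password = secret
alt_tenant_name = demo-tenant2

admin_username = admin
admin_password = secret

image_ref = 11111111-2222-3333-4444-555555555555
image_ref_alt = 66666666-7777-8888-9999-000000000000

[network]
; "network" when Quantum provides the API; likely "compute" for nova-network
catalog_type = network
```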
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_ceilometer_trunk #132
Title: precise_grizzly_ceilometer_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/132/Project:precise_grizzly_ceilometer_trunkDate of build:Tue, 12 Mar 2013 05:01:24 -0400Build duration:1 min 45 secBuild cause:Started by user James PageBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesNo ChangesConsole Output[...truncated 1916 lines...]Fail-Stage: install-depsHost Architecture: amd64Install-Time: 0Job: ceilometer_2013.1+git201303120501~precise-0ubuntu1.dscMachine Architecture: amd64Package: ceilometerPackage-Time: 0Source-Version: 1:2013.1+git201303120501~precise-0ubuntu1Space: 0Status: failedVersion: 1:2013.1+git201303120501~precise-0ubuntu1Finished at 20130312-0502Build needed 00:00:00, 0k disc spaceE: Package build dependencies not satisfied; skippingERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1+git201303120501~precise-0ubuntu1.dsc']' returned non-zero exit status 3ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1+git201303120501~precise-0ubuntu1.dsc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmppy4my9/ceilometermk-build-deps -i -r -t apt-get -y /tmp/tmppy4my9/ceilometer/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --forcedch -b -D precise --newversion 1:2013.1+git201303120501~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC ceilometer_2013.1+git201303120501~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A ceilometer_2013.1+git201303120501~precise-0ubuntu1.dscTraceback (most recent 
call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1+git201303120501~precise-0ubuntu1.dsc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'ceilometer_2013.1+git201303120501~precise-0ubuntu1.dsc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_ceilometer_trunk #133
Title: precise_grizzly_ceilometer_trunk General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/133/Project:precise_grizzly_ceilometer_trunkDate of build:Tue, 12 Mar 2013 05:10:40 -0400Build duration:2 min 35 secBuild cause:Started by user James PageBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 3 out of the last 5 builds failed.40ChangesNo ChangesConsole Output[...truncated 4953 lines...]gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key)"gpg: Signature made Tue Mar 12 05:11:53 2013 EDT using RSA key ID 9935ACDCgpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "Checking signature on .changesGood signature on /tmp/tmpYvBBmS/ceilometer_2013.1+git201303120510~precise-0ubuntu1_source.changes.Checking signature on .dscGood signature on /tmp/tmpYvBBmS/ceilometer_2013.1+git201303120510~precise-0ubuntu1.dsc.Uploading to ppa (via ftp to ppa.launchpad.net): Uploading ceilometer_2013.1+git201303120510~precise-0ubuntu1.dsc: done. Uploading ceilometer_2013.1+git201303120510~precise.orig.tar.gz: done. Uploading ceilometer_2013.1+git201303120510~precise-0ubuntu1.debian.tar.gz: done. 
Uploading ceilometer_2013.1+git201303120510~precise-0ubuntu1_source.changes: done.Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'ceilometer_2013.1+git201303120510~precise-0ubuntu1_amd64.changes']Exporting indices...Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/Release.gpg.new'Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/InRelease.new'Deleting files no longer referenced...deleting and forgetting pool/main/c/ceilometer/ceilometer-agent-central_2013.1+git20130350~precise-0ubuntu1_all.debdeleting and forgetting pool/main/c/ceilometer/ceilometer-agent-compute_2013.1+git20130350~precise-0ubuntu1_all.debdeleting and forgetting pool/main/c/ceilometer/ceilometer-api_2013.1+git20130350~precise-0ubuntu1_all.debdeleting and forgetting pool/main/c/ceilometer/ceilometer-collector_2013.1+git20130350~precise-0ubuntu1_all.debdeleting and forgetting pool/main/c/ceilometer/ceilometer-common_2013.1+git20130350~precise-0ubuntu1_all.debdeleting and forgetting pool/main/c/ceilometer/python-ceilometer_2013.1+git20130350~precise-0ubuntu1_all.debINFO:root:Pushing changes back to bzr testing branchDEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly']Pushed up to revision 22.INFO:root:Storing current commit for next build: fa6bdc284ccd3af20839ea92f17caae465a63d82INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmpYvBBmS/ceilometermk-build-deps -i -r -t apt-get -y /tmp/tmpYvBBmS/ceilometer/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --forcedch -b -D precise --newversion 1:2013.1+git201303120510~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr 
builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC ceilometer_2013.1+git201303120510~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A ceilometer_2013.1+git201303120510~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing ceilometer_2013.1+git201303120510~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly ceilometer_2013.1+git201303120510~precise-0ubuntu1_amd64.changesbzr push lp:~openstack-ubuntu-testing/ceilometer/precise-grizzlyEmail was triggered for: FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_horizon_trunk #100
Title: precise_grizzly_horizon_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_horizon_trunk/100/Project:precise_grizzly_horizon_trunkDate of build:Tue, 12 Mar 2013 12:31:39 -0400Build duration:2 min 44 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesUpdate CACHE_ to CACHES settingsby mrungeeditopenstack_dashboard/local/local_settings.py.exampleConsole Output[...truncated 3156 lines...]bzr: ERROR: An error (1) occurred running quilt: Applying patch fix-dashboard-django-wsgi.patchpatching file openstack_dashboard/wsgi/django.wsgiApplying patch fix-dashboard-manage.patchpatching file manage.pyApplying patch fix-ubuntu-tests.patchpatching file run_tests.shApplying patch ubuntu_local_settings.patchpatching file openstack_dashboard/local/local_settings.py.exampleHunk #2 FAILED at 49.1 out of 2 hunks FAILED -- rejects in file openstack_dashboard/local/local_settings.py.examplePatch ubuntu_local_settings.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-c1486d0e-ada0-4dd2-adca-123aa56334b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-c1486d0e-ada0-4dd2-adca-123aa56334b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/horizon/grizzly /tmp/tmpXG2_6C/horizonmk-build-deps -i -r -t apt-get -y /tmp/tmpXG2_6C/horizon/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/horizon/precise-grizzly --forcedch -b -D precise --newversion 
1:2013.1+git201303121231~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-c1486d0e-ada0-4dd2-adca-123aa56334b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-c1486d0e-ada0-4dd2-adca-123aa56334b9', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_horizon_trunk #98
Title: raring_grizzly_horizon_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/raring_grizzly_horizon_trunk/98/Project:raring_grizzly_horizon_trunkDate of build:Tue, 12 Mar 2013 12:31:39 -0400Build duration:3 min 31 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesUpdate CACHE_ to CACHES settingsby mrungeeditopenstack_dashboard/local/local_settings.py.exampleConsole Output[...truncated 3621 lines...]bzr: ERROR: An error (1) occurred running quilt: Applying patch fix-dashboard-django-wsgi.patchpatching file openstack_dashboard/wsgi/django.wsgiApplying patch fix-dashboard-manage.patchpatching file manage.pyApplying patch fix-ubuntu-tests.patchpatching file run_tests.shApplying patch ubuntu_local_settings.patchpatching file openstack_dashboard/local/local_settings.py.exampleHunk #2 FAILED at 49.1 out of 2 hunks FAILED -- rejects in file openstack_dashboard/local/local_settings.py.examplePatch ubuntu_local_settings.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-5aa2ff56-af7e-4a7a-a13f-63eaae2a17fb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-5aa2ff56-af7e-4a7a-a13f-63eaae2a17fb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/horizon/grizzly /tmp/tmpjGV382/horizonmk-build-deps -i -r -t apt-get -y /tmp/tmpjGV382/horizon/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/horizon/raring-grizzly --forcedch -b -D raring --newversion 1:2013.1+git201303121231~raring-0ubuntu1 
Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-5aa2ff56-af7e-4a7a-a13f-63eaae2a17fb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-5aa2ff56-af7e-4a7a-a13f-63eaae2a17fb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_nova_trunk #799
Title: precise_grizzly_nova_trunk General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/799/Project:precise_grizzly_nova_trunkDate of build:Tue, 12 Mar 2013 15:01:39 -0400Build duration:13 minBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesForce resource updates to update updated_atby cbehrenseditnova/db/sqlalchemy/api.pyeditnova/tests/test_db_api.pyConsole Output[...truncated 10263 lines...]Distribution: precise-grizzlyFail-Stage: buildHost Architecture: amd64Install-Time: 43Job: nova_2013.1+git201303121504~precise-0ubuntu1.dscMachine Architecture: amd64Package: novaPackage-Time: 508Source-Version: 1:2013.1+git201303121504~precise-0ubuntu1Space: 123196Status: attemptedVersion: 1:2013.1+git201303121504~precise-0ubuntu1Finished at 20130312-1514Build needed 00:08:28, 123196k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201303121504~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201303121504~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmphtgkiO/novamk-build-deps -i -r -t apt-get -y /tmp/tmphtgkiO/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/nova/precise-grizzly --forcedch -b -D precise --newversion 1:2013.1+git201303121504~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201303121504~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201303121504~precise-0ubuntu1.dscTraceback (most recent 
call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201303121504~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last): File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'nova_2013.1+git201303121504~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_nova_trunk #800
Title: precise_grizzly_nova_trunk General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/800/Project:precise_grizzly_nova_trunkDate of build:Tue, 12 Mar 2013 15:48:22 -0400Build duration:11 minBuild cause:Started by user Adam GandelmanBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesNo ChangesConsole Output[...truncated 19271 lines...]deleting and forgetting pool/main/n/nova/nova-cert_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-common_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-kvm_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-lxc_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-qemu_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-uml_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-xcp_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute-xen_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-compute_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-conductor_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-console_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-consoleauth_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-doc_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-network_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-novncproxy_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and 
forgetting pool/main/n/nova/nova-objectstore_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-scheduler_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-spiceproxy_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-volume_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xcp-network_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xcp-plugins_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/nova-xvpvncproxy_2013.1+git201303121303~precise-0ubuntu1_all.debdeleting and forgetting pool/main/n/nova/python-nova_2013.1+git201303121303~precise-0ubuntu1_all.debINFO:root:Pushing changes back to bzr testing branchDEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/nova/precise-grizzly']Pushed up to revision 562.INFO:root:Storing current commit for next build: 26440ae2cd4c8ffe44beecb6bb0cce19cb43bb7bINFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpEgr6Yg/novamk-build-deps -i -r -t apt-get -y /tmp/tmpEgr6Yg/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hbzr merge lp:~openstack-ubuntu-testing/nova/precise-grizzly --forcedch -b -D precise --newversion 1:2013.1+git201303121549~precise-0ubuntu1 Automated Ubuntu testing build:dch -a No change rebuild.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC nova_2013.1+git201303121549~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A nova_2013.1+git201303121549~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing nova_2013.1+git201303121549~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly 
nova_2013.1+git201303121549~precise-0ubuntu1_amd64.changesbzr push lp:~openstack-ubuntu-testing/nova/precise-grizzly+ [ ! 0 ]+ jenkins-cli build precise_grizzly_deployEmail was triggered for: FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_deploy #210
Title: precise_grizzly_deploy General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_deploy/210/Project:precise_grizzly_deployDate of build:Tue, 12 Mar 2013 15:59:27 -0400Build duration:2 min 10 secBuild cause:Started by command line by jenkinsBuilt on:masterHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesNo ChangesConsole Output[...truncated 168 lines...]INFO:paramiko.transport:Secsh channel 1 opened.INFO:paramiko.transport.sftp:[chan 1] Opened sftp connection (server version 3)INFO:root:Setting up connection to test-11.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-11.os.magners.qa.lexingtonINFO:root:Setting up connection to test-02.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-02.os.magners.qa.lexingtonINFO:root:Setting up connection to test-07.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-07.os.magners.qa.lexingtonINFO:root:Setting up connection to test-12.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-12.os.magners.qa.lexingtonINFO:root:Setting up connection to test-04.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-04.os.magners.qa.lexingtonINFO:root:Setting up connection to test-09.os.magners.qa.lexingtonERROR:root:Could not setup SSH connection to test-09.os.magners.qa.lexingtonINFO:root:Archiving logs on test-07.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-07.os.magners.qa.lexingtonINFO:root:Archiving logs on test-12.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-12.os.magners.qa.lexingtonINFO:root:Archiving logs on test-09.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-09.os.magners.qa.lexingtonINFO:root:Archiving logs on test-04.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-04.os.magners.qa.lexingtonINFO:root:Archiving logs on 
test-05.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-11.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-11.os.magners.qa.lexingtonINFO:root:Archiving logs on test-02.os.magners.qa.lexingtonERROR:root:Coult not create tarball of logs on test-02.os.magners.qa.lexingtonINFO:root:Grabbing information from test-07.os.magners.qa.lexingtonERROR:root:Unable to get information from test-07.os.magners.qa.lexingtonINFO:root:Grabbing information from test-12.os.magners.qa.lexingtonERROR:root:Unable to get information from test-12.os.magners.qa.lexingtonINFO:root:Grabbing information from test-09.os.magners.qa.lexingtonERROR:root:Unable to get information from test-09.os.magners.qa.lexingtonINFO:root:Grabbing information from test-04.os.magners.qa.lexingtonERROR:root:Unable to get information from test-04.os.magners.qa.lexingtonINFO:root:Grabbing information from test-05.os.magners.qa.lexingtonINFO:root:Grabbing information from test-11.os.magners.qa.lexingtonERROR:root:Unable to get information from test-11.os.magners.qa.lexingtonINFO:root:Grabbing information from test-02.os.magners.qa.lexingtonERROR:root:Unable to get information from test-02.os.magners.qa.lexingtonTraceback (most recent call last): File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in connections[host]["sftp"].close()KeyError: 'sftp'+ exit 1Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications More help : https://help.launchpad.net/ListHelp
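[Editor's note] The final `KeyError: 'sftp'` in collate-test-logs.py is a secondary failure: the cleanup code indexes connections[host]["sftp"] even for hosts whose SSH setup failed and never stored a channel. A hedged sketch of the defensive pattern (the shape of `connections` is inferred from the traceback, not from the real script):

```python
class FakeSFTP:
    """Stand-in for a paramiko SFTP channel, just for illustration."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def close_all(connections):
    """Close sftp channels where present; skip hosts whose setup failed."""
    for chans in connections.values():
        sftp = chans.get("sftp")  # dict.get avoids the KeyError seen above
        if sftp is not None:
            sftp.close()


# test-05 connected; test-11's SSH setup failed, so it has no channels stored
conns = {"test-05": {"sftp": FakeSFTP()}, "test-11": {}}
close_all(conns)
```

With `get()` instead of direct indexing, the log-collation step would still report the unreachable hosts but exit cleanly instead of masking the original deployment failure.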
[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_nova_trunk #895
Title: raring_grizzly_nova_trunk

General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/895/
Project: raring_grizzly_nova_trunk
Date of build: Tue, 12 Mar 2013 17:33:38 -0400
Build duration: 32 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 4 out of the last 5 builds failed. | 20

Changes
No Changes

Console Output
[...truncated 3 lines...]
Using strategy: Default
Checkout: nova / /var/lib/jenkins/slave/workspace/raring_grizzly_nova_trunk/nova - hudson.remoting.LocalChannel@27418455
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision cb3063ab33cc499b8efae2d9fdcf8bb3fd569ecc (origin/master)
Checking out Revision cb3063ab33cc499b8efae2d9fdcf8bb3fd569ecc (origin/master)
No change to record in branch origin/master
No emails were triggered.
[raring_grizzly_nova_trunk] $ /bin/sh -xe /tmp/hudson673564756361992494.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
+ . ./pipeline_parameters
+ export UBUNTU_RELEASE=raring
+ export OPENSTACK_RELEASE=grizzly
+ export OPENSTACK_COMPONENT=nova
+ export OPENSTACK_BRANCH=trunk
+ export PIPELINE_ID=91e44f28-8b5c-11e2-b092-6fe7d407f917
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/archive_job
INFO:root:Archiving Jenkins job.
DEBUG:root:Checking env for parameter: PIPELINE_ID
DEBUG:root:Found: 91e44f28-8b5c-11e2-b092-6fe7d407f917
DEBUG:root:Checking env for parameter: STATUS
DEBUG:root:Checking env for parameter: BUILD_TAG
DEBUG:root:Found: jenkins-raring_grizzly_nova_trunk-895
DEBUG:root:Checking env for parameter: PARENT_BUILD_TAG
DEBUG:root:Checking env for parameter: OPENSTACK_BRANCH
DEBUG:root:Found: trunk
DEBUG:root:Checking env for parameter: OPENSTACK_COMPONENT
DEBUG:root:Found: nova
DEBUG:root:Checking env for parameter: UBUNTU_RELEASE
DEBUG:root:Found: raring
DEBUG:root:Checking env for parameter: OPENSTACK_RELEASE
DEBUG:root:Found: grizzly
DEBUG:root:Checking env for extra parameter: GIT_COMMIT
DEBUG:root:Checking env for extra parameter: AUTHOR
DEBUG:root:Checking env for extra parameter: BUILD_URL
DEBUG:root:Checking env for extra parameter: DEPLOYMENT
INFO:root:Test job saved: jenkins-raring_grizzly_nova_trunk-895
INFO:root:Archived job, pipeline id: 91e44f28-8b5c-11e2-b092-6fe7d407f917 build_tag: jenkins-raring_grizzly_nova_trunk-895.debug
[raring_grizzly_nova_trunk] $ /bin/sh -xe /tmp/hudson4832190437885156540.sh
+ echo SKIP
SKIP
+ [ 0 != 0 ]
+ jenkins-cli build -p pipeline_parameters=pipeline_parameters -p PARENT_BUILD_TAG=jenkins-raring_grizzly_nova_trunk-895 pipeline_runner
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_swift_trunk #152
Title: raring_grizzly_swift_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_swift_trunk/152/
Project: raring_grizzly_swift_trunk
Date of build: Tue, 12 Mar 2013 18:00:48 -0400
Build duration: 2 min 30 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 1 out of the last 5 builds failed. | 80

Changes
Fixed bug with account_info by z-launchpad
  edit swift/proxy/controllers/base.py
  edit test/unit/proxy/test_server.py

Console Output
[...truncated 1242 lines...]
Setting up python-all-dev (2.7.3-10ubuntu5) ...
Setting up python-openssl (0.13-2ubuntu3) ...
Setting up python-setuptools (0.6.34-0ubuntu1) ...
Setting up python-xattr (0.6.4-2) ...
Setting up python-netifaces (0.8-2) ...
Setting up python-greenlet (0.4.0-1ubuntu1) ...
Setting up libjs-underscore (1.4.3-1ubuntu1) ...
Setting up libjs-sphinxdoc (1.1.3+dfsg-7ubuntu1) ...
Setting up python-eventlet (0.12.1-0ubuntu1) ...
Setting up python-mock (1.0.1-1) ...
Setting up python-nose (1.1.2-3ubuntu4) ...
Setting up python-formencode (1.2.4-2ubuntu2) ...
Setting up python-paste (1.7.5.1-4.1ubuntu1) ...
FATAL: hudson.remoting.RequestAbortedException: java.net.SocketException: Socket closed
hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.net.SocketException: Socket closed
	at hudson.remoting.Request.call(Request.java:174)
	at hudson.remoting.Channel.call(Channel.java:663)
	at hudson.FilePath.act(FilePath.java:831)
	at hudson.FilePath.act(FilePath.java:824)
	at hudson.FilePath.delete(FilePath.java:1129)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:92)
	at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:717)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:160)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
	at hudson.model.Run.execute(Run.java:1502)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
	at hudson.model.ResourceController.execute(ResourceController.java:88)
	at hudson.model.Executor.run(Executor.java:236)
Caused by: hudson.remoting.RequestAbortedException: java.net.SocketException: Socket closed
	at hudson.remoting.Request.abort(Request.java:299)
	at hudson.remoting.Channel.terminate(Channel.java:719)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:69)
Caused by: java.net.SocketException: Socket closed
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:150)
	at java.net.SocketInputStream.read(SocketInputStream.java:121)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2266)
	at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2559)
	at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2569)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1315)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
	at hudson.remoting.Command.readFrom(Command.java:90)
	at hudson.remoting.ClassicCommandTransport.read(ClassicCommandTransport.java:59)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
[Openstack-ubuntu-testing-notifications] Build Failure: test_devstack_exercises #1
Title: test_devstack_exercises

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/test_devstack_exercises/1/
Project: test_devstack_exercises
Date of build: Tue, 12 Mar 2013 18:16:00 -0400
Build duration: 1.2 sec
Build cause: Started by command line by jenkins
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: All recent builds failed. | 0

Changes
No Changes

Console Output
[...truncated 21 lines...]
DEBUG:root:Found: jenkins-pipeline_coverage-17
DEBUG:root:Checking env for parameter: OPENSTACK_BRANCH
DEBUG:root:Found: trunk
DEBUG:root:Checking env for parameter: OPENSTACK_COMPONENT
DEBUG:root:Found: nova
DEBUG:root:Checking env for parameter: UBUNTU_RELEASE
DEBUG:root:Found: raring
DEBUG:root:Checking env for parameter: OPENSTACK_RELEASE
DEBUG:root:Found: grizzly
DEBUG:root:Checking env for extra parameter: GIT_COMMIT
DEBUG:root:Checking env for extra parameter: AUTHOR
DEBUG:root:Checking env for extra parameter: BUILD_URL
DEBUG:root:Checking env for extra parameter: DEPLOYMENT
INFO:root:Test job saved: jenkins-test_devstack_exercises-1
INFO:root:Archived job, pipeline id: 628e0038-8b62-11e2-b53a-2bb7bfacbca5 build_tag: jenkins-test_devstack_exercises-1.debug
[test_devstack_exercises] $ /bin/sh -xe /tmp/hudson7594750313531891489.sh
+ OPENSTACK_UBUNTU_TESTING_REPO=lp:openstack-ubuntu-testing
+ export DEVSTACK_BRANCH=stable/folsom
+ . ./envrc
+ export NOVA_HOST=test-12.os.magners.qa.lexington
+ export GLANCE_HOST=test-04.os.magners.qa.lexington
+ export KEYSTONE_HOST=test-11.os.magners.qa.lexington
+ export NOVA_VOLUME_HOST=
+ export NOVA_COMPUTE_HOST=test-07.os.magners.qa.lexington
+ export OS_USERNAME=admin
+ export OS_PASSWORD=openstack
+ export OS_TENANT_NAME=admin
+ export OS_AUTH_URL=http://test-11.os.magners.qa.lexington:5000/v2.0/
+ export ADMIN_TOKEN=ubuntutesting
+ export TEMPEST_USER_1=tempest-1
+ export TEMPEST_USER_2=tempest-2
+ export TEMPEST_USERS_PASSWORD=ubuntu
+ export TEMPEST_TENANT_1=tempest-tenant-1
+ export TEMPEST_TENANT_2=tempest-tenant-2
+ export ENABLED_SERVICES=n-api,n-crt,n-obj,n-sch,g-api,g-reg,key,cinder,c-api,c-vol,n-net
+ export EC2_URL=http://test-12.os.magners.qa.lexington:8773/services/Cloud
+ export S3_URL=http://test-12.os.magners.qa.lexington:
+ export EC2_ACCESS_KEY=2df28ff790e84d1d8a284ad4073731ed
+ export EC2_SECRET_KEY=68da06416db941daa787e1699e917689
+ export IMAGE_NAME=quantal-server-cloudimg-amd64-ami
+ export IMAGE_UUID=9dd5d368-be64-4583-8343-c2df74031bbb
+ export EC2_AMI_ID=ami-0002
+ cat versions_tested
cat: versions_tested: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
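[Editor's note] The exercise job dies before doing any real work: it runs under `/bin/sh -xe`, so the failing `cat versions_tested` (file absent) aborts the whole build. An equivalent tolerant read, sketched in Python for illustration only (the real job is a shell script, and `read_versions` is a hypothetical helper, not part of the testing tools):

```python
import os

def read_versions(path="versions_tested"):
    # Return the recorded versions, or None when nothing was recorded yet,
    # instead of failing hard the way 'cat' does under 'sh -xe'.
    if not os.path.exists(path):
        return None
    with open(path) as fh:
        return fh.read()

print(read_versions() or "versions_tested not present")
```

The same idea in the job itself would be a file-existence guard before the `cat`, so a missing record is reported rather than treated as a fatal error.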
[Openstack-ubuntu-testing-notifications] Build Still Failing: test_devstack_exercises #2
Title: test_devstack_exercises

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/test_devstack_exercises/2/
Project: test_devstack_exercises
Date of build: Tue, 12 Mar 2013 18:16:08 -0400
Build duration: 0.6 sec
Build cause: Started by command line by jenkins
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: All recent builds failed. | 0

Changes
No Changes

Console Output
[...truncated 21 lines...]
DEBUG:root:Found: jenkins-pipeline_coverage-18
DEBUG:root:Checking env for parameter: OPENSTACK_BRANCH
DEBUG:root:Found: trunk
DEBUG:root:Checking env for parameter: OPENSTACK_COMPONENT
DEBUG:root:Found: nova
DEBUG:root:Checking env for parameter: UBUNTU_RELEASE
DEBUG:root:Found: raring
DEBUG:root:Checking env for parameter: OPENSTACK_RELEASE
DEBUG:root:Found: grizzly
DEBUG:root:Checking env for extra parameter: GIT_COMMIT
DEBUG:root:Checking env for extra parameter: AUTHOR
DEBUG:root:Checking env for extra parameter: BUILD_URL
DEBUG:root:Checking env for extra parameter: DEPLOYMENT
INFO:root:Test job saved: jenkins-test_devstack_exercises-2
INFO:root:Archived job, pipeline id: 628e0038-8b62-11e2-b53a-2bb7bfacbca5 build_tag: jenkins-test_devstack_exercises-2.debug
[test_devstack_exercises] $ /bin/sh -xe /tmp/hudson4920439149449229513.sh
+ OPENSTACK_UBUNTU_TESTING_REPO=lp:openstack-ubuntu-testing
+ export DEVSTACK_BRANCH=stable/folsom
+ . ./envrc
+ export NOVA_HOST=test-12.os.magners.qa.lexington
+ export GLANCE_HOST=test-04.os.magners.qa.lexington
+ export KEYSTONE_HOST=test-11.os.magners.qa.lexington
+ export NOVA_VOLUME_HOST=
+ export NOVA_COMPUTE_HOST=test-07.os.magners.qa.lexington
+ export OS_USERNAME=admin
+ export OS_PASSWORD=openstack
+ export OS_TENANT_NAME=admin
+ export OS_AUTH_URL=http://test-11.os.magners.qa.lexington:5000/v2.0/
+ export ADMIN_TOKEN=ubuntutesting
+ export TEMPEST_USER_1=tempest-1
+ export TEMPEST_USER_2=tempest-2
+ export TEMPEST_USERS_PASSWORD=ubuntu
+ export TEMPEST_TENANT_1=tempest-tenant-1
+ export TEMPEST_TENANT_2=tempest-tenant-2
+ export ENABLED_SERVICES=n-api,n-crt,n-obj,n-sch,g-api,g-reg,key,cinder,c-api,c-vol,n-net
+ export EC2_URL=http://test-12.os.magners.qa.lexington:8773/services/Cloud
+ export S3_URL=http://test-12.os.magners.qa.lexington:
+ export EC2_ACCESS_KEY=2df28ff790e84d1d8a284ad4073731ed
+ export EC2_SECRET_KEY=68da06416db941daa787e1699e917689
+ export IMAGE_NAME=quantal-server-cloudimg-amd64-ami
+ export IMAGE_UUID=9dd5d368-be64-4583-8343-c2df74031bbb
+ export EC2_AMI_ID=ami-0002
+ cat versions_tested
cat: versions_tested: No such file or directory
Build step 'Execute shell' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Failure: precise_essex_quantum_stable #11
Title: precise_essex_quantum_stable

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_essex_quantum_stable/11/
Project: precise_essex_quantum_stable
Date of build: Tue, 12 Mar 2013 18:31:38 -0400
Build duration: 8 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: All recent builds failed. | 0

Changes
No Changes

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_essex_quantum_stable
Checkout: precise_essex_quantum_stable / /var/lib/jenkins/slave/workspace/precise_essex_quantum_stable - hudson.remoting.Channel@40fc93c5:pkg-builder
Using strategy: Default
Checkout: quantum / /var/lib/jenkins/slave/workspace/precise_essex_quantum_stable/quantum - hudson.remoting.LocalChannel@4cf95177
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/quantum.git
Commencing build of Revision 753730aaf12585d81038ad3e3988c1294d9ad25b (remotes/origin/stable/essex)
Checking out Revision 753730aaf12585d81038ad3e3988c1294d9ad25b (remotes/origin/stable/essex)
No change to record in branch remotes/origin/stable/essex
No emails were triggered.
[precise_essex_quantum_stable] $ /bin/sh -xe /tmp/hudson2185515319181570263.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package -j
INFO:root:Creating tarball using sdist
ERROR:root:Error occurred during package creation/build: argument of type 'float' is not iterable
ERROR:root:argument of type 'float' is not iterable
INFO:root:Complete command log:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
TypeError: argument of type 'float' is not iterable
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
TypeError: argument of type 'float' is not iterable
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
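[Editor's note] "argument of type 'float' is not iterable" is the message Python raises when the membership operator (`in`) is applied to a float, for instance a version value parsed as a number rather than a string. This is an illustrative guess at the class of bug behind the build-package failure, not its actual code:

```python
# A version field that ended up as a float instead of a string.
version = 2013.1  # hypothetical value, for illustration

try:
    "git" in version  # membership test against a float
except TypeError as exc:
    print(exc)  # argument of type 'float' is not iterable

# Coercing to str first restores the intended substring test.
assert "2013" in str(version)
```

Keeping version identifiers as strings end to end (they are labels, not numbers) avoids this whole class of error.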
[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_quantum_trunk #450
Title: raring_grizzly_quantum_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/450/
Project: raring_grizzly_quantum_trunk
Date of build: Tue, 12 Mar 2013 20:42:25 -0400
Build duration: 2 min 1 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 1 out of the last 5 builds failed. | 80

Changes
Add l3 db migration for plugins which did not support in folsom by salv.orlando
  add quantum/db/migration/alembic_migrations/common_ext_ops.py
  edit quantum/db/migration/alembic_migrations/versions/folsom_initial.py
  edit quantum/db/migration/alembic_migrations/versions/5a875d0e5c_ryu.py
  add quantum/db/migration/alembic_migrations/versions/2c4af419145b_l3_support.py

Console Output
[...truncated 1114 lines...]
Get:38 http://archive.ubuntu.com/ubuntu/ raring/main python-urllib3 all 1.5-0ubuntu1 [28.3 kB]
Get:39 http://archive.ubuntu.com/ubuntu/ raring/main python-requests all 1.1.0-1 [40.4 kB]
Get:40 http://archive.ubuntu.com/ubuntu/ raring/main python-oslo.config all 1:1.1.0~b1-0ubuntu4 [19.7 kB]
Get:41 http://archive.ubuntu.com/ubuntu/ raring/main python-oslo-config all 1:1.1.0~b1-0ubuntu4 [1558 B]
Get:42 http://archive.ubuntu.com/ubuntu/ raring/main python-kombu all 2.1.8-1ubuntu1 [122 kB]
Get:43 http://archive.ubuntu.com/ubuntu/ raring/main python-mock all 1.0.1-1 [26.3 kB]
Get:44 http://archive.ubuntu.com/ubuntu/ raring/main python-mox all 0.5.3-3 [18.7 kB]
Get:45 http://archive.ubuntu.com/ubuntu/ raring/main python-netaddr all 0.7.7-1 [1217 kB]
Get:46 http://archive.ubuntu.com/ubuntu/ raring/main python-netifaces amd64 0.8-2 [12.3 kB]
Get:47 http://archive.ubuntu.com/ubuntu/ raring/main python-nose all 1.1.2-3ubuntu4 [135 kB]
Get:48 http://archive.ubuntu.com/ubuntu/ raring/main python-pyudev all 0.16.1-1 [33.6 kB]
Get:49 http://archive.ubuntu.com/ubuntu/ raring/main python-setuptools-git all 1.0b1-0ubuntu2 [11.1 kB]
Get:50 http://archive.ubuntu.com/ubuntu/ raring/main python-pastescript all 1.7.5-2 [118 kB]
Get:51 http://archive.ubuntu.com/ubuntu/ raring/main python-webtest all 1.3.4-1 [69.1 kB]
Get:52 http://archive.ubuntu.com/ubuntu/ raring/main libyaml-0-2 amd64 0.1.4-2build1 [57.3 kB]
Get:53 http://archive.ubuntu.com/ubuntu/ raring/main python-beautifulsoup all 3.2.1-1 [34.6 kB]
Get:54 http://archive.ubuntu.com/ubuntu/ raring/main python-openid all 2.2.5-3ubuntu1 [118 kB]
Get:55 http://archive.ubuntu.com/ubuntu/ raring/main python-openssl amd64 0.13-2ubuntu3 [103 kB]
Get:56 http://archive.ubuntu.com/ubuntu/ raring/main python-scgi amd64 1.13-1ubuntu2 [20.4 kB]
Get:57 http://archive.ubuntu.com/ubuntu/ raring/main python-sqlalchemy-ext amd64 0.7.9-1 [14.2 kB]
Get:58 http://archive.ubuntu.com/ubuntu/ raring/main python-yaml amd64 3.10-4build2 [113 kB]
Failed to fetch http://localhost/ubuntu/pool/main/k/keystone/python-keystone_2013.1+git201303121232~raring-0ubuntu1_all.deb 404 Not Found
Fetched 6671 kB in 8s (802 kB/s)
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
install call failed
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-1374efa9-888b-466b-aaf3-8e851e1ffa12', '-u', 'root', '--', 'mk-build-deps', '-i', '-r', '-t', 'apt-get -y', '/tmp/tmpAe6rdu/quantum/debian/control']' returned non-zero exit status 100
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-1374efa9-888b-466b-aaf3-8e851e1ffa12', '-u', 'root', '--', 'mk-build-deps', '-i', '-r', '-t', 'apt-get -y', '/tmp/tmpAe6rdu/quantum/debian/control']' returned non-zero exit status 100
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpAe6rdu/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpAe6rdu/quantum/debian/control
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-1374efa9-888b-466b-aaf3-8e851e1ffa12', '-u', 'root', '--', 'mk-build-deps', '-i', '-r', '-t', 'apt-get -y', '/tmp/tmpAe6rdu/quantum/debian/control']' returned non-zero exit status 100
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-1374efa9-888b-466b-aaf3-8e851e1ffa12', '-u', 'root', '--', 'mk-build-deps', '-i', '-r', '-t', 'apt-get -y', '/tmp/tmpAe6rdu/quantum/debian/control']' returned non-zero exit status 100
Build step 'Execute shell' marked build as failure
Email was triggered for:
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_ceilometer_trunk #139
Title: precise_grizzly_ceilometer_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_ceilometer_trunk/139/
Project: precise_grizzly_ceilometer_trunk
Date of build: Tue, 12 Mar 2013 22:31:41 -0400
Build duration: 1 min 21 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 1 out of the last 5 builds failed. | 80

Changes
Add a tox target for building documentation by doug.hellmann
  edit tox.ini
Add sample configuration files for mod_wsgi by doug.hellmann
  add doc/source/install/development.rst
  add etc/apache2/ceilometer
  delete doc/source/install.rst
  edit doc/source/index.rst
  add doc/source/install/mod_wsgi.rst
  add doc/source/install/manual.rst
  add ceilometer/api/app.wsgi
  add doc/source/install/index.rst
Switch to final 1.1.0 oslo.config release by markmc
  edit tools/pip-requires

Console Output
[...truncated 1711 lines...]
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmpvx_V7L
bzr: ERROR: An error (1) occurred running quilt: Applying patch remove-hbase-support.patch
patching file tools/pip-requires
Hunk #1 FAILED at 25.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch remove-hbase-support.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-a6140093-8fbe-4e71-bace-b05c35dfc558', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-a6140093-8fbe-4e71-bace-b05c35dfc558', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmpvx_V7L/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpvx_V7L/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 13c0c40961869dd60b6a7ed59c87896eaa0bc6ca..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/precise-grizzly --force
dch -b -D precise --newversion 1:2013.1+git201303122231~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [c59987f] Switch to final 1.1.0 oslo.config release
dch -a [4568fbe] Raise stevedore requirement to 0.7
dch -a [7e48037] Fix a pep/hacking error in a swift import
dch -a [a0066c3] Add sample configuration files for mod_wsgi
dch -a [d95bfca] Add a tox target for building documentation
dch -a [35d50a5] Use a non-standard port for the test server
dch -a [0bc53f7] Ensure the statistics are sorted
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-a6140093-8fbe-4e71-bace-b05c35dfc558', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-a6140093-8fbe-4e71-bace-b05c35dfc558', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_ceilometer_trunk #140
Title: raring_grizzly_ceilometer_trunk

General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_ceilometer_trunk/140/
Project: raring_grizzly_ceilometer_trunk
Date of build: Tue, 12 Mar 2013 22:31:41 -0400
Build duration: 2 min 28 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 1 out of the last 3 builds failed. | 66

Changes
Add a tox target for building documentation by doug.hellmann
  edit tox.ini
Add sample configuration files for mod_wsgi by doug.hellmann
  add doc/source/install/manual.rst
  add doc/source/install/development.rst
  add etc/apache2/ceilometer
  add doc/source/install/mod_wsgi.rst
  delete doc/source/install.rst
  add doc/source/install/index.rst
  edit doc/source/index.rst
  add ceilometer/api/app.wsgi
Switch to final 1.1.0 oslo.config release by markmc
  edit tools/pip-requires

Console Output
[...truncated 2372 lines...]
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmpRXjAay
bzr: ERROR: An error (1) occurred running quilt: Applying patch remove-hbase-support.patch
patching file tools/pip-requires
Hunk #1 FAILED at 25.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch remove-hbase-support.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-3baae69e-ca24-4196-9dd1-81441fbde7cb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-3baae69e-ca24-4196-9dd1-81441fbde7cb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/ceilometer/grizzly /tmp/tmpRXjAay/ceilometer
mk-build-deps -i -r -t apt-get -y /tmp/tmpRXjAay/ceilometer/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 13c0c40961869dd60b6a7ed59c87896eaa0bc6ca..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/ceilometer/raring-grizzly --force
dch -b -D raring --newversion 1:2013.1+git201303122231~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [c59987f] Switch to final 1.1.0 oslo.config release
dch -a [4568fbe] Raise stevedore requirement to 0.7
dch -a [7e48037] Fix a pep/hacking error in a swift import
dch -a [a0066c3] Add sample configuration files for mod_wsgi
dch -a [d95bfca] Add a tox target for building documentation
dch -a [35d50a5] Use a non-standard port for the test server
dch -a [0bc53f7] Ensure the statistics are sorted
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-3baae69e-ca24-4196-9dd1-81441fbde7cb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 141, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'raring-amd64-3baae69e-ca24-4196-9dd1-81441fbde7cb', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_quantum_trunk #451
Title: raring_grizzly_quantum_trunk

General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/451/
Project: raring_grizzly_quantum_trunk
Date of build: Wed, 13 Mar 2013 00:31:41 -0400
Build duration: 19 min
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
W | Description | Score
- | Build stability: 1 out of the last 5 builds failed. | 80

Changes
remove references to netstack in setup.py by dan
  edit setup.py

Console Output
[...truncated 14798 lines...]
deleting and forgetting pool/main/q/quantum/quantum-dhcp-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-l3-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-lbaas-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-metadata-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-bigswitch_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-brocade_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-cisco_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-hyperv_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-linuxbridge-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-linuxbridge_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-metaplugin_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-midonet_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nec-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nec_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nicira_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-openvswitch-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-openvswitch_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-plumgrid_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu-agent_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu_2013.1+git201303121931~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-server_2013.1+git201303121931~raring-0ubuntu1_all.deb
INFO:root:Pushing changes back to bzr testing branch
DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/quantum/raring-grizzly']
Pushed up to revision 143.
INFO:root:Storing current commit for next build: c38a769d4ec0a99c7dad8547e4256c109b88f0e3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmprpNasT/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmprpNasT/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 4b258aa4ac12e69d3cb8dafb0d4548a2304ad948..HEAD --no-merges --pretty=format:[%h] %s
bzr merge lp:~openstack-ubuntu-testing/quantum/raring-grizzly --force
dch -b -D raring --newversion 1:2013.1+git201303130031~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [c38a769] remove references to netstack in setup.py
dch -a [bb43a07] Fix detection of deleted networks in DHCP agent.
dch -a [90abfc6] Add l3 db migration for plugins which did not support in folsom
dch -a [19027e8] Updates latest OSLO changes
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.1+git201303130031~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A quantum_2013.1+git201303130031~raring-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing quantum_2013.1+git201303130031~raring-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly quantum_2013.1+git201303130031~raring-0ubuntu1_amd64.changes
bzr push lp:~openstack-ubuntu-testing/quantum/raring-grizzly
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed