[Openstack] ask.openstack invalid certificate error

2013-04-23 Thread Sam Morrison
Hi guys,

Looks like the web server for ask.openstack.org isn't sending the necessary
intermediate certificate authority files to the client, and hence clients get a
"cert not valid" error (when using Firefox at least):


ask.openstack.org uses an invalid security certificate.

The certificate is not trusted because no issuer chain was provided.

(Error code: sec_error_unknown_issuer)


Cheers,
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-23 Thread Salvatore Orlando
Quantum's metadata solution for Grizzly can run either with or without the
l3 agent.
When running within the l3 agent, packets directed to 169.254.169.254 are
sent to the default gateway; the l3 agent will spawn a metadata proxy for
each router; the metadata proxy forwards them to the metadata agent using a
datagram socket, and finally the agent reaches the Nova metadata server.

Without the l3 agent, the 'isolated' mode can be enabled for the metadata
access service. This is achieved by setting the flag
enable_isolated_metadata_proxy to True in the dhcp_agent configuration
file. When the isolated proxy is enabled, the dhcp agent will send an
additional static route to each VM. This static route will have the dhcp
agent as next hop and 169.254.0.0/16 as destination CIDR; the dhcp agent
will spawn a metadata proxy for each network. Once the packet reaches the
proxy, the procedure works as above. This should also explain why the
metadata agent does not depend on the l3 agent.
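
For reference, a minimal dhcp_agent.ini sketch of the isolated mode described
above (the flag is written enable_isolated_metadata_proxy in this mail and
enable_isolated_metadata later in the thread; treat the exact name, and the
paths below, as assumptions to verify against your packages):

  # /etc/quantum/dhcp_agent.ini
  enable_isolated_metadata = True

  # restart the dhcp agent, then inside a guest check that the DHCP lease
  # pushed a 169.254.x.x route (option 121), e.g.:
  #   ip route | grep 169.254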

If you are deploying the l3 agent, but do not want to deploy the metadata
agent on the same host, the 'metadata access network' can be considered.
This option is enabled by setting enable_metadata_network on the dhcp agent
configuration file. When enabled, quantum networks whose cidr is included
in 169.254.0.0/16 will be regarded as 'metadata networks', and will spawn a
metadata proxy. The user can then connect such network to any logical
router through the quantum API; thus granting metadata access to all the
networks connected to such router.
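
A rough sketch of that variant (flag names as given above; the network name and
CIDR below are placeholders, not values from this thread):

  # /etc/quantum/dhcp_agent.ini
  enable_metadata_network = True
  enable_isolated_metadata = True

  # create a 'metadata network' inside 169.254.0.0/16 and attach it to a router
  quantum net-create metadata-net
  quantum subnet-create --name metadata-subnet metadata-net 169.254.169.0/24
  quantum router-interface-add ROUTER_ID metadata-subnet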

I think the documentation for quantum metadata has not yet been merged in
the admin guide.
I hope this clarifies the matter a little... although this thread has gone
a little bit off-topic. Can you consider submitting one or more questions
to ask.openstack.org?


Regards,
Salvatore


On 23 April 2013 00:50, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 That is precisely what I'm trying to figure out!

 How to setup metadata without L3 using Quantum Single Flat. I can't find
 any document about this.

 Plus, to make things worse, the package quantum-metadata-agent *DOES NOT
 DEPENDS* on quantum-l3-agent.

 BTW, I'm sure that with my guide, I'll be able to run Quantum on its
 simplest scenario!

 Give it a shot!!   https://gist.github.com/tmartinx/d36536b7b62a48f859c2

 My guide is perfect, have no bugs. Tested it +50 times.

 Cheers!
 Thiago



 On 22 April 2013 19:18, Paras pradhan pradhanpa...@gmail.com wrote:

 So this is what I understand. Even if you do flat (nova-network style) ,
 no floating ip you still need l3 for metadata(?). I am really confused. I
 could never ever make quantum work. never had any issues with nova-network.

 Paras.



 On Fri, Apr 19, 2013 at 8:35 PM, Daniels Cai danx...@gmail.com wrote:

 paras

 In my experience the answer is yes.
 In Grizzly, the metadata proxy works in the qrouter's namespace; no router
 means no metadata.
 I am not sure whether there are any other approaches.

 Daniels Cai

 http://dnscai.com

 在 2013-4-20,9:28,Martinx - ジェームズ thiagocmarti...@gmail.com 写道:

 Daniels,

 There is no `Quantum L3' on this setup (at least not on my own
 environment / guide).

 So, this leads me to one question: Metadata depends on L3?

 I do not want Quantum L3 package and I want Metadata... Is that possible?

 Tks,
 Thiago


 On 19 April 2013 21:44, Daniels Cai danx...@gmail.com wrote:

 Hi Paras
 The log says your DHCP works fine while metadata is not.
 Check the following steps:

 1. Make sure the nova API has the metadata service enabled.

 2. A virtual router should be created for your subnet, and this router
 should be bound to an l3 agent.

 3. In the l3 agent, the metadata proxy service should be working fine.
 The metadata agent config file should contain the nova API host and keystone
 auth info (see the sketch after this list).

 4. The OVS bridge br-ex is needed on your l3 agent server even if you don't
 need floating IPs.
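
 For step 3, a minimal /etc/quantum/metadata_agent.ini sketch (every hostname
 and credential below is a placeholder, not a value from this thread):

   auth_url = http://CONTROLLER:35357/v2.0
   auth_region = RegionOne
   admin_tenant_name = service
   admin_user = quantum
   admin_password = SERVICE_PASSWORD
   nova_metadata_ip = CONTROLLER
   nova_metadata_port = 8775
   metadata_proxy_shared_secret = SHARED_SECRET

 The same secret is usually set as quantum_metadata_proxy_shared_secret in
 nova.conf on the metadata API host.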

 Daniels Cai

 http://dnscai.com

 在 2013-4-19,23:42,Paras pradhan pradhanpa...@gmail.com 写道:

 Any idea why I could not hit http://169.254.169.254/2009-04-04/instance-id?
 Here is what I am seeing in CirrOS:

 --
 Sending discover...
 Sending select for 192.168.122.98...
 Lease of 192.168.122.98 obtained, lease time 120
 deleting routers
 route: SIOCDELRT: No such process
 route: SIOCADDRT: No such process
 adding dns 192.168.122.1
 adding dns 8.8.8.8
 cirros-ds 'net' up at 4.62
 checking http://169.254.169.254/2009-04-04/instance-id
 failed 1/20: up 4.79. request failed
 failed 2/20: up 6.97. request failed
 failed 3/20: up 9.03. request failed
 failed 4/20: up 11.08. request fa

 ..
 --

 Thanks
 Paras.


 On Thu, Apr 11, 2013 at 7:22 AM, Martinx - ジェームズ 
 thiagocmarti...@gmail.com wrote:

 Guys!

  I just updated the *Ultimate OpenStack Grizzly Guide*:
 https://gist.github.com/tmartinx/d36536b7b62a48f859c2

  You guys will note that this environment works with *echo 0 >
 /proc/sys/net/ipv4/ip_forward*, on *both* controller *AND* compute
 nodes! Take a look! I didn't touch the /etc/sysctl.conf file and it is
 

[Openstack] Heat PTL nominations are open

2013-04-23 Thread Thierry Carrez
Hi everyone,

As you may know, Steve Dake resigned[1] from his Heat PTL position for
personal reasons.

Now that the summit is over, we should start the selection process for
his replacement.

If you would like to announce that you would like to be the PTL for Heat
for the rest of the Havana development cycle, please send an email to
*openstack@lists.launchpad.net* with subject Heat PTL candidacy and a
description of your platform.

This self-nomination period will end on Monday, April 29, 23:59 PST.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-April/007396.html

-- 
Thierry Carrez (ttx)
Chair, OpenStack Technical Committee

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] problem in authenticating...

2013-04-23 Thread Study Kamaill
This is the output ::  --- 

root@swift-workshop:~# curl -v -H 'X-Auth-User: admin:admin' -H 'X-Auth-Key: 
admin' http://127.0.0.1/auth/v1.0/
* About to connect() to 127.0.0.1 port 80 (#0)
*   Trying 127.0.0.1... connected
> GET /auth/v1.0/ HTTP/1.1
> User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 127.0.0.1
> Accept: */*
> X-Auth-User: admin:admin
> X-Auth-Key: admin
> 
< HTTP/1.1 200 OK
< X-Storage-Url: http://127.0.0.1:80/v1/AUTH_admin
< X-Auth-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
< X-Storage-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
< X-Trans-Id: tx44471930f35c4cf68c960e0850adace3
< Content-Length: 0
< Date: Tue, 23 Apr 2013 07:05:01 GMT
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
root@swift-workshop:~# curl -v -H 'XX-Auth-Token: 
AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8' http://127.0.0.1:80/v1/AUTH_admin
* About to connect() to 127.0.0.1 port 80 (#0)
*   Trying 127.0.0.1... connected
> GET /v1/AUTH_admin HTTP/1.1
> User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 127.0.0.1
> Accept: */*
> XX-Auth-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
> 
< HTTP/1.1 401 Unauthorized
< Content-Length: 131
< Content-Type: text/html
< X-Trans-Id: txc073152b8f78467e93a64677d437ef1b
< Date: Tue, 23 Apr 2013 07:05:54 GMT
< 
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
<html><h1>Unauthorized</h1><p>This server could not verify that you are 
authorized to access the document you 
requested.</p></html>root@swift-workshop:~#
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift] Swift Storage nodes leave db.pending files after failed requests

2013-04-23 Thread Sergio Rubio
Howdy folks,

While populating an empty Swift test cluster with
swift-dispersion-populate some container creation requests failed,
leaving 'db.pending' files.

This is standard operation I think
(http://docs.openstack.org/developer/swift/overview_architecture.html#updaters)
however the pending files are never removed (the updaters aren't
picking up those changes?) and I can see periodic errors in the
storage node error log:

<pre>
root@swift-002:/srv/node# find|grep pending
./cd6b760c-bf47-4733-b7bb-23659dd8ee1d/accounts/523880/adc/e2cfc6be58be71d5ad6111364fff0adc/e2cfc6be58be71d5ad6111364fff0adc.db.pending
./7e0cfcfc-7c6e-41a4-adc8-e4173147bd2a/accounts/523880/adc/e2cfc6be58be71d5ad6111364fff0adc/e2cfc6be58be71d5ad6111364fff0adc.db.pending
./7cf651d7-e2f7-4dc4-ada0-7cd0dc832449/containers/498724/d1d/bd66b837c7b9a95dec68b3ca05a75d1d/bd66b837c7b9a95dec68b3ca05a75d1d.db.pending
</pre>


<pre>
Apr 23 11:06:26 swift-001 account-server ERROR __call__ error with PUT
/d3829495-9f10-4075-a558-f99fb665cfe2/464510/AUTH_aeedb7bdcb8846599e2b6b87bb8a947f/dispersion_916bc3a9cc1548aca41be6633c1bd194
: #012Traceback (most recent call last):#012  File
/usr/lib/python2.7/dist-packages/swift/account/server.py, line 333,
in __call__#012res = method(req)#012  File
/usr/lib/python2.7/dist-packages/swift/common/utils.py, line 1558,
in wrapped#012return func(*a, **kw)#012  File
/usr/lib/python2.7/dist-packages/swift/common/utils.py, line 520, in
_timing_stats#012resp = func(ctrl, *args, **kwargs)#012  File
/usr/lib/python2.7/dist-packages/swift/account/server.py, line 112,
in PUT#012req.headers['x-bytes-used'])#012  File
/usr/lib/python2.7/dist-packages/swift/common/db.py, line 1431, in
put_container#012raise DatabaseConnectionError(self.db_file, DB
doesn't exist)#012DatabaseConnectionError: DB connection error
(/srv/node/d3829495-9f10-4075-a558-f99fb665cfe2/accounts/464510/adc/e2cfc6be58be71d5ad6111364fff0adc/e2cfc6be58be71d5ad6111364fff0adc.db,
0):#012DB doesn't exist
</pre>

The test cluster has two storage nodes with 12 drives each, packaged
Swift 1.8.0 from Ubuntu Cloud Archive and the account/container
configuration is the following:
https://gist.github.com/rubiojr/6ea3d0ea0c4d00949d33

The cluster is fully operational and there are no other known issues
ATM. I can easily reproduce the problem by emptying the cluster and
populating it again with swift-dispersion-populate.
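
In case it helps while digging, a couple of low-risk checks (assuming the
standard swift-init wrapper shipped with the Ubuntu packages):

  # confirm the consistency daemons that flush and replicate the account and
  # container DBs are actually running on the storage nodes
  swift-init account-replicator status
  swift-init container-replicator status
  swift-init container-updater status
  swift-init all status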

I've exhausted all the possibilities (bug reports, mailing lists,
Google, etc.) so I decided to ask before starting to dig into the source code
trying to gather evidence to open a bug report.

If anyone can shed some light on the issue that would be great.

Thanks!

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem in authenticating...

2013-04-23 Thread Unmesh Gurjar
Kamaill,

I think that is caused by the wrong HTTP header in the second curl
command. It should be 'X-Auth-Token' instead of 'XX-Auth-Token'.
Give it a try and get back if you have any issues.
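
For example, the second request with the corrected header would be (token value
reused from your paste):

  curl -v -H 'X-Auth-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8' http://127.0.0.1:80/v1/AUTH_admin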

-Unmesh.


On Tue, Apr 23, 2013 at 1:17 PM, Study Kamaill study.i...@yahoo.com wrote:

 THis is the output ::  ---

 root@swift-workshop:~# curl -v -H 'X-Auth-User: admin:admin' -H
 'X-Auth-Key: admin' http://127.0.0.1/auth/v1.0/
 * About to connect() to 127.0.0.1 port 80 (#0)
 *   Trying 127.0.0.1... connected
  GET /auth/v1.0/ HTTP/1.1
  User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
  Host: 127.0.0.1
  Accept: */*
  X-Auth-User: admin:admin
  X-Auth-Key: admin
 
  HTTP/1.1 200 OK
  X-Storage-Url: http://127.0.0.1:80/v1/AUTH_admin
  X-Auth-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
  X-Storage-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
  X-Trans-Id: tx44471930f35c4cf68c960e0850adace3
  Content-Length: 0
  Date: Tue, 23 Apr 2013 07:05:01 GMT
 
 * Connection #0 to host 127.0.0.1 left intact
 * Closing connection #0
 root@swift-workshop:~# curl -v -H 'XX-Auth-Token:
 AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8' http://127.0.0.1:80/v1/AUTH_admin
 * About to connect() to 127.0.0.1 port 80 (#0)
 *   Trying 127.0.0.1... connected
  GET /v1/AUTH_admin HTTP/1.1
  User-Agent: curl/7.22.0 (i686-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1
 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
  Host: 127.0.0.1
  Accept: */*
  XX-Auth-Token: AUTH_tk2ef67e3e0bce4dfb8191044cfc2101d8
 
  HTTP/1.1 401 Unauthorized
  Content-Length: 131
  Content-Type: text/html
  X-Trans-Id: txc073152b8f78467e93a64677d437ef1b
  Date: Tue, 23 Apr 2013 07:05:54 GMT
 
 * Connection #0 to host 127.0.0.1 left intact
 * Closing connection #0
 htmlh1Unauthorized/h1pThis server could not verify that you are
 authorized to access the document you
 requested./p/htmlroot@swift-workshop:~#

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] How to configure nova-network for a single node

2013-04-23 Thread Daniel Ellison
Hi all,

I've slowly been configuring a single server with OpenStack for a 
proof-of-concept I want to present to my managers. This single server is 
co-located and directly exposed to the Internet. It has one active Ethernet 
port (eth0) and one inactive and disconnected Ethernet port (eth1). I've 
already set up br100 over eth0 (I was using KVM on this machine previously, so 
bridging was already set up). This machine has an entire class C IPv4 network 
(256 IPs) available to it.

I have both Keystone and Glance in place so far. I'm now working on configuring 
Nova, specifically nova-network. For this PoC there's no need to get into the 
(perceived) complexities of Quantum, though Quantum will eventually be a big 
selling point, I believe.

I'm not yet sure of how to configure this server for networking. I want VMs to 
be assigned public IPs from my pool, but also be on an internal network (albeit 
all on the one server). I've read Libvirt Flat Networking and Libvirt Flat 
DHCP Networking as well as many other pages pertaining to nova-network 
configuration. I know this should be a simple setup, but the various pages have 
done more to confuse me than anything else.

My /etc/network/interfaces is similar to that on the Libvirt Flat Networking 
page, except I'm using static setup for the bridge as I've assigned an IP from 
my network to the server. So it looks more like this:

auto lo
iface lo inet loopback
pre-up iptables-restore < /etc/iptables.up.rules

auto eth0
iface eth0 inet manual

auto br100
iface br100 inet static
network 204.187.138.0
gateway 204.187.138.1
address 204.187.138.2
broadcast 204.187.138.255
netmask 255.255.255.0
bridge_ports eth0
bridge_stp off 
bridge_fd 0
bridge_maxwait 0

So after all that, how best should I configure nova-network on this single 
server? I appreciate any guidance in advance, and am more than willing to 
answer any questions to clarify my setup or intentions.
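
For reference, one minimal FlatDHCP-style nova.conf sketch of the kind often
used for a single-host setup; the bridge name matches the interfaces file
above, while the fixed range and the floating range are illustrative
assumptions, not settings taken from this thread:

  network_manager=nova.network.manager.FlatDHCPManager
  flat_network_bridge=br100
  public_interface=br100
  fixed_range=10.0.0.0/24

  # a public range can then be registered for floating IPs, e.g.:
  #   nova-manage floating create --ip_range=204.187.138.128/25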

Thanks!
Daniel
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Heat PTL candidacy

2013-04-23 Thread Steven Hardy
Hi!

I'd like to propose myself as a candidate for the Heat PTL role, ref
Thierry's nominations email [1]

I've been professionally involved with software engineering for around 13
years, working in a variety of industries, from embedded/kernel
development to big-enterprise customer-facing consulting.

Having been involved with the Heat project from very near the start, I've
been part of the strong core team who are making this project grow from a
good idea into something people can actually use (and are using!).  

I have a deep understanding of our current code-base, and a clear view of
our future roadmap (and the challenges we face!), so I believe I am in a
good position to step into the role Steve Dake was unfortunately unable to
continue with, and do what is required to enable the Heat project to
deliver another successful release for Havana.

Having attended the summit last week, I have to say I'm even more driven
and enthusiastic about the project, so much great feedback and ideas from
our users and potential contributors.  I look forward to developing more
features our users want, and encouraging much wider community participation
in the project over the next few months.

Thanks!

Steve Hardy


[1] http://lists.openstack.org/pipermail/openstack-dev/2013-April/007724.html

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] AUTO: Dietmar Noll/Germany/IBM is out of the office (returning 24.04.2013)

2013-04-23 Thread Dietmar Noll

I am out of the office until 24.04.2013.

I am out of the office traveling and will only have limited access to
e-mails.
I will respond as soon as possible.
For TPC/VSC related topics, please contact Sumant Padbidri
For other urgent topics, please contact Horst Zisgen.


Note: This is an automated response to your message  [Openstack] Heat PTL
nominations are open sent on 23/04/2013 10:56:37.

This is the only notification you will receive while this person is away.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Heat PTL candidacy

2013-04-23 Thread Steven Hardy
Repost to correctly include openstack-dev on Cc

On Tue, Apr 23, 2013 at 02:45:31PM +0100, Steven Hardy wrote:
 Hi!
 
 I'd like to propose myself as a candidate for the Heat PTL role, ref
 Thierry's nominations email [1]
 
 I've been professionally involved with software engineering for around 13
 years, working in a variety of industries, from embedded/kernel
 development to big-enterprise customer-facing consulting.
 
 Having been involved with the Heat project from very near the start, I've
 been part of the strong core team who are making this project grow from a
 good idea into something people can actually use (and are using!).  
 
 I have a deep understanding of our current code-base, and a clear view of
 our future roadmap (and the challenges we face!), so I believe I am in a
 good position to step into the role Steve Dake was unfortunately unable to
 continue with, and do what is required to enable the Heat project to
 deliver another successful release for Havana.
 
 Having attended the summit last week, I have to say I'm even more driven
 and enthusiastic about the project, so much great feedback and ideas from
 our users and potential contributors.  I look forward to developing more
 features our users want, and encouraging much wider community participation
 in the project over the next few months.
 
 Thanks!
 
 Steve Hardy
 
 
 [1] http://lists.openstack.org/pipermail/openstack-dev/2013-April/007724.html

-- 
Steve Hardy
Red Hat Engineering, Cloud

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to configure nova-network for a single node

2013-04-23 Thread Eric Marques

Dear Daniel,

I'm working at a company that has published an article on this subject.
http://sysadmin.smile.fr/post/2013/04/23/Use-flat-network-with-OpenStack

Hope it will help you,

Regards,


Le 23/04/2013 14:44, Daniel Ellison a écrit :

Hi all,

I've slowly been configuring a single server with OpenStack for a proof-of-concept I want 
to present to my managers. This single server is co-located and directly exposed to the 
Internet. It has one active Ethernet port (eth0) and one inactive and disconnected 
Ethernet port (eth1). I've already set up br100 over eth0 (I was using KVM on this 
machine previously, so bridging was already set up). This machine has an entire class 
C IPv4 network (256 IPs) available to it.

I have both Keystone and Glance in place so far. I'm now working on configuring 
Nova, specifically nova-network. For this PoC there's no need to get into the 
(perceived) complexities of Quantum, though Quantum will eventually be a big 
selling point, I believe.

I'm not yet sure of how to configure this server for networking. I want VMs to be assigned public 
IPs from my pool, but also be on an internal network (albeit all on the one server). I've read 
Libvirt Flat Networking and Libvirt Flat DHCP Networking as well as many 
other pages pertaining to nova-network configuration. I know this should be a simple setup, but the 
various pages have done more to confuse me than anything else.

My /etc/network/interfaces is similar to that on the Libvirt Flat Networking 
page, except I'm using static setup for the bridge as I've assigned an IP from my network 
to the server. So it looks more like this:

auto lo
iface lo inet loopback
 pre-up iptables-restore < /etc/iptables.up.rules

auto eth0
iface eth0 inet manual

auto br100
iface br100 inet static
 network 204.187.138.0
 gateway 204.187.138.1
 address 204.187.138.2
 broadcast 204.187.138.255
 netmask 255.255.255.0
 bridge_ports eth0
 bridge_stp off
 bridge_fd 0
 bridge_maxwait 0

So after all that, how best should I configure nova-network on this single 
server? I appreciate any guidance in advance, and am more than willing to 
answer any questions to clarify my setup or intentions.

Thanks!
Daniel
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] horizon error

2013-04-23 Thread Mballo Cherif
Hi everybody, when I authenticate with Horizon I have this message: "Error: 
Unauthorized: Unable to retrieve usage information." And: "Error: Unauthorized: 
Unable to retrieve quota information."

How can I fix this issue?



Thanks you for your help!



Sheriff!



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] horizon error

2013-04-23 Thread Heiko Krämer
Hiho,

this occurs if a service is not running or not reachable. In your case it is
most likely the API or compute service.
Check that each service is running and reachable from your Horizon host.

Check that all endpoints in Keystone are configured correctly.
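
A few quick checks along those lines (standard Grizzly-era clients assumed):

  nova-manage service list   # every binary should show :-) with a recent timestamp
  keystone endpoint-list     # compute/volume endpoints should point at reachable hosts
  nova --debug list          # shows exactly which endpoints an API call walks through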

Greetings
Heiko


On 23.04.2013 17:25, Mballo Cherif wrote:

 Hi everybody, when I'm authenticate with horizon I have this message
 *Error: *Unauthorized: Unable to retrieve usage information. And
 *Error: *Unauthorized: Unable to retrieve quota information.

 How can I fix this issue?

  

 Thanks you for your help!

  

 Sheriff!

  



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] CLI to get IP address in use

2013-04-23 Thread Steve Heistand
is there a cli command to get the list of IP addresses in use by a given user 
or tenant?

thanks

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #




signature.asc
Description: OpenPGP digital signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] CLI to get IP address in use

2013-04-23 Thread Yi Yang
nova list should show all the VMs, as well as the ip addresses, that 
the tenant owns
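
For example (Grizzly-era novaclient assumed):

  nova list                 # fixed and floating IPs of the current tenant's instances
  nova list --all-tenants   # admin only: instances and IPs across all tenants
  nova floating-ip-list     # floating IPs allocated to the current tenant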


Yi

On 4/23/13 11:33 AM, Steve Heistand wrote:

is there a cli command to get the list of IP addresses in use by a given user 
or tenant?

thanks



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [debian] questions?

2013-04-23 Thread Arindam Choudhury
Hi,

I am following this tutorial 
http://docs.openstack.org/trunk/openstack-compute/install/apt/content/installing-the-cloud-controller.html
 to install grizzly using 

deb http://archive.gplhost.com/debian grizzly main deb 
http://archive.gplhost.com/debian grizzly-backports main 

When I tried to install nova in controller it can not find 
nova-ajax-console-proxy
and:
Errors were encountered while processing:
 /var/cache/apt/archives/nova-novncproxy_2013.1-1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

so went on and install without these two package. is it okay?

  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] horizon error

2013-04-23 Thread Mballo Cherif
Hi Hiho,

Thank you for your answer. In fact, when I launch nova-manage service list I 
get this:



nova-cert         openstack-grizzly.linux.gem  internal  enabled  :-)  2013-04-23 16:05:03
nova-conductor    openstack-grizzly.linux.gem  internal  enabled  :-)  2013-04-23 16:05:03
nova-consoleauth  openstack-grizzly.linux.gem  internal  enabled  :-)  2013-04-23 16:05:03
nova-scheduler    openstack-grizzly.linux.gem  internal  enabled  :-)  2013-04-23 16:05:03
nova-compute      openstack-grizzly.linux.gem  nova      enabled  :-)  2013-04-23 16:05:04

Is it normal not to have nova-api in the list? Otherwise, when I check service 
nova-api status, the service is running fine (nova-api start/running, process 
13421).







From: Openstack 
[mailto:openstack-bounces+cherif.mballo=gemalto@lists.launchpad.net] On 
Behalf Of Heiko Krämer
Sent: mardi 23 avril 2013 17:30
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] horizon error



Hiho,

this occurs if an service not running or not reachable. In your case mostly api 
or compute.
Check if each service are running and reachable from your Horizon host.

Check if all endpoints in keystone are configured correctly.

Greetings
Heiko


On 23.04.2013 17:25, Mballo Cherif wrote:

Hi everybody, when I'm authenticate with horizon I have this message Error: 
Unauthorized: Unable to retrieve usage information. And Error: Unauthorized: 
Unable to retrieve quota information.

How can I fix this issue?



Thanks you for your help!



Sheriff!








___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-23 Thread Martinx - ジェームズ
Thank you Orlando!

I just removed `quantum-l3-agent' for the sake of simplicity (I don't want
it for now) and have enabled `enable_isolated_metadata = True' in
dhcp_agent.ini but, same result: metadata doesn't work.

The CirrOS instance doesn't reach the metadata server. Also, within CirrOS, there is
no route to the 169.254.0.0/16 network. Probably because its DHCP client isn't
ready with option 121?


Also, the `Ubuntu Cloud Image' doesn't reach the metadata either; I'm seeing:


2013-04-23 15:13:05,687 - util.py[WARNING]: '
http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [49/120s]: url
error [timed out]


And I have a `Pre-Installed' Ubuntu template that I can boot and log in to; it
has option 121 configured but no route to the 169.254.0.0/24 network in
my routing tables.

To try one more option, I just enabled `enable_metadata_network = True'
but that doesn't work either.

Anyway, this isn't off-topic, because enabling metadata without L3 on a
Single Flat setup is on my TODO list to improve my document, the Ultimate
OpenStack Grizzly Guide, so it is right on topic.


And I really appreciate your help! Now I have a much better direction to
follow.

But, I still can't figure out how to put metadata to work. Too
complicated...


One more question, at the dhcp_agent, there is a message:

The metadata service will only be activated when the subnet gateway_ip is
None.

What does this mean?

Does it mean that I can't use `--gateway 10.33.14.1' when running `quantum
subnet-create'?


Tks!
Thiago


On 23 April 2013 04:48, Salvatore Orlando sorla...@nicira.com wrote:

 Quantum's metadata solution for Grizzly can run either with or without the
 l3 agent.
 When running within the l3 agent, packets directed to 169.254.169.254 are
 sent to the default gateway; the l3 agent will spawn a metadata proxy for
 each router; the metadata proxy forwards them to the metadata agent using a
 datagram socket, and finally the agent reaches the Nova metadata server.

 Without the l3 agent, the 'isolated' mode can be enabled for the metadata
 access service. This is achieved by setting the flag
 enable_isolated_metadata_proxy to True in the dhcp_agent configuration
 file. When the isolated proxy is enabled, the dhcp agent will send an
 additional static route to each VM. This static route will have the dhcp
 agent as next hop and 169.254.0.0/16 as destination CIDR; the dhcp agent
 will spawn a metadata proxy for each network. Once the packet reaches the
 proxy, the procedure works as above. This should also explain why the
 metadata agent does not depend on the l3 agent.

 If you are deploying the l3 agent, but do not want to deploy the metadata
 agent on the same host, the 'metadata access network' can be considered.
 This option is enabled by setting enable_metadata_network on the dhcp agent
 configuration file. When enabled, quantum networks whose cidr is included
 in 169.254.0.0/16 will be regarded as 'metadata networks', and will spawn
 a metadata proxy. The user can then connect such network to any logical
 router through the quantum API; thus granting metadata access to all the
 networks connected to such router.

 I think the documentation for quantum metadata has not yet been merged in
 the admin guide.
 I hope this clarifies the matter a little... although this thread has gone
 a little bit off-topic. Can you consider submitting one or more questions
 to ask.openstack.org?


 Regards,
 Salvatore


 On 23 April 2013 00:50, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 That is precisely what I'm trying to figure out!

 How to setup metadata without L3 using Quantum Single Flat. I can't find
 any document about this.

 Plus, to make things worse, the package quantum-metadata-agent *DOES NOT
 DEPENDS* on quantum-l3-agent.

 BTW, I'm sure that with my guide, I'll be able to run Quantum on its
 simplest scenario!

 Give it a shot!!   https://gist.github.com/tmartinx/d36536b7b62a48f859c2

 My guide is perfect, have no bugs. Tested it +50 times.

 Cheers!
 Thiago



 On 22 April 2013 19:18, Paras pradhan pradhanpa...@gmail.com wrote:

 So this is what I understand. Even if you do flat (nova-network style) ,
 no floating ip you still need l3 for metadata(?). I am really confused. I
 could never ever make quantum work. never had any issues with nova-network.

 Paras.



 On Fri, Apr 19, 2013 at 8:35 PM, Daniels Cai danx...@gmail.com wrote:

 paras

 In my experience the answer is yes .
 In grizzly , metadata proxy works in the qrouter's name space ,no
 router means no metadata .
 I am not sure whether any other approaches .

 Daniels Cai

 http://dnscai.com

 在 2013-4-20,9:28,Martinx - ジェームズ thiagocmarti...@gmail.com 写道:

 Daniels,

 There is no `Quantum L3' on this setup (at least not on my own
 environment / guide).

 So, this leads me to one question: Metadata depends on L3?

 I do not want Quantum L3 package and I want Metadata... Is that
 possible?

 Tks,
 Thiago


 On 19 April 2013 21:44, Daniels Cai danx...@gmail.com wrote:

 Hi

[Openstack] [OSSG][OSSN] HTTP POST limiting advised to avoid Essex/Folsom Keystone DoS

2013-04-23 Thread Clark, Robert Graham
HTTP POST limiting advised to avoid Essex/Folsom Keystone DoS
---

### Summary ###
Concurrent Keystone POST requests with large body messages are held in memory 
without filtering or rate limiting; this can lead to resource exhaustion on the 
Keystone server.

### Affected Services / Software ###
Keystone, Load Balancer, Proxy

### Discussion ###
Keystone stores POST messages in memory before validation, concurrent 
submission of multiple large POST messages can cause the Keystone process to be 
killed due to memory exhaustion, resulting in a remote Denial of Service.

In many cases Keystone will be deployed behind a load-balancer or proxy that 
can rate limit POST messages inbound to Keystone. Grizzly is protected against 
that through the sizelimit middleware.

### Recommended Actions ###
If you are in a situation where Keystone is directly exposed to incoming POST 
messages and not protected by the sizelimit middleware there are a number of 
load-balancing/proxy options, we suggest you consider one of the following:

Nginx: Open-source, high-performance HTTP server and reverse proxy.
Nginx Config: http://wiki.nginx.org/HttpCoreModule#client_max_body_size

Apache: HTTP Server Project
Apache Config: http://httpd.apache.org/docs/2.4/mod/core.html#limitrequestbody
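
For illustration, the corresponding directives could look like this (the
114688-byte value mirrors the sizelimit middleware's usual default and is an
assumption; pick a limit appropriate for your deployment):

  # nginx: http or server block fronting Keystone
  client_max_body_size 114688;

  # Apache: VirtualHost/Location proxying Keystone
  LimitRequestBody 114688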

### Contacts / References ###
This OSSN Bug: https://bugs.launchpad.net/ossn/+bug/1155566
Original LaunchPad Bug : https://bugs.launchpad.net/keystone/+bug/1098177
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] Query regarding floating IP configuration

2013-04-23 Thread Edgar Magana
Anil,

If you are testing multiple vNICs I will recommend you to use the following
image:
IMAGE_URLS=http://www.openvswitch.org/tty-quantum.tgz

In your localrc add the above string and you are all set up!

Thanks,

Edgar

From:  Anil Vishnoi vishnoia...@gmail.com
Date:  Wednesday, April 17, 2013 1:29 PM
To:  openstack@lists.launchpad.net openstack@lists.launchpad.net
Subject:  [Openstack] [Quantum] Query regarding floating IP configuration


Hi All,

I am trying to setup openstack in my lab, where i have a plan to run
Controller+Network node on one physical machine and two compute node.
Controller/Network physical machine has 2 NIc, one connected to externet
network (internet) and second nic is on private network.

The OS Network Administrator Guide says: "The node running quantum-l3-agent
should not have an IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of IP addresses from the
external network that can be used by OpenStack Networking for routers that
uplink to the external network." So my confusion is: if I want to send any
REST API call to my controller/network node from the external network, I
obviously need a public IP address. But the instruction I quoted says that we
should not have a manually configured IP address on the NIC.

Does it mean we can't create a floating IP pool in this kind of setup? Or do we
need 3 NICs: 1 for the private network, 1 for floating IP pool creation and 1 for
external access to the machine?

OR is it that we can assign the public IP address to br-ex, and remove
it from the physical NIC? Please let me know if my query is not clear.
-- 
Thanks
Anil
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to configure nova-network for a single node

2013-04-23 Thread Razique Mahroua
Great one!
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

Le 23 avr. 2013 à 17:09, Eric Marques marque...@free.fr a écrit :

 Dear Daniel,

 I'm working at a company that has published an article on this subject.
 http://sysadmin.smile.fr/post/2013/04/23/Use-flat-network-with-OpenStack

 Hope it will help you,

 Regards,

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-23 Thread Martinx - ジェームズ
Hi!

I just deleted my subnet, to create it again without specifying `--gateway
10.33.14.1'; my pre-installed images still work but cloud-based images,
which require metadata, don't.

I'm running out of options again...

I tried with and without those options:

---
enable_isolated_metadata = True
enable_metadata_network = True

`--gateway 10.33.14.1' on quantum subnet-create...
---

...multiple times, doesn't work.

Tks,
Thiago


On 23 April 2013 13:19, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Thank you Orlando!

 I just remove `quantum-l3-agent' for the sake of simplicity (I don't want
 it for now) and have enabled `enable_isolated_metadata = True' in
 dhcp_agent.ini but, same result. Metadata doesn't work.

 The CirrOS doesn't reach the metadata server. Also, within CirrOS, there
 is no route to 169.254.0.0/16 network. Probably because its dhcp client
 isn't ready with option 121 ???


 Also, the `Ubuntu Cloud Image' doesn't reach the metadata too, I'm seeing:


 20130423 15:13:05,687  util.py[WARNING]: '
 http://169.254.169.254/20090404/metadata/instanceid' failed [49/120s]:
 url error [timed out]


 And, I have a `Pre-Installed' Ubuntu template that I can boot and login,
 it have the option 121 configured but, no route to 169.254.0.0/24 network
 in my routing tables.

 To try one more option, I just enable the `enable_metadata_network = True'
 but, doesn't work either.

 Anyway, this isn't off-topic, because of enabling metadata without L3 on a
 Single Flat, is on my TODO list to improve my document, the Ultimate
 OpenStack Grizzly Guide, so, it is right on topic.


 And I really appreciate your help! Now I have a much better direction to
 follow.

 But, I still can't figure out how to put metadata to work. Too
 complicated...


 One more question, at the dhcp_agent, there is a message:

 The metadata service will only be activated when the subnet gateway_ip is
 None.

 What this means?

 It means that I can't use `--gateway 10.33.14.1' when running the `quantum
 subnet-create' ?


 Tks!
 Thiago


 On 23 April 2013 04:48, Salvatore Orlando sorla...@nicira.com wrote:

 Quantum's metadata solution for Grizzly can run either with or without
 the l3 agent.
 When running within the l3 agent, packets directed to 169.254.169.254 are
 sent to the default gateway; the l3 agent will spawn a metadata proxy for
 each router; the metadata proxy forwards them to the metadata agent using a
 datagram socket, and finally the agent reaches the Nova metadata server.

 Without the l3 agent, the 'isolated' mode can be enabled for the metadata
 access service. This is achieved by setting the flag
 enable_isolated_metadata_proxy to True in the dhcp_agent configuration
 file. When the isolated proxy is enabled, the dhcp agent will send an
 additional static route to each VM. This static route will have the dhcp
 agent as next hop and 169.254.0.0/16 as destination CIDR; the dhcp agent
 will spawn a metadata proxy for each network. Once the packet reaches the
 proxy, the procedure works as above. This should also explain why the
 metadata agent does not depend on the l3 agent.

 If you are deploying the l3 agent, but do not want to deploy the metadata
 agent on the same host, the 'metadata access network' can be considered.
 This option is enabled by setting enable_metadata_network on the dhcp agent
 configuration file. When enabled, quantum networks whose cidr is included
 in 169.254.0.0/16 will be regarded as 'metadata networks', and will
 spawn a metadata proxy. The user can then connect such network to any
 logical router through the quantum API; thus granting metadata access to
 all the networks connected to such router.

 I think the documentation for quantum metadata has not yet been merged in
 the admin guide.
 I hope this clarifies the matter a little... although this thread has
 gone a little bit off-topic. Can you consider submitting one or more
 questions to ask.openstack.org?


 Regards,
 Salvatore


 On 23 April 2013 00:50, Martinx - ジェームズ thiagocmarti...@gmail.comwrote:

 That is precisely what I'm trying to figure out!

 How to setup metadata without L3 using Quantum Single Flat. I can't find
 any document about this.

 Plus, to make things worse, the package quantum-metadata-agent *DOES
 NOT DEPENDS* on quantum-l3-agent.

 BTW, I'm sure that with my guide, I'll be able to run Quantum on its
 simplest scenario!

 Give it a shot!!   https://gist.github.com/tmartinx/d36536b7b62a48f859c2

 My guide is perfect, have no bugs. Tested it +50 times.

 Cheers!
 Thiago



 On 22 April 2013 19:18, Paras pradhan pradhanpa...@gmail.com wrote:

 So this is what I understand. Even if you do flat (nova-network style)
 , no floating ip you still need l3 for metadata(?). I am really confused. I
 could never ever make quantum work. never had any issues with nova-network.

 Paras.



 On Fri, Apr 19, 2013 at 8:35 PM, Daniels Cai danx...@gmail.com wrote:

 paras

 In my experience the answer is yes .
 In grizzly

Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-23 Thread Paras pradhan
Do these require metadata?

http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img

http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

Thanks,
Paras.


On Tue, Apr 23, 2013 at 12:53 PM, Martinx - ジェームズ thiagocmarti...@gmail.com
 wrote:

 Hi!

 I just delete my subnet, to create it again without specifying `--gateway
 10.33.14.1', my pre-installed images still works but, cloud based images,
 that requires metadata, doesn't.

 I'm running out of options again...

 I tried with and without those options:

 ---
 enable_isolated_metadata = True
 enable_metadata_network = True

 `--gateway 10.33.14.1' on quantum subnet-create...
 ---

 ...multiple times, doesn't work.

 Tks,
 Thiago


 On 23 April 2013 13:19, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Thank you Orlando!

 I just remove `quantum-l3-agent' for the sake of simplicity (I don't want
 it for now) and have enabled `enable_isolated_metadata = True' in
 dhcp_agent.ini but, same result. Metadata doesn't work.

 The CirrOS doesn't reach the metadata server. Also, within CirrOS, there
 is no route to 169.254.0.0/16 network. Probably because its dhcp client
 isn't ready with option 121 ???


 Also, the `Ubuntu Cloud Image' doesn't reach the metadata too, I'm seeing:


 20130423 15:13:05,687  util.py[WARNING]: '
 http://169.254.169.254/20090404/metadata/instanceid' failed [49/120s]:
 url error [timed out]


 And, I have a `Pre-Installed' Ubuntu template that I can boot and login,
 it have the option 121 configured but, no route to 169.254.0.0/24network in 
 my routing tables.

 To try one more option, I just enable the `enable_metadata_network =
 True' but, doesn't work either.

 Anyway, this isn't off-topic, because of enabling metadata without L3 on
 a Single Flat, is on my TODO list to improve my document, the Ultimate
 OpenStack Grizzly Guide, so, it is right on topic.


 And I really appreciate your help! Now I have a much better direction to
 follow.

 But, I still can't figure out how to put metadata to work. Too
 complicated...


 One more question, at the dhcp_agent, there is a message:

 The metadata service will only be activated when the subnet gateway_ip
 is None.

 What this means?

 It means that I can't use `--gateway 10.33.14.1' when running the
 `quantum subnet-create' ?


 Tks!
 Thiago


 On 23 April 2013 04:48, Salvatore Orlando sorla...@nicira.com wrote:

 Quantum's metadata solution for Grizzly can run either with or without
 the l3 agent.
 When running within the l3 agent, packets directed to 169.254.169.254
 are sent to the default gateway; the l3 agent will spawn a metadata proxy
 for each router; the metadata proxy forwards them to the metadata agent
 using a datagram socket, and finally the agent reaches the Nova metadata
 server.

 Without the l3 agent, the 'isolated' mode can be enabled for the
 metadata access service. This is achieved by setting the flag
 enable_isolated_metadata_proxy to True in the dhcp_agent configuration
 file. When the isolated proxy is enabled, the dhcp agent will send an
 additional static route to each VM. This static route will have the dhcp
 agent as next hop and 169.254.0.0/16 as destination CIDR; the dhcp
 agent will spawn a metadata proxy for each network. Once the packet reaches
 the proxy, the procedure works as above. This should also explain why the
 metadata agent does not depend on the l3 agent.

 If you are deploying the l3 agent, but do not want to deploy the
 metadata agent on the same host, the 'metadata access network' can be
 considered. This option is enabled by setting enable_metadata_network on
 the dhcp agent configuration file. When enabled, quantum networks whose
 cidr is included in 169.254.0.0/16 will be regarded as 'metadata
 networks', and will spawn a metadata proxy. The user can then connect such
 network to any logical router through the quantum API; thus granting
 metadata access to all the networks connected to such router.

 I think the documentation for quantum metadata has not yet been merged
 in the admin guide.
 I hope this clarifies the matter a little... although this thread has
 gone a little bit off-topic. Can you consider submitting one or more
 questions to ask.openstack.org?


 Regards,
 Salvatore


 On 23 April 2013 00:50, Martinx - ジェームズ thiagocmarti...@gmail.comwrote:

 That is precisely what I'm trying to figure out!

 How to setup metadata without L3 using Quantum Single Flat. I can't
 find any document about this.

 Plus, to make things worse, the package quantum-metadata-agent *DOES
 NOT DEPENDS* on quantum-l3-agent.

 BTW, I'm sure that with my guide, I'll be able to run Quantum on its
 simplest scenario!

 Give it a shot!!
 https://gist.github.com/tmartinx/d36536b7b62a48f859c2

 My guide is perfect, have no bugs. Tested it +50 times.

 Cheers!
 Thiago



 On 22 April 2013 19:18, Paras pradhan pradhanpa...@gmail.com wrote:

 So this is what I understand. Even if you do flat (nova

Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-23 Thread Martinx - ジェームズ
The Ubuntu image requires metadata, since there is no password there...
The CirrOS image has a pre-configured password (cubswin:)), so it can be used
without metadata...

Best,
Thiago


On 23 April 2013 15:32, Paras pradhan pradhanpa...@gmail.com wrote:

 Does these require metadata?


 http://uec-images.ubuntu.com/releases/12.04/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img

 http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

 Thanks,
 Paras.


 On Tue, Apr 23, 2013 at 12:53 PM, Martinx - ジェームズ 
 thiagocmarti...@gmail.com wrote:

 Hi!

 I just delete my subnet, to create it again without specifying `--gateway
 10.33.14.1', my pre-installed images still works but, cloud based images,
 that requires metadata, doesn't.

 I'm running out of options again...

 I tried with and without those options:

 ---
 enable_isolated_metadata = True
 enable_metadata_network = True

 `--gateway 10.33.14.1' on quantum subnet-create...
 ---

 ...multiple times, doesn't work.

 Tks,
 Thiago


 On 23 April 2013 13:19, Martinx - ジェームズ thiagocmarti...@gmail.comwrote:

 Thank you Orlando!

 I just remove `quantum-l3-agent' for the sake of simplicity (I don't
 want it for now) and have enabled `enable_isolated_metadata = True' in
 dhcp_agent.ini but, same result. Metadata doesn't work.

 The CirrOS doesn't reach the metadata server. Also, within CirrOS, there
 is no route to 169.254.0.0/16 network. Probably because its dhcp client
 isn't ready with option 121 ???


 Also, the `Ubuntu Cloud Image' doesn't reach the metadata too, I'm
 seeing:


 20130423 15:13:05,687  util.py[WARNING]: '
 http://169.254.169.254/20090404/metadata/instanceid' failed [49/120s]:
 url error [timed out]


 And, I have a `Pre-Installed' Ubuntu template that I can boot and login,
 it have the option 121 configured but, no route to 169.254.0.0/24network in 
 my routing tables.

 To try one more option, I just enable the `enable_metadata_network =
 True' but, doesn't work either.

 Anyway, this isn't off-topic, because of enabling metadata without L3 on
 a Single Flat, is on my TODO list to improve my document, the Ultimate
 OpenStack Grizzly Guide, so, it is right on topic.


 And I really appreciate your help! Now I have a much better direction to
 follow.

 But, I still can't figure out how to put metadata to work. Too
 complicated...


 One more question, at the dhcp_agent, there is a message:

 The metadata service will only be activated when the subnet gateway_ip
 is None.

 What this means?

 It means that I can't use `--gateway 10.33.14.1' when running the
 `quantum subnet-create' ?


 Tks!
 Thiago


 On 23 April 2013 04:48, Salvatore Orlando sorla...@nicira.com wrote:

 Quantum's metadata solution for Grizzly can run either with or without
 the l3 agent.
 When running within the l3 agent, packets directed to 169.254.169.254
 are sent to the default gateway; the l3 agent will spawn a metadata proxy
 for each router; the metadata proxy forwards them to the metadata agent
 using a datagram socket, and finally the agent reaches the Nova metadata
 server.

 Without the l3 agent, the 'isolated' mode can be enabled for the
 metadata access service. This is achieved by setting the flag
 enable_isolated_metadata_proxy to True in the dhcp_agent configuration
 file. When the isolated proxy is enabled, the dhcp agent will send an
 additional static route to each VM. This static route will have the dhcp
 agent as next hop and 169.254.0.0/16 as destination CIDR; the dhcp
 agent will spawn a metadata proxy for each network. Once the packet reaches
 the proxy, the procedure works as above. This should also explain why the
 metadata agent does not depend on the l3 agent.

 If you are deploying the l3 agent, but do not want to deploy the
 metadata agent on the same host, the 'metadata access network' can be
 considered. This option is enabled by setting enable_metadata_network on
 the dhcp agent configuration file. When enabled, quantum networks whose
 cidr is included in 169.254.0.0/16 will be regarded as 'metadata
 networks', and will spawn a metadata proxy. The user can then connect such
 network to any logical router through the quantum API; thus granting
 metadata access to all the networks connected to such router.

 I think the documentation for quantum metadata has not yet been merged
 in the admin guide.
 I hope this clarifies the matter a little... although this thread has
 gone a little bit off-topic. Can you consider submitting one or more
 questions to ask.openstack.org?


 Regards,
 Salvatore


 On 23 April 2013 00:50, Martinx - ジェームズ thiagocmarti...@gmail.comwrote:

 That is precisely what I'm trying to figure out!

 How to setup metadata without L3 using Quantum Single Flat. I can't
 find any document about this.

 Plus, to make things worse, the package quantum-metadata-agent *DOES
 NOT DEPENDS* on quantum-l3-agent.

 BTW, I'm sure that with my guide, I'll be able to run Quantum on its
 simplest scenario!

 Give it a shot!!
 https

[Openstack] is this normal / is there a workaround

2013-04-23 Thread Steve Heistand
so I have a grizzly/quantum/ovs/gre setup that currently only has the 1 extra
external IP for a single tenant network. It's assigned to the quantum router, so
the VMs don't have external floating IPs. Once I'm given more by networking they
will. But in the meantime the only way I can get to the VMs is either a VNC
window or via a command like:

ip netns exec qrouter_id ssh VM_IP

which of course works, but a non-root user can't execute the above 'ip ...'
command.

Is this all normal?
Is there a way that an actual normal user can ssh into the normal VMs?
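
One possible workaround sketch, if it is acceptable in your environment: a
sudoers entry so selected users can enter the router namespace. The group name
is hypothetical and the wildcard is deliberately broad, which is a real
security trade-off:

  %vmusers ALL=(root) NOPASSWD: /sbin/ip netns exec qrouter-* *

  # then: sudo ip netns exec qrouter-<id> ssh <VM_IP>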

thanks

s

-- 

 Steve Heistand   NASA Ames Research Center
 SciCon Group Mail Stop 258-6
 steve.heist...@nasa.gov  (650) 604-4369  Moffett Field, CA 94035-1000

 Any opinions expressed are those of our alien overlords, not my own.





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Multinode setup?

2013-04-23 Thread Dmitry Makovey
I have tracked the error message down to:

# grep -nrF service['updated_at'] /usr/lib/python2.6/site-packages/
/usr/lib/python2.6/site-packages/cinder/utils.py:929:    last_heartbeat = 
service['updated_at'] or service['created_at']

but now I can't quite figure out what event would trigger an update of the
'updated_at' column in cinder.services. I do see the second server referenced
in there, though:

*** 3. row ***
       created_at: 2013-04-17 20:18:33
       updated_at: NULL
       deleted_at: NULL
          deleted: 0
               id: 3
             host: nodeB
           binary: cinder-volume
            topic: cinder-volume
     report_count: 0
         disabled: 0
availability_zone: nova

so it registered itself initially but didn't update the record? Now I'm a bit
stuck, as there are not too many references to updated_at and none of them seem
relevant to me (most have something to do with glance and the volume-to-image
transition?).
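
For reference, this is how I'm checking the heartbeat from the shell (just a
sketch; the mysql credentials and the 60-second service_down_time default are
assumptions on my part):

mysql -u root -p cinder -e "SELECT host, report_count, updated_at, TIMESTAMPDIFF(SECOND, COALESCE(updated_at, created_at), UTC_TIMESTAMP()) AS age_sec FROM services WHERE deleted = 0;"
# a service only counts as 'up' while age_sec stays below service_down_time
# (60s by default, if I read cinder/utils.py right), so a NULL updated_at with
# an old created_at means nodeB's cinder-volume never reports in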




 From: Dmitry Makovey dmako...@yahoo.com
To: Dmitry Makovey dmako...@yahoo.com; Daniels Cai danx...@gmail.com 
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net 
Sent: Monday, April 22, 2013 4:36 PM
Subject: Re: [Openstack] Multinode setup?
 


BTW - I did add 


iscsi_ip_prefix= 1.1.1.{2,3}
iscsi_ip_address= 1.1.1.{2,3}


in /etc/cinder/cinder.conf on both nodes (nodeA: 1.1.1.2, nodeB: 1.1.1.3) as per 
https://lists.launchpad.net/openstack/msg21825.html . If I am to believe that 
post, my setup should just work at this point. Could it be because some 
endpoints are defined as localhost? I've added non-127.0.0.0/8 entries for the 
most pertinent endpoints (keystone, cinder*), but I wonder if I need something 
else as well. Does OpenStack take issue with multiple endpoints being defined?
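
To double-check the localhost theory I'm running something like this (a sketch;
it assumes the usual OS_* credentials are exported for keystone):

keystone endpoint-list | grep -E '127\.0\.0\.1|localhost'
# any endpoint still pointing at loopback is only reachable from the node it
# lives on, which would break a second cinder/nova node talking to it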




 From: Dmitry Makovey dmako...@yahoo.com
To: Daniels Cai danx...@gmail.com 
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net 
Sent: Monday, April 22, 2013 4:16 PM
Subject: Re: [Openstack] Multinode setup?
 


Thanks for the reply,


1. I checked the DB, and the cinder.services table in MySQL lists both cinder 
hosts running cinder-volume, cinder-api and cinder-scheduler. I think I should 
be running cinder-volume only, but that doesn't work either. I tried running:
nodeA: cinder-{api,volume,scheduler}, keystone, nova*
nodeB: cinder-volume
but that seems to be wrong.


2. Do you mean I should skip the --availability-zone flag, just overflow one 
cinder instance, and see whether the second will pick up the slack?


BTW - with --availability-zone={nova-volume,nova,cinder}:nodeB I keep on 
getting:


WillNotSchedule: Host nodeB is not up or doesn't exist.



Quite possibly I forgot to set up something simple, but what?


What is the right sequence to install 2 cinder nodes that I can use from my 
nova nodes? (I don't care about HA/redundancy at this point).
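
In the meantime I'm just doing the following on nodeB to see whether
cinder-volume ever phones home (a sketch; the Ubuntu service and log file names
are assumptions):

sudo service cinder-volume restart
sudo tail -f /var/log/cinder/cinder-volume.log   # watch for AMQP/RabbitMQ connection errors
# if it starts cleanly, report_count/updated_at in the services table should
# start moving for nodeB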





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Folsom TryStack on RHEL

2013-04-23 Thread Nachi Ueno
Hi TryStack Users

Please accept my apologies for the delay in deploying the x86 zone.
We are happy to announce the new Folsom X86 zone.

http://trystack.org/
http://x86.trystack.org/dashboard/

One piece of great news is that Red Hat has started contributing.
The cluster is now managed by Red Hat and it is running on RHEL.
Some people may think "Oh, it's the Folsom version?".
No worries, Red Hat has already started planning to
upgrade it to Grizzly!

The cluster has only 20 machines, so please use it kindly :)
There are already 4713 members of the TryStack Facebook group, and
3627 requests to join.
So if all users start booting VMs at the same time, the cluster may get upset.
We will check the status of the cluster, gradually adding new users
to the cluster.

Best.
Trystack admin team

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Folsom TryStack on RHEL

2013-04-23 Thread Syed Armani

 Thank you very much, Nachi, for sharing this good news. It's good to know
 that Red Hat is helping out in keeping trystack.org healthy and running.

Best regards,
Syed



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Folsom TryStack on RHEL

2013-04-23 Thread Nachi Ueno
Hi Syed

You're welcome! :)

And sorry for the double post.
Best
Nachi







___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ask.openstack invalid certificate error

2013-04-23 Thread Stefano Maffulli

On 04/23/2013 12:29 AM, Sam Morrison wrote:

Looks like the web server for ask.openstack.org isn't sending the
necessary intermediate certification authority files to the client
and hence getting a cert not valid (when using firefox at least)


Thanks, looks like a bug. We'll work on it.
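
For anyone who wants to verify the fix later, a quick check (just a sketch,
nothing OpenStack-specific) is to look at what the server actually sends:

openssl s_client -connect ask.openstack.org:443 -showcerts </dev/null
# a complete chain lists the intermediate CA certificate(s) after the server
# certificate; if only one certificate comes back, the intermediates are missing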

/stef

--
Ask and answer questions on https://ask.openstack.org

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] grizzly nova+quantum+gre cannot ping instance after nova boot

2013-04-23 Thread Ajiva Fan
I'm following this guide:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide

If I launch an instance from horizon, I can use "ip netns exec qrouter-xxx
ping xxx" to ping that instance, ssh to it, and access the external network;
everything seems fine, at least from my point of view.

**However, I cannot ping the instance if I launch it via the nova boot command**

Has anybody met such a problem? Please help me.
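
For reference, a typical CLI boot plus ICMP/SSH rules looks like the sketch
below (the net id, image and flavor names are placeholders, and it assumes the
instance ends up in the 'default' security group):

quantum net-list
nova boot --flavor m1.tiny --image cirros-0.3.0-x86_64 \
  --nic net-id=<tenant-net-id> --security-groups default test-vm
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0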
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-23 Thread Balamurugan V G
Hi,

In Grizzly, when using quantum and overlapping IPs, does the metadata service
work? This wasn't working in Folsom.

Thanks,
Balu
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #23

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_glance_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/23/
  Project: precise_havana_glance_trunk
  Date of build: Tue, 23 Apr 2013 03:31:36 -0400 (duration: 2 min 49 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: 3 out of the last 5 builds failed. (score: 40)
Changes
  Add a policy handler to control copy-from functionality (by jbresnah)
    edits: glance/tests/unit/v1/test_api.py, glance/api/v1/images.py
Console Output [...truncated 2607 lines...]
  sbuild -d precise-havana -n -A glance_2013.2+git201304230331~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #24

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_glance_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/24/
  Project: precise_havana_glance_trunk
  Date of build: Tue, 23 Apr 2013 04:31:35 -0400 (duration: 2 min 54 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: 4 out of the last 5 builds failed. (score: 20)
Changes
  Eliminate the race when selecting a port for tests. (by jbresnah)
    edits: glance/tests/functional/__init__.py, glance/common/wsgi.py,
    glance/tests/utils.py, glance/tests/functional/test_bin_glance_control.py,
    glance/common/utils.py
Console Output [...truncated 2636 lines...]
  sbuild -d precise-havana -n -A glance_2013.2+git201304230431~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #27

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/27/
  Project: precise_havana_keystone_trunk
  Date of build: Tue, 23 Apr 2013 09:31:37 -0400 (duration: 2 min 33 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  close db migration session (by cyeoh)
    edits: keystone/common/sql/migrate_repo/versions/022_move_legacy_endpoint_id.py
Console Output [...truncated 2535 lines...]
  sbuild -d precise-havana -n -A keystone_2013.2+git201304230931~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #28

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/28/
  Project: precise_havana_keystone_trunk
  Date of build: Tue, 23 Apr 2013 10:01:38 -0400 (duration: 2 min 17 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  fix undefined variable (by bknudson)
    edits: tests/test_backend.py, keystone/trust/backends/kvs.py
Console Output [...truncated 2538 lines...]
  sbuild -d precise-havana -n -A keystone_2013.2+git201304231001~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #51

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/51/
  Project: precise_havana_quantum_trunk
  Date of build: Tue, 23 Apr 2013 13:01:36 -0400 (duration: 2 min 3 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Add string 'quantum'/ version to scope/tag in NVP (by arosen)
    edits: quantum/plugins/nicira/nvplib.py
  Metadata agent: reuse authentication info across eventlet threads (by review)
    edits: quantum/tests/unit/test_metadata_agent.py, quantum/agent/metadata/agent.py
Console Output [...truncated 3080 lines...]
  sbuild -d precise-havana -n -A quantum_2013.2+git201304231301~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #25

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_glance_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/25/
  Project: precise_havana_glance_trunk
  Date of build: Tue, 23 Apr 2013 14:31:36 -0400 (duration: 3 min 10 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Raise 404 while deleting a deleted image (by iccha.sethi)
    edits: glance/tests/unit/v1/test_api.py, glance/api/v1/images.py
Console Output [...truncated 2670 lines...]
  sbuild -d precise-havana -n -A glance_2013.2+git201304231431~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_grizzly_version-drift #1

2013-04-23 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/1/

--
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace 
http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/ws/
[cloud-archive_grizzly_version-drift] $ /bin/bash -xe 
/tmp/hudson3179971985004608954.sh
+ OS_RELEASE=grizzly
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/gather-versions.py
 grizzly
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/ca-versions.py
 -c -r grizzly
---
The following Cloud Archive packages for grizzly
have been superseded newer versions in Ubuntu!

quantum:
Ubuntu: 1:2013.1-0ubuntu2
Cloud Archive staging: 1:2013.1-0ubuntu1~cloud0
python-keystoneclient:
Ubuntu: 1:0.2.3-0ubuntu2
Cloud Archive staging: 1:0.2.3-0ubuntu1~cloud0
horizon:
Ubuntu: 1:2013.1-0ubuntu3
Cloud Archive staging: 1:2013.1-0ubuntu2~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #52

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/52/
  Project: precise_havana_quantum_trunk
  Date of build: Tue, 23 Apr 2013 16:31:37 -0400 (duration: 2 min 1 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  lbaas: check object state before update for pools, members, health monitors (by review)
    edits: quantum/db/loadbalancer/loadbalancer_db.py
Console Output [...truncated 3084 lines...]
  sbuild -d precise-havana -n -A quantum_2013.2+git201304231631~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: cloud-archive_deploy-new #1

2013-04-23 Thread openstack-testing-bot
Title: cloud-archive_deploy-new
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_deploy-new/1/
  Project: cloud-archive_deploy-new
  Date of build: Tue, 23 Apr 2013 16:47:28 -0400 (duration: 86 ms)
  Build cause: Started by user Adam Gandelman
  Built on: master
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  No Changes
Console Output
  gen-pipeline-params: ERROR:root:Imporperly formated job name: cloud-archive_deploy-new.
  Cannot derive pipeline parameters.
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: cloud-archive_deploy-new #2

2013-04-23 Thread openstack-testing-bot
Title: cloud-archive_deploy-new
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_deploy-new/2/
  Project: cloud-archive_deploy-new
  Date of build: Tue, 23 Apr 2013 16:49:13 -0400 (duration: 88 ms)
  Build cause: Started by user Adam Gandelman
  Built on: master
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  No Changes
Console Output
  gen-pipeline-params ran, then: Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: cloud-archive_deploy-new #4

2013-04-23 Thread openstack-testing-bot
Title: cloud-archive_deploy-new
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_deploy-new/4/
  Project: cloud-archive_deploy-new
  Date of build: Tue, 23 Apr 2013 16:51:53 -0400 (duration: 0.16 sec)
  Build cause: Started by user Adam Gandelman
  Built on: master
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  No Changes
Console Output [...truncated 3 lines...]
  Environment dump (OPENSTACK_RELEASE=folsom, INSTALLATION_SOURCE=cloud-archive-staging,
  UBUNTU_RELEASE=precise, TOPOLOGY=all, ...), then gen-pipeline-params stopped at a pdb
  breakpoint on "pipeline.supported_deployment(params) or sys.exit(1)" and raised bdb.BdbQuit
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: cloud-archive_deploy-new #5

2013-04-23 Thread openstack-testing-bot
Title: cloud-archive_deploy-new
General Information
  Build result: BUILD SUCCESS
  Build URL: https://jenkins.qa.ubuntu.com/job/cloud-archive_deploy-new/5/
  Project: cloud-archive_deploy-new
  Date of build: Tue, 23 Apr 2013 16:52:11 -0400 (duration: 0.36 sec)
  Build cause: Started by user Adam Gandelman
  Built on: master
Health Report
  Build stability: 4 out of the last 5 builds failed. (score: 20)
Changes
  No Changes
Console Output [...truncated 33 lines...]
  gen-pipeline-params wrote pipeline_parameters: OPENSTACK_RELEASE=folsom,
  INSTALLATION_SOURCE=cloud-archive-staging, PIPELINE_ID=78b827a5-0565-40cf-9128-f461eff2cfdc,
  OPENSTACK_COMPONENT=manual_trigger, DEPLOY_TOPOLOGY=default, UBUNTU_RELEASE=precise,
  OPENSTACK_BRANCH=manual_trigger
  archive_job saved the test job: jenkins-cloud-archive_deploy-new-5; exit 0
  Email was triggered for: Fixed
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #27

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/27/
  Project: precise_havana_cinder_trunk
  Date of build: Tue, 23 Apr 2013 20:31:37 -0400 (duration: 1 min 7 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  iscsi: Add ability to specify or autodetect block vs fileio (by joseph)
    edits: cinder/volume/iscsi.py, cinder/volume/utils.py,
    etc/cinder/cinder.conf.sample, cinder/tests/test_iscsi.py
Console Output [...truncated 1381 lines...]
  bzr builddeb -S failed while running quilt: Applying patch fix_cinder_dependencies.patch
  patching file tools/pip-requires -- Hunk #1 FAILED at 18; 1 out of 1 hunk FAILED,
  rejects in file tools/pip-requires
  Patch fix_cinder_dependencies.patch does not apply (enforce with -f)
  bzr builddeb returned non-zero exit status 3
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #53

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/53/
  Project: precise_havana_quantum_trunk
  Date of build: Tue, 23 Apr 2013 21:01:36 -0400 (duration: 2 min 5 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Make the admin role configurable (by salv.orlando)
    edits: quantum/tests/unit/test_policy.py, quantum/context.py,
    etc/policy.json, quantum/policy.py
Console Output [...truncated 3087 lines...]
  sbuild -d precise-havana -n -A quantum_2013.2+git201304232101~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #54

2013-04-23 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
  Build result: BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/54/
  Project: precise_havana_quantum_trunk
  Date of build: Tue, 23 Apr 2013 21:31:36 -0400 (duration: 2 min 2 sec)
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Imported Translations from Transifex (by Jenkins)
    edits: quantum/locale/quantum.pot, quantum/locale/ja/LC_MESSAGES/quantum.po,
    quantum/locale/ka_GE/LC_MESSAGES/quantum.po
Console Output [...truncated 3090 lines...]
  sbuild -d precise-havana -n -A quantum_2013.2+git201304232131~precise-0ubuntu1.dsc
  returned non-zero exit status 2 (subprocess.CalledProcessError in build-package, line 139)
  Build step 'Execute shell' marked build as failure
  Email was triggered for: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build failed in Jenkins: cloud-archive_grizzly_version-drift #2

2013-04-23 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/2/

--
Started by timer
Building remotely on pkg-builder in workspace 
http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/ws/
[cloud-archive_grizzly_version-drift] $ /bin/bash -xe 
/tmp/hudson1536620470053323008.sh
+ OS_RELEASE=grizzly
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/gather-versions.py
 grizzly
INFO:root:Querying package list and versions from staging PPA.
INFO:root:Initializing connection to LP...
INFO:root:Querying Ubuntu versions for all packages.
INFO:root:Scraping Packages list for CA pocket: proposed
INFO:root:Scraping Packages list for CA pocket: updates
+ 
/var/lib/jenkins/tools/ubuntu-reports/server/cloud-archive/version-tracker/ca-versions.py
 -c -r grizzly
---
The following Cloud Archive packages for grizzly
have been superseded newer versions in Ubuntu!

quantum:
Ubuntu: 1:2013.1-0ubuntu2
Cloud Archive staging: 1:2013.1-0ubuntu1~cloud0
python-keystoneclient:
Ubuntu: 1:0.2.3-0ubuntu2
Cloud Archive staging: 1:0.2.3-0ubuntu1~cloud0
horizon:
Ubuntu: 1:2013.1-0ubuntu3
Cloud Archive staging: 1:2013.1-0ubuntu2~cloud0

--
Build step 'Execute shell' marked build as failure

-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp