Re: [Openstack-operators] [openstack-dev] [all][log] Ideas to log request-ids in cross-projects

2016-03-15 Thread Kekane, Abhishek
-Original Message- From: Doug Hellmann [mailto:d...@doughellmann.com] Sent: 15 March 2016 19:29 To: openstack-operators Subject: Re: [Openstack-operators] [openstack-dev] [all][log] Ideas to log request-ids in cross-projects Excerpts from Kekane, Abhishek's message of 2016-03-15 08:28:

Re: [Openstack-operators] [Scale][Performance] / compute_nodes ratio experience

2016-03-15 Thread Gustavo Randich
We recently had a power outage, and perhaps one of the scenarios of controller capacity planning is starting all of the compute nodes at once or in large batches (when power was restored). We painfully learned about our nova-conductor being low on workers/cores, but still we doubted whether it was

[Openstack-operators] Unable to change environment Variables for demo or admin tenant during runtime

2016-03-15 Thread Umar Yousaf
Unable to set environment variables like *OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_REGION_NAME, OS_AUTH_URL* at runtime via, for example, os.system("source openrc demo demo"), in order to run commands for the demo project like ceilometer event-list. Actually I am unable to get demo's ceilomet
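A note for anyone hitting the same problem: `os.system("source openrc demo demo")` cannot work, because `source` exports the variables only inside a short-lived child shell, which exits immediately. The usual fix is to set the variables in `os.environ` directly; the process and anything it spawns then see them. A minimal sketch (all credential values below are placeholders, not real settings):

```python
import os
import subprocess

# Setting os.environ updates this process's environment; a
# "source" inside os.system() never reaches the Python process.
os.environ["OS_USERNAME"] = "demo"
os.environ["OS_PASSWORD"] = "secret"
os.environ["OS_TENANT_NAME"] = "demo"
os.environ["OS_REGION_NAME"] = "RegionOne"
os.environ["OS_AUTH_URL"] = "http://controller:5000/v2.0"

# Child processes (e.g. a ceilometer CLI invocation) inherit the
# updated environment automatically:
out = subprocess.run(["env"], capture_output=True, text=True).stdout
```

Alternatively, parse the openrc file in Python and copy its exports into `os.environ` before calling the CLI.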

Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread Kris G. Lindgren
Yes. Nova-conductor is rpc based so you can add as many servers as you need, and they will process messages from the conductor queue on rabbit, without any problems. I would suggest also moving rabbitmq off on its own server as well. As rabbitmq also chews up a significant amount of CPU as we
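For reference, the conductor worker count mentioned in this thread is controlled in nova.conf. An illustrative fragment for a dedicated conductor host (the value shown is an example, not a recommendation; if `workers` is unset, nova defaults to the host's CPU count):

```ini
# Illustrative nova.conf fragment on a dedicated conductor host.
[conductor]
workers = 16
```

Because conductors only consume from the RPC queue, the same fragment can be deployed on as many hosts as needed.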

Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread Gustavo Randich
We are melting right now (rpc timeouts, rabbitmq connection timeouts, high load on controller, etc.): we are running 375 compute nodes, and only one controller (on vmware) on which we run rabbitmq + nova-conductor with 28 workers. So I can seamlessly add more controller nodes with more nova-conduct

Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread Gustavo Randich
PS: 32 cores On Tue, Mar 15, 2016 at 12:37 PM, Gustavo Randich wrote: > We are melting right now (rpc timeouts, rabbitmq connection timeouts, high > load on controller, etc.): we are running 375 compute nodes, and only one > controller (on vmware) on which we run rabbitmq + nova-conductor with

[Openstack-operators] [keystone] mitaka release recap

2016-03-15 Thread Steve Martinelli
Before beginning, I'd like to thank all members of the Keystone community. The Mitaka release would not be possible without the many dedicated contributors to the Keystone project. This was a great development cycle and I’m very happy with the finished product. Here’s the Keystone Mitaka release a

[Openstack-operators] [swift] swift ops runbook

2016-03-15 Thread John Dickinson
I'd like to share the Swift ops runbook that was recently added to Swift's upstream documentation. http://docs.openstack.org/developer/swift/ops_runbook/index.html This was originally contributed by operators of HPE's public Swift cluster and then cleaned up and landed two weeks ago at the Swif

Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread Kris G. Lindgren
We run cells, but when we reached about 250 hv in a cell we needed to add another cell API node (went from 2 to 3) to help with the CPU load caused by nova-conductor. Nova-conductor was/is constantly crushing the CPU on those servers.

Re: [Openstack-operators] nova-conductor scale out

2016-03-15 Thread David Medberry
How many compute nodes do you have (that is triggering your controller node limitations)? We run nova-conductor on multiple control nodes. Each control node runs "N" conductors where N is basically the HyperThreaded CPU count. On Tue, Mar 15, 2016 at 8:44 AM, Gustavo Randich wrote: > Hi, > > Si
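The "N conductors per control node" sizing described above can be scripted; a small sketch, assuming a Linux control node where `nproc` reports the logical (hyperthreaded) CPU count:

```shell
# Size nova-conductor workers to the logical CPU count, as the
# post above suggests. nproc counts logical CPUs on Linux.
WORKERS=$(nproc)
echo "nova-conductor workers: ${WORKERS}"
```

The resulting number would go into the conductor workers setting in nova.conf on each control node.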

[Openstack-operators] nova-conductor scale out

2016-03-15 Thread Gustavo Randich
Hi, Simple question: can I deploy nova-conductor across several servers? (Icehouse) Because we are reaching a limit in our controller node

Re: [Openstack-operators] [openstack-dev] [all][log] Ideas to log request-ids in cross-projects

2016-03-15 Thread Doug Hellmann
Excerpts from Kekane, Abhishek's message of 2016-03-15 08:28:13 +: > Excerpts from Kekane, Abhishek's message of 2016-03-01 06:17:15 +: > > > Hi Devs, > > > > > > Considering return request-id to caller specs [1] is implemented in > > > python-*client, I would like to begin discussion o

Re: [Openstack-operators] [neutron] Liberty - Do Neutron Routers actually work?

2016-03-15 Thread Akihiro Motoki
When you are using ML2 with the OVS driver and the (in-tree) L3 router plugin, and if br-ex is used to connect to the external network, the gateway port of the router can show as DOWN even when it actually works. It seems you pinged from the root netns on your network node. In a normal setup with ML2 OVS driver and t
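For anyone reproducing this check: the router's interfaces live inside a per-router `qrouter-` network namespace on the network node, not in the root namespace, so connectivity tests should be run from inside that namespace. A hedged sketch of the commands (the router UUID and instance IP are placeholders for your own deployment):

```shell
# List the namespaces on the network node, then ping an instance's
# fixed IP from inside the router's namespace.
# ROUTER_UUID and 10.0.0.5 are placeholders.
sudo ip netns list
sudo ip netns exec qrouter-ROUTER_UUID ping -c 3 10.0.0.5
```

A ping from the root namespace bypasses the router's routing and NAT rules entirely, which is why it can fail while the router itself is healthy.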

Re: [Openstack-operators] [openstack-dev] [all][log] Ideas to log request-ids in cross-projects

2016-03-15 Thread Kekane, Abhishek
Excerpts from Kekane, Abhishek's message of 2016-03-01 06:17:15 +: > Hi Devs, > > Considering return request-id to caller specs [1] is implemented in > python-*client, I would like to begin discussion on how these request-ids > will be logged in cross-projects. In logging work-group meetin

Re: [Openstack-operators] [neutron] Liberty - Do Neutron Routers actually work?

2016-03-15 Thread Salvatore Orlando
They might not be perfect, but in my limited experience they are able to forward traffic and do SNAT/DNAT without too many issues. If your deployment is failing to properly configure routing, you should be getting errors in the l3 agent logs - sharing them might help. Trying to ping the internal