-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com]
Sent: 15 March 2016 19:29
To: openstack-operators
Subject: Re: [Openstack-operators] [openstack-dev] [all][log] Ideas to log
request-ids in cross-projects
Excerpts from Kekane, Abhishek's message of 2016-03-15 08:28:13 +0000:
We recently had a power outage, and perhaps one of the scenarios to include
in controller capacity planning is starting all of the compute nodes at once
or in large batches (when power is restored).
We painfully learned that our nova-conductor was low on workers/cores,
but still we doubted whether it was
Unable to set environment variables like *OS_USERNAME, OS_PASSWORD,
OS_TENANT_NAME, OS_REGION_NAME, OS_AUTH_URL* at runtime through, for
example, os.system("source openrc demo demo"), in order to run commands for
the demo project like ceilometer event-list. Actually I am unable to get
demo's ceilometer event-list.
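os.system() runs that "source" in a short-lived child shell, so the exported
variables die with that shell and never reach the parent Python process. A
minimal sketch of the usual workaround is to put the credentials into
os.environ, which child processes then inherit (all values below are
placeholders):

    import os
    import subprocess

    # Set the credentials in the current process; os.system("source openrc")
    # only affects the child shell it spawns, never this interpreter.
    os.environ.update({
        "OS_USERNAME": "demo",
        "OS_PASSWORD": "secret",                        # placeholder
        "OS_TENANT_NAME": "demo",
        "OS_REGION_NAME": "RegionOne",                  # placeholder
        "OS_AUTH_URL": "http://controller:5000/v2.0",   # placeholder
    })

    # Child processes inherit os.environ, so this now runs as the demo tenant.
    subprocess.call(["ceilometer", "event-list"])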
Yes. Nova-conductor is RPC based, so you can add as many servers as you need
and they will process messages from the conductor queue on RabbitMQ without
any problems. I would also suggest moving RabbitMQ onto its own server,
as RabbitMQ chews up a significant amount of CPU as we
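For illustration only, a rough sketch of what that looks like in practice
(Icehouse-era option names; hostnames, credentials and worker counts are
placeholders): each extra conductor host simply runs nova-conductor with a
nova.conf pointing at the same shared RabbitMQ.

    # /etc/nova/nova.conf on each additional conductor host (sketch)
    [DEFAULT]
    rpc_backend = rabbit
    rabbit_host = rabbit.example.com    # the shared RabbitMQ server
    rabbit_userid = nova                # placeholder
    rabbit_password = secret            # placeholder

    [conductor]
    use_local = false
    workers = 28                        # roughly the core count of the host

Conductors are stateless, so nothing else has to change; they all just
consume from the same conductor queue.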
We are melting right now (rpc timeouts, rabbitmq connection timeouts, high
load on controller, etc.): we are running 375 compute nodes, and only one
controller (on vmware) on which we run rabbitmq + nova-conductor with 28
workers
So I can seamlessly add more controller nodes with more nova-conductors?
PS: 32 cores
On Tue, Mar 15, 2016 at 12:37 PM, Gustavo Randich wrote:
> We are melting right now (rpc timeouts, rabbitmq connection timeouts, high
> load on controller, etc.): we are running 375 compute nodes, and only one
> controller (on vmware) on which we run rabbitmq + nova-conductor with
Before beginning, I'd like to thank all members of the Keystone community.
The Mitaka release would not be possible without the many dedicated
contributors to the Keystone project. This was a great development cycle
and I’m very happy with the finished product. Here’s the Keystone Mitaka
release a
I'd like to share the Swift ops runbook that was recently added to Swift's
upstream documentation.
http://docs.openstack.org/developer/swift/ops_runbook/index.html
This was originally contributed by operators of HPE's public Swift cluster and
then cleaned up and landed two weeks ago at the Swif
We run cells, but when we reached about 250 hypervisors in a cell we needed
to add another cell API node (went from 2 to 3) to help with the CPU load
caused by nova-conductor. Nova-conductor was, and still is, constantly
crushing the CPU on those servers.
How many compute nodes do you have (that is triggering your controller node
limitations)?
We run nova-conductor on multiple control nodes. Each control node runs "N"
conductors where N is basically the HyperThreaded CPU count.
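To make the rule of thumb concrete (a trivial sketch, nothing nova-specific
here), "N" is just what the OS reports as logical CPUs:

    import multiprocessing

    # Logical (HyperThreaded) CPU count: the "N" nova-conductor workers
    # we run per control node.
    n_workers = multiprocessing.cpu_count()
    print(n_workers)

If I remember correctly, leaving [conductor]/workers unset makes recent nova
releases default to roughly this value anyway, but double-check that against
your release.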
On Tue, Mar 15, 2016 at 8:44 AM, Gustavo Randich wrote:
> Hi,
>
> Simple question: can I deploy nova-conductor across several servers?
Hi,
Simple question: can I deploy nova-conductor across several servers?
(Icehouse)
Because we are reaching a limit in our controller node
Excerpts from Kekane, Abhishek's message of 2016-03-15 08:28:13 +0000:
> Excerpts from Kekane, Abhishek's message of 2016-03-01 06:17:15 +0000:
> > Hi Devs,
> >
> > Considering return request-id to caller specs [1] is implemented in
> > python-*client, I would like to begin discussion on how these request-ids
> > will be logged in cross-projects.
When you are using ML2 with the OVS driver and the (in-tree) L3 router plugin,
and br-ex is used to connect to the external network, the gateway port of the
router can show as DOWN even though it works.
It seems you pinged from the root netns on your network node.
In a normal setup with the ML2 OVS driver and t
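If it helps, the usual way to test from inside the router rather than from
the root netns is something like this, run on the node hosting the l3 agent
(the router UUID and target IP are placeholders):

    # list the namespaces, then ping the external gateway from inside the router
    ip netns list
    ip netns exec qrouter-<router-uuid> ping -c 3 <external-gateway-ip>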
Excerpts from Kekane, Abhishek's message of 2016-03-01 06:17:15 +0000:
> Hi Devs,
>
> Considering return request-id to caller specs [1] is implemented in
> python-*client, I would like to begin discussion on how these request-ids
> will be logged in cross-projects. In logging work-group meeting
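To make the discussion concrete, here is a minimal sketch of what a consumer
can already do with the client-side piece of [1]. I'm assuming the
request_ids attribute that the spec adds to objects returned by
python-novaclient (other clients are analogous), so treat the exact names as
illustrative; the auth values are placeholders.

    import logging

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    LOG = logging.getLogger(__name__)

    auth = v3.Password(auth_url="http://controller:5000/v3",   # placeholder
                       username="demo", password="secret",
                       project_name="demo",
                       user_domain_name="Default",
                       project_domain_name="Default")
    nova = client.Client("2", session=session.Session(auth=auth))

    servers = nova.servers.list()

    # The returned wrapper carries the request id(s) of the underlying REST
    # call(s); logging them lets operators correlate this caller-side log
    # line with the x-openstack-request-id / x-compute-request-id entries
    # in nova's own logs.
    LOG.info("request ids for servers.list: %s", servers.request_ids)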
They might not be perfect, but in my limited experience they are able to
forward traffic and do SNAT/DNAT without too many issues.
If your deployment is failing to properly configure routing, you should be
getting errors in the l3 agent logs - sharing them might help.
Trying to ping the internal
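For example (log paths vary by distro; this is the usual Ubuntu/RDO location,
so adjust as needed):

    grep -iE "error|trace" /var/log/neutron/l3-agent.log | tail -n 50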