Hello,
When using the following site to create an advanced overcloud, I find that
keystone sometimes works and sometimes does not.
https://repos.fedorapeople.org/repos/openstack-m/docs/master/advanced_deployment/advanced_deployment.html
$ source overcloudrc
$ openstack-status
== No
Hello, stevebaker.
I think there is a problem with the template file at the following
site.
https://github.com/openstack/heat-templates/blob/master/hot/server_console.yaml
When using the template file to create a stack, I can't get the novnc console URL:
-
$ heat output-show sta
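In case it helps others hitting the same thing: the stack's outputs can be listed first to confirm which output name the template actually exposes. A sketch; "my-stack" and the output name below are placeholders, not taken from the template:

```console
$ heat output-list my-stack
$ heat output-show my-stack console_url
```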
Hi All
I have been trying to set up instack with RDO Manager. I have successfully
installed the undercloud and am moving on to the overcloud. Now, when I run
"instack-deploy-overcloud", I get the following error:
+ OVERCLOUD_YAML_PATH=overcloud.yaml
+ heat stack-create -f overcloud.yaml -P
AdminToken=b003d
Hello all,
How can I collect samples of ‘http.request’ and ‘http.response’ from the
Ceilometer service?
So far I know that:
1. process_response() and process_request() send the ‘http.response’ and
‘http.request’ notifications in pycadf/middleware/notifier.py.
2. proc
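If it helps, those notifications only appear when the notifier middleware is actually wired into the service's WSGI pipeline. A rough paste.ini sketch; the filter name here is made up and the factory path is an assumption based on the module path above, so please verify both against your pycadf version:

```ini
# api-paste.ini of the service being metered (illustrative only).
# "request-notifier" is a hypothetical filter name; check that
# pycadf.middleware.notifier exposes a paste factory in your release.
[filter:request-notifier]
paste.filter_factory = pycadf.middleware.notifier:RequestNotifier.factory

[pipeline:main]
# Place the filter in front of the app so each request and response
# emits an http.request / http.response notification.
pipeline = request-notifier apiapp
```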
From: Chris Dent [mailto:chd...@redhat.com]
Sent: 11 March 2015 21:08
To: Pan, Fengyun/潘 风云
Cc: Vijaya Bhaskar; openstack
Subject: Re: Re: [Openstack] Ceilometer high availability in active-active
On Wed, 11 Mar 2015, Pan, Fengyun wrote:
> We know that:
> backend_url',
> default=No
oordination
ToozConnectionError: Error connecting to 193.168.196.246:6379. timed out.
2015-03-18 18:48:36.953 16236 TRACE ceilometer.coordination
Why did it time out?
Is there something wrong with my configuration?
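The traceback says the coordination backend (Redis at 193.168.196.246:6379) could not be reached before the timeout, so the first thing to check is that the URL in ceilometer.conf points at a Redis instance that is actually running and reachable from the agent host. A minimal sketch, assuming Redis is the intended backend:

```ini
# ceilometer.conf on the host running the agents. Before starting them,
# verify connectivity by hand, e.g. "redis-cli -h 193.168.196.246 ping".
[coordination]
backend_url = redis://193.168.196.246:6379
```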
-Original Message-
From: Chris Dent [mailto:chd...@redhat.com]
ise err
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup
ImportError: No module named concurrent
2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup
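For what it's worth, that ImportError is usually an environment problem rather than a Ceilometer bug: the "concurrent" package is the standard-library concurrent.futures namespace on Python 3, while on Python 2 (which Ceilometer ran on at the time) it comes from the separate "futures" backport. A minimal check that the module is present and usable:

```python
# On Python 2 this import fails unless the backport is installed:
#     pip install futures
# On Python 3 concurrent.futures ships with the standard library.
from concurrent import futures

# Quick sanity check that the executor actually works.
with futures.ThreadPoolExecutor(max_workers=1) as pool:
    result = pool.submit(lambda: 1 + 1).result()
print(result)  # prints 2
```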
-Original Message-
From: Chris Dent [mailto:chd...@redhat
We know that:
backend_url',
default=None,
help='The backend URL to use for distributed coordination. If '
'left empty, per-deployment central agent and per-host '
'compute agent won\'t do workload '
'partition
High availability for the DHCP agent just requires a change in neutron.conf:
dhcp_agents_per_network = X
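The change above can be sketched as follows; the value 2 is only an example, the right number depends on how much redundancy the deployment needs:

```ini
# neutron.conf on the node running neutron-server
[DEFAULT]
# Schedule every network onto two DHCP agents so one agent can fail
# without losing DHCP service for that network.
dhcp_agents_per_network = 2
```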
But for high availability of the l3 agent, keepalived VRRP/conntrackd + DVR
can provide it.
High availability features will be implemented as extensions or drivers. A first
extension/driver will be based