> Thanks Robert. I guess that might suggest some configuration issue on
> our end then. I'm curious, do you have any specific settings in terms of
> pagination limits in your nova and neutron configuration files?
I do not remember having to set any in the past.
Do you proxy requests through e.g. a
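For reference, the pagination caps being asked about live in each service's configuration file. A minimal fragment (option names as of Icehouse; verify against your release's sample configs):

```ini
# nova.conf: caps how many items a single API request may return
[DEFAULT]
osapi_max_limit = 1000

# neutron.conf: -1 disables the pagination limit
[DEFAULT]
pagination_max_limit = -1
```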
Hi Folks,
The DVR meeting on Wednesday the 19th, 2014 will be cancelled.
If anything urgent comes up, we can discuss it in the L3 meeting.
Thanks
swami
Sent from my iPad
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst
I am a beginner at building an OpenStack cloud. I am trying to build a
private cloud solely for our university. In my design I use three
machines, like this:
1. Main Server that holds the codes of my application and run on it.
2. Key Storage Machine that keeps the keys to decrypt.
3. Stor
So now if you run the ext-list command, is 'router' still missing?
On Tue, Nov 18, 2014 at 4:35 PM, Amit Anand wrote:
> Ok well the neutron server is running on my controller node. Here is the
> log from a restart I just did:
>
> 2014-11-18 19:32:23.139 10165 INFO neutron.common.config [-] Loggi
Ok well the neutron server is running on my controller node. Here is the
log from a restart I just did:
2014-11-18 19:32:23.139 10165 INFO neutron.common.config [-] Logging
enabled!
2014-11-18 19:32:23.143 10165 INFO neutron.common.config [-] Config paste
file: /usr/share/neutron/api-paste.ini
201
This config goes on whichever nodes are running the neutron server process.
Can you include a neutron server.log file that begins from a server process
restart (service neutron-server restart)?
On Tue, Nov 18, 2014 at 3:33 PM, Amit Anand wrote:
> Hi Kevin,
>
> Thanks but I have service_plugins =
Hi Kevin,
Thanks, but I already have service_plugins = router in
/etc/neutron/neutron.conf on all three nodes
On Tue, Nov 18, 2014 at 5:52 PM, Kevin Benton wrote:
> The issue isn't with the configuration of the L3 agent. It's loading the
> l3 plugin on the Neutron server.
>
> In /etc/neutron/neutron
Ethan,
If you are going back and setting this up again you'll have to run the same
steps you would with a normal keystone configuration. You'll need to make
sure that you have applied the role Admin to the cloudadmin user. Then
you'll need to make sure it is associated with the correct tenant agai
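For anyone retracing these steps, under the keystone v2 CLI of that era the role assignment looks roughly like this (the tenant name "admin" is an assumption; substitute your own):

```
# Grant the Admin role to the cloudadmin user in the chosen tenant
keystone user-role-add --user cloudadmin --role Admin --tenant admin

# Confirm the user/role/tenant association
keystone user-role-list --user cloudadmin --tenant admin
```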
The issue isn't with the configuration of the L3 agent. It's with loading
the L3 plugin on the Neutron server.
In /etc/neutron/neutron.conf you need to enable the router service
plugin.[1]
service_plugins = router
https://github.com/openstack/neutron/blob/c2b1594ad878b1897468210ccb89fc0d0c4146c4/etc/n
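Concretely, the change is a one-line edit; a minimal fragment (if service_plugins already lists other plugins, append router to the comma-separated list rather than replacing them):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
# Load the L3 router service plugin so the 'router' extension
# and the router resource become available
service_plugins = router
```

After editing, restart the Neutron server process so the plugin is loaded.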
After difficulty and downtime spent with Icehouse we rolled back to
Havana as we had a once-working config that was integrated with our Active
Directory server.
Everything was rebuilt, and things work fine with the exception of LDAP,
again.
I'm fairly confident the system is passing the user
Hi Salvatore,
Thanks for emailing! From what I can see in the guide, I'm only
supposed to edit the l3_agent.ini file on the network node; there is
nothing in the guide about editing l3_agent.ini on the controller. I did
see this as I continued to troubleshoot after my original email on the
control
All,
So I've been following the Juno guide and have now arrived at the point
where I need to create the demo-router, but when I run the command this is
what I get:
[root@controller ~]# source demo-openrc.sh
[root@controller ~]# neutron router-create demo-router
Not Found (HTTP 404) (Request-ID: req-0
I think you do not have an L3 plugin configured in your neutron.conf;
therefore the l3 extension is not being loaded and the router resource does
not exist.
If the L3 plugin is not there, just add it to service_plugins.
If the diagnosis is correct, can you post this question to ask.openstack.org
(i
On 2014-11-18 16:06, Robert van Leeuwen wrote:
> I do not see this error when running this command on our production environment.
> As a test I also spawned 100 VMs in dev on a single hypervisor, also with no issues.
> We are running Neutron with the ML2 plugin and Open vSwitch.
Thanks Robert. I guess that might sug
> I'm struggling with what seems like a nova problem on an Icehouse RDO
> deployment. As an admin, I wanted to list all instances, but nova gives
> me an error:
>
> # nova list --all-tenants
> ERROR: The server has either erred or is incapable of performing the
Running Icehouse RDO here on SL6.
I
Hello,
I'm struggling with what seems like a nova problem on an Icehouse RDO
deployment. As an admin, I wanted to list all instances, but nova gives
me an error:
# nova list --all-tenants
ERROR: The server has either erred or is incapable of performing the
requested operation. (HTTP 500) (Re
Hi,
I have a multinode setup of OpenStack Icehouse on SLES 11 SP3 as the host OS.
I am having problems attaching Cinder volumes to my instances.
When I check my /var/log/messages file, it is continuously flooded
with the messages pasted below:
Nov 18 07:06:27 network-node sudo: cinder : TTY
Hello Eoghan,
Thanks for the update.
1. OpenStack document link:
http://docs.openstack.org/trunk/install-guide/install/apt/content/ceilometer-controller-install.html
2. Ceilometer packages installed: apt-get install ceilometer-api
ceilometer-collector ceilometer-agent-central ceilometer-a
I failed to mention that this was for volume-backed images; there is a
genuine issue, and we're making progress in a bug report:
https://bugs.launchpad.net/bugs/1392773
--
Darren
On 17 November 2014 22:48, Gangur, Hrushikesh (R & D HP Cloud)
wrote:
> You need to use block migration. Default is se
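For completeness, block migration is selected with a flag on the nova live-migration command; a sketch (instance and host names are placeholders):

```
# --block-migrate copies the instance's local disks to the target,
# instead of assuming shared storage between the two hosts
nova live-migration --block-migrate <instance-uuid> <target-host>
```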
Hi,
I have configured Ceilometer on Juno running on Ubuntu 14.04.
I would like to know whether the following Nova meters are supported
in Juno, as I cannot see them in the ceilometer meter-list output.
The meters are as follows:
disk.device.read.requests.rate
disk.device.write.requests.rate
disk.device.
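In Juno these .rate meters are not polled directly; they are derived from the cumulative disk.device.* counters by a rate_of_change transformer in /etc/ceilometer/pipeline.yaml. A minimal sketch of such a pipeline (source and sink names are illustrative; compare against the pipeline.yaml shipped with your packages):

```yaml
sources:
    - name: disk_source
      interval: 600
      meters:
          - "disk.device.read.requests"
          - "disk.device.write.requests"
      sinks:
          - disk_sink
sinks:
    - name: disk_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                source:
                    map_from:
                        name: "disk\\.device\\.(read|write)\\.requests"
                target:
                    map_to:
                        name: "disk.device.\\1.requests.rate"
                    type: "gauge"
      publishers:
          - notifier://
```

If the transformer entry is missing from the pipeline, the .rate meters will never appear in ceilometer meter-list.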
Hello,
I'm using the Havana release of OpenStack with two neutron-server nodes.
Sometimes when I spawn many instances (8, 10, or more) at once with the
"--num-instances X" parameter, some of them get more than one IP
assigned (each on a different Neutron port). Do you know why this can
happen