use_namespaces is set to true, iproute is up to date on 12.04 LTS.
This is what I see when I run tcpdump:
-
root@havana:/home/localadmin# tcpdump -i qvo6157a9aa-76
tcpdump: WARNING: qvo6157a9aa-76: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
liste
On 11/09/2013 12:18 AM, Paras pradhan wrote:
dnsmasq is running, ip netns doesn't return anything.
Empty output from "ip netns" can mean either namespaces are not in use
(run grep use_namespaces /etc/neutron/l3_agent.ini to find out) or the
wrong iproute package is installed.
Could you please also check what
dnsmasq is running, ip netns doesn't return anything.
-Paras.
On Fri, Nov 8, 2013 at 4:07 PM, Rami Vaknin wrote:
> On 11/08/2013 11:35 PM, Paras pradhan wrote:
>
> Hi,
>
> I got an instance up but it does not get an IP address. neutron
> agent-list shows the DHCP agent as happy. While booting I see c
On 11/08/2013 11:35 PM, Paras pradhan wrote:
Hi,
I got an instance up but it does not get an IP address. neutron
agent-list shows the DHCP agent as happy. While booting I see cirros stuck at
sending discover... multiple times. The dnsmasq process is running. How do
I debug?
I would start with:
* ps -ef
Hi,
I got an instance up but it does not get an IP address. neutron agent-list
shows the DHCP agent as happy. While booting I see cirros stuck at sending
discover... multiple times. The dnsmasq process is running. How do I debug?
Thanks
___
Mailing list: http://lis
Thanks a lot again.
Best regards.
2013/11/8 Razique Mahroua
> Oh yah true!
> not sure “conductors” exist yet for Cinder, so in the meantime every node
> needs direct access to the database
> glad to hear it’s working :)
>
> On 08 Nov 2013, at 08:53, Guilherme Russi
> wrote:
>
> Hello again R
Hi all,
I am using devstack and sometimes when I do a ./unstack.sh and then
./stack.sh,
there is an "Unable to locate Volume Group stack-volumes" ERROR as follows.
Does anyone know what causes the error and how to solve the issue? Before
I do the ./unstack.sh and redo ./stack.sh, everything looks
Hello Dheerendra,
Check that you set everything according to the guide.
Also could you show your logs for the time when you try to access the
Savanna tab? If you use dev install, the logs should drop right in your
console, otherwise you can find them among apache logs.
Dmitry
2013/11/8 Dheeren
Oh yah true!
not sure “conductors” exist yet for Cinder, so in the meantime every node needs
direct access to the database
glad to hear it’s working :)
On 08 Nov 2013, at 08:53, Guilherme Russi wrote:
> Hello again Razique, I've found the problem, I need to add the grants on the
> mysql to m
Is there a way to ensure that floating IPs, even when released from a
project, can not be used by other projects?
What I mean is can you restrict which projects individual floating IPs
may be assigned to?
Or is the current best practice to assign them in order to projects
ahead of time and keep
A snapshot and backup are basically analogous terms in trove.
A backup snapshot[1] is a copy of the database data that is stored on the
volume or locally on the instance and backed up to swift at a point in time.
Trove stores the location of the backup so that a user can restore a
database to a ne
Hello again Razique, I've found the problem, I needed to add the grants in
MySQL for my other IP. Now it's working really well :D
I've found this link too if someone needs:
http://docs.openstack.org/admin-guide-cloud/content//managing-volumes.html
Thank you so much, and if you need me just let
Hi Thomas,
I didn't see a similar error message from the compute node, where the OVS
agent/libvirt/Nova-compute daemons are running.
For the network host, I did several things in the last couple of days to
address the issue:
1) switch to a new physical machine for the network host, the VM network
Hello Razique, I got a couple of doubts, do you know if I need to do
something else that is not on the link you sent me? I'm asking because I
followed the configuration but it's not working, here is what I get: I've
installed the cinder-volume at the second computer that have the HD, and
I've cha
Oh great! I'll try here and send you the results.
Thanks a lot :)
2013/11/8 Razique Mahroua
> If I’m not mistaken, you only need to install the “cinder-volume” service
> that will update its status to your main node
> :)
>
> On 08 Nov 2013, at 05:34, Guilherme Russi
> wrote:
>
> Great! I was r
sure :)
On 08 Nov 2013, at 05:39, Guilherme Russi wrote:
> Oh great! I'll try here and send you the results.
>
> Thanks a lot :)
>
>
> 2013/11/8 Razique Mahroua
> If I’m not mistaken, you only need to install the “cinder-volume” service
> that will update its status to your main node
> :)
>
If I’m not mistaken, you only need to install the “cinder-volume” service that
will update its status to your main node
:)
On 08 Nov 2013, at 05:34, Guilherme Russi wrote:
> Great! I was reading the link and I have one question, do I need to install
> cinder at the other computer too?
>
> Than
Great! I was reading the link and I have one question, do I need to install
cinder at the other computer too?
Thanks :)
2013/11/8 Razique Mahroua
> Ok in that case, with Grizzly you can use the “multi-backends” feature:
> https://wiki.openstack.org/wiki/Cinder-multi-backend
>
> and that should
ok !
what is your actual Cinder backend? Is it a hard disk, a SAN, a network volume,
etc…
On 08 Nov 2013, at 05:20, Guilherme Russi wrote:
> Hi Razique, thank you for answering, I want to expand my cinder storage, is
> it the block storage? I'll use the storage to allow VMs to have more hard
Ok in that case, with Grizzly you can use the “multi-backends” feature:
https://wiki.openstack.org/wiki/Cinder-multi-backend
and that should do it :)
On 08 Nov 2013, at 05:29, Guilherme Russi wrote:
> It is a hard disk, my scenario is one Controller (where I have my storage
> cinder and my net
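For reference, the multi-backend setup described on that wiki page boils down to a cinder.conf along these lines. The backend section names and the second volume group name here are illustrative placeholders, not required values:

```ini
[DEFAULT]
# Comma-separated list of backend section names defined below.
enabled_backends = lvm-controller,lvm-node2

[lvm-controller]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
volume_backend_name = LVM_iSCSI

[lvm-node2]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes-2
volume_backend_name = LVM_iSCSI_2
```

A volume type can then be mapped to a backend via its volume_backend_name extra spec, so requests land on the intended storage.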
It is a hard disk, my scenario is one Controller (where I have my storage
cinder and my network quantum) and four compute nodes.
2013/11/8 Razique Mahroua
> ok !
> what is your actual Cinder backend? Is it a hard disk, a SAN, a network
> volume, etc…
>
> On 08 Nov 2013, at 05:20, Guilherme Russ
Hi Guilherme !
Which storage do you precisely want to expand?
Regards,
Razique
On 08 Nov 2013, at 04:52, Guilherme Russi wrote:
> Hello guys, I have a Grizzly deployment running fine with 5 nodes, and I want
> to add more storage on it. My question is, can I install a new HD on another
> com
Hi Razique, thank you for answering, I want to expand my cinder storage, is
it the block storage? I'll use the storage to allow VMs to have more hard
disk space.
Regards.
Guilherme.
2013/11/8 Razique Mahroua
> Hi Guilherme !
> Which storage do you precisely want to expand?
>
> Regards,
> Raz
Found how to fix it, if anyone needs like me:
https://ask.openstack.org/en/question/130/why-do-i-get-no-portal-found-error-while-attaching-cinder-volume-to-vm/
2013/11/5 Guilherme Russi
> Hello guys,
>
> Last saturday I needed to turn off my controller node because it was
> needed to perform
Hello guys, I have a Grizzly deployment running fine with 5 nodes, and I
want to add more storage on it. My question is, can I install a new HD on
another computer that's not the controller and link this HD with my cinder
so that it can be a storage too?
The computer I will install my new HD is at the
Hi
Installed and integrated Savanna on Havana. When I log in to the dashboard,
the 'savanna' tab is shown. When I click on the 'savanna' tab it throws the
following error.
Request URL: http://192.168.72.100/horizon/savanna/
Django Version: 1.5.4
Exception Type: AuthorizationFailure
Exception Value: Authori
Sam,
172.16.2.0/24 is part of the 172.16.0.0/16 network. I guess that is why Neutron
does not allow creating that second network.
Dmitry
2013/11/8 Sam Lee
> Thank you, all of you. I will update the db directly for now.
>
> @Harshad, I have tried to create a new subnet, but there is something
> wrong w
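The overlap Dmitry describes can be checked mechanically; Python's stdlib ipaddress module (the subnet_of method needs Python 3.7+) reports whether one subnet is contained in another, which mirrors the check Neutron performs:

```python
import ipaddress

# 172.16.2.0/24 sits entirely inside 172.16.0.0/16, which is why Neutron
# rejects it as a second, overlapping subnet.
net_a = ipaddress.ip_network('172.16.0.0/16')
net_b = ipaddress.ip_network('172.16.2.0/24')

print(net_a.overlaps(net_b))   # True
print(net_b.subnet_of(net_a))  # True: the /24 is contained in the /16
```

Picking the second subnet from a disjoint range (for example 172.17.0.0/24) avoids the conflict without editing the database.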
Hi all,
I would like to create my own scheduler_weight_classes for
Filter Scheduler host weighting. However, I did not find any documentation
online that describes how to create customized scheduler_weight_classes.
The only document is
http://docs.openstack.org/developer/nova/devref/filter_schedu
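As a rough sketch of the shape such a class takes: in Nova the base class is nova.scheduler.weights.BaseHostWeigher, whose _weigh_object method returns a raw weight per host, and scheduler_weight_classes in nova.conf points at your class (a hypothetical path like mypkg.weights.RamWeigher). The standalone code below only mimics that interface so it runs without Nova installed; it is not the actual Nova API:

```python
# Standalone sketch of the Filter Scheduler weigher interface.  The class
# and method names imitate nova.scheduler.weights; hosts are plain dicts
# standing in for Nova's HostState objects.

class BaseHostWeigher(object):
    def weight_multiplier(self):
        # Positive favours higher raw weights; negative inverts the order.
        return 1.0

    def _weigh_object(self, host_state, weight_properties):
        raise NotImplementedError()


class RamWeigher(BaseHostWeigher):
    """Prefer hosts with more free RAM (the idea behind Nova's RAM weigher)."""

    def _weigh_object(self, host_state, weight_properties):
        return host_state['free_ram_mb']


def weigh_hosts(weigher, hosts, weight_properties=None):
    # Nova normalizes and sums weights across weighers; this sketch simply
    # sorts hosts by multiplier * raw weight, best first.
    m = weigher.weight_multiplier()
    return sorted(
        hosts,
        key=lambda h: m * weigher._weigh_object(h, weight_properties),
        reverse=True,
    )


hosts = [{'name': 'node1', 'free_ram_mb': 512},
         {'name': 'node2', 'free_ram_mb': 2048}]
best = weigh_hosts(RamWeigher(), hosts)[0]
print(best['name'])  # node2, since it has more free RAM
```

The real contract is small: implement _weigh_object, optionally override weight_multiplier, and register the class; the devref page cited above covers where the hook fits in the scheduling pipeline.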
On 11/08/2013 09:25 AM, Stephane EVEILLARD wrote:
> Hi
>
> when trying to connect to my dashboard (havana on centos 6.4), I got a
> 500 internal server error
> with the following message in /var/log/httpd/error_log
> (self.SETTINGS_MODULE, e))
> [Fri Nov 08 09:20:59 2013] [error] [client 192.16
Looks like you're missing pbr. How did you install horizon? Running
"pip install pbr" should resolve your issue.
Aaron
On Fri, Nov 8, 2013 at 12:25 AM, Stephane EVEILLARD <
stephane.eveill...@gmail.com> wrote:
> Hi
>
> when trying to connect to my dashboard (havana on centos 6.4), I got an
Hi
when trying to connect to my dashboard (havana on centos 6.4), I got a
500 internal server error
with the following message in /var/log/httpd/error_log
[Fri Nov 08 09:20:59 2013] [error] [client 192.168.1.200] mod_wsgi
(pid=2414): Exception occurred processing WSGI script
'/usr/share/opensta