Re: [Openstack] Initial quantum network state broken

2013-02-19 Thread Sylvain Bauza

Hi Greg,

I did have trouble with DHCP assignment (see my previous post on this 
list), which I fixed by deleting the OVS bridges on the network node, 
recreating them, and restarting the OVS plugin agent and the L3/DHCP agents 
(which were all on the same physical node).

Maybe it helps.
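
For what it's worth, the commands I ran on the network node were roughly 
the ones below (a sketch only; the bridge and service names are the stock 
Ubuntu/Folsom ones from that guide, so double-check them against your setup):

# drop and recreate the OVS bridges (names assumed from the install guide)
ovs-vsctl --if-exists del-br br-int
ovs-vsctl --if-exists del-br br-ex
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
# re-attach the external NIC to br-ex afterwards, e.g.
#   ovs-vsctl add-port br-ex <external-interface>

# restart the OVS plugin agent and the L3/DHCP agents
service quantum-plugin-openvswitch-agent restart
service quantum-l3-agent restart
service quantum-dhcp-agent restart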

Anyway, when your VM DHCPs (asks for an IP), could you please tcpdump:

1. your virtual network interface on the compute node
2. your physical network interface on the compute node
3. your physical network interface on the network node

and check for BOOTP/DHCP packets?
At the physical layer, you should see GRE packets (provided you 
correctly followed the guide you mentioned) encapsulating your BOOTP/DHCP 
packets.
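
For example, something along these lines (the interface names here are 
only guesses based on your listings, so substitute your own):

# on the compute node: DHCP traffic on the instance's tap/vnet interface
tcpdump -ni vnet0 'port 67 or port 68'

# on the compute and network nodes: GRE (IP protocol 47) on the physical
# interface carrying the tenant traffic
tcpdump -ni eth1 'ip proto 47'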


If that looks OK, could you please run the commands below (on the network 
node):

 - brctl show
 - ip a
 - ovs-vsctl show
 - route -n

Thanks,
-Sylvain

On 19/02/2013 00:55, Greg Chavez wrote:
Third time I'm replying to my own message.  It seems like the initial 
network state is a problem for many first-time openstackers.  Surely 
someone would be willing to assist me.  I'm running out of time to make 
this work.  Thanks.



On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez wrote:


I'm replying to my own message because I'm desperate.  My network
situation is a mess.  I need to add this as well: my bridge
interfaces are all down.  On my compute node:

root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-0005# ip
addr show | grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state
UP qlen 1000
3: eth1:  mtu 1500 qdisc mq state
UP qlen 1000
4: eth2:  mtu 1500 qdisc noop state DOWN qlen
1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen
1000
9: br-int:  mtu 1500 qdisc noop state DOWN
10: br-eth1:  mtu 1500 qdisc noop state DOWN
13: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
14: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
15: qbre56c5d9e-b6:  mtu 1500
qdisc noqueue state UP
16: qvoe56c5d9e-b6:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
17: qvbe56c5d9e-b6:  mtu
1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
19: qbrb805a9c9-11:  mtu 1500
qdisc noqueue state UP
20: qvob805a9c9-11:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
21: qvbb805a9c9-11:  mtu
1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
34: qbr2b23c51f-02:  mtu 1500
qdisc noqueue state UP
35: qvo2b23c51f-02:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
36: qvb2b23c51f-02:  mtu
1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
37: vnet0:  mtu 1500 qdisc
pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500

And on my network node:

root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state
UP qlen 1000
3: eth1:  mtu 1500 qdisc mq state
UP qlen 1000
4: eth2:  mtu 1500 qdisc
mq state UP qlen 1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen
1000
6: br-int:  mtu 1500 qdisc noop state DOWN
7: br-eth1:  mtu 1500 qdisc noop state DOWN
8: br-ex:  mtu 1500 qdisc noqueue
state UNKNOWN
22: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
23: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000

I gave br-ex an IP and brought it up manually.  I assume this is
correct.  But I honestly don't know.
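
Concretely, what I did was something like this (a rough sketch; the 
interface name and address below are placeholders, not necessarily what 
the guide intends):

# attach the external NIC to br-ex and give the bridge an address
ovs-vsctl add-port br-ex eth2             # external interface is a guess
ip addr add 192.168.100.10/24 dev br-ex   # placeholder address
ip link set br-ex up

I gather that the OVS bridges themselves (br-int, br-eth1) showing DOWN
in "ip link" is common; they can be brought administratively up with
"ip link set br-int up" if needed.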

Thanks.




On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez
<greg.cha...@gmail.com> wrote:


Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up
the scale-ready installation described in these instructions:


https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

Basically:

(o) controller node on a mgmt and public net
(o) network node (quantum and Open vSwitch) on a mgmt, net-config,
and public net
(o) compute node on a mgmt and net-config net

Took me just over an hour and ran into only a few easily-fixed
speed bumps.  But the VM networks are totally non-functioning.
 VMs launch but no network traffic can go in or out.

I'm particularly befuddled by these problems:

( 1 ) This error in nova-compute:

ERROR nova.network.quantumv2 [-] _get_auth_token() failed

( 2 ) No NAT rules on the compute node, which probably
explains why the VMs complain about not finding a network or
being able to get metadata from 169.254.169.254.

root@kvm-cs-sn-10i:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N nova-api-metadat-OUTPUT
-N nova-api-metadat-POSTROUTING
-N nova-api-metadat-PREROUTING
-N nova-

Re: [Openstack] Initial quantum network state broken

2013-02-19 Thread Gary Kotton

Hi Greg,
Sorry to hear about your woes. I agree with you that setting things up is 
challenging and sometimes problematic. I would suggest a couple of things:
1. Give devstack a bash. It is very helpful for understanding how 
everything fits and works together. www.devstack.org
2. A few months ago we did a test day with Fedora for Folsom. There are 
Quantum commands and setup details (you can use these on other 
distributions too) - 
https://fedoraproject.org/wiki/QA:Testcase_Quantum_V2#Setup

Hope that helps.
Thanks
Gary


On 02/19/2013 01:55 AM, Greg Chavez wrote:
Third time I'm replying to my own message.  It seems like the initial 
network state is a problem for many first-time openstackers.  Surely 
someone would be willing to assist me.  I'm running out of time to make 
this work.  Thanks.



On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez wrote:


I'm replying to my own message because I'm desperate.  My network
situation is a mess.  I need to add this as well: my bridge
interfaces are all down.  On my compute node:

root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-0005# ip
addr show | grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state
UP qlen 1000
3: eth1:  mtu 1500 qdisc mq state
UP qlen 1000
4: eth2:  mtu 1500 qdisc noop state DOWN qlen
1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen
1000
9: br-int:  mtu 1500 qdisc noop state DOWN
10: br-eth1:  mtu 1500 qdisc noop state DOWN
13: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
14: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
15: qbre56c5d9e-b6:  mtu 1500
qdisc noqueue state UP
16: qvoe56c5d9e-b6:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
17: qvbe56c5d9e-b6:  mtu
1500 qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
19: qbrb805a9c9-11:  mtu 1500
qdisc noqueue state UP
20: qvob805a9c9-11:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
21: qvbb805a9c9-11:  mtu
1500 qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
34: qbr2b23c51f-02:  mtu 1500
qdisc noqueue state UP
35: qvo2b23c51f-02:  mtu
1500 qdisc pfifo_fast state UP qlen 1000
36: qvb2b23c51f-02:  mtu
1500 qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
37: vnet0:  mtu 1500 qdisc
pfifo_fast master qbr2b23c51f-02 state UNKNOWN qlen 500

And on my network node:

root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state
UP qlen 1000
3: eth1:  mtu 1500 qdisc mq state
UP qlen 1000
4: eth2:  mtu 1500 qdisc
mq state UP qlen 1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen
1000
6: br-int:  mtu 1500 qdisc noop state DOWN
7: br-eth1:  mtu 1500 qdisc noop state DOWN
8: br-ex:  mtu 1500 qdisc noqueue
state UNKNOWN
22: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
23: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000

I gave br-ex an IP and brought it up manually.  I assume this is
correct.  But I honestly don't know.

Thanks.




On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez
<greg.cha...@gmail.com> wrote:


Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up
the scale-ready installation described in these instructions:


https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

Basically:

(o) controller node on a mgmt and public net
(o) network node (quantum and Open vSwitch) on a mgmt, net-config,
and public net
(o) compute node on a mgmt and net-config net

Took me just over an hour and ran into only a few easily-fixed
speed bumps.  But the VM networks are totally non-functioning.
 VMs launch but no network traffic can go in or out.

I'm particularly befuddled by these problems:

( 1 ) This error in nova-compute:

ERROR nova.network.quantumv2 [-] _get_auth_token() failed

( 2 ) No NAT rules on the compute node, which probably
explains why the VMs complain about not finding a network or
being able to get metadata from 169.254.169.254.

root@kvm-cs-sn-10i:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N nova-api-metadat-OUTPUT
-N nova-api-metadat-POSTROUTING
-N nova-api-metadat-PREROUTING
-N nova-api-metadat-float-snat
-N nova-api-metadat-snat
-N nova-compute-OUTPUT
-N nova-compute-POSTROUTING
-N nova-compute-PREROUTING
-N nova-compute-float-snat
-N nova-compute-snat
-N nova-postrouting-bottom
-A PREROUTING -j nova-api-metada

Re: [Openstack] Initial quantum network state broken

2013-02-18 Thread Greg Chavez
Third time I'm replying to my own message.  It seems like the initial
network state is a problem for many first-time openstackers.  Surely
someone would be willing to assist me.  I'm running out of time to make this
work.  Thanks.


On Sun, Feb 17, 2013 at 3:08 AM, Greg Chavez  wrote:

> I'm replying to my own message because I'm desperate.  My network
> situation is a mess.  I need to add this as well: my bridge interfaces are
> all down.  On my compute node:
>
> root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-0005# ip addr
> show | grep ^[0-9]
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> 2: eth0:  mtu 1500 qdisc mq state UP qlen
> 1000
> 3: eth1:  mtu 1500 qdisc mq state UP qlen
> 1000
> 4: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000
> 5: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000
> 9: br-int:  mtu 1500 qdisc noop state DOWN
> 10: br-eth1:  mtu 1500 qdisc noop state DOWN
> 13: phy-br-eth1:  mtu 1500 qdisc
> pfifo_fast state UP qlen 1000
> 14: int-br-eth1:  mtu 1500 qdisc
> pfifo_fast state UP qlen 1000
> 15: qbre56c5d9e-b6:  mtu 1500 qdisc
> noqueue state UP
> 16: qvoe56c5d9e-b6:  mtu 1500
> qdisc pfifo_fast state UP qlen 1000
> 17: qvbe56c5d9e-b6:  mtu 1500
> qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
> 19: qbrb805a9c9-11:  mtu 1500 qdisc
> noqueue state UP
> 20: qvob805a9c9-11:  mtu 1500
> qdisc pfifo_fast state UP qlen 1000
> 21: qvbb805a9c9-11:  mtu 1500
> qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
> 34: qbr2b23c51f-02:  mtu 1500 qdisc
> noqueue state UP
> 35: qvo2b23c51f-02:  mtu 1500
> qdisc pfifo_fast state UP qlen 1000
> 36: qvb2b23c51f-02:  mtu 1500
> qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
> 37: vnet0:  mtu 1500 qdisc pfifo_fast
> master qbr2b23c51f-02 state UNKNOWN qlen 500
>
> And on my network node:
>
> root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> 2: eth0:  mtu 1500 qdisc mq state UP qlen
> 1000
> 3: eth1:  mtu 1500 qdisc mq state UP qlen
> 1000
> 4: eth2:  mtu 1500 qdisc mq state
> UP qlen 1000
> 5: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000
> 6: br-int:  mtu 1500 qdisc noop state DOWN
> 7: br-eth1:  mtu 1500 qdisc noop state DOWN
> 8: br-ex:  mtu 1500 qdisc noqueue state
> UNKNOWN
> 22: phy-br-eth1:  mtu 1500 qdisc
> pfifo_fast state UP qlen 1000
> 23: int-br-eth1:  mtu 1500 qdisc
> pfifo_fast state UP qlen 1000
>
> I gave br-ex an IP and brought it up manually.  I assume this is correct.  But I
> honestly don't know.
>
> Thanks.
>
>
>
>
> On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez wrote:
>
>>
>> Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up the
>> scale-ready installation described in these instructions:
>>
>>
>> https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
>>
>> Basically:
>>
>> (o) controller node on a mgmt and public net
>> (o) network node (quantum and Open vSwitch) on a mgmt, net-config, and
>> public net
>> (o) compute node on a mgmt and net-config net
>>
>> Took me just over an hour and ran into only a few easily-fixed speed
>> bumps.  But the VM networks are totally non-functioning.  VMs launch but no
>> network traffic can go in or out.
>>
>> I'm particularly befuddled by these problems:
>>
>> ( 1 ) This error in nova-compute:
>>
>> ERROR nova.network.quantumv2 [-] _get_auth_token() failed
>>
>> ( 2 ) No NAT rules on the compute node, which probably explains why the
>> VMs complain about not finding a network or being able to get metadata from
>> 169.254.169.254.
>>
>> root@kvm-cs-sn-10i:~# iptables -t nat -S
>> -P PREROUTING ACCEPT
>> -P INPUT ACCEPT
>> -P OUTPUT ACCEPT
>> -P POSTROUTING ACCEPT
>> -N nova-api-metadat-OUTPUT
>> -N nova-api-metadat-POSTROUTING
>> -N nova-api-metadat-PREROUTING
>> -N nova-api-metadat-float-snat
>> -N nova-api-metadat-snat
>> -N nova-compute-OUTPUT
>> -N nova-compute-POSTROUTING
>> -N nova-compute-PREROUTING
>> -N nova-compute-float-snat
>> -N nova-compute-snat
>> -N nova-postrouting-bottom
>> -A PREROUTING -j nova-api-metadat-PREROUTING
>> -A PREROUTING -j nova-compute-PREROUTING
>> -A OUTPUT -j nova-api-metadat-OUTPUT
>> -A OUTPUT -j nova-compute-OUTPUT
>> -A POSTROUTING -j nova-api-metadat-POSTROUTING
>> -A POSTROUTING -j nova-compute-POSTROUTING
>> -A POSTROUTING -j nova-postrouting-bottom
>> -A nova-api-metadat-snat -j nova-api-metadat-float-snat
>> -A nova-compute-snat -j nova-compute-float-snat
>> -A nova-postrouting-bottom -j nova-api-metadat-snat
>> -A nova-postrouting-bottom -j nova-compute-snat
>>
>> (3) And lastly, no default secgroup rules, whose function governs... what
>> exactly?  Connections to the VM's public or private IPs?  I guess I'm just
>> not sure if this is relevant to my overall problem of ZERO VM network
>> connectivity.
>>
>> I seek guidance please.  Thanks.
>>
>>
>> --
>> \*..+.-
>> --Greg Chavez
>> +//..;};
>>
>
>
>
> --
> \*..+.-
> --Greg Chavez
> +//..;};
>



-- 
\*..+.-
--Greg Chavez
+//..;};

Re: [Openstack] Initial quantum network state broken

2013-02-17 Thread Greg Chavez
I'm replying to my own message because I'm desperate.  My network situation
is a mess.  I need to add this as well: my bridge interfaces are all down.
 On my compute node:

root@kvm-cs-sn-10i:/var/lib/nova/instances/instance-0005# ip addr show
| grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state UP qlen
1000
3: eth1:  mtu 1500 qdisc mq state UP qlen
1000
4: eth2:  mtu 1500 qdisc noop state DOWN qlen 1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000
9: br-int:  mtu 1500 qdisc noop state DOWN
10: br-eth1:  mtu 1500 qdisc noop state DOWN
13: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
14: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
15: qbre56c5d9e-b6:  mtu 1500 qdisc
noqueue state UP
16: qvoe56c5d9e-b6:  mtu 1500
qdisc pfifo_fast state UP qlen 1000
17: qvbe56c5d9e-b6:  mtu 1500
qdisc pfifo_fast master qbre56c5d9e-b6 state UP qlen 1000
19: qbrb805a9c9-11:  mtu 1500 qdisc
noqueue state UP
20: qvob805a9c9-11:  mtu 1500
qdisc pfifo_fast state UP qlen 1000
21: qvbb805a9c9-11:  mtu 1500
qdisc pfifo_fast master qbrb805a9c9-11 state UP qlen 1000
34: qbr2b23c51f-02:  mtu 1500 qdisc
noqueue state UP
35: qvo2b23c51f-02:  mtu 1500
qdisc pfifo_fast state UP qlen 1000
36: qvb2b23c51f-02:  mtu 1500
qdisc pfifo_fast master qbr2b23c51f-02 state UP qlen 1000
37: vnet0:  mtu 1500 qdisc pfifo_fast
master qbr2b23c51f-02 state UNKNOWN qlen 500

And on my network node:

root@knet-cs-gen-01i:~# ip addr show | grep ^[0-9]
1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
2: eth0:  mtu 1500 qdisc mq state UP qlen
1000
3: eth1:  mtu 1500 qdisc mq state UP qlen
1000
4: eth2:  mtu 1500 qdisc mq state
UP qlen 1000
5: eth3:  mtu 1500 qdisc noop state DOWN qlen 1000
6: br-int:  mtu 1500 qdisc noop state DOWN
7: br-eth1:  mtu 1500 qdisc noop state DOWN
8: br-ex:  mtu 1500 qdisc noqueue state
UNKNOWN
22: phy-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000
23: int-br-eth1:  mtu 1500 qdisc
pfifo_fast state UP qlen 1000

I gave br-ex an IP and brought it up manually.  I assume this is correct.  But I
honestly don't know.

Thanks.




On Fri, Feb 15, 2013 at 6:54 PM, Greg Chavez  wrote:

>
> Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up the
> scale-ready installation described in these instructions:
>
>
> https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
>
> Basically:
>
> (o) controller node on a mgmt and public net
> (o) network node (quantum and Open vSwitch) on a mgmt, net-config, and public net
> (o) compute node on a mgmt and net-config net
>
> Took me just over an hour and ran into only a few easily-fixed speed
> bumps.  But the VM networks are totally non-functioning.  VMs launch but no
> network traffic can go in or out.
>
> I'm particularly befuddled by these problems:
>
> ( 1 ) This error in nova-compute:
>
> ERROR nova.network.quantumv2 [-] _get_auth_token() failed
>
> ( 2 ) No NAT rules on the compute node, which probably explains why the
> VMs complain about not finding a network or being able to get metadata from
> 169.254.169.254.
>
> root@kvm-cs-sn-10i:~# iptables -t nat -S
> -P PREROUTING ACCEPT
> -P INPUT ACCEPT
> -P OUTPUT ACCEPT
> -P POSTROUTING ACCEPT
> -N nova-api-metadat-OUTPUT
> -N nova-api-metadat-POSTROUTING
> -N nova-api-metadat-PREROUTING
> -N nova-api-metadat-float-snat
> -N nova-api-metadat-snat
> -N nova-compute-OUTPUT
> -N nova-compute-POSTROUTING
> -N nova-compute-PREROUTING
> -N nova-compute-float-snat
> -N nova-compute-snat
> -N nova-postrouting-bottom
> -A PREROUTING -j nova-api-metadat-PREROUTING
> -A PREROUTING -j nova-compute-PREROUTING
> -A OUTPUT -j nova-api-metadat-OUTPUT
> -A OUTPUT -j nova-compute-OUTPUT
> -A POSTROUTING -j nova-api-metadat-POSTROUTING
> -A POSTROUTING -j nova-compute-POSTROUTING
> -A POSTROUTING -j nova-postrouting-bottom
> -A nova-api-metadat-snat -j nova-api-metadat-float-snat
> -A nova-compute-snat -j nova-compute-float-snat
> -A nova-postrouting-bottom -j nova-api-metadat-snat
> -A nova-postrouting-bottom -j nova-compute-snat
>
> (3) And lastly, no default secgroup rules, whose function governs... what
> exactly?  Connections to the VM's public or private IPs?  I guess I'm just
> not sure if this is relevant to my overall problem of ZERO VM network
> connectivity.
>
> I seek guidance please.  Thanks.
>
>
> --
> \*..+.-
> --Greg Chavez
> +//..;};
>



-- 
\*..+.-
--Greg Chavez
+//..;};


[Openstack] Initial quantum network state broken

2013-02-15 Thread Greg Chavez
Sigh.  So I abandoned RHEL 6.3, rekicked my systems and set up the
scale-ready installation described in these instructions:

https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst

Basically:

(o) controller node on a mgmt and public net
(o) network node (quantum and Open vSwitch) on a mgmt, net-config, and public net
(o) compute node on a mgmt and net-config net

Took me just over an hour and ran into only a few easily-fixed speed bumps.
 But the VM networks are totally non-functioning.  VMs launch but no
network traffic can go in or out.

I'm particularly befuddled by these problems:

( 1 ) This error in nova-compute:

ERROR nova.network.quantumv2 [-] _get_auth_token() failed
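
As I understand it, that call authenticates against keystone using the
quantum_* options in nova.conf on the compute node; a minimal sketch of
the relevant section, with placeholder values, would be:

# /etc/nova/nova.conf on the compute node -- values below are placeholders
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.0.0.10:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=servicepass
quantum_admin_auth_url=http://10.0.0.10:35357/v2.0

If any of those are wrong, or keystone is unreachable from the compute
node, the token request fails with exactly this error.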

( 2 ) No NAT rules on the compute node, which probably explains why the VMs
complain about not finding a network or being able to get metadata from
169.254.169.254.

root@kvm-cs-sn-10i:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N nova-api-metadat-OUTPUT
-N nova-api-metadat-POSTROUTING
-N nova-api-metadat-PREROUTING
-N nova-api-metadat-float-snat
-N nova-api-metadat-snat
-N nova-compute-OUTPUT
-N nova-compute-POSTROUTING
-N nova-compute-PREROUTING
-N nova-compute-float-snat
-N nova-compute-snat
-N nova-postrouting-bottom
-A PREROUTING -j nova-api-metadat-PREROUTING
-A PREROUTING -j nova-compute-PREROUTING
-A OUTPUT -j nova-api-metadat-OUTPUT
-A OUTPUT -j nova-compute-OUTPUT
-A POSTROUTING -j nova-api-metadat-POSTROUTING
-A POSTROUTING -j nova-compute-POSTROUTING
-A POSTROUTING -j nova-postrouting-bottom
-A nova-api-metadat-snat -j nova-api-metadat-float-snat
-A nova-compute-snat -j nova-compute-float-snat
-A nova-postrouting-bottom -j nova-api-metadat-snat
-A nova-postrouting-bottom -j nova-compute-snat
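
As I understand it, with Quantum's L3 agent the 169.254.169.254 redirect
normally lives in the router's network namespace on the network node
rather than in the compute node's NAT table, so a more telling check might
be (the router UUID below is a placeholder):

# on the network node
ip netns list
ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 169.254.169.254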

(3) And lastly, no default secgroup rules, whose function governs... what
exactly?  Connections to the VM's public or private IPs?  I guess I'm just
not sure if this is relevant to my overall problem of ZERO VM network
connectivity.
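
If it turns out to matter: my rough understanding is that the default
security group filters inbound traffic to the instances' fixed and floating
IPs, and with no rules even ping and SSH are blocked. A typical way to open
it up (assuming the "default" group exists for the tenant) is:

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

That said, missing secgroup rules would only block inbound traffic, so it
probably does not explain the DHCP/metadata problems above.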

I seek guidance please.  Thanks.


-- 
\*..+.-
--Greg Chavez
+//..;};