[Yahoo-eng-team] [Bug 1513279] Re: routing with dhcpv6-stateful addressing is broken with DVR

2015-11-07 Thread Ritesh Anand
The bug is not valid.

Ping6 works fine; the security-group rules allowing ICMPv6 were simply missing.

After adding:
neutron security-group-rule-create --protocol icmp --ethertype IPv6 --direction egress 4cf607a6-537f-4893-8531-70ba23895b3d
neutron security-group-rule-create --protocol icmp --ethertype IPv6 --direction ingress 4cf607a6-537f-4893-8531-70ba23895b3d

the scenario works fine with DVR.

@Scollins and folks, I regret the inconvenience :(
Look at the bright side: it works with DVR :)
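
For anyone reproducing this, the rules can be confirmed and the path re-tested roughly as follows (a sketch; the security-group ID is the one above, and <v6-address-of-other-vm> is a placeholder):

# confirm both ICMPv6 rules landed on the security group
neutron security-group-rule-list

# retry the cross-network ping from the guest
ping6 -c 3 <v6-address-of-other-vm>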

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513279

Title:
  routing with dhcpv6-stateful addressing is broken with DVR

Status in neutron:
  Invalid

Bug description:
  Not able to ping the v6 address of a VM on a different network, using a legacy router.
  Setup has one controller/network node and two compute nodes.

  Steps:
  0. Add security rules to allow ping traffic.
  neutron security-group-rule-create --protocol icmp --direction ingress 94d41516-dab5-413c-9349-7c9bc3a09e75
  1. Create two networks.
  2. Create an IPv4 subnet on each (for accessing the VMs).
  3. Create an IPv6 subnet on each with dhcpv6-stateful addressing:
   neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
  4. Create a router (not distributed).
  5. Add an interface to the router on each of the four subnets.
  6. Boot a VM on both networks.
  7. Log into the guest VM and configure the interface to receive an inet6 DHCP address; use dhclient to get a v6 address.
  8. Ping the v6 address of the other guest VM. Fails!

  
  ubuntu@dvm11:~$ ping6 :2::4
  PING :2::4(:2::4) 56 data bytes
  From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=3 Destination unreachable: Address unreachable

  
  Note: As we need to modify interface settings and use dhclient, an Ubuntu cloud image was used. One may need to set the MTU to 1400 for communicating with the Ubuntu cloud image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513279] [NEW] routing with dhcpv6-stateful addressing is broken

2015-11-04 Thread Ritesh Anand
Public bug reported:

Not able to ping the v6 address of a VM on a different network, using a legacy router.
Setup has one controller/network node and two compute nodes.

Steps:
0. Add security rules to allow ping traffic.
neutron security-group-rule-create --protocol icmp --direction ingress 94d41516-dab5-413c-9349-7c9bc3a09e75
1. Create two networks.
2. Create an IPv4 subnet on each (for accessing the VMs).
3. Create an IPv6 subnet on each with dhcpv6-stateful addressing:
 neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
4. Create a router (not distributed).
5. Add an interface to the router on each of the four subnets.
6. Boot a VM on both networks.
7. Log into the guest VM and configure the interface to receive an inet6 DHCP address; use dhclient to get a v6 address (see the sketch after this list).
8. Ping the v6 address of the other guest VM. Fails!
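
Step 7 on the Ubuntu cloud image looked roughly like this -- a minimal sketch, assuming the guest NIC is eth0:

# bring the interface up and request a stateful DHCPv6 lease
sudo ip link set eth0 up
sudo dhclient -6 eth0
# verify that a global v6 address was assigned
ip -6 addr show dev eth0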


ubuntu@dvm11:~$ ping6 :2::4
PING :2::4(:2::4) 56 data bytes
From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
From :1::1 icmp_seq=3 Destination unreachable: Address unreachable


Note: As we need to modify interface settings and use dhclient, an Ubuntu cloud image was used. One may need to set the MTU to 1400 for communicating with the Ubuntu cloud image.
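
Setting the MTU on the guest, if needed, is a one-liner -- again assuming the NIC is eth0:

sudo ip link set dev eth0 mtu 1400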

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513279

Title:
  routing with dhcpv6-stateful addressing is broken

Status in neutron:
  New

Bug description:
  Not able to ping the v6 address of a VM on a different network, using a legacy router.
  Setup has one controller/network node and two compute nodes.

  Steps:
  0. Add security rules to allow ping traffic.
  neutron security-group-rule-create --protocol icmp --direction ingress 94d41516-dab5-413c-9349-7c9bc3a09e75
  1. Create two networks.
  2. Create an IPv4 subnet on each (for accessing the VMs).
  3. Create an IPv6 subnet on each with dhcpv6-stateful addressing:
   neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
  4. Create a router (not distributed).
  5. Add an interface to the router on each of the four subnets.
  6. Boot a VM on both networks.
  7. Log into the guest VM and configure the interface to receive an inet6 DHCP address; use dhclient to get a v6 address.
  8. Ping the v6 address of the other guest VM. Fails!

  
  ubuntu@dvm11:~$ ping6 :2::4
  PING :2::4(:2::4) 56 data bytes
  From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=3 Destination unreachable: Address unreachable

  
  Note: As we need to modify interface settings and use dhclient, an Ubuntu cloud image was used. One may need to set the MTU to 1400 for communicating with the Ubuntu cloud image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504358] [NEW] Trace on nova net-delete for neutron network

2015-10-08 Thread Ritesh Anand
Public bug reported:

Created a neutron network "net3".
By mistake, executed "nova net-delete net3".
The case was not gracefully handled.

stack@osctrlr:~/devstack$ nova net-delete net3
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0780c222-cee5-4e1e-886d-4f0e893d9ca6)
stack@osctrlr:~/devstack$

Attaching n-api logs.
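
As a workaround (untested here), the network itself can presumably still be deleted through the neutron client directly:

neutron net-delete net3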

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "n-api log"
   https://bugs.launchpad.net/bugs/1504358/+attachment/4489269/+files/n-api_crash.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504358

Title:
  Trace on nova net-delete for neutron network

Status in OpenStack Compute (nova):
  New

Bug description:
  Created a neutron network "net3".
  By mistake, executed "nova net-delete net3".
  The case was not gracefully handled.

  stack@osctrlr:~/devstack$ nova net-delete net3
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0780c222-cee5-4e1e-886d-4f0e893d9ca6)
  stack@osctrlr:~/devstack$

  Attaching n-api logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501969] [NEW] Instance does not get IP from dhcp ipv6 subnet (slaac/slaac) with DVR, when router interface is added after VM creation.

2015-10-01 Thread Ritesh Anand
Public bug reported:

Instance does not get IP from dhcp ipv6 subnet (slaac/slaac) with DVR,
when router interface is added after VM creation.

The instance does get an IP when it is booted after the interface to the subnet has already been added to the DVR.
This ordering issue is not observed with a centralized router.

Easy to recreate; a command sketch follows.
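
A rough sketch of the failing order (network/subnet names, image, and flavor are assumed here, not taken from the report; <net1-id> is a placeholder):

# network with a slaac/slaac v6 subnet
neutron net-create net1
neutron subnet-create net1 4001:db8::/64 --name ipv62s1 --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac

# boot FIRST, then attach the subnet to the DVR -- the VM gets no global v6 address
nova boot --image cirros --flavor m1.tiny --nic net-id=<net1-id> vm1
neutron router-interface-add dvr ipv62s1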

On compute:
--

NOT getting an IP when the router interface is added after the VM has been created:
$ ifconfig
eth0  Link encap:Ethernet  HWaddr FA:16:3E:9C:15:B7
  inet6 addr: fe80::f816:3eff:fe9c:15b7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:14 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:1116 (1.0 KiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)

Gets an IP when the router interface is added before the VM is booted:
$
$ ifconfig
eth0  Link encap:Ethernet  HWaddr FA:16:3E:9C:15:B7
  inet6 addr: 4001:db8::f816:3eff:fe9c:15b7/64 Scope:Global
  inet6 addr: fe80::f816:3eff:fe9c:15b7/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:15 errors:0 dropped:0 overruns:0 frame:0
  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:1226 (1.1 KiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:12 errors:0 dropped:0 overruns:0 frame:0
  TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:1020 (1020.0 B)  TX bytes:1020 (1020.0 B)

$

Subnet:
stack@osctrlr:~/devstack$ neutron subnet-show ipv62s1
+-------------------+------------------------------------------------------------------+
| Field             | Value                                                            |
+-------------------+------------------------------------------------------------------+
| allocation_pools  | {"start": "4001:db8::2", "end": "4001:db8::ffff:ffff:ffff:ffff"} |
| cidr              | 4001:db8::/64                                                    |
| dns_nameservers   |                                                                  |
| enable_dhcp       | True                                                             |
| gateway_ip        | 4001:db8::1                                                      |
| host_routes       |                                                                  |
| id                | 2b24b126-f618-4daa-a3a8-24ea8720a0db                             |
| ip_version        | 6                                                                |
| ipv6_address_mode | slaac                                                            |
| ipv6_ra_mode      | slaac                                                            |
| name              | ipv62s1                                                          |
| network_id        | d9a71eed-0768-46b7-be0e-74664211f28b                             |
| subnetpool_id     |                                                                  |
| tenant_id         | 9fbdd2326fe34e949ece2bef8c8f8c8c                                 |
+-------------------+------------------------------------------------------------------+
stack@osctrlr:~/devstack$

Router:
stack@osctrlr:~/devstack$ neutron router-show dvr
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | True                                 |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | 3512b48b-a1a8-4923-9a4b-0720dfd71baf |
| name                  | dvr                                  |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 9fbdd2326fe34e949ece2bef8c8f8c8c     |
+-----------------------+--------------------------------------+
stack@osctrlr:~/devstack$

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, 

[Yahoo-eng-team] [Bug 1481540] [NEW] no interface created in DHCP namespace for second subnet (dhcp enabled) on a network

2015-08-04 Thread Ritesh Anand
Public bug reported:

Steps to reproduce:

On devstack:
1. Create a network.
2. Create a subnet (dhcp enabled); we can see the DHCP namespace with one interface for the subnet.
3. Create another subnet (dhcp enabled).
We do not see another interface for this subnet in the DHCP namespace (a reproduction sketch follows).
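
A rough reproduction sketch (the names match the logs below; <network-id> stands for the id of "private"):

neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private-subnet
# the qdhcp- namespace now holds one tap interface serving 10.0.0.0/24
neutron subnet-create private 20.0.0.0/24 --name private-subnet2
# expected: an interface/address for 20.0.0.0/24 in the namespace; none appears
sudo ip netns exec qdhcp-<network-id> ifconfig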


LOGS:
==

stack@ritesh05:/opt/stack/neutron$ neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| bff21881-abcf-4c68-afd7-fae081e87f9c | public  |                                                  |
| beaabd4a-8211-41f5-906d-d685c1ee6b10 | private | 0d8d834c-5806-453e-852e-4382f53d956c 20.0.0.0/24 |
|                                      |         | f11c53f8-f254-4c88-b6dd-3ba3fec68329 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+
stack@ritesh05:/opt/stack/neutron$ neutron subnet-list
+--------------------------------------+-----------------+-------------+--------------------------------------------+
| id                                   | name            | cidr        | allocation_pools                           |
+--------------------------------------+-----------------+-------------+--------------------------------------------+
| 0d8d834c-5806-453e-852e-4382f53d956c | private-subnet2 | 20.0.0.0/24 | {"start": "20.0.0.2", "end": "20.0.0.254"} |
| f11c53f8-f254-4c88-b6dd-3ba3fec68329 | private-subnet  | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
+--------------------------------------+-----------------+-------------+--------------------------------------------+
stack@ritesh05:/opt/stack/neutron$

stack@ritesh05:/opt/stack/neutron$ sudo ip netns exec qdhcp-beaabd4a-8211-41f5-906d-d685c1ee6b10 ifconfig
lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tapc97de8da-97 Link encap:Ethernet  HWaddr fa:16:3e:a2:b8:02
  inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fea2:b802/64 Scope:Link
  UP BROADCAST RUNNING  MTU:1500  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:738 (738.0 B)

stack@ritesh05:/opt/stack/neutron$

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp neutron-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481540

Title:
  no interface created in DHCP namespace for second subnet (dhcp
  enabled) on a network

Status in neutron:
  New

Bug description:
  Steps to reproduce:

  On devstack:
  1. Create a network.
  2. Create a subnet (dhcp enabled); we can see the DHCP namespace with one interface for the subnet.
  3. Create another subnet (dhcp enabled).
  We do not see another interface for this subnet in the DHCP namespace.

  
  LOGS:
  ==

  stack@ritesh05:/opt/stack/neutron$ neutron net-list
  +--------------------------------------+---------+--------------------------------------------------+
  | id                                   | name    | subnets                                          |
  +--------------------------------------+---------+--------------------------------------------------+
  | bff21881-abcf-4c68-afd7-fae081e87f9c | public  |                                                  |
  | beaabd4a-8211-41f5-906d-d685c1ee6b10 | private | 0d8d834c-5806-453e-852e-4382f53d956c 20.0.0.0/24 |
  |                                      |         | f11c53f8-f254-4c88-b6dd-3ba3fec68329 10.0.0.0/24 |
  +--------------------------------------+---------+--------------------------------------------------+
  stack@ritesh05:/opt/stack/neutron$ neutron subnet-list
  +--------------------------------------+-----------------+-------------+--------------------------------------------+
  | id                                   | name            | cidr        | allocation_pools                           |
  +--------------------------------------+-----------------+-------------+--------------------------------------------+
  | 0d8d834c-5806-453e-852e-4382f53d956c | private-subnet2 | 20.0.0.0/24 | {"start": "20.0.0.2", "end": "20.0.0.254"} |
  | f11c53f8-f254-4c88-b6dd-3ba3fec68329 | private-subnet  | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
  +--------------------------------------+-----------------+-------------+--------------------------------------------+