[Yahoo-eng-team] [Bug 1358718] [NEW] duplicate ping packets from dhcp namespace when pinging across DVR subnet VMs

2014-08-19 Thread Sarada
Public bug reported:

1. Have a multi-node devstack setup: 1 controller, 1 network node (NN) & 2 compute nodes (CNs).
2. Create two networks with a subnet in each:
net1 - 10.1.10.0/24
net2 - 10.1.8.0/24

3. Create a distributed router. Add both subnets as interfaces to the DVR.
4. Spawn VM1 in net1 & host it on CN1.
5. Spawn VM2 in net2 & host it on CN2.
6. Log in to the NN & from the net1 DHCP namespace try to ping VM2, which is part of net2.
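For reference, steps 2-5 can be sketched with a Juno-era neutron/nova CLI (a hedged sketch: the names net1/net2/sub1/sub2/dvr1 and the cirros image/flavor are illustrative assumptions, and the CLI calls are commented because they need a live devstack controller):

```shell
# Subnets from the report; only the echo line runs as-is.
CIDR1=10.1.10.0/24
CIDR2=10.1.8.0/24
echo "net1=$CIDR1 net2=$CIDR2"
# neutron net-create net1
# neutron subnet-create net1 "$CIDR1" --name sub1
# neutron net-create net2
# neutron subnet-create net2 "$CIDR2" --name sub2
# neutron router-create dvr1 --distributed True
# neutron router-interface-add dvr1 sub1
# neutron router-interface-add dvr1 sub2
# Pin each VM to a different compute node:
# nova boot --image cirros --flavor m1.tiny --availability-zone nova:CN1 \
#   --nic net-id=$(neutron net-list | awk '/ net1 /{print $2}') vm1
# nova boot --image cirros --flavor m1.tiny --availability-zone nova:CN2 \
#   --nic net-id=$(neutron net-list | awk '/ net2 /{print $2}') vm2
```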

As shown below, we can see duplicate ping packets.

stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ifconfig
lo        Link encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:328 (328.0 B)  TX bytes:328 (328.0 B)

tap68b11c40-f9 Link encap:Ethernet  HWaddr fa:16:3e:87:67:20
  inet addr:10.1.10.3  Bcast:10.1.10.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe87:6720/64 Scope:Link
  UP BROADCAST RUNNING  MTU:1500  Metric:1
  RX packets:179 errors:0 dropped:0 overruns:0 frame:0
  TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:16358 (16.3 KB)  TX bytes:10284 (10.2 KB)

stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ping 10.1.8.2
PING 10.1.8.2 (10.1.8.2) 56(84) bytes of data.
64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.11 ms
64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.13 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.515 ms
64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.537 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.362 ms
64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.385 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.262 ms
64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.452 ms (DUP!)
^C
--- 10.1.8.2 ping statistics ---
4 packets transmitted, 4 received, +4 duplicates, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.262/1.094/3.132/1.174 ms
stack@qatst231:~/devstack$
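One way to narrow down where the duplicates come from is to capture echo replies inside the same qdhcp namespace while the ping runs; a duplicated reply shows up as two frames with the same ICMP sequence number, often with different source MACs. The tcpdump line is commented because it needs the live setup; the filter itself is standard pcap syntax:

```shell
# Namespace and tap device are the ones from the transcript above.
NS=qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d
FILTER="icmp[icmptype] == icmp-echoreply"
echo "$NS"
# On the network node, in a second terminal while the ping runs:
# sudo ip netns exec "$NS" tcpdump -nei tap68b11c40-f9 "$FILTER"
# Two replies with the same "seq" but different source MACs point at
# two L2 paths delivering the same packet back into the namespace.
```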

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358718

Title:
  duplicate ping packets from dhcp namespace when pinging across DVR
  subnet VMs

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358718/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1356734] [NEW] SNAT namespace is being hosted on all nodes though agent_mode is set as dvr

2014-08-14 Thread Sarada
Public bug reported:

The SNAT namespace is being hosted on all nodes even though agent_mode is
set as dvr.

Here are the steps followed.

1. One service node with agent_mode=dvr_snat
2. Two compute nodes with agent_mode=dvr
3. Create an external network & a subnet within it.
4. Create a private network & a subnet (sub1) within it.
5. Create a distributed router (DVR).
6. Set the external network as the gateway of the DVR.
7. Add the private subnet (sub1) as an interface to the DVR.
8. Since the service node has agent_mode=dvr_snat, the SNAT namespace should
sit only on this service node. But the SNAT namespace is hosted on all 3
nodes (the service node with agent_mode=dvr_snat & the compute nodes with agent_mode=dvr).


Expected behavior:
The SNAT namespace should sit only on the node with agent_mode=dvr_snat.
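To verify which nodes actually host the namespace, one can list namespaces on every node; SNAT namespaces are named with a "snat-" prefix followed by the router UUID (the listing command is commented since it needs the live nodes):

```shell
# SNAT namespaces are named "snat-<router UUID>".
PREFIX="snat-"
echo "$PREFIX"
# Run on the service node and on each compute node; only the
# agent_mode=dvr_snat node should print any matches:
# sudo ip netns list | grep "^${PREFIX}"
```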

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1356734

Title:
  SNAT namespace is being hosted on all nodes though agent_mode is set
  as dvr

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1356734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355627] [NEW] There should be a parameter to define MTU for DVR interfaces in l3_agent.ini config file

2014-08-12 Thread Sarada
Public bug reported:

There should be a parameter to define MTU for DVR interfaces in
l3_agent.ini config file


Since configuring an MTU of 8900 on DVR router interfaces gives better
performance, it would be good to provide an option to configure the MTU for
DVR interfaces in the l3_agent.ini config file.

By default the DVR interface MTU is configured as 1500. Since we have
seen significant performance improvements with MTU 8900 on DVR
interfaces, it would be good to provide a parameter in the l3_agent
config file that lets the user set the desired MTU for the DVR
interfaces.
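Until such an option exists, a manual workaround (a sketch; the qrouter namespace and qr- port names are placeholders that vary per router) is to raise the MTU on each qr- port inside the qrouter namespace, then confirm with a don't-fragment ping sized to the new MTU:

```shell
MTU=8900
# Largest ICMP payload that fits in an 8900-byte IP packet:
# 8900 - 20 (IP header) - 8 (ICMP header) = 8872
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"
# sudo ip netns exec qrouter-<router-uuid> ip link set <qr-port> mtu "$MTU"
# sudo ip netns exec qrouter-<router-uuid> ping -M do -s "$PAYLOAD" 10.1.8.2
```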

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1355627

Title:
  There should be a parameter to define MTU for DVR interfaces in the
  l3_agent.ini config file

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1355627/+subscriptions

-- 


[Yahoo-eng-team] [Bug 1352857] [NEW] few VMs fail to get ip address in devstack

2014-08-05 Thread Sarada
Public bug reported:

I am running Juno int code on a multi-node devstack environment. I am
trying to boot 10 VMs, each in a different network. When I do, a few VMs
fail to get an IP address. If I delete & recreate a VM in the same
network, it will also fail to get an IP address.

Here are the steps followed to hit this issue:

1. Create a network.
2. Create a subnet.
3. Create a distributed router & add the subnet to this DVR.
4. Boot a VM.
5. Repeat steps 1-4 for 9 more VMs.
6. At this point, a few VMs will not get their IP address.
7. I see that the DHCP request is not reaching the NN. Please note that
other VMs hosted on the same CN are getting IP addresses.

Logs on both the OpenStack controller & the compute nodes do not show
any errors.
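To confirm whether the DHCP DISCOVER is being dropped before it reaches the NN, one can capture DHCP traffic inside the qdhcp namespace of the affected network while the VM boots (the namespace and tap names are placeholders; the BPF filter is standard):

```shell
# DHCP uses UDP port 67 (server) and 68 (client).
FILTER="udp and (port 67 or port 68)"
echo "$FILTER"
# On the network node, while the failing VM is (re)booted:
# sudo ip netns exec qdhcp-<net-uuid> tcpdump -ni <tap-device> "$FILTER"
# No DISCOVER arriving here, while other VMs on the same CN do get
# leases, would suggest a per-port drop on the compute-node side.
```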

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352857

Title:
  few VMs fail to get ip address in devstack

Status in OpenStack Neutron (virtual network service):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352857/+subscriptions
