[Yahoo-eng-team] [Bug 1568197] Re: Problems with neutron.common.constants import break HA and DVRHA functionality

2016-05-25 Thread Adolfo Duarte
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568197

Title:
  Problems with neutron.common.constants import break HA and DVRHA
  functionality

Status in neutron:
  Fix Released

Bug description:
  Many files in the neutron project use this import:

  from neutron.common import constants as l3_const

  or some variation of it.

  For some reason this import is not working correctly. In particular, the
  following lines from the file neutron/common/constants.py seem to not take
  effect (or the value of the constant gets reverted):

  lines 24-31:
  # TODO(anilvenkata) Below constants should be added to neutron-lib
  DEVICE_OWNER_HA_REPLICATED_INT = (lib_constants.DEVICE_OWNER_NETWORK_PREFIX +
                                    "ha_router_replicated_interface")
  ROUTER_INTERFACE_OWNERS = lib_constants.ROUTER_INTERFACE_OWNERS + \
      (DEVICE_OWNER_HA_REPLICATED_INT,)
  ROUTER_INTERFACE_OWNERS_SNAT = lib_constants.ROUTER_INTERFACE_OWNERS_SNAT + \
      (DEVICE_OWNER_HA_REPLICATED_INT,)


  The ROUTER_INTERFACE_OWNERS and ROUTER_INTERFACE_OWNERS_SNAT constants
  do not seem to take the new values assigned to them: original-tuple +
  DEVICE_OWNER_HA_REPLICATED_INT.

  In files which use the import mentioned above ("from neutron.common import
  constants" or a variation of it), the values of ROUTER_INTERFACE_OWNERS and
  ROUTER_INTERFACE_OWNERS_SNAT do not contain the value
  "ha_router_replicated_interface".

  This is causing problems with HA router and DVRHA routers because the
  files neutron/db/l3_dvr_db.py and neutron/db/l3_hamode_db.py make use
  of the constants neutron.common.constants.ROUTER_INTERFACE_OWNERS and
  neutron.common.constants.ROUTER_INTERFACE_OWNERS_SNAT to figure out
  what ports belong to a router.

  Since the "ha_router_replicated_interface" is not listed in either one
  of those variables, the neutron server (q-svc) does not include any
  ports which are owned by ha_router_replicated_interface as part of
  router updates.
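For reference, a minimal self-contained sketch of the intended behavior. The neutron-lib values below are stand-ins (assumptions), not the real constants:

```python
# Stand-in values (assumptions); the real ones live in neutron_lib.constants.
DEVICE_OWNER_NETWORK_PREFIX = "network:"
LIB_ROUTER_INTERFACE_OWNERS = ("network:router_interface",
                               "network:router_interface_distributed")

# Mirrors the definitions quoted above from neutron/common/constants.py.
DEVICE_OWNER_HA_REPLICATED_INT = (DEVICE_OWNER_NETWORK_PREFIX +
                                  "ha_router_replicated_interface")
ROUTER_INTERFACE_OWNERS = (LIB_ROUTER_INTERFACE_OWNERS +
                           (DEVICE_OWNER_HA_REPLICATED_INT,))

# Expected: the extended tuple contains the HA owner.  The bug is that
# importing code observes a tuple WITHOUT it.
assert DEVICE_OWNER_HA_REPLICATED_INT in ROUTER_INTERFACE_OWNERS
```

Any consumer doing membership checks against ROUTER_INTERFACE_OWNERS depends on this property holding at import time.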

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1568197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571113] [NEW] SNAT interface not created for dvrha in some scenarios

2016-04-15 Thread Adolfo Duarte
Public bug reported:

branch: master
commit: 4e3a9c2b9c0ada6c2a471d24a20927aa8e8f5740


Depending on the order in which neutron router commands are issued, the snat
interface for a dvrha router on the dvr_snat agent might not get created.


Look at the following interaction:


-- create a couple of networks and subnets we will use (public and private)
neutron net-create public --router:external
neutron subnet-create public 192.168.201.0/24 --disable_dhcp
neutron net-create n1 
neutron subnet-create n1 101.0.0.0/24 --name s1

-- now we create and attach the router in a particular order: first set the
external gateway and then attach the private subnet.

neutron router-create dvrha --distributed=True --ha=True
neutron router-gateway-set dvrha public 
neutron router-interface-add dvrha s1


neutron router-port-list dvrha -c fixed_ips
+--------------------------------------------------------------------------------------+
| fixed_ips                                                                            |
+--------------------------------------------------------------------------------------+
| {"subnet_id": "b4a81968-8a08-4054-985f-e4e1cf687945", "ip_address": "101.0.0.3"}     |
| {"subnet_id": "b4a81968-8a08-4054-985f-e4e1cf687945", "ip_address": "101.0.0.1"}     |
| {"subnet_id": "782574e8-fabb-41da-a1a0-9817994ec0a5", "ip_address": "169.254.192.2"} |
| {"subnet_id": "782574e8-fabb-41da-a1a0-9817994ec0a5", "ip_address": "169.254.192.3"} |
| {"subnet_id": "0173f29d-746d-45a1-ad10-16dde529eb34", "ip_address": "192.168.201.4"} |
| {"subnet_id": "782574e8-fabb-41da-a1a0-9817994ec0a5", "ip_address": "169.254.192.1"} |
+--------------------------------------------------------------------------------------+


*** As you can see there are two interfaces on subnet 101.0.0.0/24 (101.0.0.3
and 101.0.0.1): one is the router interface, the other is the snat interface.
THIS IS CORRECT BEHAVIOR

*** next we change the order of the commands:
-- first clean up
neutron router-interface-delete dvrha s1 
neutron router-gateway-clear dvrha 
neutron router-delete dvrha

-- This time, we add the internal interface before setting the external
gateway. (reverse order of steps)

neutron router-create dvrha --distributed=True --ha=True
neutron router-interface-add dvrha s1
neutron router-gateway-set dvrha public 
neutron router-port-list dvrha -c fixed_ips
+--------------------------------------------------------------------------------------+
| fixed_ips                                                                            |
+--------------------------------------------------------------------------------------+
| {"subnet_id": "0173f29d-746d-45a1-ad10-16dde529eb34", "ip_address": "192.168.201.7"} |
| {"subnet_id": "e9c7148e-04e5-4343-ac66-ac1651514b88", "ip_address": "169.254.192.1"} |
| {"subnet_id": "b4a81968-8a08-4054-985f-e4e1cf687945", "ip_address": "101.0.0.1"}     |
| {"subnet_id": "e9c7148e-04e5-4343-ac66-ac1651514b88", "ip_address": "169.254.192.2"} |
+--------------------------------------------------------------------------------------+

*** this time the snat interface is NOT created. 
-- we can fix this by toggling the internal subnet connection to the router:
 neutron router-interface-delete dvrha s1 
Removed interface from router dvrha.
neutron router-interface-add dvrha s1 
Added interface c3cb410d-08a0-46a5-84fb-ec7bbead4eb3 to router dvrha.
neutron router-port-list dvrha -c fixed_ips
+--------------------------------------------------------------------------------------+
| fixed_ips                                                                            |
+--------------------------------------------------------------------------------------+
| {"subnet_id": "0173f29d-746d-45a1-ad10-16dde529eb34", "ip_address": "192.168.201.7"} |
| {"subnet_id": "e9c7148e-04e5-4343-ac66-ac1651514b88", "ip_address": "169.254.192.1"} |
| {"subnet_id": "b4a81968-8a08-4054-985f-e4e1cf687945", "ip_address": "101.0.0.5"}     |
| {"subnet_id": "b4a81968-8a08-4054-985f-e4e1cf687945", "ip_address": "101.0.0.1"}     |
| {"subnet_id": "e9c7148e-04e5-4343-ac66-ac1651514b88", "ip_address": "169.254.192.2"} |
+--------------------------------------------------------------------------------------+

Now the snat port is created (101.0.0.5).
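A quick way to spot the broken state from listings like the ones above is to count router ports per subnet: with dvr, the internal subnet should carry both the qr- interface and the snat interface. A hedged sketch (the helper name and the truncated subnet ids are illustrative, not neutron code):

```python
from collections import Counter

def ports_per_subnet(fixed_ips):
    """Count how many router ports sit on each subnet."""
    return Counter(entry["subnet_id"] for entry in fixed_ips)

# Modeled on the second port listing above: only ONE port on the internal
# subnet (b4a81968...), i.e. the snat port is missing.
fixed_ips = [
    {"subnet_id": "0173f29d", "ip_address": "192.168.201.7"},
    {"subnet_id": "e9c7148e", "ip_address": "169.254.192.1"},
    {"subnet_id": "b4a81968", "ip_address": "101.0.0.1"},
    {"subnet_id": "e9c7148e", "ip_address": "169.254.192.2"},
]
counts = ports_per_subnet(fixed_ips)
assert counts["b4a81968"] == 1  # broken: expected 2 (qr- plus snat)
```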

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571113

Title:
  SNAT interface not created for dvrha in some scenarios

Status in neutron:
  New

Bug description:
  branch: master
  commit: 4e3a9c2b9c0ada6c2a471d24a20927aa8e8f5740

  
  Depending on the order in which neutron router commands are issued, the snat
  interface for a dvrha router on the dvr_snat agent might not get created.


  Look at the following interaction:


  -- create a couple of networks and subnets we will use (public and private)
  neutron net-create public --router:external
  neutron subnet-create public 192.168.201.0/24 --disable_dhcp
  neutron net-create n1
  neutron subnet-create n1 101.0.0.0/24 --name s1

  -- now we create and attach router 

[Yahoo-eng-team] [Bug 1570113] [NEW] compute nodes do not get dvrha router update from server

2016-04-13 Thread Adolfo Duarte
Public bug reported:

branch: master (April 13th 2016):  commit
2a305c563073a3066aac3f07aab3c895ec2cd2fb

Topology: 
1 controller (neutronserver)
3 network nodes (l3_agent in dvr_snat mode)
2 compute nodes (l3_agent in dvr mode)

behavior: when a dvr/ha router is created and a vm is instantiated on the
compute node, the l3_agent on the compute node does not instantiate a
local router (the qrouter- namespace along with all the interfaces).

expected: when a dvr/ha router is created and a vm is instantiated on a
compute node, the l3_agent running on that compute node should
instantiate a local router. The qrouter-<router-id> namespace along with
all the appropriate interfaces should be present locally on the compute
node, similar in behavior to a dvr-only router (without ha).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570113

Title:
  compute nodes do not get dvrha router update from server

Status in neutron:
  New

Bug description:
  branch: master (April 13th 2016):  commit
  2a305c563073a3066aac3f07aab3c895ec2cd2fb

  Topology: 
  1 controller (neutronserver)
  3 network nodes (l3_agent in dvr_snat mode)
  2 compute nodes (l3_agent in dvr mode)

  behavior: when a dvr/ha router is created and a vm is instantiated on the
  compute node, the l3_agent on the compute node does not instantiate a
  local router (the qrouter- namespace along with all the interfaces).

  expected: when a dvr/ha router is created and a vm is instantiated on
  a compute node, the l3_agent running on that compute node should
  instantiate a local router. The qrouter-<router-id> namespace along with
  all the appropriate interfaces should be present locally on the compute
  node, similar in behavior to a dvr-only router (without ha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570113/+subscriptions



[Yahoo-eng-team] [Bug 1568197] [NEW] Problems with neutron.common.constants import break HA and DVRHA functionality

2016-04-08 Thread Adolfo Duarte
Public bug reported:

Many files in the neutron project use this import:

from neutron.common import constants as l3_const

or some variation of it.

For some reason this import is not working correctly. In particular, the
following lines from the file neutron/common/constants.py seem to not take
effect (or the value of the constant gets reverted):

lines 24-31:
# TODO(anilvenkata) Below constants should be added to neutron-lib
DEVICE_OWNER_HA_REPLICATED_INT = (lib_constants.DEVICE_OWNER_NETWORK_PREFIX +
                                  "ha_router_replicated_interface")
ROUTER_INTERFACE_OWNERS = lib_constants.ROUTER_INTERFACE_OWNERS + \
    (DEVICE_OWNER_HA_REPLICATED_INT,)
ROUTER_INTERFACE_OWNERS_SNAT = lib_constants.ROUTER_INTERFACE_OWNERS_SNAT + \
    (DEVICE_OWNER_HA_REPLICATED_INT,)


The ROUTER_INTERFACE_OWNERS and ROUTER_INTERFACE_OWNERS_SNAT constants do
not seem to take the new values assigned to them: original-tuple +
DEVICE_OWNER_HA_REPLICATED_INT.

In files which use the import mentioned above ("from neutron.common import
constants" or a variation of it), the values of ROUTER_INTERFACE_OWNERS and
ROUTER_INTERFACE_OWNERS_SNAT do not contain the value
"ha_router_replicated_interface".

This is causing problems with HA router and DVRHA routers because the
files neutron/db/l3_dvr_db.py and neutron/db/l3_hamode_db.py make use of
the constants neutron.common.constants.ROUTER_INTERFACE_OWNERS and
neutron.common.constants.ROUTER_INTERFACE_OWNERS_SNAT to figure out what
ports belong to a router.

Since the "ha_router_replicated_interface" is not listed in either one
of those variables, the neutron server (q-svc) does not include any
ports which are owned by ha_router_replicated_interface as part of
router updates.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568197

Title:
  Problems with neutron.common.constants import break HA and DVRHA
  functionality

Status in neutron:
  New

Bug description:
  Many files in the neutron project use this import:

  from neutron.common import constants as l3_const

  or some variation of it.

  For some reason this import is not working correctly. In particular, the
  following lines from the file neutron/common/constants.py seem to not take
  effect (or the value of the constant gets reverted):

  lines 24-31:
  # TODO(anilvenkata) Below constants should be added to neutron-lib
  DEVICE_OWNER_HA_REPLICATED_INT = (lib_constants.DEVICE_OWNER_NETWORK_PREFIX +
                                    "ha_router_replicated_interface")
  ROUTER_INTERFACE_OWNERS = lib_constants.ROUTER_INTERFACE_OWNERS + \
      (DEVICE_OWNER_HA_REPLICATED_INT,)
  ROUTER_INTERFACE_OWNERS_SNAT = lib_constants.ROUTER_INTERFACE_OWNERS_SNAT + \
      (DEVICE_OWNER_HA_REPLICATED_INT,)


  The ROUTER_INTERFACE_OWNERS and ROUTER_INTERFACE_OWNERS_SNAT constants
  do not seem to take the new values assigned to them: original-tuple +
  DEVICE_OWNER_HA_REPLICATED_INT.

  In files which use the import mentioned above ("from neutron.common import
  constants" or a variation of it), the values of ROUTER_INTERFACE_OWNERS and
  ROUTER_INTERFACE_OWNERS_SNAT do not contain the value
  "ha_router_replicated_interface".

  This is causing problems with HA router and DVRHA routers because the
  files neutron/db/l3_dvr_db.py and neutron/db/l3_hamode_db.py make use
  of the constants neutron.common.constants.ROUTER_INTERFACE_OWNERS and
  neutron.common.constants.ROUTER_INTERFACE_OWNERS_SNAT to figure out
  what ports belong to a router.

  Since the "ha_router_replicated_interface" is not listed in either one
  of those variables, the neutron server (q-svc) does not include any
  ports which are owned by ha_router_replicated_interface as part of
  router updates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1568197/+subscriptions



[Yahoo-eng-team] [Bug 1552085] [NEW] Router Interfaces Are Instantiated in Compute Nodes Which do not need them.

2016-03-01 Thread Adolfo Duarte
Public bug reported:

Pre-Conditions: three node dvr topology created with devstack script
from master branch

step-by-step:
as " source openrc admin admin"  (adminstrator)

A dvr router is created.
Three subnets are attached to it.
Only one vm is instantiated and attached to only one subnet.

Expected Output:
The router is instantiated in the same compute node as the vm with only ONE
qr- interface attached to the br-int bridge.

Actual Output:
The router is instantiated on the same compute node as the vm and THREE qr-
interfaces are added to the br-int bridge,
one for each of the subnets attached to the router.

Problem:
Only one interface is needed on br-int: the one for the subnet the vm is
attached to.
The other two are not doing anything except consuming resources.

Version: Mitaka (master on March 1st, 2016)
 Ubuntu
Perceived Severity:
   probably low. The presence of the other two unused interfaces does not 
affect functionality
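The expected pruning can be sketched as a simple filter: only subnets that actually have a VM port on this host need a qr- interface. This is illustrative only; the function name and data shapes are hypothetical, not neutron's internal API:

```python
def needed_qr_subnets(router_subnets, vm_ports_on_host):
    """Return only the router subnets that a local vm is attached to."""
    local_subnets = {port["subnet_id"] for port in vm_ports_on_host}
    return [s for s in router_subnets if s in local_subnets]

router_subnets = ["subnet-a", "subnet-b", "subnet-c"]   # three attached subnets
vm_ports = [{"subnet_id": "subnet-a"}]                  # one vm on one subnet

# Only one qr- interface would be needed on br-int in this scenario.
assert needed_qr_subnets(router_subnets, vm_ports) == ["subnet-a"]
```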

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  Pre-Conditions: three node dvr topology created with devstack script
  from master branch
  
  step-by-step:
  as " source openrc admin admin"  (adminstrator)
  
  Dvr routers is created
- Three Subnets are attached to it. 
- Only one vm is instantiated  and attach to only one subnet. 
+ Three Subnets are attached to it.
+ Only one vm is instantiated  and attach to only one subnet.
  
- Expected Output: 
+ Expected Output:
  The router is instantiated in the same compute node as the vm with only ONE 
qr- interface attached to BR-INT bridge
  
  Actual Output:
  The router is instantiated on the same compute node as the vm and THREE qr- 
interfaces are added to the BR-INT  bridge.
- One for each of the subnets attached to the router. 
+ One for each of the subnets attached to the router.
  
  Problem:
- Only one interface is needed on the BR-INT, the one for the subnet the vm is 
attached to. 
- The other two are not doing anything except consuming resources. 
+ Only one interface is needed on the BR-INT, the one for the subnet the vm is 
attached to.
+ The other two are not doing anything except consuming resources.
  
  Version: Mitaka (master on March 1st, 2016)
-  Ubuntu
- Perceived Severity: 
-probably low. The presence of the other two unused interfaces does not 
affect functionality
+  Ubuntu
+ Perceived Severity:
+    probably low. The presence of the other two unused interfaces does not 
affect functionality

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552085

Title:
  Router Interfaces Are Instantiated in Compute Nodes Which do not need
  them.

Status in neutron:
  New

Bug description:
  Pre-Conditions: three node dvr topology created with devstack script
  from master branch

  step-by-step:
  as " source openrc admin admin"  (adminstrator)

  A dvr router is created.
  Three subnets are attached to it.
  Only one vm is instantiated and attached to only one subnet.

  Expected Output:
  The router is instantiated in the same compute node as the vm with only ONE
  qr- interface attached to the br-int bridge.

  Actual Output:
  The router is instantiated on the same compute node as the vm and THREE qr-
  interfaces are added to the br-int bridge,
  one for each of the subnets attached to the router.

  Problem:
  Only one interface is needed on br-int: the one for the subnet the vm is
  attached to.
  The other two are not doing anything except consuming resources.

  Version: Mitaka (master on March 1st, 2016)
   Ubuntu
  Perceived Severity:
     probably low. The presence of the other two unused interfaces does not 
affect functionality

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552085/+subscriptions



[Yahoo-eng-team] [Bug 1501515] [NEW] mysql exception DBReferenceError is raised when calling update_dvr_port_binding

2015-09-30 Thread Adolfo Duarte
Public bug reported:

Several tests upstream are failing with a DBReferenceError exception raised by
calls to ensure_dvr_port_binding
(File "/opt/stack/new/neutron/neutron/plugins/ml2/db.py", line 208).

This appears to be caused by accessing database resource without locking
it (race condition).

Here is an excerpt of the error: 
2015-09-29 18:39:00.822 7813 ERROR oslo_messaging.rpc.dispatcher 
DBReferenceError: (pymysql.err.IntegrityError) (1452, u'Cannot add or update a 
child row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`, 
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') [SQL: u'INSERT INTO ml2_dvr_port_bindings 
(port_id, host, router_id, vif_type, vif_details, vnic_type, profile, status) 
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: 
(u'851c0627-5133-43e2-b7a3-da9c29afd4ea', 
u'devstack-trusty-hpcloud-b2-5150973', u'973254dc-d1aa-4177-b952-2ac648bad4b5', 
'unbound', '', 'normal', '', 'DOWN')]

An example of the failure can be found here: 
http://logs.openstack.org/17/227517/3/check/gate-tempest-dsvm-neutron-dvr/fc1efa2/logs/screen-q-svc.txt.gz?level=ERROR
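A common mitigation for this kind of insert/delete race is to retry the DB operation a bounded number of times. A generic sketch only; the exception class below is a plain stand-in for oslo_db.exception.DBReferenceError, and this is not the actual neutron fix:

```python
import time

class DBReferenceError(Exception):
    """Stand-in for oslo_db.exception.DBReferenceError."""

def retry_on_reference_error(op, attempts=3, delay=0.0):
    """Run op(), retrying when the referenced row vanishes underneath us."""
    for attempt in range(attempts):
        try:
            return op()
        except DBReferenceError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Demo: the first call races and fails, the second succeeds.
calls = {"n": 0}
def flaky_insert():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DBReferenceError("port row deleted concurrently")
    return "bound"

assert retry_on_reference_error(flaky_insert) == "bound"
```

A real fix would more likely serialize the delete and the binding insert (locking or a retry decorator at the API layer), but the retry shape above captures the idea.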

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501515

Title:
  mysql exception DBReferenceError is raised when calling
  update_dvr_port_binding

Status in neutron:
  New

Bug description:
  Several tests upstream are failing with a DBReferenceError exception raised
  by calls to ensure_dvr_port_binding
  (File "/opt/stack/new/neutron/neutron/plugins/ml2/db.py", line 208).

  This appears to be caused by accessing database resource without
  locking it (race condition).

  Here is an excerpt of the error: 
  2015-09-29 18:39:00.822 7813 ERROR oslo_messaging.rpc.dispatcher 
DBReferenceError: (pymysql.err.IntegrityError) (1452, u'Cannot add or update a 
child row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`, 
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES 
`ports` (`id`) ON DELETE CASCADE)') [SQL: u'INSERT INTO ml2_dvr_port_bindings 
(port_id, host, router_id, vif_type, vif_details, vnic_type, profile, status) 
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: 
(u'851c0627-5133-43e2-b7a3-da9c29afd4ea', 
u'devstack-trusty-hpcloud-b2-5150973', u'973254dc-d1aa-4177-b952-2ac648bad4b5', 
'unbound', '', 'normal', '', 'DOWN')]

  An example of the failure can be found here: 
  
http://logs.openstack.org/17/227517/3/check/gate-tempest-dsvm-neutron-dvr/fc1efa2/logs/screen-q-svc.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501515/+subscriptions



[Yahoo-eng-team] [Bug 1491961] [NEW] On systems with no ports in br-ex, ha network is dropped when external gateway is set for ha router

2015-09-03 Thread Adolfo Duarte
Public bug reported:

When a system is brought up using vxlan with no interfaces added to the
br-ex bridge, and an ha router is created, the tunnel between the network
nodes is created; however, when an external gateway is set for the router,
the vxlan tunnels between the network nodes are dropped.


how to reproduce:
Bring up an openstack system using three nodes: one "controller" running q-vpn,
q-agt, q-svc, and the other necessary services (keystone, mysql, etc...); one
network node running q-l3 and q-agt; and one compute node running the usual
nova stuff (n-cpu, etc..) plus q-l3 and q-agt.

The q-l3 agent on the network node and controller should be set to
agent_mode=dvr_snat; the l3 agent on the compute node should be set to
agent_mode=dvr.

Delete all resources (routers, networks, subnets, etc...) so the system is
completely clean.
Run the following commands to create the necessary networks.
Make sure that there are no interfaces in the br-ex bridges of any of the
nodes:
(ovs-vsctl show should show the br-ex bridge with no interfaces)

neutron net-create public --router:external
neutron subnet-create public 123.0.0.0/24 --disable-dhcp

#create the  ha router: 
neutron router-create harouter --ha=True --distributed=False

#check that the ha network has been created: neutron net-list
#check that the tunnels between the controller and network nodes are up: 
execute "sudo ovs-vsctl show " on the nodes. 
# since the controller and network node have an instance of the ha router 
running they will connect to each other to provide
# the ha network via tunnels. 
# it will look something like: 
# Bridge br-tun
#     fail_mode: secure
#     Port patch-int
#         Interface patch-int
#             type: patch
#             options: {peer=patch-tun}
#     Port br-tun
#         Interface br-tun
#             type: internal
#     Port "vxlan-65b1"
#         Interface "vxlan-65b1"
#             type: vxlan
#             options: {df_default="true", in_key=flow, local_ip="101.0.0.178", out_key=flow, remote_ip="101.0.0.177"}
# Bridge br-ex
# ...

# next set the external gateway for the router:

neutron router-gateway-set harouter public

# at this point the vxlan tunnels above will drop (disappear from the ovs-vsctl 
show output)
#
# remove the external gateway from the router:
neutron router-gateway-clear harouter


# it might be necessary to execute the "neutron router-gateway-set harouter
# public" / "neutron router-gateway-clear harouter" pair
# a few times to see the problem.


More testing suggests that adding an interface to br-ex (by hand) prevents the 
problem from happening. 

In other words, if before you do anything you do something like "sudo
ovs-vsctl add-port br-ex dummy-interface" on each of the nodes, the
problem will not manifest.

This would suggest that the problem is only present when br-ex has no
interfaces in it to begin with.
That case is very unlikely, since most of the time br-ex will have an
interface attached to it (the one connecting to the outside world, the
external interface/NIC).
However, it could present problems for ci testing if no interface is added to
br-ex.
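For ci setups, the workaround above can be automated. A small sketch that builds the ovs-vsctl command (the dummy port name "dummy0" and the helper itself are arbitrary choices; --may-exist makes the call idempotent):

```python
def br_ex_workaround_cmd(bridge="br-ex", port="dummy0"):
    """Build the ovs-vsctl command that pre-populates br-ex with a port.

    The second half of the command makes the port an internal interface so
    no physical NIC is required.
    """
    return ["sudo", "ovs-vsctl", "--may-exist", "add-port", bridge, port,
            "--", "set", "interface", port, "type=internal"]

cmd = br_ex_workaround_cmd()
assert cmd[:4] == ["sudo", "ovs-vsctl", "--may-exist", "add-port"]
```

The resulting list can be handed to subprocess.check_call() on each node before any routers are created.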

 
This was last tested on the master branch:
commit 43710925db0523dfbf0cdabbf2352db4304c6163
Merge: 397fc4d 2759362
Author: Jenkins 
Date:   Wed Sep 2 11:20:18 2015 +

Merge "Remove duplicated codes in two test cases"

attaching local.conf files which can be used to setup system. change ip
addresses respectively

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ha neutron routing

** Attachment added: "compute node local.conf"
   
https://bugs.launchpad.net/bugs/1491961/+attachment/4456949/+files/compute_node_only.conf

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491961

Title:
  On systems with no ports in br-ex, ha network is dropped when external
  gateway is set for ha router

Status in neutron:
  New

Bug description:
  When a system is brought up using vxlan with no interfaces added to the
  br-ex bridge, and an ha router is created, the tunnel between the network
  nodes is created; however, when an external gateway is set for the router,
  the vxlan tunnels between the network nodes are dropped.

  
  how to reproduce:
  Bring up an openstack system using three nodes: one "controller" running
  q-vpn, q-agt, q-svc, and the other necessary services (keystone, mysql,
  etc...); one network node running q-l3 and q-agt; and one compute node
  running the usual nova stuff (n-cpu, etc..) plus q-l3 and q-agt.

  The q-l3 agent on the network node and controller should be set to
  agent_mode=dvr_snat; the l3 agent on the compute node should be set to
  agent_mode=dvr.

  Delete all resources (routers, networks, subnets, etc...) so the system is
  completely clean.
  Add the following commands to create the 

[Yahoo-eng-team] [Bug 1489576] [NEW] key error in del l3_agent.pd.routers[router['id']]['subnets']

2015-08-27 Thread Adolfo Duarte
Public bug reported:

currently the following error is seen in the q-l3 or q-vpn agents when
deleting a router (steps to reproduce below):

Exit code: 0
Stdin:
Stdout: lo
Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
2015-08-27 19:08:38.007 12027 DEBUG neutron.agent.linux.utils [-]
Command: ['ip', 'netns', 'delete', 
u'qrouter-f53025cc-5159-480e-a7f3-19122aa330a3']
Exit code: 0
Stdin:
Stdout:
Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
2015-08-27 19:08:38.009 12027 DEBUG neutron.callbacks.manager [-] Notify 
callbacks for router, after_delete _notify_loop 
/opt/stack/neutron/neutron/callbacks/manager.py:133
2015-08-27 19:08:38.009 12027 DEBUG neutron.callbacks.manager [-] Calling 
callback neutron.agent.linux.pd.remove_router _notify_loop 
/opt/stack/neutron/neutron/callbacks/manager.py:140
2015-08-27 19:08:38.010 12027 DEBUG oslo_concurrency.lockutils [-] Lock 
l3-agent-pd acquired by remove_router :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:251
2015-08-27 19:08:38.010 12027 DEBUG oslo_concurrency.lockutils [-] Lock 
l3-agent-pd released by remove_router :: held 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:262
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager [-] Error during notification for neutron.agent.linux.pd.remove_router router, after_delete
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager Traceback (most recent call last):
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     callback(resource, event, trigger, **kwargs)
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 252, in inner
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     return f(*args, **kwargs)
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/opt/stack/neutron/neutron/agent/linux/pd.py", line 307, in remove_router
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     del l3_agent.pd.routers[router['id']]['subnets']
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager KeyError: 'id'
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager
2015-08-27 19:08:38.015 12027 DEBUG neutron.callbacks.manager [-] Calling 
callback neutron_vpnaas.services.vpn.vpn_service.router_removed_actions 
_notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:140
2015-08-27 19:08:47.697 12027 DEBUG oslo_service.loopingcall [-] Fixed interval 
looping call 'neutron_vpnaas.services.vpn.agent.VPNAgent._report_state' 
sleeping for 30.00 seconds _run_loop 
/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121
2015-08-27 19:08:47.756 12027 DEBUG oslo_service.loopingcall [-] Fixed interval 
looping call 'neutron.service.Service.report_state' sleeping for 30.00 seconds 
_run_loop /usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121


STEPS TO REPRODUCE:
Create a stack with a neutron server (q-vpn) node, a network server (q-l3),
and a compute node.
Then execute the following:

neutron net-create public --router:external
neutron subnet-create public 123.0.0.0/24 --disable-dhcp
for id in $(neutron security-group-list | grep -v " id " | grep -v \-\- | awk '{print $2}'); do neutron security-group-delete $id; done
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 1 --port-range-max 65535 --direction ingress default
neutron security-group-rule-create --protocol udp --port-range-min 1 --port-range-max 65535 --direction ingress default
neutron net-create private
neutron subnet-create private 103.0.0.0/24 --name private


neutron router-create r1 --distributed=False --ha=False
neutron router-gateway-set r1 public 

then execute: 
neutron router-delete r1

you will see the error above.
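The failing line assumes the router payload always carries an 'id' key. A defensive sketch of the cleanup step (illustrative only; the actual fix in neutron may differ, e.g. the callback may instead be handed a router_id directly):

```python
def remove_router_entry(routers, router):
    """Drop the prefix-delegation state for a router, tolerating payloads
    that lack an 'id' key and routers that were never registered."""
    router_id = router.get('id') if isinstance(router, dict) else None
    entry = routers.get(router_id)
    if entry is not None:
        entry.pop('subnets', None)

routers = {'r1': {'subnets': ['s1']}}
remove_router_entry(routers, {'id': 'r1'})
assert 'subnets' not in routers['r1']
remove_router_entry(routers, {})   # payload without 'id': no KeyError raised
```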

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489576

Title:
  key error in del l3_agent.pd.routers[router['id']]['subnets']

Status in neutron:
  New

Bug description:
  currently the following error is seen in the q-l3 or q-vpn agents when
  deleting a router (steps to reproduce below):

  Exit code: 0
  Stdin:
  Stdout: lo
  Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
  2015-08-27 19:08:38.007 12027 DEBUG neutron.agent.linux.utils [-]
  Command: ['ip', 'netns', 'delete', 
u'qrouter-f53025cc-5159-480e-a7f3-19122aa330a3']
  Exit code: 0
  Stdin:
  Stdout:
  Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
  2015-08-27 19:08:38.009 12027 DEBUG 

[Yahoo-eng-team] [Bug 1461647] [NEW] HA - creating an ha router fails with internal sever error

2015-06-03 Thread Adolfo Duarte
Public bug reported:

When a user attempts to create an ha router, the creation fails with an
internal server error message.
After the first attempt, the command neutron net-list starts to fail with an
internal error as well.

The cause seems to be the creation of the ha_network.
After the first attempt to create the ha router, an ha_network gets created;
then any access to that network (by net-list or attaching the router to it)
causes an error which looks like the trace below.

The problem is that the ha network does not have a PortSecurityBinding
associated with it, because it is not created with the appropriate
port_security_enabled value.
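A sketch of the shape of the fix implied above: create the ha network with the port-security attribute set explicitly so the binding exists. The request body and helper below are illustrative, not the exact neutron code:

```python
def ha_network_body(tenant_id, port_security_enabled=False):
    """Build a network-create request that carries the port-security flag
    explicitly instead of leaving it unset."""
    return {'network': {'name': 'HA network tenant %s' % tenant_id,
                        'admin_state_up': True,
                        'shared': False,
                        'port_security_enabled': port_security_enabled}}

body = ha_network_body('abc123')
assert 'port_security_enabled' in body['network']
```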


2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 461, in create
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     obj = obj_creator(request.context, **kwargs)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 376, in create_router
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     self.delete_router(context, router_dict['id'])
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 372, in create_router
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     self._create_ha_interfaces(context, router_db, ha_network)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 328, in _create_ha_interfaces
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     l3_port_check=False)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 322, in _create_ha_interfaces
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     router.tenant_id)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 303, in add_ha_port
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     'name': constants.HA_PORT_NAME % tenant_id}})
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1002, in create_port
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     result, mech_context = self._create_port_db(context, port)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 984, in _create_port_db
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     network = self.get_network(context, result['network_id'])
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 669, in get_network
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     result = super(Ml2Plugin, self).get_network(context, id, None)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1024, in get_network
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     return self._make_network_dict(network, fields)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 874, in _make_network_dict
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     attributes.NETWORKS, res, network)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/common_db_mixin.py", line 163, in _apply_dict_extend_functions
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     func(*args)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 488, in _ml2_md_extend_network_dict
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource     self.extension_manager.extend_network_dict(session, netdb, result)
2015-06-02 21:50:04.022 56731 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 804, in

[Yahoo-eng-team] [Bug 1450982] [NEW] DVR: Router Gateway Port cannot be deleted in certain circumstances

2015-05-01 Thread Adolfo Duarte
  | device_owner      | network:floatingip_agent_gateway                                                   |
  | extra_dhcp_opts   |                                                                                    |
  | fixed_ips         | {"subnet_id": "1f4059ee-2b6b-48d0-9687-b7e0fead3cae", "ip_address": "101.0.0.103"} |
  | id                | 25337691-d558-46d8-b2a2-96cbdef198db                                               |
  | mac_address       | fa:16:3e:06:1c:0e                                                                  |
  | name              |                                                                                    |
  | network_id        | b376c1a7-ecfa-4892-88dc-509145e7fbc7                                               |
  | security_groups   |                                                                                    |
  | status            | DOWN                                                                               |
  | tenant_id         |                                                                                    |
  +-------------------+------------------------------------------------------------------------------------+


adolfo@devstack:~/dvr$ neutron net-list
+--------------------------------------+--------------------+---------------------------------------------------+
| id                                   | name               | subnets                                           |
+--------------------------------------+--------------------+---------------------------------------------------+
| b376c1a7-ecfa-4892-88dc-509145e7fbc7 | wjs_public_network | 1f4059ee-2b6b-48d0-9687-b7e0fead3cae 101.0.0.0/24 |
+--------------------------------------+--------------------+---------------------------------------------------+
adolfo@devstack:~/dvr$

** Affects: neutron
 Importance: Undecided
 Assignee: Adolfo Duarte (adolfo-duarte)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Adolfo Duarte (adolfo-duarte)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450982

Title:
  DVR: Router Gateway Port cannot be deleted in certain circumstances

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Tests have shown that the following sequence of events on a multinode setup
causes the network:floatingip_agent_gateway port to be left hanging around.
It is not possible to delete this port from the CLI, even with admin powers.
Restacking seems to be the only solution.
  The problem is further complicated because any public network which has the
left-over port as one of its members may not be deleted.
  Here is the list of steps to reproduce. All must be done with admin power
under the admin tenant.


  #+ Test Steps: Get ip address of all nodes
  #+ Test Steps:
  #+ Step: User, Tenant, Token, and Image Verification
  #+ Step: Add security Rules
  #+ Step: Router Creations(DVR)
  #+ Step: Networks, and Subnet Attachment
  #+ Step: spin up vms
  #+ Step: Obtain necessary vm information
  #+ Step: Create external Network and Subnet
  #+ Step: Attach external network to router
  #+ Step: Associate FIPs to Vm - 2 RESTfull
  #+ Step: Verify floating ip pluming
  #+ Step: Verify connectivity to vms from outside world
  #+ Step: Verify connectivity to vms from outside world

  04-25-2015 15:16:38 : 
  04-25-2015 15:16:38 :  Removing all resources from system
  04-25-2015 15:16:38 : 
  04-25-2015 15:16:38 : ***Getting User Token***
  04-25-2015 15:16:38 : ***User Token Received***
  04-25-2015 15:16:38 : 
  04-25-2015 15:16:38 : -- Deleting VMs
  04-25-2015 15:16:39 : -- Done deleting VMs
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Deleting Security Groups
  04-25-2015 15:16:39 : 'default' Security Group deleted
  04-25-2015 15:16:39 : -- Done deleting Security Groups
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : --- Deleting Floating IPs
  04-25-2015 15:16:39 : --Done deleting Floating IPs
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Clearing all routes from routers
  04-25-2015 15:16:39 : -- Done Clearing all routes on routers
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Detaching subnets from routers
  04-25-2015 15:16:39 : -- Done detaching subnets from routers
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Deleting Ports
  HTTP Error 409: Conflict
  04-25-2015 15:16:39 : Error deleting 25337691-d558-46d8-b2a2-96cbdef198db
  04-25-2015 15:16:39 : -- Done deleting ports
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Deleting subnets
  HTTP Error 409: Conflict
  04-25-2015 15:16:39 : Error deleting 1f4059ee-2b6b-48d0-9687-b7e0fead3cae
  04-25-2015 15:16:39 : --Done deleting subnets
  04-25-2015 15:16:39 : 
  04-25-2015 15:16:39 : -- Deleting routers
  04-25-2015 15:16:39 : --Done
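The "HTTP Error 409" lines above come from the teardown script's delete loop: the stuck agent-gateway port keeps conflicting, so the script logs the failure and moves on. A hypothetical sketch (not the actual test script) of that pattern:

```python
# Hypothetical sketch of the teardown loop behind the log above. Each
# delete may return HTTP 409 Conflict (modeled here as an exception);
# the script records the failure and continues rather than aborting,
# which is why the log shows "Error deleting ..." followed by "Done".

class ConflictError(Exception):
    """Stands in for an HTTP 409 Conflict from the Neutron API."""

def delete_all(ids, delete_fn):
    failures = []
    for res_id in ids:
        try:
            delete_fn(res_id)
        except ConflictError:
            print('Error deleting %s' % res_id)
            failures.append(res_id)
    return failures

# The floatingip_agent_gateway port from the table above always
# conflicts in this bug, even for an admin:
stuck = {'25337691-d558-46d8-b2a2-96cbdef198db'}

def fake_delete(port_id):
    if port_id in stuck:
        raise ConflictError()

failed = delete_all(
    ['some-other-port', '25337691-d558-46d8-b2a2-96cbdef198db'],
    fake_delete)
```

Collecting failures instead of aborting lets the rest of the cleanup (subnets, routers) still run, matching the log, but it also means the stuck port and its network silently survive every teardown.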