[Yahoo-eng-team] [Bug 1625334] [NEW] Update port with subnet_id in fixed_ips allocates a new IP when existing one could be used.

2016-09-19 Thread Carl Baldwin
Public bug reported:

This issue has been seen twice, causing the related bug [1]. What
happens is that the DHCP agent updates the port using the subnet_id but
not the actual ip_address that the port already has, so the server
allocates a new IP address and throws out the old one.

What it should do is recognize that there is already an IP address on
the port that satisfies the request and avoid the churn.

A previous attempt was made [2] to address this bug but was reverted
because it had a side effect [3].  A fix is needed that addresses this
issue without the side effect.

[1] https://bugs.launchpad.net/neutron/+bug/1622616/
[2] https://review.openstack.org/#/c/369051/
[3] https://bugs.launchpad.net/neutron/+bug/1623800
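
The desired behavior can be sketched roughly like this (a hypothetical
helper, not the actual Neutron fix; names are illustrative): when an
update requests fixed_ips by subnet_id alone, keep any address the port
already holds on that subnet instead of allocating a new one.

```python
def reconcile_fixed_ips(current_ips, requested_ips):
    """Prefer a port's existing address when only subnet_id is requested.

    current_ips:   list of {'subnet_id': ..., 'ip_address': ...} dicts
    requested_ips: list of dicts that may carry only 'subnet_id'
    """
    by_subnet = {ip['subnet_id']: ip for ip in current_ips}
    result = []
    for req in requested_ips:
        if 'ip_address' not in req and req.get('subnet_id') in by_subnet:
            # An address on this subnet already satisfies the request:
            # keep it and avoid the allocate/deallocate churn.
            result.append(by_subnet[req['subnet_id']])
        else:
            result.append(req)
    return result
```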

** Affects: neutron
 Importance: High
 Assignee: Carl Baldwin (carl-baldwin)
 Status: Confirmed


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625334

Title:
  Update port with subnet_id in fixed_ips allocates a new IP when
  existing one could be used.

Status in neutron:
  Confirmed

Bug description:
  This issue has been seen twice, causing the related bug [1]. What
  happens is that the DHCP agent updates the port using the subnet_id
  but not the actual ip_address that the port already has, so the server
  allocates a new IP address and throws out the old one.

  What it should do is recognize that there is already an IP address on
  the port that satisfies the request and avoid the churn.

  A previous attempt was made [2] to address this bug but was reverted
  because it had a side effect [3].  A fix is needed that addresses this
  issue without the side effect.

  [1] https://bugs.launchpad.net/neutron/+bug/1622616/
  [2] https://review.openstack.org/#/c/369051/
  [3] https://bugs.launchpad.net/neutron/+bug/1623800

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564776] Re: DVR l3 agent should check for snat namespace existence before adding or deleting anything from the namespace

2016-09-28 Thread Carl Baldwin
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564776

Title:
  DVR l3 agent should check for snat namespace existence before adding
  or deleting anything from the namespace

Status in neutron:
  Invalid

Bug description:
  Check for snat_namespace existence in the node before any operation
  in the namespace.

  Today we check the self.snatnamespace which may or may not reflect
  the exact state of the system.

  If the snat_namespace is accidentally deleted and if we try to
  remove the gateway from the router, the agent throws in a bunch of
  error messages and the agent goes in loop constantly spewing error
  messages.

  Here is the link to the error message.

  http://paste.openstack.org/show/492700/

  This can be easily reproduced.

  1. Create a network
  2. Create a subnet
  3. Create a router ( dvr)
  4. Attach the subnet to the router.
  5. Configure default gateway to the router.
  6. Now verify the namespaces in the 'dvr_snat' node.
  7. You should see
  a. snat_namespace
  b. router_namespace
  c. dhcp namespace.
  8. Now delete the snat_namespace.
  9. Try to remove the gateway from the router.
  10. Watch the L3 agent logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564776/+subscriptions


[Yahoo-eng-team] [Bug 1580780] Re: Associate subnets to segments through subnet API

2016-10-14 Thread Carl Baldwin
I just realized I never linked the patch to this bug.

https://review.openstack.org/#/c/374434/

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580780

Title:
  Associate subnets to segments through subnet API

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/288774
  Dear bug triager. This bug was created since a commit was marked with
  DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report
  the documentation bugs against it. If this needs changing, the
  docimpact-group option needs to be added for the project. You can ask
  the OpenStack infra team (#openstack-infra on freenode) for help if
  you need to.

  commit f494de47fcef7776f7d29d5ceb2cc4db96bd1efd
  Author: Carl Baldwin 
  Date:   Tue Feb 9 16:39:01 2016 -0700

  Associate subnets to segments through subnet API
  
  Change-Id: Ia1084a94ac659332c126eb9d4787b04a89a4ba90
  DocImpact: Need to add segment_id to API docs
  Partially-Implements: blueprint routed-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580780/+subscriptions


[Yahoo-eng-team] [Bug 1588593] Re: If the Neutron IPAM driver is set, using the 'net-delete' command to delete a network created when ipam_driver was not set appears to cause an infinite loop.

2016-06-20 Thread Carl Baldwin
We never really provided an official migration.  Some vendors, like
Infoblox, have an unofficial one in order to facilitate migrating to
their drivers.  The reason for this is that the internal driver doesn't
provide any advantage over the non-pluggable implementation.  It is
effectively equivalent.

We are planning an unconditional migration in Newton so that the built-in
implementation will be removed entirely.  I would sit tight.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588593

Title:
  If the Neutron IPAM driver is set, using the 'net-delete' command to
  delete a network created when ipam_driver was not set appears to
  cause an infinite loop.

Status in neutron:
  Won't Fix

Bug description:
  In Mitaka,

  When ipam_driver was not set, I created a network with a subnet.
  Then, after switching to the reference implementation of the Neutron
  IPAM driver by setting ipam_driver = 'internal', using the
  'net-delete' command to delete the network that was created while
  ipam_driver was unset appears to cause an infinite loop.

  
  1) Specifying ‘ipam_driver = ’ in the neutron.conf file, created a
  network with a subnet
  [root@localhost devstack]# neutron net-create net_vlan_01 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 2
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2016-06-03T02:42:50                  |
  | description               |                                      |
  | id                        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5 |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | mtu                       | 1500                                 |
  | name                      | net_vlan_01                          |
  | port_security_enabled     | True                                 |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | physnet1                             |
  | provider:segmentation_id  | 2                                    |
  | qos_policy_id             |                                      |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tags                      |                                      |
  | tenant_id                 | 69fa49e368d340679ab3d05de3426bfa     |
  | updated_at                | 2016-06-03T02:42:50                  |
  +---------------------------+--------------------------------------+
  [root@localhost devstack]# neutron subnet-create net_vlan_01 --name subnet_vlan_01 101.1.1.0/24
  Created a new subnet:
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | {"start": "101.1.1.2", "end": "101.1.1.254"} |
  | cidr              | 101.1.1.0/24                                 |
  | created_at        | 2016-06-03T02:42:56                          |
  | description       |                                              |
  | dns_nameservers   |                                              |
  | enable_dhcp       | True                                         |
  | gateway_ip        | 101.1.1.1                                    |
  | host_routes       |                                              |
  | id                | 1c60dbd7-ae1e-4d7c-a767-ec3106cc62ad         |
  | ip_version        | 4                                            |
  | ipv6_address_mode |                                              |
  | ipv6_ra_mode      |                                              |
  | name              | subnet_vlan_01                               |
  | network_id        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5         |
  | subnetpool_id     |                                              |
  | tenant_id         | 69fa49e368d340679ab3d05de3426bfa             |
  | updated_at        | 2016-06-03T02:42:56                          |
  +-------------------+----------------------------------------------+
  [root@localhost devstack]# neutron net-list
  
+

[Yahoo-eng-team] [Bug 1597561] [NEW] L3 agent allows multiple gateway ports in fip namespace

2016-06-29 Thread Carl Baldwin
Public bug reported:

At the end of deleting a GW port for a router, l3_dvr_db.py will look
for any more router gw ports on the external network.  If there are
none, then it calls delete_floatingip_agent_gateway_port [1].  This
should fan out to all l3 agents on all compute nodes [2].  Each agent
should then delete the port [3].

In some cases, the fip namespace and the gateway port are not deleted.
I don't know where things are going wrong; this seems pretty
straightforward.  Do some agents miss the fanout?  We know at least
some of them are getting it, so the notification is definitely being
sent.

When I checked, the port had been deleted from the database.  The fact
that a new one is created supports this, because if one existed in the
DB already it would be returned.


[1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
[2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
[3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73
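
Independent of finding the root cause, the agent could be defensive
when plugging a new agent gateway port: treat any other fg- device
already present in the fip namespace as stale.  A sketch under assumed
names (the real code manages devices through its driver classes):

```python
def stale_gateway_devices(devices_in_namespace, expected_device):
    """Return fg- devices other than the expected one.

    There should be at most one agent gateway port device per fip
    namespace; anything else is left over from a missed delete fanout.
    """
    return [dev for dev in devices_in_namespace
            if dev.startswith('fg-') and dev != expected_device]
```

The agent would unplug each returned device before (or instead of)
plugging a new one.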

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: l3-dvr-backlog l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Tags added: l3-dvr-backlog l3-ipam-dhcp

** Description changed:

- At the end of deleting a GW port for a router, l3_dvr_db.py will look for any
- more router gw ports on the external network.  If there are none, then it 
calls
- delete_floatingip_agent_gateway_port [1].  This should fan out to all l3 
agents
- on all compute nodes [2].  Each agent should then delete the port [3].
+ At the end of deleting a GW port for a router, l3_dvr_db.py will look
+ for any more router gw ports on the external network.  If there are
+ none, then it calls delete_floatingip_agent_gateway_port [1].  This
+ should fan out to all l3 agents on all compute nodes [2].  Each agent
+ should then delete the port [3].
  
- In some cases, the fip namespace and the gateway port are not deleted.  I 
don't
- know where things are going wrong.  This seems pretty straight-forward.  Do
- some agents miss the fanout?  We know at least some of them are getting the
- fanout.  So, it is definitely being sent.
+ In some cases, the fip namespace and the gateway port are not deleted.
+ I don't know where things are going wrong.  This seems pretty
+ straight-forward.  Do some agents miss the fanout?  We know at least
+ some of them are getting the fanout.  So, it is definitely being sent.
  
- When I checked, the port had been deleted from the database.  The fact that a
- new one is created supports this because if one existed in the DB already then
- it would be returned.
+ When I checked, the port had been deleted from the database.  The fact
+ that a new one is created supports this because if one existed in the DB
+ already then it would be returned.
+ 
  
  [1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
  [2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
  [3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597561

Title:
  L3 agent allows multiple gateway ports in fip namespace

Status in neutron:
  Confirmed

Bug description:
  At the end of deleting a GW port for a router, l3_dvr_db.py will look
  for any more router gw ports on the external network.  If there are
  none, then it calls delete_floatingip_agent_gateway_port [1].  This
  should fan out to all l3 agents on all compute nodes [2].  Each agent
  should then delete the port [3].

  In some cases, the fip namespace and the gateway port are not deleted.
  I don't know where things are going wrong; this seems pretty
  straightforward.  Do some agents miss the fanout?  We know at least
  some of them are getting it, so the notification is definitely being
  sent.

  When I checked, the port had been deleted from the database.  The fact
  that a new one is created supports this, because if one existed in
  the DB already it would be returned.

  
  [1] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/db/l3_dvr_db.py#L179
  [2] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
  [3] 
https://github.com/openstack/neutron/blob/d3cd20151a67289f023875de682a6d3c4ccee645/neutron/agent/l3/dvr.py#L73

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597561/+subscriptions


[Yahoo-eng-team] [Bug 1463784] Re: [RFE] Networking L2 Gateway does not work with DVR

2016-07-01 Thread Carl Baldwin
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags removed: rfe
** Tags added: rfe-approved

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463784

Title:
  [RFE] Networking L2 Gateway does not work with DVR

Status in networking-l2gw:
  In Progress
Status in neutron:
  In Progress

Bug description:
  Currently, networking L2 gateway solution cannot be used with a DVR.
  If a virtual machine is in one subnet and the bare metal server is in
  another, then it makes sense to allow DVR configured on the compute
  node to route the traffic from the VM to the bare metal server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1463784/+subscriptions


[Yahoo-eng-team] [Bug 1603162] [NEW] IP deallocation failed on external system with pluggable IPAM

2016-07-14 Thread Carl Baldwin
Public bug reported:

This bug is visible when pluggable IPAM is active.  It can be seen with
this patch [1].  It does not cause gate failures but it is still
something that should be understood.  This logstash query [2] seems to
find where they occur.  It is helpful to look at the DEBUG level logging
around the time of the error.  For example see this paste [3].

It seems that the session gets broken with an exception that causes a
rollback.  Then, the IPAM rollback attempts to use the same session for
rollback which fails.  Should the reference pluggable IPAM driver be
using a different session?  Or, should it call rollback?

[1] https://review.openstack.org/#/c/181023
[2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22IP%20deallocation%20failed%20on%20external%20system%5C%22
[3] http://paste.openstack.org/show/532891/
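
A toy model of the session question above (all names hypothetical; the
real code uses SQLAlchemy sessions): once the original session has been
invalidated by the exception, compensating IPAM calls have to go
through a fresh session.

```python
class BrokenSessionError(Exception):
    pass

class Session:
    """Stand-in for a DB session that cannot be reused after rollback."""
    def __init__(self, broken=False):
        self.broken = broken
        self.calls = []

    def execute(self, op):
        if self.broken:
            raise BrokenSessionError("session rolled back; cannot reuse")
        self.calls.append(op)

def ipam_rollback(failed_session, session_factory, deallocations):
    # Reusing failed_session would just raise again, which is the
    # failure mode seen in the paste [3]; open a new session instead.
    session = session_factory()
    for subnet_id, ip in deallocations:
        session.execute(('deallocate', subnet_id, ip))
    return session
```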

** Affects: neutron
 Importance: High
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => High

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603162

Title:
  IP deallocation failed on external system with pluggable IPAM

Status in neutron:
  New

Bug description:
  This bug is visible when pluggable IPAM is active.  It can be seen
  with this patch [1].  It does not cause gate failures but it is still
  something that should be understood.  This logstash query [2] seems to
  find where they occur.  It is helpful to look at the DEBUG level
  logging around the time of the error.  For example see this paste [3].

  It seems that the session gets broken with an exception that causes a
  rollback.  Then, the IPAM rollback attempts to use the same session
  for rollback which fails.  Should the reference pluggable IPAM driver
  be using a different session?  Or, should it call rollback?

  [1] https://review.openstack.org/#/c/181023
  [2] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22IP%20deallocation%20failed%20on%20external%20system%5C%22
  [3] http://paste.openstack.org/show/532891/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603162/+subscriptions


[Yahoo-eng-team] [Bug 1609540] [NEW] Deleting csnat port fails due to no fixed ips

2016-08-03 Thread Carl Baldwin
Public bug reported:

This code [1] emits an "IndexError: list index out of range" exception
with a trace like this [2]. Essentially, there are no fixed IPs on the
port. It is not yet clear how it got into this state. This failure is
linked to various tempest failures in gate-tempest-dsvm-neutron-dvr and
gate-tempest-dsvm-neutron-dvr-multinode-full. The following tests have
failed due to this; this logstash query [3] finds them.

tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac

[1] 
https://github.com/openstack/neutron/blob/5e8b8274fe94ca9eafbfe951134326df3a60373d/neutron/db/l3_dvr_db.py#L906
[2] http://paste.openstack.org/show/547840/
[3] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22if%20p%5B'fixed_ips'%5D%5B0%5D%5B'subnet_id'%5D%20%3D%3D%20subnet_id%5C%22
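
The immediate symptom could be avoided with a guard before indexing
fixed_ips (a sketch with a hypothetical helper name; the underlying
cause of the empty list still needs explaining):

```python
def csnat_port_matches(port, subnet_id):
    """True if the port's first fixed IP is on the given subnet.

    Tolerates ports with no fixed IPs instead of raising IndexError.
    """
    fixed_ips = port.get('fixed_ips') or []
    return bool(fixed_ips) and fixed_ips[0]['subnet_id'] == subnet_id
```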

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure ipv6 l3-dvr-backlog l3-ipam-dhcp

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure ipv6 l3-dvr-backlog l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609540

Title:
  Deleting csnat port fails due to no fixed ips

Status in neutron:
  Confirmed

Bug description:
  This code [1] emits an "IndexError: list index out of range"
  exception with a trace like this [2]. Essentially, there are no fixed
  IPs on the port. It is not yet clear how it got into this state.
  This failure is linked to various tempest failures in
  gate-tempest-dsvm-neutron-dvr and
  gate-tempest-dsvm-neutron-dvr-multinode-full. The following tests
  have failed due to this; this logstash query [3] finds them.

  
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless
  
tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac

  [1] 
https://github.com/openstack/neutron/blob/5e8b8274fe94ca9eafbfe951134326df3a60373d/neutron/db/l3_dvr_db.py#L906
  [2] http://paste.openstack.org/show/547840/
  [3] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22if%20p%5B'fixed_ips'%5D%5B0%5D%5B'subnet_id'%5D%20%3D%3D%20subnet_id%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609540/+subscriptions


[Yahoo-eng-team] [Bug 1610483] [NEW] Pluggable IPAM rollback mechanism is not robust

2016-08-05 Thread Carl Baldwin
Public bug reported:

In looking through the retry mechanism for pluggable IPAM (e.g. [1]), I
found it is not robust. It catches only a very narrow set of errors.
Many other errors would not result in a rollback notification to the
external IPAM system. Basically, if anything else fails during a port
create and causes the DB transaction to be rolled back, the IP
allocations will be forgotten by Neutron but an external IPAM will still
remember them. No notification will be sent to the external system to
reverse what it had done.

There are a couple of options we could pursue. One is a decorator on the
API operation which would take care to call rollback if anything went
wrong. The other is to use an sqlalchemy level hook,
after_transaction_end, to detect DB rollback and call IPAM rollback.

In both cases, the problem is where/how to do the book-keeping. We need
to immediately record successful (de)allocations from the external IPAM
system somewhere that will be available in the event rollback is
needed. One idea is to piggy-back off of the context in session.info or
somewhere like that. This discussion in IRC [2] might be useful.

[1] 
https://github.com/openstack/neutron/blob/949aae6a8b92a77a06d04734bf82ed7a917057a7/neutron/db/ipam_pluggable_backend.py#L129-L136
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-08-03.log.html#t2016-08-03T18:08:58
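
The session.info idea can be sketched like this (illustrative names; a
real implementation would hook SQLAlchemy's after_transaction_end
event): record an undo callable for every successful external IPAM
call, and replay them in reverse if the transaction rolls back.

```python
class FakeSession:
    """Minimal stand-in exposing the session.info dict mentioned above."""
    def __init__(self):
        self.info = {}

def record_ipam_action(session, undo):
    # Called right after a successful external (de)allocation.
    session.info.setdefault('ipam_undo', []).append(undo)

def on_rollback(session):
    # Would be invoked from a DB-level rollback hook; undo in reverse
    # order so later allocations are reversed first.
    for undo in reversed(session.info.pop('ipam_undo', [])):
        undo()
```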

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1610483

Title:
  Pluggable IPAM rollback mechanism is not robust

Status in neutron:
  Confirmed

Bug description:
  In looking through the retry mechanism for pluggable IPAM (e.g. [1]),
  I found it is not robust. It catches only a very narrow set of errors.
  Many other errors would not result in a rollback notification to the
  external IPAM system. Basically, if anything else fails during a port
  create and causes the DB transaction to be rolled back, the IP
  allocations will be forgotten by Neutron but an external IPAM will
  still remember them. No notification will be sent to the external
  system to reverse what it had done.

  There are a couple of options we could pursue. One is a decorator on
  the API operation which would take care to call rollback if anything
  went wrong. The other is to use an sqlalchemy level hook,
  after_transaction_end, to detect DB rollback and call IPAM rollback.

  In both cases, the problem is where/how to do the book-keeping. We
  need to immediately record successful (de)allocations from the
  external IPAM system somewhere that will be available in the event
  rollback is needed. One idea is to piggy-back off of the context in
  session.info or somewhere like that. This discussion in IRC [2] might
  be useful.

  [1] 
https://github.com/openstack/neutron/blob/949aae6a8b92a77a06d04734bf82ed7a917057a7/neutron/db/ipam_pluggable_backend.py#L129-L136
  [2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-08-03.log.html#t2016-08-03T18:08:58

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1610483/+subscriptions


[Yahoo-eng-team] [Bug 1620746] [NEW] Dead code and model remain for availability ranges

2016-09-06 Thread Carl Baldwin
Public bug reported:

Availability range models and code are effectively obsolete [1] and should've 
been removed
in a previous patch [2] but some of it was left behind.

[1] https://review.openstack.org/#/c/292207
[2] https://review.openstack.org/#/c/303638

** Affects: neutron
 Importance: Low
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

** Changed in: neutron
Milestone: None => newton-rc1

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620746

Title:
  Dead code and model remain for availability ranges

Status in neutron:
  In Progress

Bug description:
  Availability range models and code are effectively obsolete [1] and should've 
been removed
  in a previous patch [2] but some of it was left behind.

  [1] https://review.openstack.org/#/c/292207
  [2] https://review.openstack.org/#/c/303638

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620746/+subscriptions


[Yahoo-eng-team] [Bug 1428305] Re: Floating IP namespace not created when DVR enabled and with IPv6 enabled in devstack

2015-08-11 Thread Carl Baldwin
** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428305

Title:
  Floating IP namespace not created when DVR enabled and with IPv6
  enabled in devstack

Status in neutron:
  Invalid

Bug description:
  I just created a new devstack based on the latest Neutron code and the
  l3-agent is failing to create the Floating IP namespace, leading to
  floating IPs not working.  This only happens when DVR is enabled, for
  example, I have this in my local.conf:

  Q_DVR_MODE=dvr_snat

  When I allocate a floating IP and attempt to associate it with a
  running instance I see this in the l3-agent log:

  2015-03-04 20:03:46.082 28696 DEBUG neutron.agent.l3.agent [-] FloatingIP 
agent gateway port received from the plugin: {u'status': u'DOWN', 
u'binding:host_id': u'haleyb-devstack', u'name': u'', u'allowed_address_pairs': 
[], u'admin_state_up': True, u'network_id': 
u'bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', u'tenant_id': u'', u'extra_dhcp_opts': 
[], u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
u'binding:vif_type': u'ovs', u'device_owner': 
u'network:floatingip_agent_gateway', u'mac_address': u'fa:16:3e:94:74:f0', 
u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': 
[{u'subnet_id': u'99260be2-91ef-423a-8dd8-4ecf15ffb14c', u'ip_address': 
u'172.24.4.4'}, {u'subnet_id': u'97a9534f-eec4-4c06-bdf5-61bab04455b7', 
u'ip_address': u'fe80:cafe:cafe::3'}], u'id': 
u'47f8a65f-6008-4a97-93a3-85f68ea4ff00', u'security_groups': [], u'device_id': 
u'2674f378-26c0-4b29-b920-5637640acffc'} create_dvr_fip_interfaces 
/opt/stack/neutron/neutron/agent/l3/agent.py:627
  2015-03-04 20:03:46.082 28696 ERROR neutron.agent.l3.agent [-] Missing 
subnet/agent_gateway_port

  That error means that no port for the external gateway will be created
  along with the namespace where it lives.

  Subsequent errors confirm that:

  2015-03-04 20:03:47.494 28696 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', 'ip', '-o', 'link', 'show', 
'fpr-d73fd397-4'] create_process 
/opt/stack/neutron/neutron/agent/linux/utils.py:51
  2015-03-04 20:03:47.668 28696 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5', 'ip', '-o', 'link', 'show', 
'fpr-d73fd397-4']
  Exit code: 1
  Stdout:
  Stderr: Cannot open network namespace 
"fip-bda13d78-bf4c-45b8-8cb6-dd3449b1d3c5": No such file or directory

  $ ip netns
  qdhcp-91416e8f-856e-42ae-a9fd-9abe25d8b47a
  snat-d73fd397-47f8-4272-b55a-b33b2307eaad
  qrouter-d73fd397-47f8-4272-b55a-b33b2307eaad

  This is only happening with Kilo, but I don't have an exact date as to
  when it started, just that I noticed it starting yesterday (March
  3rd).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428305/+subscriptions


[Yahoo-eng-team] [Bug 1034161] Re: some platforms do not support namespaces

2015-08-14 Thread Carl Baldwin
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1034161

Title:
  some platforms do not support namespaces

Status in neutron:
  Invalid

Bug description:
  this means two things:

  1) we need to document what versions of ubuntu/red hat support namespaces
  2) ideally, we need a way of using quantum (without overlapping IP support) 
that does not require namespaces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1034161/+subscriptions


[Yahoo-eng-team] [Bug 1383571] Re: The fip namespace can be destroyed on L3 agent restart

2015-08-27 Thread Carl Baldwin
** Changed in: neutron
   Status: Confirmed => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1383571

Title:
  The fip namespace can be destroyed on L3 agent restart

Status in neutron:
  Invalid

Bug description:
  The scenario is described in a recent patch review [1].  The patch did
  not introduce the problem but it was noticed during review of the
  patch.

  [1]
  https://review.openstack.org/#/c/128131/5/neutron/agent/l3_agent.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1383571/+subscriptions


[Yahoo-eng-team] [Bug 1479887] Re: Default subnetpools cannot be defined by name

2015-08-31 Thread Carl Baldwin
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479887

Title:
  Default subnetpools cannot be defined by name

Status in neutron:
  Invalid

Bug description:
  The values for default_ipv4_subnet_pool and default_ipv6_subnet_pool
  currently have to be defined as the UUID of the desired subnetpool.
  This leads to a chicken & egg situation where the admin has to somehow
  enter the UUID into the conf file before neutron is initialised, and
  therefore before the UUID can be generated.

  These values should instead be defined by the name of the desired
  subnetpool, so the admin can create it after neutron is started.
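
  One way the suggested behavior could work is sketched below: resolve the
  configured value against the known pools at use time, accepting either a
  UUID or a name. The helper name and the pool dict shape are illustrative
  stand-ins, not Neutron's actual API.

  ```python
  def resolve_default_pool(pools, configured_value):
      """Accept either a subnetpool UUID or a name in the conf value.

      'pools' is a list of {'id': ..., 'name': ...} dicts standing in
      for the subnetpools known to Neutron (hypothetical structure).
      """
      for pool in pools:
          if configured_value in (pool['id'], pool['name']):
              return pool['id']
      return None  # pool not created yet; caller can retry at use time

  pools = [{'id': 'pool-uuid-1', 'name': 'default-v6-pool'}]
  ```

  Resolving lazily like this is what lets the admin write the name into the
  conf file before the pool exists.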

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479887/+subscriptions



[Yahoo-eng-team] [Bug 1411883] Re: DVR qrouters are not created when VMs are added after the router-interface is added to the router

2015-09-09 Thread Carl Baldwin
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411883

Title:
  DVR qrouters are not created when VMs are added after the router-
  interface is added to the router

Status in neutron:
  Invalid

Bug description:
  qrouters for DVR routers should be created on demand when VMs are
  created on the compute Node.

  But with the current code, it seems that is broken.

  When a VM is created on a router's subnet after the router-interface
  is added to the router, then the 'qrouter' namespace is not created on
  the compute Node. It is only created on the "dvr_snat" node and not on
  the "dvr" node.

  stack@ubuntu-multinet-ctlr:~/devstack$ neutron net-create net1
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | ccaeaf39-4c33-40b8-b0e0-551414a86ca3 |
  | name                      | net1                                 |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 1003                                 |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tenant_id                 | f822fe0773cd4d21bddb4ecb2477f21d     |
  +---------------------------+--------------------------------------+
  stack@ubuntu-multinet-ctlr:~/devstack$ neutron subnet-create net1 --name subnet1 10.20.0.0/24
  Created a new subnet:
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | {"start": "10.20.0.2", "end": "10.20.0.254"} |
  | cidr              | 10.20.0.0/24                                 |
  | dns_nameservers   |                                              |
  | enable_dhcp       | True                                         |
  | gateway_ip        | 10.20.0.1                                    |
  | host_routes       |                                              |
  | id                | e404b9cb-dd2d-4919-8b6c-72799ae7efed         |
  | ip_version        | 4                                            |
  | ipv6_address_mode |                                              |
  | ipv6_ra_mode      |                                              |
  | name              | subnet1                                      |
  | network_id        | ccaeaf39-4c33-40b8-b0e0-551414a86ca3         |
  | tenant_id         | f822fe0773cd4d21bddb4ecb2477f21d             |
  +-------------------+----------------------------------------------+
  stack@ubuntu-multinet-ctlr:~/devstack$ neutron router-create router2
  Created a new router:
  +-----------------------+--------------------------------------+
  | Field                 | Value                                |
  +-----------------------+--------------------------------------+
  | admin_state_up        | True                                 |
  | distributed           | True                                 |
  | external_gateway_info |                                      |
  | ha                    | False                                |
  | id                    | e82022cd-d45c-4558-9d35-86a1e5f58462 |
  | name                  | router2                              |
  | routes                |                                      |
  | status                | ACTIVE                               |
  | tenant_id             | f822fe0773cd4d21bddb4ecb2477f21d     |
  +-----------------------+--------------------------------------+
  stack@ubuntu-multinet-ctlr:~/devstack$ neutron router-interface-add router2 subnet1
  Added interface f872d6ea-f2c9-4334-8090-48d9542cdbef to router router2.
  stack@ubuntu-multinet-ctlr:~/devstack$ neutron router-gateway-set router2 public
  Set gateway for router router2
  stack@ubuntu-multinet-ctlr:~/devstack$ 
  stack@ubuntu-multinet-ctlr:~/devstack$ 
  stack@ubuntu-multinet-ctlr:~/devstack$ sudo ip netns list
  qrouter-e82022cd-d45c-4558-9d35-86a1e5f58462
  qdhcp-5f413f3e-573f-404d-9d24-b7b11b141278
  snat-acfc7720-3071-46a3-8be3-1b9430ddb47e
  qrouter-acfc7720-3071-46a3-8be3-1b9430ddb47e
  stack@ubuntu-multinet-ctlr:~/devstack$ sudo ip netns list
  snat-e82022cd-d45c-4558-9d35-86a1e5f58462
  qrouter-e82022cd-d45c-4558-9d35-86a1e5f58462
  qdhcp-5f413f3e-573f-

[Yahoo-eng-team] [Bug 1371696] [NEW] Cannot add or update a child row: a foreign key constraint fails ml2_dvr_port_bindings

2014-09-19 Thread Carl Baldwin
Public bug reported:

We've hit this foreign key constraint error.  This is due to a sync
message coming in from the L3 agent over RPC.  The message contains an
update for a port that has just been deleted.  This is just log noise
because it all gets worked out quickly following the error.  The full
trace is here [1].

2014-09-18 21:22:39.735 29984 TRACE oslo.messaging.rpc.dispatcher
DBReferenceError: (IntegrityError) (1452, 'Cannot add or update a child
row: a foreign key constraint fails (`neutron`.`ml2_dvr_port_bindings`,
CONSTRAINT `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`)
REFERENCES `ports` (`id`) ON DELETE CASCADE)') 'INSERT INTO
ml2_dvr_port_bindings (port_id, host, router_id, vif_type, vif_details,
vnic_type, profile, cap_port_filter, driver, segment, status) VALUES
(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)' ('0fe7b532-343e-
4ba0-83d9-c51b1c55f533', 'devstack-trusty-hpcloud-b4-2246632',
'45248fa2-4372-4ad9-8e60-afabe39c6f6a', 'unbound', '', 'normal', '', 0,
None, None, 'DOWN')

[1] http://paste.openstack.org/show/113360/
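
The benign race could be handled along these lines (purely illustrative:
the exception class and helper names below are stand-ins, not oslo.db's
or ML2's real API):

```python
class DBReferenceError(Exception):
    """Stand-in for the foreign-key error raised by the DB layer."""


def save_dvr_port_binding(insert, port_exists, binding):
    """Insert a binding, treating an FK failure on a deleted port as benign.

    If the insert hits the foreign-key constraint and the port is gone,
    the RPC sync raced with a port delete; drop the update quietly
    instead of logging a full traceback.
    """
    try:
        insert(binding)
        return True
    except DBReferenceError:
        if not port_exists(binding['port_id']):
            return False  # port was deleted underneath us; nothing to do
        raise


# Demonstration with toy stand-ins for the session and the lookup:
def _failing_insert(binding):
    raise DBReferenceError()

stored = []
raced = save_dvr_port_binding(_failing_insert, lambda pid: False,
                              {'port_id': 'p1'})
ok = save_dvr_port_binding(stored.append, lambda pid: True,
                           {'port_id': 'p1'})
```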

** Affects: neutron
 Importance: Medium
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1371696

Title:
  Cannot add or update a child row: a foreign key constraint fails
  ml2_dvr_port_bindings

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We've hit this foreign key constraint error.  This is due to a sync
  message coming in from the L3 agent over RPC.  The message contains an
  update for a port that has just been deleted.  This is just log noise
  because it all gets worked out quickly following the error.  The full
  trace is here [1].

  2014-09-18 21:22:39.735 29984 TRACE oslo.messaging.rpc.dispatcher
  DBReferenceError: (IntegrityError) (1452, 'Cannot add or update a
  child row: a foreign key constraint fails
  (`neutron`.`ml2_dvr_port_bindings`, CONSTRAINT
  `ml2_dvr_port_bindings_ibfk_1` FOREIGN KEY (`port_id`) REFERENCES
  `ports` (`id`) ON DELETE CASCADE)') 'INSERT INTO ml2_dvr_port_bindings
  (port_id, host, router_id, vif_type, vif_details, vnic_type, profile,
  cap_port_filter, driver, segment, status) VALUES (%s, %s, %s, %s, %s,
  %s, %s, %s, %s, %s, %s)' ('0fe7b532-343e-4ba0-83d9-c51b1c55f533',
  'devstack-trusty-hpcloud-b4-2246632',
  '45248fa2-4372-4ad9-8e60-afabe39c6f6a', 'unbound', '', 'normal', '',
  0, None, None, 'DOWN')

  [1] http://paste.openstack.org/show/113360/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1371696/+subscriptions



[Yahoo-eng-team] [Bug 1373100] [NEW] New race condition exposed when cleaning up floating ips on router delete

2014-09-23 Thread Carl Baldwin
Public bug reported:

The patch that cleans up floating ips on router deletion [1] has
triggered a race condition that causes spurious failures in the dvr job
in the check queue.  Reverting this patch [2] has shown to stabilize it.

[1] https://review.openstack.org/#/c/120885/
[2] https://review.openstack.org/#/c/121729/

** Affects: neutron
 Importance: Critical
 Assignee: Rajeev Grover (rajeev-grover)
 Status: Confirmed


** Tags: l3-dvr-backlog

** Changed in: neutron
Milestone: None => juno-rc1

** Description changed:

  The patch that cleans up floating ips on router deletion [1] has
  triggered a race condition that causes spurious failures in the dvr job
  in the check queue.  Reverting this patch [2] has shown to stabilize it.
  
  [1] https://review.openstack.org/#/c/120885/
+ [2] https://review.openstack.org/#/c/121729/

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373100

Title:
  New race condition exposed when cleaning up floating ips on router
  delete

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The patch that cleans up floating ips on router deletion [1] has
  triggered a race condition that causes spurious failures in the dvr
  job in the check queue.  Reverting this patch [2] has shown to
  stabilize it.

  [1] https://review.openstack.org/#/c/120885/
  [2] https://review.openstack.org/#/c/121729/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373100/+subscriptions



[Yahoo-eng-team] [Bug 1189909] Re: dhcp-agent does always provide IP address for instances with re-cycled IP addresses.

2014-10-02 Thread Carl Baldwin
It seems this has been fixed with
https://review.openstack.org/#/c/37580/

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189909

Title:
  dhcp-agent does always provide IP address for instances with re-cycled
  IP addresses.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in “quantum” package in Ubuntu:
  Confirmed
Status in “quantum” package in CentOS:
  New

Bug description:
  Configuration: OpenStack Networking, OpenvSwitch Plugin (GRE tunnels), OpenStack Networking Security Groups
  Release: Grizzly

  Sometime when creating instances, the dnsmasq instance associated with
  the tenant l2 network does not have configuration for the requesting
  mac address:

  Jun 11 09:30:23 d7m88-cofgod dnsmasq-dhcp[10083]: DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available
  Jun 11 09:30:33 d7m88-cofgod dnsmasq-dhcp[10083]: DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available

  Restarting the quantum-dhcp-agent resolved the issue:

  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45
  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: DHCPOFFER(tap98031044-d8) 10.5.0.2 fa:16:3e:da:41:45

  The IP address (10.5.0.2) was re-cycled from an instance that was
  destroyed just prior to creation of this one.

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: quantum-dhcp-agent 1:2013.1.1-0ubuntu1
  ProcVersionSignature: Ubuntu 3.8.0-23.34-generic 3.8.11
  Uname: Linux 3.8.0-23-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.1
  Architecture: amd64
  Date: Tue Jun 11 09:31:38 2013
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: quantum
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.quantum.dhcp.agent.ini: [deleted]
  modified.conffile..etc.quantum.rootwrap.d.dhcp.filters: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189909/+subscriptions



[Yahoo-eng-team] [Bug 1156980] Re: Agent is not a L3 Agent or has been disabled

2014-10-02 Thread Carl Baldwin
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1156980

Title:
  Agent is not a L3 Agent or has been disabled

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  [root@node-54-157 baoyongcheng]# quantum agent-list
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | id                                   | agent_type         | host        | alive | admin_state_up |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  | 1a1ec969-9b17-4442-b5c8-cf5962fc8cf6 | DHCP agent         | node-54-155 | xxx   | True           |
  | 44401498-c9ad-4e9c-afd0-1bfffb6415f7 | Linux bridge agent | node-54-157 | :-)   | True           |
  | 573462c4-7d76-4bf6-95f4-58bf3934e864 | L3 agent           | node-54-157 | :-)   | True           |
  | 58a2836e-12d2-42aa-a05c-4a79d41a776d | Linux bridge agent | node-54-155 | xxx   | True           |
  | afa465a1-da2f-4654-9a8a-030fc044f7f8 | L3 agent           | node-54-155 | xxx   | True           |
  | b6f7f7ab-dab0-476f-96ae-644314775e53 | DHCP agent         | node-54-157 | :-)   | True           |
  +--------------------------------------+--------------------+-------------+-------+----------------+
  [root@node-54-157 baoyongcheng]# quantum router-list
  +--------------------------------------+---------+--------------------------------------------------------+
  | id                                   | name    | external_gateway_info                                  |
  +--------------------------------------+---------+--------------------------------------------------------+
  | ae001b60-ea59-4b93-b949-b763f9575365 | router1 | {"network_id": "4551f19b-48e2-4ac8-addc-6e97dd5f3f05"} |
  +--------------------------------------+---------+--------------------------------------------------------+
  [root@node-54-157 baoyongcheng]#
  [root@node-54-157 baoyongcheng]# quantum l3-agent-list-hosting-router router1

  [root@node-54-157 baoyongcheng]# quantum l3-agent-router-add 573462c4-7d76-4bf6-95f4-58bf3934e864 router1
  Agent 573462c4-7d76-4bf6-95f4-58bf3934e864 is not a L3 Agent or has been disabled
  [root@node-54-157 baoyongcheng]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1156980/+subscriptions



[Yahoo-eng-team] [Bug 1224502] Re: Getting "No more IP addresses available" error , though enough IP's available on the pool

2014-10-02 Thread Carl Baldwin
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1224502

Title:
  Getting "No more IP addresses available" error , though enough IP's
  available on the pool

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Hi,

  I was running tempest on my newly deployed H3 environment. Observed
  the below issue

  http://paste.openstack.org/show/46865/

  I have enough free IP's on the pool.

  my external network's subnet allocation pool range and currently allocated ports/IPs list:
  http://paste.openstack.org/show/46859/

  From the above output it is evident that I have 4 more floating IPs free in
  the pool; however, if I create a new floating IP I am getting the error.

  neutron server.log
  http://paste.openstack.org/show/46868/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1224502/+subscriptions



[Yahoo-eng-team] [Bug 1184519] Re: after quantum API downtime, dhcp-agent gave up the plot

2014-10-02 Thread Carl Baldwin
Closing at Robert's request.  Sorry it took so long.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1184519

Title:
  after quantum API downtime, dhcp-agent gave up the plot

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  After bug 1184484 kicked in for us, we needed to restart the dhcp-
  agent - until we did instances didn't get IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1184519/+subscriptions



[Yahoo-eng-team] [Bug 1378398] [NEW] Remove legacy weight from l3 agent _process_routers

2014-10-07 Thread Carl Baldwin
Public bug reported:

Some work in Juno around adding a new router processing queue to the
l3_agent.py obsoleted much of the logic in the _process_routers method.
The following can be simplified.

1. No loop is necessary since the list passed always has exactly one router in it.
2. No thread pool is necessary because there is only one thread active and the method waits for it to complete at the end.
3. The set logic is no longer needed.
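
The simplification could look roughly like this (a sketch, not the actual
agent code; the worker and pool arguments stand in for the agent's
internals):

```python
def process_routers_old(routers, worker, pool):
    # Legacy shape: a thread pool that is spawned into and then
    # immediately drained -- pure overhead, since the queue only ever
    # passes a single router and the method blocks until it is done.
    for router in routers:
        pool.spawn(worker, router)
    pool.waitall()


def process_routers_new(routers, worker):
    # The router processing queue delivers exactly one router, so a
    # direct call replaces the loop, the pool, and the set bookkeeping.
    worker(routers[0])


done = []
process_routers_new([{'id': 'r1'}], lambda r: done.append(r['id']))
```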

** Affects: neutron
 Importance: Wishlist
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1378398

Title:
  Remove legacy weight from l3 agent _process_routers

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Some work in Juno around adding a new router processing queue to the
  l3_agent.py obsoleted much of the logic in the _process_routers
  method.  The following can be simplified.

  1. No loop is necessary since the list passed always has exactly one router in it.
  2. No thread pool is necessary because there is only one thread active and the method waits for it to complete at the end.
  3. The set logic is no longer needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1378398/+subscriptions



[Yahoo-eng-team] [Bug 1383495] [NEW] L3 agent attempts to disable RA on all interfaces when it should only do so on qrouter namespaces for now

2014-10-20 Thread Carl Baldwin
Public bug reported:

Disabling RA only works on qrouter- namespaces at the moment, but the L3
agent is attempting to disable it on all namespaces.  In other
namespaces, it assumes qrouter- is the prefix of the namespace name and
incorrectly computes the router id.

def _cleanup_namespaces(self, router_namespaces, router_ids):
    """Destroy stale router namespaces on host when L3 agent restarts

    The argument router_namespaces is the list of all routers namespaces
    The argument router_ids is the list of ids for known routers.
    """
    # Don't destroy namespaces of routers this agent handles.
    ns_to_ignore = self._get_routers_namespaces(router_ids)

    ns_to_destroy = router_namespaces - ns_to_ignore
    for ns in ns_to_destroy:
        ra.disable_ipv6_ra(ns[len(NS_PREFIX):], ns, self.root_helper)  # <- Wrong place for this
        try:
            self._destroy_namespace(ns)
        except RuntimeError:
            LOG.exception(_('Failed to destroy stale router namespace '
                            '%s'), ns)

I also noticed that disable_ipv6_ra is not called in some situation
where _destroy_router_namespace can be called.  If it is important to
disable_ipv6_ra for router namespaces then this call should be moved to
_destroy_router_namespace.
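
The incorrect router-id computation described above reduces to this (an
illustrative sketch, not the agent's actual helper):

```python
NS_PREFIX = 'qrouter-'


def router_id_from_namespace(ns_name):
    """Return the router id only for qrouter- namespaces, else None.

    The buggy path computed ns_name[len(NS_PREFIX):] unconditionally,
    so a namespace like 'snat-<id>' yielded a garbage "router id".
    Guarding on the prefix avoids that.
    """
    if ns_name.startswith(NS_PREFIX):
        return ns_name[len(NS_PREFIX):]
    return None
```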

** Affects: neutron
 Importance: Medium
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1383495

Title:
  L3 agent attempts to disable RA on all interfaces when it should only
  do so on qrouter namespaces for now

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Disabling RA only works on qrouter- namespaces at the moment, but the
  L3 agent is attempting to disable it on all namespaces.  In other
  namespaces, it assumes qrouter- is the prefix of the namespace name
  and incorrectly computes the router id.

  def _cleanup_namespaces(self, router_namespaces, router_ids):
      """Destroy stale router namespaces on host when L3 agent restarts

      The argument router_namespaces is the list of all routers namespaces
      The argument router_ids is the list of ids for known routers.
      """
      # Don't destroy namespaces of routers this agent handles.
      ns_to_ignore = self._get_routers_namespaces(router_ids)

      ns_to_destroy = router_namespaces - ns_to_ignore
      for ns in ns_to_destroy:
          ra.disable_ipv6_ra(ns[len(NS_PREFIX):], ns, self.root_helper)  # <- Wrong place for this
          try:
              self._destroy_namespace(ns)
          except RuntimeError:
              LOG.exception(_('Failed to destroy stale router namespace '
                              '%s'), ns)

  I also noticed that disable_ipv6_ra is not called in some situation
  where _destroy_router_namespace can be called.  If it is important to
  disable_ipv6_ra for router namespaces then this call should be moved
  to _destroy_router_namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1383495/+subscriptions



[Yahoo-eng-team] [Bug 1383571] [NEW] The fip namespace can be destroyed on L3 agent restart

2014-10-20 Thread Carl Baldwin
Public bug reported:

The scenario is described in a recent patch review [1].  The patch did
not introduce the problem but it was noticed during review of the patch.

[1] https://review.openstack.org/#/c/128131/5/neutron/agent/l3_agent.py

** Affects: neutron
 Importance: Medium
 Assignee: Carl Baldwin (carl-baldwin)
 Status: Confirmed


** Tags: l3-dvr-backlog

** Changed in: neutron
   Importance: Undecided => Medium

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1383571

Title:
  The fip namespace can be destroyed on L3 agent restart

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The scenario is described in a recent patch review [1].  The patch did
  not introduce the problem but it was noticed during review of the
  patch.

  [1]
  https://review.openstack.org/#/c/128131/5/neutron/agent/l3_agent.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1383571/+subscriptions



[Yahoo-eng-team] [Bug 1287824] Re: l3 agent makes too many individual sudo/ip netns calls

2015-03-24 Thread Carl Baldwin
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287824

Title:
  l3 agent makes too many individual sudo/ip netns calls

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Basically, calls to sudo, root_wrap, and ip netns exec all add
  overhead that can make these calls very expensive.  Developing an
  effective way of consolidating these calls into considerably fewer
  calls will be a big win.  This assumes the mechanism for consolidating
  them does not itself add a lot of overhead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287824/+subscriptions



[Yahoo-eng-team] [Bug 1444146] [NEW] Subnet creation from a subnet pool can get wrong ip_version

2015-04-14 Thread Carl Baldwin
Public bug reported:

The following command ends up creating a subnet with ip_version set to 4
even though the pool is an ipv6 pool.

  $ neutron subnet-create --subnetpool ext-subnet-pool --prefixlen 64 network1
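
One way the version could be derived correctly is sketched below with the
stdlib ipaddress module; the function name and the request/pool shapes are
hypothetical, not Neutron's actual API:

```python
import ipaddress


def ip_version_for_subnet_request(request, subnetpool=None):
    """Derive ip_version for a subnet-create request (illustrative only).

    When the request carries no CIDR -- only a pool and a prefixlen --
    the version must come from the pool's prefixes; defaulting to 4 is
    exactly the bug (an IPv6 pool yields ip_version=4).
    """
    cidr = request.get('cidr')
    if cidr is not None:
        return ipaddress.ip_network(cidr).version
    if subnetpool is not None:
        # All prefixes in one pool share a single address family.
        return ipaddress.ip_network(subnetpool['prefixes'][0]).version
    raise ValueError('cannot determine ip_version without a cidr or pool')
```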

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444146

Title:
  Subnet creation from a subnet pool can get wrong ip_version

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The following command ends up creating a subnet with ip_version set to
  4 even though the pool is an ipv6 pool.

    $ neutron subnet-create --subnetpool ext-subnet-pool --prefixlen 64 network1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444146/+subscriptions



[Yahoo-eng-team] [Bug 1450521] Re: remove the gateway validation of subnet for router VM's port

2015-05-06 Thread Carl Baldwin
The reason for not allowing the gateway to be within the allocation
pools  was so that Neutron would not allocate the gateway IP address to
just anything.  It should be specifically requested for a router port or
a service VM like you've done.

If we allow moving the gateway IP to an IP inside the allocation pool
then when you delete your router's port, another VM could come around
and get allocated the gateway IP inadvertently.  So, I don't think this
restriction should be removed.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: Incomplete => Confirmed

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450521

Title:
  remove the gateway validation of subnet for router VM's port

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  This is a problem with the gateway setting on a subnet when a VM acts
  as a router/firewall. When a VM works as a router/firewall in the
  network, the port where the VM connects to the subnet should be the
  gateway of the subnet. But currently we can't set the gateway to any
  VM's port plugged into the subnet, because the gateway IP cannot be in
  the IP allocation pool.

  The usage is like this:
  1. Create a subnet with an IP allocation pool, specifying the gateway as normal.
  2. Create a router and attach its interfaces to the subnets. With some vendor router plugins, this creates a router VM and connects it to the subnets. The router VM gets an IP from the pool, but not the gateway IP.

  This is where the limitation comes in: the gateway IP cannot be
  allocated to a VM, and the subnet's gateway cannot be updated to an IP
  that has already been assigned to some VM.

  A GatewayConflictWithAllocationPools exception is emitted. The related
  validation code is
  https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L1112
  and was added by the patch for bug
  https://bugs.launchpad.net/neutron/+bug/1062061.

  Here is an error example:
  stack@yalie-Studio-XPS-8000:~/job/dev2/devstack$ neutron subnet-update subnet2 --gateway 10.0.0.3
  Gateway ip 10.0.0.3 conflicts with allocation pool 10.0.0.2-10.0.0.254

  I think we need to remove this API limitation considering the usage
  listed. I am not sure it's a bug; I am posting it here for more
  discussion.
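
  The check behind that error can be sketched with the standard library
  (a simplified illustration of the GatewayConflictWithAllocationPools
  validation, not Neutron's actual code):

  ```python
  import ipaddress


  def gateway_conflicts_with_pool(gateway_ip, pool_start, pool_end):
      """True when the gateway falls inside the allocation pool range."""
      gw = ipaddress.ip_address(gateway_ip)
      return (ipaddress.ip_address(pool_start) <= gw
              <= ipaddress.ip_address(pool_end))
  ```

  With the pool from the error example (10.0.0.2-10.0.0.254), 10.0.0.3 is
  rejected while the original gateway 10.0.0.1 is not, which is why the
  subnet-update above fails.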

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450521/+subscriptions



[Yahoo-eng-team] [Bug 1453906] [NEW] Implement Routing Networks in Neutron

2015-05-11 Thread Carl Baldwin
Public bug reported:

This feature request proposes to allow using private subnets and public
subnets together on the same physical network. The private network will
be used for router next-hops and other router communication.

This will also allow having an L3 only routed network which spans L2
networks. This will depend on dynamic routing integration with Neutron.

https://blueprints.launchpad.net/neutron/+spec/routing-networks

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453906

Title:
  Implement Routing Networks in Neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This feature request proposes to allow using private subnets and
  public subnets together on the same physical network. The private
  network will be used for router next-hops and other router
  communication.

  This will also allow having an L3 only routed network which spans L2
  networks. This will depend on dynamic routing integration with
  Neutron.

  https://blueprints.launchpad.net/neutron/+spec/routing-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453906/+subscriptions



[Yahoo-eng-team] [Bug 1453921] [NEW] Implement Address Scopes

2015-05-11 Thread Carl Baldwin
Public bug reported:

Make address scopes a first class thing in Neutron and make Neutron
routers aware of them.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453921

Title:
  Implement Address Scopes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Make address scopes a first class thing in Neutron and make Neutron
  routers aware of them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453921/+subscriptions



[Yahoo-eng-team] [Bug 1453925] [NEW] BGP Dynamic Routing

2015-05-11 Thread Carl Baldwin
Public bug reported:

We propose creating a new dr-agent which speaks BGP on behalf of Neutron
to external routers.  It will only announce routes on an external
network and will not yet learn routes from the external system.

These routes will include floating IPs for IPv4 and subnets for IPv6.
The address scopes blueprint is related and helps determine which IPv6
subnets should be announced.

Described in blueprint bgp-dynamic-routing

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453925

Title:
  BGP Dynamic Routing

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We propose creating a new dr-agent that speaks BGP on behalf of
  Neutron to external routers.  It will only announce routes on an
  external network and will not yet learn routes from the external
  system.

  These routes will include floating IPs for IPv4 and subnets for
  IPv6.  The address scopes blueprint is related and helps determine
  which subnets in IPv6 should be announced.

  Described in blueprint bgp-dynamic-routing

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459030] [NEW] Add dns_label to Neutron port

2015-05-26 Thread Carl Baldwin
Public bug reported:

See the spec for more details https://review.openstack.org/#/c/88623

This dns_label field will be used for DNS resolution of the hostname in
dnsmasq and also will be used when Neutron can integrate with external
DNS systems.

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Lavalle (minsel)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Miguel Lavalle (minsel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459030

Title:
  Add dns_label to Neutron port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  See the spec for more details https://review.openstack.org/#/c/88623

  This dns_label field will be used for DNS resolution of the hostname
  in dnsmasq and also will be used when Neutron can integrate with
  external DNS systems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459030/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453921] Re: Implement Address Scopes

2015-05-27 Thread Carl Baldwin
I'm sorry to offend you with this.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453921

Title:
  Implement Address Scopes

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Make address scopes a first class thing in Neutron and make Neutron
  routers aware of them.

  Described in blueprint address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453906] Re: Implement Routing Networks in Neutron

2015-05-27 Thread Carl Baldwin
Gotta get this one out of here too, I guess.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453906

Title:
  Implement Routing Networks in Neutron

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This feature request proposes to allow using private subnets and
  public subnets together on the same physical network. The private
  network will be used for router next-hops and other router
  communication.

  This will also allow having an L3 only routed network which spans L2
  networks. This will depend on dynamic routing integration with
  Neutron.

  https://blueprints.launchpad.net/neutron/+spec/routing-networks

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453906/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453925] Re: BGP Dynamic Routing

2015-06-03 Thread Carl Baldwin
I marked this invalid because it is not required for existing specs
until Liberty-1 and I won't update it to follow the new guidelines until
that is necessary.  Please ignore this and use the blueprint for up-to-
date information.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453925

Title:
  BGP Dynamic Routing

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  We propose creating a new dr-agent that speaks BGP on behalf of
  Neutron to external routers.  It will only announce routes on an
  external network and will not yet learn routes from the external
  system.

  These routes will include floating IPs for IPv4 and subnets for
  IPv6.  The address scopes blueprint is related and helps determine
  which subnets in IPv6 should be announced.

  Described in blueprint bgp-dynamic-routing

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1453925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444146] Re: Subnet creation from a subnet pool can get wrong ip_version

2015-06-04 Thread Carl Baldwin
I just added python-neutronclient to this bug report.  The fact that the
client defaults to IP version 4 even when a version 6 subnet pool has
been selected should be fixed.
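A client-side fix could derive the version from the pool's prefixes rather than defaulting to 4. A minimal sketch under that assumption (the function name and signature are illustrative, not the actual neutronclient code):

```python
import ipaddress

def infer_ip_version(pool_prefixes, explicit_version=None):
    """Pick the ip_version for a new subnet allocated from a pool.

    Uses the explicitly requested version when given; otherwise derives
    the version from the pool's first prefix instead of defaulting to 4.
    """
    if explicit_version is not None:
        return explicit_version
    if pool_prefixes:
        return ipaddress.ip_network(pool_prefixes[0]).version
    return 4  # historical client default when nothing else is known
```

With an IPv6 pool such as `["2001:db8::/48"]` this returns 6, matching the pool instead of the hard-coded default.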

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444146

Title:
  Subnet creation from a subnet pool can get wrong ip_version

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in Python client library for Neutron:
  New

Bug description:
  The following command ends up creating a subnet with ip_version set to
  4 even though the pool is an ipv6 pool.

$ neutron subnet-create --subnetpool ext-subnet-pool --prefixlen 64
  network1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470337] [NEW] Some aspects of subnets not validated when using subnet pools

2015-06-30 Thread Carl Baldwin
Public bug reported:

It looks like _validate_subnet is not called when allocating from a
subnet pool.  See here [1] for a discussion about it.

[1]
https://review.openstack.org/#/c/153236/89/neutron/db/db_base_plugin_v2.py

** Affects: neutron
 Importance: Undecided
 Assignee: Ryan Tidwell (ryan-tidwell)
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-dvr-backlog

** Tags removed: l3-dvr-backlog
** Tags added: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Ryan Tidwell (ryan-tidwell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470337

Title:
  Some aspects of subnets not validated when using subnet pools

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It looks like _validate_subnet is not called when allocating from a
  subnet pool.  See here [1] for a discussion about it.

  [1]
  https://review.openstack.org/#/c/153236/89/neutron/db/db_base_plugin_v2.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470337/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330993] Re: L3-Agent does not process some router update messages due to missing thread synchronization

2014-06-25 Thread Carl Baldwin
*** This bug is a duplicate of bug 1315467 ***
https://bugs.launchpad.net/bugs/1315467

** This bug is no longer a duplicate of bug 1325800
   Potential Race Condition between L3NATAgent.routers_updated and 
L3NATAgent._rpc_loop.
** This bug has been marked a duplicate of bug 1315467
   Neutron deletes the router interface instead of adding a floatingip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330993

Title:
  L3-Agent does not process some router update messages due to missing
  thread synchronization

Status in OpenStack Neutron (virtual network service):
  Incomplete

Bug description:
  The L3-Agent does not process some router update messages because the
  updates are overwritten due to missing thread synchronization.  This
  happens when a lot of FloatingIP and Router APIs are invoked.

  The functions _rpc_loop and _sync_routers_task are annotated with
  @lockutils.synchronized('l3-agent', 'neutron-'). This serializes
  access to the global variables updated_routers and removed_routers.

  However, the same variables are also updated by RPC methods such as
  routers_updated and router_deleted, which are not synchronized.

  As a temporary fix, these methods could also be made synchronized,
  but a minor redesign should be investigated to avoid using global
  variables to keep track of updates.
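The temporary fix could look roughly like the following sketch, which uses a stdlib lock as a stand-in for lockutils.synchronized (class and method names are simplified, not the agent's actual code):

```python
import threading

_lock = threading.Lock()

def synchronized(func):
    """Minimal stand-in for the @lockutils.synchronized decorator."""
    def wrapper(*args, **kwargs):
        with _lock:
            return func(*args, **kwargs)
    return wrapper

class AgentSketch:
    def __init__(self):
        self.updated_routers = set()

    @synchronized
    def routers_updated(self, router_ids):
        # RPC handler: now takes the same lock as the processing loop.
        self.updated_routers.update(router_ids)

    @synchronized
    def _rpc_loop_snapshot(self):
        # Snapshot and clear under the lock, so no concurrent update
        # can slip in between the copy and the clear.
        pending = set(self.updated_routers)
        self.updated_routers.clear()
        return pending
```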

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325800] Re: Potential Race Condition between L3NATAgent.routers_updated and L3NATAgent._rpc_loop.

2014-06-25 Thread Carl Baldwin
*** This bug is a duplicate of bug 1315467 ***
https://bugs.launchpad.net/bugs/1315467

After looking at the code, I'm confident that the eventlet threading
model does not allow this race condition in the most current code.  Here
are the lines of code in question in the current master branch:

updated_routers = set(self.updated_routers)
self.updated_routers.clear()

and ...

self.updated_routers.update(routers)

Each of these is atomic from an eventlet threading model standpoint.
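To see why the atomicity matters, consider a hypothetical interleaving in which the snapshot and the clear were *not* atomic; an update arriving between them would be silently discarded:

```python
def lost_update_sequence():
    """Simulates a non-atomic snapshot/clear with an update in between.

    Returns the snapshot and the final set; the router added between
    the two steps ("r2") appears in neither, i.e. it is lost.
    """
    updated_routers = {"r1"}
    snapshot = set(updated_routers)   # _rpc_loop copies the set ...
    updated_routers.update({"r2"})    # ... routers_updated interleaves ...
    updated_routers.clear()           # ... and the clear drops "r2" unseen
    return snapshot, updated_routers
```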

There was a brief problem with this atomicity that was fixed in
https://bugs.launchpad.net/neutron/+bug/1315467

** This bug has been marked a duplicate of bug 1315467
   Neutron deletes the router interface instead of adding a floatingip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325800

Title:
  Potential Race Condition between L3NATAgent.routers_updated and
  L3NATAgent._rpc_loop.

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The _rpc_loop routine takes a snapshot of the L3NATAgent's
  routers_updated set and then clears the set.  At the same time,
  L3NATAgent.routers_updated can run and add new routers to the set.
  Since both routines can run concurrently, _rpc_loop may clear the
  routers_updated set right after the routers_updated routine added a
  router, without the new router being included in the snapshot.  The
  problem manifests itself as a newly associated floating IP address
  not being configured in iptables or on the qg- device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1325800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348302] [NEW] DVR snat namespaces are not cleaned up in L3 agent

2014-07-24 Thread Carl Baldwin
Public bug reported:

The newly added DVR code does not clean up snat namespaces when they are
no longer needed.  This was a known backlog item when the DVR code
merged.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348302

Title:
  DVR snat namespaces are not cleaned up in L3 agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The newly added DVR code does not clean up snat namespaces when they
  are no longer needed.  This was a known backlog item when the DVR code
  merged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348309] [NEW] Migration of legacy router to distributed router not working

2014-07-24 Thread Carl Baldwin
Public bug reported:

This was a known backlog item when the DVR code merged.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348309

Title:
  Migration of legacy router to distributed router not working

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This was a known backlog item when the DVR code merged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348306] [NEW] L3 agent restart disrupts fip namespace causing connectivity loss

2014-07-24 Thread Carl Baldwin
Public bug reported:

When the L3 agent restarts, it does not preserve the link local
addresses used for each router.  For this reason, it has to reassign
them and rewire everything.  This is very disruptive to network
connectivity.  Connectivity should be preserved as much as possible.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348306

Title:
  L3 agent restart disrupts fip namespace causing connectivity loss

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the L3 agent restarts, it does not preserve the link local
  addresses used for each router.  For this reason, it has to reassign
  them and rewire everything.  This is very disruptive to network
  connectivity.  Connectivity should be preserved as much as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350028] [NEW] Stack trace in add_arp_entry after L3 agent restart

2014-07-29 Thread Carl Baldwin
Public bug reported:

I saw a stack trace [1] in add_arp_entry of l3_agent.py.  I see this
stack trace on agent restart when the router is not yet in the
self.router_info dict.

[1] http://paste.openstack.org/show/88952/
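One plausible fix, sketched below with illustrative names (not the agent's exact code), is for add_arp_entry to skip routers the restarted agent has not resynced yet instead of indexing self.router_info directly:

```python
def add_arp_entry(router_info, router_id, arp_entry):
    """Guard against routers unknown right after an agent restart.

    router_info maps router ids to per-router state dicts; indexing it
    directly raises KeyError before the agent has resynced the router,
    which is the class of stack trace described above.
    """
    ri = router_info.get(router_id)
    if ri is None:
        # Unknown router: the periodic full sync will install the
        # entry once the router is processed; just skip for now.
        return False
    ri.setdefault("arp_entries", []).append(arp_entry)
    return True
```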

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350028

Title:
  Stack trace in add_arp_entry after L3 agent restart

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I saw a stack trace [1] in add_arp_entry of l3_agent.py.  I see this
  stack trace on agent restart when the router is not yet in the
  self.router_info dict.

  [1] http://paste.openstack.org/show/88952/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350119] [NEW] get_agent_gw_ports_exist_for_network in l3_dvr_db.py should use arrays for filter values

2014-07-29 Thread Carl Baldwin
Public bug reported:

This code is incorrect:
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L297

The result is that no port is found by the query.  Because of this, a
new port gets created each time the L3 agent restarts and tries to find
the external gw port for the DVR router.
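The underlying pitfall: Neutron's port filters are fed into an SQL IN clause, which iterates the filter value element by element. A bare string therefore iterates as single characters and never matches a full id. A toy model of the behavior (not the real query code):

```python
def in_clause_matches(column_value, filter_value):
    """Toy model of SQLAlchemy's column.in_(filter_value): the filter
    value is iterated element by element."""
    return any(column_value == v for v in filter_value)

network_id = "net-1"

# Bug: a bare string iterates as 'n', 'e', 't', ... and never matches.
assert in_clause_matches(network_id, "net-1") is False

# Fix: wrap each filter value in a list, as the filter API expects.
assert in_clause_matches(network_id, ["net-1"]) is True
```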

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350119

Title:
  get_agent_gw_ports_exist_for_network in l3_dvr_db.py should use arrays
  for filter values

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  This code is incorrect:
  https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L297

  The result is that no port is found by the query.  Because of this, a
  new port gets created each time the L3 agent restarts and tries to
  find the external gw port for the DVR router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350413] [NEW] Migration of distributed router to legacy (central) not implemented

2014-07-30 Thread Carl Baldwin
Public bug reported:

This is a known backlog item.  I don't anticipate that this will be
fixed in Juno.  Recommend low to medium importance.

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350413

Title:
  Migration of distributed router to legacy (central) not implemented

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  This is a known backlog item.  I don't anticipate that this will be
  fixed in Juno.  Recommend low to medium importance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362242] [NEW] bridge_mappings isn't bound to any segment warning from l2pop

2014-08-27 Thread Carl Baldwin
Public bug reported:

Rossella asked me about this yesterday [1].  A brief discussion in the
DVR meeting this morning [2] seems to indicate it is not a serious
problem.  But, I thought I'd submit this bug as a place to land for
others who see this warning.  Hopefully at some point we can get it
cleaned up.

Here is the warning line from the log snippet in the pastebin [1].

57582 2014-08-27 14:15:50.401 16987 WARNING
neutron.plugins.ml2.drivers.l2pop.mech_driver
[req-ba914881-f88d-4793-a635-f4844855c9dd None] Port
2aba57cd-5739-433e-bf9a-60193b6bc4e8 updated by agent isn't bound to
any segment

[1] http://paste.openstack.org/raw/101070/
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2014-08-27.log
 at 2014-08-27T15:23:22

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362242

Title:
  bridge_mappings isn't bound to any segment warning from l2pop

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Rossella asked me about this yesterday [1].  A brief discussion in the
  DVR meeting this morning [2] seems to indicate it is not a serious
  problem.  But, I thought I'd submit this bug as a place to land for
  others who see this warning.  Hopefully at some point we can get it
  cleaned up.

  Here is the warning line from the log snippet in the pastebin [1].

  57582 2014-08-27 14:15:50.401 16987 WARNING
  neutron.plugins.ml2.drivers.l2pop.mech_driver
  [req-ba914881-f88d-4793-a635-f4844855c9dd None] Port
  2aba57cd-5739-433e-bf9a-60193b6bc4e8 updated by agent isn't bound to
  any segment

  [1] http://paste.openstack.org/raw/101070/
  [2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2014-08-27.log
 at 2014-08-27T15:23:22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263217] [NEW] Unnecessary call to get_dhcp_port from DeviceManager setup

2013-12-20 Thread Carl Baldwin
Public bug reported:

In the file neutron/agent/linux/dhcp.py, the DeviceManager setup method
calls get_device which calls get_dhcp_port.  This results in an RPC
call.  But, we already had the port in the setup method.

I discovered this as I was trying to optimize the number of these RPC
calls.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1263217

Title:
  Unnecessary call to get_dhcp_port from DeviceManager setup

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In the file neutron/agent/linux/dhcp.py, the DeviceManager setup
  method calls get_device which calls get_dhcp_port.  This results in an
  RPC call.  But, we already had the port in the setup method.

  I discovered this as I was trying to optimize the number of these RPC
  calls.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1263217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269501] [NEW] With removal of explicit _recycle_ip, the method should be removed from db_base_plugin_v2.py

2014-01-15 Thread Carl Baldwin
Public bug reported:

https://review.openstack.org/#/c/58017 removes the need to explicitly
call _recycle_ip.  More specifically, it makes _recycle_ip a simple
pass-through to _delete_ip_allocation.  A follow-on needs to remove
_recycle_ip and replace calls with _delete_ip_allocation.
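The transitional state after that change looks roughly like this simplified sketch (not the actual plugin class); the follow-on deletes the shim and points callers at _delete_ip_allocation directly:

```python
class PluginSketch:
    """Simplified stand-in for the plugin's IP recycling path."""

    def __init__(self):
        self.deleted = []

    def _delete_ip_allocation(self, context, network_id, subnet_id, ip):
        # The real code removes the IPAllocation row; record it here.
        self.deleted.append((subnet_id, ip))

    def _recycle_ip(self, context, network_id, subnet_id, ip):
        # Now a pure pass-through; scheduled for removal.
        self._delete_ip_allocation(context, network_id, subnet_id, ip)
```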

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269501

Title:
  With removal of explicit _recycle_ip, the method should be removed
  from db_base_plugin_v2.py

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  https://review.openstack.org/#/c/58017 removes the need to explicitly
  call _recycle_ip.  More specifically, it makes _recycle_ip a simple
  pass-through to _delete_ip_allocation.  A follow-on needs to remove
  _recycle_ip and replace calls with _delete_ip_allocation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269505] [NEW] Remove release_lease from DhcpBase

2014-01-15 Thread Carl Baldwin
Public bug reported:

https://review.openstack.org/#/c/56263 removes the need to explicitly
call release_lease on this class.  It should be removed.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269505

Title:
  Remove release_lease from DhcpBase

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  https://review.openstack.org/#/c/56263 removes the need to explicitly
  call release_lease on this class.  It should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269567] [NEW] L3 agent making RPC calls to get external network id

2014-01-15 Thread Carl Baldwin
Public bug reported:

The L3 agent makes an RPC call to get the external network id each time
_process_routers is called, as long as it was not configured using the
gateway_external_network_id configuration option.

This adds time to the processing of each router.  Since the external
network id will not change, we should be able to fetch and save this
value once.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269567

Title:
  L3 agent making RPC calls to get external network id

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The L3 agent makes an RPC call to get the external network id each
  time _process_routers is called, as long as it was not configured
  using the gateway_external_network_id configuration option.

  This adds time to the processing of each router.  Since the external
  network id will not change, we should be able to fetch and save this
  value once.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272565] [NEW] Validation should not log bad user input at error level

2014-01-24 Thread Carl Baldwin
Public bug reported:

I noticed this while reviewing Ic2c87174.  When I read through log
files, I don't want to see errors like this that come from validating
bad user input.  Debug severity is more appropriate.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1272565

Title:
  Validation should not log bad user input at error level

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  I noticed this while reviewing Ic2c87174.  When I read through log
  files, I don't want to see errors like this that come from validating
  bad user input.  Debug severity is more appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1272565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282206] [NEW] Uncaught GreenletExit in ProcessLauncher if wait called after greenlet kill

2014-02-19 Thread Carl Baldwin
Public bug reported:

I'm running ProcessLauncher's wait in a green thread.  I attempted to
kill the green thread and then call wait so that the process launcher
object could reap its child processes cleanly.  This resulted in a
traceback from an uncaught GreenletExit exception.

The eventlet documentation states multiple times that GreenletExit is
thrown after .kill() has been called to kill a thread.  I think
ProcessLauncher should expect this and deal with it.
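A sketch of the expected handling. GreenletExit is redefined locally so the example is self-contained; the real class comes from the greenlet package and likewise derives from BaseException, which is why a plain `except Exception` never catches it:

```python
class GreenletExit(BaseException):
    """Local stand-in for greenlet.GreenletExit."""

def launcher_wait(reap_children):
    """Sketch of a ProcessLauncher.wait() that tolerates being killed.

    If the green thread running wait() is killed, GreenletExit is
    raised inside it; treat that as a normal shutdown request instead
    of letting it escape as a stack trace.
    """
    try:
        reap_children()
    except GreenletExit:
        return "stopped"  # clean exit after the thread was killed
    return "done"
```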

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282206

Title:
  Uncaught GreenletExit in ProcessLauncher if wait called after greenlet
  kill

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm running ProcessLauncher's wait in a green thread.  I attempted to
  kill the green thread and then call wait so that the process launcher
  object could reap its child processes cleanly.  This resulted in a
  traceback from an uncaught GreenletExit exception.

  The eventlet documentation states multiple times that GreenletExit is
  thrown after .kill() has been called to kill a thread.  I think
  ProcessLauncher should expect this and deal with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282206/+subscriptions



[Yahoo-eng-team] [Bug 1287524] [NEW] ip_lib netns.execute should work with or without namespace

2014-03-03 Thread Carl Baldwin
Public bug reported:

There are a number of places in the neutron code that run an ip command
like this:

if self.network.namespace:
ip_wrapper = ip_lib.IPWrapper(self.root_helper,
  self.network.namespace)
ip_wrapper.netns.execute(cmd)
else:
utils.execute(cmd, self.root_helper)

This code could be simplified if netns.execute simply checked if there
was a namespace defined or not.
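A minimal sketch of the proposed simplification, assuming a namespace-optional execute (the helper name and signature are hypothetical, not actual Neutron code):

```python
def netns_execute(cmd, namespace=None, root_helper=None):
    """Hypothetical namespace-optional execute: prefix the command with
    'ip netns exec <ns>' only when a namespace is set, so callers no
    longer need the if/else shown above.  Returns the command instead of
    running it, for illustration."""
    full_cmd = list(cmd)
    if namespace:
        full_cmd = ['ip', 'netns', 'exec', namespace] + full_cmd
    # A real implementation would hand full_cmd to
    # utils.execute(full_cmd, root_helper) here.
    return full_cmd
```

Callers then pass the (possibly None) namespace straight through and the branching lives in one place.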

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287524

Title:
  ip_lib netns.execute should work with or without namespace

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  There are a number of places in the neutron code that run an ip
  command like this:

  if self.network.namespace:
  ip_wrapper = ip_lib.IPWrapper(self.root_helper,
self.network.namespace)
  ip_wrapper.netns.execute(cmd)
  else:
  utils.execute(cmd, self.root_helper)

  This code could be simplified if netns.execute simply checked if there
  was a namespace defined or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287524/+subscriptions



[Yahoo-eng-team] [Bug 1287824] [NEW] l3 agent makes too many individual sudo/ip netns calls

2014-03-04 Thread Carl Baldwin
Public bug reported:

Basically, calls to sudo, root_wrap, and ip netns exec all add overhead
that can make these calls very expensive.  Developing an effective way
of consolidating these calls into considerably fewer calls will be a
big win.  This assumes the mechanism for consolidating them does not
itself add a lot of overhead.
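One possible consolidation, assuming commands targeting the same namespace can be chained through a single shell invocation (hypothetical helper, not actual Neutron code; iproute2 also offers `ip -batch` for a similar effect):

```python
import shlex

def batch_netns_commands(namespace, commands):
    """Chain several commands for one namespace into a single
    'ip netns exec' invocation, paying the sudo/rootwrap/netns-enter
    cost once instead of once per command."""
    script = ' && '.join(
        ' '.join(shlex.quote(arg) for arg in cmd) for cmd in commands)
    return ['ip', 'netns', 'exec', namespace, 'sh', '-c', script]
```

For N commands this replaces N sudo/rootwrap/netns round trips with one, at the cost of coarser error reporting (a failure aborts the rest of the chain).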

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1287824

Title:
  l3 agent makes too many individual sudo/ip netns calls

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Basically, calls to sudo, root_wrap, and ip netns exec all add
  overhead that can make these calls very expensive.  Developing an
  effective way of consolidating these calls into considerably fewer
  calls will be a big win.  This assumes the mechanism for consolidating
  them does not itself add a lot of overhead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1287824/+subscriptions



[Yahoo-eng-team] [Bug 1289066] [NEW] L3 Agent cannot process RPC messages until _sync_routers_task is finished

2014-03-06 Thread Carl Baldwin
Public bug reported:

When the L3 agent starts or restarts, it almost immediately goes into a
_sync_routers_task run.  This task is synchronized with _rpc_loop so
that only one can happen at a time.

The problem with this is that -- at least at scale -- the
_sync_routers_task can take a VERY LONG time to run.  I've observed it
take 1-2 hours!  This is WAY too long to wait before I can do something
with my router like add a floating ip.

The thing is, _sync_routers_task is important to do periodically but it
is mostly just checking that things are still in the right state.  It
should never take precedence over responding to RPC messages.  The RPC
messages represent work that the system has just been asked to perform.
It is silly to make it wait a long time for a maintenance task to
complete.
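One way the agent could keep RPC work ahead of maintenance work is a priority queue; this is an assumed design sketch, not the actual agent code:

```python
import heapq
import itertools

# Hypothetical priority scheme: RPC work (user-requested) always sorts
# ahead of periodic resync work; the counter preserves FIFO order
# within a priority level.
RPC, SYNC = 0, 1
_counter = itertools.count()
work = []

def enqueue(priority, task):
    heapq.heappush(work, (priority, next(_counter), task))

# A long-running resync is queued first, then a user asks for a
# floating IP...
enqueue(SYNC, 'full resync')
enqueue(RPC, 'add floating ip')

# ...yet the RPC request is popped first instead of waiting hours.
priority, _, task = heapq.heappop(work)
```

The resync still happens; it just no longer blocks freshly requested work.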

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289066

Title:
  L3 Agent cannot process RPC messages until _sync_routers_task is
  finished

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When the L3 agent starts or restarts, it almost immediately goes into a
  _sync_routers_task run.  This task is synchronized with _rpc_loop so
  that only one can happen at a time.

  The problem with this is that -- at least at scale -- the
  _sync_routers_task can take a VERY LONG time to run.  I've observed it
  take 1-2 hours!  This is WAY too long to wait before I can do
  something with my router like add a floating ip.

  The thing is, _sync_routers_task is important to do periodically but
  it is mostly just checking that things are still in the right state.
  It should never take precedence over responding to RPC messages.  The
  RPC messages represent work that the system has just been asked to
  perform.  It is silly to make it wait a long time for a maintenance
  task to complete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289066/+subscriptions



[Yahoo-eng-team] [Bug 1411865] [NEW] pylint is failing in the check queue

2015-01-16 Thread Carl Baldwin
Public bug reported:

I'm getting a pylint error on my newest patch [2] that doesn't seem to
be related to the patch.  I seem to get the same error on master.
Logstash is hinting at something starting to go wrong [1].

"Possible unbalanced tuple unpacking with sequence defined at line 153:
left side has 2 label(s), right side has 0 value(s) (unbalanced-tuple-
unpacking)"

 [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidW5iYWxhbmNlZCB0dXBsZSB1bnBhY2tpbmdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMTQ0NzYyMTc4MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
[2] https://review.openstack.org/#/c/147972
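The warning fires on patterns like the following (hypothetical minimal reproduction; the actual offending sequence was at neutron's line 153):

```python
def lookup(key):
    # Hypothetical helper: may return either a 2-tuple or an empty tuple.
    table = {'a': ('x', 'y')}
    return table.get(key, ())

# pylint's complaint: the left side has 2 labels while the right side
# may yield 0 values (unbalanced-tuple-unpacking) -- and the
# empty-tuple case really does fail at runtime.
try:
    left, right = lookup('missing')
    outcome = 'unpacked'
except ValueError:
    outcome = 'ValueError'
```

A sudden flood of this message across unrelated patches usually points at a pylint version bump rather than the patches themselves.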

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  I'm getting a pylint error on my newest patch [2] that doesn't seem to
- be related.  I seem to get the same error on master.  Logstash is
- hinting at something starting to go wrong.
+ be related to the patch.  I seem to get the same error on master.
+ Logstash is hinting at something starting to go wrong [1].
  
  "Possible unbalanced tuple unpacking with sequence defined at line 153:
  left side has 2 label(s), right side has 0 value(s) (unbalanced-tuple-
  unpacking)"
  
-  [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidW5iYWxhbmNlZCB0dXBsZSB1bnBhY2tpbmdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMTQ0NzYyMTc4MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
+  [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidW5iYWxhbmNlZCB0dXBsZSB1bnBhY2tpbmdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMTQ0NzYyMTc4MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
  [2]https://review.openstack.org/#/c/147972

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1411865

Title:
  pylint is failing in the check queue

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm getting a pylint error on my newest patch [2] that doesn't seem to
  be related to the patch.  I seem to get the same error on master.
  Logstash is hinting at something starting to go wrong [1].

  "Possible unbalanced tuple unpacking with sequence defined at line
  153: left side has 2 label(s), right side has 0 value(s) (unbalanced-
  tuple-unpacking)"

   [1] 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidW5iYWxhbmNlZCB0dXBsZSB1bnBhY2tpbmdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyMTQ0NzYyMTc4MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==
  [2] https://review.openstack.org/#/c/147972

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1411865/+subscriptions



[Yahoo-eng-team] [Bug 1562067] Re: neutron network-show should display the router

2016-03-25 Thread Carl Baldwin
We won't support this on the API.  But, I don't think we'd need to
anyway.  Let me explain...

In the Neutron model, routers attach to subnets, a part of the network.
If you show the network, it lists the subnets that belong to it:

$ neutron net-show private
+-+--+
| Field   | Value|
+-+--+
...
| subnets | c03e61b6-... |
| | 45dec6b6-... |
...
+-+--+

Listing a router's ports will show the subnets to which it is attached:

$ neutron router-port-list router1
+--------------+------------------------------------------------------------------+
| id           | fixed_ips                                                        |
+--------------+------------------------------------------------------------------+
| 2597d0aa-... | {"subnet_id": "c03e61b6-...", "ip_address": "10.0.0.1"}          |
| d40960c3-... | {"subnet_id": "45dec6b6-...", "ip_address": "fd80:a290:1ca0::1"} |
+--------------+------------------------------------------------------------------+

From here, it is just a matter of iterating over routers, which could be
a pain if there are many routers, but it is possible.  The router-port-
list command simply uses the port GET API [1] and passes the device_id
of the router.  Looking at that API, I see we could do better.  By
passing network_id instead of the router's device ID, we get a list of
ports on the network.  Filtering by
device_owner="network:router_interface" should give just the router
ports connected to the network.  Within these ports, the device_id will
be the router id of the router connected.  I was able to prove this
works with the existing API using this URL [2] against one of my
devstacks.  It worked!  From here it is just a matter of iterating over
the ports in the result, gathering the router id(s), and de-duplicating
the list since one router can have multiple ports on the network.

If anything, this is a neutron client request.

[1] http://developer.openstack.org/api-ref-networking-v2.html#ports
[2] 
http://10.224.24.226:9696/v2.0/ports?network_id=31c0cb78-a381-405f-9349-6f2f944aec25&device_owner=network:router_interface
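The iteration described above can be sketched as follows (the port data is shortened to the fields that matter; ids are placeholders):

```python
def routers_on_network(ports):
    """De-duplicate router ids from the ports returned by
    GET /v2.0/ports?network_id=<net>&device_owner=network:router_interface;
    each port's device_id is the id of the attached router."""
    return sorted({port['device_id'] for port in ports})

# Shortened sample response:
sample_ports = [
    {'id': '2597d0aa-...', 'device_id': 'router-1'},
    {'id': 'd40960c3-...', 'device_id': 'router-1'},  # same router, 2nd port
    {'id': '7f3b9c21-...', 'device_id': 'router-2'},
]
```

The set comprehension handles the de-duplication for routers with multiple ports on the network.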

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => Won't Fix

** Summary changed:

- neutron network-show should display the router
+ It is difficult to find routers connected given a neutron network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562067

Title:
  It is difficult to find routers connected given a neutron network

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  New

Bug description:
  Neutron net-show should display the router the network is attached to
  if it is attached to a router.  There doesn't appear to be any way to
  start with a network name/ID and determine the router that network is
  using.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562067/+subscriptions



[Yahoo-eng-team] [Bug 1540512] Re: [RFE] Host Aware IPAM

2016-03-28 Thread Carl Baldwin
After some review and reading Neil's final comments, I'm going to close
this.  I'm about to crack IPAM for routed networks.  With this request
in mind, I'll be looking to evolve the IPAM interface so that it can
take host information into consideration.  I'll also be working on the
deferred IP allocation.

I think there will be a way to do all of this which can accommodate both
hard and soft boundaries for cidrs.  I'll have some code up for review
soon and will add Neil and Petra to the review.

In the meantime, I suggest we review and merge Petra's devref
contribution: [1].

[1] https://review.openstack.org/#/c/289460/

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540512

Title:
  [RFE] Host Aware IPAM

Status in neutron:
  Won't Fix

Bug description:
  In some Neutron use cases it is desirable for the IP address(es) that
  Neutron allocates to a VM to depend on the compute host that Nova
  chooses for that VM. For example, in the networking-calico approach
  where data is routed between compute hosts, it's desirable for the IP
  addresses that are used on a given host (or rack) to be clustered
  within a small IP prefix, so that the routes to those IP addresses can
  be aggregated on routers within the data center fabric. Neutron's new
  pluggable IPAM facility allows us in principle to start doing this,
  but we will need to design and implement three other pieces of the
  solution:

  - Firstly, we need a way for the pluggable IPAM framework to pass the
  chosen host into a pluggable IPAM module, such that a module can take
  the host into account if it so wishes.  (If this does not already
  exist - we are not yet sure!)

  - Secondly, to demonstrate that, we need a sample pluggable IPAM module
  that allocates IP addresses in some host-aware way.

  - Thirdly, we eventually need to enhance the port setup exchange between
  Nova and Neutron, such that Neutron can choose an IP address _after_
  Nova has chosen the compute host.

  This work is being done as part of an Outreachy internship, and the
  last point cannot reasonably fit in that scope.  Hence this RFE
  proposes just the first two points, as a useful and concrete step
  towards the eventual complete picture.
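A minimal illustration of the host-aware allocation idea, assuming per-host /24 prefixes carved out of a /16 so routes aggregate per host (all names and sizing here are hypothetical, not Neutron code):

```python
import ipaddress

class HostAwarePool:
    """Hypothetical host-aware IPAM sketch: carve one /24 per compute
    host out of a /16 so each host's addresses aggregate into a single
    route on the fabric."""

    def __init__(self, cidr):
        self._prefixes = ipaddress.ip_network(cidr).subnets(new_prefix=24)
        self.by_host = {}   # host name -> its /24
        self._cursor = {}   # host name -> iterator over free addresses

    def allocate(self, host):
        # The first allocation for a host claims the next free /24.
        if host not in self.by_host:
            self.by_host[host] = next(self._prefixes)
            self._cursor[host] = self.by_host[host].hosts()
        return next(self._cursor[host])
```

The framework change the RFE asks for amounts to getting `host` into this call at allocation time.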

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540512/+subscriptions



[Yahoo-eng-team] [Bug 1564983] Re: Request Mitaka release for openstack/networking-hyperv

2016-04-01 Thread Carl Baldwin
I pushed the tag to gerrit.  Please confirm that the release was
successful.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564983

Title:
  Request Mitaka release for openstack/networking-hyperv

Status in neutron:
  Fix Released

Bug description:
  We are requesting that the networking-hyperv release 2.0.0 be created
  from the current head of the master branch. The stable/mitaka branch
  needs to be created as well.

  commit id: f0f7c187e57f2f2c476d7ffd5beec794f1aca43f

  Branch: stable/mitaka

  release version: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564983/+subscriptions



[Yahoo-eng-team] [Bug 1568992] [NEW] PD adds a non-sensical check to nonpluggable IPAM

2016-04-11 Thread Carl Baldwin
Public bug reported:

In looking through the IPAM code, I ran across this check [1] which is
a major difference between the pluggable and non-pluggable IPAM code
paths.  I got to thinking about it, and I don't see a valid use case for
this check.

Let's say an ip_address was specified in fixed_ips and the subnet cidr
is detected by _get_subnet_for_fixed_ip as the provisional PD prefix
(::1/64).  That would mean that the IP address specified is contained in
::1/64 (i.e. ::1 or ::dead:beef:1:1) in order for check_subnet_ip  to
match it [2].  This doesn't make sense in the first place to specify
this kind of IP address for a subnet for which the network address isn't
even known.  We can't allocate such an IP to the port.

Now let's look at what happens even if you do specify such a
nonsensical address.  This conditional is False and so it drops to the
else clause [3].  Here it is checked if it is a router port and if it is
an auto address subnet.  PD subnets are all auto address subnets, so it
depends on if it is a router port.  If it is, the subnet is added to
the IPs to allocate *without* the ip address!  If it isn't, the entire
thing is ignored.  Either way, the specific IP address that was
specified is ignored.  All auto address subnets are added to the list
later so it doesn't even matter that it *might* have been added here.

So, if I understand everything correctly, this check doesn't make sense.

[1] 
https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/db/ipam_non_pluggable_backend.py#L248-249
[2]
https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/ipam/utils.py#L28
[3] 
https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/db/ipam_non_pluggable_backend.py#L268
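A small illustration of why the check is moot, using a simplified stand-in for check_subnet_ip; this sketch assumes the provisional PD prefix written as ::1/64 above normalizes to the network ::/64:

```python
import ipaddress

# The provisional PD prefix from the bug text; strict=False normalizes
# ::1/64 to the network ::/64 (an assumption of this sketch).
PROVISIONAL_PD_NET = ipaddress.ip_network('::1/64', strict=False)

def check_subnet_ip(net, ip):
    # Simplified stand-in for neutron.ipam.utils.check_subnet_ip.
    return ipaddress.ip_address(ip) in net
```

So only addresses like ::1 or ::dead:beef:1:1 could ever match the provisional subnet, and specifying such an address is meaningless before the real prefix has been delegated.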

** Affects: neutron
     Importance: Low
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568992

Title:
  PD adds a non-sensical check to nonpluggable IPAM

Status in neutron:
  In Progress

Bug description:
  In looking through the IPAM code, I ran across this check [1] which is
  a major difference between the pluggable and non-pluggable IPAM
  code paths.  I got to thinking about it, and I don't see a valid use
  case for this check.

  Let's say an ip_address was specified in fixed_ips and the subnet cidr
  is detected by _get_subnet_for_fixed_ip as the provisional PD prefix
  (::1/64).  That would mean that the IP address specified is contained
  in ::1/64 (i.e. ::1 or ::dead:beef:1:1) in order for check_subnet_ip
  to match it [2].  This doesn't make sense in the first place to
  specify this kind of IP address for a subnet for which the network
  address isn't even known.  We can't allocate such an IP to the port.

  Now let's look at what happens even if you do specify such a
  nonsensical address.  This conditional is False and so it drops to the
  else clause [3].  Here it is checked if it is a router port and if it
  is an auto address subnet.  PD subnets are all auto address subnets,
  so it depends on if it is a router port.  If it is, the subnet is
  added to the IPs to allocate *without* the ip address!  If it isn't,
  the entire thing is ignored.  Either way, the specific IP address that
  was specified is ignored.  All auto address subnets are added to the
  list later so it doesn't even matter that it *might* have been added
  here.

  So, if I understand everything correctly, this check doesn't make
  sense.

  [1] 
https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/db/ipam_non_pluggable_backend.py#L248-249
  [2]
  https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/ipam/utils.py#L28
  [3] 
https://github.com/openstack/neutron/blob/c5bc5bda34/neutron/db/ipam_non_pluggable_backend.py#L268

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1568992/+subscriptions



[Yahoo-eng-team] [Bug 1536437] Re: 'module' object has no attribute 'moved_function' failure with required debtcollector version: needs 0.9.0, not 0.8.0

2016-04-14 Thread Carl Baldwin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536437

Title:
  'module' object has no attribute 'moved_function' failure with
  required debtcollector version: needs 0.9.0, not 0.8.0

Status in neutron:
  Fix Released

Bug description:
  When building the Debian package of Neutron for the Mitaka b2 release,
  I get the below unit test failures. All other tests are ok (6451
  tests). Please help me to fix these last 3.

  
  ==
  FAIL: unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
  unittest2.loader._FailedTest.neutron.tests.unit.agent.linux.test_bridge_lib
  --
  _StringException: Traceback (most recent call last):
  ImportError: Failed to import test module: neutron.tests.unit.agent.linux.test_bridge_lib
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "neutron/tests/unit/agent/linux/test_bridge_lib.py", line 20, in <module>
      from neutron.agent.linux import bridge_lib
    File "neutron/agent/linux/bridge_lib.py", line 23, in <module>
      from neutron.i18n import _LE
    File "neutron/i18n.py", line 25, in <module>
      _ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
  AttributeError: 'module' object has no attribute 'moved_function'

  
  ==
  FAIL: unittest2.loader._FailedTest.neutron.tests.unit.cmd.server
  unittest2.loader._FailedTest.neutron.tests.unit.cmd.server
  --
  _StringException: Traceback (most recent call last):
  ImportError: Failed to import test module: neutron.tests.unit.cmd.server
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 490, in _find_test_path
      package = self._get_module_from_name(name)
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "neutron/tests/unit/cmd/server/__init__.py", line 16, in <module>
      from neutron.cmd.eventlet import server
    File "neutron/cmd/eventlet/server/__init__.py", line 17, in <module>
      from neutron.server import wsgi_pecan
    File "neutron/server/wsgi_pecan.py", line 23, in <module>
      from neutron.pecan_wsgi import app as pecan_app
    File "neutron/pecan_wsgi/app.py", line 23, in <module>
      from neutron.pecan_wsgi import hooks
    File "neutron/pecan_wsgi/hooks/__init__.py", line 23, in <module>
      from neutron.pecan_wsgi.hooks import translation
    File "neutron/pecan_wsgi/hooks/translation.py", line 22, in <module>
      from neutron.i18n import _LE
    File "neutron/i18n.py", line 25, in <module>
      _ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
  AttributeError: 'module' object has no attribute 'moved_function'

  ==
  FAIL: unittest2.loader._FailedTest.neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
  unittest2.loader._FailedTest.neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
  --
  _StringException: Traceback (most recent call last):
  ImportError: Failed to import test module: neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent
  Traceback (most recent call last):
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in _find_test_path
      module = self._get_module_from_name(name)
    File "/usr/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in _get_module_from_name
      __import__(name)
    File "neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py", line 21, in <module>
      from neutron.agent.linux import bridge_lib
    File "neutron/agent/linux/bridge_lib.py", line 23, in <module>
      from neutron.i18n import _LE
    File "neutron/i18n.py", line 25, in <module>
      _ = moves.moved_function(neutron._i18n._, '_', __name__, message=message)
  AttributeError: 'module' object has no attribute 'moved_function'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536437/+subscriptions



[Yahoo-eng-team] [Bug 1456624] Re: DVR Connection to external network lost when associating a floating IP

2016-04-20 Thread Carl Baldwin
In my review of the patch, I stated that I think the cure is much worse
than the problem.  I don't think anyone has chimed in to change my mind
and so I'm marking this as won't fix.  Ping me if you think it should be
fixed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456624

Title:
  DVR Connection to external network lost when associating a floating IP

Status in neutron:
  Won't Fix

Bug description:
  In DVR, when a floating IP is associated with a port, the current
  connection (ssh or ping) to the external network will hang (become
  unresponsive).

  The connection may be any TCP, UDP, ICMP connections which are tracked
  in conntrack.

  Having a distributed router with interfaces for an internal network
  and external network.

  When launching an instance, pinging an external network, and then
  associating a floating IP to the instance, the connection is lost,
  i.e. the ping fails.
  When running the ping command again, it is successful.

  Version
  ==
  RHEL 7.1
  python-nova-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch

  How to reproduce
  ==
  1. Create a distributed router and attach an internal and an external network 
to it.
  # neutron router-create --distributed True router1
  # neutron router-interface-add router1 
  # neutron router-gateway-set 

  2. Launch an instance and associate it with a floating IP.
  # nova boot --flavor m1.small --image fedora --nic net-id= vm1

  3. Go to the console of the instance and run ping to an external network:
   # ping 8.8.8.8

  4.  Associate a floating IP to the instance:
   # nova floating-ip-associate vm1 

  5. Verify that the ping fails.
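For reference, a hedged sketch of the kind of conntrack cleanup a mitigation might perform after the FIP association (hypothetical helper, not the reviewed patch; running the command needs root and the conntrack-tools package):

```python
def conntrack_flush_cmd(fixed_ip):
    """Build a command deleting conntrack entries sourced from the
    instance's fixed IP, so new flows take the floating-IP path rather
    than matching stale SNAT-path state."""
    return ['conntrack', '-D', '-s', fixed_ip]
```

This only builds the command list; an agent would hand it to its rootwrap-aware execute helper.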

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456624/+subscriptions



[Yahoo-eng-team] [Bug 1573197] Re: [RFE] Neutron API enhancement for visibility into multi-segmented networks

2016-05-02 Thread Carl Baldwin
After talking with Sukhdev and a Manila person at the summit on Friday
and looking at the Manila use case, I understand much better what the
use case is you're after here.  An ML2 mechanism driver that will bind
the port to the Manila file server will have access to all of the
details it needs.  Wouldn't it?  I tend to think that we should be
thinking along those lines instead of thinking about exposing all of the
internal details through the API so that Manila can connect to neutron
networks outside of Neutron.

At the summit, I asked that this request be written in terms of the
higher level use case needed by Manila, instead of that use case being
presented as an after-thought at the end of the description.  Actually,
the link to the Manila use case [1] is a pretty good start.  If you can
do that, it will be the start of a decent discussion.

[1] https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-
support

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573197

Title:
  [RFE] Neutron API enhancement for visibility into multi-segmented
  networks

Status in neutron:
  Incomplete

Bug description:
  Neutron networks are, by default, assumed to be single segmented L2
  domains, represented by a single segmentation ID (e.g VLAN ID).
  Current neutron API (neutron net-show) works well with this model.
  However, with the introduction of HPB, this assumption is not true
  anymore. Networks are now multi-segmented. A given network could have
  anywhere from 3 to N number of segments depending upon the
  breadth/size of the data center topology. This will be true with the
  implementation of routed networks as well.

  In general, the segments, in multi-segmented networks, will be
  dynamically created.  As mentioned earlier, the number of these
  segments will grow and shrink dynamically representing the breadth of
  data center topology. Therefore, at the very least, admins would like
  to have visibility into these segments - e.g. which segmentation
  type/id is consumed in which segment of the network.

  Vendors and operators are forced to come up with their own hacks to get
  such visibility.  This RFE proposes that we enhance the neutron API to
  address this visibility issue in a vendor/implementation-agnostic way -
  by either enhancing "neutron net-show" or by introducing additional
  commands such as "neutron net-segments-list" and "neutron
  net-segment-show".

  This capability is needed for Neutron-Manila integration as well.
  Manila requires visibility into the segmentation IDs used in specific
  segments of a network. Please see Manila use case here -
  https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-
  support

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573197/+subscriptions



[Yahoo-eng-team] [Bug 1577572] Re: [RFE] Routing providers framework

2016-05-05 Thread Carl Baldwin
I think we should do just enough of this for the fast exit rfe [1] and
leave the rest of this for another time.

[1] https://bugs.launchpad.net/neutron/+bug/1577488

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577572

Title:
  [RFE] Routing providers framework

Status in neutron:
  Won't Fix

Bug description:
  It would extremely helpful if Neutron were able to determine whether
  there is a mechanism operating on an external network that can handle
  advertising next-hops to upstream routers. "Fast-exit" DVR
  (https://bugs.launchpad.net/neutron/+bug/1577488) is an example of a
  use case where a framework for providing this information inside
  Neutron would be useful. With the Mitaka release BGP dynamic routing
functionality was added to Neutron. BGP dynamic routing exposes the
  concept of bindings between networks and BGP processes, which can
  provide the information described above. However, this is not a
  generic approach that supports other routing providers such as IPv6
  proxy ND, OSPF, or other routing technology. What is needed is a
  framework that allows Neutron to ask whether there is a mechanism
  operating on an external or provider network that is handling routing
  traffic.

  To this end I'm proposing the creation of a simple, yet generic
  framework in Neutron that allows any number of "routing providers" to
  be registered with Neutron. Each routing provider is able to answer
  the question "do you route to next-hops on network X?". Among other
  things, this answer allows fast-exit DVR to be enabled dynamically
  based on whether appropriate routing is in place without placing more
  config file burden on operators or having tenants set attributes on a
  router to get fast-exit treatment.
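A minimal sketch of what such a framework could look like (all names
here are hypothetical; nothing like this exists in Neutron today):

```python
from abc import ABC, abstractmethod

class RoutingProvider(ABC):
    """A mechanism (BGP speaker, OSPF, IPv6 proxy ND, ...) that may be
    advertising next-hops for Neutron networks upstream."""

    @abstractmethod
    def routes_to_next_hops(self, network_id):
        """Answer the question: do you route to next-hops on this network?"""

class BgpProvider(RoutingProvider):
    # Stand-in for the Mitaka BGP dynamic routing network bindings.
    def __init__(self, bound_network_ids):
        self._bound = set(bound_network_ids)

    def routes_to_next_hops(self, network_id):
        return network_id in self._bound

class RoutingProviderRegistry:
    def __init__(self):
        self._providers = []

    def register(self, provider):
        self._providers.append(provider)

    def routes_to_next_hops(self, network_id):
        # Fast-exit DVR could be enabled dynamically iff this returns
        # True for the external network in question.
        return any(p.routes_to_next_hops(network_id)
                   for p in self._providers)

registry = RoutingProviderRegistry()
registry.register(BgpProvider(["ext-net-1"]))
assert registry.routes_to_next_hops("ext-net-1")
assert not registry.routes_to_next_hops("ext-net-2")
```

The registry answers the question generically, so adding an OSPF or
proxy-ND provider would not require any change to the callers.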

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577572/+subscriptions



[Yahoo-eng-team] [Bug 1547271] Re: Preserve subnet_create behavior in presence of subnet pools

2016-05-19 Thread Carl Baldwin
This is no longer relevant.  The default subnetpool behavior has changed
in Mitaka.  You need to request the use of a default subnetpool
explicitly.  That makes this problem moot.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: openstack-manuals
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547271

Title:
  Preserve subnet_create behavior in presence of subnet pools

Status in neutron:
  Invalid
Status in openstack-api-site:
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  https://review.openstack.org/279378
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit d38aeade9db7169955b4524a5fc8d814067dec15
  Author: Armando Migliaccio 
  Date:   Thu Feb 11 20:56:20 2016 -0800

  Preserve subnet_create behavior in presence of subnet pools
  
  The development of the auto_allocate extension, which relies on subnet
  pools, revealed some discrepancies in the behavior of the subnet_create
  API: if a user specifies a cidr on subnet_create like he/she is used
  to, the API outcome changes in presence of default subnetpools. For
  instance the command 'neutron subnet-create network <CIDR>' returns a
  subnet associated to a pool, if a default pool exists, but it does not
  otherwise. At the same time, attempting to create a subnet without
  passing any detail but the ip version also behaves unexpectedly
  depending on the state of the system.
  
  Whilst this could be considered convenient in some circumstances,
  it is problematic for a couple of reasons: a) it breaks a well defined
  contract (backward compat of the subnet-create command), and b) it
  leads to ambiguity of the API.
  
  This patch restores the semantic of the subnet_create API where it is
  mandatory to specify CIDR/IP version regardless of the conditions
  under which the request is issued. On the other hand, associating
  subnets to subnet pools will have to be more prescriptive, and
  require the user to explicitly state his/her intentions when creating
  the subnet: if a user does want a subnet (CIDR) to belong to a subnet
  pool, he/she will have to state so, either by specifying a subnetpool
  name/uuid, or by asking for a default one.
  
  This will be tackled as a follow-up, especially in order to address the
  needs of prefix delegation which currently rely on the ambiguous
  behavior that this patch is fixing.
  
  Closes-bug: 1545199
  
  DocImpact: subnetpools can be used to simplify IPAM, and can be specified
  during subnet creation.
  
  Change-Id: Idf516ed9db24d779742cdff0584b48182a8502d6

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547271/+subscriptions



[Yahoo-eng-team] [Bug 1583759] [NEW] Invalid input for operation: IP allocation requires subnets for network

2016-05-19 Thread Carl Baldwin
Public bug reported:

Several people now, including Brian Haley and me, have been chasing down
this stack trace [1] for a few weeks.  We see it in failed jobs and
begin chasing it down only to find out that it is a red herring.

I'm filing this bug because we ought to capture what we know about it,
figure out whether it is correlated with any failures, and hopefully
eliminate the trace so that it no longer distracts us from other problems.

I was poking through the stack trace in github.  Since I had the links
handy, I thought I'd include them here [2-11].  Also, this logstash
query might be helpful [12].

[1] http://paste.openstack.org/show/497738/
[2] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/api/rpc/handlers/dhcp_rpc.py#L211
[3] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/api/rpc/handlers/dhcp_rpc.py#L93
[4] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/common/utils.py#L162
[5] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/ml2/plugin.py#L1137
[6] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/ml2/plugin.py#L1106
[7] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/db_base_plugin_v2.py#L1247
[8] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L204
[9] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L362
[10] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L245
[11] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_backend_mixin.py#L335-L337
[12] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22InvalidInput%3A%20Invalid%20input%20for%20operation%3A%20IP%20allocation%20requires%20subnets%20for%20network%5C%22

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583759

Title:
  Invalid input for operation: IP allocation requires subnets for
  network

Status in neutron:
  New

Bug description:
  Several people now, including Brian Haley and me, have been chasing
  down this stack trace [1] for a few weeks.  We see it in failed jobs
  and begin chasing it down only to find out that it is a red herring.

  I'm filing this bug because we ought to capture what we know about it,
  figure out whether it is correlated with any failures, and hopefully
  eliminate the trace so that it no longer distracts us from other
  problems.

  I was poking through the stack trace in github.  Since I had the links
  handy, I thought I'd include them here [2-11].  Also, this logstash
  query might be helpful [12].

  [1] http://paste.openstack.org/show/497738/
  [2] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/api/rpc/handlers/dhcp_rpc.py#L211
  [3] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/api/rpc/handlers/dhcp_rpc.py#L93
  [4] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/common/utils.py#L162
  [5] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/ml2/plugin.py#L1137
  [6] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/plugins/ml2/plugin.py#L1106
  [7] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/db_base_plugin_v2.py#L1247
  [8] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L204
  [9] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L362
  [10] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_non_pluggable_backend.py#L245
  [11] 
https://github.com/openstack/neutron/blob/79c1d7efc1/neutron/db/ipam_backend_mixin.py#L335-L337
  [12] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22InvalidInput%3A%20Invalid%20input%20for%20operation%3A%20IP%20allocation%20requires%20subnets%20for%20network%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583759/+subscriptions



[Yahoo-eng-team] [Bug 1585738] [NEW] ML2 doesn't return fixed_ips on a port update with binding

2016-05-25 Thread Carl Baldwin
Public bug reported:

I found this yesterday while working on deferred IP allocation for
routed networks.  However, it isn't unique to deferred port binding.
With my deferred IP allocation patch [2], I need to be able to make a
port create call [1] without binding information that doesn't allocate
an IP address.  Then, I need to follow it up with a port update which
sends host binding information and allocates an IP address.  But, when I
do that, the response doesn't contain the IP addresses that were
allocated [3].  However, immediately following it with a GET on the same
port shows the allocation [4].

This happens only with ML2, not with other plugins.  I've put up a patch
to run unit tests with ML2 that exposes this problem [5].
The problem can be reproduced on master [6].  I can get it to happen by
creating a network without a subnet, creating a port on the network
(with no IP address), and then calling port update to allocate an IP
address.

If this goes unaddressed, Nova will have to make a GET call after doing
a port update with binding information when working with a port with
deferred IP allocation.

[1] http://paste.openstack.org/show/505419/
[2] https://review.openstack.org/#/c/320631/
[3] http://paste.openstack.org/show/505420/
[4] http://paste.openstack.org/show/505421/
[5] 
http://logs.openstack.org/57/320657/2/check/gate-neutron-python27/153a619/testr_results.html.gz
[6] https://review.openstack.org/321152
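
To make the extra round trip concrete, here is a toy check over plain
dicts (the fixed_ips field name follows the Neutron port API; the helper
itself is hypothetical):

```python
def needs_refetch(update_response, fetched_port):
    """Return True when the port-update response omitted fixed_ips that
    a follow-up GET on the same port reveals -- the extra API round
    trip Nova would be forced into by this bug."""
    return (not update_response.get("fixed_ips")
            and bool(fetched_port.get("fixed_ips")))

# Response from the port update carrying host binding info (as in [3]):
update_resp = {"id": "p1", "fixed_ips": []}
# Immediately following GET on the same port (as in [4]):
get_resp = {"id": "p1",
            "fixed_ips": [{"subnet_id": "s1",
                           "ip_address": "10.0.0.5"}]}
assert needs_refetch(update_resp, get_resp)
```

With the bug fixed, the update response would already carry the
allocation and `needs_refetch` would return False.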

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585738

Title:
  ML2 doesn't return fixed_ips on a port update with binding

Status in neutron:
  New

Bug description:
  I found this yesterday while working on deferred IP allocation for
  routed networks.  However, it isn't unique to deferred port binding.
  With my deferred IP allocation patch [2], I need to be able to make a
  port create call [1] without binding information that doesn't allocate
  an IP address.  Then, I need to follow it up with a port update which
  sends host binding information and allocates an IP address.  But, when
  I do that, the response doesn't contain the IP addresses that were
  allocated [3].  However, immediately following it with a GET on the
  same port shows the allocation [4].

  This happens only with ML2, not with other plugins.  I've put up a
  patch to run unit tests with ML2 that exposes this problem [5].  The
  problem can be reproduced on master [6].  I can get
  it to happen by creating a network without a subnet, creating a port
  on the network (with no IP address), and then calling port update to
  allocate an IP address.

  If this goes unaddressed, Nova will have to make a GET call after
  doing a port update with binding information when working with a port
  with deferred IP allocation.

  [1] http://paste.openstack.org/show/505419/
  [2] https://review.openstack.org/#/c/320631/
  [3] http://paste.openstack.org/show/505420/
  [4] http://paste.openstack.org/show/505421/
  [5] 
http://logs.openstack.org/57/320657/2/check/gate-neutron-python27/153a619/testr_results.html.gz
  [6] https://review.openstack.org/321152

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585738/+subscriptions



[Yahoo-eng-team] [Bug 1586135] Re: Revert "Improve performance of ensure_namespace"

2016-05-26 Thread Carl Baldwin
This was filed from a null-merge.  I think it is okay to mark it
invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586135

Title:
  Revert "Improve performance of ensure_namespace"

Status in neutron:
  Invalid

Bug description:
  https://review.openstack.org/314250
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a323769143001d67fd1b3b4ba294e59accd09e0e
  Author: Ryan Moats 
  Date:   Tue Oct 20 15:51:37 2015 +

  Revert "Improve performance of ensure_namespace"
  
  This reverts commit 81823e86328e62850a89aef9f0b609bfc0a6dacd.
  
  Unneeded optimization: this commit only improves execution
  time on the order of milliseconds, which is less than 1% of
  the total router update execution time at the network node.
  
  This also
  
  Closes-bug: #1574881
  
  Change-Id: Icbcdf4725ba7d2e743bb6761c9799ae436bd953b

  commit 7fcf0253246832300f13b0aa4cea397215700572
  Author: OpenStack Proposal Bot 
  Date:   Thu Apr 21 07:05:16 2016 +

  Imported Translations from Zanata
  
  For more information about this automatic import see:
  https://wiki.openstack.org/wiki/Translations/Infrastructure
  
  Change-Id: I9e930750dde85a9beb0b6f85eeea8a0962d3e020

  commit 643b4431606421b09d05eb0ccde130adbf88df64
  Author: OpenStack Proposal Bot 
  Date:   Tue Apr 19 06:52:48 2016 +

  Imported Translations from Zanata
  
  For more information about this automatic import see:
  https://wiki.openstack.org/wiki/Translations/Infrastructure
  
  Change-Id: I52d7460b3265b5460b9089e1cc58624640dc7230

  commit 1ffea42ccdc14b7a6162c1895bd8f2aae48d5dae
  Author: OpenStack Proposal Bot 
  Date:   Mon Apr 18 15:03:30 2016 +

  Updated from global requirements
  
  Change-Id: Icb27945b3f222af1d9ab2b62bf2169d82b6ae26c

  commit b970ed5bdac60c0fa227f2fddaa9b842ba4f51a7
  Author: Kevin Benton 
  Date:   Fri Apr 8 17:52:14 2016 -0700

  Clear DVR MAC on last agent deletion from host
  
  Once all agents are deleted from a host, the DVR MAC generated
  for that host should be deleted as well to prevent a buildup of
  pointless flows generated in the OVS agent for hosts that don't
  exist.
  
  Closes-Bug: #1568206
  Change-Id: I51e736aa0431980a595ecf810f148ca62d990d20
  (cherry picked from commit 92527c2de2afaf4862fddc101143e4d02858924d)

  commit eee9e58ed258a48c69effef121f55fdaa5b68bd6
  Author: Mike Bayer 
  Date:   Tue Feb 9 13:10:57 2016 -0500

  Add an option for WSGI pool size
  
  Neutron currently hardcodes the number of
  greenlets used to process requests in a process to 1000.
  As detailed in
  
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
  
  this can cause requests to wait within one process
  for available database connection while other processes
  remain available.
  
  By adding a wsgi_default_pool_size option functionally
  identical to that of Nova, we can lower the number of
  greenlets per process to be more in line with a typical
  max database connection pool size.
  
  DocImpact: a previously unused configuration value
 wsgi_default_pool_size is now used to affect
 the number of greenlets used by the server. The
 default number of greenlets also changes from 1000
 to 100.
  Change-Id: I94cd2f9262e0f330cf006b40bb3c0071086e5d71
  (cherry picked from commit 9d573387f1e33ce85269d3ed9be501717eed4807)

  commit bf66cc6f74133cfe6c1ab75287d39814ac44b068
  Author: Clayton O'Neill 
  Date:   Thu Mar 24 15:28:21 2016 +

  Don't disconnect br-int from phys br if connected
  
  When starting up, we don't want to delete the patch port between br-int
  and the physical bridges. In liberty the br-int bridge was changed to
  not tear down flows on startup, and  change
  I9801b76829021c9a0e6358982e1136637634a521 will change the physical
  bridges to not tear down flows also.
  
  Without this patch the patch port is torn down and not reinstalled until
  after the initial flows are set back up.
  
  Partial-Bug: #1514056
  Change-Id: I05bf5105a6f3acf6a313ce6799648a095cf8ec96
  (cherry picked from commit a549f30fad93508bf9dfdcfb20cd522f7add27b0)

  commit 93795a4bda47605d5616476b2a456772308aa3c3
  Author: Kevin Benton 
  Date:   Mon Mar 28 14:14:

[Yahoo-eng-team] [Bug 1581931] Re: RBAC -access_as_external - exclude tenant

2016-06-03 Thread Carl Baldwin
Update from Drivers' meeting:
http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-06-02-22.00.log.html#l-124

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581931

Title:
  RBAC -access_as_external - exclude tenant

Status in neutron:
  Won't Fix

Bug description:
  If we have 50 tenants and want to expose the external network to 49 of
them, we will have to create 49 RBAC rules.
  IMHO that is not very convenient.

  I believe we should make an option to exclude specific tenant/tenants
  from being targeted by the rbac rule.

  Maybe we can add an "exclude_tenants" attribute to the RBAC policy in
  order to make this happen.
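
As a sketch of the proposed semantics (the "exclude_tenants" field is
the suggestion above, not an existing Neutron attribute; the helper is
illustrative):

```python
def external_network_visible(tenant_id, rbac_rules):
    # Hypothetical evaluation of the proposed "exclude_tenants" field:
    # a wildcard access_as_external rule applies to every tenant except
    # those explicitly listed.
    for rule in rbac_rules:
        if rule.get("action") != "access_as_external":
            continue
        if tenant_id in rule.get("exclude_tenants", ()):
            continue
        if rule.get("target_tenant") in ("*", tenant_id):
            return True
    return False

# One rule instead of 49: share with everyone except tenant-50.
rules = [{"action": "access_as_external",
          "target_tenant": "*",
          "exclude_tenants": ["tenant-50"]}]
assert external_network_visible("tenant-1", rules)
assert not external_network_visible("tenant-50", rules)
```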

  MITAKA

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581931/+subscriptions



[Yahoo-eng-team] [Bug 1566191] Re: [RFE] Allow multiple networks with FIP range to be associated with Tenant router

2016-06-09 Thread Carl Baldwin
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566191

Title:
  [RFE] Allow multiple networks with FIP range to be associated with
  Tenant router

Status in neutron:
  Won't Fix

Bug description:
  This requirement came out during the Manila-Neutron integration discussion,
to provide a solution for multi-tenant environments working with a file share
store.
  The way to solve it is as follows:
  A dedicated NAT-based network connection should be established between a
tenant's private network (where the tenant's VMs reside) and a data center
local storage network. Sticking to the IP-based authorization used by Manila,
the NAT-assigned floating IPs in the storage network are used to check
authorization in the storage backend, as well as to deal with possibly
overlapping IP ranges in the private networks of different tenants. A
dedicated NAT, rather than the public FIP, is suggested since public FIPs are
usually a limited resource.
  In order to orchestrate the above use case, it should be possible to
associate more than one subnet with a 'FIP' range with the router (via router
interface) and enable NAT based on the destination subnet.
  This behaviour was possible in Mitaka and worked for the MidoNet plugin,
but due to https://bugs.launchpad.net/neutron/+bug/1556884 it won't be
possible any more.

  A related bug for a security use case that can benefit from the proposed
  behavior is described here:
  https://bugs.launchpad.net/neutron/+bug/1250105

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566191/+subscriptions



[Yahoo-eng-team] [Bug 1557909] Re: SNAT namespace is not getting cleared after the manual move of SNAT with dead agent

2016-06-10 Thread Carl Baldwin
** Changed in: neutron
   Status: Fix Released => In Progress

** Description changed:

+ Llatest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
+ 
  Stale snat namespace on the controller after recovery of dead l3 agent.
  
  Note: Only on Stable/LIBERTY Branch:
- 
  
  Setup:
  Multiple controller (DVR_SNAT) setup.
  
  Steps:
  1) Create tenant network, subnet and router.
-  2) Create a external network
-  3) Attached internal & external network to a router
-  4) Create VM on above tenant network.
-  5) Make sure VM can reach outside using CSNAT.
-  6) Find router hosting l3 agent and stop the l3 agent.
-  7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
-  8) Start the l3 agent on the controller (the one that  stopped in step6)
-  9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
- 
+  2) Create a external network
+  3) Attached internal & external network to a router
+  4) Create VM on above tenant network.
+  5) Make sure VM can reach outside using CSNAT.
+  6) Find router hosting l3 agent and stop the l3 agent.
+  7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
+  8) Start the l3 agent on the controller (the one that  stopped in step6)
+  9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
  
  Example:
  | cfa97c12-b975-4515-86c3-9710c9b88d76 | L3 agent   | vm2-ctl2-936 | 
:-)   | True   | neutron-l3-agent  |
  | df4ca7c5-9bae-4cfb-bc83-216612b2b378 | L3 agent   | vm1-ctl1-936 | 
:-)   | True   | neutron-l3-agent  |
- 
  
  mysql> select * from csnat_l3_agent_bindings;
  
+--+--+-+--+
  | router_id| l3_agent_id  
| host_id | csnat_gw_port_id |
  
+--+--+-+--+
  | 0fb68420-9e69-41bb-8a88-8ab53b0faabb | cfa97c12-b975-4515-86c3-9710c9b88d76 
| NULL| NULL |
  
+--+--+-+--+
- 
  
  On vm1-ctl1-936
  
  Stale SNAT namespace on Initially hosting controller.
  
  ubuntu@vm1-ctl1-936:~/devstack$ sudo ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  
- 
  On vm2-ctl2-936 (2nd Controller)
  
  ubuntu@vm2-ctl2-936:~$ ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb

** Description changed:

- Llatest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
+ Latest patch (2016-06-10):  https://review.openstack.org/#/c/326729/
  
  Stale snat namespace on the controller after recovery of dead l3 agent.
  
  Note: Only on Stable/LIBERTY Branch:
  
  Setup:
  Multiple controller (DVR_SNAT) setup.
  
  Steps:
  1) Create tenant network, subnet and router.
   2) Create a external network
   3) Attached internal & external network to a router
   4) Create VM on above tenant network.
   5) Make sure VM can reach outside using CSNAT.
   6) Find router hosting l3 agent and stop the l3 agent.
   7) Manually move router to other controller (dvr_snat mode). SNAT namespace 
should be create on new controller node.
   8) Start the l3 agent on the controller (the one that  stopped in step6)
   9) Notice that snat namespace is now available on 2 controller and it is not 
getting deleted from the agent which is not hosting it.
  
  Example:
  | cfa97c12-b975-4515-86c3-9710c9b88d76 | L3 agent   | vm2-ctl2-936 | 
:-)   | True   | neutron-l3-agent  |
  | df4ca7c5-9bae-4cfb-bc83-216612b2b378 | L3 agent   | vm1-ctl1-936 | 
:-)   | True   | neutron-l3-agent  |
  
  mysql> select * from csnat_l3_agent_bindings;
  
+--+--+-+--+
  | router_id| l3_agent_id  
| host_id | csnat_gw_port_id |
  
+--+--+-+--+
  | 0fb68420-9e69-41bb-8a88-8ab53b0faabb | cfa97c12-b975-4515-86c3-9710c9b88d76 
| NULL| NULL |
  
+--+--+-+--+
  
  On vm1-ctl1-936
  
  Stale SNAT namespace on Initially hosting controller.
  
  ubuntu@vm1-ctl1-936:~/devstack$ sudo ip netns
  snat-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  qrouter-0fb68420-9e69-41bb-8a88-8ab53b0faabb
  
  On vm2-ctl2-936 (2nd Controller)
  
  ubuntu@vm2-ctl2-936:~$ ip netns
  snat-0fb6842

[Yahoo-eng-team] [Bug 1293818] [NEW] Agents don't need root to list namespaces

2014-03-17 Thread Carl Baldwin
Public bug reported:

Given the expense of sudo (at scale) and rootwrap calls, agents should
not be using root for commands that don't need it.  Listing namespaces
is one of those.

(I could have sworn I already fixed this which is why I didn't fix it
until today)
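
For illustration, `ip netns list` essentially enumerates a
world-readable directory, so a non-root sketch might look like this
(the helper name is illustrative; the path is the conventional iproute2
location):

```python
import os

def list_namespaces(netns_dir="/var/run/netns"):
    # `ip netns list` just reads this directory; both the directory
    # and its entries are typically world-readable, so no sudo/rootwrap
    # round trip is needed merely to enumerate namespaces.
    try:
        return sorted(os.listdir(netns_dir))
    except (FileNotFoundError, PermissionError):
        return []
```

At scale this removes one rootwrap invocation per agent polling loop.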

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293818

Title:
  Agents don't need root to list namespaces

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Given the expense of sudo (at scale) and rootwrap calls, agents should
  not be using root for commands that don't need it.  Listing namespaces
  is one of those.

  (I could have sworn I already fixed this which is why I didn't fix it
  until today)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293818/+subscriptions



[Yahoo-eng-team] [Bug 1298658] [NEW] Stale external gateway devices left behind

2014-03-27 Thread Carl Baldwin
Public bug reported:

This is a follow on to https://bugs.launchpad.net/neutron/+bug/1244853.

I found today that the same problem can happen with external gateway
devices.  Those should be identified and removed in a manner similar to
the fix for the other bug.

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1298658

Title:
  Stale external gateway devices left behind

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is a follow on to
  https://bugs.launchpad.net/neutron/+bug/1244853.

  I found today that the same problem can happen with external gateway
  devices.  Those should be identified and removed in a manner similar
  to the fix for the other bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1298658/+subscriptions



[Yahoo-eng-team] [Bug 1301035] [NEW] Nova notifier thread does not run in rpc_worker sub-processes

2014-04-01 Thread Carl Baldwin
Public bug reported:

This was reported to me today by Maru.  When an rpc worker is spawned as a
sub-process, that happens after the nova notifier thread has already
started.

eventlet.hubs.use_hub() is the call in
neutron/openstack/common/service.py that causes all thread execution to
stop.

From the eventlet documentation:  "Make sure to do this before the
application starts doing any I/O! Calling use_hub completely eliminates
the old hub, and any file descriptors or timers that it had been
managing will be forgotten."

Maru's observation is that this means threads should not be spawned
before forking the process if they need to run in the child process.  I
agree.

The reason that threads spawn is that the plugin gets loaded prior to
forking and the thread for the nova notifier is started in the __init__
method of a sub-class of the plugin.
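
The fix amounts to an ordering constraint: spawn the notifier thread
after the fork, from each worker's initialization, rather than from the
plugin's __init__. A toy stand-in for that pattern (all names here are
hypothetical, not the real notifier class):

```python
import threading

class NovaNotifier:
    """Illustrative stand-in for the nova notifier thread owner."""

    def __init__(self):
        # Note: __init__ deliberately starts no thread.  A thread
        # started in the parent does not survive os.fork() into the
        # child, and eventlet's use_hub() discards pre-existing timers
        # and file descriptors.
        self._thread = None
        self.events = []

    def start(self):
        # In the fixed design this is invoked from each worker's
        # post-fork initialization hook.
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        self.events.append("notifier running")

    def join(self):
        self._thread.join()

notifier = NovaNotifier()
notifier.start()  # called after worker spawn, not at plugin load time
notifier.join()
assert notifier.events == ["notifier running"]
```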

** Affects: neutron
 Importance: Undecided
     Assignee: Carl Baldwin (carl-baldwin)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301035

Title:
  Nova notifier thread does not run in rpc_worker sub-processes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This was reported to me today by Maru.  When an rpc worker is spawned as a
  sub-process, that happens after the nova notifier thread has already
  started.

  eventlet.hubs.use_hub() is the call in
  neutron/openstack/common/service.py that causes all thread execution
  to stop.

  From the eventlet documentation:  "Make sure to do this before the
  application starts doing any I/O! Calling use_hub completely
  eliminates the old hub, and any file descriptors or timers that it had
  been managing will be forgotten."

  Maru's observation is that this means threads should not be spawned
  before forking the process if they need to run in the child process.
  I agree.

  The reason that threads spawn is that the plugin gets loaded prior to
  forking and the thread for the nova notifier is started in the
  __init__ method of a sub-class of the plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301035/+subscriptions



[Yahoo-eng-team] [Bug 1301042] [NEW] Routers may never be torn down if router_delete_namespaces is False

2014-04-01 Thread Carl Baldwin
Public bug reported:

The code recently added in https://review.openstack.org/#/c/30988/ very
nicely cleans out stale routers, assuming that
self.conf.router_delete_namespaces is true.

The problem is that automatic namespace deletion is still a bit unstable
because of problems with the kernel and the iproute utility.  So, many
users may not have self.conf.router_delete_namespaces set to True.  In
this case, all of the advantages added by the above mentioned patch
don't help us.

The problem arises if a router gets deleted or moved to another agent
while the L3 agent is down.  When the L3 agent comes back up, it will
not touch the router and the router will continue to function as if it
were never deleted.
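
The underlying cleanup decision is a simple set difference, and it
should not depend on namespace deletion being enabled (a sketch; the
function name is illustrative):

```python
def routers_to_tear_down(local_router_ids, server_router_ids):
    # On agent startup, routers present locally but no longer scheduled
    # to this agent by the server must be torn down even when
    # router_delete_namespaces is False; otherwise a router deleted or
    # moved while the agent was down keeps forwarding traffic as if
    # nothing had happened.
    return set(local_router_ids) - set(server_router_ids)

# r2 was deleted (or rescheduled) while the agent was down:
assert routers_to_tear_down({"r1", "r2"}, {"r1"}) == {"r2"}
```

Whether the namespace itself is then removed can remain governed by the
config option; the point is that the router's state must be torn down
regardless.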

** Affects: neutron
 Importance: Undecided
 Assignee: Carl Baldwin (carl-baldwin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1301042

Title:
  Routers may never be torn down if router_delete_namespaces is False

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The code recently added https://review.openstack.org/#/c/30988/ very
  nicely cleans out stale routers assuming that
  self.conf.router_delete_namespaces is true.

  The problem is that automatic namespace deletion is still a bit
  unstable because of problems with the kernel and the iproute utility.
  So, many users may not have self.conf.router_delete_namespaces set to
  True.  In this case, all of the advantages added by the above
  mentioned patch don't help us.

  The problem arises if a router gets deleted or moved to another agent
  while the L3 agent is down.  When the L3 agent comes back up, it will
  not touch the router and the router will continue to function as if it
  were never deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1301042/+subscriptions



[Yahoo-eng-team] [Bug 1312402] [NEW] Setting gateway in L3 should use "replace default via ..." instead of add.

2014-04-24 Thread Carl Baldwin
Public bug reported:

Just came across this.  I noticed that the ip_lib.py code uses replace
instead of add.  Using the common code would be good anyway.
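A minimal sketch of the suggested command assembly (names are
illustrative, not neutron's actual ip_lib API):

```python
def set_default_gateway(gateway_ip, namespace):
    """'ip route replace' is idempotent: it installs the default route
    whether or not one already exists, whereas 'ip route add' fails
    with 'RTNETLINK answers: File exists' when a default route is
    already present.
    """
    return ["ip", "netns", "exec", namespace,
            "ip", "route", "replace", "default", "via", gateway_ip]
```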

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312402

Title:
  Setting gateway in L3 should use "replace default via ..." instead of
  add.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Just came across this.  I noticed that the ip_lib.py code uses replace
  instead of add.  Using the common code would be good anyway.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312402/+subscriptions



[Yahoo-eng-team] [Bug 1312467] [NEW] On external networks with multiple subnets, routers need onlink routes for all subnets

2014-04-24 Thread Carl Baldwin
Public bug reported:

This subject came up on IRC here [1]. It relates to the blueprint about
pluggable external network connections and so I jumped in.

There are two reasons that using multiple external networks to allow multiple 
floating ip subnets [2] is not optimal.
- Extra L2 infrastructure needed.
- A neutron router cannot have a gateway connection to more than one external 
network. So, floating IPs wouldn't be able to float as freely as we'd like them 
to.

I cracked open devstack and started playing with it. I tried this first
just to add a second subnet full of floating IPs.

neutron subnet-create ext-net 10.224.24.0/24 --disable-dhcp

In devstack, I needed to add a "gateway router". I did this by adding an
IP to the br-ex interface. In public cloud, we'd need to configure the
upstream router as a gateway on the second subnet. This shouldn't be
difficult. We'd have to run this by the networking team to be sure.

sudo ip addr add 10.224.24.1/24 dev br-ex

At this point, I was able to get a router to host floating IPs on both
subnets! Pretty cool! I was very surprised it worked so easily.

There is one bug which this bug report addresses! Traffic between
floating IPs on the second subnet went up to the router and then back
down. The upstream router sent ICMP redirect packets periodically back
to the Neutron router sourcing the traffic. These did the router no good
because what it really needed to know was that the IP was on link but
the upstream router couldn't tell it that.  Some upstream routers may
not be configured to send redirects or route back through the port of
origin.

The answer to this is to add an on-link route for each subnet on the
external network to each router's gateway interface. This will require
an L3 agent change but should not be very difficult.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2014-04-08.log
 starting at 2014-04-08T23:23:51 (near the bottom)
[2] 
http://docs.openstack.org/admin-guide-cloud/content/adv_cfg_l3_agent_multi_extnet.html
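The proposed fix can be sketched as follows (a hedged sketch; device
and function names are assumptions):

```python
def onlink_route_cmds(gateway_device, external_cidrs):
    """One 'scope link' route per external subnet on the router's
    gateway device makes every floating-IP subnet resolvable on-link,
    so traffic between floating IPs on the second subnet no longer
    hairpins through the upstream router (which answers with ICMP
    redirects that the Neutron router cannot use).
    """
    return [["ip", "route", "replace", cidr,
             "dev", gateway_device, "scope", "link"]
            for cidr in external_cidrs]
```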

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- On an external network with multiple subnets, routers need online routes for 
all subnets
+ On external networks with multiple subnets, routers need onlink routes for 
all subnets

** Description changed:

- This subject came up on IRC here http://eavesdrop.openstack.org/irclogs
- /%23openstack-neutron/%23openstack-neutron.2014-04-08.log starting at
- 2014-04-08T23:23:51 (near the bottom). It relates to the blueprint about
+ This subject came up on IRC here [1]. It relates to the blueprint about
  pluggable external network connections and so I jumped in.
  
- There are two reasons that using multiple external networks to allow multiple 
floating ip subnets is not optimal
+ There are two reasons that using multiple external networks to allow multiple 
floating ip subnets [2] is not optimal.
  - Extra L2 infrastructure needed.
  - A neutron router cannot have a gateway connection to more than one external 
network. So, floating IPs wouldn't be able to float as freely as we'd like them 
to.
  
- Since I needed to understand the current state of things for my
- blueprint, I cracked open devstack and started playing with it. I tried
- this first just to add a second subnet full of floating IPs.
+ I cracked open devstack and started playing with it. I tried this first
+ just to add a second subnet full of floating IPs.
  
  neutron subnet-create ext-net 10.224.24.0/24 --disable-dhcp
  
  In devstack, I needed to add a "gateway router". I did this by adding an
  IP to the br-ex interface. In public cloud, we'd need to configure the
  upstream router as a gateway on the second subnet. This shouldn't be
  difficult. We'd have to run this by the networking team to be sure.
  
  sudo ip addr add 10.224.24.1/24 dev br-ex
  
  At this point, I was able to get a router to host floating IPs on both
  subnets! Pretty cool! I was very surprised it worked so easily.
  
- There is one caveat! Traffic between floating IPs on the second subnet
- went up to the router and then back down. The upstream router sent ICMP
- redirect packets periodically back to the Neutron router sourcing the
- traffic. These did the router no good because what it really needed to
- know was that the IP was on link but the upstream router couldn't tell
- it that.  Some upstream routers may not be configured to send redirects
- or route back through the port of origin.
+ There is one bug which this bug report addresses! Traffic between
+ floating IPs on the second subnet went up to the router and then back
+ down. The upstream router sent ICMP redirect packets periodically back
+ to the Neutron router sourcing the traffic. These did the router no good
+ because what it really needed to know was that the IP was on link but
+ the upstream router couldn't tell it that.  Some upstream routers may
+ not be configured to send redirects or route back 

[Yahoo-eng-team] [Bug 1324189] [NEW] Python Neutron client cannot update subnet

2014-05-28 Thread Carl Baldwin
Public bug reported:

I was testing https://review.openstack.org/#/c/62042 and found that the
python neutron client is unable to update fields in a subnet, especially
the allocation pools.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324189

Title:
  Python Neutron client cannot update subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I was testing https://review.openstack.org/#/c/62042 and found that
  the python neutron client is unable to update fields in a subnet,
  especially the allocation pools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324189/+subscriptions



[Yahoo-eng-team] [Bug 1324189] Re: Python Neutron client cannot update subnet

2014-05-28 Thread Carl Baldwin
** Changed in: neutron
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324189

Title:
  Python Neutron client cannot update subnet

Status in Python client library for Neutron:
  New

Bug description:
  I was testing https://review.openstack.org/#/c/62042 and found that
  the python neutron client is unable to update fields in a subnet,
  especially the allocation pools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1324189/+subscriptions



[Yahoo-eng-team] [Bug 1370033] Re: Admin should be able to manually select the active instance of a HA router

2016-01-11 Thread Carl Baldwin
I just adjusted the status and tags to reflect what has been discussed
on the review [1].  Maybe we need to discuss this again in the drivers
team meeting.

[1] https://review.openstack.org/#/c/257299/

** Tags removed: rfe-approved
** Tags added: rfe

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370033

Title:
  Admin should be able to manually select the active instance of a HA
  router

Status in neutron:
  New

Bug description:
  The admin can see where the active replica of an HA router is. Once this bug: 
https://bugs.launchpad.net/neutron/+bug/1401095
  is solved, the admin will be able to manually move HA routers from one agent 
to the next. Combining the two gives a decent, if not ideal, way to move the 
master: unschedule it from the master node, thereby moving it to a backup:

  For example if hosts A and B are hosting router R, and router R is
  active on host A, you can unschedule it from host A, invoking a
  failover and causing B to become the new active replica. You then
  schedule it to host A once more and it'll host the router again, this
  time as standby.

  This RFE is about adding the ability to manually move the master state
  of a router from one agent to another explicitly. I think this can
  only be done with an API modification, or even a new API verb just for
  HA routers. I think that any API modifications need a slim spec and an
  RFE bug is not enough.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370033/+subscriptions



[Yahoo-eng-team] [Bug 1370033] Re: Admin should be able to manually select the active instance of a HA router

2016-01-11 Thread Carl Baldwin
** Changed in: neutron
   Status: Won't Fix => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370033

Title:
  Admin should be able to manually select the active instance of a HA
  router

Status in neutron:
  New

Bug description:
  The admin can see where the active replica of an HA router is. Once this bug: 
https://bugs.launchpad.net/neutron/+bug/1401095
  is solved, the admin will be able to manually move HA routers from one agent 
to the next. Combining the two gives a decent, if not ideal, way to move the 
master: unschedule it from the master node, thereby moving it to a backup:

  For example if hosts A and B are hosting router R, and router R is
  active on host A, you can unschedule it from host A, invoking a
  failover and causing B to become the new active replica. You then
  schedule it to host A once more and it'll host the router again, this
  time as standby.

  This RFE is about adding the ability to manually move the master state
  of a router from one agent to another explicitly. I think this can
  only be done with an API modification, or even a new API verb just for
  HA routers. I think that any API modifications need a slim spec and an
  RFE bug is not enough.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370033/+subscriptions



[Yahoo-eng-team] [Bug 1543656] [NEW] Cannot connect a router to overlapping subnets with address scopes

2016-02-09 Thread Carl Baldwin
Public bug reported:

This is a known limitation of the reference implementation of address
scopes [1] in the L3 agent: a router cannot be connected to subnets
with overlapping IPs even when the subnets are in different address
scopes and, in theory, there should be no ambiguity.  This was
documented in the devref [2].  I'm filing this bug to capture ideas for
possibly eliminating this limitation in the future.

[1] https://review.openstack.org/#/c/270001/
[2] 
http://docs.openstack.org/developer/neutron/devref/address_scopes.html#address-scopes

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: address-scopes l3-ipam-dhcp

** Tags added: address-scopes l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543656

Title:
  Cannot connect a router to overlapping subnets with address scopes

Status in neutron:
  Confirmed

Bug description:
  This is a known limitation of the reference implementation of address
  scopes [1] in the L3 agent: a router cannot be connected to
  subnets with overlapping IPs even when the subnets are in different
  address scopes and, in theory, there should be no ambiguity.  This was
  documented in the devref [2].  I'm filing this bug to capture ideas
  for possibly eliminating this limitation in the future.

  [1] https://review.openstack.org/#/c/270001/
  [2] 
http://docs.openstack.org/developer/neutron/devref/address_scopes.html#address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543656/+subscriptions



[Yahoo-eng-team] [Bug 1544768] [NEW] [RFE] Differentiate between static and floating subnets

2016-02-11 Thread Carl Baldwin
Public bug reported:

I've been thinking about this for a little while now.  There seems to be
something different about floating IP subnets and other (I'll call them
static in this context) subnets in some use cases.

- On an external network where operators wish to use private IPs for router 
ports (and DVR FIP ports)  and public for floating IPs.
- Enable using floating IPs on provider networks without routers [1].  This has 
come up a lot.  In many cases, operators want them to be public while the 
static ones are private.
- On routed networks where VM instance and router ports need IPs from their 
segments but floating IPs can be routed more flexibly.

These boil down to two ways I see to differentiate subnets:

- public vs private
- L2 bound vs routed

We could argue the definitions of public and private but I don't think
that's necessary.  Public could mean globally routable or routable
within some organization.  Private would mean not public.

An L2 bound subnet is one used on a segment where arp is expected to
work.  The opposite type can be routed by some L3 mechanism.

One possible way to make this distinction might be to mark certain
subnets as floating subnets.  The rules, roughly, would be as follows:

- When allocating floating IPs, prefer floating subnets.  (fallback to 
non-floating to support backward compatibility?)
- Don't allocate non-floating IP ports from floating subnets.

[1] http://lists.openstack.org/pipermail/openstack-
operators/2016-February/009551.html
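The two rules above can be sketched as a simple allocation preference
(a hedged sketch; the field names and function are illustrative, not a
proposed Neutron API):

```python
def pick_subnet(subnets, for_floating_ip):
    """subnets: list of dicts like {"id": ..., "is_floating": bool}.

    - Floating IPs prefer floating subnets, falling back to
      non-floating ones for backward compatibility.
    - Fixed (non-floating) ports never draw from floating subnets.
    """
    floating = [s for s in subnets if s.get("is_floating")]
    static = [s for s in subnets if not s.get("is_floating")]
    if for_floating_ip:
        pool = floating or static  # fallback for compatibility
    else:
        pool = static
    return pool[0] if pool else None
```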

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: l3-ipam-dhcp rfe

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: l3-ipam-dhcp rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544768

Title:
  [RFE] Differentiate between static and floating subnets

Status in neutron:
  Confirmed

Bug description:
  I've been thinking about this for a little while now.  There seems to
  be something different about floating IP subnets and other (I'll call
  them static in this context) subnets in some use cases.

  - On an external network where operators wish to use private IPs for router 
ports (and DVR FIP ports)  and public for floating IPs.
  - Enable using floating IPs on provider networks without routers [1].  This 
has come up a lot.  In many cases, operators want them to be public while the 
static ones are private.
  - On routed networks where VM instance and router ports need IPs from their 
segments but floating IPs can be routed more flexibly.

  These boil down to two ways I see to differentiate subnets:

  - public vs private
  - L2 bound vs routed

  We could argue the definitions of public and private but I don't think
  that's necessary.  Public could mean globally routable or routable
  within some organization.  Private would mean not public.

  An L2 bound subnet is one used on a segment where arp is expected to
  work.  The opposite type can be routed by some L3 mechanism.

  One possible way to make this distinction might be to mark certain
  subnets as floating subnets.  The rules, roughly, would be as follows:

  - When allocating floating IPs, prefer floating subnets.  (fallback to 
non-floating to support backward compatibility?)
  - Don't allocate non-floating IP ports from floating subnets.

  [1] http://lists.openstack.org/pipermail/openstack-
  operators/2016-February/009551.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544768/+subscriptions



[Yahoo-eng-team] [Bug 1547705] [NEW] There is no way to use the default subnet pool without first looking it up

2016-02-22 Thread Carl Baldwin
Public bug reported:

With the recent resolution of [1], which removed the automatic fallback
to the default subnet pool, the only way to use the default subnetpool
is to manually look it up and specify it on the command line.  This made
things much less convenient for the end user.

While discussing [1], I agreed to provide a new extension to make this
convenient again.  The extension should be added to the server side to
allow any API consumers to make use of it.

[1] https://bugs.launchpad.net/neutron/+bug/1545199

** Affects: neutron
 Importance: High
 Assignee: Carl Baldwin (carl-baldwin)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547705

Title:
  There is no way to use the default subnet pool without first looking
  it up

Status in neutron:
  New

Bug description:
  With the recent resolution of [1], which removed the automatic
  fallback to the default subnet pool, the only way to use the default
  subnetpool is to manually look it up and specify it on the command
  line.  This made things much less convenient for the end user.

  While discussing [1], I agreed to provide a new extension to make this
  convenient again.  The extension should be added to the server side to
  allow any API consumers to make use of it.

  [1] https://bugs.launchpad.net/neutron/+bug/1545199

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547705/+subscriptions



[Yahoo-eng-team] [Bug 1547705] Re: There is no way to use the default subnet pool without first looking it up

2016-02-22 Thread Carl Baldwin
neutron:  https://review.openstack.org/#/c/282021/
python-neutronclient:  https://review.openstack.org/#/c/282583/

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Changed in: python-neutronclient
   Importance: Undecided => High

** Changed in: python-neutronclient
   Status: New => In Progress

** Changed in: python-neutronclient
Milestone: None => 4.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547705

Title:
  There is no way to use the default subnet pool without first looking
  it up

Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  With the recent resolution of [1], which removed the automatic
  fallback to the default subnet pool, the only way to use the default
  subnetpool is to manually look it up and specify it on the command
  line.  This made things much less convenient for the end user.

  While discussing [1], I agreed to provide a new extension to make this
  convenient again.  The extension should be added to the server side to
  allow any API consumers to make use of it.

  [1] https://bugs.launchpad.net/neutron/+bug/1545199

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547705/+subscriptions



[Yahoo-eng-team] [Bug 1552085] Re: Router Interfaces Are Instantiated in Compute Nodes Which do not need them.

2016-03-07 Thread Carl Baldwin
It needs them all!  DVR routing is supposed to be able to do east/west
routing locally on the compute host.  If the router is connected to 10
subnets, it needs to have 10 interfaces in order for traffic to egress
on any one of them.

Did I miss something?

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552085

Title:
  Router Interfaces Are Instantiated in Compute Nodes Which do not need
  them.

Status in neutron:
  Invalid

Bug description:
  Pre-Conditions: three node dvr topology created with devstack script
  from master branch

  step-by-step:
  as " source openrc admin admin"  (adminstrator)

  Dvr routers is created
  Three Subnets are attached to it.
  Only one vm is instantiated  and attach to only one subnet.

  Expected Output:
  The router is instantiated in the same compute node as the vm with only ONE 
qr- interface attached to BR-INT bridge

  Actual Output:
  The router is instantiated on the same compute node as the vm and THREE qr- 
interfaces are added to the BR-INT  bridge.
  One for each of the subnets attached to the router.

  Problem:
  Only one interface is needed on the BR-INT, the one for the subnet the vm is 
attached to.
  The other two are not doing anything except consuming resources.

  Version: Mitaka (master on March 1st, 2016)
   Ubuntu
  Perceived Severity:
     probably low. The presence of the other two unused interfaces does not 
affect functionality

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552085/+subscriptions



[Yahoo-eng-team] [Bug 1485732] Re: subnet-update of allocation_pool does not prevent orphaning existing ports

2015-11-21 Thread Carl Baldwin
This is by design.  John Kasperski's assessment is correct.  The
allocation pool is not meant to prevent ports from being created outside
of it.  It only exists to restrict automatic allocation of IP addresses
by Neutron.

I don't think the situation warrants a warning.  If anything, better
documentation.
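The by-design behavior can be sketched as follows (an illustrative
sketch only; the function and parameters are assumptions, not Neutron's
actual IPAM code):

```python
import ipaddress

def ip_is_assignable(ip, subnet_cidr, pools, explicit):
    """Automatic allocation (explicit=False) only hands out addresses
    inside the allocation pools; an explicitly requested fixed IP
    merely has to fall within the subnet, so it may legitimately sit
    outside the pools -- shrinking a pool therefore never invalidates
    existing ports.
    """
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(subnet_cidr):
        return False
    if explicit:
        return True  # pools do not constrain explicit requests
    return any(ipaddress.ip_address(lo) <= addr <= ipaddress.ip_address(hi)
               for lo, hi in pools)
```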

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485732

Title:
  subnet-update of allocation_pool does not prevent orphaning existing
  ports

Status in neutron:
  Won't Fix

Bug description:
  An error should be returned when subnet-update is used to modify the
  allocation_pool such that existing neutron ports are no longer
  included.   This operation should not be permitted.

  Currently the existing allocated neutron ports are not verified when a
  subnet allocation pool is changed.  This can lead to unusual
  statistics such as:   there could be 50 allocated neutron objects
  associated with a subnet, however the allocation pool range only
  includes 10 IP addresses and all 10 of those are not allocated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485732/+subscriptions



[Yahoo-eng-team] [Bug 1513574] Re: firewall rules on DVR FIP fails to work for ingress traffic

2015-11-23 Thread Carl Baldwin
At this point, given that fwaas is totally up in the air, I don't think
we're going to take on any more DVR / FWaaS bugs.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513574

Title:
  firewall rules on DVR FIP fails to work for ingress traffic

Status in neutron:
  Won't Fix

Bug description:
  =
  my env
  =
  controller +network node(dvr_snat) + 2 compute nodes(dvr)
  DVR: enable DVR when using devstack to deploy this env
  FWaaS: manually git clone neutron-fwaas and to configure, using iptables as 
driver

  
  
  steps
  
  1) create net, subnet, boot VM-1 on CN-1, VM-2 on CN-2, create router, and 
attach subnet onto router.
  2) create external network, set as router gateway net, create 2 floating IPs 
and associate to two VMs.
  3) confirm DVR FIP works: fip ns created, iptable rules updated in qrouter 
ns, two VMs are pingable by floating IP.
  floating IP like: 192.168.0.4 and 192.168.0.5
  4) create firewall rules, firewall policy and create firewall on router. 
  firewall rule like: 
  fw-r1: ICMP, source: 192.168.0.184/29(none), dest: 192.168.0.0/28(none), 
allow
  fw-r2: ICMP, source: 192.168.0.0/28(none), dest: 192.168.0.184/29(none), 
allow
  5) confirm firewall rules updated in qrouter ns.
  6) on host who has IP like 192.168.0.190, try to ping floating IPs mentioned 
in step 3.
  expected: floating IPs should be pingable (for IP 192.168.0.190 is in 
192.168.0.184/29, and two firewall rules allows)
  observed: no response, "100% packet loss" from the ping command. The floating 
IPs fail to ping.

  
  
  more details
  
  
  firewall iptable rules:
  
  -A INPUT -j neutron-l3-agent-INPUT
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-l3-agent-FORWARD
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-l3-agent-OUTPUT
  -A neutron-filter-top -j neutron-l3-agent-local
  -A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-iv4322a9b15
  -A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-ov4322a9b15
  -A neutron-l3-agent-FORWARD -o rfp-+ -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-FORWARD -i rfp-+ -j neutron-l3-agent-fwaas-defau
  -A neutron-l3-agent-INPUT -m mark --mark 0x1/0x -j ACCEPT
  -A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
  -A neutron-l3-agent-fwaas-defau -j DROP
  -A neutron-l3-agent-iv4322a9b15 -m state --state INVALID -j DROP
  -A neutron-l3-agent-iv4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-iv4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp 
-j ACCEPT
  -A neutron-l3-agent-iv4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp 
-j ACCEPT
  -A neutron-l3-agent-ov4322a9b15 -m state --state INVALID -j DROP
  -A neutron-l3-agent-ov4322a9b15 -m state --state RELATED,ESTABLISHED -j ACCEPT
  -A neutron-l3-agent-ov4322a9b15 -s 192.168.0.0/28 -d 192.168.0.184/29 -p icmp 
-j ACCEPT
  -A neutron-l3-agent-ov4322a9b15 -s 192.168.0.184/29 -d 192.168.0.0/28 -p icmp 
-j ACCEPT

  ---
  DVR FIP nat iptable rules:
  ---
  1) for 192.168.0.4:
  -A PREROUTING -j neutron-l3-agent-PREROUTING
  -A OUTPUT -j neutron-l3-agent-OUTPUT
  -A POSTROUTING -j neutron-l3-agent-POSTROUTING
  -A POSTROUTING -j neutron-postrouting-bottom
  -A neutron-l3-agent-OUTPUT -d 192.168.0.4/32 -j DNAT --to-destination 20.0.1.7
  -A neutron-l3-agent-POSTROUTING ! -i rfp-4bf3186c-d ! -o rfp-4bf3186c-d -m 
conntrack ! --ctstate DNAT -j ACCEPT
  -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
  -A neutron-l3-agent-PREROUTING -d 192.168.0.4/32 -j DNAT --to-destination 
20.0.1.7
  -A neutron-l3-agent-float-snat -s 20.0.1.7/32 -j SNAT --to-source 192.168.0.4
  -A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
  -A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat

  2) for 192.168.0.5:
  -A PREROUTING -j neutron-l3-agent-PREROUTING
  -A OUTPUT -j neutron-l3-agent-OUTPUT
  -A POSTROUTING -j neutron-l3-agent-POSTROUTING
  -A POSTROUTING -j neutron-postrouting-bottom
  -A neutron-l3-agent-OUTPUT -d 192.168.0.5/32 -j DNAT --to-destination 20.0.1.6
  -A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp 
--dport 80 -j REDIRECT --to-ports 9697
  -A neutron-l3-agent-PREROUTING -d 192.168.0.5/32 ! -i qr-+ -j DNAT 
--to-destination 20.0.1.6
  -A neutron-l3-agent-float-snat -s 20.0.1.6/32 -j SNAT --to-source 192.168.0.5
  -A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
  -A neutron-postrouting-bottom -m comment --comment "Perform source NAT on 
outgoing traffic." -j neutron-l3-agent-snat

  ---

[Yahoo-eng-team] [Bug 1521909] Re: subnet_allocation extension is not needed

2015-12-02 Thread Carl Baldwin
This extension is a shim extension.  It indicates the presence of the
feature but doesn't actually provide the feature.  The reason it is
there is due to some confusion about the future of extensions at the
time of development.  It was developed as a core resource but then at
the very last hour some in the community insisted that an extension be
created for it.  It was agreed that this shim extension would be
sufficient so that the feature could merge for Kilo.
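
As a hypothetical sketch (not neutron's actual code), a shim extension simply
advertises its alias so clients can discover the feature, while contributing
no API resources of its own; the class and method names below mirror the
general shape of neutron's extension descriptor interface but are assumptions
for illustration:

```python
# Hypothetical sketch of a "shim" extension descriptor: it advertises an
# alias so API clients can discover the feature, but defines no resources
# of its own -- the actual behavior lives in the core subnetpool resource.
class SubnetAllocationShim:
    """Minimal stand-in for an extension descriptor (illustrative only)."""

    def get_name(self):
        return "Subnet Allocation"

    def get_alias(self):
        # The alias is what appears in the extension list and in a
        # plugin's _supported_extension_aliases.
        return "subnet_allocation"

    def get_description(self):
        return "Enables allocation of subnets from a subnet pool"

    def get_resources(self):
        # A shim extension contributes no new API resources.
        return []
```

Because get_resources() is empty, removing the shim changes only feature
discovery, not the behavior of the core subnetpool resource itself.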

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521909

Title:
  subnet_allocation extention is no need

Status in neutron:
  Won't Fix

Bug description:
  subnetpool is a core resource, so why is there an extension
  "subnet_allocation"? For the ml2 plugin, it appears in its
  _supported_extension_aliases:

  _supported_extension_aliases = ["provider", "external-net", "binding",
                                  "quotas", "security-group", "agent",
                                  "dhcp_agent_scheduler",
                                  "multi-provider", "allowed-address-pairs",
                                  "extra_dhcp_opt", "subnet_allocation",
                                  "net-mtu", "vlan-transparent",
                                  "address-scope", "dns-integration",
                                  "availability_zone",
                                  "network_availability_zone"]

  if we delete subnet_allocation from _supported_extension_aliases, we can
  still create a subnetpool and create a subnet with a subnetpool-id

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509295] Re: L3: agent may do double work upon start/resync

2015-12-03 Thread Carl Baldwin
Reopen if this is shown to be more severe than we currently think it is.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509295

Title:
  L3: agent may do double work upon start/resync

Status in neutron:
  Won't Fix

Bug description:
  The issue was noticed during scale testing of DVR.
  When the l3 agent starts up it initiates a full sync with the neutron
  server: it requests full info about all the routers scheduled to it. At
  the same time the agent may receive various notifications
  (router_added/updated/deleted) which were sent while the agent was
  offline or starting up. For each such notification the agent will
  request the router info again, so the server has to process it twice
  (the first time for the resync request).

  The following optimization makes sense: when the agent is about to
  fullsync we can skip all router notifications since the fullsync should
  bring the agent up to date anyway.
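
  A minimal sketch of that optimization (hypothetical, not the agent's
  actual code): while a fullsync is pending, individual router
  notifications are dropped, since the fullsync will fetch complete,
  fresh state for every router anyway:

```python
import queue

class RouterUpdateQueue:
    """Toy model of an l3-agent update queue with a fullsync guard."""

    def __init__(self):
        self._q = queue.Queue()
        self.fullsync_pending = False

    def add_update(self, router_id, kind):
        # Skip individual notifications while a fullsync is pending:
        # the fullsync retrieves full info for every router, so
        # processing these would make the server do double work.
        if self.fullsync_pending:
            return False
        self._q.put((router_id, kind))
        return True

    def drain(self):
        # Collect everything currently queued (single-threaded toy).
        updates = []
        while not self._q.empty():
            updates.append(self._q.get())
        return updates
```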

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189658] Re: Should choice of DHCP gateway consider whether a subnet's gateway is realized

2015-12-03 Thread Carl Baldwin
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189658

Title:
  Should choice of DHCP gateway consider whether a subnet's gateway is
  realized

Status in neutron:
  Invalid

Bug description:
  This came up as a comment in this review:
  https://review.openstack.org/#/c/31533/6

  The above patch chooses a gateway for the DHCP namespace that is
  consistent with the gateway that the DHCP server hands to new VMs
  booted on to the network as shown by my devstack testing.

  The DHCP agent doesn't appear to take into account whether that
  gateway has been realized.  From the comment by gongysh:

  1. there is a quantum router interface on the subnet which is taking the
  gateway ip
  2. and the gateway ip is a physical one itself.
  3. ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526672] Re: instance ip not updated after subnet-update and reboot

2015-12-16 Thread Carl Baldwin
The allocation_pools attribute of the subnet only affects the automatic
allocation of IPs for new ports created after the pool is set.  It is
not intended to affect existing or manually created ports.
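
As an illustration (a pure sketch, not neutron's IPAM code), automatic
allocation picks the first free address inside the configured pools, while
addresses already assigned to existing ports are simply left alone even if
they fall outside an updated pool:

```python
import ipaddress

def allocate_ip(pools, allocated):
    """Return the first free address within the allocation pools.

    pools: list of (start, end) address strings, inclusive.
    allocated: set of already-assigned address strings; these may lie
    outside the pools (e.g. ports created before a pool update, or ports
    created with an explicit fixed IP) and are never reassigned.
    """
    for start, end in pools:
        ip = ipaddress.ip_address(start)
        last = ipaddress.ip_address(end)
        while ip <= last:
            if str(ip) not in allocated:
                return str(ip)
            ip += 1
    raise RuntimeError("allocation pools exhausted")
```

With the pool from this bug report updated to 10.0.0.100-110, an existing
port at 10.0.0.3 keeps its address, while a newly created port would draw
from the new range.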

** Tags added: l3-ipam-dhcp

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526672

Title:
  instance ip not updated after subnet-update and reboot

Status in neutron:
  Won't Fix

Bug description:
  [Summary]
  Instance ip is not updated after a subnet-update of the allocation-pool,
  even after rebooting the instance

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  after the allocation-pool update, a rebooted instance should re-discover
  its ip by following dhcp rules.
  subnet-update allocation-pool should clear the mapping table.

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1)check devstack network information:
  stack@45-5x:~/devstack$ neutron net-list
  +--------------------------------------+---------+----------------------------------------------------------+
  | id                                   | name    | subnets                                                  |
  +--------------------------------------+---------+----------------------------------------------------------+
  | c746a5d5-fc58-4600-97bf-c4efa93f7934 | public  | 5e41b56f-fd95-46c1-a514-b918c68eb9bc                     |
  |                                      |         | d611f012-7c65-4b49-9fec-fcb7013f7fad                     |
  | 1159f483-6d87-496f-b3a3-97b8043e865d | private | 99ad1ec2-7424-4d05-9969-8fda3249e932 fdab:b2d9:97c5::/64 |
  |                                      |         | 8e537f68-fa87-4a2a-8536-8a3c3417fd2e 10.0.0.0/24         |
  | 7ba41081-66c1-422e-9aee-861c2e664473 | ext-net | 8295a105-c11e-4e41-91d0-da5fe9d60c33                     |
  +--------------------------------------+---------+----------------------------------------------------------+

  
  2)use private to provide our instance ip, boot instance with this network.
  instance ip is 10.0.0.3
  stack@45-5x:~/devstack$ nova boot --flavor 1 --image cirros-0.3.4-x86_64-uec --availability-zone nova --nic net-id=1159f483-6d87-496f-b3a3-97b8043e865d linwwu
  stack@45-5x:~/devstack$ nova list
  +--------------------------------------+--------+--------+------------+-------------+--------------------------------------------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks                                               |
  +--------------------------------------+--------+--------+------------+-------------+--------------------------------------------------------+
  | d87fc01f-3ea4-4f14-90f9-f83cd3625e8c | linwwu | ACTIVE | -          | Running     | private=fdab:b2d9:97c5:0:f816:3eff:fefd:dc05, 10.0.0.3 |
  +--------------------------------------+--------+--------+------------+-------------+--------------------------------------------------------+

  
  3)check private network subnet information. ip starts from 10.0.0.2-30 and
  the gateway is 10.0.0.1
  stack@45-5x:~/devstack$ neutron subnet-show 8e537f68-fa87-4a2a-8536-8a3c3417fd2e
  +---+---+
  | Field | Value |
  +---+---+
  | allocation_pools  | {"start": "10.0.0.2", "end": "10.0.0.30"} |
  | cidr  | 10.0.0.0/24   |
  | dns_nameservers   | 8.8.8.8   |
  | enable_dhcp   | True  |
  | gateway_ip| 10.0.0.1  |
  | host_routes   |   |
  | id| 8e537f68-fa87-4a2a-8536-8a3c3417fd2e  |
  | ip_version| 4 |
  | ipv6_address_mode |   |
  | ipv6_ra_mode  |   |
  | name  | private-subnet|
  | network_id| 1159f483-6d87-496f-b3a3-97b8043e865d  |
  | subnetpool_id |   |
  | tenant_id | 4fe5daa4e5c544b58efdab002314f3e2  |
  +---+---+

  4)update subnet allocation-pool, and check subnet details. ip range changed
  to 10.0.0.100-110
  neutron subnet-update --allocation-pool start=10.0.0.100,end=10.0.0.110 --dns-nameserver 8.8.8.8 8e537f68-fa87-4a2a-8536-8a3c3417fd2e
  stack@45-59:~/devstack$ neutron subnet-show 8e537f68-fa87-4a2a-8536-8a3c3417fd2e
  +---+--+
  | Field | Value|
  +---+-

[Yahoo-eng-team] [Bug 1527129] Re: enhance the help info of "neutron router-gateway-set"

2015-12-17 Thread Carl Baldwin
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527129

Title:
  enhance the help info of "neutron router-gateway-set"

Status in python-neutronclient:
  New

Bug description:
  [Summary]
  enhance the help info of "neutron router-gateway-set" 

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  enhance the help info of "neutron router-gateway-set"

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) below is the help info of "neutron router-gateway-set" :
  root@45-59:~/heat#  neutron router-gateway-set
  usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
[--disable-snat] [--fixed-ip FIXED_IP]
ROUTER EXTERNAL-NETWORK
  neutron router-gateway-set: error: too few arguments
  root@45-59:~/heat# 

  2) if you use the --fixed-ip option to set a gateway for a router and
  simply specify the fixed ip, you get an error:
  root@45-59:~/heat# neutron router-gateway-set --fixed-ip 172.168.0.7 r1 ext-net
  invalid key-value '172.168.0.7', expected format: key=value
  root@45-59:~/heat# 

  
  3) actually, you need to specify subnet_id and ip_address as below; then
  the router-gateway-set command works:
  root@45-59:/opt/stack/devstack# neutron router-gateway-set --fixed-ip subnet_id=8295a105-c11e-4e41-91d0-da5fe9d60c33,ip_address=172.168.0.077 r1 ext-net
  Set gateway for router r1
  root@45-59:/opt/stack/devstack# neutron router-show r1
  +-----------------------+----------------------------------------------------------------+
  | Field                 | Value                                                          |
  +-----------------------+----------------------------------------------------------------+
  | admin_state_up        | True                                                           |
  | distributed           | False                                                          |
  | external_gateway_info | {"network_id": "7ba41081-66c1-422e-9aee-861c2e664473",         |
  |                       |  "enable_snat": true, "external_fixed_ips":                    |
  |                       |  [{"subnet_id": "8295a105-c11e-4e41-91d0-da5fe9d60c33",        |
  |                       |    "ip_address": "172.168.0.077"}]}                            |
  | ha                    | False                                                          |
  | id                    | 7153e0c1-502f-4160-ad20-d86c3bf43ff2                           |
  | name                  | r1                                                             |
  | routes                |                                                                |
  | status                | ACTIVE                                                         |
  | tenant_id             | 6c15aacc1cfe4a9fac35a0c7f8c3e912                               |
  +-----------------------+----------------------------------------------------------------+
  root@45-59:/opt/stack/devstack# 

  So, the help info of "neutron router-gateway-set" is not very helpful.
  It should be modified as below:
  root@45-59:~/heat#  neutron router-gateway-set
  usage: neutron router-gateway-set [-h] [--request-format {json,xml}]
                                    [--disable-snat]
                                    [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
                                    ROUTER EXTERNAL-NETWORK
  neutron router-gateway-set: error: too few arguments
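
The expected --fixed-ip value is a comma-separated key=value list. A
hypothetical parser sketch (not neutronclient's actual code) shows why the
bare address in step 2 is rejected:

```python
def parse_fixed_ip(arg):
    """Parse 'subnet_id=SUBNET,ip_address=IP_ADDR' into a dict.

    Raises ValueError with a message similar in spirit to
    neutronclient's "invalid key-value" error when a segment has no '='.
    """
    result = {}
    for pair in arg.split(","):
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(
                "invalid key-value %r, expected format: key=value" % pair)
        result[key] = value
    return result
```

Passing "172.168.0.7" alone has no '=' separator, so the parse fails; the
full subnet_id=...,ip_address=... form parses cleanly.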

[Yahoo-eng-team] [Bug 1526559] Re: L3 agent parallel configuration of routers might slow things down

2016-01-04 Thread Carl Baldwin
Interesting.  So, having multiple threads doesn't improve the execution
time at all?  That's different than I remember.

One thing that having some number of threads may do is possibly decrease
the latency to process a new request while some threads are working on
other routers.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526559

Title:
  L3 agent parallel configuration of routers might slow things down

Status in neutron:
  Invalid

Bug description:
  In the L3 agent's _process_routers_loop method, it spawns a GreenPool
  with 8 eventlet threads. Those threads then take updates off the
  agent's queue and process router updates. Router updates are
  serialized by router_id so that two threads don't process the same
  router at any given time.
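
  A hypothetical sketch of that per-router serialization (not the agent's
  actual code): workers claim a router_id before processing, and an update
  for a router that is already claimed must be requeued rather than
  processed concurrently:

```python
import threading

class ExclusiveRouterProcessor:
    """Toy per-router serialization: only one worker may hold a given
    router_id at a time; others must requeue the update."""

    def __init__(self):
        self._lock = threading.Lock()
        self._in_progress = set()

    def acquire(self, router_id):
        with self._lock:
            if router_id in self._in_progress:
                return False  # another worker owns this router; requeue
            self._in_progress.add(router_id)
            return True

    def release(self, router_id):
        with self._lock:
            self._in_progress.discard(router_id)
```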

  In an environment running on a powerful baremetal server, on agent
  restart it was trying to sync roughly 600 routers. Around half were HA
  routers, and half were legacy routers. With the default GreenPool size
  of 8, the result was that the server ground to a halt as CPU usage
  skyrocketed to over 600%. The main offenders were ip, bash, keepalived
  and Python. This was on an environment without the rootwrap daemon, based
  off stable/juno. It took around 60 seconds to configure a single
  router. Changing the GreenPool size from 8 to 1 caused the agent to:

  1) Configure a router in 30 seconds, a 50% improvement.
  2) Reduce CPU load from 600% to 70%, freeing the machine to do other things.

  I'm filing this bug so that:

  1) Someone can confirm my personal experience in a more controlled way - for
  example, graph router configuration time and CPU load as a function of
  GreenPool size.
  2) If my findings are confirmed on master with the rootwrap daemon, start
  considering alternatives like multiprocessing instead of eventlet
  multithreading, or at the very least optimize the GreenPool size.

  This was on RHEL 7.1:
  kernel-3.10.0-229.11.1.el7, iproute-3.10.0-21.el7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531375] Re: useless slash in the return message when create a subnet with subnet pool

2016-01-06 Thread Carl Baldwin
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531375

Title:
  useless slash in the return message when create a subnet with subnet
  pool

Status in neutron:
  Invalid

Bug description:
  [Summary]
  useless slash in the return message when creating a subnet with a subnet pool

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  no useless slash in the return message when creating a subnet with a subnet pool

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) create a subnet from a subnet pool:
  root@45-59:/opt/stack/devstack# neutron subnet-create --subnetpool pool1 net2 --name sub2
  Failed to allocate subnet: Insufficient prefix space to allocate subnet size /8.  >>ISSUE
  root@45-59:/opt/stack/devstack# 

  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp