[Yahoo-eng-team] [Bug 1821208] [NEW] [RFE] Only enforce policy when selected option does not match default

2019-03-21 Thread Nate Johnston
Public bug reported:

Certain API behaviors are regulated by oslo.policy policy at a granular
level, but also have default values.  If a user supplies API options
that match the defaults, bypass the policy check since the result will
be the same regardless.

A good example of this is creating a port with the boolean
"port_security_enabled" value, which in a typical deployment defaults to
'True'.  The "create_port:port_security_enabled" policy governs this
behavior, and is typically set to "rule:context_is_advsvc or
rule:admin_or_network_owner", which means the check fails for a
non-admin user that is not the network owner.  Such a user should be
able to specify port_security_enabled=True when creating a port and not
have that operation fail the policy check.

Implementation
--
The policy check occurs almost immediately upon request receipt; see the
calls to enforce() in neutron/api/v2/base.py [1].  A data structure
would need to be created from the policy-processing code that maps
policy names to their respective default values.  The enforce() call
would then be made contingent on divergence from the default.

[1] example:
https://opendev.org/openstack/neutron/src/branch/master/neutron/api/v2/base.py#L468
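A minimal sketch of the idea, assuming a hypothetical maybe_enforce() helper and a hand-maintained ATTRIBUTE_DEFAULTS map (neither is an existing Neutron or oslo.policy API):

```python
# Hypothetical sketch only: maybe_enforce() and ATTRIBUTE_DEFAULTS are
# illustrative names, not existing Neutron or oslo.policy APIs.

# Map of policy name -> default value of the attribute it guards; in a
# real implementation this would be built from the policy-processing code.
ATTRIBUTE_DEFAULTS = {
    'create_port:port_security_enabled': True,
}

def maybe_enforce(enforce, context, action, target):
    """Call enforce() only when the request diverges from the default."""
    default = ATTRIBUTE_DEFAULTS.get(action)
    attr = action.split(':', 1)[-1]
    if default is not None and target.get(attr, default) == default:
        # The outcome is identical to the default path, so the granular
        # policy check can be skipped entirely.
        return True
    return enforce(context, action, target)
```

A value absent from the request is treated the same as the default, so only an explicitly divergent request triggers the policy check.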

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821208

Title:
  [RFE] Only enforce policy when selected option does not match default

Status in neutron:
  New

Bug description:
  Certain API behaviors are regulated by oslo.policy policy at a
  granular level, but also have default values.  If a user supplies API
  options that match the defaults, bypass the policy check since the
  result will be the same regardless.

  A good example of this is creating a port with the boolean
  "port_security_enabled" value, which in a typical deployment defaults
  to 'True'.  The "create_port:port_security_enabled" policy governs
  this behavior, and is typically set to "rule:context_is_advsvc or
  rule:admin_or_network_owner", which means the check fails for a
  non-admin user that is not the network owner.  Such a user should be
  able to specify port_security_enabled=True when creating a port and
  not have that operation fail the policy check.

  Implementation
  --
  The policy check occurs almost immediately upon request receipt; see
  the calls to enforce() in neutron/api/v2/base.py [1].  A data
  structure would need to be created from the policy-processing code
  that maps policy names to their respective default values.  The
  enforce() call would then be made contingent on divergence from the
  default.

  [1] example:
  
https://opendev.org/openstack/neutron/src/branch/master/neutron/api/v2/base.py#L468

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1821208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1827489] Re: Wrong IPV6 address provided by openstack server create

2019-05-03 Thread Nate Johnston
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827489

Title:
  Wrong IPV6 address provided by openstack server create

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The IPv6 address of an interface does not have to be derived from its
  MAC address.  Newer kernels have an addr_gen_mode option which
  controls how the IPv6 address is calculated; see
  https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
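  For reference, the stable derivation that the 'openstack server
  create' output assumes is modified EUI-64: flip the universal/local
  bit of the first MAC byte and insert ff:fe in the middle.  A small
  illustration (the helper name is ours, not OpenStack code):

```python
# Illustration only: eui64_from_mac() is our helper name, not OpenStack
# code.  It shows the EUI-64 derivation assumed for the addresses below.

def eui64_from_mac(prefix, mac):
    """Derive the modified EUI-64 interface identifier from a MAC."""
    b = [int(octet, 16) for octet in mac.split(':')]
    b[0] ^= 0x02                         # flip the universal/local bit
    iid = b[:3] + [0xff, 0xfe] + b[3:]   # insert ff:fe in the middle
    groups = ['%02x%02x' % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
    return prefix + ':'.join(groups)

# eth0's MAC from this report yields the link-local address seen in 'ip a':
print(eui64_from_mac('fe80::', 'fa:16:3e:48:e8:b5'))  # fe80::f816:3eff:fe48:e8b5
```

  Applying the same derivation to eth1's MAC gives the 2003:: address
  that tempest expected but the instance did not actually use.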

  I encountered the problem when I booted an image (RHEL8 in my case)
  which had the addr_gen_mode option set to 1 (meaning the IPv6 address
  is randomized) by default.  OpenStack (a Rocky deployment in my case)
  did not recognize this, and 'openstack server create' returned the
  wrong address, which led to tempest failures because the tests took
  the addresses from the 'openstack server create' output and expected
  them on the interfaces.

  Steps to reproduce:

  $ openstack server create --image  --flavor  --network 
 --network  --key-name  instance_name
  
  +-------------+------------------------------------------------------------+
  | Field       | Value                                                      |
  +-------------+------------------------------------------------------------+
  | accessIPv4  |                                                            |
  | accessIPv6  |                                                            |
  | addresses   | tempest-network-smoke--884367252=10.100.0.5;               |
  |             | tempest-network-smoke--18828977=2003::f816:3eff:febb:7456  |
  +-------------+------------------------------------------------------------+

  Then ssh to the instance and hit 'ip a' command:
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: eth0:  mtu 1450 qdisc fq_codel state UP group default qlen 1000
      link/ether fa:16:3e:48:e8:b5 brd ff:ff:ff:ff:ff:ff
      inet 10.100.0.3/28 brd 10.100.0.15 scope global dynamic noprefixroute eth0
         valid_lft 86363sec preferred_lft 86363sec
      inet6 fe80::f816:3eff:fe48:e8b5/64 scope link
         valid_lft forever preferred_lft forever
  3: eth1:  mtu 1450 qdisc fq_codel state UP group default qlen 1000
      link/ether fa:16:3e:bb:74:56 brd ff:ff:ff:ff:ff:ff
      inet6 2003::b47f:f400:ecca:2a55/64 scope global dynamic noprefixroute
         valid_lft 86385sec preferred_lft 14385sec
      inet6 fe80::7615:8d57:775d:fae/64 scope link noprefixroute
         valid_lft forever preferred_lft forever

  Notice that the eth1 interface has an IPv6 address that is not derived
  from its MAC address, and that the output of 'openstack server create'
  returned the wrong address, different from the one actually set on
  eth1: it assumed the IPv6 address would be derived from the MAC
  address, but it wasn't.

  'openstack server create' should be able to detect the option in the
  image and behave accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1827489/+subscriptions



[Yahoo-eng-team] [Bug 1829304] [NEW] Neutron returns HttpException: 500 on certain operations with modified list of policies for non-admin users

2019-05-15 Thread Nate Johnston
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return resp(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return resp(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/routes/middleware.py", line 141, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     response = self.app(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return resp(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/pecan/middleware/recursive.py", line 56, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return self.application(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/pecan/core.py", line 840, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return super(Pecan, self).__call__(environ, start_response)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/pecan/core.py", line 736, in __call__
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     state
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/pecan/core.py", line 865, in handle_hooks
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     return super(Pecan, self).handle_hooks(hooks, *args, **kw)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/pecan/core.py", line 342, in handle_hooks
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     result = getattr(hook, hook_type)(*args)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py", line 185, in after
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     for item in to_process
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py", line 189, in
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     pluralized=collection))]
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py", line 207, in _get_filtered_item
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     neutron_context, controller, resource, collection, data)
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors   File "/usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/policy_enforcement.py", line 226, in _exclude_attributes_by_policy
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors     for attr_name in data.keys():
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors RuntimeError: dictionary changed size during iteration
server.log:2019-05-08 07:33:43.076 22 ERROR oslo_middleware.catch_errors }
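The RuntimeError in the traceback comes from deleting dict entries while iterating data.keys().  A minimal reproduction and the usual fix for this class of bug, using a simplified stand-in (not the actual _exclude_attributes_by_policy() code):

```python
# Simplified stand-in for the failing loop: iterate over a snapshot of
# the keys so the dict can be safely mutated during the loop.

def exclude_attributes(data, forbidden):
    for attr_name in list(data.keys()):  # snapshot; bare .keys() raises
        if attr_name in forbidden:
            del data[attr_name]
    return data
```

Iterating `data.keys()` directly while deleting entries raises `RuntimeError: dictionary changed size during iteration` on Python 3, matching the traceback above.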

Version-Release number of selected component (if applicable):
Compose: RHOS_TRUNK-15.0-RHEL-8-20190509.n.1
rpm -qa | grep neutron
puppet-neutron-14.4.1-0.20190420042323.400fd54.el8ost.noarch
python3-neutronclient-6.12.0-0.20190312100012.680b417.el8ost.noarch


How reproducible:
Always


Steps to Reproduce:
1. Deploy Overcloud with modified Neutron APIs
2. Create non admin user/tenant
3. Attempt to list ports

Actual results:
Fail to retrieve ports and receive python exceptions

Expected results:
List of ports is returned

Additional info:

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829304

Title:
  Neutron returns HttpException: 500 on certain operations with modified
  list of policies for non-admin users

Status in neutron:
  New

B

[Yahoo-eng-team] [Bug 1829890] [NEW] neutron-functional CI job fails with InterfaceAlreadyExists error

2019-05-21 Thread Nate Johnston
Public bug reported:

I have started seeing failures in the neutron-functional jobs.  The
issue is in
neutron.tests.functional.agent.linux.test_bridge_lib.FdbInterfaceTestCase
the tests "test_append(no_namespace)" and
"test_add_delete_dst(no_namespace)".  Those tests fail a percentage of
the time with the error
"neutron.privileged.agent.linux.ip_lib.InterfaceAlreadyExists: Interface
interface already exists."

This typically is also accompanied by a failure in
"test_replace(no_namespace)" in the same module, which reports the
error: "neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound:
Network interface int_vxlan not found in namespace None.".

I checked Logstash [1] and it looks like we have hit this 11 times over
the past 7 days on 5 different changes.

[1]
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Interface%20interface%20already%20exists%5C%22

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829890

Title:
  neutron-functional CI job fails with InterfaceAlreadyExists error

Status in neutron:
  New

Bug description:
  I have started seeing failures in the neutron-functional jobs.  The
  issue is in
  neutron.tests.functional.agent.linux.test_bridge_lib.FdbInterfaceTestCase
  the tests "test_append(no_namespace)" and
  "test_add_delete_dst(no_namespace)".  Those tests fail a percentage of
  the time with the error
  "neutron.privileged.agent.linux.ip_lib.InterfaceAlreadyExists:
  Interface interface already exists."

  This typically is also accompanied by a failure in
  "test_replace(no_namespace)" in the same module, which reports the
  error:
  "neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound:
  Network interface int_vxlan not found in namespace None.".

  I checked Logstash [1] and it looks like we have hit this 11 times
  over the past 7 days on 5 different changes.

  [1]
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Interface%20interface%20already%20exists%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829890/+subscriptions



[Yahoo-eng-team] [Bug 1831919] [NEW] Impossible to change a list of static routes defined for subnet because of InvalidRequestError with Cisco ACI integration

2019-06-06 Thread Nate Johnston
Public bug reported:

In a deployment running OpenStack with Cisco ACI integration, the
operator is unable to change the existing list of static routes for any
subnet in the environment: every attempt to update them fails with the
following error:

InvalidRequestError: Instance  is not
present in this Session

After looking for similar issues I have found the following upstream bug
and patch:

- https://bugs.launchpad.net/bgpvpn/+bug/1746996
- https://review.opendev.org/#/c/541512/

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831919

Title:
  Impossible to change a list of static routes defined for subnet
  because of InvalidRequestError with Cisco ACI integration

Status in neutron:
  New

Bug description:
  In a deployment running OpenStack with Cisco ACI integration, the
  operator is unable to change the existing list of static routes for
  any subnet in the environment: every attempt to update them fails
  with the following error:

  InvalidRequestError: Instance  is not
  present in this Session

  After looking for similar issues I have found the following upstream
  bug and patch:

  - https://bugs.launchpad.net/bgpvpn/+bug/1746996
  - https://review.opendev.org/#/c/541512/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1831919/+subscriptions



[Yahoo-eng-team] [Bug 1838473] [NEW] non-IP ethertypes are permitted with iptables_hybrid firewall driver

2019-07-30 Thread Nate Johnston
*** This bug is a security vulnerability ***

Public security bug reported:

Background of the Issue
===
Customers expect that when they set Neutron security group rules to
block all traffic, all traffic is in fact blocked by the Neutron
firewall driver.  Neutron has multiple firewall drivers available, but
in certain distributions (RHOSP, for example) the default is the
iptables_hybrid driver.

The iptables_hybrid driver was implemented using the iptables toolset.
This means that it is very effective at filtering IP traffic.  But IPv4
and IPv6 are only 2 of the available ethernet types (ethertypes).  There
are other types of traffic that ride over ethernet that are not IP [1],
and the iptables_hybrid firewall driver does not inspect or control
them.

First contact with this issue: a customer alerted us to the problem
while transitioning from the iptables_hybrid firewall driver to the
Open vSwitch firewall driver (ovsfw).  When they made the switch they
noticed that all of their InfiniBand traffic was being blocked, because
it uses a non-IP ethertype (0x4008).  As we evaluated the case, we
realized that the actual issue is that with iptables_hybrid this
traffic had passed without any specific control enabling it.

Analysis of the Vulnerability
=
Non-standard ethertypes are layer 3 protocols that are transmitted within the 
layer 2 ethernet frame.  This means that non-IP ethertypes cannot use IP-based 
routing to gain egress from the layer 2 domain.  When the iptables_hybrid 
firewall is in use:

- In clouds using self-service networking, instances on tenant networks
depend on either floating IPs or Neutron routers for egress from their
networks.  Since both of those are IP-specific concepts, traffic with
custom ethertypes has no native means to egress the network.
- In clouds using provider networking, instances depend on the provider
network hardware to route traffic outside of the layer 2 domain.  Some
ethertypes may be routable by network hardware - some routers understand
InfiniBand or FCoE natively, for example, and will serve the routing
implementation of those protocols.  We are not knowledgeable about the
variety of routers out there and what they may support; it is entirely
possible that legacy support for older protocols with different
ethertypes, like IPX or PPPoE, exists in Cisco, Juniper, or other code.
- In either case, if an intruder obtains unauthorized access to a host,
then other hosts on the same L2 domain may be susceptible to traffic
travelling over other ethertypes to bypass host-based firewalling.
Exfiltration from that network from the compromised host is still
subject to the restrictions in the above 2 bullet points.

To succinctly summarize the above: because non-IP ethertypes bypass all
host-based controls, only network-based controls can restrict them.
The best network-based control is the lack of support for routing out
of the L2 domain, which holds in most but not all environments.

Proposed strategy
=
We propose that a response to this issue has two parts.

1. The iptables_hybrid firewall is changed to deny ethertypes other than
IPv4, IPv6, and ARP by default.  Those 3 protocols are not covered
because they are handled through other mechanisms in the Neutron
security group system.  We propose that the control dropping other
ethertypes be enabled by default but configurable, so that an operator
could disable it within the upgrade context to maintain compatibility
with pre-upgrade conditions, as an issue triage step, or as part of a
looser security policy.

2. The Neutron security group system must be enhanced with a control
that allows specific ethertypes to be permitted.  In upstream Neutron
master development this should be an addition to the existing Security
Groups API; for older Neutron versions this will be a configuration file
option.  This configuration file option should be manageable through
Director/TripleO.

This issue has been discussed in the upstream Neutron community [2] and
there is consensus on proposed points #1 and #2.

[1] https://en.wikipedia.org/wiki/EtherType#Examples
[2] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2019/neutron_drivers.2019-06-28-14.00.log.html
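A conceptual sketch of point #1 (not Neutron code): generate default-deny ethertype rules in an ebtables-like syntax.  The chain name and rule format are illustrative; IPv4, IPv6, and ARP are assumed to be handled by the existing security group mechanisms.

```python
# Conceptual sketch, not Neutron code: build an ebtables-style rule list
# that accepts a configurable set of ethertypes and drops everything else.
# The chain name 'neutron-eth' and the rule format are illustrative.

PERMITTED_ETHERTYPES = ['IPv4', 'IPv6', 'ARP']  # handled elsewhere in SG code

def ethertype_rules(chain, permitted=PERMITTED_ETHERTYPES):
    """Accept the configured ethertypes, default-deny the rest."""
    rules = ['-A %s -p %s -j ACCEPT' % (chain, proto) for proto in permitted]
    rules.append('-A %s -j DROP' % chain)  # drop all other ethertypes
    return rules
```

An operator-facing option (point #2) would then simply extend the permitted list, e.g. with an InfiniBand ethertype, instead of disabling the control entirely.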

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: Confirmed

** Description changed:

  Background of the Issue
  ===
- Customers expect that when they set Neutron security group rules such that all
- traffic is blocked that all traffic is in fact blocked by the Neutron firewall
- driver.  Neutron has multiple firewall drivers available, but in certain 
- distributions (RHOSP for example) the default is the iptables_hybrid driver.
+ Customers expect that when they set Neutron security group rules such that 
all traffic is blocked that all traffic is

[Yahoo-eng-team] [Bug 1842666] [NEW] Bulk port creation with supplied security group also adds default security group

2019-09-04 Thread Nate Johnston
Public bug reported:

When bulk ports are created with a security group supplied, the
resulting port(s) should only have that security group assigned.  But
the resulting ports are getting both the requested security group as
well as the tenant default security group assigned.

** Affects: neutron
 Importance: High
 Assignee: Nate Johnston (nate-johnston)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1842666

Title:
  Bulk port creation with supplied security group also adds default
  security group

Status in neutron:
  In Progress

Bug description:
  When bulk ports are created with a security group supplied, the
  resulting port(s) should only have that security group assigned.  But
  the resulting ports are getting both the requested security group as
  well as the tenant default security group assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1842666/+subscriptions



[Yahoo-eng-team] [Bug 1843282] Re: Rally CI not working since jsonschema version bump

2019-09-09 Thread Nate Johnston
** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843282

Title:
  Rally CI not working since jsonschema version bump

Status in neutron:
  Fix Released
Status in Rally:
  Fix Released

Bug description:
  In [1], requirements.upper-constraints.txt, jsonschema library upper
  version was bumped to 3.0.1.

  Errors in the CI: [2]
  2019-09-09 10:20:39.736 | 2019-09-09 10:20:39.735 2181 WARNING rally.common.plugin.discover [-] Failed to load plugins from module 'rally_openstack' (package: 'rally-openstack 1.5.1.dev38'): (jsonschema 3.0.2 (/usr/local/lib/python3.6/dist-packages), Requirement.parse('jsonschema!=2.5.0,<3.0.0,>=2.0.0'), {'os-faults'}): pkg_resources.ContextualVersionConflict: (jsonschema 3.0.2 (/usr/local/lib/python3.6/dist-packages), Requirement.parse('jsonschema!=2.5.0,<3.0.0,>=2.0.0'), {'os-faults'})

  
  [1]https://review.opendev.org/#/c/649789/
  
[2]https://d4b9765f6ab6e1413c28-81a8be848ef91b58aa974b4cb791a408.ssl.cf5.rackcdn.com/680427/2/check/neutron-rally-task/01b2c1c/controller/logs/devstacklog.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843282/+subscriptions



[Yahoo-eng-team] [Bug 1843924] [NEW] [RFE] Create optional bulk resource_extend

2019-09-13 Thread Nate Johnston
Public bug reported:

When performing bulk actions, one area that is still very much forced
into the singleton method of processing is the resource_extend
framework, where extensions can register to extend object data.  This
RFE is to propose the changes needed to make resource_extend work in a
bulk context.

The details of the proposal are as follows:

1.) Add an optional additional argument "bulk" to
@resource_extend.extends that would default to False.  If it is True
this signals that the function being decorated can support bulk
extensions.  Normally the arguments to such a function are in the form
(resource, db_model); if it is operating in bulk mode then the first
argument would be treated as [resource, model] in an array, and the
second argument would be ignored.  This can be tested for inside the
function by checking if the first argument is an array.

2.) Create a new resource_extend.apply_funcs_bulk function that would
act like the current resource_extend.apply_funcs but would take an array
of object data instead of a single object resource and db model object.
This new function would consult the "bulk" attribute on registered
extender functions and would pass in the array of object data to a bulk
function, but if the function was bulk=False then it would loop through
as it does now.

3.) This would require a revamp of the _resource_extend_functions data
structure in neutron-lib/db/resource_extends.py.  Currently it is a dict
with the resource type as a string key, and the value is an array of
functions to be applied.  This would need to be changed to take
advantage of the bulk attribute.  I am not sure how best to do that at
this point.
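The three items above can be sketched roughly as follows, assuming simplified registry structures (the real neutron-lib resource_extend module differs, and the one-argument bulk signature here is a simplification of the proposal):

```python
# Sketch only: a bulk-aware extender receives the whole list of
# (resource, db_model) pairs in one call; legacy extenders still get
# one (resource, model) pair at a time.
import collections

_extend_funcs = collections.defaultdict(list)

def extends(resources, bulk=False):
    """Register an extender; bulk=True marks it as batch-capable."""
    def decorator(func):
        func._bulk = bulk
        for resource_type in resources:
            _extend_funcs[resource_type].append(func)
        return func
    return decorator

def apply_funcs_bulk(resource_type, pairs):
    """pairs is a list of (resource_dict, db_model) tuples."""
    for func in _extend_funcs[resource_type]:
        if getattr(func, '_bulk', False):
            func(pairs)                       # one call for the whole batch
        else:
            for resource, model in pairs:     # legacy singleton behavior
                func(resource, model)
```

A bulk extender can then, for example, issue a single query covering the entire batch instead of one query per resource.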

One scenario where this would help is bulk port creation, because
resource extender functions could implement more efficient commits of
their various data.  Another example is a bug from when I was last bug
deputy [1], where a user complains of slow responses due to the large
number of SQL queries issued when using many trunk ports.  I think that
with a bulk resource_extend the trunk port extender could be implemented
as a single SQL query that fetches information for all affected ports,
thus significantly increasing performance.

Please let me know what you think of this idea.  I raised it as an RFE
because it requires significant enough modification to an element of
Neutron that is both foundational and that not many people may have
dealt with directly that an open discussion of the merits and
implementation would be a positive contribution.

[1] https://bugs.launchpad.net/neutron/+bug/1842150

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843924

Title:
  [RFE] Create optional bulk resource_extend

Status in neutron:
  New

Bug description:
  When performing bulk actions, one area that is still very much forced
  into the singleton method of processing is the resource_extend
  framework, where extensions can register to extend object data.  This
  RFE is to propose the changes needed to make resource_extend work in a
  bulk context.

  The details of the proposal are as follows:

  1.) Add an optional additional argument "bulk" to
  @resource_extend.extends that would default to False.  If it is True
  this signals that the function being decorated can support bulk
  extensions.  Normally the arguments to such a function are in the form
  (resource, db_model); if it is operating in bulk mode then the first
  argument would be treated as [resource, model] in an array, and the
  second argument would be ignored.  This can be tested for inside the
  function by checking if the first argument is an array.

  2.) Create a new resource_extend.apply_funcs_bulk function that would
  act like the current resource_extend.apply_funcs but would take an
  array of object data instead of a single object resource and db model
  object.  This new function would consult the "bulk" attribute on
  registered extender functions and would pass in the array of object
  data to a bulk function, but if the function was bulk=False then it
  would loop through as it does now.

  3.) This would require a revamp of the _resource_extend_functions data
  structure in neutron-lib/db/resource_extends.py.  Currently it is a
  dict with the resource type as a string key, and the value is an array
  of functions to be applied.  This would need to be changed to take
  advantage of the bulk attribute.  I am not sure how best to do that at
  this point.

  One scenario where this would help is with bulk port creation, because
  resource extender functions could implement more efficient commits of
  their various data.  Another example is a bug from when I was last bug
  deputy [1], where a user is complaining of slow responses due to large
  quantities of SQL queries when using large numbers of trunk por

[Yahoo-eng-team] [Bug 1845227] [NEW] new l3 agent router factory breaks neutron-fwaas functional tests

2019-09-24 Thread Nate Johnston
Public bug reported:

All of the neutron-fwaas functional tests in class
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
fail since the release of openstack/neutron 15.0.0.0b1, which includes
https://review.opendev.org/620349.

This is because the above change adds a required argument,
'router_factory', to the L3AgentExtensionAPI class.  The tests in
neutron-fwaas were not adjusted to include such an argument.  Therefore
the following error occurs:

ft1.1: 
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase.test_stop_logging_when_delete_logtesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 76, in setUp
self.log_driver = self._initialize_iptables_log()
  File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 90, in _initialize_iptables_log
self.agent_api = l3_ext_api.L3AgentExtensionAPI({})
TypeError: __init__() missing 1 required positional argument: 'router_factory'

Source:
https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f26/678747/4/gate
/neutron-fwaas-functional/f26770c/testr_results.html.gz
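Illustration with a simplified stand-in class (not the real neutron import): the constructor now requires a second router_factory argument, so the one-argument call in the fwaas tests raises TypeError; passing a mock factory is one plausible test-side fix.

```python
# Simplified stand-in for neutron's L3AgentExtensionAPI after the change;
# not the real class, just the changed constructor signature.
from unittest import mock

class L3AgentExtensionAPI:
    def __init__(self, router_info, router_factory):
        self.router_info = router_info
        self.router_factory = router_factory

# The old one-argument call from the fwaas tests now fails:
try:
    L3AgentExtensionAPI({})
except TypeError:
    pass  # missing required positional argument 'router_factory'

# One plausible fix on the test side: supply a factory (a mock suffices).
agent_api = L3AgentExtensionAPI({}, mock.Mock())
```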

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845227

Title:
  new l3 agent router factory breaks neutron-fwaas functional tests

Status in neutron:
  New

Bug description:
  All of the neutron-fwaas functional tests in class
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
  fail since the release of openstack/neutron 15.0.0.0b1, which includes
  https://review.opendev.org/620349.

  This is because the above change adds a required argument,
  'router_factory', to the L3AgentExtensionAPI class.  The tests in
  neutron-fwaas were not adjusted to include such an argument.
  Therefore the following error occurs:

  ft1.1: 
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase.test_stop_logging_when_delete_logtesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 76, in setUp
  self.log_driver = self._initialize_iptables_log()
File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 90, in _initialize_iptables_log
  self.agent_api = l3_ext_api.L3AgentExtensionAPI({})
  TypeError: __init__() missing 1 required positional argument: 'router_factory'

  Source:
  
https://storage.gra1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f26/678747/4/gate
  /neutron-fwaas-functional/f26770c/testr_results.html.gz
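The signature change and the shape of the fix can be illustrated with a minimal stand-in (this is not neutron code; the class and argument names come from the traceback above, and the factory object the tests would need to pass is a placeholder assumption):

```python
class L3AgentExtensionAPI(object):
    """Stand-in for neutron.agent.l3.l3_agent_extension_api: change
    620349 added a second required constructor argument."""

    def __init__(self, router_info, router_factory):
        self.router_info = router_info
        self.router_factory = router_factory


# The old neutron-fwaas test call now fails:
try:
    L3AgentExtensionAPI({})
except TypeError as exc:
    print(exc)  # missing 1 required positional argument: 'router_factory'

# The adjusted call supplies a router factory (placeholder object here):
api = L3AgentExtensionAPI({}, router_factory=object())
```

The real fix in neutron-fwaas would construct whatever factory object neutron's l3 agent now expects and pass it through in _initialize_iptables_log.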

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788006] [NEW] neutron_tempest_plugin DNS integration tests fail with "Server [UUID] failed to reach ACTIVE status and task state "None" within the required time ([INTEGER] s). C

2018-08-20 Thread Nate Johnston
Public bug reported:

Testr report: http://logs.openstack.org/74/591074/4/check/neutron-
tempest-plugin-designate-scenario/d02f171/testr_results.html.gz

Job log: http://logs.openstack.org/74/591074/4/check/neutron-tempest-
plugin-designate-scenario/d02f171/job-
output.txt.gz#_2018-08-20_13_11_46_407077

Traceback:

2018-08-20 13:11:46.406514 | controller | Captured traceback:
2018-08-20 13:11:46.406538 | controller | ~~~
2018-08-20 13:11:46.406567 | controller | Traceback (most recent call last):
2018-08-20 13:11:46.406659 | controller |   File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 115, in test_server_with_fip
2018-08-20 13:11:46.406696 | controller |     server = self._create_server(name=name)
2018-08-20 13:11:46.406771 | controller |   File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 94, in _create_server
2018-08-20 13:11:46.406799 | controller |     constants.SERVER_STATUS_ACTIVE)
2018-08-20 13:11:46.406844 | controller |   File "tempest/common/waiters.py", line 96, in wait_for_server_status
2018-08-20 13:11:46.406893 | controller |     raise lib_exc.TimeoutException(message)
2018-08-20 13:11:46.406938 | controller | tempest.lib.exceptions.TimeoutException: Request timed out
2018-08-20 13:11:46.407077 | controller | Details: (DNSIntegrationTests:test_server_with_fip) Server 9797a641-8925-4f74-abbf-c6111d47f91d failed to reach ACTIVE status and task state "None" within the required time (784 s). Current status: BUILD. Current task state: spawning.

Logstash link:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22DNSIntegrationTests%3Atest_server_with_fip%5C%22%20AND%20%20%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20%20%20tags%3A%5C%22console%5C%22

40 hits in 7 days, all failures

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788006

Title:
  neutron_tempest_plugin DNS integration tests fail with "Server [UUID]
  failed to reach ACTIVE status and task state "None" within the
  required time ([INTEGER] s). Current status: BUILD. Current task
  state: spawning."

Status in neutron:
  New

Bug description:
  Testr report: http://logs.openstack.org/74/591074/4/check/neutron-
  tempest-plugin-designate-scenario/d02f171/testr_results.html.gz

  Job log: http://logs.openstack.org/74/591074/4/check/neutron-tempest-
  plugin-designate-scenario/d02f171/job-
  output.txt.gz#_2018-08-20_13_11_46_407077

  Traceback:

  2018-08-20 13:11:46.406514 | controller | Captured traceback:
  2018-08-20 13:11:46.406538 | controller | ~~~
  2018-08-20 13:11:46.406567 | controller | Traceback (most recent call last):
  2018-08-20 13:11:46.406659 | controller |   File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 115, in test_server_with_fip
  2018-08-20 13:11:46.406696 | controller |     server = self._create_server(name=name)
  2018-08-20 13:11:46.406771 | controller |   File "/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_dns_integration.py", line 94, in _create_server
  2018-08-20 13:11:46.406799 | controller |     constants.SERVER_STATUS_ACTIVE)
  2018-08-20 13:11:46.406844 | controller |   File "tempest/common/waiters.py", line 96, in wait_for_server_status
  2018-08-20 13:11:46.406893 | controller |     raise lib_exc.TimeoutException(message)
  2018-08-20 13:11:46.406938 | controller | tempest.lib.exceptions.TimeoutException: Request timed out
  2018-08-20 13:11:46.407077 | controller | Details: (DNSIntegrationTests:test_server_with_fip) Server 9797a641-8925-4f74-abbf-c6111d47f91d failed to reach ACTIVE status and task state "None" within the required time (784 s). Current status: BUILD. Current task state: spawning.

  Logstash link:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22DNSIntegrationTests%3Atest_server_with_fip%5C%22%20AND%20%20%20message%3A%5C%22failed%20to%20reach%20ACTIVE%20status%20and%20task%20state%5C%22%20AND%20%20%20tags%3A%5C%22console%5C%22

  40 hits in 7 days, all failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788006/+subscriptions



[Yahoo-eng-team] [Bug 1791206] [NEW] Bandit test B111 breaking pep8 tests in gate

2018-09-06 Thread Nate Johnston
Public bug reported:

Bandit 1.5.1 was released earlier today [1], and the pep8 tests started
failing with references to error B111.  The error was not very easy to
see in the zuul logs [2][3] but when running locally was easy to pick
out [4].  Since test B111 is supposed to have been deleted in the latest
version of Bandit [5] we need to suppress the test to unbreak the gate.

[1] https://pypi.org/project/bandit/#history
[2] 
http://logs.openstack.org/47/583847/2/gate/openstack-tox-pep8/f5daa43/job-output.txt.gz#_2018-09-07_01_52_01_804084
[3] http://paste.openstack.org/show/729652/
[4] http://paste.openstack.org/show/729651/
[5] 
https://bandit.readthedocs.io/en/latest/plugins/b111_execute_with_run_as_root_equals_true.html
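The suppression could look like the following tox.ini excerpt (a sketch; the exact target list and flags neutron's pep8 environment uses are assumptions, but bandit's `--skip` option takes a comma-separated list of check IDs):

```ini
[testenv:pep8]
commands =
    bandit -r neutron -x tests -n 5 --skip B111
```

Skipping the withdrawn check keeps the gate green with both old and new bandit releases.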

** Affects: neutron
 Importance: High
 Assignee: Nate Johnston (nate-johnston)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1791206

Title:
  Bandit test B111 breaking pep8 tests in gate

Status in neutron:
  In Progress

Bug description:
  Bandit 1.5.1 was released earlier today [1], and the pep8 tests
  started failing with references to error B111.  The error was not very
  easy to see in the zuul logs [2][3] but when running locally was easy
  to pick out [4].  Since test B111 is supposed to have been deleted in
  the latest version of Bandit [5] we need to suppress the test to
  unbreak the gate.

  [1] https://pypi.org/project/bandit/#history
  [2] 
http://logs.openstack.org/47/583847/2/gate/openstack-tox-pep8/f5daa43/job-output.txt.gz#_2018-09-07_01_52_01_804084
  [3] http://paste.openstack.org/show/729652/
  [4] http://paste.openstack.org/show/729651/
  [5] 
https://bandit.readthedocs.io/en/latest/plugins/b111_execute_with_run_as_root_equals_true.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1791206/+subscriptions



[Yahoo-eng-team] [Bug 1791218] [NEW] neutron wsgi initialization broken by eventlet 0.24.1

2018-09-06 Thread Nate Johnston
Public bug reported:

The release of eventlet 0.24.1 changed the number of arguments to the
function eventlet.wsgi.HttpProtocol.__init__() from 4 to 3.  This led to
pylint errors and other problems.
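One hedged way to cope with such a signature change is to dispatch on the constructor's arity. This is a sketch, not the fix that merged in neutron, and the conn_state layout is an assumption based on eventlet 0.24.1; the two classes below are stand-ins, not eventlet code:

```python
import inspect

def make_protocol(protocol_cls, client_address, request, server):
    """Instantiate an HttpProtocol-like class under either signature:
    eventlet < 0.24.1 used (self, request, client_address, server);
    0.24.1 collapsed that to (self, conn_state, server)."""
    n_args = len(inspect.getfullargspec(protocol_cls.__init__).args)
    if n_args == 3:  # new style: (self, conn_state, server)
        return protocol_cls([client_address, request, None], server)
    return protocol_cls(request, client_address, server)

# Stand-ins for the two eventlet versions, for demonstration only.
class OldProtocol(object):
    def __init__(self, request, client_address, server):
        self.client_address = client_address

class NewProtocol(object):
    def __init__(self, conn_state, server):
        self.client_address = conn_state[0]

old = make_protocol(OldProtocol, ('127.0.0.1', 0), object(), object())
new = make_protocol(NewProtocol, ('127.0.0.1', 0), object(), object())
print(old.client_address, new.client_address)
```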

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1791218

Title:
  neutron wsgi initialization broken by eventlet 0.24.1

Status in neutron:
  New

Bug description:
  The release of eventlet 0.24.1 changed the number of arguments to the
  function eventlet.wsgi.HttpProtocol.__init__() from 4 to 3.  This led
  to pylint errors and other problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1791218/+subscriptions



[Yahoo-eng-team] [Bug 1791206] Re: Bandit test B111 breaking pep8 tests in gate

2018-09-06 Thread Nate Johnston
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
   Importance: High => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1791206

Title:
  Bandit test B111 breaking pep8 tests in gate

Status in neutron:
  Invalid

Bug description:
  Bandit 1.5.1 was released earlier today [1], and the pep8 tests
  started failing with references to error B111.  The error was not very
  easy to see in the zuul logs [2][3] but when running locally was easy
  to pick out [4].  Since test B111 is supposed to have been deleted in
  the latest version of Bandit [5] we need to suppress the test to
  unbreak the gate.

  [1] https://pypi.org/project/bandit/#history
  [2] 
http://logs.openstack.org/47/583847/2/gate/openstack-tox-pep8/f5daa43/job-output.txt.gz#_2018-09-07_01_52_01_804084
  [3] http://paste.openstack.org/show/729652/
  [4] http://paste.openstack.org/show/729651/
  [5] 
https://bandit.readthedocs.io/en/latest/plugins/b111_execute_with_run_as_root_equals_true.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1791206/+subscriptions



[Yahoo-eng-team] [Bug 1791218] Re: neutron wsgi initialization broken by eventlet 0.24.1

2018-09-06 Thread Nate Johnston
Marked this bug a duplicate of https://launchpad.net/bugs/1791178

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1791218

Title:
  neutron wsgi initialization broken by eventlet 0.24.1

Status in neutron:
  Invalid

Bug description:
  The release of eventlet 0.24.1 changed the number of arguments to the
  function eventlet.wsgi.HttpProtocol.__init__() from 4 to 3.  This led
  to pylint errors and other problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1791218/+subscriptions



[Yahoo-eng-team] [Bug 1794569] Re: DVR with static routes may cause routed traffic to be dropped

2018-09-26 Thread Nate Johnston
Marking this 'invalid' since, as you suggest, Neutron 9.4.1 (Newton)
reached end of life on 2017-10-25 and is no longer supported upstream.
If you believe this is still an issue in master then please comment
again and I will change the status appropriately.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794569

Title:
  DVR with static routes may cause routed traffic to be dropped

Status in neutron:
  Invalid

Bug description:
  Neutron version: 9.4.1 (EOL, but bug may still be present)
  Network scenario: Openvswitch with DVR
  Openvswitch version: 2.6.1
  OpenStack installation version: Newton
  Operating system: Ubuntu 16.04.5 LTS
  Kernel: 4.4.0-135 x86_64

  Symptoms:
  Instances whose default gateway is a DVR interface (10.10.255.1 in our case) 
occasionally lose connectivity to non-local networks. Meaning, any packet that 
had to pass through the local virtual router is dropped. Sometimes this 
behavior lasts for a few milliseconds, sometimes tens of seconds. Since 
floating-ip traffic is a subset of those cases, north-south connectivity breaks 
too.

  Steps to reproduce:
  - Use DVR routing mode
  - Configure at least one static route in the virtual router, whose next hop 
is NOT an address managed by Neutron (e.g. a physical interface on a VPN 
gateway; in our case 10.2.0.0/24 with next-hop 10.10.0.254)
  - Have an instance plugged into a Flat or VLAN network, use the virtual 
router as the default gateway
  - Try to reach a host inside the statically-routed network from within the 
instance

  Possible explanation:
  Distributed routers get their ARP caches populated by neutron-l3-agent at its 
startup. The agent takes all the ports in a given subnet and fills in their 
IP-to-MAC mappings inside the qrouter- namespace, as permanent entries (meaning 
they won't expire from the cache). However, if Neutron doesn't manage an IP (as 
is the case with our static route's next-hop 10.10.0.254), a permanent record 
isn't created, naturally.

  So when we try to reach a host in the statically-routed network (e.g.
  10.2.0.10) from inside the instance, the packet goes to default
  gateway (10.10.255.1). After it arrives to the qrouter- namespace,
  there is a static route for this host pointing to 10.10.0.254 as next-
  hop. However qrouter- doesn't have its MAC address, so what it does is
  it sends out an ARP request with source MAC of the distributed
  router's qr- interface.

  And that's the problem. Since ARP requests are usually broadcasts,
  they land on pretty much every hypervisor in the network within the
  same VLAN. Combined with the fact that qr- interfaces in a given
  qrouter- namespace have the same MAC address on every host, this leads
  to a disaster: every integration bridge will receive that ARP request
  on the port that connects it to the Flat/VLAN network and learns that
  the qr- interface's MAC address is actually there - not on the qr-
  port also attached to br-int. From this moment on, packets from
  instances that need to pass via qrouter- are forwarded to the
  Flat/VLAN network interface, circumventing the qrouter- namespace.
  This is especially problematic with traffic that needs to be SNAT-ed
  on its way out.

  Workarounds:
  - The workaround that we used is creating stub Neutron ports for next-hop 
addresses, with correct MACs. After restarting neutron-l3-agents, they got 
populated into the qrouter- ARP cache as permanent entries.
  - Next option is setting the static route into the instances' routing tables 
instead of the virtual router. This way it's the instance that makes ARP 
discovery and not the qrouter- namespace.
  - Another workaround might consist of using ebtables/arptables on hypervisors 
to block incoming ARP requests from qrouters.
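  The first workaround can be sketched with openstackclient. Everything
below is a placeholder example, not a command from this deployment: the
port name, network and subnet names, and the MAC address must match the
actual external gateway.

```shell
# Create a stub port so the l3 agent seeds a permanent ARP entry for the
# unmanaged next-hop; use the VPN gateway's real MAC address.
openstack port create stub-vpn-gw \
    --network provider-vlan \
    --fixed-ip subnet=provider-subnet,ip-address=10.10.0.254 \
    --mac-address aa:bb:cc:dd:ee:ff \
    --disable-port-security
# Then restart neutron-l3-agent so the qrouter- ARP caches are repopulated.
```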

  Possible long-term solution:
  Maybe it would help if ancillary bridges (those connecting Flat/VLAN network 
interfaces to br-int) contained an OVS flow that drops ARP requests with source 
MAC addresses of qr- interfaces originating from the physical interface. Since 
their IPs and MACs are well defined (their device_owner is 
"network:router_interface_distributed"), it shouldn't be a problem setting 
these flows up. However I'm not sure of the shortcomings of this approach.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794569/+subscriptions



[Yahoo-eng-team] [Bug 1794695] Re: resources can't be filtered by tag-related parameters

2018-09-27 Thread Nate Johnston
OK, since openstackclient works correctly and neutronclient is
deprecated, marking this as Won't Fix.  Thanks!

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794695

Title:
  resources can't be filtered by tag-related parameters

Status in neutron:
  Won't Fix

Bug description:
  Neutron can add tags to subnets, ports, routers, and subnetpools, but those
resources cannot be filtered by the tags-any, not-tags, and not-tags-any
parameters.
  For example
  subnet1 tag red
  subnet2 tag red,blue
  subnet3 tag red,blue,green
  subnet4 tag green
  When the "neutron subnet-list --tags-any red,blue" command is executed, the 
information of subnet1, subnet2 and subnet3 should be displayed, but the 
information for all subnets is displayed.

  Network resources don't have the above problem, but other resources have
  similar problems: the tag-related parameters don't take effect.
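  To make the expected semantics concrete, here is a minimal model (not
neutron code) of the tag filters as the networking API defines them,
applied to the subnet example above:

```python
def match_tags(resource_tags, tags=None, tags_any=None,
               not_tags=None, not_tags_any=None):
    """Return True if a resource's tag set passes all supplied filters."""
    rt = set(resource_tags)
    if tags and not set(tags) <= rt:             # must have ALL of these
        return False
    if tags_any and not set(tags_any) & rt:      # must have AT LEAST ONE
        return False
    if not_tags and set(not_tags) <= rt:         # must NOT have all of these
        return False
    if not_tags_any and set(not_tags_any) & rt:  # must not have ANY of these
        return False
    return True

subnets = {
    'subnet1': ['red'],
    'subnet2': ['red', 'blue'],
    'subnet3': ['red', 'blue', 'green'],
    'subnet4': ['green'],
}
matched = [name for name, t in sorted(subnets.items())
           if match_tags(t, tags_any=['red', 'blue'])]
print(matched)  # subnet1, subnet2, subnet3 (subnet4 excluded)
```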

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794695/+subscriptions



[Yahoo-eng-team] [Bug 1794919] Re: [RFE] To decide create port with specific IP version

2018-10-01 Thread Nate Johnston
I agree with Slaweq.  According to the API guide:

If you specify only a subnet ID, OpenStack Networking allocates an 
available IP from that subnet to the port.

This approach is more flexible, because it allows you to declare whether
you want IPv4, IPv6, or both depending on which subnet you select.

Please let me know what functionality you find that is not satisfied by
this approach.

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794919

Title:
  [RFE] To decide create port with specific IP version

Status in neutron:
  Opinion

Bug description:
  A recent bug:
  https://bugs.launchpad.net/neutron/+bug/1752903
  and its fix https://review.openstack.org/#/c/599494/
  aim to create floating IPs with only an IPv4 address.

  For now, if the public network has both an IPv4 and an IPv6 subnet,
  the floating IP (port) may have both v4 and v6 addresses.
  Furthermore, this is not limited to the public network: for tenant networks
  the default behavior is also to create a port with both v4 and v6 IP addresses.
  Here are some test:
  http://paste.openstack.org/show/731054/

  So, this RFE proposes a new approach for the port create API:
  when users create a port, they can decide which IP version to use.

  something like this:
  curl POST http://neutron_url/ports -d
  '{"port": {
  "subnet_id": "<subnet-uuid>",
  "ip_version": <4 or 6>
  }
  }'

  So for the ml2 plugin, IPAM can pick both subnet versions, only a v4
  subnet, or only a v6 subnet to use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794919/+subscriptions



[Yahoo-eng-team] [Bug 1801919] Re: brctl is obsolete use ip

2018-11-12 Thread Nate Johnston
** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1801919

Title:
  brctl is obsolete, use ip

Status in devstack:
  In Progress
Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  bridge-utils (brctl) is obsolete, no modern software should depend on it.
  Used in: neutron/agent/linux/bridge_lib.py

  http://man7.org/linux/man-pages/man8/brctl.8.html

  Please use `ip` for basic bridge operations,
  then we can drop one obsolete dependency.
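  For reference, the basic bridge operations in bridge_lib.py map onto
standard iproute2 commands like this (run with root privileges; interface
names are examples):

```shell
ip link add name br0 type bridge      # brctl addbr br0
ip link set dev eth0 master br0       # brctl addif br0 eth0
ip link set dev eth0 nomaster         # brctl delif br0 eth0
ip link del br0                       # brctl delbr br0
ip -d link show type bridge           # brctl show
```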

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1801919/+subscriptions



[Yahoo-eng-team] [Bug 1807239] [NEW] Race condition with DPDK + trunk ports when instance port is deleted then quickly recreated

2018-12-06 Thread Nate Johnston
 --format=json]: 
{"data":[["8370c94d-6920-47f7-811a-51702eee4027","delete","tbr-f4554113-b",65534,["map",[]]],["1f9b6a11-32e5-44bf-a6d3-e6750e087c33","delete","tpt-44d242e1-30",2,["map",[["attached-mac","fa:16:3e:3e:4a:b4"],["iface-id","44d242e1-30aa-4da6-b00c-bad9d64560af",["9951c13a-6bfb-45ba-8499-68fa399d1e59","delete","tpi-44d242e1-30",39,["map",[["attached-mac","fa:16:3e:3e:4a:b4"],["iface-id","44d242e1-30aa-4da6-b00c-bad9d64560af",["fddedd5b-73e8-4e54-ba2a-97364f27950d","delete","vhu44d242e1-30",6,["map",[["attached-mac","fa:16:3e:3e:4a:b4"],["bridge_name","tbr-f4554113-b"],["iface-id","44d242e1-30aa-4da6-b00c-bad9d64560af"],["iface-status","active"],["subport_ids","[]"],["trunk_id","f4554113-b695-4e7d-b550-6927765c6679"],["vm-uuid","1f23fafb-e370-4e53-9bf1-8d5b1d8f7387"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout 
/usr/lib/python2.7/site-packages/neutron/agent/linux/async_process.py:239

The creation event detected in step (4) is lost.  When the race
condition does not happen, the neutron-openvswitch-agent logs will
include a "Processing trunk bridge" log line; this does not occur when
the race condition is triggered.

Another indication of the issue is that you will get tracebacks looking
for metadata on the deleted bridges:

2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command [-] Error executing command: RowNotFound: Cannot find Bridge with name=tbr-f4554113-b
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command Traceback (most recent call last):
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command   File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 35, in execute
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command     txn.add(self)
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command     self.gen.next()
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command   File "/usr/lib/python2.7/site-packages/ovsdbapp/api.py", line 94, in transaction
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command     self._nested_txn = None
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command   File "/usr/lib/python2.7/site-packages/ovsdbapp/api.py", line 54, in __exit__
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command     self.result = self.commit()
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command   File "/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 62, in commit
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command     raise result.ex
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command RowNotFound: Cannot find Bridge with name=tbr-f4554113-b
2018-11-30 21:00:22.444 61270 ERROR ovsdbapp.backend.ovs_idl.command
2018-11-30 21:00:22.444 61270 ERROR neutron.services.trunk.drivers.openvswitch.agent.ovsdb_handler [-] Failed to store metadata for trunk f4554113-b695-4e7d-b550-6927765c6679: Cannot find Bridge with name=tbr-f4554113-b: RowNotFound: Cannot find Bridge with name=tbr-f4554113-b

** Affects: neutron
 Importance: Critical
 Assignee: Nate Johnston (nate-johnston)
 Status: In Progress


** Tags: dpdk queens-backport-potential rocky-backport-potential trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1807239

Title:
  Race condition with DPDK + trunk ports when instance port is deleted
  then quickly recreated

Status in neutron:
  In Progress

Bug description:
  Deployment is Queens with ML2/OVS and DPDK. Instance ports are neutron
  trunk ports with DPDK vhu in vhostuserclient mode.  When an instance
  is rebooted, nova/os_vif deletes the ovs port connected to the trunk
  bridge and then recreates it when the host comes back online.  This
  causes a race condition in the trunk code that handles the tbr trunk
  bridge.  Neutron, seeing the deletion, queues a delete of the tbr
  bridge.  Then the subsequent re-add of the trunk is ignored because
  the delete is still enqueued.
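  The race can be modeled outside neutron with a toy event queue. This is
an illustration of the described behavior only, not the agent's actual
data structure:

```python
from collections import OrderedDict

class NaiveBridgeEventQueue(object):
    """Toy model of the race: a queued 'delete' for a bridge causes a
    later 'add' for the same bridge name to be dropped."""

    def __init__(self):
        self.pending = OrderedDict()  # bridge name -> action

    def enqueue(self, bridge, action):
        if self.pending.get(bridge) == 'delete':
            # BUG: the re-add is ignored because a delete is still
            # queued, so the trunk bridge is never re-wired.
            return
        self.pending[bridge] = action

    def drain(self):
        events, self.pending = list(self.pending.items()), OrderedDict()
        return events

q = NaiveBridgeEventQueue()
q.enqueue('tbr-f4554113-b', 'delete')  # instance reboot: port removed
q.enqueue('tbr-f4554113-b', 'add')     # port recreated moments later
print(q.drain())  # only the delete survives; the add is lost
```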

  Here are annotated logs for an instance of the issue from the logfiles
  of nova-compute and neutron-openvswitch-agent:

  # (1) nova-compute deletes the ins

[Yahoo-eng-team] [Bug 1812922] [NEW] neutron functional tests break with oslo.utils 3.39.1 and above

2019-01-22 Thread Nate Johnston
Public bug reported:

The new oslo.service version bump [1] is failing on the Neutron
"test_periodic_worker_lifecycle" unit test [2].  This occurs when
oslo.utils 3.39.1 or higher is installed; technically 3.38.0 is what is
set forth in upper-constraints.txt [3] but you can see oslo.utils 3.40.1
being installed in the failing tests [4].

The change in oslo.utils that is precipitating this failure is "Fix
race condition in eventletutils Event" [5].  This is now included as a
result of the bump for oslo.utils in upper-constraints.txt [6] that
occurred yesterday (Jan 21 2019).

As a temporary measure, will alter Neutron requirements.txt to cap the
oslo.utils version at 3.39.0 or less so this issue can be fixed.

[1] https://review.openstack.org/632169
[2] 
neutron.tests.unit.test_worker.PeriodicWorkerTestCase.test_periodic_worker_lifecycle
[3] 
https://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n548
[4] 
http://logs.openstack.org/69/632169/1/check/cross-neutron-py35/6f91a77/job-output.txt.gz#_2019-01-22_02_46_08_984507
[5] https://review.openstack.org/#/c/618482/13/oslo_utils/eventletutils.py L160
[6] https://review.openstack.org/632170/
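The temporary cap would be a one-line change along these lines (a sketch; the comment format and any existing lower bound in neutron's requirements.txt may differ):

```
# requirements.txt
oslo.utils<=3.39.0  # temporary cap, see bug 1812922
```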

** Affects: neutron
 Importance: Medium
 Status: Triaged

** Affects: oslo.utils
 Importance: Undecided
 Status: New

** Also affects: oslo.utils
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1812922

Title:
  neutron functional tests break with oslo.utils 3.39.1 and above

Status in neutron:
  Triaged
Status in oslo.utils:
  New

Bug description:
  The new oslo.service version bump [1] is failing on the Neutron
  "test_periodic_worker_lifecycle" unit test [2].  This occurs when
  oslo.utils 3.39.1 or higher is installed; technically 3.38.0 is what
  is set forth in upper-constraints.txt [3] but you can see oslo.utils
  3.40.1 being installed in the failing tests [4].

  The change in oslo.utils that is precipitating this failure is "Fix
  race condition in eventletutils Event" [5].  This is now included as a
  result of the bump for oslo.utils in upper-constraints.txt [6] that
  occurred yesterday (Jan 21 2019).

  As a temporary measure, will alter Neutron requirements.txt to cap the
  oslo.utils version at 3.39.0 or less so this issue can be fixed.

  [1] https://review.openstack.org/632169
  [2] 
neutron.tests.unit.test_worker.PeriodicWorkerTestCase.test_periodic_worker_lifecycle
  [3] 
https://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n548
  [4] 
http://logs.openstack.org/69/632169/1/check/cross-neutron-py35/6f91a77/job-output.txt.gz#_2019-01-22_02_46_08_984507
  [5] https://review.openstack.org/#/c/618482/13/oslo_utils/eventletutils.py 
L160
  [6] https://review.openstack.org/632170/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1812922/+subscriptions



[Yahoo-eng-team] [Bug 1817953] [NEW] oslopolicy-policy-generator does not work for neutron

2019-02-27 Thread Nate Johnston
Public bug reported:

The oslopolicy-policy-generator tool does not work for neutron.  This
appears to be the same as an old bug [1] that was already fixed for
other services.

[centos@persist devstack]$ oslopolicy-policy-generator --namespace neutron
WARNING:stevedore.named:Could not load neutron
Traceback (most recent call last):
  File "/usr/bin/oslopolicy-policy-generator", line 11, in <module>
    sys.exit(generate_policy())
  File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 338, in generate_policy
    _generate_policy(conf.namespace, conf.output_file)
  File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 283, in _generate_policy
    enforcer = _get_enforcer(namespace)
  File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 87, in _get_enforcer
    enforcer = mgr[namespace].obj
  File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 326, in __getitem__
    return self._extensions_by_name[name]
KeyError: 'neutron'

[1] https://bugs.launchpad.net/keystone/+bug/1740951
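The KeyError comes from the stevedore lookup in oslo.policy's generator, which searches the 'oslo.policy.enforcer' entry-point namespace. The keystone fix in [1] worked by registering such an entry point, and a similar change for neutron might add to setup.cfg (the module path and function name here are assumptions, not the merged fix):

```ini
[entry_points]
oslo.policy.enforcer =
    neutron = neutron.policy:get_enforcer
```

where get_enforcer would return an oslo_policy Enforcer with neutron's rules already registered.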

** Affects: neutron
 Importance: Medium
 Assignee: Nate Johnston (nate-johnston)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817953

Title:
  oslopolicy-policy-generator does not work for neutron

Status in neutron:
  New

Bug description:
  The oslopolicy-policy-generator tool does not work for neutron.  This
  appears to be the same as an old bug [1] that was already fixed for
  other services.

  [centos@persist devstack]$ oslopolicy-policy-generator --namespace neutron
  WARNING:stevedore.named:Could not load neutron
  Traceback (most recent call last):
    File "/usr/bin/oslopolicy-policy-generator", line 11, in <module>
      sys.exit(generate_policy())
    File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 338, in generate_policy
      _generate_policy(conf.namespace, conf.output_file)
    File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 283, in _generate_policy
      enforcer = _get_enforcer(namespace)
    File "/usr/lib/python2.7/site-packages/oslo_policy/generator.py", line 87, in _get_enforcer
      enforcer = mgr[namespace].obj
    File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 326, in __getitem__
      return self._extensions_by_name[name]
  KeyError: 'neutron'

  [1] https://bugs.launchpad.net/keystone/+bug/1740951

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1817953/+subscriptions



[Yahoo-eng-team] [Bug 1852760] [NEW] When running 'openstack floating ip list' on undercloud, client cannot handle NotFoundException

2019-11-15 Thread Nate Johnston
Public bug reported:

There's no such thing as floating IPs on an undercloud, but some
investigative tools ask nevertheless.  Asking for floating IPs on an
undercloud results in a NotFoundException, which openstackclient does
not handle gracefully:

(undercloud) [stack@undercloud-0 ~]$ openstack floating ip list --debug

[output removed for brevity]

REQ: curl -g -i -X GET http://192.168.24.1:9696/v2.0/floatingips -H 
"User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.14.2 
CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}02e85d1ea22075e2a96e97c4b58070887f1b8544"
http://192.168.24.1:9696 "GET /v2.0/floatingips HTTP/1.1" 404 103
RESP: [404] Content-Length: 103 Content-Type: application/json 
X-Openstack-Request-Id: req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b Date: Fri, 15 
Nov 2019 15:02:52 GMT Connection: keep-alive
RESP BODY: {"NeutronError": {"message": "The resource could not be found.", 
"type": "HTTPNotFound", "detail": ""}}

GET call to network for http://192.168.24.1:9696/v2.0/floatingips used request 
id req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b
Manager unknown ran task network.GET.floatingips in 0.415860176086s
NotFoundException: Unknown error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, 
in run
return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
self.produce_output(parsed_args, column_names, data)
  File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
parsed_args,
  File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 101, 
in emit_list
self.add_rows(x, column_names, data)
  File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
first_row = next(data_iter)
  File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
(utils.get_item_properties(
  File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
exceptions.raise_from_response(response)
  File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, in 
raise_from_response
http_status=http_status, request_id=request_id
NotFoundException: NotFoundException: Unknown error
clean_up ListFloatingIP: NotFoundException: Unknown error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in run
ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run
result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in 
run_subcommand
ret_value = super(OpenStackShell, self).run_subcommand(argv)
  File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 41, 
in run
return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
self.produce_output(parsed_args, column_names, data)
  File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
parsed_args,
  File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 101, 
in emit_list
self.add_rows(x, column_names, data)
  File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
first_row = next(data_iter)
  File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
(utils.get_item_properties(
  File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
exceptions.raise_from_response(response)
  File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, in 
raise_from_response
http_status=http_status, request_id=request_id
NotFoundException: NotFoundException: Unknown error

This exception should be handled gracefully by the client and should
produce an error message, not a stack trace.

Downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=1765497
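The fix the report asks for can be sketched as follows (illustrative, not the actual openstackclient code; the ``NotFoundException`` class and the client shape are stand-ins for ``openstack.exceptions.NotFoundException`` and the SDK network proxy):

```python
# Sketch: trap the SDK's NotFoundException in the lister command and
# emit a one-line error instead of letting the traceback propagate.
class NotFoundException(Exception):
    """Stand-in for openstack.exceptions.NotFoundException."""


def list_floating_ips(client):
    try:
        return list(client.ips())
    except NotFoundException:
        # Graceful failure: a clear message, no stack trace.
        raise SystemExit(
            "Floating IPs are not supported by this cloud "
            "(the network service returned 404).")
```

On an undercloud, where the floating IP resource does not exist, the user would then see a single explanatory line instead of two tracebacks.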

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852760

Title:
  When running 'openstack floating ip list' on undercloud, client cannot
  handle NotFoundException

[Yahoo-eng-team] [Bug 1852760] Re: When running 'openstack floating ip list' on undercloud, client cannot handle NotFoundException

2019-11-15 Thread Nate Johnston
** Changed in: neutron
   Status: Invalid => Confirmed

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852760

Title:
  When running 'openstack floating ip list' on undercloud, client cannot
  handle NotFoundException

Status in neutron:
  Confirmed

Bug description:
  There's no such thing as floating IPs on an undercloud, but some
  investigative tools ask nevertheless.  Asking for floating IPs on an
  undercloud results in a NotFoundException, which openstackclient does
  not handle gracefully:

  (undercloud) [stack@undercloud-0 ~]$ openstack floating ip list
  --debug

  [output removed for brevity]

  REQ: curl -g -i -X GET http://192.168.24.1:9696/v2.0/floatingips -H 
"User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.14.2 
CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}02e85d1ea22075e2a96e97c4b58070887f1b8544"
  http://192.168.24.1:9696 "GET /v2.0/floatingips HTTP/1.1" 404 103
  RESP: [404] Content-Length: 103 Content-Type: application/json 
X-Openstack-Request-Id: req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b Date: Fri, 15 
Nov 2019 15:02:52 GMT Connection: keep-alive
  RESP BODY: {"NeutronError": {"message": "The resource could not be found.", 
"type": "HTTPNotFound", "detail": ""}}

  GET call to network for http://192.168.24.1:9696/v2.0/floatingips used 
request id req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b
  Manager unknown ran task network.GET.floatingips in 0.415860176086s
  NotFoundException: Unknown error
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 
41, in run
  return super(Command, self).run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
  self.produce_output(parsed_args, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
  parsed_args,
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 
101, in emit_list
  self.add_rows(x, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
  first_row = next(data_iter)
File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
  (utils.get_item_properties(
File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
  exceptions.raise_from_response(response)
File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, 
in raise_from_response
  http_status=http_status, request_id=request_id
  NotFoundException: NotFoundException: Unknown error
  clean_up ListFloatingIP: NotFoundException: Unknown error
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run
  result = self.run_subcommand(remainder)
File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in 
run_subcommand
  ret_value = super(OpenStackShell, self).run_subcommand(argv)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 
41, in run
  return super(Command, self).run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
  self.produce_output(parsed_args, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
  parsed_args,
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 
101, in emit_list
  self.add_rows(x, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
  first_row = next(data_iter)
File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
  (utils.get_item_properties(
File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
  exceptions.raise_from_response(response)
File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, 
in raise_from_response
  http_status=http_status, request_id=request_id
  NotFoundException: NotFoundException: Unknown error

  This exception should be handled gracefully by the client and should
  produce an error message, not a stack trace.

  Downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=1765497

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852760/+subscriptions

[Yahoo-eng-team] [Bug 1852760] Re: When running 'openstack floating ip list' on undercloud, client cannot handle NotFoundException

2019-11-15 Thread Nate Johnston
Sorry, I guess my submission and Slawek's crossed in the mail, so to
speak.  Opened story https://storyboard.openstack.org/#!/story/2006863
for this.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852760

Title:
  When running 'openstack floating ip list' on undercloud, client cannot
  handle NotFoundException

Status in neutron:
  Invalid

Bug description:
  There's no such thing as floating IPs on an undercloud, but some
  investigative tools ask nevertheless.  Asking for floating IPs on an
  undercloud results in a NotFoundException, which openstackclient does
  not handle gracefully:

  (undercloud) [stack@undercloud-0 ~]$ openstack floating ip list
  --debug

  [output removed for brevity]

  REQ: curl -g -i -X GET http://192.168.24.1:9696/v2.0/floatingips -H 
"User-Agent: osc-lib/1.9.0 keystoneauth1/3.4.0 python-requests/2.14.2 
CPython/2.7.5" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}02e85d1ea22075e2a96e97c4b58070887f1b8544"
  http://192.168.24.1:9696 "GET /v2.0/floatingips HTTP/1.1" 404 103
  RESP: [404] Content-Length: 103 Content-Type: application/json 
X-Openstack-Request-Id: req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b Date: Fri, 15 
Nov 2019 15:02:52 GMT Connection: keep-alive
  RESP BODY: {"NeutronError": {"message": "The resource could not be found.", 
"type": "HTTPNotFound", "detail": ""}}

  GET call to network for http://192.168.24.1:9696/v2.0/floatingips used 
request id req-ab1bb8f9-205e-4956-9bf4-0a7fabcc996b
  Manager unknown ran task network.GET.floatingips in 0.415860176086s
  NotFoundException: Unknown error
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 
41, in run
  return super(Command, self).run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
  self.produce_output(parsed_args, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
  parsed_args,
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 
101, in emit_list
  self.add_rows(x, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
  first_row = next(data_iter)
File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
  (utils.get_item_properties(
File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
  exceptions.raise_from_response(response)
File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, 
in raise_from_response
  http_status=http_status, request_id=request_id
  NotFoundException: NotFoundException: Unknown error
  clean_up ListFloatingIP: NotFoundException: Unknown error
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 134, in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 279, in run
  result = self.run_subcommand(remainder)
File "/usr/lib/python2.7/site-packages/osc_lib/shell.py", line 169, in 
run_subcommand
  ret_value = super(OpenStackShell, self).run_subcommand(argv)
File "/usr/lib/python2.7/site-packages/cliff/app.py", line 400, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/site-packages/osc_lib/command/command.py", line 
41, in run
  return super(Command, self).run(parsed_args)
File "/usr/lib/python2.7/site-packages/cliff/display.py", line 119, in run
  self.produce_output(parsed_args, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/lister.py", line 82, in 
produce_output
  parsed_args,
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 
101, in emit_list
  self.add_rows(x, column_names, data)
File "/usr/lib/python2.7/site-packages/cliff/formatters/table.py", line 80, 
in add_rows
  first_row = next(data_iter)
File 
"/usr/lib/python2.7/site-packages/openstackclient/network/v2/floating_ip.py", 
line 399, in 
  (utils.get_item_properties(
File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 898, in 
list
  exceptions.raise_from_response(response)
File "/usr/lib/python2.7/site-packages/openstack/exceptions.py", line 205, 
in raise_from_response
  http_status=http_status, request_id=request_id
  NotFoundException: NotFoundException: Unknown error

  This exception should be handled gracefully by the client and should
  produce an error message, not a stack trace.

  Downstream bug: https://bugzilla.redhat.com/show_bug.cgi?id=1765497

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1852760/+subscriptions

[Yahoo-eng-team] [Bug 1854051] Re: py36 unit test cases fails

2019-11-26 Thread Nate Johnston
According to the Project Testing Interface (PTI), py36 is required for
Ussuri:
https://governance.openstack.org/tc/reference/runtimes/ussuri.html#python-runtime-for-ussuri

Reopened the bug and marked as critical.

** Changed in: neutron
   Status: Won't Fix => New

** Changed in: neutron
   Importance: Low => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854051

Title:
  py36 unit test cases fails

Status in neutron:
  New

Bug description:
  This should be a NOTE rather than a bug, for anyone who meets this
  issue someday, since the minimum supported Python version of neutron
  is now 3.7.

  
  Branch: master
  heads:
  2a8b70d Merge "Update security group rule if port range is all ports"
  fd5e292 Merge "Remove neutron-grenade job from Neutron CI queues"
  f6aef3c Merge "Switch neutron-tempest-with-os-ken-master job to zuul v3"
  2174bb0 Merge "Remove old, legacy experimental CI jobs"
  8672029 Merge "HA race condition test for DHCP scheduling"
  71e3cb0 Merge "Parameter 'fileds' value is not used in _get_subnets"
  b5e5082 Merge "Update networking-bgpvpn and networking-bagpipe liuetenants"
  3c1139c Merge "Make network support read and write separation"
  67b613b Merge "NetcatTester.stop_processes skip "No such process" exception"
  185efb3 Update networking-bgpvpn and networking-bagpipe liuetenants
  728d8ee NetcatTester.stop_processes skip "No such process" exception

  
  Tox env was definitely upgraded to meet the requirements.txt and 
test-requirements.txt

  Exceptions:
  ==
  Failed 2 tests - output below:
  ==

  
neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.test_ovs_neutron_agent.TestOvsDvrNeutronAgentOSKen.test_get_dvr_mac_address_exception
  
--

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 164, in get_dvr_mac_address'
  b'self.get_dvr_mac_address_with_retry()'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
  b'result = f(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 184, in get_dvr_mac_address_with_retry'
  b'self.context, self.host)'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1092, in __call__'
  b'return _mock_self._mock_call(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/mock/mock.py",
 line 1143, in _mock_call'
  b'raise effect'
  b'oslo_messaging.rpc.client.RemoteError: Remote error: None None'
  b'None.'
  b''
  b'During handling of the above exception, another exception occurred:'
  b''
  b'Traceback (most recent call last):'
  b'  File "/home/yulong/github/neutron/neutron/tests/base.py", line 182, 
in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py",
 line 3614, in test_get_dvr_mac_address_exception'
  b'self.agent.dvr_agent.get_dvr_mac_address()'
  b'  File 
"/home/yulong/github/neutron/.tox/py36/lib/python3.6/site-packages/osprofiler/profiler.py",
 line 160, in wrapper'
  b'result = f(*args, **kwargs)'
  b'  File 
"/home/yulong/github/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py",
 line 169, in get_dvr_mac_address'
  b"'message: %s', e)"
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1653, in error'
  b'self.log(ERROR, msg, *args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1674, in log'
  b'self.logger.log(level, msg, *args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1374, in log'
  b'self._log(level, msg, args, **kwargs)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1443, in _log'
  b'exc_info, func, extra, sinfo)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 1413, in 
makeRecord'
  b'sinfo)'
  b'  File "/usr/lib64/python3.6/logging/__init__.py", line 277, in 
__init__'
  b'if (args and len(args) == 1 and isinstance(args[0], 
collections.Mapping)'
  b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.6/abc.py", 
line 193, in __instancecheck__'
  b'return cls.__subclasscheck__(subclass)'
  b'  File "/home/yulong/github/neutron/.tox/py36/lib64/python3.

[Yahoo-eng-team] [Bug 1854050] Re: minor versions 14.0.2 & 14.0.3 are not compatible in dvr-ha

2019-11-26 Thread Nate Johnston
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854050

Title:
  minor versions 14.0.2 & 14.0.3 are not compatible in dvr-ha

Status in neutron:
  Invalid

Bug description:
  Environment is neutron 14.0.2 with DVR and HA (OVS).
  Upgraded a single compute or deployed new with 14.0.3.

  Expected outcome:

  Minor versions should be fully compatible and neutron should work with
  the same major version.

  Actual outcome:

  Can't schedule instances on computes holding this version and neutron
  services spew out errors.

  neutron-server on controller/network node:

  Exception during message handling: InvalidTargetVersion: Invalid target 
version 1.5
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server return 
func(*args, **kwargs)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 
**filter_kwargs)]
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server raise 
exception.InvalidTargetVersion(version=target_version)
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 
InvalidTargetVersion: Invalid target version 1.5
  2019-11-26 08:36:51.359 25 ERROR oslo_messaging.rpc.server 

  
  neutron-openvswitch-agent on compute node:

  Error while processing VIF ports: RemoteError: Remote error: 
InvalidTargetVersion Invalid target version 1.5
  [u'Traceback (most recent call last):\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 166, in _process_incoming\nres = 
self.dispatcher.dispatch(message)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 265, in dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, 
args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 194, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 229, in inner\nreturn func(*args, **kwargs)\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py",
 line 148, in bulk_pull\n**filter_kwargs)]\n', u'  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 551, in obj_to_primitive\nraise 
exception.InvalidTargetVersion(version=target_version)\n', 
u'InvalidTargetVersion: Invalid target version 1.5\n'].
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2278, in rpc_loop
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent port_info, 
provisioning_needed)
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", 
line 160, in wrapper
  2019-11-26 08:36:45.975 6 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent result = 
f(*args,
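The failure mode in the logs above can be reduced to a small sketch (hypothetical names; the real check lives in oslo.versionedobjects' ``obj_to_primitive``): the 14.0.3 agent asks the server to serialize an object at version 1.5, but the 14.0.2 server only knows up to 1.4 and refuses.

```python
# Minimal sketch of the versioned-objects target-version check
# (illustrative, not the oslo.versionedobjects implementation).
class InvalidTargetVersion(Exception):
    pass


def obj_to_primitive(known_version, target_version):
    """Refuse to serialize at a version newer than this node knows,
    which is what happens when a newer agent talks to an older server."""
    def parts(version):
        return tuple(int(p) for p in version.split("."))

    if parts(target_version) > parts(known_version):
        raise InvalidTargetVersion(
            "Invalid target version %s" % target_version)
    return {"versioned_object.version": target_version}
```

Under this reading the bug was resolved as Invalid: the server, not the agent, must be upgraded first so it knows the newer object version.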

[Yahoo-eng-team] [Bug 1856600] [NEW] Unit test jobs are failing with ImportError: cannot import name 'engine' from 'flake8'

2019-12-16 Thread Nate Johnston
Public bug reported:

Neutron unit test CI jobs are failing with the following error:

=
Failures during discovery
=
--- import errors ---
Failed to import test module: neutron.tests.unit.hacking.test_checks
Traceback (most recent call last):
  File "/usr/lib/python3.7/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
  File "/usr/lib/python3.7/unittest/loader.py", line 377, in 
_get_module_from_name
__import__(name)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/hacking/test_checks.py",
 line 15, in 
from flake8 import engine
ImportError: cannot import name 'engine' from 'flake8' 
(/home/zuul/src/opendev.org/openstack/neutron/.tox/py37/lib/python3.7/site-packages/flake8/__init__.py)

Example:
https://e859f0a6f5995c9142c5-a232ce3bdc50fca913ceba9a1c600c62.ssl.cf5.rackcdn.com/572767/23/check/openstack-tox-py37/1d036e0/job-output.txt

It looks like flake8 no longer has an engine module; the API had been
kept for backward compatibility [1], but it appears to have broken.

[1] based on comment in
https://gitlab.com/pycqa/flake8/blob/master/src/flake8/api/legacy.py#L3
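One generic way to guard against this class of breakage (an illustrative pattern, not necessarily the merged neutron fix) is a helper that tries the supported import location first and only falls back to the removed one:

```python
import importlib


def first_importable(candidates):
    """Return the first module in ``candidates`` that imports cleanly.

    Illustrative helper: a test module could use it to prefer the
    supported ``flake8.api.legacy`` location and only fall back to the
    long-removed ``flake8.engine`` on very old releases.
    """
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %s could be imported" % (candidates,))
```

For example, ``first_importable(("flake8.api.legacy", "flake8.engine"))`` would keep the hacking tests importable across flake8 releases.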

** Affects: neutron
 Importance: Critical
 Assignee: Nate Johnston (nate-johnston)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856600

Title:
  Unit test jobs are failing with ImportError: cannot import name
  'engine' from 'flake8'

Status in neutron:
  In Progress

Bug description:
  Neutron unit test CI jobs are failing with the following error:

  =
  Failures during discovery
  =
  --- import errors ---
  Failed to import test module: neutron.tests.unit.hacking.test_checks
  Traceback (most recent call last):
File "/usr/lib/python3.7/unittest/loader.py", line 436, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.7/unittest/loader.py", line 377, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/hacking/test_checks.py",
 line 15, in 
  from flake8 import engine
  ImportError: cannot import name 'engine' from 'flake8' 
(/home/zuul/src/opendev.org/openstack/neutron/.tox/py37/lib/python3.7/site-packages/flake8/__init__.py)

  Example:
  
  https://e859f0a6f5995c9142c5-a232ce3bdc50fca913ceba9a1c600c62.ssl.cf5.rackcdn.com/572767/23/check/openstack-tox-py37/1d036e0/job-output.txt

  It looks like flake8 no longer has an engine module; the API had been
  kept for backward compatibility [1], but it appears to have broken.

  [1] based on comment in
  https://gitlab.com/pycqa/flake8/blob/master/src/flake8/api/legacy.py#L3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1856600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867214] [NEW] MTU too large error presented on create but not update

2020-03-12 Thread Nate Johnston
 vlan 
--provider-segment 109 private10
Error while executing command: BadRequestException: Unknown error, 
{"NeutronError": {"message": "Invalid input for operation: Requested MTU is too 
big, maximum is 1500.", "type": "InvalidInput", "detail": ""}}
~~~

~~~
(overcloud) [stack@undercloud-0 ~]$ openstack network create --mtu 1500 
--provider-physical-network tenant --provider-network-type vlan 
--provider-segment 109 private10
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2020-03-10T15:20:50Z |
| description   |  |
| dns_domain| None |
| id| fb8e96b4-b770-4493-a6ee-3cdae5dbf714 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| False|
| is_vlan_transparent   | None |
| mtu   | 1500 |
| name  | private10|
| port_security_enabled | True |
| project_id| d69c1c6601c741deaa205fa1a7e9c632 |
| provider:network_type | vlan |
| provider:physical_network | tenant   |
| provider:segmentation_id  | 109  |
| qos_policy_id | None |
| revision_number   | 3|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tags  |  |
| updated_at| 2020-03-10T15:20:50Z |
+---+--+
(overcloud) [stack@undercloud-0 ~]$ openstack network set private10 --mtu 2500
(overcloud) [stack@undercloud-0 ~]$ openstack network show private10
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2020-03-10T15:20:50Z |
| description   |  |
| dns_domain| None |
| id| fb8e96b4-b770-4493-a6ee-3cdae5dbf714 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| None |
| is_vlan_transparent   | None |
| mtu   | 2500 |
| name  | private10|
| port_security_enabled | True |
| project_id| d69c1c6601c741deaa205fa1a7e9c632 |
| provider:network_type | vlan |
| provider:physical_network | tenant   |
| provider:segmentation_id  | 109  |
| qos_policy_id | None |
| revision_number   | 5|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tags  |  |
| updated_at| 2020-03-10T15:21:08Z |
+---+--+
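The asymmetry shown above can be sketched as follows (illustrative names, not neutron's actual code): the MTU ceiling is checked on network create but not on update, which is why ``openstack network set --mtu 2500`` succeeds where create with the same value failed.

```python
MAX_MTU = 1500  # illustrative ceiling; in the report it comes from the
                # physical network's configured maximum


def validate_mtu(mtu, max_mtu=MAX_MTU):
    if mtu > max_mtu:
        raise ValueError(
            "Invalid input for operation: Requested MTU is too big, "
            "maximum is %d." % max_mtu)


def create_network(attrs):
    validate_mtu(attrs.get("mtu", MAX_MTU))   # enforced on create today
    return dict(attrs)


def update_network(net, attrs):
    # The behavior the report implies is missing: run the same
    # validator on update as well.
    validate_mtu(attrs.get("mtu", net.get("mtu", MAX_MTU)))
    net.update(attrs)
    return net
```

With the validator applied on both paths, the update to MTU 2500 would be rejected with the same "Requested MTU is too big" error as the create.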

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New

[Yahoo-eng-team] [Bug 1869244] [NEW] RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X when using trunk bridges with DPDK vhostuser mode

2020-03-26 Thread Nate Johnston
Public bug reported:

DPDK vhostuser mode (DPDK/vhu) means that when an instance is powered
off the port is deleted, and when an instance is powered on a port is
created.  This means a reboot is functionally a super fast
delete-then-create.  Neutron trunking mode in combination with DPDK/vhu
implements a trunk bridge for each tenant, and the ports for the
instances are created as subports of that bridge.  The standard way a
trunk bridge works is that when all the subports are deleted, a thread
is spawned to delete the trunk bridge, because that is an expensive and
time-consuming operation.  That means that if the port in question is
the only port on the trunk on that compute node, this happens:

1. The port is deleted
2. A thread is spawned to delete the trunk
3. The port is recreated

If the trunk is deleted after #3 happens then the instance has no
networking and is inaccessible; this is the scenario that was dealt with
in a previous change [1].  But there continue to be issues with errors
"RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X".

2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command [-] Error 
executing command: RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command Traceback 
(most recent call last):
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 
37, in execute
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command 
self.run_idl(None)
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/schema/open_vswitch/commands.py", 
line 335, in run_idl
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command br = 
idlutils.row_by_value(self.api.idl, 'Bridge', 'name', self.bridge)
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 
63, in row_by_value
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command raise 
RowNotFound(table=table, col=column, match=match)
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command 
RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X
2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command 
2020-03-02 10:37:45.932 6278 ERROR 
neutron.services.trunk.drivers.openvswitch.agent.ovsdb_handler [-] Cannot 
obtain interface list for bridge tbr-XXXXXXXX-X: Cannot find Bridge with 
name=tbr-XXXXXXXX-X: RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X


What I believe is happening in this case is that the trunk is being 
deleted in the middle of the execution of #3, so that it stops 
existing in the middle of the port creation logic but before the 
port is actually recreated.

This issue was observed in setups running Queens.

** Affects: neutron
     Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1869244

Title:
  RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X when using
  trunk bridges with DPDK vhostuser mode

Status in neutron:
  New

Bug description:
  DPDK vhostuser mode (DPDK/vhu) means that when an instance is powered
  off the port is deleted, and when an instance is powered on a port is
  created.  This means a reboot is functionally a super fast
  delete-then-create.  Neutron trunking mode in combination with DPDK/vhu
  implements a trunk bridge for each tenant, and the ports for the
  instances are created as subports of that bridge.  The standard way a
  trunk bridge works is that when all the subports are deleted, a thread
  is spawned to delete the trunk bridge, because that is an expensive and
  time-consuming operation.  That means that if the port in question is
  the only port on the trunk on that compute node, this happens:

  1. The port is deleted
  2. A thread is spawned to delete the trunk
  3. The port is recreated

  If the trunk is deleted after #3 happens then the instance has no
  networking and is inaccessible; this is the scenario that was dealt with
  in a previous change [1].  But there continue to be issues with errors
  "RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X".

  2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command [-] Error 
executing command: RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X
  2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command Traceback 
(most recent call last):
  2020-03-02 10:37:45.929 6278 ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 
37, in execute
  2020-

[Yahoo-eng-team] [Bug 1878031] [NEW] Unable to delete an instance | Conflict: Port [port-id] is currently a parent port for trunk [trunk-id]

2020-05-11 Thread Nate Johnston
ges/neutronclient/v2_0/client.py", line 331, in 
retry_request
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] headers=headers, params=params)
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 115,in 
wrapper

2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] ret = obj(*args, **kwargs)
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 294, in 
do_request
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] 
self._handle_fault_response(status_code, replybody, resp)
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 115,in 
wrapper

2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] ret = obj(*args, **kwargs)
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 269, in 
_handle_fault_response
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] exception_handler_v20(status_code, 
error_body)
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 93, in 
exception_handler_v20
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] request_ids=request_ids)

2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] Conflict: Port 
991e4e50-481a-4ca6-9ea6-69f848c4ca9f is currently a parent port for trunk 
5800ee0f-b558-46cb-bb0b-92799dbe02cf.
2020-03-04 09:52:46.257 1 ERROR nova.network.neutronv2.api [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] Neutron server returns request_ids: 
['req-dbd7a924-a9d2-4da5-aa41-b930580ad4f2']
~~~

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: trunk

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1878031

Title:
   Unable to delete an instance | Conflict: Port [port-id] is currently
  a parent port for trunk [trunk-id]

Status in neutron:
  New

Bug description:
  When you create a trunk in Neutron you create a parent port for the
  trunk and attach the trunk to the parent.  Then subports can be
  created on the trunk.  When instances are created on the trunk, first
  a port is created and then an instance is associated with a free port.
  It looks to me that this is the oversight in the logic.

  From the perspective of the code, the parent port looks like any other
  port attached to the trunk bridge.  It doesn't have an instance
  attached to it so it looks like it's not being used for anything
  (which is technically correct).  So it becomes an eligible port for an
  instance to bind to.  That is all fine and dandy until you go to
  delete the instance and you get the "Port [port-id] is currently a
  parent port for trunk [trunk-id]" exception just as happened here.
  Anecdotally, it seems rare that an instance will actually bind to
  it, but that is what happened for the user in this case and I have had
  several pings over the past year about people in a similar state.

  I propose that when a port is made parent port for a trunk, that the
  trunk be established as the owner of the port.  That way it will be
  ineligible for instances seeking to bind to the port.
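
A minimal sketch of that proposal (hypothetical names, not actual Neutron scheduling code): once the parent port carries a device_owner identifying the trunk, the free-port selection naturally skips it. The 'trunk:parent_port' tag below is an invented illustration, not an existing Neutron constant.

```python
# Illustrative only: model a port list and the eligibility check that
# picks a "free" port for an instance to bind to.
def eligible_for_instance(port):
    # A port with any device_owner/device_id is already claimed and
    # must be skipped -- this is what the parent port lacks today.
    return not port.get('device_owner') and not port.get('device_id')

ports = [
    {'id': 'parent-port', 'device_owner': 'trunk:parent_port',
     'device_id': 'trunk-5800ee0f'},
    {'id': 'free-port', 'device_owner': '', 'device_id': ''},
]

free = [p['id'] for p in ports if eligible_for_instance(p)]
```

With the ownership tag in place, only 'free-port' is offered for binding, so the "currently a parent port for trunk" conflict at delete time cannot arise.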

  See also old bug: https://bugs.launchpad.net/neutron/+bug/1700428

  Description of problem:

  Attempting to delete instance failed with error in nova-compute

  ~~~
  2020-03-04 09:52:46.257 1 WARNING nova.network.neutronv2.api 
[req-0dd45fe4-861c-46d3-a5ec-7db36352da58 02c6d1bc10fe4ffaa289c786cd09b146 
695c417810ac460480055b074bc41817 - default default] [instance: 
2f9e3740-b425-4f00-a949-e1aacf2239c4] Failed to delete port 
991e4e50-481a-4ca6-9ea6-69f848c4ca9f for instance.: Conflict: Port 
991e4e50-481a-4ca6-9ea6-69f848c4ca9f is currently a parent port for trunk 
5800ee0f-b558-46cb-bb0b-92799dbe02cf.
  ~~~

  ~~~
  [stack@migration-host ~]$ openstack network trunk show 
5800ee0f-b558-46cb-bb0b-92799dbe02cf
  +-+

[Yahoo-eng-team] [Bug 1888258] Re: [neutron-tempest-plugin] greendns query has no attribute "_compute_expiration"

2020-07-27 Thread Nate Johnston
** Changed in: neutron
   Status: New => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888258

Title:
  [neutron-tempest-plugin] greendns query has no attribute
  "_compute_expiration"

Status in neutron:
  Fix Released

Bug description:
  Some tests are failing consistently with the following error:
  http://paste.openstack.org/show/796134/

  "AttributeError: module 'dns.query' has no attribute
  '_compute_expiration'"

  
  Error logs: 
https://8a0f799a619e7f365667-2de10bdd194d323966e80d1fe3d10503.ssl.cf1.rackcdn.com/741957/2/check/neutron-tempest-plugin-designate-scenario/db75334/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649703] [NEW] neutron-fwaas check jobs for FWaaS v2 fail intermittently

2016-12-13 Thread Nate Johnston
Public bug reported:

The following check jobs fail intermittently.  When one fails the other
usually succeeds.

- gate-neutron-fwaas-v2-dsvm-tempest
- gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv

Here is an example of the multinode failing but the singlenode
succeeding:

- singlenode fail: 
http://logs.openstack.org/92/391392/10/check/gate-neutron-fwaas-v2-dsvm-tempest/cf602b9/testr_results.html.gz
- multinode succeed: 
http://logs.openstack.org/92/391392/10/check/gate-grenade-dsvm-neutron-fwaas-multinode-nv/fb42351/console.html

Here is an example of singlenode failing but multinode succeeding:

- singlenode succeed: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest/7e38030/testr_results.html.gz
- multinode fail: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/d3bbaac/testr_results.html.gz

Another example of same:

- singlenode succeed: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest/0e52b7e/testr_results.html.gz
- multinode fail: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/73a4af8/testr_results.html.gz

SridarK commented on https://review.openstack.org/#/c/407311/ that this
appears to occur on delete of fwg.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649703

Title:
  neutron-fwaas check jobs for FWaaS v2 fail intermittently

Status in neutron:
  New

Bug description:
  The following check jobs fail intermittently.  When one fails the
  other usually succeeds.

  - gate-neutron-fwaas-v2-dsvm-tempest
  - gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv

  Here is an example of the multinode failing but the singlenode
  succeeding:

  - singlenode fail: 
http://logs.openstack.org/92/391392/10/check/gate-neutron-fwaas-v2-dsvm-tempest/cf602b9/testr_results.html.gz
  - multinode succeed: 
http://logs.openstack.org/92/391392/10/check/gate-grenade-dsvm-neutron-fwaas-multinode-nv/fb42351/console.html

  Here is an example of singlenode failing but multinode succeeding:

  - singlenode succeed: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest/7e38030/testr_results.html.gz
  - multinode fail: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/d3bbaac/testr_results.html.gz

  Another example of same:

  - singlenode succeed: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest/0e52b7e/testr_results.html.gz
  - multinode fail: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/73a4af8/testr_results.html.gz

  SridarK commented on https://review.openstack.org/#/c/407311/ that
  this appears to occur on delete of fwg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657299] [NEW] Intermittent failures in neutron-fwaas v2 tempest tests

2017-01-17 Thread Nate Johnston
Public bug reported:

Occasionally the neutron-fwaas v2 tempest tests (gate-neutron-fwaas-v2
-dsvm-tempest, gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv, or both)
will fail with the following error on the test
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaasv2_extensions.FWaaSv2ExtensionTestJSON.test_create_show_delete_firewall_group.

  Captured traceback:
  ~~~
 Traceback (most recent call last):
   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaasv2_extensions.py",
 line 127, in _try_delete_firewall_group
 self.firewall_groups_client.delete_firewall_group(fwg_id)
   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/v2_client.py",
 line 38, in delete_firewall_group
 return self.delete_resource(uri)
   File "tempest/lib/services/network/base.py", line 41, in delete_resource
 resp, body = self.delete(req_uri)
   File "tempest/lib/common/rest_client.py", line 306, in delete
 return self.request('DELETE', url, extra_headers, headers, body)
   File "tempest/lib/common/rest_client.py", line 663, in request
 self._error_checker(resp, resp_body)
   File "tempest/lib/common/rest_client.py", line 826, in _error_checker
 message=message)
 tempest.lib.exceptions.ServerFault: Got server fault
 Details: Request Failed: internal server error while processing your 
request. 

Example: http://logs.openstack.org/08/421408/1/check/gate-neutron-
fwaas-v2-dsvm-tempest/b55ddb2/console.html#_2017-01-17_18_24_59_724184

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657299

Title:
  Intermittent failures in neutron-fwaas v2 tempest tests

Status in neutron:
  New

Bug description:
  Occasionally the neutron-fwaas v2 tempest tests (gate-neutron-fwaas-v2
  -dsvm-tempest, gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv, or
  both) will fail with the following error on the test
  
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaasv2_extensions.FWaaSv2ExtensionTestJSON.test_create_show_delete_firewall_group.

Captured traceback:
~~~
   Traceback (most recent call last):
 File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaasv2_extensions.py",
 line 127, in _try_delete_firewall_group
   self.firewall_groups_client.delete_firewall_group(fwg_id)
 File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/v2_client.py",
 line 38, in delete_firewall_group
   return self.delete_resource(uri)
 File "tempest/lib/services/network/base.py", line 41, in 
delete_resource
   resp, body = self.delete(req_uri)
 File "tempest/lib/common/rest_client.py", line 306, in delete
   return self.request('DELETE', url, extra_headers, headers, body)
 File "tempest/lib/common/rest_client.py", line 663, in request
   self._error_checker(resp, resp_body)
 File "tempest/lib/common/rest_client.py", line 826, in _error_checker
   message=message)
   tempest.lib.exceptions.ServerFault: Got server fault
   Details: Request Failed: internal server error while processing your 
request. 

  Example: http://logs.openstack.org/08/421408/1/check/gate-neutron-
  fwaas-v2-dsvm-tempest/b55ddb2/console.html#_2017-01-17_18_24_59_724184

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661419] [NEW] neutron-fwaas functional tests on stable/newton fail because db backend not set up

2017-02-02 Thread Nate Johnston
Public bug reported:

The functional tests for neutron-fwaas master all fail with exceptions
like:

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, in 
_setup_database_fixtures
self.fail(msg)
  File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: backend 'mysql' unavailable

or

Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", 
line 136, in setUp
super(_TestModelsMigrations, self).setUp()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, in 
setUp
self._setup_database_fixtures()
  File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, in 
_setup_database_fixtures
self.fail(msg)
  File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: backend 'postgresql' unavailable

See: http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-
dsvm-functional/35cae70/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661419

Title:
  neutron-fwaas functional tests on stable/newton fail because db
  backend not set up

Status in neutron:
  New

Bug description:
  The functional tests for neutron-fwaas master all fail with exceptions
  like:

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", line 
136, in setUp
  super(_TestModelsMigrations, self).setUp()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, 
in setUp
  self._setup_database_fixtures()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, 
in _setup_database_fixtures
  self.fail(msg)
File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: backend 'mysql' unavailable

  or

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron/neutron/tests/functional/db/test_migrations.py", line 
136, in setUp
  super(_TestModelsMigrations, self).setUp()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 283, 
in setUp
  self._setup_database_fixtures()
File "/opt/stack/new/neutron/neutron/tests/unit/testlib_api.py", line 320, 
in _setup_database_fixtures
  self.fail(msg)
File 
"/opt/stack/new/neutron-fwaas/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: backend 'postgresql' unavailable

  See: http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-
  dsvm-functional/35cae70/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661418] [NEW] neutron-fwaas functional tests do not execute

2017-02-02 Thread Nate Johnston
Public bug reported:

The neutron-fwaas functional test suite for stable/newton runs tests
[1], but the functional test suite for master (ocata) does not [2].

[1] 
http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-dsvm-functional/35cae70/testr_results.html.gz
[2] 
http://logs.openstack.org/51/424551/13/check/gate-neutron-fwaas-dsvm-functional/2596d9d/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661418

Title:
  neutron-fwaas functional tests do not execute

Status in neutron:
  New

Bug description:
  The neutron-fwaas functional test suite for stable/newton runs tests
  [1], but the functional test suite for master (ocata) does not [2].

  [1] 
http://logs.openstack.org/03/425003/1/check/gate-neutron-fwaas-dsvm-functional/35cae70/testr_results.html.gz
  [2] 
http://logs.openstack.org/51/424551/13/check/gate-neutron-fwaas-dsvm-functional/2596d9d/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661420] [NEW] neutron-fwaas tempest v2 job on stable/newton fails with "extension could not be found"

2017-02-02 Thread Nate Johnston
Public bug reported:

The gate-neutron-fwaas-v2-dsvm-tempest is failing for stable/newton jobs
in neutron-fwaas.  The errors look like:

ft1.3: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_rule[id-563564f7-7077-4f5e-8cdc-51f37ae5a2b9]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2017-02-02 21:56:08,309 22180 INFO [tempest.lib.common.rest_client] Request 
(FWaaSExtensionTestJSON:setUp): 404 POST 
http://198.61.190.237:9696/v2.0/fw/firewall_rules 0.025s
2017-02-02 21:56:08,310 22180 DEBUG[tempest.lib.common.rest_client] Request 
- Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
Body: {"firewall_rule": {"name": "fw-rule-1600127867", "protocol": 
"tcp", "action": "allow"}}
Response - Headers: {u'x-openstack-request-id': 
'req-267c7949-c777-4f3f-a63e-55ecf98338aa', u'content-type': 'application/json; 
charset=UTF-8', u'date': 'Thu, 02 Feb 2017 21:56:08 GMT', 'content-location': 
'http://198.61.190.237:9696/v2.0/fw/firewall_rules', 'status': '404', 
u'connection': 'close', u'content-length': '112'}
Body: {"message": "The resource could not be found.\n\n\n", 
"code": "404 Not Found", "title": "Not Found"}
}}}

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 65, in setUp
protocol="tcp")
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/fwaas_client.py",
 line 65, in create_firewall_rule
**kwargs)
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/client.py",
 line 62, in create_firewall_rule
return self.create_resource(uri, post_data)
  File "tempest/lib/services/network/base.py", line 60, in create_resource
resp, body = self.post(req_uri, req_post_data)
  File "tempest/lib/common/rest_client.py", line 276, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 664, in request
self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 761, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'title': u'Not Found', u'code': u'404 Not Found', u'message': u'The 
resource could not be found.\n\n\n'}

Example: http://logs.openstack.org/08/427508/1/check/gate-neutron-
fwaas-v2-dsvm-tempest/7ca2bc0/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661420

Title:
  neutron-fwaas tempest v2 job on stable/newton fails with "extension
  could not be found"

Status in neutron:
  New

Bug description:
  The gate-neutron-fwaas-v2-dsvm-tempest is failing for stable/newton
  jobs in neutron-fwaas.  The errors look like:

  ft1.3: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.test_create_update_delete_firewall_rule[id-563564f7-7077-4f5e-8cdc-51f37ae5a2b9]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2017-02-02 21:56:08,309 22180 INFO [tempest.lib.common.rest_client] 
Request (FWaaSExtensionTestJSON:setUp): 404 POST 
http://198.61.190.237:9696/v2.0/fw/firewall_rules 0.025s
  2017-02-02 21:56:08,310 22180 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
  Body: {"firewall_rule": {"name": "fw-rule-1600127867", "protocol": 
"tcp", "action": "allow"}}
  Response - Headers: {u'x-openstack-request-id': 
'req-267c7949-c777-4f3f-a63e-55ecf98338aa', u'content-type': 'application/json; 
charset=UTF-8', u'date': 'Thu, 02 Feb 2017 21:56:08 GMT', 'content-location': 
'http://198.61.190.237:9696/v2.0/fw/firewall_rules', 'status': '404', 
u'connection': 'close', u'content-length': '112'}
  Body: {"message": "The resource could not be found.\n\n\n", "code": "404 Not Found", "title": "Not Found"}
  }}}

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 65, in setUp
  protocol="tcp")
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/fwaas_client.py",
 line 65, in create_firewall_rule
  **kwargs)
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/client.py",
 line 62, in create_firewall_rule
  return self.create_resource(uri, post_data)
File "tempest/lib/services/network/base.py", line 60, in create_resource
  resp, body = self.post(req_uri, req_post_data)
File "t

[Yahoo-eng-team] [Bug 1622694] [NEW] [FWaaS] Unit test race condition in creating/updating firewall

2016-09-12 Thread Nate Johnston
Public bug reported:

The FWaaS unit test
neutron_fwaas.tests.unit.services.firewall.test_fwaas_plugin.TestFirewallPluginBase.test_update_firewall_shared_fails_for_non_admin
creates a firewall, and then tries an update.  If that update occurs
before the creation is completed, then the router is still in
PENDING_UPDATE state and a successful return code is returned.  Since
the test is expecting exc.HTTPForbidden.code (HTTP 403), this means the
test fails.  This looks like a race condition, but it should be handled
properly.
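
One conventional way to close such a race in the test itself is to poll until the firewall leaves its PENDING_* state before issuing the update whose 403 the test asserts on. A sketch under the assumption that the test can read the firewall status (helper and class names are hypothetical, not the actual FWaaS test code):

```python
# Poll a status getter until it reports the target state or times out.
import time

def wait_for_status(get_status, target='ACTIVE', timeout=5.0, interval=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == target:
            return True
        time.sleep(interval)
    return False

# Fake firewall that becomes ACTIVE after a few polls, standing in for
# the asynchronous create described above.
class FakeFirewall:
    def __init__(self):
        self._polls = 0
    def status(self):
        self._polls += 1
        return 'ACTIVE' if self._polls >= 3 else 'PENDING_CREATE'

fw = FakeFirewall()
ready = wait_for_status(fw.status)
```

Only once `ready` is true would the test attempt the non-admin update and assert the HTTP 403, removing the dependency on creation timing.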

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622694

Title:
  [FWaaS] Unit test race condition in creating/updating firewall

Status in neutron:
  Confirmed

Bug description:
  The FWaaS unit test
  
neutron_fwaas.tests.unit.services.firewall.test_fwaas_plugin.TestFirewallPluginBase.test_update_firewall_shared_fails_for_non_admin
  creates a firewall, and then tries an update.  If that update occurs
  before the creation is completed, then the router is still in
  PENDING_UPDATE state and a successful return code is returned.  Since
  the test is expecting exc.HTTPForbidden.code (HTTP 403), this means
  the test fails.  This looks like a race condition, but it should be
  handled properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623183] [NEW] [FWaaS] project_id being returned instead of tenant_id by neutron, breaking some FWaaS unit tests

2016-09-13 Thread Nate Johnston
Public bug reported:

With project_id now being accepted and returned systematically in
Neutron[1], some of the FWaaS unit tests have broken.  This has broken
the FWaaS gate.  There are a couple of reasons why they have broken:

1. They are giving a bad tenant_id and are looking for an exception to
be thrown with error text 'Invalid input for tenant_id', but now this is
coming back as 'Invalid input for project_id'.

2. In some cases dicts are being constructed to represent the expected
response to the API call, and these include 'tenant_id'.  These are then
compared to the response, and the response includes both 'tenant_id' and
'project_id'.

[1]
http://git.openstack.org/cgit/openstack/neutron/commit/?id=ba788da398b31d5433a91bdc72ff2695b475fa41
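
For breakage reason 2, one possible shape of a fix (hypothetical, not the actual merged change) is to normalize both the expected and the actual dict so 'project_id' mirrors 'tenant_id' before comparing:

```python
# Normalize a resource dict so tenant_id/project_id are both present
# and equal, making expected-vs-actual comparisons tolerant of the
# duplicated key the API now returns.
def normalize(d):
    d = dict(d)
    tid = d.get('tenant_id', d.get('project_id'))
    if tid is not None:
        d['tenant_id'] = tid
        d['project_id'] = tid
    return d

# Expected dict built by an old test vs. response carrying both keys.
expected = {'name': 'fw1', 'tenant_id': 't-1'}
actual = {'name': 'fw1', 'tenant_id': 't-1', 'project_id': 't-1'}
```

After normalization the two dicts compare equal, so tests written before the project_id change keep passing without editing every fixture.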

** Affects: neutron
     Importance: High
 Assignee: Nate Johnston (nate-johnston)
 Status: In Progress


** Tags: fwaas gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623183

Title:
  [FWaaS] project_id being returned instead of tenant_id by neutron,
  breaking some FWaaS unit tests

Status in neutron:
  In Progress

Bug description:
  With project_id now being accepted and returned systematically in
  Neutron[1], some of the FWaaS unit tests have broken.  This has broken
  the FWaaS gate.  There are a couple of reasons why they have broken:

  1. They are giving a bad tenant_id and are looking for an exception to
  be thrown with error text 'Invalid input for tenant_id', but now this
  is coming back as 'Invalid input for project_id'.

  2. In some cases dicts are being constructed to represent the expected
  response to the API call, and these include 'tenant_id'.  These are
  then compared to the response, and the response includes both
  'tenant_id' and 'project_id'.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/commit/?id=ba788da398b31d5433a91bdc72ff2695b475fa41

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625818] [NEW] FWaaS v2 does not handle _interfaces not present in updated router gracefully

2016-09-20 Thread Nate Johnston
Public bug reported:

When updated router info comes in to firewall_l3_agent_v2.py without the
_interfaces field set, the code does not gracefully detect this and skip
the update.  This was picked up in tempest testing after the RC cutoff.
The fix for master is under review; the fix needs to be backported to
stable/neutron.  Fix in master is:
https://review.openstack.org/#/c/371611
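
A minimal sketch of the guard described above (the real fix is in the linked review; the function and keys here are illustrative): skip the firewall update gracefully when the router dict lacks the _interfaces field instead of raising.

```python
# Graceful handling of router updates that omit '_interfaces': return
# an empty port list rather than raising AttributeError/KeyError.
def ports_for_router_update(router):
    interfaces = router.get('_interfaces')
    if not interfaces:
        # Nothing to (re)wire firewall rules onto; skip this update.
        return []
    return [port['id'] for port in interfaces]
```

With this guard, the RPC handler logs or ignores routers like the gateway-only example below instead of failing the whole update call.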

I found an example of an updated router that does not have _interfaces as an 
attribute at [1]:
2016-09-15 19:37:08.356 21040 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 
[req-b3e3ccbc-92ea-4c8e-8196-2987ecdd0195 - -] FWaaS router update RPC info 
call failed for {u'enable_snat': True, u'gw_port': {u'allowed_address_pairs': 
[], u'extra_dhcp_opts': [], u'updated_at': u'2016-09-15T19:36:31', 
u'device_owner': u'network:router_gateway', u'revision_number': 7, 
u'port_security_enabled': False, u'binding:profile': {}, u'binding:vnic_type': 
u'normal', u'fixed_ips': [{u'subnet_id': 
u'b27366c2-987c-4f6c-85a6-4fa329392ade', u'prefixlen': 24, u'ip_address': 
u'172.24.5.11'}, {u'subnet_id': u'71583438-8240-4bc1-a12f-3fc28087dff3', 
u'prefixlen': 64, u'ip_address': u'2001:db8::6'}], u'id': 
u'38a74555-f603-4287-9667-c13a7bcb8779', u'security_groups': [], 
u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
u'address_scopes': {u'4': None, u'6': None}, u'binding:vif_type': u'ovs', 
u'mac_address': u'fa:16:3e:5f:69:e7', u'project_id': u'', u'stat
 us': u'ACTIVE', u'subnets': [{u'dns_nameservers': [], u'ipv6_ra_mode': None, 
u'gateway_ip': u'2001:db8::2', u'cidr': u'2001:db8::/64', u'id': 
u'71583438-8240-4bc1-a12f-3fc28087dff3', u'subnetpool_id': None}, 
{u'dns_nameservers': [], u'ipv6_ra_mode': None, u'gateway_ip': u'172.24.5.1', 
u'cidr': u'172.24.5.0/24', u'id': u'b27366c2-987c-4f6c-85a6-4fa329392ade', 
u'subnetpool_id': None}], u'binding:host_id': u'ubuntu-trusty-rax-ord-4320605', 
u'description': u'', u'device_id': u'bcb45377-b7b2-4186-98dd-58e77a9b75e3', 
u'name': u'', u'admin_state_up': True, u'network_id': 
u'16004da0-37c9-4481-b769-0706b581d9e9', u'tenant_id': u'', u'created_at': 
u'2016-09-15T19:36:29', u'mtu': 1500, u'extra_subnets': []}, u'updated_at': 
u'2016-09-15T19:37:07', u'revision_number': 7, u'id': 
u'bcb45377-b7b2-4186-98dd-58e77a9b75e3', u'availability_zone_hints': [], 
u'availability_zones': [u'nova'], u'distributed': False, u'project_id': 
u'750d1d0452334b93815857b6b9bc9190', u'status': u'ACTIVE', u'ha_vr_id': 0, u
 'description': u'', u'ha': False, u'gw_port_host': 
u'ubuntu-trusty-rax-ord-4320605', u'external_gateway_info': {u'network_id': 
u'16004da0-37c9-4481-b769-0706b581d9e9', u'enable_snat': True, 
u'external_fixed_ips': [{u'subnet_id': u'b27366c2-987c-4f6c-85a6-4fa329392ade', 
u'ip_address': u'172.24.5.11'}, {u'subnet_id': 
u'71583438-8240-4bc1-a12f-3fc28087dff3', u'ip_address': u'2001:db8::6'}]}, 
u'name': u'tempest-router-smoke-1843134667', u'gw_port_id': 
u'38a74555-f603-4287-9667-c13a7bcb8779', u'admin_state_up': True, u'tenant_id': 
u'750d1d0452334b93815857b6b9bc9190', u'created_at': u'2016-09-15T19:36:28', 
u'flavor_id': None, u'routes': []}
[1] 
http://logs.openstack.org/74/370274/11/check/gate-neutron-fwaas-dsvm-tempest/0768c92/logs/screen-q-l3.txt.gz#_2016-09-15_19_37_08_356

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: fwaas newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625818

Title:
  FWaaS v2 does not handle _interfaces not present in updated router
  gracefully

Status in neutron:
  New

Bug description:
  When updated router info comes in to firewall

[Yahoo-eng-team] [Bug 1627785] [NEW] [RFE] Create FWaaS driver for OVS firewalls

2016-09-26 Thread Nate Johnston
Public bug reported:

Create a back-end driver for FWaaS that will implement firewalls using
OVS flows, similar to the Security Group implementation that uses OVS
flows[1].  This will be implemented within the context of the FWaaS L2
agent extension[2], and the L2 agent extension API will give FWaaS
access to the integration bridge for flow management.

[1] http://docs.openstack.org/developer/neutron/devref/openvswitch_firewall.html
[2] https://review.openstack.org/#/c/323971

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: fwaas ovs-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627785

Title:
  [RFE] Create FWaaS driver for OVS firewalls

Status in neutron:
  New

Bug description:
  Create a back-end driver for FWaaS that will implement firewalls using
  OVS flows, similar to the Security Group implementation that uses OVS
  flows[1].  This will be implemented within the context of the FWaaS L2
  agent extension[2], and the L2 agent extension API will give FWaaS
  access to the integration bridge for flow management.

  [1] 
http://docs.openstack.org/developer/neutron/devref/openvswitch_firewall.html
  [2] https://review.openstack.org/#/c/323971

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627785/+subscriptions



[Yahoo-eng-team] [Bug 1628627] [NEW] In FWaaS, when someone makes a change to a firewall rule we know, Who, What, When, and Where

2016-09-28 Thread Nate Johnston
Public bug reported:

In the FWaaS service, create the ability for administrators to enable an
'audit trail' feature.  The audit trail would record every change to
firewalls that affects security.  The output would go to the
notification queue.

Audit notations should contain all information necessary to process
them.  For example, an audit notation that says "user abcde1234
permitted port 22 traffic from firewall group A to firewall group B" is
not enough information.  In order to determine what needs to be scanned,
the consumer of the audit would need to subsequently query FWaaS to
determine the membership of the 2 firewall groups cited.  Notations
should carry enough information so that no subsequent querying is
required for processing.

The notification should encompass all of:

- Who: Identity of the user initiating the change.
- What: The information on what was changed.  Should include port information, 
whether access was permitted or disallowed, etc.
- Where: A list of all affected ports/IP addresses/instances, grouped by 
connection origin/destination.  This could be abbreviated to indicate an entire 
tenant if that is the target.  
- When: Timestamp indicating when the change was initiated.

Use case: This would allow a customer's security team to subscribe to a
collated feed of all security events in order to detect those events
that should trigger an audit or vulnerability scan.
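
A self-contained notification covering all four elements might look like
the following (field names and values are purely illustrative, not a
proposed schema):

```python
# Hypothetical audit notification: membership is fully expanded so the
# consumer never needs a follow-up query against FWaaS.
audit_notification = {
    "who": {"user_id": "abcde1234", "project_id": "tenant-42"},
    "what": {"action": "rule_update", "protocol": "tcp",
             "port": 22, "access": "permitted"},
    "where": {"source_ports": ["10.0.0.5"],
              "destination_ports": ["10.0.1.7", "10.0.1.8"]},
    "when": "2016-09-28T14:03:11Z",
}

# Every notation carries all four W's.
assert {"who", "what", "where", "when"} <= set(audit_notification)
```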

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628627

Title:
  In FWaaS, when someone makes a change to a firewall rule we know, Who,
  What, When, and Where

Status in neutron:
  New

Bug description:
  In the FWaaS service, create the ability for administrators to enable
  an 'audit trail' feature.  The audit trail would record every change
  to firewalls that affects security.  The output would go to the
  notification queue.

  Audit notations should contain all information necessary to process
  them.  For example, an audit notation that says "user abcde1234
  permitted port 22 traffic from firewall group A to firewall group B"
  is not enough information.  In order to determine what needs to be
  scanned, the consumer of the audit would need to subsequently query
  FWaaS to determine the membership of the 2 firewall groups cited.
  Notations should carry enough information so that no subsequent
  querying is required for processing.

  The notification should encompass all of:

  - Who: Identity of the user initiating the change.
  - What: The information on what was changed.  Should include port 
information, whether access was permitted or disallowed, etc.
  - Where: A list of all affected ports/IP addresses/instances, grouped by 
connection origin/destination.  This could be abbreviated to indicate an entire 
tenant if that is the target.  
  - When: Timestamp indicating when the change was initiated.

  Use case: This would allow a customer's security team to subscribe to
  a collated feed of all security events in order to detect those events
  that should trigger an audit or vulnerability scan.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628627/+subscriptions



[Yahoo-eng-team] [Bug 1628658] [NEW] [RFE] FWaaS integration with Congress for firewall policy validation

2016-09-28 Thread Nate Johnston
Public bug reported:

FWaaS is a repository for storing and applying security rules to permit
or deny network access. Rules should be able to be validated and
accepted or rejected based on security policy. Since Congress is the
engine for policy validation, work to link FWaaS and Congress.

Use case: For example, as a company we may decide that connections from,
say, North Korean IP space should not be allowed on port 3306. So we may
have a policy that checks all incoming firewall rules and rejects any
that would allow such traffic.

This information has also been enqueued for consideration by Congress,
see [1].

[1] https://blueprints.launchpad.net/congress/+spec/congress-fwaas

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628658

Title:
  [RFE] FWaaS integration with Congress for firewall policy validation

Status in neutron:
  New

Bug description:
  FWaaS is a repository for storing and applying security rules to
  permit or deny network access. Rules should be able to be validated
  and accepted or rejected based on security policy. Since Congress is
  the engine for policy validation, work to link FWaaS and Congress.

  Use case: For example, as a company we may decide that connections
  from, say, North Korean IP space should not be allowed on port 3306.
  So we may have a policy that checks all incoming firewall rules and
  rejects any that would allow such traffic.

  This information has also been enqueued for consideration by Congress,
  see [1].

  [1] https://blueprints.launchpad.net/congress/+spec/congress-fwaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628658/+subscriptions



[Yahoo-eng-team] [Bug 1617268] Re: vpn-agent does not initialize FWaaS

2016-09-29 Thread Nate Johnston
This is no longer needed.  FWaaS no longer inherits from
L3NATAgentWithStateReport; it plugs in using the L3 agent extensions
mechanism.  https://review.openstack.org/#/c/355576/

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617268

Title:
  vpn-agent does not initialize FWaaS

Status in neutron:
  Invalid

Bug description:
  Currently, the main class relationships for L3, FWaaS, and VPNaaS are
  as shown in the attached file (l3_fw_vpn_class_relation.txt).

  * When launching l3-agent without FWaaS, L3NATAgentWithStateReport  class is 
initialized.
  * When launching l3-agent with FWaaS, L3WithFWaaS class is initialized.
  This is achieved by following commit.
  
https://github.com/openstack/neutron-fwaas/commit/debc3595599ed6cd52caf6e04f083af9c93f6fa4

  * When launching vpn-agent with or without FWaaS, the VPNAgent class is 
initialized.
    In this case, the L3WithFWaaS class is not initialized even though FWaaS is 
enabled.
    Thus, FWaaS won't be available when using both FWaaS and VPNaaS.
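
  The class-hierarchy gap can be illustrated with a toy sketch (class and
  method names are taken from the bug report, bodies are placeholders):

```python
# Illustrative class hierarchy: VPNAgent inherits from the base L3 agent
# class directly, bypassing L3WithFWaaS, so FWaaS RPC endpoints such as
# delete_firewall are missing on the VPN agent.
class L3NATAgentWithStateReport:
    pass

class L3WithFWaaS(L3NATAgentWithStateReport):
    def delete_firewall(self, context, firewall, host):
        pass

class VPNAgent(L3NATAgentWithStateReport):
    pass

assert hasattr(L3WithFWaaS, "delete_firewall")
# The missing endpoint explains the NoSuchMethod error in the log below.
assert not hasattr(VPNAgent, "delete_firewall")
```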

  Here is log of vpn-agent that is when the agent receives RPC request about 
firewall from neutron.
  ===
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server [-] Exception 
during message handling
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
155, in dispatch
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server raise 
NoSuchMethod(method)
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server NoSuchMethod: 
Endpoint does not support RPC method delete_firewall
  2016-08-26 10:42:35.340 16065 ERROR oslo_messaging.rpc.server
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617268/+subscriptions



[Yahoo-eng-team] [Bug 1898657] Re: LB health monitor deletion fails with exception "Server-side error: "'NoneType' object has no attribute 'load_balancer_id'"

2020-10-06 Thread Nate Johnston
Reassigned to vmware-nsx as that is probably the place to fix this
binding with Octavia.  If you think the issue is more in the Octavia
space, feel free to submit a bug in storyboard here:
https://storyboard.openstack.org/#!/project/openstack/octavia

** Project changed: neutron => vmware-nsx

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898657

Title:
  LB health monitor deletion fails with exception "Server-side error:
  "'NoneType' object has no attribute 'load_balancer_id'"

Status in vmware-nsx:
  New

Bug description:
  Failure statement : LB health monitor deletion fails with exception
  "Server-side error: "'NoneType' object has no attribute
  'load_balancer_id'"

  Test executed:
  1) Create LB, virtual servers (HTTP & HTTPS), pool, members & health monitors
  2) Delete LB objects randomly
  3) If there is a dependency on an object while it is being deleted, handle the
     exceptions BadRequestException and ConflictException


  Below is the HM we tried to delete:
  DELETE /v2.0/lbaas/healthmonitors/50ffba87-3e85-4d59-8c46-d782a8cbe8eb

  
  octavia log trace:

  
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 2020-10-04 
01:29:01.801 1 DEBUG vmware_nsx.services.lbaas.octavia.octavia_driver [req-
  9c7ab75f-3922-4cb7-81c0-a82b9f645a0c - a0a882c114b741c6a50e2fddc68a67db - 
default default] vmware_nsx.services.lbaas.octavia.octavia_driver.NS
  XOctaviaDriver method __init__ called with arguments () {} wrapper 
/usr/lib/python3.7/site-packages/oslo_log/helpers.py:66

  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 2020-10-04 
01:29:01.802 1 DEBUG vmware_nsx.services.lbaas.octavia.octavia_driver [req-
  9c7ab75f-3922-4cb7-81c0-a82b9f645a0c - a0a882c114b741c6a50e2fddc68a67db - 
default default] vmware_nsx.services.lbaas.octavia.octavia_driver.NS
  XOctaviaDriver method _init_rpc_messaging called with arguments () {} wrapper 
/usr/lib/python3.7/site-packages/oslo_log/helpers.py:66

  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 2020-10-04 
01:29:01.808 1 ERROR wsme.api [req-9c7ab75f-3922-4cb7-81c0-a82b9f645a0c - 
a0a882c114b741c6a50e2fddc68a67db - default default] Server-side error: 
"'NoneType' object has no attribute 'load_balancer_id'". Detail: 
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: Traceback (most 
recent call last):
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]:   File 
"/usr/lib/python3.7/site-packages/wsmeext/pecan.py", line 85, in callfunction
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: result = f(self, 
*args, **kwargs)
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]:   File 
  
"/usr/lib/python3.7/sitepackages/octavia/api/v2/controllers/health_monitor.py", 
line 413, in delete
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
self._test_lb_and_listener_and_pool_statuses(lock_session, db_hm)
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]:   File 
"/usr/lib/python3.7/site-packages/octavia/api/v2/controllers/health_monitor.py"
  , line 94, in _test_lb_and_listener_and_pool_statuses
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: load_balancer_id 
= pool.load_balancer_id
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: AttributeError: 
'NoneType' object has no attribute 'load_balancer_id'
  Oct 04 01:29:01 controller-ch9q6cgc8v octavia-api[805]: 
  Oct 04 01:29:23 controller-ch9q6cgc8v octavia-api[805]: 2020-10-04 
01:29:23.215 1 DEBUG vmware_nsx.services.lbaas.octavia.octavia_driver [req-
  b2a8b23c-485c-4257-9f00-5a8f581a5387 - a0a882c114b741c6a50e2fddc68a67db - 
default default] vmware_nsx.services.lbaas.octavia.octavia_driver.NS
  XOctaviaDriver method __init__ called with arguments () {} wrapper 
/usr/lib/python3.7/site packages/oslo_log/helpers.py:66

  Test executed:
  @pytest.mark.esxitest
  @pytest.mark.usefixtures("do_cleanup")
  def test_lb_and_delete_in_random_order_TC27(lb, nsxvc, cloud):
  ''' As part of this test case a LB topology will be brought up.
  Different components of the LB will be tried to be deleted
  in random order '''
  _test_case = inspect.stack()[0][3].split("_")[-1]

  router, network, subnet =\
 ru.create_net_subnet_rtr_lb(cloud, _test_case)

  lb, lsnr, pool, hm, lbfip, mem1, mem2 =\
 create_general_roundrobin_lb_topo(_test_case,
network, subnet, router, cloud, nsxvc)
  # Creating a dictionary of the LB resources
  # Format of dictionary is same as cleanup.py since we are going to
  # use delete functions of clean

[Yahoo-eng-team] [Bug 1899207] [NEW] [OVN][Docs] admin/config-dns-res.html should be updated for OVN

2020-10-09 Thread Nate Johnston
Public bug reported:

Reading https://docs.openstack.org/neutron/latest/admin/config-dns-
res.html it looks like the entire page expects there to be a DHCP agent.
Since this is not the case for OVN, update the documentation to explain
how DNS resolution forwarding (cases 2a/2b in the document) work in an
OVN setting.  If OVN only works with case 1, or through some other
method, this should be explicitly documented.

** Affects: neutron
 Importance: Medium
 Status: Triaged


** Tags: doc ovn

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899207

Title:
  [OVN][Docs] admin/config-dns-res.html should be updated for OVN

Status in neutron:
  Triaged

Bug description:
  Reading https://docs.openstack.org/neutron/latest/admin/config-dns-
  res.html it looks like the entire page expects there to be a DHCP
  agent.  Since this is not the case for OVN, update the documentation
  to explain how DNS resolution forwarding (cases 2a/2b in the document)
  work in an OVN setting.  If OVN only works with case 1, or through
  some other method, this should be explicitly documented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899207/+subscriptions



[Yahoo-eng-team] [Bug 1898634] Re: BGP peer is not working

2020-10-12 Thread Nate Johnston
** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898634

Title:
  BGP peer is not working

Status in neutron:
  In Progress

Bug description:
  I´m trying to configure dynamic routing, but when I associate provider
  network with the bgp speaker I start to receive these errors:

  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server Traceback 
(most recent call last):
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in 
_process_incoming
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, 
in dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, 
in _do_dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/api/rpc/handlers/bgp_speaker_rpc.py",
 line 65, in get_bgp_speakers
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server return 
self.plugin.get_bgp_speakers_for_agent_host(context, host)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_dragentscheduler_db.py",
 line 263, in get_bgp_speakers_for_agent_host
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server context, 
binding['bgp_speaker_id'])
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 165, in get_bgp_speaker_with_advertised_routes
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 479, in get_routes_by_bgp_speaker_id
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 673, in _get_central_fip_host_routes_by_bgp_speaker
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
l3_db.Router.id == router_attrs.router_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2259, in 
outerjoin
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
from_joinpoint=from_joinpoint,
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"", line 2, in _join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 220, in 
generate
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server fn(self, 
*args[1:], **kw)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2414, in 
_join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server left, 
right, onclause, prop, create_aliases, outerjoin, full
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2437, in 
_join_left_to_right
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server ) = 
self._join_determine_implicit_left_side(left, right, onclause)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File 
"/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2526, in 
_join_determine_implicit_left_side
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server "Can't 
determine which FROM clause to join "
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server 
sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join 
from, there are multiple FROMS which can join to this entity. Try adding an 
explicit ON clause to help resolve the ambiguity.
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server


  
  I made manual installation, ussuri. Couldn´t find any workaround.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1898634/+subscri

[Yahoo-eng-team] [Bug 1899037] Re: ML2OVN migration script does not set proper bridge mappings

2020-10-12 Thread Nate Johnston
** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899037

Title:
  ML2OVN migration script does not set proper bridge mappings

Status in neutron:
  Invalid

Bug description:
  ML2OVS -> ML2OVN migration on non-DVR environment (3 controllers + 2 
computes) is failing with default (empty) bridge mapping settings on compute 
nodes.
  File 
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
 contains the following:
ComputeParameters:
  NeutronBridgeMappings: ""

  As a result, after the overcloud update performed during the migration
  procedure, all existing VMs become inaccessible.

  The workaround is to set the following in
/usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
  before starting the migration:
ComputeParameters:
  NeutronBridgeMappings: "tenant:br-isolated"

  The issue does not happen when DVR is enabled.

  In the case of SR-IOV, or any other non-DVR case where compute nodes
  are connected to the external network and need to be able to launch VM
  instances on the external network, we need to specify the full value of
  the configured bridge mappings (not only tenant) before starting the
  migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899037/+subscriptions



[Yahoo-eng-team] [Bug 1902950] [NEW] [OVN] DNS resolution not forwarded with OVN driver

2020-11-04 Thread Nate Johnston
Public bug reported:

With ML2/OVS and ML2/LB, instances on tenant networks can resolve in-
cloud and external DNS names even if the tenant network has no router or
outside connectivity. It does this via the dnsmasq instance being
configured as the DNS resolver for the instances. A DNS request from an
instance on one of these private networks will go to dnsmasq.  If the
address is not in the list of static addresses populated in dnsmasq by
neutron, it will then resolve the request using either configured
resolvers or the host resolver.  This is use case 2 in the DNS
Resolution for Instances document [1].

With ML2/OVN, there is no dnsmasq instance. In this case, the request is
"hijacked" by OVN, and if there is a static record that matches, it will
respond with the static entry. If there is no matching static record,
instances without connectivity to the "8.8.8.8" DNS server that is
default in the OVN DHCP packet cannot resolve DNS.  This means that
these instances cannot utilize DNS records published by Designate.

The lack of a masquerading forwarding DNS resolver available to
instances on isolated tenant networks is the feature parity gap between
ML2/OVS and ML2/OVN this bug requests be fixed. The driver for this is
to allow instances on isolated tenant networks to use DNS published by
Designate.

[1] https://docs.openstack.org/neutron/latest/admin/config-dns-
res.html#case-2-dhcp-agents-forward-dns-queries-from-instances
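
The behavioral gap can be modeled with a toy resolver (all names here are
illustrative): dnsmasq falls through to an upstream resolver when a name is
not in its static records, while OVN on an isolated tenant network answers
only from static records and otherwise the query goes unanswered.

```python
def resolve(name, static_records, upstream=None):
    """Return an address for name, or None if it cannot be resolved."""
    if name in static_records:
        return static_records[name]
    if upstream is not None:        # dnsmasq-style forwarding (ML2/OVS)
        return upstream(name)
    return None                     # OVN on an isolated tenant network

static = {"webserver1.openstackgate.local": "172.21.1.154"}

# In-cloud names resolve from static records in both cases.
assert resolve("webserver1.openstackgate.local", static) == "172.21.1.154"
# OVN case: external name, no forwarder, no answer (the problem below).
assert resolve("www.redhat.com", static) is None
# dnsmasq case: the forwarder answers for external names.
ext = resolve("www.redhat.com", static, upstream=lambda n: "23.64.196.72")
assert ext == "23.64.196.72"
```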

Evidence:

On the host:
$ nslookup www.redhat.com
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
www.redhat.com canonical name = ds-www.redhat.com.edgekey.net.
ds-www.redhat.com.edgekey.net canonical name =
ds-www.redhat.com.edgekey.net.globalredir.akadns.net.
ds-www.redhat.com.edgekey.net.globalredir.akadns.net canonical name =
e3396.dscx.akamaiedge.net.
Name: e3396.dscx.akamaiedge.net
Address: 23.64.196.72
Name: e3396.dscx.akamaiedge.net
Address: 2600:1409:12:39e::d44
Name: e3396.dscx.akamaiedge.net
Address: 2600:1409:12:383::d44

### So host name resolution is working correctly.

On a guest on a tenant network:

# nslookup webserver1
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: webserver1.openstackgate.local
Address: 172.21.1.154

### It can resolve itself.

# nslookup webserver2
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: webserver2.openstackgate.local
Address: 172.21.1.31

### It can resolve other VMs

# nslookup www.redhat.com
;; connection timed out; no servers could be reached

### It cannot resolve anything that is not in the OVN DB. This is the
problem.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dns ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1902950

Title:
  [OVN] DNS resolution not forwarded with OVN driver

Status in neutron:
  New

Bug description:
  With ML2/OVS and ML2/LB, instances on tenant networks can resolve in-
  cloud and external DNS names even if the tenant network has no router
  or outside connectivity. It does this via the dnsmasq instance being
  configured as the DNS resolver for the instances. A DNS request from
  an instance on one of these private networks will go to dnsmasq.  If
  the address is not in the list of static addresses populated in
  dnsmasq by neutron, it will then resolve the request using either
  configured resolvers or the host resolver.  This is use case 2 in the
  DNS Resolution for Instances document [1].

  With ML2/OVN, there is no dnsmasq instance. In this case, the request
  is "hijacked" by OVN, and if there is a static record that matches, it
  will respond with the static entry. If there is no matching static
  record, instances without connectivity to the "8.8.8.8" DNS server
  that is default in the OVN DHCP packet cannot resolve DNS.  This means
  that these instances cannot utilize DNS records published by
  Designate.

  The lack of a masquerading forwarding DNS resolver available to
  instances on isolated tenant networks is the feature parity gap
  between ML2/OVS and ML2/OVN this bug requests be fixed. The driver for
  this is to allow instances on isolated tenant networks to use DNS
  published by Designate.

  [1] https://docs.openstack.o

[Yahoo-eng-team] [Bug 1848851] Re: move fwaas_v2_log constants to neutron-lib

2020-11-23 Thread Nate Johnston
Marking "Won't Fix" because of the deprecation of the neutron-fwaas
project.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1848851

Title:
  move fwaas_v2_log constants to neutron-lib

Status in neutron:
  Won't Fix

Bug description:
  The sg logging constants have been moved to neutron-lib; related patch: [1]
  https://review.opendev.org/#/c/645885/

  I think the fw logging constants can also be moved to neutron-lib.

  In addition, FIREWALL_LOG_DRIVER_NAME (TODO) can also be moved to
  neutron-lib.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1848851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1949230] [NEW] OVN Octavia provider driver should implement allowed_cidrs to enforce security groups on LB ports

2021-10-29 Thread Nate Johnston
Public bug reported:

Octavia can use OVN as a provider driver using its driver framework.
The OVN Octavia provider driver, part of ML2/OVN, does not implement all
of the functionality of the Octavia API [1].  One feature that should be
supported is allowed_cidrs.

The Octavia allowed_cidrs functionality allows Octavia to manage and
communicate the CIDR blocks allowed to address an Octavia load balancer.
Implementing this in the OVN provider driver would allow load balancers
to be only accessible from specific CIDR blocks, a requirement for
customer security in a number of scenarios.

[1] https://docs.openstack.org/octavia/latest/user/feature-
classification/index.html#listener-api-features
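
A CIDR filter of the kind allowed_cidrs describes can be sketched with
the Python standard library (an illustrative helper only, not the OVN
provider driver's actual code; the empty-list default matching an open
listener is an assumption drawn from the Octavia API behavior):

```python
import ipaddress

def source_allowed(src_ip, allowed_cidrs):
    """Return True if ``src_ip`` falls inside any listener CIDR.

    An empty ``allowed_cidrs`` list leaves the listener open to all
    sources, mirroring the Octavia allowed_cidrs default.
    """
    if not allowed_cidrs:
        return True
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)
```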

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1949230

Title:
  OVN Octavia provider driver should implement allowed_cidrs to enforce
  security groups on LB ports

Status in neutron:
  New

Bug description:
  Octavia can use OVN as a provider driver using its driver framework.
  The OVN Octavia provider driver, part of ML2/OVN, does not implement
  all of the functionality of the Octavia API [1].  One feature that
  should be supported is allowed_cidrs.

  The Octavia allowed_cidrs functionality allows Octavia to manage and
  communicate the CIDR blocks allowed to address an Octavia load
  balancer.  Implementing this in the OVN provider driver would allow
  load balancers to be only accessible from specific CIDR blocks, a
  requirement for customer security in a number of scenarios.

  [1] https://docs.openstack.org/octavia/latest/user/feature-
  classification/index.html#listener-api-features

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1949230/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599590] [NEW] API docs for tenant-id/flavors/detail does not include is_public flag

2016-07-06 Thread Nate Johnston
Public bug reported:

When I fetch the flavors for a given tenant, I can use the API call
below:

https://openstack:8774/v2/52f0574689f14c8a99e7ca22c4eb5720/flavors/detail

But this only shows public flavors.  If I want to see all the flavors,
both public and private, I need to add "?is_public=None" to the API
call:

https://openstack:8774/v2/52f0574689f14c8a99e7ca22c4eb5720/flavors/detail?is_public=None

This is_public option is not documented in the API documentation:
http://developer.openstack.org/api-ref-
compute-v2.1.html#listDetailFlavors

Please document this option.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599590

Title:
  API docs for tenant-id/flavors/detail does not include is_public flag

Status in OpenStack Compute (nova):
  New

Bug description:
  When I fetch the flavors for a given tenant, I can use the API call
  below:

  https://openstack:8774/v2/52f0574689f14c8a99e7ca22c4eb5720/flavors/detail

  But this only shows public flavors.  If I want to see all the flavors,
  both public and private, I need to add "?is_public=None" to the API
  call:

  
https://openstack:8774/v2/52f0574689f14c8a99e7ca22c4eb5720/flavors/detail?is_public=None

  This is_public option is not documented in the API documentation:
  http://developer.openstack.org/api-ref-
  compute-v2.1.html#listDetailFlavors

  Please document this option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614673] [NEW] [FWaaS] Rule position testing is insufficient

2016-08-18 Thread Nate Johnston
Public bug reported:

The FWaaS unit tests around rule position nesting are not working with
FWaaS v2, and need to be fixed up.  The specific tests that need to be
fixed are:

test_show_firewall_rule_with_fw_policy_associated
test_delete_firewall_policy_with_rule
test_update_firewall_policy_reorder_rules

in neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py.

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: fwaas unittest

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: fwaas unittest

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614673

Title:
  [FWaaS] Rule position testing is insufficient

Status in neutron:
  Confirmed

Bug description:
  The FWaaS unit tests around rule position nesting are not working with
  FWaaS v2, and need to be fixed up.  The specific tests that need to be
  fixed are:

  test_show_firewall_rule_with_fw_policy_associated
  test_delete_firewall_policy_with_rule
  test_update_firewall_policy_reorder_rules

  in neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614680] [NEW] In FWaaS v2 cross-tenant assignment of policies is inconsistent

2016-08-18 Thread Nate Johnston
Public bug reported:

In the unit tests associated with the FWaaS v2 DB
(neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py), there
are two that demonstrate improper handling of cross-tenant firewall
policy assignment.

First, the logic tested in
test_update_firewall_rule_associated_with_other_tenant_policy succeeds,
but it should not.

Second, the logic tested in test_update_firewall_group_with_public_fwp
fails, but it should succeed.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614680

Title:
  In FWaaS v2 cross-tenant assignment of policies is inconsistent

Status in neutron:
  New

Bug description:
  In the unit tests associated with the FWaaS v2 DB
  (neutron_fwaas/tests/unit/db/firewall/v2/test_firewall_db_v2.py),
  there are two that demonstrate improper handling of cross-tenant
  firewall policy assignment.

  First, the logic tested in
  test_update_firewall_rule_associated_with_other_tenant_policy
  succeeds, but it should not.

  Second, the logic tested in test_update_firewall_group_with_public_fwp
  fails, but it should succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498957] [NEW] Add a 'dscp' field to security group rules to screen ingress traffic by dscp tag as well as IP address

2015-09-23 Thread Nate Johnston
Public bug reported:

This change will add to the current security group model an additional
option to allow for traffic to be restricted to a given DSCP tag in
addition to the current IP address based restriction.  Incoming traffic
would need to match both the IP address/CIDR block as well as the DSCP
tag - if one is set.
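
The proposed match semantics can be sketched as follows (a hypothetical
illustration of the rule logic, not the IPTables implementation; field
names are assumptions):

```python
import ipaddress

def rule_matches(packet, rule):
    """Check a packet against a security group rule extended with DSCP.

    packet: dict with 'src' (IP string) and 'dscp' (int, 0-63).
    rule: dict with 'cidr' and an optional 'dscp'; a rule with no DSCP
    set keeps today's address-only behavior.
    """
    in_cidr = ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["cidr"])
    if rule.get("dscp") is None:
        return in_cidr
    # Both conditions must hold when a DSCP tag is configured.
    return in_cidr and packet["dscp"] == rule["dscp"]
```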

Changes:
* DB model changes to add a DSCP tag column to security groups.
* API changes to allow for DSCP tag configuration options to be supplied to 
security group API calls.
* Neutron agent changes to implement configuring IPTables with the additional 
DSCP tag configuration.

Note: This is complementary functionality to the "QoS DSCP marking rule
support" change which, when implemented, will provide Neutron with an
interface to configure QoS policies to mark outgoing traffic with DSCP
tags.  See also: QoS DSCP marking rule support:
https://bugs.launchpad.net/neutron/+bug/1468353

** Affects: neutron
 Importance: Undecided
 Assignee: Nate Johnston (nate-johnston)
 Status: New


** Tags: dscp qos rfe

** Changed in: neutron
 Assignee: (unassigned) => Nate Johnston (nate-johnston)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498957

Title:
  Add a 'dscp' field to security group rules to screen ingress traffic
  by dscp tag as well as IP address

Status in neutron:
  New

Bug description:
  This change will add to the current security group model an additional
  option to allow for traffic to be restricted to a given DSCP tag in
  addition to the current IP address based restriction.  Incoming
  traffic would need to match both the IP address/CIDR block as well as
  the DSCP tag - if one is set.

  Changes:
  * DB model changes to add a DSCP tag column to security groups.
  * API changes to allow for DSCP tag configuration options to be supplied to 
security group API calls.
  * Neutron agent changes to implement configuring IPTables with the additional 
DSCP tag configuration.

  Note: This is complementary functionality to the "QoS DSCP marking
  rule support" change which, when implemented, will provide Neutron
  with an interface to configure QoS policies to mark outgoing traffic
  with DSCP tags.  See also: QoS DSCP marking rule support:
  https://bugs.launchpad.net/neutron/+bug/1468353

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247319] Re: python 2.7 gate job for neutron runs long

2016-03-30 Thread Nate Johnston
** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1247319

Title:
  python 2.7 gate job for neutron runs long

Status in neutron:
  Opinion
Status in OpenStack Core Infrastructure:
  Opinion

Bug description:
  The timeout for the job is set to 40 minutes. These days the typical
  run takes around 35-38 minutes, so 40 minutes causes quite a number of
  false positives.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1247319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580239] [NEW] Add agent extension framework for L3 agent

2016-05-10 Thread Nate Johnston
Public bug reported:

Neutron advanced services (*aaS) projects need a standardized method to
gain access to resources internal to the L3 agent.  Previously, the
proper methodology was using inheritance from L3NATAgent and its
subclasses.  But now it is necessary to decouple these things, so that
each *aaS can be a separate extension that registers with the L3 agent,
can interrogate the L3 agent for necessary information, and receives
notifications of events through callbacks.

Some examples of what this would enable FWaaS and other advanced
services to do are:

- The ability to map router_id to router info so we can program iptables to the 
correct namespace.
- The ability to load the Service Agent - so we have an RPC endpoint in the 
context of L3Agent.

FWaaS can then use the existing observer hierarchy pattern to listen for
notifications.  This would prevent the need to patch the agent code for
advanced services to function.

Note: This must be executed in such a way that multiple *aaS services
can plug in simultaneously without interfering with each other.
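
The requested framework might look roughly like this sketch (class and
method names are hypothetical, chosen only to illustrate the
registration-and-callback pattern described above):

```python
class L3AgentExtension:
    """Hypothetical base class for an L3 agent extension."""

    def initialize(self, agent_api):
        # Extensions receive an API handle into the L3 agent instead of
        # subclassing L3NATAgent, decoupling them from agent internals.
        self.agent_api = agent_api

    def handle_router_update(self, router_id):
        raise NotImplementedError


class ExtensionManager:
    """Loads multiple *aaS extensions side by side without interference."""

    def __init__(self, agent_api):
        self.agent_api = agent_api
        self.extensions = []

    def register(self, ext):
        ext.initialize(self.agent_api)
        self.extensions.append(ext)

    def notify_router_update(self, router_id):
        # Every registered extension gets the callback; none blocks
        # another from loading or receiving events.
        for ext in self.extensions:
            ext.handle_router_update(router_id)
```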

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580239

Title:
  Add agent extension framework for L3 agent

Status in neutron:
  New

Bug description:
  Neutron advanced services (*aaS) projects need a standardized method
  to gain access to resources internal to the L3 agent.  Previously, the
  proper methodology was using inheritance from L3NATAgent and its
  subclasses.  But now it is necessary to decouple these things, so that
  each *aaS can be a separate extension that registers with the L3
  agent, can interrogate the L3 agent for necessary information, and
  receives notifications of events through callbacks.

  Some examples of what this would enable FWaaS and other advanced
  services to do are:

  - The ability to map router_id to router info so we can program iptables to 
the correct namespace.
  - The ability to load the Service Agent - so we have an RPC endpoint in the 
context of L3Agent.

  FWaaS can then use the existing observer hierarchy pattern to listen
  for notifications.  This would prevent the need to patch the agent
  code for advanced services to function.

  Note: This must be executed in such a way that multiple *aaS services
  can plug in simultaneously without interfering with each other.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540748] Re: ml2: port_update and port_delete should not use fanout notify

2016-02-02 Thread Nate Johnston
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540748

Title:
  ml2: port_update and port_delete should not use fanout notify

Status in neutron:
  Invalid

Bug description:
  Now for the ml2 plugin, neutron-server uses fanout RPC messages for
  port_update and port_delete; the code is as below:

  def port_update(self, context, port, network_type, segmentation_id,
                  physical_network):
      cctxt = self.client.prepare(topic=self.topic_port_update, fanout=True)
      cctxt.cast(context, 'port_update', port=port,
                 network_type=network_type,
                 segmentation_id=segmentation_id,
                 physical_network=physical_network)

  def port_delete(self, context, port_id):
      cctxt = self.client.prepare(topic=self.topic_port_delete, fanout=True)
      cctxt.cast(context, 'port_delete', port_id=port_id)

  I think neutron-server should send the RPC message directly to the
  port's binding host; this would offload work from AMQP.
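
  The suggested directed cast could look like this sketch
  (FakeRPCClient is a toy stand-in mimicking the prepare()/cast() shape
  of an oslo.messaging RPC client, purely for illustration):

```python
class FakeRPCClient:
    """Records casts so the targeting difference is visible."""

    def __init__(self, sent):
        self.sent = sent

    def prepare(self, topic=None, server=None, fanout=False):
        self._target = {"topic": topic, "server": server, "fanout": fanout}
        return self

    def cast(self, context, method, **kwargs):
        self.sent.append((dict(self._target), method, kwargs))


def port_update_directed(client, context, port):
    # Instead of fanout=True, address only the agent on the port's
    # binding host, sparing every other agent (and AMQP) the message.
    cctxt = client.prepare(topic="port_update", server=port["binding_host"])
    cctxt.cast(context, "port_update", port=port)
```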

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542014] Re: [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

2016-02-04 Thread Nate Johnston
The fix linked to was proposed on January 22 and merged on January 26;
either this bug or its linkage to that fix is spurious.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542014

Title:
  [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

Status in neutron:
  Invalid

Bug description:
  Currently, lbaas has no way to pass region and endpoint-type to
  barbican client when accessing the barbican containers.

  This becomes an issue in a cloud with multiple regions and endpoint
  types.  So we would like to have region and endpoint-type as
  parameters while requesting for a barbican client in barbican_acl.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542467] Re: LBAAS Intermittent gate failure TestHealthMonitorBasic.test_health_monitor_basic

2016-02-05 Thread Nate Johnston
Confirmed that I see a spike in failures for this test starting between
13:40 and 13:45 EST, but after 14:50 the rate seems to go back down to
normal levels.  Please recheck and see if the issue is still going on;
if so I will reclassify to 'Confirmed'.

http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:\%22AssertionError:%200%20%3D%3D%200%20:%20No%20IPv4%20addresses%20found%20in:%20[]\%22%20AND%20build_name:\%22gate-tempest-dsvm-neutron-linuxbridge\%22

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542467

Title:
  LBAAS Intermittent gate failure
  TestHealthMonitorBasic.test_health_monitor_basic

Status in neutron:
  Opinion

Bug description:
  We are seeing intermittent gate failures at an increasing rate for the
  neutron-lbaas scenario test.

  Example gate run:
  
http://logs.openstack.org/50/259550/7/check/gate-neutron-lbaasv2-dsvm-scenario/d48b77d/

  
neutron_lbaas.tests.tempest.v2.scenario.test_healthmonitor_basic.TestHealthMonitorBasic.test_health_monitor_basic
  [456.618148s] ... FAILED

  File "neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py", 
line 45, in test_health_monitor_basic
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 519, in 
_traffic_validation_after_stopping_server
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 509, in 
_send_requests
  socket.error: [Errno 104] Connection reset by peer

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] Re: The details of security group contains "null"

2016-02-07 Thread Nate Johnston
I don't think I agree with you that there is no value in showing every
field, even ones where the value is 'null'.  It might be meaningless to
some users, but it is more explicit.  Also, ensuring that every field is
represented in a predictable fashion means that the API may be easier
for other services to consume.

I would like to get more opinions on this; marking it as 'opinion' for
now until we get a consensus.

** Changed in: neutron
   Status: New => Opinion

** Tags added: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in neutron:
  Opinion

Bug description:
  When using security groups, I found that some output of the security
  group commands will be "null". This happens when the value is not
  specified.  Under the same condition, "neutron security-group-rule-list"
  will report "any". However, "neutron security-group-rule-show" will
  report empty.

  The details can be found at [1].

  I think, if the value is not specified for a security group rule, we
  can hide it from the output of "neutron security-group-show". It is
  meaningless to show "null" to the user.


  [1]  http://paste.openstack.org/show/486190/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498957] Re: Add a 'dscp' field to security group rules to screen ingress traffic by dscp tag as well as IP address

2015-10-14 Thread Nate Johnston
We have decided that an alternate methodology is preferable, and will
file a new bug for a fresh RFE.  Changing status on this to 'invalid'
and abandoning the associated changeset.

** Changed in: neutron
   Status: Triaged => Invalid

** Changed in: neutron
 Assignee: James Reeves (james-reeves5546) => Nate Johnston (nate-johnston)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498957

Title:
  Add a 'dscp' field to security group rules to screen ingress traffic
  by dscp tag as well as IP address

Status in neutron:
  Invalid

Bug description:
  This change will add to the current security group model an additional
  option to allow for traffic to be restricted to a given DSCP tag in
  addition to the current IP address based restriction.  Incoming
  traffic would need to match both the IP address/CIDR block as well as
  the DSCP tag - if one is set.

  Changes:
  * DB model changes to add a DSCP tag column to security groups.
  * API changes to allow for DSCP tag configuration options to be supplied to 
security group API calls.
  * Neutron agent changes to implement configuring IPTables with the additional 
DSCP tag configuration.

  Note: This is complimentary functionality to the "QoS DSCP marking
  rule support" change which, when implemented, will provide Neutron
  with an interface to configure QoS policies to mark outgoing traffic
  with DSCP tags.  See also: QoS DSCP marking rule support:
  https://bugs.launchpad.net/neutron/+bug/1468353

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503088] Re: Deprecate max_fixed_ips_per_port

2015-10-28 Thread Nate Johnston
It was established in the first attempt that this is not a documentation
issue; the documentation is autogenerated from the neutron code.

** Project changed: openstack-manuals => neutron

** Changed in: neutron
 Assignee: Takanori Miyagishi (miyagishi-t) => Nate Johnston (nate-johnston)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503088

Title:
  Deprecate max_fixed_ips_per_port

Status in neutron:
  In Progress

Bug description:
  https://review.openstack.org/230696
  commit 37277cf4168260d5fa97f20e0b64a2efe2d989ad
  Author: Kevin Benton 
  Date:   Wed Sep 30 04:20:02 2015 -0700

  Deprecate max_fixed_ips_per_port
  
  This option does not have a clear use case since we prevent
  users from setting their own IP addresses on shared networks.
  
  DocImpact
  Change-Id: I211e87790c955ba5c3904ac27b177acb2847539d
  Closes-Bug: #1502356

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1503088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508997] Re: Reusable firewall rules

2015-12-08 Thread Nate Johnston
Determined that the requirements for this request are a duplicate of the
FWaaS API v2.0 spec: https://review.openstack.org/#/c/243873

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508997

Title:
  Reusable firewall rules

Status in neutron:
  Invalid

Bug description:
  At Comcast we provide a very large private cloud. Each tenant uses
  firewall rules to filter traffic in order to accept traffic only from
  a given list of IPs. This can be done with security groups. However,
  there are two shortcomings with that approach.

  First, in my environment the list of IPs on which to manage ingress
  rules is very large due to non-contiguous IP space, so educating all
  tenants about what these IP addresses are is problematic at best.

  Second, notifying all tenants when IPs change is not a sustainable
  model.

  We would like to find a solution whereby rules much like security
  groups (that is, filtering by a combination of IP, protocol, and port)
  can be defined and tenants can apply these rules to a given port or
  network. This would allow an admin to define these rules to encompass
  different IP spaces and the tenants could apply them to their VM or
  network as they see fit.

  We would like to model the authorization of these rules so one role
  (such as admin) could create update or remove.  And then the rule
  could be shared with a Tenant or all Tenants to consume.

  Use Cases:

  - As a tenant, I have a heavy CPU workload for a large report. I want
  to spin up 40 instances and apply the "Reporting Infrastructure" rule
  to them.  This would allow access only to the internal reporting
  infrastructure.

  - As a network admin, when the reporting team needs more IP space and
  I add more subnets, I want to update the "Reporting Infrastructure"
  rule so that any VM that is already using that rule can access the new
  IP space.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp