[Yahoo-eng-team] [Bug 1993483] [NEW] Glance Install Guide & Config Ref Docs Need Updates/Enhancements

2022-10-19 Thread Larry Simon
Public bug reported:

The Install Guide for Glance (Zed) is:

1) outdated; it still uses the configuration options "stores" and
"default_store" in the [glance_store] config group (deprecated since Rocky)
and does not show good examples of how to properly use the newer
"enabled_backends" option under the [DEFAULT] config group.  Unfortunately the
Glance Configuration Reference also doesn't supply enough information on how
to use the "enabled_backends" option.  For example, what syntax is expected
for the dictionary?  Is it a string:string key/value pair?  What kinds of
"type" values are acceptable, and how are they used elsewhere?  (It says the
key is used later in the "default_backend" option, but the value is never
explained or illustrated with a good example in the documents.)
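
For illustration, here is the sort of example I was hoping the guide would
show.  This is only my best guess pieced together from other sources, so the
exact option names and store types should be verified (the store names "fs"
and "ceph" are placeholders I made up):

```
[DEFAULT]
# enabled_backends is a comma-separated list of <store name>:<store type>
# pairs; the type is one of file, rbd, swift, cinder, ...
enabled_backends = fs:file, ceph:rbd

[glance_store]
# default_backend refers to one of the store *names* (the keys above)
default_backend = fs

# each store name then gets its own config section
[fs]
filesystem_store_datadir = /var/lib/glance/images/

[ceph]
rbd_store_pool = images
```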

2) should be improved to explain what "MY_SERVICE", "MY_PASSWORD" and
"ENTRYPOINT_ID" should be changed to in the values assigned to the various
options in the [oslo_limit] config group.  I eventually figured these should
be the "glance" user and the user password set earlier, but I'm still really
just guessing.  As for ENTRYPOINT_ID, I figured it was the hash ID of the
glance service.  Again, another guess.  And if so, why not use the glance
service name?  Those terms (MY_SERVICE etc.) are not explained or referenced
anywhere else in the Glance install document, which is really confusing since
they appear out of nowhere.  They look to have simply been copy-pasted from
some of the oslo documentation found elsewhere, I think.
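
For what it's worth, this is what I ended up trying for the [oslo_limit]
section.  Treat it as a guess rather than documentation: the auth options are
copied from other keystoneauth-based sections, and I assumed ENTRYPOINT_ID
means the Keystone endpoint ID of the glance (image) service:

```
[oslo_limit]
auth_url = http://controller:5000
auth_type = password
user_domain_id = default
username = glance
password = GLANCE_PASS
system_scope = all
region_name = RegionOne
# endpoint ID of the glance service, e.g. obtained with:
#   openstack endpoint list --service glance --interface public -f value -c ID
endpoint_id = <ID from the command above>
```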

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1993483

Title:
  Glance Install Guide & Config Ref Docs Need Updates/Enhancements

Status in Glance:
  New

Bug description:
  The Install Guide for Glance (Zed) is:

  1) outdated; it still uses the configuration options "stores" and
  "default_store" in the [glance_store] config group (deprecated since Rocky)
  and does not show good examples of how to properly use the newer
  "enabled_backends" option under the [DEFAULT] config group.  Unfortunately
  the Glance Configuration Reference also doesn't supply enough information on
  how to use the "enabled_backends" option.  For example, what syntax is
  expected for the dictionary?  Is it a string:string key/value pair?  What
  kinds of "type" values are acceptable, and how are they used elsewhere?  (It
  says the key is used later in the "default_backend" option, but the value is
  never explained or illustrated with a good example in the documents.)

  2) should be improved to explain what "MY_SERVICE", "MY_PASSWORD" and
  "ENTRYPOINT_ID" should be changed to in the values assigned to the various
  options in the [oslo_limit] config group.  I eventually figured these should
  be the "glance" user and the user password set earlier, but I'm still really
  just guessing.  As for ENTRYPOINT_ID, I figured it was the hash ID of the
  glance service.  Again, another guess.  And if so, why not use the glance
  service name?  Those terms (MY_SERVICE etc.) are not explained or referenced
  anywhere else in the Glance install document, which is really confusing
  since they appear out of nowhere.  They look to have simply been copy-pasted
  from some of the oslo documentation found elsewhere, I think.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1993483/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1992950] Re: [scale] Setting a gateway on router is killing database

2022-10-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/861322
Committed: 
https://opendev.org/openstack/neutron/commit/c33b47edc77520abcdd7176af1f0ae921bd489b3
Submitter: "Zuul (22348)"
Branch:master

commit c33b47edc77520abcdd7176af1f0ae921bd489b3
Author: Arnaud Morin 
Date:   Thu Oct 13 17:59:54 2022 +0200

Do not keep gateway port when notifying for router update

On router update, skip subnet of gateway router.
This will prevent a massive RPC call toward huge number of L3 agents
(all agents in same big public subnet ID).

As a side effect, it will save CPU time on database because L3 agent
receiving such events are then doing a RPC call (sync_routers) even if
router is not used/deployed on this agent.

Closes-Bug: #1992950

Change-Id: Iafa9d43614d528f230cf034103b54f73303ac815
Signed-off-by: Arnaud Morin 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1992950

Title:
  [scale] Setting a gateway on router is killing database

Status in neutron:
  Fix Released

Bug description:
  Context
  ===
  OpenStack Stein (but master seems affected by this as well).
  OVS based deployment.
  L3 routers in DVR and HA mode.
  One big public "external/public" network (with subnets like /21 or /22) used 
by instances and router external gateways.

  Problem description
  ===
  When adding a gateway on a router in HA+DVR, neutron api may send a lot of 
RPC messages toward L3 agents, depending on the size of the subnet used for the 
gateway.

  How to reproduce
  
  Add a gateway on a router:

  $ openstack router set --external-gateway Ext-Net router-arnaud

  On neutron server, in logs (in DEBUG):

  Notify agent at l3_agent.hostxyz

  We see this line for all l3 agents having a port in Ext-Net subnet
  (which can be huge, like 1k).
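
  A rough way to gauge that fan-out on an affected deployment (assuming the
external network is called Ext-Net as above; counting the router gateway
ports gives a lower bound on the number of agents notified):

  $ openstack port list --network Ext-Net --device-owner network:router_gateway -f value -c ID | wc -l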

  Then, all agents do another RPC call (sync_routers), which ends up on
neutron-rpc with this log line:
  Sync routers for ids [abc]

  Behind the sync_routers call, a big SQL request is made [1]

  When 1k requests like this are made on each router update, the
  database is overwhelmed by too many SQL requests.

  
  [1] 
https://github.com/openstack/neutron/blob/stable/stein/neutron/db/l3_dvrscheduler_db.py#L363

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1992950/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993498] [NEW] [CI] Create oslo master branch jobs (apart from py39)

2022-10-19 Thread Rodolfo Alonso
Public bug reported:

In order to have better CI coverage of the newer features implemented in the
oslo libraries, this bug proposes creating new jobs to be added to the
periodic CI queue.

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993498

Title:
  [CI] Create oslo master branch jobs (apart from py39)

Status in neutron:
  New

Bug description:
  In order to have better CI coverage of the newer features implemented
  in the oslo libraries, this bug proposes creating new jobs to be added
  to the periodic CI queue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1993498/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993502] [NEW] failing unit tests when not running them all

2022-10-19 Thread Thomas Goirand
Public bug reported:

Looks like a bunch of OVN unit tests are highly dependent on the test
order.

When rebuilding the Debian Zed package of Neutron under Debian Unstable,
I get 200+ unit test failures like what's below. Using tox, if running:

tox -e py3 --
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver

this works; however, running a single unit test like this:

tox -e py3 --
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup

simply fails. Under Debian, I had to add to the blacklist of unit tests
all of:

- plugins.ml2.drivers.ovn.mech_driver.TestOVNMechanismDriverSecurityGroup.*
- services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.*

Please help me fix this.

Below are examples of the 2 types of failure.

==
FAIL: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup.test_update_sg_duplicate_rule_multi_ports
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverSecurityGroup.test_update_sg_duplicate_rule_multi_ports
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File 
"/<>/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 3733, in setUp
cfg.CONF.set_override('dns_servers', ['8.8.8.8'], group='ovn')
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2077, in 
__inner
result = f(self, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2460, in 
set_override
opt_info = self._get_opt_info(name, group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2869, in 
_get_opt_info
group = self._get_group(group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2838, in 
_get_group
raise NoSuchGroupError(group_name)
oslo_config.cfg.NoSuchGroupError: no such group [ovn]


==
FAIL: 
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test__notify_gateway_port_ip_changed
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test__notify_gateway_port_ip_changed
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2219, in 
__getattr__
return self._get(name)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2653, in _get
value, loc = self._do_get(name, group, namespace)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2671, in 
_do_get
info = self._get_opt_info(name, group)
  File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2876, in 
_get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option ovn in group [DEFAULT]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/<>/neutron/tests/unit/services/ovn_l3/test_plugin.py", 
line 1738, in setUp
super(test_l3.L3BaseForIntTests, self).setUp(
  File "/<>/neutron/tests/unit/db/test_db_base_plugin_v2.py", line 
163, in setUp
self.api = router.APIRouter()
  File "/<>/neutron/api/v2/router.py", line 21, in APIRouter
return pecan_app.v2_factory(None, **local_config)
  File "/<>/neutron/pecan_wsgi/app.py", line 47, in v2_factory
startup.initialize_all()
  File "/<>/neutron/pecan_wsgi/startup.py", line 39, in 
initialize_all
manager.init()
  File "/<>/neutron/manager.py", line 301, in init
NeutronManager.get_instance()
  File "/<>/neutron/manager.py", line 252, in get_instance
cls._create_instance()
  File "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 
414, in inner
return f(*args, **kwargs)
  File "/<>/neutron/manager.py", line 238, in _create_instance
cls._instance = cls()
  File "/<>/neutron/manager.py", line 132, in __init__
self._load_service_plugins()
  File "/<>/neutron/manager.py", line 211, in _load_service_plugins
self._create_and_add_service_plugin(provider)
  File "/<>/neutron/manager.py", line 214, in 
_create_and_add_service_plugin
plugin_inst = self._get_plugin_instance('neutron.service_plugins',
  File "/<>/neutron/manager.py", line 162, in _get_plugin_instance
plugin_inst = plugin_class()
  File "/<>/neutron/quota/resource_registry.py", line 124, in 
wrapper
return f(*args, **kwargs)
  File "/<>/neutron/services/ovn_l3/plugin.py", line 92, in 
__init__
self.scheduler = l3_ovn_scheduler.get_scheduler()
  File "/<>/neutron/scheduler/l3_ovn_scheduler.py", line 153, in 
get_scheduler
return OVN_SCHEDULER_STR_TO_CLASS[ovn_conf.get_ovn_l3_scheduler()]()
  File "/<>/neutron/conf/plugins/ml2/drivers/ovn/ovn_

[Yahoo-eng-team] [Bug 1993503] [NEW] grub_dpkg writes wrong device into debconf

2022-10-19 Thread Daniel Krambrock
Public bug reported:

After auto-installing Ubuntu 22.04 onto an LV on an mdraid 1 with two
disks, cc_grub_dpkg overwrites the correct `grub-pc/install_devices`
debconf entry with a wrong one on first boot:

```
~# debconf-show grub-pc | grep grub-pc/install_devices:
* grub-pc/install_devices: /dev/disk/by-id/dm-name-vg0-lv_root
```

This breaks grub updates.
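
One possible manual workaround, until cc_grub_dpkg is fixed, is to put the
RAID member disks back into debconf and re-run the grub configuration.  A
sketch, assuming the two md members are /dev/sda and /dev/sdb:

```
~# echo "grub-pc grub-pc/install_devices multiselect /dev/sda, /dev/sdb" | debconf-set-selections
~# dpkg-reconfigure -f noninteractive grub-pc
```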

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1993503

Title:
  grub_dpkg writes wrong device into debconf

Status in cloud-init:
  New

Bug description:
  After auto-installing Ubuntu 22.04 onto an LV on an mdraid 1 with two
  disks, cc_grub_dpkg overwrites the correct `grub-pc/install_devices`
  debconf entry with a wrong one on first boot:

  ```
  ~# debconf-show grub-pc | grep grub-pc/install_devices:
  * grub-pc/install_devices: /dev/disk/by-id/dm-name-vg0-lv_root
  ```

  This breaks grub updates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1993503/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1042049] Re: [RFE] auto-associate floating ips

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1042049

Title:
  [RFE] auto-associate floating ips

Status in neutron:
  Won't Fix

Bug description:
  nova has a flag that indicates that each VM should automatically have
  a floating IP assigned and associated at the time of VM creation.

  In neutron, the equivalent would be at port creation time.  In Nova
  this is a system-wide flag, but in neutron this would likely be an
  option that is enabled per-network (i.e., for a particular network, a
  floating IP is automatically assigned from external network X).
  This would map to an Amazon VPC use case as well.
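
  The nova flag referred to above is presumably the nova-network option
  auto_assign_floating_ip (name from memory; verify against the nova
  configuration reference):

  # nova.conf
  [DEFAULT]
  auto_assign_floating_ip = True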

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1042049/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292293] Re: send_arp_for_ha implied default incorrect in config

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292293

Title:
  send_arp_for_ha implied default incorrect in config

Status in neutron:
  Fix Released
Status in neutron havana series:
  Fix Released

Bug description:
  After Change-Id I003316bce0f38b7d2ea7d563b5a0a58676834398 was applied
  to Havana, the implied default in the l3_agent.ini file (3) does not
  match the code (0), which leads to some wrong assumptions when
  reading/updating the config file.

  These were both 3 in Grizzly, and both are 0 in master. Only Havana
  stable is affected.
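
  Until the shipped comment is corrected, setting the option explicitly in
  l3_agent.ini avoids the ambiguity; a sketch using the values discussed
  above:

  [DEFAULT]
  # implied default in the Havana file is 3, code default is 0; be explicit
  send_arp_for_ha = 3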

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1292293/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1264328] Re: incorrect kwargs detail for webob.exc.HTTP*

2022-10-19 Thread Rodolfo Alonso
Neutron bug closed due to lack of activity, please feel free to reopen
if needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1264328

Title:
  incorrect kwargs detail for webob.exc.HTTP*

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When reviewing patch https://review.openstack.org/64099 I realized that,
  when we instantiate a webob.exc.HTTP* class, we should use explanation
  instead of detail, or the HTTP response will carry the default
  explanation message instead of what we expect.

  Besides, there are some i18n problems in those messages, which will be
  fixed at the same time.  Other i18n problems will not be solved in this
  bug fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1264328/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262906] Re: swift unable to start in gate-tempest-dsvm-neutron-large-ops

2022-10-19 Thread Rodolfo Alonso
Neutron bug closed due to lack of activity, please feel free to reopen
if needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262906

Title:
  swift unable to start in gate-tempest-dsvm-neutron-large-ops

Status in devstack:
  Fix Released
Status in neutron:
  Won't Fix

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcInNvY2tldC5lcnJvcjogW0Vycm5vIDExMV0gQ29ubmVjdGlvbiByZWZ1c2VkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tcy1jb250YWluZXIudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODc0OTM4ODM0MTYsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  
  Starting December 19th 

  "socket.error: [Errno 111] Connection refused" AND
  filename:"logs/screen-s-container.txt"

  http://logs.openstack.org/70/62770/4/check/gate-tempest-dsvm-neutron-
  large-ops/d5f6251/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1262906/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1182883] Re: List servers matching a regex fails with Neutron

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1182883

Title:
  List servers matching a regex fails with Neutron

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Released

Bug description:
  The test
  
tempest.api.compute.servers.test_list_server_filters:ListServerFiltersTestXML.test_list_servers_filtered_by_ip_regex
  tries to search for a server with only a fragment of its IP (GET
  http://XX/v2/$Tenant/servers?ip=10.0.), which results in the following
  Quantum request:
  http://XX/v2.0/ports.json?fixed_ips=ip_address%3D10.0. But it seems
  this regex search is not supported by Quantum, so the tempest test
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1182883/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1105488] Re: linuxbridge agent needs ability to use pre-configured physical network bridges (nova-related)

2022-10-19 Thread Rodolfo Alonso
Neutron bug closed due to lack of activity, please feel free to reopen
if needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1105488

Title:
  linuxbridge agent needs ability to use pre-configured physical network
  bridges (nova-related)

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The linuxbridge agent currently creates a bridge for each physical
  network used as a flat network, moving any existing IP address from
  the interface to the newly created bridge. This is very helpful in
  some cases, but there are other cases where the ability to use a pre-
  existing bridge is needed. For instance, the same physical network
  might need to be bridged for other purposes, or the agent moving the
  system's IP might not be desired.

  I suggest we add a physical_bridge_mappings configuration variable,
  similar to that used by the openvswitch agent, alongside the current
  physical_interface_mappings variable. When a bridge for a flat network
  is needed, the bridge mappings would be checked first. If a bridge
  mapping for the physical network exists, it would be used. If not, the
  interface mapping would be used and a bridge for the interface would
  be created automatically. Sub-interfaces and bridges for VLAN networks
  would continue to work as they do now, created by the agent using the
  interface mappings.
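
  A sketch of how the proposed variable could sit alongside the existing one
  (the option and bridge names here just illustrate this proposal, they are
  not shipped configuration):

  # pre-existing bridge supplied by the operator for flat network physnet1
  physical_bridge_mappings = physnet1:br-physnet1
  # fall back to the current behaviour (agent-created bridge) for physnet2
  physical_interface_mappings = physnet2:eth1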

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1105488/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486565] Re: Network/Image names allows terminal escape sequence

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486565

Title:
  Network/Image names allows terminal escape sequence

Status in Glance:
  Opinion
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This allows a malicious user to create a network whose name will mess
  with an administrator's terminal when they list networks.

  Steps to reproduce:

  As a user: neutron net-create $(echo -e "\E[37mhidden\x1b[f")

  As an admin: neutron net-list
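
  A possible interim mitigation is to make the escape sequences visible
  instead of letting the terminal interpret them, e.g.:

  As an admin: neutron net-list | cat -v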

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1486565/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476527] Re: [RFE] Add common classifier resource

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476527

Title:
  [RFE] Add common classifier resource

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  Currently, in neutron each service which requires a classifier implements
its own. This introduces a lot of redundancy and maintenance issues.

  [Proposal]
  In order to address the above problem, the specification [1]_ introduces a 
common classifier resource for Neutron.

  [Benefits]
  - The new resource can be leveraged to better realize other Neutron services 
needing traffic classification. Traffic classification is commonly needed by 
many Neutron services (e.g. Service Function Chaining [2]_, QoS [3]_, Tap as a 
Service [4]_, FWaaS [5]_, Group-Based Policy [6]_, and Forwarding Rule [7]_), 
but each service uses its own classifier resource similar to others. A common 
traffic classifier resource should exist in Neutron so that it could be 
leveraged across network services avoiding redundancy and maintenance issues.
  - We can also think of deploying a classifier independently for any
service, doing classification at the ingress/egress. In turn, the
services will solely be responsible for their own required work.
  Example: For Service Function Chaining, the flow classifier could be implemented
by Neutron while the service chaining can be done by a third-party ML2
plug-in.
  - A comparison between this proposal and the existing ones is also captured
in [9]_.

  [What is the enhancement?]
  - Add traffic classifiers to Neutron API as extension.
  - Classifier can be independently deployed at the ingress/egress without 
depending upon any of the service/s.
  - In future, we can also think of extending the same interface for filtering 
routes which might be required for other ongoing work like BGP dynamic routing 
[8]_ happening in Neutron in the Liberty release.

  [Related information]
  [1] Specification
  https://review.openstack.org/#/c/190463/
  [2] API for Service Chaining,
     https://review.openstack.org/#/c/192933/
  [3] QoS,
     https://review.openstack.org/#/c/88599/
  [4] Tap as a Service,
     https://review.openstack.org/#/c/96149/
  [5] Firewall as a Service,
     
http://specs.openstack.org/openstack/neutron-specs/specs/api/firewall_as_a_service__fwaas_.html
  [6] Group-based Policy,
     https://review.openstack.org/#/c/168444/
  [7] Forwarding Rule,
     https://review.openstack.org/#/c/186663/
  [8] Neutron route policy support for dynamic routing protocols
     
https://blueprints.launchpad.net/neutron/+spec/neutron-route-policy-support-for-dynamic-routing-protocol
  [9] Details about common classifier and it's comparison with SG and other's
     https://etherpad.openstack.org/p/flow-classifier

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476527/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373337] Re: Updating quotas path issue

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373337

Title:
  Updating quotas path issue

Status in neutron:
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  In the docs (http://developer.openstack.org/api-ref-
  networking-v2.html#quotas-ext), it clearly says that to update quota
  values, the request should be:

  PUT /v2.0/quotas

  But I'm getting a 404 when I do this. If I do this instead:

  PUT /v2.0/quotas/foo

  it works as expected, where "foo" can literally be anything. I looked
  at how python-neutronclient handles this, and it seems to append
  the tenant_id to the end - which is completely undocumented. So:

  1. Is this a bug with the Neutron API or with the Neutron docs?
  2. Why does any arbitrary string get accepted?

  I'm using Neutron on devstack, Icehouse release.
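
  For reference, the request shape that does work for me looks roughly like
  this (tenant ID appended to the path, body per the quotas extension):

  PUT /v2.0/quotas/$TENANT_ID
  Content-Type: application/json
  X-Auth-Token: $TOKEN

  {"quota": {"network": 20, "port": 100}}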

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373337/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367243] Re: Bad error report when trying to connect VM to an overcrowded Network

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367243

Title:
  Bad error report when trying to connect VM to an overcrowded Network

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Expired

Bug description:
  When trying to create a port on a network that has no more addresses, Neutron
returns this error:
  No more IP addresses available on network 
a4e997dc-ba2e--9394-cfd89f670886.

  However, when trying to create a VM on a network that has no more addresses,
the VM is created in Error state with the details:
  "No valid host was found"

  That's because the compute agent sees the PortLimitExceeded error
from neutron and reports it to the scheduler.
  The scheduler mistakes that for a hypervisor limit and tries another compute
(which fails for the same reason) and finally reports the error as "No host
found", while the error should be "No more IP addresses available on network
a4e997dc-ba2e--9394-cfd89f670886"

  
  Neutron, however, doesn't register any errors for this flow.

  Recreate:
  1. create a subnet with 31 mask bits (which means no ports available for VMs)
  2. Boot a VM; errors will be seen in nova-compute and nova-scheduler (example commands below)
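
  Example commands for the recreate steps (CLI syntax from roughly that
  release; names are placeholders):

  $ neutron net-create crowded-net
  $ neutron subnet-create crowded-net 10.0.0.0/31 --name crowded-subnet
  $ nova boot --flavor m1.tiny --image cirros --nic net-id=<crowded-net-id> vm1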

  nova compute:
  2014-09-09 14:30:11.517 123433 AUDIT nova.compute.manager 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Starting instance...
  2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Attempting claim: memory 64 MB, disk 0 
GB, VCPUs 1
  2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total memory: 31952 MB, used: 1152.00 MB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] memory limit: 47928.00 MB, free: 46776.00 
MB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total disk: 442 GB, used: 0.00 GB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] disk limit not specified, defaulting to 
unlimited
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total CPUs: 24 VCPUs, used: 10.00 VCPUs
  2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] CPUs limit: 384.00 VCPUs, free: 374.00 
VCPUs
  2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Claim successful
  2014-09-09 14:30:12.141 123433 WARNING nova.network.neutronv2.api [-] Neutron 
error: quota exceeded
  2014-09-09 14:30:12.142 123433 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager Traceback (most 
recent call last):
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1528, in 
_allocate_network_async
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 360, in 
allocate_for_instance
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
LOG.exception(msg, port_id)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutro

[Yahoo-eng-team] [Bug 1303913] Re: Console logs for unittest failures are > 100MB

2022-10-19 Thread Rodolfo Alonso
This bug is not relevant now.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303913

Title:
  Console logs for unittest failures are > 100MB

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When unittests fail for nova and neutron the resulting console logs
  are quite large.

  Nova:
  http://logs.openstack.org/56/83256/14/check/gate-nova-python26/294f78f/ 142MB
  http://logs.openstack.org/56/83256/14/check/gate-nova-python27/195cbd3/ 142MB

  Neutron:
  http://logs.openstack.org/92/85492/5/check/gate-neutron-python27/fa325bf/ 
122MB
  http://logs.openstack.org/92/85492/5/check/gate-neutron-python26/76c0527/ 
100MB

  This is problematic because it makes it very hard to debug what
  actually happened. We should continue to preserve complete logging in
  the subunit log (we do need the verbose information), but we don't
  need to fill the console log with noisy redundant data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1303913/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573073] Re: [SRU] When router has no ports _process_updated_router fails because the namespace does not exist

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573073

Title:
  [SRU] When router has no ports _process_updated_router fails because
  the namespace does not exist

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released
Status in neutron source package in Yakkety:
  Fix Released
Status in neutron source package in Zesty:
  Fix Released

Bug description:
  [Description]
  [Testcase]
  Happens in Kilo. Cannot test on other releases.

  Steps to reproduce:

  1) create a router and set at least a port, also the gateway is fine
  2) check that the namespace exists with
     ip netns show | grep qrouter-
  3) check the ports are there
     ip netns exec qrouter- ip addr show
  4) delete all ports from the router
  5) check that only loopback interface is present
     ip netns exec qrouter- ip addr show
  6) run the cronjob task that is installed in the file
     /etc/cron.d/neutron-l3-agent-netns-cleanup
  so basically run this command:
     /usr/bin/neutron-netns-cleanup --config-file=/etc/neutron/neutron.conf 
--config-file=/etc/neutron/l3_agent.ini
  7) the namespace should be gone:
     ip netns show | grep qrouter-
  8) delete the neutron router.
  9) check log file /var/log/neutron/vpn-agent.log

  When the router has no ports the namespace is deleted from the network
  node by the cronjob. However, this breaks the router updates and the
  file vpn-agent.log is flooded with these traces:

  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info return 
func(*args, **kwargs)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, 
in process
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
self._process_internal_ports()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, 
in _process_internal_ports
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
existing_devices = self._get_existing_devices()
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, 
in _get_existing_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info ip_devs 
= ip_wrapper.get_devices(exclude_loopback=True)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in 
get_devices
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in 
execute
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info raise 
RuntimeError(m)
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', 
'/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: 
Cannot open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": 
No such file or directory
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
  2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
  2016-04-21 16:22:17.774 23382 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router '8fc0f640-35bb-4d0b-bbbd-80c22be0e762'
  2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 467, in 
_process_router_update
 

[Yahoo-eng-team] [Bug 1572548] Re: metering-agent failed to get traffic counters when no router-namespace where meter-label-rules were added

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572548

Title:
  metering-agent failed to get traffic counters when no router-namespace
  where meter-label-rules were added

Status in neutron:
  Won't Fix

Bug description:
  Removing a router from the l3 agent causes errors from neutron-metering-
  agent. The neutron-metering-agent keeps trying to get traffic counters
  from iptables in the router namespace, which no longer exists after the
  router is removed from the l3-agent.

  Steps to reproduce:
  1. Create internal net, subnet, router. Set external gateway for router, add 
interface to router for created net.
  2. Create neutron-meter-label
  3. Use 'neutron l3-agent-router-remove' command to remove a router from a L3 
agent
  Trace in neutron-metering-agent logs:
  2016-04-20 12:28:26.699 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-c8acd926-88e5-4292-883e-e928fb3f6d32": No such file or directory

  2016-04-20 12:28:26.700 ERROR 
neutron.services.metering.drivers.iptables.iptables_driver [-] Failed to get 
traffic counters, router: {u'status': u'ACTIVE', u'name': u'router05', 
u'gw_port_id': u'f36b30d3-5290-4896-837c-108b8cd4f3dc', u'admin_state_up': 
True, u'tenant_id': u'1c0eb24bdbb1406bb7d1346f36064ebd', u'_metering_labels': 
[{u'rules': [{u'remote_ip_prefix': u'0.0.0.0/0', u'direction': u'egress', 
u'metering_label_id': u'67dae290-38cb-4962-80ab-d6d3404dc6df', u'id': 
u'799f3361-6f90-4e36-a338-311d6e7c9d5b', u'excluded': False}], u'id': 
u'67dae290-38cb-4962-80ab-d6d3404dc6df'}], u'id': 
u'c8acd926-88e5-4292-883e-e928fb3f6d32'}
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver Traceback (most 
recent call last):
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 355, in get_traffic_counters
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver chain, 
wrap=False, zero=True)
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 712, in 
get_traffic_counters
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver current_table = 
self.execute(args, run_as_root=True)
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 137, in execute
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver raise 
RuntimeError(msg)
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver RuntimeError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-c8acd926-88e5-4292-883e-e928fb3f6d32": No such file or directory
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver
  2016-04-20 12:28:26.700 TRACE 
neutron.services.metering.drivers.iptables.iptables_driver

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572548/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566520] Re: [RFE] Upgrade controllers with no API downtime

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566520

Title:
  [RFE] Upgrade controllers with no API downtime

Status in neutron:
  Won't Fix

Bug description:
  Currently pretty much every major upgrade requires a full shutdown of
  all neutron-server instances for the time the upgrade process is
  running. The downtime is due to the need to run alembic scripts that
  modify the schema and transform data. Neutron-server instances are
  currently not resilient to working with an older schema. We also don't
  make an effort to avoid 'contract' migrations.

  The goal of the RFE is to allow upgrading controller services one by
  one, without a full shutdown of all of them in an HA setup. This will
  make it possible to avoid a public API shutdown during rolling upgrades.

  The RFE involves:
  - adopting object facades for all interaction with database models;
  - forbidding contract migrations in alembic;
  - implementing new contract migrations in backwards compatible way in runtime.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566520/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563967] Re: [RFE] Extend l2-extensions-api for flow management

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563967

Title:
  [RFE] Extend l2-extensions-api for flow management

Status in neutron:
  Won't Fix

Bug description:
  The l2-extensions-api that merged in Mitaka allows extensions of the
OvSNeutronAgent to use the OvS flow table.
  This introduces a new challenge of interoperability between extensions
creating flows.
  As the OvS flow table matches traffic to flows based on packet
characteristics in the order of the priority assigned to each flow,
  two extensions installing flows will inevitably result in one of them
blocking the extension that uses the lower-priority flows.
  Therefore a flow manager or flow pipeline is necessary to prevent
extensions from blocking other flows from matching traffic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1563967/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561824] Re: [RFE] Enhance BGP Dynamic Routing driver with Quagga suppport

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561824

Title:
  [RFE] Enhance BGP Dynamic Routing driver with Quagga suppport

Status in neutron:
  Won't Fix

Bug description:
  The current bgp-dynamic-routing implementation only supports the Ryu BGP
speaker. But sometimes we want Quagga as the BGP speaker because:
   1.  Quagga has more of the wanted features, e.g.:
 1.1) multiple-instance support, which is needed by [3].
 1.2) Quagga has more flexible route filters.
  2.  Quagga is written in C, so it will have better performance.

  And [1] lists a comparison of all possible BGP speakers.

  [1]BGPSpeakersComparison
  https://wiki.openstack.org/wiki/Neutron/DynamicRouting/BGPSpeakersComparison
  [2] bgp-dynamic-routing
  https://blueprints.launchpad.net/neutron/+spec/bgp-dynamic-routing
  [3]bgp-dragent-hosting-multiple-speakers
  https://bugs.launchpad.net/neutron/+bug/1528003

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561824/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552960] Re: Tempest and Neutron duplicate tests

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552960

Title:
  Tempest and Neutron duplicate tests

Status in neutron:
  Fix Released

Bug description:
  Problem statement:

  1) Tests overlap between Tempest and Neutron repos - 264 tests last I 
checked. The effect is:
  1.1) Confusion amongst QA & dev contributors and reviewers. I'm writing a 
test, where should it go? Someone just proposed a Tempest patch for a new 
Neutron API, what should I do with this patch?
  1.2) Wasteful from a computing resources point of view - The same tests are 
being run multiple times for every Neutron patchset.
  2) New APIs (Subnet pools, address scopes, QoS, RBAC, port security, service 
flavors and availability zones), are not covered by Tempest tests. Consumers 
have to adapt and run both the Tempest tests and the Neutron tests in two 
separate runs. This causes a surprising amount of grief.

  Proposed solution:

  For problem 1, we eliminate the overlap. We do this by defining a set
  of tests that will live in Tempest, and another set of tests that will
  live in Neutron. More information may be found here:
  https://review.openstack.org/#/c/280427/. After deciding on the line
  in the sand, we will delete any tests in Neutron that should continue
  to live in Tempest. Some Neutron tests were modified after they were
  copied from Tempest, these modifications will have to be examined and
  then proposed to Tempest. Afterwards these tests may be removed from
  Neutron, eliminating the overlap from the Neutron side. Once this is
  done, overlapping tests may be deleted from Tempest.

  For problem 2, we will develop a Neutron Tempest plugin. This will be
  tracked in a separate bug. Note that there's already a patch for this
  up for review: https://review.openstack.org/#/c/274023/

  * The work is also being tracked here:
  https://etherpad.openstack.org/p/neutron-tempest-defork

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552960/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552749] Re: Many BaseOVS functions unused

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552749

Title:
  Many BaseOVS functions unused

Status in neutron:
  Won't Fix

Bug description:
  When running vulture (https://pypi.python.org/pypi/vulture), I found many 
unused functions here:
  neutron/agent/common/ovs_lib.BaseOVS:

  neutron/agent/common/ovs_lib.py:118: Unused function 'add_bridge'
  neutron/agent/common/ovs_lib.py:131: Unused function 'bridge_exists'
  neutron/agent/common/ovs_lib.py:152: Unused function 'clear_db_attribute'
  neutron/agent/common/ovs_lib.py:209: Unused function 'set_standalone_mode'
  neutron/agent/common/ovs_lib.py:278: Unused function 'count_flows'
  neutron/agent/common/ovs_lib.py:373: Unused function 'get_iface_name_list'
  neutron/agent/common/ovs_lib.py:544: Unused function 'get_local_port_mac'
  neutron/agent/common/ovs_lib.py:553: Unused function 
'set_controllers_connection_mode'
  neutron/agent/common/ovs_lib.py:581: Unused function 
'get_egress_bw_limit_for_port'

  I didn't go through them all, but add_bridge and bridge_exists indeed
  don't seem used outside of tests.

  I ran vulture by going to Neutron base source dir, then running:
  vulture neutron --exclude=tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552749/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550400] Re: Macvtap driver/agent migrates instances on an invalid physical network

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550400

Title:
  Macvtap driver/agent migrates instances on an invalid physical network

Status in neutron:
  Won't Fix

Bug description:
  Scenario1 - Migration on wrong physical network - High Prio
  ===
  Host1 has physical_interface_mappings: physnet1:eth0, physnet2=eth2
  Host2 has physical_interface_mappings: physnet1:eth1, physnet2=eth0

  Now live migration of an instance hosted on host1 (connected to physnet1)
to host2 succeeds. Libvirt just migrates the whole server with its domain.xml
and the macvtap is plugged into the target's eth0.
  Now the instance no longer has access to its network, but instead has access
to another physical network. The behavior is documented, however it needs to be
fixed!

  Scenario2 - Migration fails - Low Prio
  ==
  Host1 has physical_interface_mappings: physnet1:eth0
  Host2 has physical_interface_mappings: physnet1:eth1

  Let's assume a vlan setup and a migration from host1 to
  host2. Host2 does NOT have an interface eth0. Migration will fail and the
  instance will remain active on the source, as the nova plug on host2
  failed to create a vlan device on eth0.

  If you have a flat network, the definition of the libvirt xml will fail on
  host2.

  Two approaches are thinkable
  * Solve the problem (Scenario 1+2)
  * just prevent such an invalid migration (let scenario 1 fail like scenario 2 
fails today)

  Solve the problem
  =

  #1 Solve it in Nova pre live migration
  --

  This would allow migration although physical_interface mappings are
  different.

  a) On pre live migration nova should change the binding:host to the
  migration target. This will trigger the portbinding and the mech
  driver which will update the vif_details with the right macvtap source
  device information. Libvirt can then adapt the migration-xml to
  reflect the changes.

  Currently the update of the binding is done in post migration, after 
migration succeeded. Can we already do it in pre_live_migration and on failure 
undo it in rollback ?
  - There's no issue for the reference implementations - See the prototype: 
https://review.openstack.org/297100
  - But there might be some mechanisms for external SDN Controllers that might 
shut down ports on the source host as soon as the host_id is being updated. On 
the other hand, if controller rely on this mechanism, they will set the port up 
a little too late today, as the update host_id is sent after live migration 
succeeded.

  b) The alternative would be to allow a port to be bound to multiple hosts 
simultaneously. So in pre_live migration, nova would add a binding for the 
target host and in post_live_migration it would remove the original binding.
  This would require
  - simultaneous port binding. This will be achieved by 
https://bugs.launchpad.net/neutron/+bug/1367391
  - allow such a binding for compute ports as well
  - Update APIs to reflect multiple port_bindings
    - Create / Update / Show Port
    - host_id is not reflected for DVR ports today [1]

  #2 Moved to Prevent section
  ---

  #3 Device renaming in the macvtap agent
  ---
  This would allow migration although physical_interface mappings are different.

  Instead of
   physical_interface_mapping = physnet1:eth0
  use a
   physical_interface_mac_mapping = physnet1:00:11:22:33:44:55
   # where 00:11:22:33:44:55 is the MAC address of the interface to use

  On agent startup, the agent could rename the associated device to
  "physnet1" (or to some other generic value) that is consistent across
  all hosts!

  We would need to document that this interface should not be used by
  any other application (that relies on the interface name)
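
  A minimal sketch of what the renaming in proposal #3 could look like, assuming
  the mapping value is a MAC address and that renaming via the "ip" command is
  acceptable; the helper names and the use of sysfs are illustrative
  assumptions, not the actual agent code:

    import os
    import subprocess

    def find_interface_by_mac(mac):
        """Return the name of the interface whose MAC matches, or None."""
        for name in os.listdir('/sys/class/net'):
            try:
                with open('/sys/class/net/%s/address' % name) as f:
                    if f.read().strip().lower() == mac.lower():
                        return name
            except IOError:
                continue
        return None

    def rename_physnet_interface(physnet, mac):
        """Rename the interface carrying `mac` to a host-independent name."""
        device = find_interface_by_mac(mac)
        if device is None:
            raise RuntimeError('no interface with MAC %s found' % mac)
        # The device has to be down while it is renamed.
        subprocess.check_call(['ip', 'link', 'set', device, 'down'])
        subprocess.check_call(['ip', 'link', 'set', device, 'name', physnet])
        subprocess.check_call(['ip', 'link', 'set', physnet, 'up'])

    # e.g. physical_interface_mac_mapping = physnet1:00:11:22:33:44:55
    # rename_physnet_interface('physnet1', '00:11:22:33:44:55')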

  #4 Use generic vlan device names
  

  This solves the problem only for vlan networks! For flat networks it
  still would exist

  Today, the agent generates the vlan device names like this: for eth1 the
  device is named "eth1.<vlan-id>". We could get rid of this pattern and use
  "<network-uuid>.<vlan-id>" instead, where network-uuid is the first 10 chars
  of the id.

  But this would not solve the issue for flat networks. Therefore the
  device renaming as proposed in #3 would be required.

  Prevent invalid migration
  =

  #1 Let Port binding fail
  

  The idea is to detect an invalid migration in the mechanism driver and
  let port binding fail.

  This approach has two problems
  a) Portbinding happens AFTER Migration happened. In post live migration nova 
requests to up

[Yahoo-eng-team] [Bug 1550278] Re: tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are failing repeatedly in the gate for networking-ovn

2022-10-19 Thread Rodolfo Alonso
Neutron bug closed due to lack of activity, please feel free to reopen
if needed.

This error is not present anymore in the CI jobs.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550278

Title:
  tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests
  are failing repeatedly in the gate for networking-ovn

Status in networking-odl:
  Invalid
Status in networking-ovn:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  We are seeing a lot of tempest failures for the tests 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* 
  with the below error.

  Either we should fix the error or at least disable these tests
  temporarily.

  
  t156.9: 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra[id-ae2f4a5d-03ff-4c42-a3b0-ce2fcb7ea832]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-02-26 07:29:46,168 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:test_dhcpv6_stateless_no_ra): 404 POST 
http://127.0.0.1:9696/v2.0/subnets 0.370s
  2016-02-26 07:29:46,169 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"subnet": {"cidr": "2003::/64", "ip_version": 6, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "gateway_ip": "2003::1", 
"ipv6_address_mode": "slaac"}}
  Response - Headers: {'content-length': '132', 'status': '404', 'date': 
'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-e21f771f-1a16-452a-9429-8a01f0409ae3'}
  Body: {"NeutronError": {"message": "Port 
598c23eb-1ae4-4010-a263-39f86240fd86 could not be found.", "type": 
"PortNotFound", "detail": ""}}
  2016-02-26 07:29:46,196 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET http://127.0.0.1:9696/v2.0/ports 
0.024s
  2016-02-26 07:29:46,197 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/ports', 'content-length': '13', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-f0966c23-c72f-4a6f-b113-5d88a6dd5912'}
  Body: {"ports": []}
  2016-02-26 07:29:46,250 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.052s
  2016-02-26 07:29:46,251 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-length': '457', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-3b29ba53-9ae0-4c0f-8c18-ec12db7a6bde'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "tenant_id": 
"631f9cb1391d41b6aba109afe06bc51b", "dns_nameservers": [], "gateway_ip": 
"2003::1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "2003::2", 
"end": "2003:::::"}], "host_routes": [], "ip_version": 6, 
"ipv6_address_mode": "slaac", "cidr": "2003::/64", "id": 
"6bc2602c-2584-44cc-a6cd-b8af444f6403", "subnetpool_id": null}]}
  2016-02-26 07:29:46,293 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/routers 0.041s
  2016-02-26 07:29:46,293 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/routers', 'content-length': '15', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-2b883ce9-b10f-4a49-a854-450c341f9cd9'}
  Body: {"routers": []}
  }}}

  Traceback (most recent call last):
File "tempest/api/network/test_dhcp_ipv6.py", line 129, in 
test_dhcpv6_stateless_no_ra
  real_ip, eui_ip = self._get_ips_from_subnet(**kwargs)
File "tempest/api/network/test_dhcp_ipv6.py", line 91, in 
_get_ips_from_subnet
  subnet = self.create_subnet(self.network, **kwargs)
File "tempest/api/network/base.py", line 196, in create_subnet

[Yahoo-eng-team] [Bug 1528003] Re: [RFE] bgp-dragent-hosting-multiple-speakers

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528003

Title:
  [RFE] bgp-dragent-hosting-multiple-speakers

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  The number of BGP speakers a BGP driver can host may vary. For instance, Ryu
  can support only 1 BGP speaker while Quagga can host multiple. In the current
  BGP dynamic routing implementation [1]_, the BGP DrAgent and DrScheduler
  cannot adjust themselves to the driver's capabilities, which might be required
  for effective scheduling.

  [Proposal]
  There could be 2 ways of achieving this:
  1. The admin can hard-code the support information in the configuration file,
  and the same could be read by the BGP DrAgent and DrScheduler during start-up.
  2. A new interface can be exposed by the BGP DrAgent to the DrScheduler,
  through which the DrScheduler retrieves this information during start-up.

  [Benefits]
  - Effective scheduling.

  [What is the enhancement?]
  - Configuration file changes. [Proposal-1]
  - New interface between BGP DrAgent and DrScheduler will be designed. 
[Proposal-2]

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528003/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528002] Re: [RFE] bgp-route-policing

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528002

Title:
  [RFE] bgp-route-policing

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  The current BGP dynamic routing proposal [1]_ doesn't support route filtering.
  By default, all the routes will be advertised. There is no way for an admin
  to filter routes before advertisement.

  [Proposal]
  - Add route-policy support to BGP dynamic routing.

  [Benefits]
  - Adds flexibility for filtering routes per BGP peering session.
  - Can provide more options for modifying route attributes, if required.

  [What is the enhancement?]
  - Additional API, CLI and DB model will be added.

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528002/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528000] Re: [RFE] bgp-route-aggregation

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528000

Title:
  [RFE] bgp-route-aggregation

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  The current BGP dynamic routing proposal [1]_ doesn't have support for route
  aggregation. Route aggregation could be extremely useful in reducing the size
  of the routing table and improving CPU utilization [2]_.

  [Proposal]
  - Add route-aggregation support to BGP dynamic routing.

  [Benefits]
  - Route aggregation is good as it reduces the size, and slows the growth, of
  the Internet routing table.
  - The amount of resources (e.g., CPU and memory) required to process routing
  information is reduced and route calculation is sped up.
  - Route flaps become limited in number

  [What is the enhancement?]
  - Additional API, CLI and DB model will be added.

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html
  [2] A Framework for Inter-Domain Route Aggregation
  https://tools.ietf.org/html/rfc2519

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1528000/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527993] Re: [RFE] bgp-statistics

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527993

Title:
  [RFE] bgp-statistics

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  Current BGP dynamic routing proposal [1]_ doesn't have support for getting 
BGP peer state and statistical information. Such information could be critical 
for debugging.

  [Proposal]
  - The existing BGP dynamic routing framework will be extended to support BGP
  peer state and statistical information.
  - Additional display CLIs will be added.

  [Benefits]
  - Debugging will be strengthened.

  [What is the enhancement?]
  - Add debugging framework.
  - Add interface for retrieving and displaying BGP peer statistics and states. 

  [Related information]
  [1] Dynamic Advertising Routes for Public Ranges
  
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/bgp-dynamic-routing.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527993/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690439] Re: [RFE] Deal with NetworkAmbiguous error

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690439

Title:
  [RFE] Deal with NetworkAmbiguous error

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Today if a tenant has more than one network visible to it and during
  boot it does not specify any network, the famous NetworkAmbiguous error is
  returned. It would be nice if there were a way for Nova and Neutron to
  fall back on a tenant-level 'default' network.

  [1] https://etherpad.openstack.org/p/pike-neutron-gman (line 29)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690439/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690438] Re: [RFE] make get-me-a-network work with any network topology

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690438

Title:
  [RFE] make get-me-a-network work with any network topology

Status in neutron:
  Won't Fix

Bug description:
  Today get-me-a-network works in such a way that neutron provisions a
  rather specific network topology for tenant connectivity. It would be
  nice if Neutron were able to support any network topology (e.g. plain
  provider vlans).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690438/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683566] Re: [RFE] L2 agent need concurrent processing port

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683566

Title:
  [RFE] L2 agent need concurrent processing port

Status in neutron:
  Won't Fix

Bug description:
  Our in-tree openvswitch agent currently processes ports in a serial way.
  This may delay operations and even affect the whole workflow,
  such as the actual VM startup.

  Could we let the l2 agent process ports more quickly, or make it process
  ports concurrently? I know some features or frameworks need the current way
  of processing the information they are concerned with, but I think it is
  necessary to improve the l2 agent performance.

  Cases affected:
  First, assume that the OpenStack deployment is running as a public cloud
  service. There are many uncontrolled API calls, including instance migration,
  instance creation, etc.

  Now we can focus on one compute node. If there are many requests from the
  neutron server, the server sends RPCs to the l2 agent, which adds the updated
  ports to its processing list (a set). Assume the port we care about is at the
  end of that list, there are 10 ports in the list, and each port costs 2
  seconds to process. The target port will only be done after 20s, so the whole
  operation may be delayed by 20s. This is just an example; please don't focus
  on the exact cost figures.

  Also, for instance migration the nova side is quicker than neutron; if the
  target port is also at the end of the processing list, the total time until
  the VM is ready is very long.
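
  A minimal sketch of the kind of concurrency being asked for here, using
  eventlet (which the agent already runs on); the process_port placeholder and
  the pool size are assumptions for illustration, not the agent's real code:

    import eventlet

    def process_port(port_id):
        # Placeholder for the per-port work the agent does today
        # (flows, security group rules, ...), assumed to take ~2 seconds.
        eventlet.sleep(2)
        return port_id

    def process_ports_concurrently(port_ids, pool_size=8):
        """Process ports in parallel greenthreads instead of one by one."""
        pool = eventlet.GreenPool(pool_size)
        # imap runs the work concurrently while preserving result order.
        return list(pool.imap(process_port, port_ids))

    # With 10 ports at ~2s each this finishes in roughly 4s with a pool of 8,
    # instead of ~20s serially (assuming the per-port work is I/O bound).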

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683566/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677469] Re: networking-sfc is breaking tacker CI

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677469

Title:
  networking-sfc is breaking tacker CI

Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/44/448844/6/check/gate-tacker-dsvm-
  functional-ubuntu-xenial-nv/31f9ef1/logs/screen-q-agt.txt.gz

  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-8948a445-84d5-4cd1-8084-551b7b135dcf - -] Agent main thread died of an 
exception
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2168, in main
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 208, in __init__
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.init_extension_manager(self.connection)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 153, in 
wrapper
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 393, in init_extension_manager
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.agent_api)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/neutron/neutron/agent/agent_extensions_manager.py", line 55, in 
initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
extension.obj.initialize(connection, driver_type)
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/sfc.py",
 line 82, in initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.sfc_driver.initialize()
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 96, in initialize
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self._clear_sfc_flow_on_int_br()
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/opt/stack/new/networking-sfc/networking_sfc/services/sfc/agent/extensions/openvswitch/sfc_driver.py",
 line 171, in _clear_sfc_flow_on_int_br
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.br_int.delete_group(group_id='all')
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
AttributeError: 'OVSIntegrationBridge' object has no attribute 'delete_group'
  2017-03-30 04:24:18.839 10667 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
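
  The failure above is an AttributeError: the bridge class provided by the
  neutron version in that job does not expose delete_group. A minimal defensive
  sketch of how the SFC cleanup could tolerate such a mismatch is shown below;
  this is only an illustrative assumption, not the fix that was actually
  released:

    import logging

    LOG = logging.getLogger(__name__)

    def clear_sfc_groups(br_int):
        """Delete SFC OpenFlow groups, tolerating bridges without group support."""
        if not hasattr(br_int, 'delete_group'):
            LOG.warning('bridge %s does not expose delete_group; skipping '
                        'SFC group cleanup', br_int)
            return
        br_int.delete_group(group_id='all')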

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1677469/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674349] Re: [RFE] Introduce a new rule with service user role in Neutron policy

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1674349

Title:
  [RFE] Introduce a new rule with service user role in Neutron policy

Status in neutron:
  Invalid

Bug description:
  When other services like Nova talk to the Neutron REST API, they use
  an admin token in some scenarios, like setting the port binding on a
  port. In these cases, the admin token is used to ensure only Nova has
  access to the binding API, not the end user.

  With the introduction of service tokens, we can use a service token instead
  of the admin token to perform such operations (e.g. the metadata-API-related
  operations) which currently use an admin token.

  In Ocata, Nova started sending a service token along with the user token, so 
Neutron already knows it is Nova sending a request on behalf of the user.
  https://review.openstack.org/#/c/410394/

  We can make use of the new role added by keystoneauth.

  "service_roles": "service:"

  The above role can be used in policy to define the level of access for an
  action when a service token is used along with the user context. This allows
  performing any action for which we have added the service role in the policy
  configuration.

  For example, if we want to perform the "binding port to host id" operation
  with service token authentication, we will pass the service token along
  with the auth token to communicate with Neutron. In this case, the Neutron
  policy should also allow performing this operation with
  "service_roles".

  Spec in Nova:
  https://review.openstack.org/#/c/439890/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1674349/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673142] Re: [RFE][ML2] Enforce extension semantics

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673142

Title:
  [RFE][ML2] Enforce extension semantics

Status in neutron:
  Won't Fix

Bug description:
  The ML2 core plugin implements certain API extensions through mix-in
  classes, and allows additional API extensions to be implemented using
  extension drivers. All of these extensions are then considered to be
  "supported" by the plugin. But supporting an extension's API does not,
  in itself, guarantee that the semantics a user requests using that API
  will be supported and enforced by the applicable mechanism drivers.

  One potential solution might be to require all mechanism drivers to
  provide a property listing the extension aliases they support, and
  then for the plugin to only claim to support those extensions
  supported by all configured mechanism drivers. But this solution does
  not address important use cases involving multiple mechanism drivers.
  For example, an SR-IOV mechanism driver may not be able to enforce
  security groups itself, so shouldn't claim to support that extension.
  But then security groups wouldn't be available for non-SR-IOV ports
  either. This might be resolved by the SR-IOV mechanism driver claiming
  to support security groups, and refusing to bind when the list of
  security groups on the port is not empty. But then what if a ToR
  switch mechanism driver could support security groups? The SR-IOV
  mechanism driver should not have know whether a specific ToR mechanism
  driver is part of the binding and what its capabilities are.
  Furthermore, the ToR mechanism driver might only be able to enforce
  certain kinds of security groups, so whether an SR-IOV binding is
  valid might depend on the specific security group rules applied to the
  port. And, unlike security groups or other mixed-in extensions, those
  supported via configured extension drivers might not even be
  meaningful to some of the configured mechanism drivers. Requiring all
  mechanism drivers to understand and support all extensions is clearly
  not a viable solution.

  We therefore propose a more dynamic solution to ensure that the
  semantics requested via extension APIs are enforced. This solution
  prevents port bindings from being created unless the semantics
  expressed through all configured extension APIs can be provided by
  that port binding. This will result in invalid port bindings being
  passed over until a valid one is found, or potentially in a failure to
  bind. It will not address the harder problem of making Nova's
  scheduling of instances take extension semantics into account, but
  will work in conjunction with using Nova host aggregates to influence
  scheduling.

  The proposed solution adds a method each to the ML2 ExtensionDriver
  and MechanismDriver APIs. It does not require any change in Neutron
  APIs, existing extension APIs, or any database schema. The driver base
  classes will implement default versions of these methods, so existing
  drivers will not be broken. After a potential port binding has been
  determined, within the transaction that will commit the result, ML2
  will perform a new extension semantics validation phase.

  For each registered extension driver, the
  ExtensionDriver.validation_mode method will first be called, which
  will return a "none", "one", or "all" value indicating what type of
  mechanism driver validation is needed for that extension. This method
  can examine the attribute values if needed to determine the mode. In
  the example above, a security group extension driver would examine the
  port's security_groups value and return "none" if there are no
  security groups on the port, and "one" otherwise.
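
  A minimal sketch of the two proposed hooks using the security-group example;
  the method signatures are assumptions (the RFE only names the methods), and
  the plain classes stand in for the real ML2 ExtensionDriver/MechanismDriver
  base classes:

    class SecurityGroupExtensionDriver(object):
        """Stand-in for an ML2 ExtensionDriver implementing the proposed hook."""

        def validation_mode(self, port):
            # Nothing to enforce when the port carries no security groups.
            return 'none' if not port.get('security_groups') else 'one'

    class SriovMechanismDriver(object):
        """Stand-in for an ML2 MechanismDriver implementing the proposed hook."""

        def validate_extension(self, extension_alias, port):
            # SR-IOV cannot enforce security group semantics itself.
            return extension_alias != 'security-group'

    def binding_is_valid(port, extension_driver, bound_mech_drivers):
        """ML2-side check run inside the port binding transaction."""
        mode = extension_driver.validation_mode(port)
        if mode == 'none':
            return True
        results = [d.validate_extension('security-group', port)
                   for d in bound_mech_drivers]
        return any(results) if mode == 'one' else all(results)

    port = {'security_groups': ['default']}
    print(binding_is_valid(port, SecurityGroupExtensionDriver(),
                           [SriovMechanismDriver()]))  # False -> try next binding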

  ML2's extension semantics validation then proceeds based on the
  validation mode determined for that extension. If the validation mode
  is "none", no further validation is needed for that extension.
  Otherwise, the potential binding is validated by calling the
  MechanismDriver.validate_extension method on the mechanism drivers
  making up the potential binding. For an enforcement mode of "one",
  validation proceeds from the bottom to top mechanism drivers until one
  indicates it can enforce that extension's semantics by returning True.
  For "all", each mechanism driver in the potential binding must
  indicate it can enforce the semantics. In the example above,
  the openvswitch or linuxbridge mechanism driver would always return
  True to indicate it can enforce any security group semantics, an SR-
  IOV mechanism driver would always return False to indicate it cannot
  enforce any security group semantics, and a ToR switch mechanism
  driver might have to examine the security group rules to determine
  whether it can e

[Yahoo-eng-team] [Bug 1672852] Re: [RFE] Make controllers with different list of supported API extensions to behave identically

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672852

Title:
  [RFE] Make controllers with different list of supported API extensions
  to behave identically

Status in neutron:
  Won't Fix

Bug description:
  The idea is to make controllers behave the same on API layer
  irrespective of the fact whether they, due to their different major
  versions, or because of different configuration files, support
  different lists of API extensions.

  The primary use case here is when controllers are upgraded in rolling
  mode, when you have different major versions running and probably
  serving API requests in round-robin implemented by a frontend load
  balancer. If version N exposes extensions A,B,C,D, while N+1 exposes
  A,B,C,D,E, then during upgrade when both versions are running, API
  /extensions/ endpoint should return [A,B,C,D]. After all controllers
  get to the new major version, they can switch to [A,B,C,D,E].

  This proposal implies there is mutual awareness of controller services
  about each other and their lists of supported extensions that will be
  achieved by storing lists in a new servers table, similar to agents
  tables we have.

  On service startup, controllers will discover information about other
  controllers from the table and load only those extensions that are
  supported by all controller peers. We may also introduce a mechanism
  where a signal triggers reload of extensions based on current table
  info state, or a periodic reloading thread that will look at the table
  e.g. every 60 seconds. (An alternative could be discovering that info
  on each API request, but that would be too consuming.)
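
  A minimal sketch of the core computation each controller would perform after
  reading the proposed servers table; the in-memory peers mapping is only an
  assumed stand-in for that table:

    def common_extensions(peer_extensions):
        """Return only the extension aliases supported by every controller."""
        if not peer_extensions:
            return set()
        return set.intersection(*(set(e) for e in peer_extensions.values()))

    peers = {
        'controller-N':   {'A', 'B', 'C', 'D'},
        'controller-N+1': {'A', 'B', 'C', 'D', 'E'},
    }
    print(sorted(common_extensions(peers)))  # ['A', 'B', 'C', 'D'] during upgrade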

  This proposal does not handle the case where we drop an extension in the
  span of a single cycle (like replacing the timestamp extension with
  timestamp_core). We may need to handle those cases by some other means
  (the easiest being not allowing such drastic in-place replacement of
  attribute format).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672852/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671607] Re: OVS agent sometimes uses same request id

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671607

Title:
  OVS agent sometimes uses same request id

Status in neutron:
  Won't Fix

Bug description:
  The OVS agent uses the same context and subsequently the same request
  ID for some requests it makes to the Neutron server. This makes
  searching for single actions based on request ID very ineffective.

  Similar to DHCP agent https://bugs.launchpad.net/neutron/+bug/1618231
  (and l3-agent), which were fixed within the past year.

  I'll try and update this with an example from the logs when I get a
  new stack up and running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671607/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657406] Re: admin users can access resources from other projects

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657406

Title:
  admin users can access resources from other projects

Status in neutron:
  Won't Fix

Bug description:
  We're seeing a similar problem like the one described in
  https://bugs.launchpad.net/nova/+bug/1046054 for the Neutron endpoint
  /v2.0/security-groups in Mitaka.

  Making a project-scoped request to this endpoint with an admin user
  returns a list of security groups including all security groups of all
  projects. Also PUT or POST request do work for security group in
  another project. The same applies to endpoint /v2.0/networks/. Note
  that this does not apply for e.g. nova's server resource, but might
  apply to other resources as well.

  OpenStack version: Mitaka

  How to reproduce:
  1. Create two projects A and B
  2. Create a new user 'UserA'
  3. Assign 'UserA' to project A and give her the admin role
  4. Use the openstack cli or curl to make a GET request to
  /v2.0/security-groups with an auth scope for project A (a minimal sketch of
  this request follows below)
  => The security groups of project B (and potentially all other projects in
  the OpenStack installation) are part of the response.
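
  A minimal reproduction sketch of step 4 using python-requests; the Neutron
  endpoint and the project-A-scoped token are placeholders you would obtain
  from your own deployment:

    import requests

    NEUTRON_URL = 'http://controller:9696'        # placeholder
    TOKEN = 'project-A-scoped-token-for-UserA'    # placeholder

    resp = requests.get(
        NEUTRON_URL + '/v2.0/security-groups',
        headers={'X-Auth-Token': TOKEN, 'Accept': 'application/json'})
    resp.raise_for_status()

    for sg in resp.json()['security_groups']:
        # With the behaviour described above, groups owned by project B
        # show up here as well.
        print(sg['tenant_id'], sg['id'], sg['name'])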

  Not sure if this is related to bug
  https://bugs.launchpad.net/keystone/+bug/968696

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657406/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657071] Re: [RFE]Add "log" attribute to firewall_policy

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Opinion

** Changed in: neutron
   Status: Opinion => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1657071

Title:
  [RFE]Add "log" attribute to firewall_policy

Status in neutron:
  Won't Fix

Bug description:
  In general, the firewalls we use need log information about when and why a
  firewall_policy was deleted or updated.
  Users need this info to locate problems or illegal operations quickly.
  Also, for security purposes, we need to log this kind of info.

  Maybe the "log" attribute details like:

  
  name type CRUD  default
  Log  bool CRU   False

  We can create/update a fw_policy with this attribute. If it is enabled, the
  fw_policy related log info will be stored. It must contain
  the operation time and the operation reason, and we could call an API to
  query the details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1657071/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656629] Re: Pagination exception from commit 175bfe048283ae930b60b1cea87874d1fa1d10d6

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656629

Title:
  Pagination exception from commit
  175bfe048283ae930b60b1cea87874d1fa1d10d6

Status in neutron:
  Invalid

Bug description:
  Commit 175bfe048283ae930b60b1cea87874d1fa1d10d6 causes unit tests to
  fail.

  ft104.52: 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_floatingip_list_with_pagination_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 86846ccf-7d6f-4165-b7c9-40f92f162ee9: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 57eb1b8f-8ae1-4a9a-81e0-9db04e9ec511: no agents available; 
will retry on subsequent port and subnet creation events.
   WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
   WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to 
schedule network 4ab1629b-67a1-40c6-a455-589f7c132ef7: no agents available; 
will retry on subsequent port and subnet creation events.
 ERROR [neutron.api.v2.resource] index failed: No details.
  Traceback (most recent call last):
File "/tmp/openstack/neutron/neutron/api/v2/resource.py", line 79, in 
resource
  result = method(request=request, **args)
File "/tmp/openstack/neutron/neutron/db/api.py", line 92, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/tmp/openstack/neutron/neutron/db/api.py", line 88, in wrapped
  return f(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 151, in wrapper
  ectxt.value = e.inner_exc
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 139, in wrapper
  return f(*args, **kwargs)
File "/tmp/openstack/neutron/neutron/db/api.py", line 128, in wrapped
  traceback.format_exc())
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/tmp/openstack/neutron/neutron/db/api.py", line 123, in wrapped
  return f(*dup_args, **dup_kwargs)
File "/tmp/openstack/neutron/neutron/api/v2/base.py", line 369, in index
  return self._items(request, True, parent_id)
File "/tmp/openstack/neutron/neutron/api/v2/base.py", line 308, in _items
  obj_list = obj_getter(request.context, **kwargs)
File "/tmp/openstack/neutron/neutron/db/api.py", line 163, in wrapped
  return method(*args, **kwargs)
File "/tmp/openstack/neutron/neutron/db/api.py", line 92, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/tmp/openstack/neutron/neutron/db/api.py", line 88, in wrapped
  return f(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 151, in wrapper
  ectxt.value = e.inner_exc
   

[Yahoo-eng-team] [Bug 1654960] Re: SRIOV VF MAC-anti-spoofing check behavior not compatible when port_security extension not configured

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1654960

Title:
  SRIOV VF MAC-anti-spoofing check behavior not compatible when
  port_security extension not configured

Status in neutron:
  Won't Fix

Bug description:
  Some NFV test cases require the source MAC address to be filled in by the
  application, so when the packets reach the SRIOV eswitch, the source
  MAC address is not the VF's MAC address. If the SRIOV NIC's MAC anti-
  spoofing check is enabled, such packets will be dropped, which is not
  desired.

  The solution is to disable the MAC-anti-spoofing check. I noticed the
  following blueprint introduces the ability to control the SRIOV
  MAC-anti-spoofing check:
  https://specs.openstack.org/openstack/neutron-specs/specs/liberty/sriov-spoofchk.html
  And the implementation was done by the following change:
  https://review.openstack.org/#/c/192065/

  But the implementation is not compatible when the port_security extension
  driver is not configured.
  For example, I use Mellanox SRIOV NICs, where the MAC-anti-spoofing check is
  disabled by default
  (http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v3_40.pdf),
  so before Liberty the VF's MAC-anti-spoofing check was DISABLED and the NFV
  application could specify the source MAC of the outbound packets. After
  Liberty, the sriov-nic-agent enables the MAC-anti-spoofing check NO MATTER
  whether the port_security extension driver is configured or not. See the
  following code: spoofcheck has a default value of True, which means the spoof
  check will always be enabled unless port_security_enabled is explicitly set
  to False:

  def treat_devices_added_updated(self, devices):
      ...
      spoofcheck = device_details.get('port_security_enabled', True)

  

  As I understand it, when the port_security extension is not configured there
  is no way to control the SRIOV MAC-anti-spoofing check, so the behavior of
  the NICs should be left as it is. It's not reasonable to enable the
  MAC-anti-spoofing check by default.
  When the port_security extension is not configured, the behavior should be
  compatible with the version before Liberty.
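
  A minimal sketch of the compatible behaviour being asked for, where the agent
  only touches VF spoof checking when the port_security extension actually
  supplied a value; apply_spoofcheck is a hypothetical helper standing in for
  the agent's eswitch call:

    def handle_spoofcheck(device_details, apply_spoofcheck):
        """Only change VF spoof checking when port_security data is present."""
        spoofcheck = device_details.get('port_security_enabled')
        if spoofcheck is None:
            # port_security extension not configured: leave the NIC default
            # (e.g. disabled on Mellanox) untouched, as before Liberty.
            return
        apply_spoofcheck(device_details['device'], spoofcheck)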

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1654960/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1651420] Re: Can not clear source or dest port (range) for existing firewall rule

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1651420

Title:
  Can not clear source or dest port  (range)  for existing firewall rule

Status in neutron:
  Won't Fix

Bug description:
  We need to give the user a way to update a firewall rule to clear the source
  or destination port (range).

  We can create a firewall-rule with source-ip-address and 
destination-ip-address set, for example:
  [root@node-1 ~]# neutron firewall-rule-create --source-ip-address 0.0.0.0/0 
--source-port 1234 --destination-ip-address 192.168.2.0/24 --destination-port 
22 --protocol tcp --action allow
  Created a new firewall_rule:
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | action                  | allow                                |
  | description             |                                      |
  | destination_ip_address | 192.168.2.0/24                       |
  | destination_port        | 22                                   |
  | enabled                 | True                                 |
  | firewall_policy_id      |                                      |
  | id                      | f44e6557-7d1b-44f0-a5e6-aad2e77c9ad1 |
  | ip_version              | 4                                    |
  | name                    |                                      |
  | position                |                                      |
  | protocol                | tcp                                  |
  | shared                  | False                                |
  | source_ip_address       | 0.0.0.0/0                            |
  | source_port             | 1234                                 |
  | tenant_id               | e8cf9c9245f24f209263465bcb2cc8c4     |
  +-------------------------+--------------------------------------+
  If we want to update this rule and no longer want the source_port or
  destination_port set, we cannot do that for now.

  I expect to clear the source_port by using the following command:
  [root@node-1 ~]# neutron firewall-rule-update 
47cd4350-6c9e-4803-bda7-749774d36dcc --source-port ''
  Updated firewall_rule: 47cd4350-6c9e-4803-bda7-749774d36dcc
  [root@node-1 ~]# neutron firewall-rule-show 
47cd4350-6c9e-4803-bda7-749774d36dcc
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | action                  | allow                                |
  | description             |                                      |
  | destination_ip_address | 192.168.2.0/24                       |
  | destination_port        | 22                                   |
  | enabled                 | True                                 |
  | firewall_policy_id      |                                      |
  | id                      | 47cd4350-6c9e-4803-bda7-749774d36dcc |
  | ip_version              | 4                                    |
  | name                    |                                      |
  | position                |                                      |
  | protocol                | tcp                                  |
  | shared                  | False                                |
  | source_ip_address       | 0.0.0.0/0                            |
  | source_port             |                                      |
  | tenant_id               | e8cf9c9245f24f209263465bcb2cc8c4     |
  +-------------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1651420/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645547] Re: The qrouter namespace can't be removed when l3-agent rebooting.

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645547

Title:
  The qrouter namespace can't be removed when l3-agent rebooting.

Status in neutron:
  Won't Fix

Bug description:
  When the l3-agent is rebooting, if the removal of a qrouter namespace
  happens during the restart, the l3-agent later works correctly, but the
  removed qrouter namespace still exists as a Linux network namespace.

  I tried again and again; the removal never gets synchronised.
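
  A small sketch for spotting the leftover namespaces described above, simply
  wrapping "ip netns list"; whether a namespace is really stale still has to be
  cross-checked against the routers the l3-agent is supposed to host:

    import subprocess

    def qrouter_namespaces():
        """Return the qrouter-* network namespaces present on this host."""
        out = subprocess.check_output(['ip', 'netns', 'list']).decode()
        # Each line starts with the namespace name; newer iproute2 versions
        # append "(id: N)" after it.
        return [line.split()[0] for line in out.splitlines()
                if line.startswith('qrouter-')]

    print(qrouter_namespaces())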

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645547/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640960] Re: RFE: VIF plugging negotiation between Nova and Neutron

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640960

Title:
  RFE: VIF plugging negotiation between Nova and Neutron

Status in neutron:
  Won't Fix

Bug description:
  Nova has opinions on which VIF types are acceptable, and these types
  vary by circumstance.  vhost-user can be used if QEMU is in use and
  hugepages are used on the VM (specifically, if the VM's memory is
  backed by an open file, in fact); exclusively virtio mechanisms like
  vhost-user cannot be used when QEMU emulates physical hardware such as
  E1000s; different hosts can have different hypervisors; and so on.  It
  cannot, today, express those preferences to Neutron, so Neutron
  generally returns a fixed and preconfigured plugging type that may or
  may not actually be useful to Nova - and when it isn't, VMs fail to
  boot.

  https://review.openstack.org/#/c/390512/ is a Nova spec that describes
  how Nova will tell Neutron which types are acceptable to it for a
  specific port when plugging is initiated.  This is the companion RFE
  for Neutron that asks that Neutron's core code and ML2 driver be
  changed to offer up those preferences over the plugin and ML2
  interfaces, respectively (likely no change if we use the port
  structure to pass the information as we bind) and change the in-tree
  OVS and LB mechanism drivers to respect the passed types as
  appropriate.

  (I'm not sure how much change this actually requires in Neutron, if
  any, because the LB and OVS drivers are not terribly flexible as
  regards plugging types; but if any example code that can be written
  for the LB and OVS drivers I want to make sure that I'm covered as I
  introduce the changes.)
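
  A minimal sketch of the negotiation outcome the two specs describe: Nova
  passes the VIF types it can accept for a port, and the mechanism driver binds
  with the first one it also supports. The type names and the helper are purely
  illustrative assumptions; neither project's final interface is implied:

    def choose_vif_type(nova_acceptable, driver_supported):
        """Pick the first VIF type acceptable to Nova that the driver offers."""
        for vif_type in nova_acceptable:
            if vif_type in driver_supported:
                return vif_type
        return None  # no overlap -> this driver cannot bind the port

    # e.g. a VM without file-backed (hugepage) memory cannot use vhost-user
    print(choose_vif_type(['ovs', 'bridge'], {'vhostuser', 'ovs'}))  # 'ovs'
    print(choose_vif_type(['vhostuser'], {'bridge'}))                # None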

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640960/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640956] Re: RFE: Extension for improving Nova-Neutron callflow to cache port data

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640956

Title:
  RFE: Extension for improving Nova-Neutron callflow to cache port data

Status in neutron:
  Won't Fix

Bug description:
  Nova requires some data from a Neutron port and its friends to
  populate its port info cache, and ultimately to create the metadata
  constructs that are passed on to VMs to inform them of their addresses
  and routes.  Today, it wanders the datastructures in Neutron, looking
  at ports' subnets, to get this information.

  Instead, https://review.openstack.org/#/c/390513/ proposes that the
  port gain an attribute or attributes containing the required
  information.  This is a companion RFE to detail that an extension is
  required in Neutron to provide those attributes (by doing the DB
  mining locally and returning additional attributes with the data on
  the port object).

  The Nova spec will track the information and ultimately the format of
  the data (since I don't want to copy it into two locations) and the
  Neutron extension, being the simpler part of the problem, will
  implement the returning of this data.  Please check and comment on the
  nova spec for that part of the detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640956/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1639566] Re: [RFE] Add support for local SNAT

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639566

Title:
  [RFE] Add support for local SNAT

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  Currently, when the User wants to allow multiple VMs to access external 
networks (e.g. internet), he can either assign a floating IP to each VM (DNAT), 
or assign just one floating IP to the router that he uses as a default gateway 
for all the VMs (SNAT).

  The downside of DNAT is that the number of external IP addresses is
  very limited, and therefore it requires that the User either "switch"
  floating IPs between VMs (complicated), or obtain enough external IPs
  (expensive).

  The downside of SNAT is that all outbound traffic from the VMs that
  use it as default gateway will go through the server that hosts the
  router (a Neutron Network Node), effectively creating a network
  bottleneck and single point of failure for multiple VMs.

  [Proposal]
  Add an additional SNAT model (henceforth referred to as "Local SNAT") that 
places the NAT/PAT on each Compute Node, and lets the underlying networking 
infrastructure decide how to handle the outbound traffic. In order for this 
design to work in a real world deployment, the underlying networking 
infrastructure needs to allow Compute Nodes to access the external network 
(e.g. WWW).

  When the Compute Node can route outbound traffic, VMs hosted on it do
  not need to be routed through the Network Node. Instead, they will be
  routed locally from the Compute Node.

  This will require changes to the local routing rules on each Compute
  Node.

  The change should be reflected in Neutron database, as it affects
  router Ports configuration and should be persistent.
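
  A minimal sketch of the per-Compute-Node rule that "Local SNAT" boils down
  to, expressed as a plain iptables masquerade rule driven from Python; the
  tenant CIDR and external interface name are placeholders, and the Neutron
  DB/persistence side mentioned above is not covered:

    import subprocess

    def add_local_snat_rule(tenant_cidr='10.0.0.0/24', external_if='eth0'):
        """Masquerade outbound tenant traffic directly on the compute node."""
        subprocess.check_call([
            'iptables', '-t', 'nat', '-A', 'POSTROUTING',
            '-s', tenant_cidr,    # traffic from the tenant network
            '-o', external_if,    # leaving via the node's external interface
            '-j', 'MASQUERADE',
        ])

    # add_local_snat_rule()  # requires root and an externally routable node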

  [Benefits]
  Improvement is to be expected, since outbound traffic is routed locally and
  not through the Network Node, effectively reducing the network bottleneck on
  the Network Node.

  
  [Functionality difference]
  The main functionality difference between the Neutron reference 
implementation of SNAT and "Local SNAT", is that with Neutron SNAT the User 
reserves an external IP address (from a limited pre-allocated pool), which is 
used to masquerade multiple VMs of that same user (therefore, sharing the same 
external IP).

  With the "Local SNAT" solution, in contrast, the User may not reserve
  any external IP in Neutron, and the "external IP" from which each VM
  will go out is arbitrarily selected by the underlying networking
  infrastructure (similar to the way external IPs are allocated to home
  internet routers, or to mobile phones).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639566/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1636180] Re: Error in docs for creating new Floating IP

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1636180

Title:
  Error in docs for creating new Floating IP

Status in neutron:
  Won't Fix

Bug description:
  According to the docs on this link:

  http://developer.openstack.org/api-
  ref/networking/v2/index.html?expanded=create-floating-ip-
  detail#floating-ips-floatingips

  we should pass an object such as this for creating a new floating IP:

  {
  "floatingip": {
  "floating_network_id": "376da547-b977-4cfe-9cba-275c80debf57"
  }
  }

  But using this object throws an error. Instead we need to use a much
  simpler object, without the "floatingip" key. It would result in
  something like this:

  body = {"floating_network_id": public_network.id}

  Please fix the API docs to spare anybody else's time.
  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1636180/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630429] Re: Disabling subnetpool feautre

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1630429

Title:
  Disabling subnetpool feautre

Status in neutron:
  Fix Released

Bug description:
  I am facing some issues with subnetpool, so I don't want to use this
  feature, but I couldn't find any option to disable it. Is there any
  way I can do this?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1630429/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1627785] Re: [RFE] Create FWaaS driver for OVS firewalls

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627785

Title:
  [RFE] Create FWaaS driver for OVS firewalls

Status in neutron:
  Won't Fix

Bug description:
  Create a back-end driver for FWaaS that will implement firewalls using
  OVS flows, similar to the Security Group implementation that uses OVS
  flows[1].  This will be implemented within the context of the FWaaS L2
  agent extension[2], and the L2 agent extension API will give FWaaS
  access to the integration bridge for flow management.

  [1] 
http://docs.openstack.org/developer/neutron/devref/openvswitch_firewall.html
  [2] https://review.openstack.org/#/c/323971
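
  For illustration, a rough sketch of the kind of allow rule such a driver
  might program on the integration bridge; the real driver would use the L2
  agent extension API rather than ovs-ofctl, and the bridge name and port
  numbers below are placeholders:

    # Illustrative only: add an "allow TCP/22 to the VIF on ofport 5" flow.
    import subprocess

    def allow_tcp_to_port(bridge, ofport, dest_port):
        flow = ("table=0,priority=70,in_port=%d,tcp,tp_dst=%d,actions=NORMAL"
                % (ofport, dest_port))
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

    allow_tcp_to_port("br-int", 5, 22)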

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627785/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613901] Re: String "..%c0%af" causes 500 errors in multiple locations

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1613901

Title:
  String "..%c0%af" causes 500 errors in multiple locations

Status in Cinder:
  Incomplete
Status in Glance:
  New
Status in OpenStack Identity (keystone):
  Won't Fix
Status in neutron:
  Won't Fix
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:

  While doing some testing on Keystone using Syntribos
  (https://github.com/openstack/syntribos), our team (myself, Michael
  Dong, Rahul U Nair, Vinay Potluri, Aastha Dixit, and Khanak Nangia)
  noticed that we got 500 status codes when the string "..%c0%af" was
  inserted in various places in the URL for different types of requests.

  Here are some examples:

  =

  DELETE /v3/policies/..%c0%af HTTP/1.1
  Host: [REDACTED]:5000
  Connection: close
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-requests/2.11.0
  X-Auth-Token: [REDACTED]
  Content-Length: 0

  HTTP/1.1 500 Internal Server Error
  Date: Tue, 16 Aug 2016 22:04:27 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-238fd5a9-be45-41f2-893a-97b513b27af3
  Content-Length: 143
  Connection: close
  Content-Type: application/json

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}

  =

  PATCH /v3/policies/..%c0%af HTTP/1.1
  Host: [REDACTED]:5000
  Connection: close
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-requests/2.11.0
  Content-type: application/json
  X-Auth-Token: [REDACTED]
  Content-Length: 70

  {"type": "--serialization-mime-type--", "blob": "--serialized-blob--"}

  HTTP/1.1 500 Internal Server Error
  Date: Tue, 16 Aug 2016 22:05:36 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-57a41600-02b4-4d2a-b3e9-40f7724d65f2
  Content-Length: 143
  Connection: close
  Content-Type: application/json

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}

  =

  GET /v3/domains/0426ac1e48f642ef9544c2251e07e261/groups/..%c0%af/roles 
HTTP/1.1
  Host: [REDACTED]:5000
  Connection: close
  Accept-Encoding: gzip, deflate
  Accept: application/json
  User-Agent: python-requests/2.11.0
  X-Auth-Token: [REDACTED]

  HTTP/1.1 500 Internal Server Error
  Date: Tue, 16 Aug 2016 22:07:09 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-02313f77-63c6-4aa8-a87e-e3d2a13ad6b7
  Content-Length: 143
  Connection: close
  Content-Type: application/json

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}

  =

  I've marked this as a security issue as a precaution in case it turns
  out that there is a more serious vulnerability underlying these
  errors. We have no reason to suspect that there is a greater
  vulnerability at this time, but given the many endpoints this seems to
  affect, I figured caution was worthwhile since this may be a
  framework-wide issue. Feel free to make this public if it is
  determined not to be security-impacting.
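
  For reference, a small reproduction sketch based on the captured requests
  above (host, port and token are placeholders for a real deployment):

    # Expected: a 4xx for a bogus ID; the bug is that a 500 comes back instead.
    import requests

    url = "http://keystone.example.com:5000/v3/policies/..%c0%af"
    resp = requests.delete(url, headers={"X-Auth-Token": "REDACTED",
                                         "Accept": "application/json"})
    print(resp.status_code)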

  Here is a (possibly incomplete) list of affected endpoints. Inserting
  the string "..%c0%af" in any or all of the spots labeled "HERE" should
  yield a 500 error. As you can see, virtually all v3 endpoints exhibit
  this behavior.

  =

  [GET|PATCH|DELETE] /v3/endpoints/[HERE]

  [GET|PATCH]   /v3/domains/[HERE]
  GET   /v3/domains/[HERE]/groups/[HERE]/roles
  [HEAD|PUT|DELETE] /v3/domains/[HERE]/groups/[HERE]/roles/[HERE]
  GET   /v3/domains/[HERE]/users/[HERE]/roles
  [HEAD|DELETE] /v3/domains/[HERE]/users/[HERE]/roles/[HERE]

  [GET|PATCH|DELETE] /v3/groups/[HERE]
  [HEAD|PUT|DELETE]  /v3/groups[HERE]/users/[HERE]

  [POST|DELETE] /v3/keys/[HERE]

  [GET|PATCH|DELETE] /v3/policies/[HERE]
  [GET|PUT|DELETE]   /v3/policies/[HERE]/OS-ENDPOINT-POLICY/endpoints/[HERE]
  [GET|HEAD] /v3/policies/[HERE]/OS-ENDPOINT-POLICY/policy
  [GET|PUT|DELETE]   /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE]
  [PUT|DELETE]   /v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/[HERE]
  [GET|PUT|DELETE]   
/v3/policies/[HERE]/OS-ENDPOINT-POLICY/services/regions/[HERE]

  [GET|PATCH|DELETE] /v3/projects/[HERE]
  [DELETE|PATCH] /v3/projects/[HERE]/cascade
  GET/v3/projects/[HERE]/groups/[HERE]/roles
  GET/v3/projects/[HERE]/users/[HERE]/roles
  [HEAD|PUT|DELETE]  /v3/projects/[HERE]/groups/[HERE]/roles/[HERE]

  [GET|PATCH|DELETE] /v3/regions/[HERE]

  [PA

[Yahoo-eng-team] [Bug 1611746] Re: rpc callback framework doesn't push context

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611746

Title:
  rpc callback framework doesn't push context

Status in neutron:
  Won't Fix

Bug description:
  The logic to push objects onto the wire to the agents
  (neutron/api/rpc/handlers/resources_rpc.py) doesn't send the context
  with it.

  This makes it difficult to trace a request-ID all of the way through
  to actions on the agent. It would be nice if we sent the context to
  preserve the request-ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611746/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1610924] Re: Add '--from-port-id' for port-create

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1610924

Title:
  Add '--from-port-id'  for port-create

Status in neutron:
  Won't Fix

Bug description:
  For admins, there is a need to clone a port, and I think it would be
  convenient for the port-create command to accept a '--from-port-id'
  parameter.

  Use cases
  =
  In certain commercial scenarios, such as data-center deployments using
  ZTE's TECS, it has often been requested that a port be cloned from an
  existing port. The new port inherits the relevant properties of the
  original, such as the parent network, MTU, QoS policy, security group, etc.

  Current behaviour:

  Currently, the neutron port-create command has no such parameter.

  I think it should be:
  =
  [root@localhost devstack]# neutron port-create clone_one --from-port-id 394d5dc8-57e1-4bb6-8e05-9f7965fe57ff
  Created a new port inherited from 394d5dc8-57e1-4bb6-8e05-9f7965fe57ff:
  +------------------------+-----------------------------------------------------------------------------------+
  | Field                  | Value                                                                             |
  +------------------------+-----------------------------------------------------------------------------------+
  | admin_state_up         | True                                                                              |
  | allowed_address_pairs  |                                                                                   |
  | binding:host_id        |                                                                                   |
  | binding:profile        | {}                                                                                |
  | binding:vif_details    | {}                                                                                |
  | binding:vif_type       | unbound                                                                           |
  | binding:vnic_type      | normal                                                                            |
  | created_at             | 2016-08-08T18:08:13                                                               |
  | description            |                                                                                   |
  | device_id              |                                                                                   |
  | device_owner           |                                                                                   |
  | extra_dhcp_opts        |                                                                                   |
  | fixed_ips              | {"subnet_id": "fd8cb18b-842d-44f4-97c5-2bc110ff36a1", "ip_address": "10.10.0.9"} |
  | id                     | 52a75cf5-8447-48c0-a21a-34acf3baa278                                              |
  | mac_address            | fa:16:3e:44:74:ac                                                                 |
  | name                   |                                                                                   |
  | network_id             | d9da67c4-42c5-4db5-be3e-51376b310ae7                                              |
  | port_security_enabled  | True                                                                              |
  | security_groups        | 8acc88a6-f4af-478d-963f-f35d8548e7f0                                              |
  | status                 | DOWN                                                                              |
  | tenant_id              | 48f4bc5cdb8d44a5a41b2e719c1e54ee                                                  |
  | updated_at             | 2016-08-08T18:08:13                                                               |
  +------------------------+-----------------------------------------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1610924/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609863] Re: neutron-server can't be started as a single process

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609863

Title:
  neutron-server can't be started as a single process

Status in neutron:
  Won't Fix

Bug description:
  When all workers are set to 0, neutron-server now starts at least 3
  processes instead of a single one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609863/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608475] Re: Live Migration Error on Mitaka release

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608475

Title:
  Live Migration Error on Mitaka release

Status in neutron:
  Won't Fix

Bug description:
  While live migrating from one compute node to another compute node in the
  Mitaka release, the operation returns SUCCESS but the host system remains
  the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608475/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606895] Re: DHCP offer wrong router

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606895

Title:
  DHCP offer wrong router

Status in neutron:
  Won't Fix

Bug description:
  Hello, 
  I have a public network with 3 sub-nets and associated gateways: 
  public3   XX.0/27     IPv4   XX.30
  public    XX.64/28    IPv4   XX.78
  public2   XX.160/28   IPv4   XX.174

  From time to time, I saw that on an instantiated VM the default route to
  the GW was missing.

  Checking the DHCP answer, I noticed the proposed router was the wrong
  one.

  We get the .174 instead of .78

  see below DHCP offer :

  Bootstrap Protocol (ACK)
  Message type: Boot Reply (2)
  Hardware type: Ethernet (0x01)
  Hardware address length: 6
  Hops: 0
  Transaction ID: 0x70542239
  Seconds elapsed: 0
  Bootp flags: 0x (Unicast)
  0...    = Broadcast flag: Unicast
  .000    = Reserved flags: 0x
  Client IP address: 0.0.0.0 (0.0.0.0)
  Your (client) IP address: XX.XX.XX.66 (XX.XX.XX.66)
  Next server IP address: XX.XX.XX.65 (XX.XX.XX.65)
  Relay agent IP address: 0.0.0.0 (0.0.0.0)
  Client MAC address: fa:16:3e:9c:ea:c4 (fa:16:3e:9c:ea:c4)
  Client hardware address padding: 
  Server host name not given
  Boot file name not given
  Magic cookie: DHCP
  Option: (53) DHCP Message Type (ACK)
  Length: 1
  DHCP: ACK (5)
  Option: (54) DHCP Server Identifier
  Length: 4
  DHCP Server Identifier: XX.XX.XX.65 (XX.XX.XX.65)
  Option: (51) IP Address Lease Time
  Length: 4
  IP Address Lease Time: (86400s) 1 day
  Option: (58) Renewal Time Value
  Length: 4
  Renewal Time Value: (43200s) 12 hours
  Option: (59) Rebinding Time Value
  Length: 4
  Rebinding Time Value: (75600s) 21 hours
  Option: (1) Subnet Mask
  Length: 4
  Subnet Mask: 255.255.255.240 (255.255.255.240)
  Option: (28) Broadcast Address
  Length: 4
  Broadcast Address: XX.XX.XX.79 (XX.XX.XX.79)
  Option: (15) Domain Name
  Length: 14
  Domain Name: openstacklocal
  Option: (12) Host Name
  Length: 18
  Host Name: host-XX-XX-XX-66
  Option: (3) Router
  Length: 4
  Router: XX.XX.XX.174 (XX.XX.XX.174)
  Option: (121) Classless Static Route
  Length: 32
  Subnet/MaskWidth-Router: 169.254.169.254/32-XX.XX.XX.161
  Subnet/MaskWidth-Router: XX.XX.XX.0/27-0.0.0.0
  Subnet/MaskWidth-Router: XX.XX.XX.64/28-0.0.0.0
  Subnet/MaskWidth-Router: default-XX.XX.XX.174
  Option: (6) Domain Name Server
  Length: 8
  Domain Name Server: XX
  Domain Name Server: XX
  Option: (255) End
  Option End: 255

  
  Could this be an issue related to multiple subnets?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606895/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605271] Re: Enable multinode grenade gate for Linuxbridge

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605271

Title:
  Enable multinode grenade gate for Linuxbridge

Status in neutron:
  Won't Fix

Bug description:
  OVS is covered in gate, we should do the same for Linuxbridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605271/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603292] Re: Neutron network tags should not be empty string

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603292

Title:
  Neutron network tags should not be empty string

Status in neutron:
  Fix Released

Bug description:
  Currently neutron network tags can be an empty string, but I think there is
  no use case for an empty-string tag, so we should add a validation check
  for tags.
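
  For illustration, a minimal sketch of the kind of check being suggested
  (not the actual Neutron validation code):

    # Reject tags that are empty or contain only whitespace.
    def validate_tag(tag):
        if not isinstance(tag, str) or not tag.strip():
            raise ValueError("Tag must be a non-empty, non-blank string")
        return tag

    validate_tag("test_tag")   # ok
    validate_tag("   ")        # raises ValueError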

  root@server201:~# neutron tag-add --resource-type network --resource test 
--tag 'test_tag'
  root@server201:~# neutron tag-add --resource-type network --resource test 
--tag '   '
  root@server201:~# neutron net-show test
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2016-07-15T01:45:51  |
  | description   |  |
  | id| f1060382-c7fa-43d5-a214-e8525184e7f0 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | test |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 26   |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tags  |  |
  |   | test_tag |
  | tenant_id | 9e211e5ad3c0407aaf6c5803dc307c27 |
  | updated_at| 2016-07-15T01:45:51  |
  +---+--+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603292/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600774] Re: lookup DNS and use it before creating a new one

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600774

Title:
  lookup DNS  and use it before creating a new one

Status in neutron:
  Won't Fix

Bug description:
  In some deployments DNS is already pre-populated for every IP. To make
  Neutron use this DNS name, the dns_name attribute needs to be filled in.
  It would be more straightforward for Neutron to provide a way to fill in
  this dns_name automatically by doing a reverse DNS lookup.
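
  For illustration, a minimal sketch of the reverse-lookup idea (not Neutron
  code); it derives a dns_name candidate from a port's fixed IP via a PTR
  lookup:

    import socket

    def reverse_dns_name(ip_address):
        try:
            hostname, _aliases, _addrs = socket.gethostbyaddr(ip_address)
            return hostname.split('.')[0]   # unqualified host name
        except socket.herror:
            return None                     # no PTR record; leave dns_name unset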

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1600774/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600733] Re: MechanismDriverError/sqlite3.OperationalError: no such table: ml2_geneve_allocations

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600733

Title:
  MechanismDriverError/sqlite3.OperationalError: no such table:
  ml2_geneve_allocations

Status in neutron:
  Invalid

Bug description:
  The Openstack Interoperability Lab hit this error: "OperationalError:
  (sqlite3.OperationalError) no such table: ml2_geneve_allocations [SQL:
  u'SELECT ml2_geneve_allocations.geneve_vni AS
  ml2_geneve_allocations_geneve_vni, ml2_geneve_allocations.allocated AS
  ml2_geneve_allocations_allocated \nFROM ml2_geneve_allocations']"

  This manifests itself in the console as "NeutronClientException:
  500-{u'NeutronError': {u'message': u'create_subnet_postcommit
  failed.', u'type': u'MechanismDriverError', u'detail': u''}}"

  We hit this whilst using the following components:

  Block Storage      cinder-iscsi
  Compute            nova-kvm
  Database           mysql
  Image Storage      glance-swift
  Openstack Version  mitaka
  SDN                neutron-calico
  Ubuntu Version     xenial

  The following traceback was found in /neutron-
  api_*/var/log/neutron/neutron-server.log

  
  2016-07-10 00:45:36.418 15401 ERROR neutron.service [-] Unrecoverable error: 
please check log for details.
  2016-07-10 00:45:36.418 15401 ERROR neutron.service Traceback (most recent 
call last):
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/neutron/service.py", line 107, in serve_wsgi
  2016-07-10 00:45:36.418 15401 ERROR neutron.service service.start()
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/neutron/service.py", line 80, in start
  2016-07-10 00:45:36.418 15401 ERROR neutron.service self.wsgi_app = 
_run_wsgi(self.app_name)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/neutron/service.py", line 234, in _run_wsgi
  2016-07-10 00:45:36.418 15401 ERROR neutron.service app = 
config.load_paste_app(app_name)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/neutron/common/config.py", line 290, in 
load_paste_app
  2016-07-10 00:45:36.418 15401 ERROR neutron.service app = 
loader.load_app(app_name)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/oslo_service/wsgi.py", line 353, in load_app
  2016-07-10 00:45:36.418 15401 ERROR neutron.service return 
deploy.loadapp("config:%s" % self.config_path, name=name)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2016-07-10 00:45:36.418 15401 ERROR neutron.service return loadobj(APP, 
uri, name=name, **kw)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2016-07-10 00:45:36.418 15401 ERROR neutron.service return 
context.create()
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2016-07-10 00:45:36.418 15401 ERROR neutron.service return 
self.object_type.invoke(self)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2016-07-10 00:45:36.418 15401 ERROR neutron.service **context.local_conf)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2016-07-10 00:45:36.418 15401 ERROR neutron.service val = callable(*args, 
**kw)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 28, in urlmap_factory
  2016-07-10 00:45:36.418 15401 ERROR neutron.service app = 
loader.get_app(app_name, global_conf=global_conf)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-07-10 00:45:36.418 15401 ERROR neutron.service name=name, 
global_conf=global_conf).create()
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
  2016-07-10 00:45:36.418 15401 ERROR neutron.service return 
self.object_type.invoke(self)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2016-07-10 00:45:36.418 15401 ERROR neutron.service **context.local_conf)
  2016-07-10 00:45:36.418 15401 ERROR neutron.service   File 
"/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, in fix_call
  2016-07-10 00:45:36.418 15401 ERROR neutron.service val = callable(*args

[Yahoo-eng-team] [Bug 1599488] Re: [RFE] Enhance Quota API calls to return resource usage per tenant

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599488

Title:
  [RFE] Enhance Quota API calls to return resource usage per tenant

Status in neutron:
  Fix Released

Bug description:
  The existing Neutron Quota API call [1] misses the option to fetch
  'in_use' values for each type of quotable resources, alongside with
  their quota limits. Consequently, Horizon web app has to request a
  list of every resource that is quotable [2], which is pretty
  inefficient. The desired feature on the Neutron side would be to
  support 'in_use' return values in call [1], so that it behaves similar
  to [3, 4]. Whether or not the 'reserved' return values (again, see [3,
  4]) should be supported in the same call is out of the scope of this
  particular feature request, but it certainly won't hurt if reserving
  quotable resources makes sense for Neutron.

  
  [1] 
http://developer.openstack.org/api-ref-networking-v2-ext.html#listQuotasForTenant
  [2] 
https://github.com/openstack/horizon/blob/10.0.0.0b1/openstack_dashboard/usage/quotas.py#L313-L336
  [3] http://developer.openstack.org/api-ref-blockstorage-v2.html#showQuota
  [4] http://developer.openstack.org/api-ref-compute-v2.1.html#listDetailQuotas
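
  For illustration, a sketch of the kind of response shape being requested,
  modelled on the Cinder/Nova "detail" quota calls in [3] and [4]; the actual
  Neutron field names were still to be defined:

    # Illustrative response shape only, expressed as a Python literal.
    desired_response = {
        "quota": {
            "network": {"limit": 10, "in_use": 3, "reserved": 0},
            "subnet":  {"limit": 10, "in_use": 3, "reserved": 0},
            "port":    {"limit": 50, "in_use": 11, "reserved": 0},
        }
    }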

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1599488/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596813] Re: stable/mitaka branch creation request for networking-midonet

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596813

Title:
  stable/mitaka branch creation request for networking-midonet

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please cut stable/mitaka branch for networking-midonet
  on commit 77e588761736750725cc96d6a766cd751a3978a6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596813/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595250] Re: ml2 Use None for portbinding.host instead of an empty String

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595250

Title:
  ml2 Use None for portbinding.host instead of an empty String

Status in neutron:
  Invalid

Bug description:
  It seems ML2 port binding uses '' to indicate that the port is bound to no
  host, while before ML2 "None" was used. This leads to some strange checks
  in the code, like [1].

  This bug is to clean this up internally. The API should still take both
  values, an empty string and None, but some code at the API layer should
  normalize that to None. In addition, the values in the ml2_portbinding
  (and DVR portbinding) tables need to be migrated.

  
  Another related patch that introduced None values for ML2 portbinding as
  well was: https://review.openstack.org/#/c/181867/


  
  [1] https://review.openstack.org/#/c/320657/1/neutron/db/ipam_backend_mixin.py
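
  For illustration, a sketch of the API-layer normalization suggested above
  (not the actual Neutron code): accept both '' and None from callers, store
  only None internally:

    def normalize_binding_host(host):
        """Treat an empty or whitespace-only host as 'unbound' (None)."""
        if host is None:
            return None
        host = host.strip()
        return host or None

    assert normalize_binding_host("") is None
    assert normalize_binding_host("  ") is None
    assert normalize_binding_host("compute-1") == "compute-1"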

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595250/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594439] Re: Bad initialization sequence in Neutron agents (and maybe somewhere else)

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594439

Title:
  Bad initialization sequence in Neutron agents (and maybe somewhere
  else)

Status in neutron:
  Won't Fix

Bug description:
  TL;DR: running threads that modify global state (which can be any RPC
  calls, or any other library calls) in the background while forking happens
  may cause weird hard-to-debug errors, so background jobs should be
  initialized after forking (or, even better, in a separate process).

  Currently at least the metadata and l3 agents start background threads
  that periodically report state before forking and running the main loop in
  the child workers. Those threads can modify global state so that it is
  inconsistent at the time of forking, which will lead to bad service
  behavior. In an ideal world the main process shouldn't do anything except
  manage child processes, to avoid any global state leakage into child
  processes (open fds, locked locks, etc).

  This bug was uncovered during an investigation into a weird, seemingly
  unrelated error in the grenade job in Ann's CR that was switching to the
  new oslo.db EngineFacade [0]. Symptoms were: SSHTimeout while the
  instance's cloud-init is busy trying to get metadata and failing. In the
  q-meta logs we noticed that there are no INFO messages about incoming HTTP
  requests, but there are some DEBUG ones, which means that requests are
  reaching the agent but are not being responded to. Digging deeper we
  noticed that in normal operation the metadata agent should:

  - receive request;
  - do RPC call to neutron-server;
  - do HTTP request to Nova;
  - send response.

  There were no RPC CALLs in the logs, only CASTs from state reporting and
  the very first CALL for state reporting.

  Since it's hard to reproduce gate jobs, especially multinode ones,
  we've created another CR [1] that added tracing of every Python LOC to
  see what really happens. You can find the long log with all the
  tracing at [2] or in attachment (logs don't live forever). It
  uncovered the following chain of events:

  - main thread in the main process starts a background thread for state
    reporting;
  - some time later that thread starts reporting and wants to do the first
    CALL (it does a CALL once and then it does CASTs);
  - to get 'reply_q' (essentially, a connection shared for replies IIUC), it
    acquires a lock;
  - since there are no connections available at that time (it's the first
    time RPC is used), oslo_messaging starts connecting to RabbitMQ;
  - the background thread yields execution to the main thread;
  - the main thread forks a bunch of WSGI workers;
  - in the WSGI workers, when requests come in, the handler tries to do a
    CALL to neutron-server;
  - to get 'reply_q' it tries to acquire the lock, but it has already been
    "taken" by the background thread in the main process;
  - it hangs forever, which can be seen in the Guru Meditation Report.
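
  For illustration, a standalone sketch of the hazard described in this chain
  of events (not Neutron code): a lock held by a background thread at fork()
  time is inherited in the "locked" state by the child, so the child blocks
  forever when it tries to take it. Running this deliberately hangs:

    import os
    import threading
    import time

    lock = threading.Lock()

    def background_reporter():
        with lock:           # parent's background thread holds the lock...
            time.sleep(60)   # ...while doing slow I/O (e.g. a first RPC connection)

    threading.Thread(target=background_reporter, daemon=True).start()
    time.sleep(0.1)          # let the reporter grab the lock

    pid = os.fork()          # workers are forked while the lock is held
    if pid == 0:
        lock.acquire()       # child: hangs forever; nothing here releases it
        print("child never gets here")
    else:
        os.waitpid(pid, 0)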

  There are several problems here (including eventlet not having fork-aware
  locks), but the one fix that addresses them all is to start such background
  threads after all forking happens. I've published CR [3] to verify that
  changing the initialization order fixes this issue, and it did.

  Note that, from what I've been told, forks can still happen if a child
  worker unexpectedly dies and the main process re-forks it. To properly fix
  this issue we should not do anything in the main process that can spoil
  global state that can influence a child process. It means that we'll need
  to either run state reporting in a separate process or have an isolated
  oslo_messaging environment (context? I am not an expert in oslo_messaging)
  for it.

  [0] https://review.openstack.org/312393
  [1] https://review.openstack.org/331485
  [2] 
http://logs.openstack.org/85/331485/5/check/gate-grenade-dsvm-neutron-multinode/1c65056/logs/new/screen-q-meta.txt.gz
  [3] https://review.openstack.org/331672

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594439/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593793] Re: [RFE] No notification on floating ip status change

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593793

Title:
  [RFE] No notification on floating ip status change

Status in neutron:
  Won't Fix
Status in OpenStack Searchlight:
  New

Bug description:
  Tested on neutron/master in devstack.

  When I associate or disassociate a floating IP I get a notification
  indicating the new fixed IP and port association, but the status still
  indicates the current value (so for associate, the notification
  contains "fixed_ip_address":"a.b.c.d", "status": "DOWN") because
  there's a short latency time before the FIP is marked as ACTIVE.

  I see the status change after a very short period after in the q-svc
  log:

  New status for floating IP 8470e622-0f3b-419f-ac21-deed658d3260:
  ACTIVE

  There's no notification emitted representing this state change,
  however. The same is true on disassociate; the port and fixed IP are
  cleared but the status is indicated ACTIVE and there's no notification
  marking it DOWN.

  E.g.
  $ neutron floatingip-create 0e97853d-3ac0-4c08-a8b3-f6bfa443dd24

  {"event_type": "floatingip.create.end", "payload": {"floatingip":
  {"router_id": null, "status": "DOWN", "description": "", "tenant_id":
  "c1b853bf78bc43c69daa8d42cb9fb07f", "floating_network_id":
  "0e97853d-3ac0-4c08-a8b3-f6bfa443dd24", "fixed_ip_address": null,
  "floating_ip_address": "172.25.0.11", "port_id": null, "id":
  "6442165e-4caa-437c-9377-196f4db638f9"}}, "publisher_id":
  "network.devstack", "ctxt": {...}, "metadata": {"timestamp":
  "2016-06-17 16:44:20.600943", "message_id":
  "547b0f34-1bc0-4fbc-8e58-a0c2d2dc1751"}}

  $ neutron floatingip-associate 6442165e-4caa-437c-9377-196f4db638f9
  8068c884-1ab4-42d9-97d0-f65848fca2b0

  {"event_type": "floatingip.update.end", "payload": {"floatingip":
  {"router_id": "390ab366-0eee-46e9-a07f-f49cf0b31652", "status":
  "DOWN", "description": "", "tenant_id":
  "c1b853bf78bc43c69daa8d42cb9fb07f", "floating_network_id":
  "0e97853d-3ac0-4c08-a8b3-f6bfa443dd24", "fixed_ip_address":
  "172.40.0.4", "floating_ip_address": "172.25.0.11", "port_id":
  "8068c884-1ab4-42d9-97d0-f65848fca2b0", "id":
  "6442165e-4caa-437c-9377-196f4db638f9"}}, "publisher_id":
  "network.devstack", "ctxt": {...}, "metadata": {"timestamp":
  "2016-06-17 16:44:50.149631", "message_id":
  "f6f7ec1f-6363-49b0-b10f-84fd81e0a506"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593793/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592918] Re: [RFE] Adding Port Statistics to Neutron Metering Agent

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592918

Title:
  [RFE] Adding Port Statistics to Neutron Metering Agent

Status in neutron:
  Won't Fix

Bug description:
  [Existing Implementation]
  Currently the Neutron Metering Agent only measures the bandwidth usage of the 
Routers. 

  [Proposal]
  The Neutron Metering Agent will be extended to also support Port statistics.

  {collisions=0, 
  rx_bytes=874, 
  rx_crc_err=0, 
  rx_dropped=0, 
  rx_errors=0, 
  rx_frame_err=0, 
  rx_over_err=0, 
  rx_packets=11, 
  tx_bytes=70, 
  tx_dropped=0, 
  tx_errors=0, 
  tx_packets=1}

  In this implementation, the support for Port statistics will be added
  using the Open vSwitch tools.

  [Suggesting Initial Implementation]
  - Add a method _notify_port_stats which collects port statistics from ovsdb
  and sends the notification.
  - The method _notify_port_stats is called from _metering_loop to keep
  sending the port statistics.

  Looking forward to feedback and suggestions.
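
  For illustration, a rough sketch of what the proposed port-statistics
  collection could look like; the real agent would use its OVSDB API and an
  oslo.messaging notifier rather than subprocess, and the port name below is
  a placeholder:

    import json
    import subprocess

    def get_port_stats(port_name):
        out = subprocess.check_output(
            ["ovs-vsctl", "--format=json", "--columns=statistics",
             "list", "Interface", port_name])
        data = json.loads(out)
        # The "statistics" cell comes back as ["map", [[key, value], ...]]
        return dict(data["data"][0][0][1])

    stats = get_port_stats("tap1234")  # e.g. {'rx_bytes': 874, 'tx_packets': 1, ...}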

  [Benefits]
  These metrics can further be exposed to Ceilometer and can be helpful for 
metering and monitoring purposes.  

  [Related Information]
  https://wiki.openstack.org/wiki/Internship_ideas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592918/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586352] Re: [RFE] Grouping remote_ip_prefix by ipset

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586352

Title:
  [RFE] Grouping remote_ip_prefix by ipset

Status in neutron:
  Won't Fix

Bug description:
  This RFE is requesting a feature for grouping IP addresses of
  remote_ip_prefix using ipset to improve network performance.

  [Background]
  To allow access from/to specific IP addresses, remote_ip_prefix can be used
  in a security group rule. This is used:
  o To allow access from/to machines outside the OpenStack environment.
  o To allow access between datacenters in a multi-site environment.

  [Existing problem]
  For the usage above, we have to specify each IP address (CIDR) as a
  remote_ip_prefix one by one, and the number of security group rules grows.
  This causes network performance degradation.

  We had the same problem with remote_group_id, and it was solved by the
  enable_ipset feature [1]. This feature improves network performance by
  grouping IP addresses which belong to the same security group. I want
  to extend this feature to remote_ip_prefix.

  [Proposal]
  To solve the problem above, I propose the following feature.

  1) Introduce a new feature to group many CIDRs into one set and specify the
  set as remote_ip_prefix collectively.
  2) Improve neutron and the L2 agent so that the L2 agent updates iptables
  using ipset for this set of CIDRs.

  With this feature, we can reduce the number of security group rules when
  many remote_ip_prefix values are used.

  When we specify IP address to allow access, we need to do like this:

  neutron security-group-rule-create --remote-ip-prefix 192.168.100.10/32
  neutron security-group-rule-create --remote-ip-prefix 192.168.101.15/32
  neutron security-group-rule-create --remote-ip-prefix 192.168.102.20/32
  ...

  Therefore many security group rules are generated, which causes degradation
  of network performance. I propose a new feature to group these CIDRs into
  one set and specify this set as remote_ip_prefix.

  The L2 agent then converts this CIDR set to an ipset group and applies it
  to iptables, in the same manner as converting remote_group_id to an ipset
  group.
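
  For illustration, a sketch of the agent-side effect being proposed: one
  ipset holding the remote CIDRs and a single iptables rule matching it,
  instead of one rule per CIDR (set and chain names are placeholders, not the
  real Neutron naming scheme):

    import subprocess

    def build_remote_prefix_set(set_name, cidrs, chain):
        subprocess.check_call(["ipset", "-exist", "create", set_name, "hash:net"])
        for cidr in cidrs:
            subprocess.check_call(["ipset", "-exist", "add", set_name, cidr])
        subprocess.check_call([
            "iptables", "-A", chain,
            "-m", "set", "--match-set", set_name, "src",
            "-j", "RETURN"])   # accept traffic whose source is in the set

    build_remote_prefix_set(
        "allowed-remotes",
        ["192.168.100.10/32", "192.168.101.15/32", "192.168.102.20/32"],
        "example-sg-chain")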

  [1]
  
https://specs.openstack.org/openstack/neutron-specs/specs/juno/add-ipset-to-security.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586352/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1732458] Re: deleted_ports memory leak in dhcp agent

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732458

Title:
  deleted_ports memory leak in dhcp agent

Status in neutron:
  Fix Released

Bug description:
  The "deleted_ports" member variable in the DHCP NetworkCache [1] can
  potentially leak memory.  As ports are deleted uuid values are added
  to this set but they are never removed.  If ports are continuously
  added and deleted on a network this set will grow unbounded.

  [1] neutron.agent.dhcp.agent.NetworkCache.deleted_ports
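
  For illustration, a sketch of one possible mitigation (not the merged fix):
  keep the deleted-port ids in a bounded, ordered structure and evict the
  oldest entries past a limit:

    from collections import OrderedDict

    class BoundedDeletedPorts(object):
        def __init__(self, max_size=10000):
            self._ids = OrderedDict()
            self._max_size = max_size

        def add(self, port_id):
            self._ids[port_id] = None
            self._ids.move_to_end(port_id)
            while len(self._ids) > self._max_size:
                self._ids.popitem(last=False)   # drop the oldest id

        def __contains__(self, port_id):
            return port_id in self._ids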

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732458/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716043] Re: no CLI for quota details

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716043

Title:
  no CLI for quota details

Status in neutron:
  Fix Released

Bug description:
  Patch [1] added the quota detail API.
  However to the best of my knowledge this support didn't make it into the CLI 
(neutron neutronclient or OSC).

  
  This bug is to track the integration of quota details into the CLI.

  
  [1] https://review.openstack.org/#/c/383673/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716043/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711165] Re: Error during full sync with QoS

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1711165

Title:
  Error during full sync with QoS

Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Error when specifying qos extension driver and service plugin:

  2017-08-15 17:47:41.859 64185 INFO neutron.wsgi [-] 192.168.24.3 "OPTIONS / 
HTTP/1.0" status: 200  len: 249 time: 0.0029640
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
[req-5a639c24-7563-4b14-9f42-7d151264fe84 - - - - -] Failed during periodic 
task operation full_sync.: TypeError: get_objects() argument after ** must be a 
mapping, not NoneType
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
Traceback (most recent call last):
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File 
"/usr/lib/python2.7/site-packages/networking_odl/journal/periodic_task.py", 
line 61, in _execute_op
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
operation(context=context)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File "/usr/lib/python2.7/site-packages/networking_odl/journal/full_sync.py", 
line 84, in full_sync
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
collection_name)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File "/usr/lib/python2.7/site-packages/networking_odl/journal/full_sync.py", 
line 120, in _sync_resources
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
resources = obj_getter(context)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File "/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", 
line 62, in inner_filter
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
result = f(*args, **kwargs)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File "/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py", 
line 48, in inner
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
result = f(*args, **kwargs)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task   
File "/usr/lib/python2.7/site-packages/neutron/services/qos/qos_plugin.py", 
line 267, in get_policies
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
**filters)
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task 
TypeError: get_objects() argument after ** must be a mapping, not NoneType
  2017-08-15 17:47:42.007 63696 ERROR networking_odl.journal.periodic_task
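
  The failure reduces to unpacking None with "**". A sketch of the defensive
  pattern (illustration only, not the actual patch) is to default a None
  filters argument to an empty dict before unpacking it:

    def get_objects(context, **filters):
        return []   # stand-in for the real object query

    def get_policies(context, filters=None):
        filters = filters or {}   # avoids "argument after ** must be a mapping, not NoneType"
        return get_objects(context, **filters)

    get_policies(context=None)    # works even when callers pass filters=None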

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1711165/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707156] Re: [RFE] Adoption of "os-vif" in Neutron

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707156

Title:
  [RFE] Adoption of "os-vif" in Neutron

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  From `os-vif Nova SPEC`_, whenever a new Neutron mechanism driver is created, 
this results in the definition of a new VIF type. This situation generates two 
problems:

  * Nova developers need to maintain the plug/unplug code in the VIF drivers,
  which is defined by the needs of the Neutron mechanism.
  * The format of the data passed between Nova and Neutron for the VIF port 
binding is fairly loosely defined (no versioning or formal definition).

  "os-vif" is being adopted progressively in Nova. As said before, "os-
  vif" is in charge of the plugging and unplugging operations for the
  existing VIF types.

  
  [Proposal]
  To adopt "os-vif" project in Neutron, decoupling any plug/unplug operations 
from Neutron.

  This RFE could be the container smaller contributions migrating step
  by step al VIF types in Neutron.

  This topic will be discussed during the Denver PTG [2].

  The proposed solution (to be discussed) is to add a new class in each
  mech driver agent, leveraging "os-vif" directly in our agent's
  plugging logic.
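
  For illustration, a sketch of the kind of call the agent-side plugging
  logic would delegate to os-vif; field values below are placeholders and the
  exact object fields would follow the os-vif versioned objects:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id="11111111-2222-3333-4444-555555555555",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="66666666-7777-8888-9999-000000000000",
        address="fa:16:3e:00:00:10",
        vif_name="tap66666666-77",
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="66666666-7777-8888-9999-000000000000"))
    info = instance_info.InstanceInfo(uuid="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
                                      name="vm-1")

    os_vif.plug(port, info)   # os-vif selects the plugin for the VIF type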

  
  [Benefits]
  * Centralize the plug/unplug operations in one common place.
  * Provide Neutron with a way to define VIF types as required by Nova. The
  definitions contained in "os-vif" will be used by both projects.
  * Remove from Neutron any plug/unplug driver-specific operation, leaving
  these actions to "os-vif". Neutron is supposed to be an L2-L4 SDN
  controller.

  
  [References]
  [1] `os-vif Nova SPEC`_: https://review.openstack.org/#/c/287090/
  [2] https://etherpad.openstack.org/p/neutron-queens-ptg
  [3] Nova os-vif library: 
https://review.openstack.org/#/q/topic:bp/os-vif-library,n,z

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707156/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705719] Re: [RFE] QinQ network driver

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705719

Title:
  [RFE] QinQ network driver

Status in neutron:
  Won't Fix

Bug description:
  QinQ networks allow service providers to extend their clients' virtual
  broadcast domains across their sites without interfering with each other's
  VLANs. The s_tag is used to transport the traffic from the provider to the
  customer, and the c_tag is used for the customer's network.

  We can implement this by first refactoring the VLAN allocation logic and
  then extending it to handle a second layer of VLAN tagging, essentially
  replacing vlan_id with an s_tag:c_tag pair.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705719/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705536] Re: [RFE] L3-agent agent-backend ovs.

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705536

Title:
  [RFE] L3-agent agent-backend ovs.

Status in neutron:
  Won't Fix

Bug description:
  The use of linux network namespaces in the l3 agent routers causes a choke 
point for bandwidth on east/west and north/south traffic.
  In the case of east/west traffic, the source and destination interfaces are 
known to Neutron and could be routed using Open vSwitch if it is selected as 
the mechanism_driver for the L2-agent.
  This should allow the l3-agent to be compatible with DPDK and Windows.

  When using network namespaces with Open vSwitch to route an l3 ping packet:
  - arp from source vm -> tap1 (vlan tagging skipped) + broadcast to other ports
  - tap1-> kernel network stack
  - kernel sends arp reply tap1
  - tap1-> source vm (vlan tagging skipped)
  - icmp from source vm -> tap1(vlan tagging skipped)
  - kernel receives icmp on tap1 and sends an arp request to dest vm via
    tap2 (broadcast)
  - arp via tap2 -> dest vm (vlan tagging skipped)
  - dest vm replies -> tap2
  - kernel updates dest mac, decrements ttl and forwards the icmp packet to
    tap2
  - tap2 -> dest vm -> dest vm replies -> tap2 (vlan tagging skipped)
  - kernel updates dest mac, decrements ttl and forwards the icmp reply
    packet to tap1
  - tap1-> source vm

  When OpenFlow is used to route the same traffic:
  - arp from source vm -> arp rewritten to reply -> sent to source vm ( single 
openflow action).
  - icmp from source vm -> destination mac update, ttl decremented -> dest vm ( 
single openflow action)
  - icmp from dest vm -> destination mac update, ttl decremented -> source vm ( 
single openflow action)
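
  For illustration, a sketch of the single-flow routing action described
  above (MAC rewrite plus TTL decrement); bridge name, ports, MACs and
  addresses are placeholders:

    import subprocess

    flow = ("table=0,priority=100,ip,in_port=1,nw_dst=10.0.1.5,"
            "actions=mod_dl_src:fa:16:3e:00:00:01,"
            "mod_dl_dst:fa:16:3e:00:00:02,dec_ttl,output:2")
    subprocess.check_call(["ovs-ofctl", "add-flow", "br-int", flow])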

  Introducing a new agent_backend configuration would allow an operator
  to select which implementation is most suitable to their use case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705536/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1699938] Re: DB deadlock when bulk delete subnet

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1699938

Title:
  DB deadlock when bulk delete subnet

Status in neutron:
  Won't Fix

Bug description:
  My environment is stable/newton. I use Rally to do stability tests; the
  details are below:
  1. configure a network with 2 dhcp-agents
  2. configure Rally to do 40 bulk operations, each process being: create
     network -> create subnet -> delete subnet -> delete network

  I ran this 100 times in total; the success rate was 1.6%. Checking the
  neutron-server log I found many DB deadlock errors; deadlocks happened on
  5 tables: ipamallocations, ipamsubnets, standardattributes, ports,
  provisioningblocks.
  The neutron-server log is at http://paste.openstack.org/show/613359/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1699938/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1693240] Re: Support SRIOV VF VLAN Filtering

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1693240

Title:
  Support SRIOV VF VLAN Filtering

Status in neutron:
  Won't Fix

Bug description:
  a user wants to instantiate a vm on a sriov port and allow the vm to
  use a permitted list of vlans. The way we would do that is to use the
  existing trunk api to query subports to generate that list. Then take
  that list and provide it to VFd to configure the sriov capable NIC to
  do the filtering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1693240/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1736650] Re: linuxbrige manages non linuxbridge ports

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736650

Title:
  linuxbrige manages non linuxbridge ports

Status in neutron:
  Won't Fix

Bug description:
  In our ML2 environment we have 2 mech drivers, linuxbridge and
  midonet.

  We have linuxbridge and midonet networks bound to instances on the same
  compute nodes. All works well except the midonet ports get marked as
  DOWN. I've traced this back to the linuxbridge agent.

  It seems to mark the midonet ports as DOWN. I can see the midonet port
  IDs in the linuxbridge logs.

  Steps to reproduce:

  Config:

  [ml2]
  type_drivers=flat,midonet,uplink
  path_mtu=0
  tenant_network_types=midonet
  mechanism_drivers=linuxbridge,midonet

  
  Boot an instance with a midonet NIC; you will note the port is DOWN.
  Stop the linuxbridge agent and repeat; the port will be ACTIVE.
  Start the linuxbridge agent and existing midonet ports will change to DOWN.

  This is running stable/ocata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736650/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1732362] Re: legacy-cross-networking-midonet-python35 is broken

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732362

Title:
  legacy-cross-networking-midonet-python35 is broken

Status in neutron:
  Won't Fix

Bug description:
  Probably broken since the Zuul v3 transition.

  eg. http://logs.openstack.org/24/519124/13/check/legacy-cross-
  networking-midonet-python35/a880347/job-output.txt.gz

  2017-11-14 22:52:08.767923 | primary | + /usr/zuul-env/bin/zuul-cloner -m 
/tmp/tmp.YuKWO0QOEU --cache-dir /opt/git git://git.openstack.org 
openstack/networking-midonet openstack/requirements
  2017-11-14 22:52:08.867102 | primary | Workspace path set to: 
/home/zuul/workspace
  2017-11-14 22:52:08.867220 | primary | Mapping projects to workspace...
  2017-11-14 22:52:08.867265 | primary |   openstack/networking-midonet -> 
/home/zuul/workspace
  2017-11-14 22:52:08.867298 | primary |   openstack/requirements -> 
/tmp/tmp.yPCSBPcMc3
  2017-11-14 22:52:08.867331 | primary | Checking overlap in destination 
directories...
  2017-11-14 22:52:08.867353 | primary | Expansion completed.
  2017-11-14 22:52:08.867404 | primary | cp -dRl 
/home/zuul/src/git.openstack.org/openstack/networking-midonet/. 
/home/zuul/workspace
  2017-11-14 22:52:08.867454 | primary |
  2017-11-14 22:52:08.867490 | primary | **
  2017-11-14 22:52:08.867537 | primary | ERROR! 
/home/zuul/src/git.openstack.org/openstack/requirements not found
  2017-11-14 22:52:08.867574 | primary | In Zuul v3 all repositories used need 
to be declared
  2017-11-14 22:52:08.867609 | primary | in the 'required-projects' parameter 
on the job.
  2017-11-14 22:52:08.867633 | primary | To fix this issue, add:
  2017-11-14 22:52:08.867648 | primary |
  2017-11-14 22:52:08.867673 | primary |   openstack/requirements
  2017-11-14 22:52:08.867688 | primary |
  2017-11-14 22:52:08.867713 | primary | to 'required-projects'.
  2017-11-14 22:52:08.867727 | primary |
  2017-11-14 22:52:08.867767 | primary | While you're at it, it's worth noting 
that zuul-cloner itself
  2017-11-14 22:52:08.867807 | primary | is deprecated and this shim is only 
present for transition
  2017-11-14 22:52:08.867846 | primary | purposes. Start thinking about how to 
rework job content to
  2017-11-14 22:52:08.867881 | primary | just use the git repos that zuul will 
place into
  2017-11-14 22:52:08.867914 | primary | /home/zuul/src/git.openstack.org 
directly.
  2017-11-14 22:52:08.867946 | primary | **
  2017-11-14 22:52:08.867961 | primary |
  2017-11-14 22:52:08.872008 | primary | + cleanup
  2017-11-14 22:52:08.872094 | primary | + mkdir -p /home/zuul/workspace
  2017-11-14 22:52:08.873635 | primary | + rm -rf /tmp/tmp.YuKWO0QOEU 
/tmp/tmp.yPCSBPcMc3
  2017-11-14 22:52:09.329678 | primary | ERROR
  2017-11-14 22:52:09.329925 | primary | {
  2017-11-14 22:52:09.330020 | primary |   "delta": "0:00:00.123047",
  2017-11-14 22:52:09.330105 | primary |   "end": "2017-11-14 22:52:08.875417",
  2017-11-14 22:52:09.330191 | primary |   "failed": true,
  2017-11-14 22:52:09.330276 | primary |   "rc": 1,
  2017-11-14 22:52:09.330356 | primary |   "start": "2017-11-14 22:52:08.752370"
  2017-11-14 22:52:09.330435 | primary | }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732362/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1932093] Re: "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN Octavia provider on stable/train

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1932093

Title:
  "oslo_config.cfg.DuplicateOptError: duplicate option: host" using OVN
  Octavia provider on stable/train

Status in neutron:
  Fix Released

Bug description:
  Some recent changes to the networking-ovn repository have broken the
  in-tree OVN Octavia provider.  When starting the octavia-api
  process for tempest scenario tests we get:

  Jun 15 13:54:56.154717 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia.api.drivers.driver_factory [-] 
Unable to load provider driver ovn due to: duplicate option: host: 
oslo_config.cfg.DuplicateOptError: duplicate option: host
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: CRITICAL octavia [-] Unhandled error: 
octavia.common.exceptions.ProviderNotFound: Provider 'ovn' was not found.
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia Traceback (most recent call last):
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/opt/stack/octavia/octavia/api/drivers/driver_factory.py", line 44, in 
get_driver
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia invoke_on_load=True).driver
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/driver.py", line 61, in 
__init__
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia 
warn_on_missing_entrypoint=warn_on_missing_entrypoint
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/named.py", line 81, in 
__init__
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia verify_requirements)
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/extension.py", line 203, in 
_load_plugins
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia 
self._on_load_failure_callback(self, ep, err)
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/extension.py", line 195, in 
_load_plugins
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia verify_requirements,
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/named.py", line 158, in 
_load_one_plugin
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia verify_requirements,
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/stevedore/extension.py", line 223, in 
_load_one_plugin
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia plugin = ep.resolve()
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/usr/local/lib/python3.6/dist-packages/pkg_resources/__init__.py", line 2456, 
in resolve
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia module = 
__import__(self.module_name, fromlist=['__name__'], level=0)
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/opt/stack/networking-ovn/networking_ovn/octavia/ovn_driver.py", line 42, in 

  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia from networking_ovn.ovsdb 
import impl_idl_ovn
  Jun 15 13:54:56.163102 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/opt/stack/networking-ovn/networking_ovn/ovsdb/impl_idl_ovn.py", line 35, in 

  Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia from networking_ovn.ovsdb 
import ovsdb_monitor
  Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia   File 
"/opt/stack/networking-ovn/networking_ovn/ovsdb/ovsdb_monitor.py", line 32, in 

  Jun 15 13:54:56.164409 ubuntu-bionic-inap-mtl01-0025122610 
devstack@o-api.service[1675]: ERROR octavia

[Yahoo-eng-team] [Bug 1929832] Re: stable/ussuri py38 support for keepalived-state-change monitor

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1929832

Title:
  stable/ussuri py38 support for keepalived-state-change monitor

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Invalid
Status in neutron source package in Focal:
  Fix Released

Bug description:
  [Impact]
  Please see original bug description. Without this fix, the neutron-l3-agent 
is unable to teardown an HA router and leaves it partially configured on every 
node it was running on.

  [Test Plan]
  * deploy Openstack ussuri on Ubuntu Focal
  * enable L3 HA
  * create a router and vm on network attached to router
  * disable or delete the router and check for errors like the one below
  * ensure that the following line exists in /etc/neutron/rootwrap.d/l3.filters:

  kill_keepalived_monitor_py38: KillFilter, root, python3.8, -15, -9

  -

  The victoria release of Openstack received patch [1] which allows the
  neutron-l3-agent to SIGKILL or SIGTERM the keepalived-state-change
  monitor when running under py38. This patch is needed in Ussuri for
  users running with py38 so we need to backport it.

  The consequence of not having this is that you get the following when
  you delete or disable a router:

  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
[req-8c69af29-8f9c-4721-9cba-81ff4e9be92c - 9320f5ac55a04fb280d9ceb0b1106a6e - 
- -] Error while deleting router ab63ccd8-1197-48d0-815e-31adc40e5193: 
neutron_lib.exceptions.ProcessExecutionError: Exit code: 99; Stdin: ; Stdout: ; 
Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: kill -15 2516433 (no 
filter matched)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 512, in 
_safe_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self._router_removed(ri, router_id)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 548, in 
_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.router_info[router_id] = ri
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.force_reraise()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/six.py", line 703, in reraise
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent raise value
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/agent.py", line 545, in 
_router_removed
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent ri.delete()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/dvr_edge_router.py", line 236, 
in delete
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
super(DvrEdgeRouter, self).delete()
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 492, in 
delete
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
self.destroy_state_change_monitor(self.process_monitor)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/l3/ha_router.py", line 438, in 
destroy_state_change_monitor
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
pm.disable(sig=str(int(signal.SIGTERM)))
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/linux/external_process.py", line 
113, in disable
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
utils.execute(cmd, run_as_root=self.run_as_root)
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python3/dist-packages/neutron/agent/linux/utils.py", line 147, in 
execute
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent raise 
exceptions.ProcessExecutionError(msg,
  2021-05-26 02:11:44.653 3457514 ERROR neutron.agent.l3.agent 
n

[Yahoo-eng-team] [Bug 1905726] Re: Qos plugin performs too many queries

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905726

Title:
  Qos plugin performs too many queries

Status in neutron:
  Fix Released

Bug description:
  Whenever retrieving the port list with the QoS plugin enabled, Neutron
  performs about 10 DB queries per port, most of them QoS related:
  http://paste.openstack.org/raw/800461/

  For 1000 ports, we end up with 10 000 sequential DB queries. A simple
  "neutron port-list" or "nova list" command will exceed 1 minute, which
  is likely to hit timeouts.

  This seems to be the problem:
  
https://github.com/openstack/neutron/blob/17.0.0/neutron/db/db_base_plugin_v2.py#L1566-L1570

  For each of the retrieved ports, the plugins are then supposed to
  provide additional details, so for each port we get a certain number
  of extra queries.

  One idea would be to add a flag such as 'detailed' or
  'include_extensions' to 'get_ports' and then propagate it to
  '_make_port_dict' through the 'process_extensions' parameter. Another
  idea would be to let the plugins extend the query but that might be
  less feasible.
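
  A toy illustration of the first idea (plain Python, not neutron code):
  the extension hooks run per port only when the new flag is true, so a
  bulk listing that does not need QoS details can skip them.

    class Plugin(object):
        def __init__(self, extension_hooks):
            self._extension_hooks = extension_hooks    # e.g. the QoS hook

        def _make_port_dict(self, port_db, process_extensions=True):
            result = {'id': port_db['id'], 'name': port_db['name']}
            if process_extensions:
                for hook in self._extension_hooks:     # each hook may hit the DB
                    hook(result, port_db)
            return result

        def get_ports(self, port_rows, include_extensions=True):
            # propagate the flag so callers can avoid the per-port queries
            return [self._make_port_dict(row, include_extensions)
                    for row in port_rows]

    def qos_hook(result, port_db):
        result['qos_policy_id'] = port_db.get('qos_policy_id')

    plugin = Plugin([qos_hook])
    rows = [{'id': 'p1', 'name': 'a', 'qos_policy_id': 'qp1'}]
    print(plugin.get_ports(rows, include_extensions=False))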

  Worth mentioning that there were a couple of commits meant to reduce the 
number of queries but it's still excessive:
  https://review.opendev.org/c/openstack/neutron/+/667998
  https://review.opendev.org/c/openstack/neutron/+/667981/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905726/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1900763] Re: [OVN Octavia Provider] OVN provider status update failures can leave orphaned resources

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1900763

Title:
  [OVN Octavia Provider] OVN provider status update failures can leave
  orphaned resources

Status in neutron:
  Fix Released

Bug description:
  This is related to Octavia issue
  https://storyboard.openstack.org/#!/story/2008254

  When the OVN Octavia provider driver calls into octavia-lib to update
  the status of a load balancer, for example from
  helper:_update_status_to_octavia(), the call might fail:

  DEBUG ovn_octavia_provider.helper [-] Updating status to octavia: 
{'listeners': [{'id': '7033179d-2ddb-4714-9c06-b7d399498238', 
'provisioning_status>
  ERROR ovn_octavia_provider.helper [-] Error while updating the load balancer 
status: 'NoneType' object has no attribute 'update': octavia_lib.api.dr>
  ERROR ovn_octavia_provider.helper [-] Unexpected exception in 
request_handler: octavia_lib.api.drivers.exceptions.UpdateStatusError: ('The 
status up>
  ERROR ovn_octavia_provider.helper Traceback (most recent call last):
  ERROR ovn_octavia_provider.helper   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/helper.py", line 318, in 
_update_status_to_octavia
  ERROR ovn_octavia_provider.helper 
self._octavia_driver_lib.update_loadbalancer_status(status)
  ERROR ovn_octavia_provider.helper   File 
"/usr/local/lib/python3.8/dist-packages/octavia_lib/api/drivers/driver_lib.py", 
line 121, in update_loadbal>
  ERROR ovn_octavia_provider.helper raise 
driver_exceptions.UpdateStatusError(
  ERROR ovn_octavia_provider.helper 
octavia_lib.api.drivers.exceptions.UpdateStatusError: 'NoneType' object has no 
attribute 'update'

  This is failing because the listener associated with the load balancer
  was not found; its DB transaction was still in flight.  That is the
  related Octavia issue from above, but a fix for that alone will not
  solve the problem.

  A side-effect is this listener is now "stuck":

  $ openstack loadbalancer listener delete 7033179d-2ddb-4714-9c06-b7d399498238
  Load Balancer 2cc1d429-b176-48e5-adaa-946be2af0d51 is immutable and cannot be 
updated. (HTTP 409) (Request-ID: req-0e1e53ac-4db9-4779-b1f3-11210fe46f7f)

  The provider driver needs to retry the operation, typically even the
  very next call succeeds.
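
  A hedged sketch of such a retry; the driver_lib call and the exception
  type are taken from the traceback above, while the retry loop and its
  parameters are illustrative only:

    import time

    from octavia_lib.api.drivers import exceptions as driver_exceptions

    def update_status_with_retry(driver_lib, status, attempts=3, delay=1.0):
        for attempt in range(1, attempts + 1):
            try:
                driver_lib.update_loadbalancer_status(status)
                return
            except driver_exceptions.UpdateStatusError:
                # per the report, the in-flight DB transaction has usually
                # landed by the next attempt
                if attempt == attempts:
                    raise
                time.sleep(delay)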

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1900763/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1838793] Re: "KeepalivedManagerTestCase" tests failing during namespace deletion

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1838793

Title:
  "KeepalivedManagerTestCase" tests failing during namespace deletion

Status in neutron:
  Won't Fix

Bug description:
  During the execution of those two test cases 
(test_keepalived_spawns_conflicting_pid_base_process, 
  test_keepalived_spawns_conflicting_pid_vrrp_subprocess), sometimes the 
namespace fixture fails during the deletion.

  Logstash information:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22fixtures._fixtures.timeout.TimeoutException%5C%22%20AND%20%20project%3A%5C%22openstack%2Fneutron%5C%22

  Example: http://logs.openstack.org/50/670850/3/check/neutron-
  functional-python27/1d27dda/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1838793/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1824095] Re: Fullstack job broken in stable/stein branch

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1824095

Title:
  Fullstack job broken in stable/stein branch

Status in neutron:
  Fix Released

Bug description:
  Fullstack job in stable/stein branch is failing 100% times because of
  failures in neutron.tests.fullstack.test_qos.TestMinBwQoSOvs

  Stacktrace:

  ft1.1: 
neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_bw_limit_qos_port_removed(egress,openflow-cli)testtools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", 
line 174, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/fullstack/test_qos.py",
 line 688, in test_bw_limit_qos_port_removed
  self._add_min_bw_rule, MIN_BANDWIDTH, self.direction)])
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/fullstack/test_qos.py",
 line 101, in _prepare_vm_with_qos_policy
  rule_add(qos_policy)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/fullstack/test_qos.py",
 line 626, in _add_min_bw_rule
  self.tenant_id, qos_policy_id, min_bw, direction)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/fullstack/resources/client.py",
 line 214, in create_minimum_bandwidth_rule
  body={'minimum_bandwidth_rule': rule})
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/neutronclient/v2_0/client.py",
 line 1910, in create_minimum_bandwidth_rule
  body=body)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/neutronclient/v2_0/client.py",
 line 359, in post
  headers=headers, params=params)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/neutronclient/v2_0/client.py",
 line 294, in do_request
  self._handle_fault_response(status_code, replybody, resp)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/neutronclient/v2_0/client.py",
 line 269, in _handle_fault_response
  exception_handler_v20(status_code, error_body)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-fullstack/lib/python3.6/site-packages/neutronclient/v2_0/client.py",
 line 93, in exception_handler_v20
  request_ids=request_ids)
  neutronclient.common.exceptions.Conflict: Rule minimum_bandwidth is not 
supported by port 38fa073e-eb94-40bc-9d36-e026850bba97
  Neutron server returns request_ids: 
['req-c432946d-cb79-427c-a213-50ed45959532']

  
  Example failure: 
http://logs.openstack.org/38/650238/1/check/neutron-fullstack/d2f4ccb/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824095/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1799555] Re: Fullstack test neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent timeout

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1799555

Title:
  Fullstack test
  
neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent
  timeout

Status in neutron:
  Fix Released

Bug description:
  Job: neutron-fullstack-*
  Failed test: 
neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent(Linux
 bridge agent)
  Sample failure: 
http://logs.openstack.org/88/555088/29/gate/neutron-fullstack-python36/34299ab/job-output.txt.gz

   {0}
  
neutron.tests.fullstack.test_dhcp_agent.TestDhcpAgentHA.test_reschedule_network_on_new_agent(Linux
  bridge agent) [152.363616s] ... FAILED

   Captured traceback:
   ~~~
   b'Traceback (most recent call last):'
   b'  File "/opt/stack/new/neutron/neutron/common/utils.py", line 641, in 
wait_until_true'
   b'eventlet.sleep(sleep)'
   b'  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/eventlet/greenthread.py",
 line 36, in sleep'
   b'hub.switch()'
   b'  File 
"/opt/stack/new/neutron/.tox/dsvm-fullstack-python35/lib/python3.5/site-packages/eventlet/hubs/hub.py",
 line 297, in switch'
   b'return self.greenlet.switch()'
   b'eventlet.timeout.Timeout: 60 seconds'
   b''
   b'During handling of the above exception, another exception occurred:'
   b''
   b'Traceback (most recent call last):'
   b'  File "/opt/stack/new/neutron/neutron/tests/base.py", line 151, in 
func'
   b'return f(self, *args, **kwargs)'
   b'  File 
"/opt/stack/new/neutron/neutron/tests/fullstack/test_dhcp_agent.py", line 168, 
in test_reschedule_network_on_new_agent'
   b'self._wait_until_network_rescheduled(network_dhcp_agents[0])'
   b'  File 
"/opt/stack/new/neutron/neutron/tests/fullstack/test_dhcp_agent.py", line 137, 
in _wait_until_network_rescheduled'
   b'common_utils.wait_until_true(_agent_rescheduled)'
   b'  File "/opt/stack/new/neutron/neutron/common/utils.py", line 646, in 
wait_until_true'
   b'raise WaitTimeout(_("Timed out after %d seconds") % timeout)'
   b'neutron.common.utils.WaitTimeout: Timed out after 60 seconds'
   b''

  Logstash:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22test_reschedule_network_on_new_agent%5C%22%20

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1799555/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1785656] Re: test_internal_dns.InternalDNSTest fails even though dns-integration extension isn't loaded

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: networking-odl
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1785656

Title:
  test_internal_dns.InternalDNSTest fails even though dns-integration
  extension isn't loaded

Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  We're seeing this on the Networking-ODL CI [1].

  The test
  neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest is
  being executed even though there's a decorator to prevent it from
  running [2].

  Either the checker isn't working or something is missing, since other
  DNS tests are being skipped automatically due to the extension not
  being loaded.

  [1] 
http://logs.openstack.org/91/584591/5/check/networking-odl-tempest-oxygen/df17c02/
  [2] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/scenario/test_internal_dns.py#n28
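
  For comparison, a hedged sketch of the explicit guard the other DNS tests
  rely on (usual tempest pattern; the base class import and exact helper
  names are assumptions, and this is not the decorator from [2]):

    from tempest.common import utils

    class InternalDNSTest(base.BaseTempestTestCase):   # base import assumed

        @classmethod
        def skip_checks(cls):
            super(InternalDNSTest, cls).skip_checks()
            if not utils.is_extension_enabled('dns-integration', 'network'):
                raise cls.skipException(
                    'dns-integration extension is not enabled')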

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1785656/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502933] Re: [OSSA-2016-009] ICMPv6 anti-spoofing rules are too permissive (CVE-2015-8914)

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1502933

Title:
  [OSSA-2016-009] ICMPv6 anti-spoofing rules are too permissive
  (CVE-2015-8914)

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  ICMPv6 default firewall rules are too permissive on the hypervisors
  leaving VMs able to do ICMPv6 source address spoofing.

  Pre-condition:

  - having a provider-network providing IPv6 connectivity to the VMs
  - in my case the controllers are providing statefull DHCPv6 and my physical 
router provides the default gateway using Router Advertisements.

  How to reproduce:

  - spin a VM and attach to it an IPv6 enabled network
  - obtain an IPv6 address using #dhclient -6
  - try to ping6 an IPv6 enabled host
  - remove your IPv6 address from the interface: #sudo ip addr del 
2001:0DB8::100/32 dev eth0
  - add a forged IPv6 address to your interface, into the same subnet of the 
original IPv6 address: #sudo ip addr add 2001:0DB8::200/32 dev eth0
  - try to ping6 the previous IPv6 enabled host, it will still work
  - try to assign another IPv6 address to your NIC, completely outside your 
IPv6 assignment: sudo ip addr add 2001:dead:beef::1/64 dev eth0
  - try to ping6 the previous IPv6 enabled host -> the destination will still
receive your echo requests with your forged address but you won't receive
answers; they won't be routed back to you.

  Expected behavior:

  - VMs should not be able to spoof their IPv6 address and issue forged
  ICMPv6 packets. The firewall rules on the hypervisor should restrict
  ICMPv6 egress to the VM's link-local and global-unicast addresses.

  Affected versions:

  - I saw the issue in OpenStack Juno, under Ubuntu 14.04. But
  according to the upstream code, the issue is still present in the
  master branch, in neutron/agent/linux/iptables_firewall.py, at
  line 385:

  ipv6_rules += [comment_rule('-p icmpv6 -j RETURN',
  comment=ic.IPV6_ICMP_ALLOW)]
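
  For illustration only (this is not the merged fix), a sketch of rule
  generation limited to the VM's own addresses, as the expected behavior
  above describes; the address list is a placeholder:

    def icmpv6_egress_rules(allowed_source_ips):
        # one rule per address owned by the port (link-local plus fixed
        # global-unicast) instead of the blanket '-p icmpv6 -j RETURN'
        return ['-p icmpv6 -s %s -j RETURN' % ip for ip in allowed_source_ips]

    print(icmpv6_egress_rules(['fe80::f816:3eff:fe12:3456/128',
                               '2001:db8::100/128']))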

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1502933/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993555] [NEW] live snapshot of a running instance with cinder volume (nfs) attached fails

2022-10-19 Thread Rafael Madrid
Public bug reported:

Description
===
When creating a live snapshot of a running instance that has a Cinder volume 
(Quobyte or NFS backend) attached, the snapshot creation fails. 

Our images have the qemu-agent installed and the properties
hw_qemu_guest_agent=yes and os_require_quiesce=yes. We have confirmed
that the freeze and thaw commands work as expected.

Steps to reproduce (from Horizon)
==
1. Create a volume-backed (on Quobyte or nfs-based storage) instance.
2. Try to create a snapshot of the running instance.

Expected result
===
The snapshot is created successfully with status Available. Users should be 
able to create new instances from that snapshot.

Actual result
=
The snapshot is created but has an Error status. It cannot be used to launch 
new instances.

Environment
===
1. OpenStack Release: Xena
2. Hypervisor: Libvirt + KVM
3. Cinder Storage: Quobyte, NFS

Logs
==
Nova Compute Log

2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver 
[req-43572110-44ac-4ec0-8e00-9b713cf7c71e 899761a635c847e483536855cd6a9af9 
3bfb8905aa84474c9e8611749f5f5329 - default default] [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] Error occurred during 
volume_snapshot_create, sending error status to Cinder.: libvirt.libvirtError: 
internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': 
The command guest-fsfreeze-freeze has been disabled for this instance
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] Traceback (most recent call last):
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3417, in volume_snapshot_create
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] volume_id, create_info['new_file'])
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", 
line 3348, in _volume_snapshot_create
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] reuse_ext=True, quiesce=True)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", 
line 550, in snapshot
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] 
self._domain.snapshotCreateXML(device_xml, flags=flags)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 193, 
in doit
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 151, 
in proxy_call
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] rv = execute(f, *args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 132, 
in execute
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] six.reraise(c, e, tb)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] raise value
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 86, 
in tworker
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] rv = meth(*args, **kwargs)
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98]   File 
"/usr/lib64/python3.6/site-packages/libvirt.py", line 3070, in snapshotCreateXML
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] raise 
libvirtError('virDomainSnapshotCreateXML() failed')
2022-10-18 21:03:20.941 7 ERROR nova.virt.libvirt.driver [instance: 
b2d46a43-c6c7-4dd7-b524-0b018b689d98] libvirt.libvirtError: internal error: 
unable to execute QEMU a

[Yahoo-eng-team] [Bug 1979661] Re: [stable/train] member_batch_update breaks contract with octavia-api interface

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1979661

Title:
  [stable/train] member_batch_update  breaks contract with octavia-api
  interface

Status in neutron:
  Fix Released

Bug description:
  The prototype of the member_batch_update function in the Octavia provider
API in Train (16.x) [1] is member_batch_update(self, members). A change [2]
was later made in Ussuri to move to member_batch_update(self, pool_id,
members), but it was not backported to Train in order to keep backwards
compatibility.

  So it looks like the change to ovn-octavia-provider [3] was backported to
stable/train by mistake, and now the call from octavia to the provider
  is triggering:

  2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils 
[req-8ad8a6ce-59cc-4531-94fa-d7918406d19f - 25b1efe043b54816ab6bd8e2b6b0d9c8 - 
default default] Provider 'ovn' raised an unknown error: member_batch_update() 
missing 1 required positional argument: 'members': TypeError: 
member_batch_update() missing 1 required positional argument: 'members'
  2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils Traceback (most 
recent call last):
  2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils   File 
"/usr/lib/python3.6/site-packages/octavia/api/drivers/utils.py", line 55, in 
call_provider
  2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils return 
driver_method(*args, **kwargs)
  2022-06-22 14:05:54.634 14 ERROR octavia.api.drivers.utils TypeError: 
member_batch_update() missing 1 required positional argument: 'members'

  [1] 
https://opendev.org/openstack/octavia/src/branch/stable/train/octavia/api/drivers/amphora_driver/v2/driver.py#L217
  [2] https://review.opendev.org/c/openstack/octavia/+/688548/
  [3] https://review.opendev.org/c/openstack/networking-ovn/+/746134

  This issue affects only the stable/train branch.
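
  A toy illustration of the signature mismatch (not the actual fix; the
  report implies the backport should simply be reverted): a shim that
  tolerates both call styles would look roughly like this.

    class OvnProviderDriverShim(object):
        """Accept both octavia-api call styles for member_batch_update."""

        def member_batch_update(self, *args):
            if len(args) == 1:           # Train octavia-api: (members)
                pool_id, members = None, args[0]
            elif len(args) == 2:         # Ussuri+ octavia-api: (pool_id, members)
                pool_id, members = args
            else:
                raise TypeError('expected (members) or (pool_id, members)')
            return pool_id, list(members)

    driver = OvnProviderDriverShim()
    print(driver.member_batch_update(['m1', 'm2']))            # Train style
    print(driver.member_batch_update('pool-1', ['m1', 'm2']))  # Ussuri style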

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1979661/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959573] Re: test_local_ip_connectivity test failing in stable/xena jobs

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959573

Title:
  test_local_ip_connectivity test failing in stable/xena jobs

Status in neutron:
  Fix Released

Bug description:
  Example failures:-
  - https://zuul.opendev.org/t/openstack/build/dc2e6aa9be504a39a930b17645dee287
  - https://zuul.opendev.org/t/openstack/build/4ad0d2fd426b40b2b249585e612dbf1f

  Failing since https://review.opendev.org/c/openstack/neutron-tempest-
  plugin/+/823007 is merged.

  
  It's happening because the master version of the openvswitch jobs is running
in the stable/xena branch while that feature is only available in the master
branch. This needs to be fixed by switching to the xena jobs in the
stable/xena branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959573/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815871] Re: neutron-server api don't shutdown gracefully

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815871

Title:
  neutron-server api don't shutdown gracefully

Status in neutron:
  Won't Fix

Bug description:
  When neutron-server is stopped, the API workers shut down immediately, no
matter whether there are ongoing requests, and those ongoing requests are
aborted immediately.

  After testing, going through the code, and comparing with the nova and
cinder code, the reason is that the stop and wait functions of WorkerService
in neutron/wsgi.py are broken.

  def wait(self):
  if isinstance(self._server, eventlet.greenthread.GreenThread):
  self._server.wait()

  def stop(self):
  if isinstance(self._server, eventlet.greenthread.GreenThread):
  self._server.kill()
  self._server = None

  Check the neutron code above.
  After kill() in the stop function, self._server is forcibly set to None,
which leaves nothing for the wait function to do. This causes the API worker
to shut down immediately without waiting.

  Nova has the correct logic, see:
https://github.com/openstack/nova/blob/master/nova/wsgi.py#L197
  Cinder uses oslo_service.wsgi, which has the same logic as nova.
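
  A minimal sketch of the ordering argued for above, mirroring the nova link
  (this is not the exact neutron or nova patch):

      def stop(self):
          if isinstance(self._server, eventlet.greenthread.GreenThread):
              # only signal the server; keep the reference for wait()
              self._server.kill()

      def wait(self):
          if isinstance(self._server, eventlet.greenthread.GreenThread):
              # the wsgi loop handles GreenletExit and drains in-flight
              # requests via pool.waitall() before the greenthread ends
              self._server.wait()
          self._server = None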

  
  My debugging is as follows:
  I added a log statement at line 978:

  https://github.com/eventlet/eventlet/blob/master/eventlet/wsgi.py#L979

  serv.log.info('({0}) wsgi exiting, {1}'.format(serv.pid,
  pool.__dict__))

  I updated a neutron API to sleep for 10s, then curled this API and, at the
  same time, killed neutron-server.

  Below is the neutron-server log; I have 4 API workers. You can see that
process 329 has a coroutine running, but it does not log 'wsgi exited'
because of pool.waitall() in
https://github.com/eventlet/eventlet/blob/master/eventlet/wsgi.py#L979 ,
  while the other 3 processes have no coroutines running, so they log 'wsgi exited'.
  In the end, all 4 child processes exited with status 0.

  
  2019-02-13 17:37:31.193 319 INFO oslo_service.service [-] Caught SIGTERM, 
stopping children
  2019-02-13 17:37:31.194 319 DEBUG oslo_concurrency.lockutils [-] Acquired 
semaphore "singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
  2019-02-13 17:37:31.194 319 DEBUG oslo_concurrency.lockutils [-] Releasing 
semaphore "singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:225
  2019-02-13 17:37:31.195 319 DEBUG oslo_service.service [-] Stop services. 
stop /usr/lib/python2.7/site-packages/oslo_service/service.py:611
  2019-02-13 17:37:31.195 319 DEBUG oslo_service.service [-] Killing children. 
stop /usr/lib/python2.7/site-packages/oslo_service/service.py:616
  2019-02-13 17:37:31.195 319 INFO oslo_service.service [-] Waiting on 4 
children to exit
  2019-02-13 17:37:31.196 328 INFO neutron.wsgi [-] (328) wsgi exiting, {'sem': 
, 'coroutines_running': set([]), 
'no_coros_running': , 'size': 100}
  2019-02-13 17:37:31.196 329 INFO neutron.wsgi [-] (329) wsgi exiting, {'sem': 
, 'coroutines_running': 
set([]), 'no_coros_running': 
, 'size': 100}
  2019-02-13 17:37:31.196 331 INFO neutron.wsgi [-] (331) wsgi exiting, {'sem': 
, 'coroutines_running': set([]), 
'no_coros_running': , 'size': 100}
  2019-02-13 17:37:31.197 330 INFO neutron.wsgi [-] (330) wsgi exiting, {'sem': 
, 'coroutines_running': set([]), 
'no_coros_running': , 'size': 100}
  2019-02-13 17:37:31.210 329 DEBUG oslo_concurrency.lockutils 
[req-d813d601-8563-4d0f-8b16-1418f81ddcc1 - - - - -] Acquired semaphore 
"singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
  2019-02-13 17:37:31.211 329 DEBUG oslo_concurrency.lockutils 
[req-d813d601-8563-4d0f-8b16-1418f81ddcc1 - - - - -] Releasing semaphore 
"singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:225
  2019-02-13 17:37:31.212 328 INFO neutron.wsgi [-] (328) wsgi exited, 
is_accepting=True
  2019-02-13 17:37:31.216 328 DEBUG oslo_concurrency.lockutils 
[req-d813d601-8563-4d0f-8b16-1418f81ddcc1 - - - - -] Acquired semaphore 
"singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
  2019-02-13 17:37:31.217 331 INFO neutron.wsgi [-] (331) wsgi exited, 
is_accepting=True
  2019-02-13 17:37:31.218 328 DEBUG oslo_concurrency.lockutils 
[req-d813d601-8563-4d0f-8b16-1418f81ddcc1 - - - - -] Releasing semaphore 
"singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:225
  2019-02-13 17:37:31.218 330 INFO neutron.wsgi [-] (330) wsgi exited, 
is_accepting=True
  2019-02-13 17:37:31.219 331 DEBUG oslo_concurrency.lockutils 
[req-d813d601-8563-4d0f-8b16-1418f81ddcc1 - - - - -] Acquired semaphore 
"singleton_lock" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
  2019-02-13 17:37:31.220 331 DEBUG oslo_concurrency.

[Yahoo-eng-team] [Bug 1800599] Re: [RFE]Neutron API server: unexpected behavior with multiple long live clients

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1800599

Title:
  [RFE]Neutron API server: unexpected behavior with multiple long live
  clients

Status in neutron:
  Fix Released

Bug description:
  High level description:
  The current openstack API server uses eventlet.wsgi.server implementation. 
The default behavior of eventlet.wsgi.server will do an accept() call before 
knowing whether a greenthread is available in the pool to service that socket. 
If all socket connections are shortlived then this is not an issue as a 
greenthread will eventually become available and the request will be serviced 
(hopefully before the client times out waiting).

  But in some real-world scenarios, such as during a large system
  deployment stage, there are many compute nodes, which causes many long-
  lived connections from nova-compute to the neutron API. This leads to
  the unexpected behavior below:

  1. Single neutron server case:
  If the neutron server has all of its greenthreads tied up on open sockets,
then when one more connection request arrives the server calls accept() but
never hands it to a worker thread, and the client times out after a long
wait (e.g. CONF.client_socket_timeout).

  Expected behavior: return a quick TCP connect timeout if no processing
  thread is available.

  2. Multiple neutron server case (e.g. cfg.CONF.api_workers>1 or
cpu_count>1):
  In this case there are multiple neutron server child processes waiting for
client requests (i.e. doing accept() on the same socket). If one neutron
server's accept() is invoked by the Linux kernel to accept a client request
but all of its greenthreads are tied up on open sockets, the client will
time out after a long wait. In fact, at that moment other neutron child
processes may still have greenthreads available to process the request, but
they never get the opportunity because the first child process has already
accepted it.

  Expected behavior: the request is processed if any neutron server process
  has an available greenthread, or a quick TCP connect timeout is returned
  if no processing thread is available.

  Version: latest devstack

  Potential solution: implement a custom pool for wsgi.server which blocks
  the spawn_n call (e.g. via sem.acquire()) so that accept() is not called
  until a green worker thread is available.
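
  A rough sketch of that idea (illustrative only, not a merged fix; error
  handling and shutdown are omitted):

    import eventlet

    class WaitingPool(eventlet.GreenPool):
        """Delay returning from spawn_n until a worker slot is free, so
        the wsgi loop does not accept() sockets it cannot service."""

        def spawn_n(self, func, *args, **kwargs):
            super(WaitingPool, self).spawn_n(func, *args, **kwargs)
            # hold the accept loop until at least one greenthread is free
            while self.free() <= 0:
                eventlet.sleep(0.01)

    # usage sketch: eventlet.wsgi.server(listen_socket, app,
    #                                    custom_pool=WaitingPool(100))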

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1800599/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1788556] Re: dhcp agent error reading lease file

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1788556

Title:
  dhcp agent error reading lease file

Status in neutron:
  Fix Released

Bug description:
  With a large number of VMs, at some point, the dhcp agent throws this
  index error trying to read the lease file:

  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent 
[req-cf1c7a8e-b718-4c46-b7be-999747c7e526 afe623c9e78e47febd76617008b9138e 
c4f22248feb9430093858a0404b779d5 - - -] Unable to reload_allocations dhcp for 
ef71f918-dc0d-4a6e-8d37-0f5f0720e295.: IndexError: list index out of range
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/venv/neutron-20180718T154642Z/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py",
 line 142, in call_driver
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/venv/neutron-20180718T154642Z/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py",
 line 512, in reload_allocations
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent 
self._release_unused_leases()
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/venv/neutron-20180718T154642Z/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py",
 line 825, in _release_unused_leases
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent v6_leases = 
self._read_v6_leases_file_leases(leases_filename)
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/venv/neutron-20180718T154642Z/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py",
 line 810, in _read_v6_leases_file_leases
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent (iaid, ip, 
client_id) = parts[1], parts[2], parts[4]
  2018-08-15 16:17:40.771 40391 ERROR neutron.agent.dhcp.agent IndexError: list 
index out of range

  When this happens, the agent calls sync_state to fully resync the
  agent state, which is a serious problem when dealing with a lot of
  ports in a scale environment.

  Is it possible to avoid a full resync of all ports?
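
  A hedged sketch of the defensive parsing the traceback suggests
  (illustrative, not the actual neutron change): skip lease-file lines that
  do not carry the five fields the v6 branch indexes instead of raising.

    def read_v6_leases(path):
        leases = []
        with open(path) as f:
            for line in f:
                parts = line.strip().split()
                # dnsmasq also writes a 'duid ...' record, and partial or
                # short lines can appear; ignore anything that does not
                # have the fields used below instead of raising IndexError
                if len(parts) < 5 or parts[0] == 'duid':
                    continue
                iaid, ip, client_id = parts[1], parts[2], parts[4]
                leases.append((iaid, ip, client_id))
        return leases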

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1788556/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1785189] Re: Floatingip and router bandwidth speed limit failure

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1785189

Title:
  Floatingip and router bandwidth speed limit failure

Status in neutron:
  Won't Fix

Bug description:
  Environment version: centos7.4
  Neutron version: newton (also seen in pike and queens)

  I have added these L3 QoS patches into newton branch:
  https://review.openstack.org/#/c/453458/
  https://review.openstack.org/#/c/424466/
  https://review.openstack.org/#/c/521079/

  But I don't think these patches are effective. For large bandwidths the
  speed limit does not work at all. As soon as the router or floating IP
  speed limit is applied, an scp file transfer drops to around 2 Mbps and
  is finally interrupted. The iperf test is extremely unstable, sometimes
  10 Mbps, sometimes 0 bps.

  For example, the rate limit rule of the router is set to 1 Gbps, the
router netns is the iperf client, and the controller node is the iperf
server. Here is the test result:

  [root@node-1 ~]# ip netns exec qrouter-bf800d13-9ce6-4aa7-9259-fab54ec5ac05 
tc -s -p filter show dev qg-d2e58140-fa
  filter parent 1: protocol ip pref 1 u32
  filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
  filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 
0 flowid :1  (rule hit 7557 success 7525)
    match IP src 172.18.0.133/32 (success 7525 )
   police 0x15a rate 1024Mbit burst 100Mb mtu 2Kb action drop overhead 0b
  ref 1 bind 1

   Sent 12795449 bytes 8549 pkts (dropped 969, overlimits 969)

  iperf tests:
  [root@node-1 ~]# ip netns exec qrouter-bf800d13-9ce6-4aa7-9259-fab54ec5ac05 
iperf3 -c 172.18.0.4 -i 1
  Connecting to host 172.18.0.4, port 5201
  [  4] local 172.18.0.133 port 51674 connected to 172.18.0.4 port 5201
  [ ID] Interval   Transfer Bandwidth   Retr  Cwnd
  [  4]   0.00-1.00   sec   119 KBytes   972 Kbits/sec   18   2.83 KBytes
  [  4]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec5   2.83 KBytes
  [  4]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec5   2.83 KBytes
  [  4]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec5   2.83 KBytes
  [  4]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec5   2.83 KBytes
  [  4]   5.00-6.00   sec  63.6 KBytes   522 Kbits/sec   37   2.83 KBytes
  [  4]   6.00-7.00   sec  1.64 MBytes  13.7 Mbits/sec  336   4.24 KBytes
  [  4]   7.00-8.00   sec  1.34 MBytes  11.2 Mbits/sec  279   2.83 KBytes
  [  4]   8.00-9.00   sec  1.96 MBytes  16.5 Mbits/sec  406   2.83 KBytes
  [  4]   9.00-10.00  sec   334 KBytes  2.73 Mbits/sec   75   2.83 KBytes
  - - - - - - - - - - - - - - - - - - - - - - - - -
  [ ID] Interval   Transfer Bandwidth   Retr
  [  4]   0.00-10.00  sec  5.44 MBytes  4.56 Mbits/sec  1171 sender
  [  4]   0.00-10.00  sec  5.34 MBytes  4.48 Mbits/sec  receiver

  iperf Done.

  After deleting the tc rule with the command below, the bandwidth test
  is normal again.

  [root@node-1 ~]# ip netns exec qrouter-bf800d13-9ce6-4aa7-9259-fab54ec5ac05 
tc filter del dev qg-d2e58140-fa parent 1: prio 1 handle 800::800 u32
  [root@node-1 ~]# ip netns exec qrouter-bf800d13-9ce6-4aa7-9259-fab54ec5ac05 
tc -s -p filter show dev qg-d2e58140-fa
  [root@node-1 ~]# ip netns exec qrouter-bf800d13-9ce6-4aa7-9259-fab54ec5ac05 
iperf3 -c 172.18.0.4 -i 1
  Connecting to host 172.18.0.4, port 5201
  [  4] local 172.18.0.133 port 47530 connected to 172.18.0.4 port 5201
  [ ID] Interval   Transfer Bandwidth   Retr  Cwnd
  [  4]   0.00-1.00   sec  88.2 MBytes   740 Mbits/sec1407 KBytes
  [  4]   1.00-2.00   sec   287 MBytes  2.41 Gbits/sec  354491 KBytes
  [  4]   2.00-3.00   sec  1.04 GBytes  8.94 Gbits/sec  1695932 KBytes
  [  4]   3.00-4.00   sec  1008 MBytes  8.45 Gbits/sec  4233475 KBytes
  [  4]   4.00-5.00   sec  1.03 GBytes  8.85 Gbits/sec  1542925 KBytes
  [  4]   5.00-6.00   sec  1008 MBytes  8.45 Gbits/sec  4507748 KBytes
  [  4]   6.00-7.00   sec  1.05 GBytes  9.06 Gbits/sec  1550798 KBytes
  [  4]   7.00-8.00   sec  1.06 GBytes  9.08 Gbits/sec  1251933 KBytes
  [  4]   8.00-9.00   sec  1.02 GBytes  8.77 Gbits/sec  3595942 KBytes
  [  4]   9.00-10.00  sec  1024 MBytes  8.59 Gbits/sec  3867897 KBytes
  - - - - - - - - - - - - - - - - - - - - - - - - -
  [ ID] Interval   Transfer Bandwidth   Retr
  [  4]   0.00-10.00  sec  8.54 GBytes  7.33 Gbits/sec  22595 sender
  [  4]   0.00-10.00  sec  8.54 GBytes  7.33 Gbits/sec  receiver

  iperf Done.

  I am not sure whether this is an isolated case or whether others have
  encountered it as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1785189/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Uns

[Yahoo-eng-team] [Bug 1764867] Re: Linuxbridge agent crashes when specifying cfg.CONF.VXLAN.udp_dstport option

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764867

Title:
  Linuxbridge agent crashes when specifying cfg.CONF.VXLAN.udp_dstport
  option

Status in neutron:
  Fix Released

Bug description:
  This error takes place with the Neutron master branch. To reproduce,
  specify a value for 'dstport' in the agent's configuration file in the
  [vxlan] section, for example 4789, which is the IANA defined
  standard value for the UDP port used for VXLAN communication. When re-
  starting the agent with this option, the following traceback ensues:
  http://paste.openstack.org/show/719412/.

  This happens because the value of the 'dstport' option is converted to
  a string here
  
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/ip_lib.py#L262,
  before being passed as argument 'vxlan_port' to the pyroute2 library
  to create a VXLAN tunnel. The value passed to pyroute2 in 'vxlan_port'
  must be an integer.
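
  For illustration only, here is a minimal Python sketch of the kind of fix
  described above; this is not the actual neutron patch, and the pyroute2
  IPRoute.link() VXLAN keyword arguments are assumptions based on that
  library's interface:

    # Sketch: pass the VXLAN destination port to pyroute2 as an integer.
    from pyroute2 import IPRoute

    def add_vxlan_device(ifname, vni, group, dstport):
        ip = IPRoute()
        try:
            ip.link('add',
                    ifname=ifname,
                    kind='vxlan',
                    vxlan_id=vni,
                    vxlan_group=group,
                    # config values may arrive as strings; pyroute2 needs int
                    vxlan_port=int(dstport))
        finally:
            ip.close()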

  After changing the code and restarting the agent, we can see that it is
  listening on the specified 'dstport' (4789 in my example):

  $ netstat -vaun
  Active Internet connections (servers and established)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State
  udp        0      0 0.0.0.0:4789            0.0.0.0:*

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1764867/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1742872] Re: the status of the firewall is wrong

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742872

Title:
  the status of the firewall is wrong

Status in neutron:
  Won't Fix

Bug description:
  A router is associated with one firewall, and it is the only router
  associated with that firewall. When the router is deleted, the status of
  the firewall should become INACTIVE, but it actually remains ACTIVE (even
  though the firewall is now associated with no router).
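
  For illustration, a hypothetical sketch of the expected status logic; this
  is not neutron-fwaas code, and the function name and status strings are
  invented for the example:

    # Sketch: a firewall with no associated routers should report INACTIVE.
    def expected_firewall_status(associated_router_ids, admin_state_up=True):
        if not admin_state_up:
            return 'DOWN'
        return 'ACTIVE' if associated_router_ids else 'INACTIVE'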

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742872/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677746] Re: neutron-sanity-check always requires ovs to be present

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677746

Title:
  neutron-sanity-check always requires ovs to be present

Status in neutron:
  Won't Fix

Bug description:
  The neutron-sanity-check command has a hard dependency on ovs being
  present on the system being checked, even if you have all the --noovs*
  flags set. In my environment I'm running without ovs installed or
  configured and I hit failures like:

  http://paste.openstack.org/show/604926/

  even though I ran the command with: --noovsdb_native --noovs_patch
  --noovs_geneve --noovs_conntrack --noovs_vxlan

  From that log it looks like the command is trying to always start the
  ovs daemon even if all the checks are disabled.
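
  As a purely hypothetical sketch (not the neutron-sanity-check
  implementation), this is the behaviour the reporter expects: a check only
  runs, and OVS is only touched, while its corresponding option is still
  enabled. The run_enabled_checks() helper and the checks mapping are
  invented for the example:

    # Sketch: skip every check whose --no<option> flag was passed.
    def run_enabled_checks(conf, checks):
        """checks maps option name -> callable, e.g. {'ovs_vxlan': check_ovs_vxlan}."""
        results = {}
        for name, check in checks.items():
            if not getattr(conf, name, False):
                # --no<name> was given, so never touch OVS for this check
                continue
            results[name] = check()
        return results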

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1677746/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1670451] Re: Neutron OVS Agent does not set the tag for the ports

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1670451

Title:
  Neutron OVS Agent does not set the tag for the ports

Status in neutron:
  Won't Fix

Bug description:
  The neutron OVS agent on Windows does not set the tags on the VM ports
  when using the native implementation.

  There is no error shown in the logs, but the port does not have any tags.
  From the following output of the command "ovs-vsctl show"
  (http://paste.openstack.org/show/601605/) we can see that the VM port named
  "65b8a556-6d1a-4f9f-aa38-347b0edfa494" under br-int does not have any tags
  associated with it.

  If the agent is restarted on the host, it will fail to start again
  with the following trace: http://paste.openstack.org/show/601606/

  Expected behaviour: The VM port present under bridge br-int should
  have an associated tag set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1670451/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1747510] Re: Migrate legacy jobs to zuul v3 syntax

2022-10-19 Thread Rodolfo Alonso
We have completed this task.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1747510

Title:
  Migrate legacy jobs to zuul v3 syntax

Status in neutron:
  Fix Released

Bug description:
  It looks like neutron still uses the legacy job syntax. This bug suggests
  migrating to the Zuul v3 syntax. See the comment from Andreas Jaeger at
  https://review.openstack.org/#/c/511983/35 (Patch Set 35).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1747510/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421626] Re: _sync_vlan_allocations thowing DBDuplicateEntry with postgres HA

2022-10-19 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421626

Title:
  _sync_vlan_allocations thowing DBDuplicateEntry with postgres HA

Status in neutron:
  Won't Fix

Bug description:
  _sync_vlan_allocations is throwing DBDuplicateEntry when two neutron
  servers are rebooted at the same time. Postgres is HA, the FOR UPDATE lock
  is not working, and both servers try to write the data to the DB at the
  same time.
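
  As an illustrative sketch only (not the neutron code), one way to tolerate
  this race is to treat the duplicate-key error as "the other server already
  inserted the same row". The DBDuplicateEntry exception comes from oslo.db;
  the add_allocation() helper and the session usage are assumptions for the
  example:

    from oslo_db import exception as db_exc

    def add_allocation(session, model, **columns):
        try:
            with session.begin(subtransactions=True):
                session.add(model(**columns))
        except db_exc.DBDuplicateEntry:
            # Another neutron-server created the same VLAN allocation first;
            # the desired end state already exists, so ignore the error.
            pass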

  2014-12-15 14:20:39.644 14746 TRACE neutron Traceback (most recent call last):
  2014-12-15 14:20:39.644 14746 TRACE neutron   File "/usr/bin/neutron-server", 
line 10, in <module>
  2014-12-15 14:20:39.644 14746 TRACE neutron sys.exit(main())
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/server/__init__.py", line 48, in 
main
  2014-12-15 14:20:39.644 14746 TRACE neutron neutron_api = 
service.serve_wsgi(service.NeutronApiService)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/service.py", line 112, in serve_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron 
LOG.exception(_('Unrecoverable error: please check log '
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/openstack/common/excutils.py", line 
82, in __exit__
  2014-12-15 14:20:39.644 14746 TRACE neutron six.reraise(self.type_, 
self.value, self.tb)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/service.py", line 105, in serve_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron service.start()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/service.py", line 74, in start
  2014-12-15 14:20:39.644 14746 TRACE neutron self.wsgi_app = 
_run_wsgi(self.app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/service.py", line 173, in _run_wsgi
  2014-12-15 14:20:39.644 14746 TRACE neutron app = 
config.load_paste_app(app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/common/config.py", line 170, in 
load_paste_app
  2014-12-15 14:20:39.644 14746 TRACE neutron app = 
deploy.loadapp("config:%s" % config_path, name=app_name)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2014-12-15 14:20:39.644 14746 TRACE neutron return loadobj(APP, uri, 
name=name, **kw)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2014-12-15 14:20:39.644 14746 TRACE neutron return context.create()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py", line 710, in 
create
  2014-12-15 14:20:39.644 14746 TRACE neutron return 
self.object_type.invoke(self)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/paste/deploy/loadwsgi.py", line 144, in 
invoke
  2014-12-15 14:20:39.644 14746 TRACE neutron **context.local_conf)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/paste/deploy/util.py", line 55, in fix_call
  ...skipping...
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/plugins/ml2/drivers/type_vlan.py", 
line 160, in initialize
  2014-12-15 14:20:39.644 14746 TRACE neutron self._sync_vlan_allocations()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/neutron/plugins/ml2/drivers/type_vlan.py", 
line 154, in _sync_vlan_allocations
  2014-12-15 14:20:39.644 14746 TRACE neutron session.delete(alloc)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 447, in 
__exit__
  2014-12-15 14:20:39.644 14746 TRACE neutron self.rollback()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/util/langhelpers.py", line 58, 
in __exit__
  2014-12-15 14:20:39.644 14746 TRACE neutron compat.reraise(exc_type, 
exc_value, exc_tb)
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 444, in 
__exit__
  2014-12-15 14:20:39.644 14746 TRACE neutron self.commit()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 354, in 
commit
  2014-12-15 14:20:39.644 14746 TRACE neutron self._prepare_impl()
  2014-12-15 14:20:39.644 14746 TRACE neutron   File 
"/usr/lib64/python2.6/si

[Yahoo-eng-team] [Bug 1911153] Re: [FT] DB migration "test_walk_versions" failing frequently

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1911153

Title:
  [FT] DB migration "test_walk_versions" failing frequently

Status in neutron:
  Fix Released

Bug description:
  Error log:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_63b/769390/3/check/neutron-
  functional-with-uwsgi/63b3a7f/testr_results.html

  Snippet: http://paste.openstack.org/show/801553/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1911153/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885891] Re: DB exception when updating a "ml2_port_bindings" object

2022-10-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885891

Title:
  DB exception when updating a "ml2_port_bindings" object

Status in neutron:
  Invalid

Bug description:
  When a port is updated, the port bindings are updated too. This is
  done by calling the mech driver method "update_port_precommit".

  Currently, the port binding information in the port context is handled
  using the native SQLAlchemy object [1]. The PortBinding OVO is already
  implemented and used elsewhere; this code section should migrate to the
  OVO, and that will fix this problem: http://paste.openstack.org/show/795423/

  
  
[1]https://github.com/openstack/neutron/blob/efcc60ddec5737ac2dd1b504990ce89245248614/neutron/plugins/ml2/driver_context.py#L132-L134
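
  A hedged sketch of the suggested direction, not a merged neutron change;
  PortBinding.get_object() and update() from neutron.objects.ports are
  assumed, and update_binding_via_ovo() is an invented helper name:

    from neutron.objects import ports as port_obj

    def update_binding_via_ovo(context, port_id, host, **changes):
        # Fetch the binding through the OVO instead of the raw SQLA row.
        binding = port_obj.PortBinding.get_object(
            context, port_id=port_id, host=host)
        if binding is None:
            return None
        for field, value in changes.items():
            setattr(binding, field, value)
        binding.update()   # persists only the changed fields
        return binding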

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885891/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

