[Yahoo-eng-team] [Bug 2060974] Re: neutron-dhcp-agent attempts to read pid.haproxy but can't

2024-04-16 Thread Bernard Cafarelli
Thanks for the update and confirmation, the links will come useful if
people stumble on this LP

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060974

Title:
  neutron-dhcp-agent attempts to read pid.haproxy but can't

Status in neutron:
  Invalid

Bug description:
  Hi,

  From neutron-dhcp-agent.log, I can see it's trying to access:

  /var/lib/neutron/external/pids/*.pid.haproxy

  These files used to have the following Unix permissions (at least
  in Debian 11, aka Bullseye):

  -rw-r--r--

  However, in Debian 12 (aka Bookworm), for some reason, they now are:

  -rw-r-

  and then the agent doesn't have the necessary rights to read these
  files.

  Note that in devstack these PID files are owned by the stack user, so
  that's not an issue there. But that's not the case with the Debian
  packages, where haproxy writes these pid files as root:root while the
  neutron-dhcp-agent runs as neutron:neutron and therefore can't read them.

  One possibility would be reading the PIDs through privsep.

  Another fix would be to understand why the PID files aren't world
  readable. At this point, I can't tell why.
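
  For illustration, a minimal sketch of the privsep approach (this is an
  assumption, not the actual neutron implementation; the context and helper
  names pid_reader and read_haproxy_pid are hypothetical):

  from oslo_privsep import capabilities
  from oslo_privsep import priv_context

  # Privileged context with just enough capability to read root-owned files.
  pid_reader = priv_context.PrivContext(
      "neutron",
      cfg_section="privsep_pid_reader",
      pypath=__name__ + ".pid_reader",
      capabilities=[capabilities.CAP_DAC_READ_SEARCH],
  )

  @pid_reader.entrypoint
  def read_haproxy_pid(pid_file):
      # Runs in the privsep daemon (as root), so root-owned, non-world-readable
      # pid files can still be read by the agent.
      with open(pid_file) as f:
          return int(f.read().strip())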

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060974/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025486] [NEW] [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on ovn git clone

2023-06-30 Thread Bernard Cafarelli
Public bug reported:

Since 2023-06-30, the neutron-tempest-plugin-scenario-ovn-wallaby job started to 
fail 100% of the time in stable/wallaby backports:
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-scenario-ovn-wallaby&project=openstack/neutron


Sample failure grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_288/887253/2/check/neutron-tempest-plugin-scenario-ovn-wallaby/288071d/job-output.txt

2023-06-30 11:00:07.584319 | controller | + functions-common:git_timed:644  
 :   timeout -s SIGINT 0 git clone https://github.com/ovn-org/ovn.git 
/opt/stack/ovn --branch 36e3ab9b47e93af0599a818e9d6b2930e49473f0
2023-06-30 11:00:07.587213 | controller | Cloning into '/opt/stack/ovn'...
2023-06-30 11:00:07.828809 | controller | fatal: Remote branch 
36e3ab9b47e93af0599a818e9d6b2930e49473f0 not found in upstream origin

I think I recall some recent fixes (to devstack, maybe) changing the git
clone/checkout handling; is this related, and just a missing backport to wallaby?
Newer branches are fine.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025486

Title:
  [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on
  ovn git clone

Status in neutron:
  New

Bug description:
  Since 2023-06-30, the neutron-tempest-plugin-scenario-ovn-wallaby job started to 
fail 100% of the time in stable/wallaby backports:
  
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-scenario-ovn-wallaby&project=openstack/neutron

  
  Sample failure grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_288/887253/2/check/neutron-tempest-plugin-scenario-ovn-wallaby/288071d/job-output.txt

  2023-06-30 11:00:07.584319 | controller | + functions-common:git_timed:644
   :   timeout -s SIGINT 0 git clone https://github.com/ovn-org/ovn.git 
/opt/stack/ovn --branch 36e3ab9b47e93af0599a818e9d6b2930e49473f0
  2023-06-30 11:00:07.587213 | controller | Cloning into '/opt/stack/ovn'...
  2023-06-30 11:00:07.828809 | controller | fatal: Remote branch 
36e3ab9b47e93af0599a818e9d6b2930e49473f0 not found in upstream origin

  I think I recall some recent fixes (to devstack, maybe) changing the git
  clone/checkout handling; is this related, and just a missing backport to wallaby?
  Newer branches are fine.
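
  For reference, `git clone --branch` only accepts branch or tag names, not an
  arbitrary commit SHA, which matches the failure above. A sketch of the usual
  workaround (same repository and SHA as in the log; this is an illustration,
  not the actual devstack fix):

  git clone https://github.com/ovn-org/ovn.git /opt/stack/ovn
  cd /opt/stack/ovn
  git checkout 36e3ab9b47e93af0599a818e9d6b2930e49473f0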

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025486/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2020363] [NEW] [stable/train/] openstacksdk-functional-devstack fails with POST_FAILURE

2023-05-22 Thread Bernard Cafarelli
Public bug reported:

This was spotted in recent train backport
https://review.opendev.org/c/openstack/neutron/+/883429

100% failing and blocking train gate, sample run:
https://zuul.opendev.org/t/openstack/build/e795cfe1007042668194b398553d2b19

Obtaining file:///opt/stack/heat
openstack-heat requires Python '>=3.8' but the running Python is 2.7.17

Note the end result/fix may be just dropping the job from this old EM
branch

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2020363

Title:
  [stable/train/] openstacksdk-functional-devstack fails with
  POST_FAILURE

Status in neutron:
  New

Bug description:
  This was spotted in recent train backport
  https://review.opendev.org/c/openstack/neutron/+/883429

  100% failing and blocking train gate, sample run:
  https://zuul.opendev.org/t/openstack/build/e795cfe1007042668194b398553d2b19

  Obtaining file:///opt/stack/heat
  openstack-heat requires Python '>=3.8' but the running Python is 2.7.17

  Note the end result/fix may be just dropping the job from this old EM
  branch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2020363/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988577] Re: [OVN] neutron-ovn-db-sync-util fails without qos extension driver

2023-02-22 Thread Bernard Cafarelli
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

** Changed in: neutron
 Assignee: Bernard Cafarelli (bcafarel) => Jake Yip (waipengyip)

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1988577

Title:
  [OVN] neutron-ovn-db-sync-util fails without qos extension driver

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Confirmed

Bug description:
  Running neutron-ovn-db-sync-util with
  `extension_drivers=port_security` configured fails with:

   CRITICAL neutron_ovn_db_sync_util [req-9697a0e0-6097-4d8c-a24b-2c86b6921d7f 
- - - - -] Unhandled error: 
neutron.plugins.ml2.common.exceptions.ExtensionDriverNotFound: Extension driver 
qos required for service plugin qos not found.
   ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
   ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.8/runpy.py", line 
194, in _run_module_as_main
   ERROR neutron_ovn_db_sync_util return _run_code(code, main_globals, None,
   ERROR neutron_ovn_db_sync_util   File "/usr/lib/python3.8/runpy.py", line 
87, in _run_code
   ERROR neutron_ovn_db_sync_util exec(code, run_globals)
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/__main__.py", line 39, in 

   ERROR neutron_ovn_db_sync_util cli.main()
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/server/cli.py", line 430, in main
   ERROR neutron_ovn_db_sync_util run()
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/server/cli.py", line 284, in 
run_file
   ERROR neutron_ovn_db_sync_util runpy.run_path(target, 
run_name="__main__")
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py",
 line 321, in run_path
   ERROR neutron_ovn_db_sync_util return _run_module_code(code, 
init_globals, run_name,
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py",
 line 135, in _run_module_code
   ERROR neutron_ovn_db_sync_util _run_code(code, mod_globals, init_globals,
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py",
 line 124, in _run_code
   ERROR neutron_ovn_db_sync_util exec(code, run_globals)
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/bin/neutron-ovn-db-sync-util", line 10, in 
   ERROR neutron_ovn_db_sync_util sys.exit(main())
   ERROR neutron_ovn_db_sync_util   File 
"/opt/neutron/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", line 222, in main
   ERROR neutron_ovn_db_sync_util manager.init()
   ERROR neutron_ovn_db_sync_util   File "/opt/neutron/neutron/manager.py", 
line 301, in init
   ERROR neutron_ovn_db_sync_util NeutronManager.get_instance()
   ERROR neutron_ovn_db_sync_util   File "/opt/neutron/neutron/manager.py", 
line 252, in get_instance
   ERROR neutron_ovn_db_sync_util cls._create_instance()
   ERROR neutron_ovn_db_sync_util   File 
"/opt/xena/lib/python3.8/site-packages/oslo_concurrency/lockutils.py", line 
360, in inner
   ERROR neutron_ovn_db_sync_util return f(*args, **kwargs)
   ERROR neutron_ovn_db_sync_util   File "/opt/neutron/neutron/manager.py", 
line 238, in _create_instance
   ERROR neutron_ovn_db_sync_util cls._instance = cls()
   ERROR neutron_ovn_db_sync_util   File "/opt/neutron/neutron/manager.py", 
line 126, in __init__
   ERROR neutron_ovn_db_sync_util plugin = 
self._get_plugin_instance(CORE_PLUGINS_NAMESPACE,
   ERROR neutron_ovn_db_sync_util   File "/opt/neutron/neutron/manager.py", 
line 162, in _get_plugin_instance
   ERROR neutron_ovn_db_sync_util plugin_inst = plugin_class()
   ERROR neutron_ovn_db_sync_util   File 
"/opt/neutron/neutron/quota/resource_registry.py", line 124, in wrapper
   ERROR neutron_ovn_db_sync_util return f(*args, **kwargs)
   ERROR neutron_ovn_db_sync_util   File 
"/opt/neutron/neutron/plugins/ml2/plugin.py", line 279, in __init__
   ERROR neutron_ovn_db_sync_util 
self._verify_service_plugins_requirements()
   ERROR neutron_ovn_db_sync_util   File 
"/opt/neutron/neutron/plugins/ml2/plugin.py", line 311, in 
_verify_service_plugins_requirements
   ERROR neutron_ovn_db_sync_util raise ml2_exc.ExtensionDriverNotFound(
   ERROR neutron_ovn_db_sync_util 
neutron.plugins.ml2.common.exceptions.ExtensionDriverNotFound: Extension driver 
qos 

[Yahoo-eng-team] [Bug 2006953] [NEW] [stable/wallaby] test_established_tcp_session_after_re_attachinging_sg is unstable on ML2/OVS iptables_hybrid job

2023-02-10 Thread Bernard Cafarelli
Public bug reported:

In recent weeks this job started to fail regularly on stable/wallaby (not 100%, 
some backports are passing):
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_established_tcp_session_after_re_attachinging_sg

Most of the time this was seen in neutron-tempest-plugin-scenario-
openvswitch-iptables_hybrid-wallaby job

Two recent backports affected by issue:
https://review.opendev.org/c/openstack/neutron/+/871759
https://review.opendev.org/c/openstack/neutron/+/868087

And some sample logs:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_26b/871759/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/26b64ec/testr_results.html
https://f450bef156b424c8c132-a0541882d2023eca9a1cc07087449de0.ssl.cf1.rackcdn.com/868087/3/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/55a6ead/testr_results.html


Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 10 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 320, in test_established_tcp_session_after_re_attachinging_sg
con.test_connection(should_pass=False)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 202, in test_connection
wait_until_true(
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 85, in wait_until_true
raise WaitTimeout("Timed out after %d seconds" % timeout)
neutron_tempest_plugin.common.utils.WaitTimeout: Timed out after 10 seconds

I wonder if this is similar to
https://bugs.launchpad.net/neutron/+bug/1936911 where the test was
unstable on linuxbridge backend

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure stable

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2006953

Title:
  [stable/wallaby] test_established_tcp_session_after_re_attachinging_sg
  is unstable on ML2/OVS iptables_hybrid job

Status in neutron:
  New

Bug description:
  In recent weeks this job started to fail regularly on stable/wallaby (not 
100%, some backports are passing):
  
neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_established_tcp_session_after_re_attachinging_sg

  Most of the time this was seen in neutron-tempest-plugin-scenario-
  openvswitch-iptables_hybrid-wallaby job

  Two recent backports affected by issue:
  https://review.opendev.org/c/openstack/neutron/+/871759
  https://review.opendev.org/c/openstack/neutron/+/868087

  And some sample logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_26b/871759/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/26b64ec/testr_results.html
  
https://f450bef156b424c8c132-a0541882d2023eca9a1cc07087449de0.ssl.cf1.rackcdn.com/868087/3/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/55a6ead/testr_results.html

  
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 10 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 320, in test_established_tcp_session_after_re_attachinging_sg
  con.test_connection(should_pass=False)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 202, in test_connection
  wait_until_true(
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 85, in wait_until_true
  raise WaitTimeout("Timed out after %d seconds" % 

[Yahoo-eng-team] [Bug 1555670] Re: L3 HA: MessagingTimeout during running rally create_and_delete_routers

2022-12-02 Thread Bernard Cafarelli
Old bug about a rally failure on a specific environment, and many things have
changed since; closing

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555670

Title:
  L3 HA: MessagingTimeout during running  rally
  create_and_delete_routers

Status in neutron:
  Invalid

Bug description:
  While running the rally test create_and_delete_routers (standard test) on an
  env with L3 HA enabled (3 controllers, 1 compute, Liberty), we get
  about 50% failure with "Not enough l3 agents available to ensure HA.
  Minimum required 2, available 1."

  The l3 agent logs are full of errors
  (http://paste.openstack.org/show/490011/), so the agent is considered to be
  dead by the server (http://paste.openstack.org/show/490010/). If concurrency
  for the test is lower than 10, the number of failing tests is reduced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555670/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598337] Re: lbaasv2:The range of weight in member is usually [0, 255], not [0, 256]

2022-12-02 Thread Bernard Cafarelli
neutron-lbaas is retired, closing bug

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598337

Title:
  lbaasv2:The range of weight in member is usually [0,255], not [0,256]

Status in neutron:
  Won't Fix

Bug description:
  Weight in member is created with the range [0,256],
  'weight': {'allow_post': True, 'allow_put': True,
 'default': 1,
 'validate': {'type:range': [0, 256]},
 'convert_to': attr.convert_to_int,
 'is_visible': True},

  Usually, this is [0, 255].

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598337/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1718509] Re: python paste dumping raw input

2022-12-02 Thread Bernard Cafarelli
Removing neutron, as this is purely in common code and in paste itself

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1718509

Title:
  python paste dumping raw input

Status in paste:
  New
Status in python-eventlet package in Ubuntu:
  Confirmed

Bug description:
  juju-7de47d-1-lxd-2:~ telnet localhost 9696
  Trying 127.0.0.1...
  Connected to localhost.
  Escape character is '^]'.
  GET cross_site_scripting.nasl

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 596
  Date: Tue, 19 Sep 2017 20:17:09 GMT
  Connection: close

  Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 481, in 
handle_one_response
  result = self.application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 198, in __call__
  path_info = self.normalize_url(path_info, False)[1]
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 122, in 
normalize_url
  "URL fragments must start with / or http:// (you gave %r)" % url)
  AssertionError: URL fragments must start with / or http:// (you gave 
'cross_site_scripting.nasl')
  Connection closed by foreign host.
  ➜ juju-7de47d-1-lxd-2:~

   juju-7de47d-1-lxd-2:~ telnet localhost 9696
  Trying 127.0.0.1...
  Connected to localhost.
  Escape character is '^]'.
  GET document.cookie%22testgppq=1191;%22

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 602
  Date: Tue, 19 Sep 2017 20:33:26 GMT
  Connection: close

  Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 481, in 
handle_one_response
  result = self.application(self.environ, start_response)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 198, in __call__
  path_info = self.normalize_url(path_info, False)[1]
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 122, in 
normalize_url
  "URL fragments must start with / or http:// (you gave %r)" % url)
  AssertionError: URL fragments must start with / or http:// (you gave 
'document.cookie"testgppq=1191;"')
  Connection closed by foreign host.
  ➜ juju-7de47d-1-lxd-2:~

To manage notifications about this bug go to:
https://bugs.launchpad.net/paste/+bug/1718509/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921514] Re: VPNaaS strongSwan driver does not reload secrets

2022-12-02 Thread Bernard Cafarelli
The patch was merged a while ago and backported in neutron-vpnaas, and it does
not affect neutron itself; updating status

** Changed in: neutron-vpnaas (Ubuntu)
   Status: New => Fix Released

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1921514

Title:
  VPNaaS strongSwan driver does not reload secrets

Status in neutron-vpnaas package in Ubuntu:
  Fix Released

Bug description:
  When a new IPsec Site Connection is added to a VPN Service which already
  hosts a connection, it is not properly propagated to the L3 Agent
  with the vpnaas plugin using the strongSwan driver.

  See following fragment: https://opendev.org/openstack/neutron-
  
vpnaas/src/branch/master/neutron_vpnaas/services/vpn/device_drivers/strongswan_ipsec.py#L159-L171

  `ipsec reload` command only reloads the ipsec.conf configuration. If a
  new connection is added with different PSK credentials, then we also
  need to run `ipsec rereadsecrets`, otherwise charon will try to use
  "%any - " credentials.

  Preparations:
  1. Enable charon filelog in StrongSwan template. Add the following lines to 
/etc/neutron/l3_agent.ini:
  [strongswan]
  strongswan_config_template = /etc/neutron/strongswan.conf.template
  2. Create /etc/neutron/strongswan.conf.template: 
http://paste.openstack.org/show/803952/
  3. AppArmor systems only - temporarily place charon in complain mode in order 
to allow it to write logs to /var/log/neutron directory: aa-complain 
/usr/lib/ipsec/charon
  4. Restart neutron-l3-agent so it will regenerate all VPN configurations with 
logging enabled.

  Steps to reproduce the problem:
  1. Create a new router.
  2. Create a VPN service associated with new router.
  3. Create a IPsec Site Connection and associate it with VPN service.
  4. Create another IPsec Site Connection, with different PSK in the same VPN 
service.

  Expected behavior:
  New IPsec Site Connection should be in Active state.

  Actual behavior:
  New IPsec Site Connection does not start. Authentication errors can be seen 
on both sides. See the following log snippet which should be present in 
/var/log/neutron/neutron-vpnaas-charon-.log: 
http://paste.openstack.org/show/803954/

  Discovered on OpenStack Rocky-based deployment, but this issue still
  seems to be present in master branch of neutron-vpnaas (see the
  opendev.org link above)

  I am attaching a patch which should fix the issue, I have deployed it
  in a test environment and initial tests show that it works correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neutron-vpnaas/+bug/1921514/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996788] Re: The virtual network is broken on the node after neutron-openvswitch-agent is restarted if RPC requests return an error for a while.

2022-11-18 Thread Bernard Cafarelli
** Tags added: ovs

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996788

Title:
  The virtual network is broken on the node after neutron-openvswitch-
  agent is restarted if RPC requests return an error for a while.

Status in neutron:
  Opinion

Bug description:
  We ran into a problem in our OpenStack cluster where traffic does not go 
through the virtual network on the node on which the neutron-openvswitch-agent 
was restarted.
  We had an update from one version of OpenStack to another and by chance 
we had an inconsistency between the DB and neutron-server: any port select from 
the DB returned an error.
  For a while neutron-openvswitch-agent (just after restart) couldn't get any 
information via RPC in its rpc_loop iterations due to the DB/neutron-server 
inconsistency.
  But after updating the database, we ended up with a broken virtual network on 
the node where the neutron-openvswitch-agent was restarted.

  It seems to me that I have found a problematic place in the logic of 
neutron-ovs-agent.
  To demonstrate, it is easier to emulate the RPC request failure from 
neutron-ovs-agent to neutron-server.

  Here are the steps to reproduce on devstack setup from the master branch.
  Two nodes: node0 is controller, node1 is compute.

  0) Prepare a vxlan based network and a VM.
  [root@node0 ~]# openstack network create net1
  [root@node0 ~]# openstack subnet create sub1 --network net1 --subnet-range 
192.168.1.0/24
  [root@node0 ~]# openstack server create vm1 --network net1 --flavor m1.tiny 
--image cirros-0.5.2-x86_64-disk --host node1

  Just after creating the VM, there is a message in the devstack@q-agt
  logs:

  Nov 16 09:53:35 node1 neutron-openvswitch-agent[374810]: INFO
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None
  req-77753b72-cb23-4dae-b68a-7048b63faf8b None None] Assigning 1 as
  local vlan for net-id=710bcfcd-44d9-445d-a895-8ec522f64016, seg-id=466

  So, local vlan which is used on node1 for the network is `1`
  A ping from the node0 to the VM on node1 success works:

  [root@node0 ~]# ip netns exec qdhcp-710bcfcd-44d9-445d-a895-8ec522f64016 ping 
192.168.1.211
  PING 192.168.1.211 (192.168.1.211) 56(84) bytes of data.
  64 bytes from 192.168.1.211: icmp_seq=1 ttl=64 time=1.86 ms
  64 bytes from 192.168.1.211: icmp_seq=2 ttl=64 time=0.891 ms

  1) Now, please don't misunderstand me, I don't want this to be read as "I'm 
patching the code and then clearly something won't work";
  I just want to emulate a problem that is hard to reproduce in a normal 
way, but can happen.
  So, emulate a problem where the method get_resource_by_id returns an error just 
after the neutron-ovs-agent restart (it is actually an RPC-based method):

  [root@node1 neutron]# git diff
  diff --git a/neutron/agent/rpc.py b/neutron/agent/rpc.py
  index 9a133afb07..299eb25981 100644
  --- a/neutron/agent/rpc.py
  +++ b/neutron/agent/rpc.py
  @@ -327,6 +327,11 @@ class CacheBackedPluginApi(PluginApi):

   def get_device_details(self, context, device, agent_id, host=None,
  agent_restarted=False):
  +import time
  +if not hasattr(self, '_stime'):
  +self._stime = time.time()
  +if self._stime + 5 > time.time():
  +raise Exception('Emulate RPC error in get_resource_by_id call')
   port_obj = self.remote_resource_cache.get_resource_by_id(
   resources.PORT, device, agent_restarted)
   if not port_obj:

  
  Restart neutron-openvswitch-agent agent and try to ping after 1-2 mins:

  [root@node1 ~]# systemctl restart devstack@q-agt

  [root@node0 ~]# ip netns exec qdhcp-710bcfcd-44d9-445d-a895-8ec522f64016 ping 
-c 2 192.168.1.234
  PING 192.168.1.234 (192.168.1.234) 56(84) bytes of data.

  --- 192.168.1.234 ping statistics ---
  2 packets transmitted, 0 received, 100% packet loss, time 1058ms

  [root@node0 ~]#

  Ping doesn't work.
  Just after the neutron-ovs-agent restart and when the RPC starts working 
correctly, there are logs:

  Nov 16 09:55:13 node1 neutron-openvswitch-agent[375032]: INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None 
req-135ae96d-905e-485f-8c1f-b0a70616b4c7 None None] Assigning 2 as local vlan 
for net-id=710bcfcd-44d9-445d-a895-8ec522f64016, seg-id=466
  Nov 16 09:55:13 node1 neutron-openvswitch-agent[375032]: INFO 
neutron.agent.securitygroups_rpc [None req-135ae96d-905e-485f-8c1f-b0a70616b4c7 
None None] Preparing filters for devices 
{'40d82f69-274f-4de5-84d9-6290159f288b'}
  Nov 16 09:55:13 node1 neutron-openvswitch-agent[375032]: INFO 
neutron.agent.linux.openvswitch_firewall.firewall [None 
req-135ae96d-905e-485f-8c1f-b0a70616b4c7 None None] Initializing port 
40d82f69-274f-4de5-84d9-6290159f288b that was already initialized.

  So, `Assigning 2 as local vlan` followed by `Initializing port ...
  that was already 

[Yahoo-eng-team] [Bug 1996528] Re: No output for "openstack port list --project project_name" in case of non-admin user

2022-11-15 Thread Bernard Cafarelli
After checking on IRC [0], this is working as designed on the keystone side: 
regular users aren't allowed to list projects.
As this is how the project ID is looked up, non-admin users get 
an empty list.

[0] https://meetings.opendev.org/irclogs/%23openstack-
neutron/%23openstack-neutron.2022-11-15.log.html#t2022-11-15T14:53:04

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996528

Title:
  No output for "openstack port list  --project project_name" in case of
  non-admin user

Status in neutron:
  Won't Fix

Bug description:
  Bug
  
  openstack port list --project project_id command works for both admin and 
non-admin users.
  openstack port list --project project_name command works for only admin users.

  
  Expected behavior
  ==
  openstack port list --project project_name command should work for both admin 
and non-admin users.

  Steps to reproduce
  ===
  1. source openrc admin admin
  2. openstack port list --project <project_name>   [this works]
  3. source openrc demo demo
  4. openstack port list --project <project_id>     [this works]
  5. openstack port list --project <project_name>   [No output]

  When running with the --debug flag, it seems that non-admin (i.e. demo) users
  don't have authorization to list projects, so name resolution from
  project_name to project_id fails. The query is forwarded to neutron with the
  project_name instead of the project_id. Neutron then filters the DB using
  {project_id: project_name} and the query returns an empty result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996528/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1912320] Re: TestTimer breaks VPNaaS functional tests

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912320

Title:
  TestTimer breaks VPNaaS functional tests

Status in neutron:
  Fix Released

Bug description:
  Some functional tests for neutron-vpnaas make use of the NamespaceFixture.
  If the tests are run in combination with a recent neutron version some tests
  fail because the TestTimer raises a TestTimerTimeout even if the namespace
  cleanup finishes before the timeout.

  In the test setup the tox env for dsvm-functional-sswan will normally
  install neutron 17.0.0 (victoria), but for my tests I needed a recent
  neutron, so I installed it as an additional step in the setup of the tox env.

  The test setup steps are like these, on an Ubuntu 20.04 host:

  git clone https://git.openstack.org/openstack-dev/devstack
  git clone https://opendev.org/openstack/neutron
  git clone https://opendev.org/openstack/neutron-vpnaas
  cd neutron-vpnaas
  VENV=dsvm-functional-sswan ./tools/configure_for_vpn_func_testing.sh 
../devstack -i
  tox -e dsvm-functional-sswan --notest
  source .tox/dsvm-functional-sswan/bin/activate
  python -m pip install ../neutron
  deactivate

  Then run the neutron-vpnaas functional tests:

  tox -e dsvm-functional-sswan

  Some tests fail and you see the TestTimerTimeout exception.

  The tests were fine with neutron 17.0.0.
  The TestTimer was introduced later.
  See
  Change set https://review.opendev.org/c/openstack/neutron/+/754938/
  Related bug https://bugs.launchpad.net/neutron/+bug/1838793

  I could narrow the problem with the TestTimer down.
  In at least one neutron-vpnaas test
  
(neutron_vpnaas.tests.functional.strongswan.test_netns_wrapper.TestNetnsWrapper.test_netns_wrap_success)
  the NamespaceFixture is used.
  The TestTimer is set up, the test completes and the namespace is deleted
  successfully before the 5 seconds of the timer are over. But shortly after
  that the timer still fires.

  The problem is the following: on timer start the old signal handler is
  stored (Handler.SIG_DFL in my case) and the remaining time of any existing
  alarm (in my case 0). On exit the signal handler is supposed to be reset
  and the alarm too. But neither happens.
  The signal handler is not set back, because Handler.SIG_DFL is falsy.
  The alarm is not stopped because the old value was 0 (there was no ongoing
  alarm). So in the end the alarm started by TestTimer will eventually be
  signalled.
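
  A minimal sketch of the problematic pattern described above (a hypothetical
  class, not the actual neutron TestTimer code), assuming a context manager
  built around signal.alarm():

  import signal


  class TimerSketch:
      def __enter__(self):
          self._old_handler = signal.signal(signal.SIGALRM, self._timeout)
          self._old_alarm = signal.alarm(5)  # seconds left of a prior alarm, 0 if none
          return self

      def __exit__(self, exc_type, exc, tb):
          # Buggy cleanup as described: SIG_DFL is falsy, so the handler is
          # never restored, and alarm(0) is never called when there was no
          # previous alarm, so the pending alarm still fires after the block.
          if self._old_handler:
              signal.signal(signal.SIGALRM, self._old_handler)
          if self._old_alarm:
              signal.alarm(self._old_alarm)
          # A fix restores both unconditionally:
          #   signal.signal(signal.SIGALRM, self._old_handler)
          #   signal.alarm(self._old_alarm)  # alarm(0) cancels the pending alarm

      def _timeout(self, signum, frame):
          raise RuntimeError("TestTimerTimeout")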

  References:
  Change set where the TestTimer was introduced:
  https://review.opendev.org/c/openstack/neutron/+/754938/
  That related to bug #1838793

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912320/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915341] Re: neutron-linuxbridge-agent not starting due to nf_tables rules

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915341

Title:
  neutron-linuxbridge-agent not starting due to nf_tables rules

Status in neutron:
  Fix Released

Bug description:
  * Description
  When restarting neutron-linuxbridge-agent it fails, because it cannot remove 
nf_tables chains

  * Pre-conditions
  Openstack Ussuri on Ubuntu 20.04 installed as described in 
https://docs.openstack.org/install-guide on real hardware.

  * reproduction steps
  When you remove an instance, some rules seem to remain in the neutronARP-* 
and neutronMAC-* chains. When restarting neutron-linuxbridge-agent afterwards, it 
fails:

  neutron_lib.exceptions.ProcessExecutionError: Exit code: 4; Stdin: ; Stdout: 
; Stderr: ebtables v1.8.4 (nf_tables):  CHAIN_USER_DEL failed (Device or 
resource busy): chain neutronARP-tap0a9b5e3a-21
  INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Stopping Linux 
bridge agent agent.

  After flushing these chains the agent can be started.
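
  For illustration, a possible manual workaround based on the above (an
  assumption, not a verified fix; this assumes the per-port chains live in the
  ebtables nat table used by the agent's ARP protection, and reuses the chain
  name from the error message):

  ebtables -t nat -F neutronARP-tap0a9b5e3a-21
  ebtables -t nat -F neutronMAC-tap0a9b5e3a-21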

  # openstack --version
  openstack 5.2.0

  * severity: blocker

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915341/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940312] Re: Ussuri-only: Network segments are not tried out for allocation

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940312

Title:
  Ussuri-only: Network segments are not tried out for allocation

Status in neutron:
  Fix Released

Bug description:
  * High level description: When we get a list of segments to choose
  from, and the first segment is already allocated, it fails right away
  returning a RetryRequest exception, and the other segments are never
  tried out.

  I explain it a little further on the comments of PatchSet 1 here:
  https://review.opendev.org/c/openstack/neutron/+/803986/1

  This actually works at master due to a side effect of a refactoring
  that was done on
  
https://opendev.org/openstack/neutron/commit/6eaa6d83d7c7f07fd4bf04879c91582de504eff4
  to randomize the selection of segments, but on stable/ussuri, when not
  specifying the provider_network_type, we got into a situation where we
  had segments to allocate in vlan but neutron was allocating vxlan
  instead.

  * Pre-conditions: network_segment_range plugin enabled and several
  vlan project networks created on the system

  * Step-by-step reproduction steps: openstack --os-username
  'project1_admin' --os-password '**' --os-project-name project1
  --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-
  user-domain-name Default --os-project-domain-name Default --os-
  identity-api-version 3 --os-interface internal --os-region-name
  RegionOne network create network11

  * Expected output: network created successfully (there was available
  ranges)

  * Actual output: HttpException: 503, Unable to create the network. No
  tenant network is available for allocation.

  * Version:
    ** OpenStack version: stable/ussuri
    ** Linux distro: Centos 7
    ** Deployment: StarlingX Openstack

  * Perceived severity: Major - System is usable in some circumstances

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940312/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1892861] Re: [neutron-tempest-plugin] If paramiko SSH client connection fails because of authentication, cannot reconnect

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1892861

Title:
  [neutron-tempest-plugin] If paramiko SSH client connection fails
  because of authentication, cannot reconnect

Status in neutron:
  Fix Released

Bug description:
  In the VM boot process, cloud-init copies the SSH keys.

  If the tempest test tries to connect to the VM before the SSH keys are
  copied, the SSH client will raise a
  paramiko.ssh_exception.AuthenticationException. From this point, even
  when the SSH keys are copied into the VM, the SSH client cannot
  reconnect anymore into the VM using the pkey.

  If a bigger sleep time is added manually (to avoid this race
  condition: try to connect when the IP is available in the port but the
  SSH keys are still not present in the VM), the SSH client connects
  without any problem.

  [1]http://paste.openstack.org/show/797127/
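
  For illustration, a minimal retry sketch around the race described above
  (a hypothetical helper, not the neutron-tempest-plugin code; it creates a
  fresh client per attempt and treats the authentication failure as transient
  while cloud-init may still be copying the keys):

  import time

  import paramiko

  def connect_with_retry(host, username, pkey, timeout=60, interval=5):
      deadline = time.time() + timeout
      while True:
          client = paramiko.SSHClient()
          client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
          try:
              client.connect(host, username=username, pkey=pkey, timeout=10)
              return client
          except paramiko.ssh_exception.AuthenticationException:
              client.close()
              if time.time() > deadline:
                  raise
              time.sleep(interval)  # keys may not be in place yet; retry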

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1892861/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1905568] Re: Sanity checks missing port_name while adding tunnel port

2022-10-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905568

Title:
  Sanity checks missing port_name while adding tunnel port

Status in neutron:
  Fix Released

Bug description:
  Functions ovs_vxlan_supported and ovs_vxlan_supported from the
  neutron.cmd.sanity.checks module create a tunnel port using the
  neutron.agent.common.ovs_lib.OVSBridge.add_tunnel_port() method, but
  they are not passing the port name as the first argument. That argument is
  mandatory, so it should be passed there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905568/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1957453] Re: [tempest ] neutron-tempest-plugin-dynamic jobs are failing

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1957453

Title:
  [tempest ] neutron-tempest-plugin-dynamic jobs  are failing

Status in neutron:
  Fix Released

Bug description:
  Failing jobs:
  neutron-tempest-plugin-dynamic-routing
  neutron-tempest-plugin-dynamic-routing-wallaby (occasionally)
  neutron-tempest-plugin-dynamic-routing-xena (occasionally)

  
  logs:

  
https://5cd2a6b7473dd470b4dc-56e56a993b15e32d01b886b8d73f896b.ssl.cf5.rackcdn.com/806689/14/check/neutron-
  tempest-plugin-dynamic-routing/f049e9f/testr_results.html

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_88d/794841/10/check/neutron-
  tempest-plugin-dynamic-routing/88d9e54/testr_results.html

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_beb/806689/14/check/neutron-
  tempest-plugin-dynamic-routing-xena/beb2792/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1957453/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978389] Re: Basic networking in Neutron - max VLAN should be 4094, not 4095

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978389

Title:
  Basic networking in Neutron - max VLAN should be 4094, not 4095

Status in neutron:
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: It reads "Each VLAN has an 
associated numerical ID, between 1 and 4095", but 4095 is reserved and should 
not be used for a VLAN.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - The mailing list: https://lists.openstack.org
   - IRC: 'openstack' channel on OFTC

  ---
  Release: 20.1.1.dev10 on 2017-11-24 14:13:00
  SHA: fda2594fe96c31695b199d5229813a091d2b6698
  Source: 
https://opendev.org/openstack/neutron/src/doc/source/admin/intro-basic-networking.rst
  URL: https://docs.openstack.org/neutron/yoga/admin/intro-basic-networking.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978389/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555524] Re: Any tempest plugin import failure leads other plugin tests fails

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555524

Title:
  Any tempest plugin import failure leads other plugin tests fails

Status in Ironic:
  Invalid
Status in Magnum:
  Invalid
Status in neutron:
  Fix Released
Status in tempest:
  Confirmed

Bug description:
  There is a failure for all projects that run tempest plugin tests. That was
  due to the fwaas tempest plugin having an import error.

  When testr tries to list all tests it also lists the loaded plugin
  tests, so if there is any import error in any plugin, it will lead to
  failures for the other plugins as well.

  I think plugins should work in an isolated way, meaning that if Tempest
  detects an error from any plugin, it should just log the error, ignore that
  plugin's tests and run the others. Tempest already does this while
  registering conf options or loading plugins.

  But we should have the same behaviour for import errors as well.

  error-  http://logs.openstack.org/50/289650/4/check/gate-congress-
  dsvm-api/6a27be7/console.html

  2016-03-10 03:42:34.018 | all-plugin runtests: commands[1] | bash 
tools/pretty_tox.sh --concurrency=4 congress_tempest_tests
  2016-03-10 03:42:36.139 | running testr
  2016-03-10 03:42:39.664 | 
/usr/local/lib/python2.7/dist-packages/tempest_lib/__init__.py:28: 
DeprecationWarning: tempest-lib is deprecated for future bug-fixes and code 
changes in favor of tempest. Please change your imports from tempest_lib to 
tempest.lib
  2016-03-10 03:42:39.664 |   DeprecationWarning)
  2016-03-10 03:42:39.916 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2016-03-10 03:42:39.916 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2016-03-10 03:42:39.916 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
  2016-03-10 03:42:39.916 | 
OS_TEST_LOCK_PATH=${OS_TEST_LOCK_PATH:-${TMPDIR:-'/tmp'}} \
  2016-03-10 03:42:39.917 | ${PYTHON:-python} -m subunit.run discover -t 
${OS_TOP_LEVEL:-./} ${OS_TEST_PATH:-./tempest/test_discover} --list 
  2016-03-10 03:42:39.917 | --- imNon-zero exit code (2) from test listing.
  2016-03-10 03:42:39.949 | perror: testr failed (3)o
  2016-03-10 03:42:39.950 | rt errors ---
  2016-03-10 03:42:39.983 | Failed to import test module: 
neutron_fwaas.tests.tempest_plugin.tests.api.test_fwaas_extensions
  2016-03-10 03:42:39.985 | Traceback (most recent call last):
  2016-03-10 03:42:39.989 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in 
_find_test_path
  2016-03-10 03:42:39.989 | module = self._get_module_from_name(name)
  2016-03-10 03:42:39.989 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
  2016-03-10 03:42:39.989 | __import__(name)
  2016-03-10 03:42:39.989 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 23, in 
  2016-03-10 03:42:39.989 | from 
neutron_fwaas.tests.tempest_plugin.tests.api import base
  2016-03-10 03:42:39.989 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/base.py",
 line 18, in 
  2016-03-10 03:42:39.989 | from neutron_fwaas.tests.tempest_plugin.tests 
import fwaas_client
  2016-03-10 03:42:39.989 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/fwaas_client.py",
 line 25, in 
  2016-03-10 03:42:39.990 | from 
neutron_fwaas.tests.tempest_plugin.services import client
  2016-03-10 03:42:39.990 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/services/client.py",
 line 18, in 
  2016-03-10 03:42:39.990 | from tempest.services.network.json import base
  2016-03-10 03:42:39.990 | ImportError: cannot import name base
  2016-03-10 03:42:39.990 | 
  2016-03-10 03:42:39.990 | Failed to import test module: 
neutron_fwaas.tests.tempest_plugin.tests.scenario.test_fwaas
  2016-03-10 03:42:39.990 | Traceback (most recent call last):
  2016-03-10 03:42:39.990 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 456, in 
_find_test_path
  2016-03-10 03:42:39.990 | module = self._get_module_from_name(name)
  2016-03-10 03:42:39.990 |   File 
"/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 395, in 
_get_module_from_name
  2016-03-10 03:42:39.990 | __import__(name)
  2016-03-10 03:42:39.990 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/scenario/test_fwaas.py",
 line 21, in 
  2016-03-10 03:42:39.991 | from 
neutron_fwaas.tests.tempest_plugin.tests.scenario import base
  2016-03-10 03:42:39.991 |   File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/scenario/base.py",
 line 21, in 
  2016-03-10 03:42:39.991 | from neutron_fwaas.tests.tempest_plugin.tests 
import fwaas_client
  2016-03-10 03:42:39.991 |   File 

[Yahoo-eng-team] [Bug 1823716] Re: url --> endpoint_template in Install and configure controller node in neutron section of nova.conf

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1823716

Title:
  url --> endpoint_template in Install and configure controller node in
  neutron section of nova.conf

Status in neutron:
  Fix Released

Bug description:
  https://docs.openstack.org/neutron/rocky/install/controller-install-
  rdo.html says to set url in the [cinder] section of nova.conf.

  But, according to the provided nova.conf template the endpoint_override 
option should be used instead since
  url is deprecated for removal

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1823716/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1828437] Re: Remove unneeded compatibility conversions

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1828437

Title:
  Remove unneeded compatibility conversions

Status in neutron:
  Fix Released

Bug description:
  In order to remove complexity and cleanup the code, those
  compatibility conversions present for more than two releases can be
  removed now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1828437/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840579] Re: excessive number of dvrs where vm got a fixed ip on floating network

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1840579

Title:
  excessive number of dvrs where vm got a fixed ip on floating network

Status in neutron:
  Fix Released

Bug description:
  We are running into an unexpected situation where the number of dvr
  routers increases to nearly 2000 on a compute node on which some
  instances got a NIC on the floating IP network.

  We are using Queens release,

  neutron-common/xenial,now 2:12.0.5-5~u16.04+mcp155 all [installed,automatic]
  neutron-l3-agent/xenial,now 2:12.0.5-5~u16.04+mcp155 all [installed]
  neutron-metadata-agent/xenial,now 2:12.0.5-5~u16.04+mcp155 all 
[installed,automatic]
  neutron-openvswitch-agent/xenial,now 2:12.0.5-5~u16.04+mcp155 all [installed]
  python-neutron/xenial,now 2:12.0.5-5~u16.04+mcp155 all [installed,automatic]
  python-neutron-fwaas/xenial,xenial,now 2:12.0.1-1.0~u16.04+mcp6 all 
[installed,automatic]
  python-neutron-lib/xenial,xenial,now 1.13.0-1.0~u16.04+mcp9 all 
[installed,automatic]
  python-neutronclient/xenial,xenial,now 1:6.7.0-1.0~u16.04+mcp17 all 
[installed,automatic]

  Currently, my guess is that some application mistakenly invokes rpc
  calls like this
  
https://github.com/openstack/neutron/blob/490471ebd3ac56d0cee164b9c1c1211687e49437/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L166
  with a dvr associated with a floating ip address on a host which has a
  fixed ip address allocated from the floating network (i.e. device_owner
  prefixed with compute:). Then such a router will be kept by this
  
https://github.com/openstack/neutron/blob/490471ebd3ac56d0cee164b9c1c1211687e49437/neutron/db/l3_dvrscheduler_db.py#L427
  function, because `get_subnet_ids_on_router` does not filter out
  router:gateway ports.

  I think this is a bug because as long as we do not have ports with
  specific device owners we should not have a dvr router on it.


  Besides, it is pretty easy to replay this bug.

  First create a dvr router with an external gateway on the floating network.
  Then create one virtual machine with a fixed ip on the floating network.
  Then call `routers_updated_on_host` manually; this dvr will then be created 
on the host where the vm resides, but actually it should not be there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1840579/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1844705] Re: Ironic events notifier needs to be migrated from ironicclient to openstacksdk

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1844705

Title:
  Ironic events notifier needs to be migrated from ironicclient to
  openstacksdk

Status in neutron:
  Fix Released

Bug description:
  Ironic events notifier needs to be migrated from ironicclient to
  openstacksdk

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1844705/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862178] Re: Fullstack tests failing due to "hang" neutron-server process

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862178

Title:
  Fullstack tests failing due to "hang" neutron-server process

Status in neutron:
  Fix Released

Bug description:
  From time to time some fullstack tests are failing, and it seems that the 
problem is a non-responding neutron-server process.
  There is almost nothing in neutron-server's logs in such cases.

  Example of such failure:
  
https://c355270b22583c2d2af0-42801c5a43c64ea303a559bec7f7cdd7.ssl.cf5.rackcdn.com/705903/2/check/neutron-
  fullstack/79cf197/testr_results.html

  neutron-server logs
  
(https://c355270b22583c2d2af0-42801c5a43c64ea303a559bec7f7cdd7.ssl.cf5.rackcdn.com/705903/2/check/neutron-
  fullstack/79cf197/controller/logs/dsvm-fullstack-
  
logs/TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle_egress_/neutron-
  server--2020-02-05--11-00-19-067345_log.txt) ends on:

  2020-02-05 11:00:36.851 3180 INFO neutron.plugins.ml2.managers 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] Initializing driver for 
type 'gre'
  2020-02-05 11:00:36.852 3180 INFO neutron.plugins.ml2.drivers.type_tunnel 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] gre ID ranges: [(1, 1000)]
  2020-02-05 11:00:37.796 3180 INFO neutron.plugins.ml2.managers 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] Initializing driver for 
type 'local'
  2020-02-05 11:00:37.797 3180 INFO neutron.plugins.ml2.managers 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] Initializing driver for 
type 'vlan'
  2020-02-05 11:01:11.476 3180 INFO neutron.plugins.ml2.drivers.type_vlan 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] VlanTypeDriver 
initialization complete
  2020-02-05 11:01:11.479 3180 INFO neutron.plugins.ml2.managers 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] Initializing driver for 
type 'vxlan'
  2020-02-05 11:01:11.480 3180 INFO neutron.plugins.ml2.drivers.type_tunnel 
[req-03044462-4bac-4f8e-acac-d6aae381a3d9 - - - - -] vxlan ID ranges: [(1001, 
2000)]

  So it seems that it didn't properly initialize the ML2 extension
  drivers and mechanism drivers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862178/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885695] Re: [OVS] "vsctl" implementation does not allow empty transactions

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885695

Title:
  [OVS] "vsctl" implementation does not allow empty transactions

Status in neutron:
  Fix Released

Bug description:
  "vsctl" implementation was removed in Train. [1] introduced in Stein a
  bug when using the "vsctl" implementation: a transaction using this
  implementation cannot be empty; it must have at least one operation.

  In [2], if qos and queue are None, the transaction won't have any operation. 
As commented, with the "vsctl" implementation this will raise the following 
exception:
Exit code: 1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: missing command name 
(use --help for help)

  **Only stable Stein is affected with this problem.**

  [1]https://review.opendev.org/#/c/738171/
  [2]https://review.opendev.org/#/c/738171/1/neutron/agent/common/ovs_lib.py@914
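
  A minimal sketch of the kind of guard needed for the "vsctl" backend; the
  method name and the ovsdb transaction/db_destroy helpers are assumed here for
  illustration, this is not the merged fix:

    def _delete_qos_and_queue(self, qos, queue):
        # With the "vsctl" implementation an empty transaction fails with
        # "ovs-vsctl: missing command name", so bail out early.
        if qos is None and queue is None:
            return
        with self.ovsdb.transaction(check_error=True) as txn:
            if qos is not None:
                txn.add(self.ovsdb.db_destroy('QoS', qos['_uuid']))
            if queue is not None:
                txn.add(self.ovsdb.db_destroy('Queue', queue['_uuid']))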

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885695/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885899] Re: test_qos_basic_and_update test is failing

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885899

Title:
  test_qos_basic_and_update test is failing

Status in neutron:
  Fix Released

Bug description:
  It seems that sometimes nc isn't spawned properly on the guest vm.
  Failure example:
  
https://20f4a85411442f4e3555-9f5a5e2736e26bdd8715596753fafe10.ssl.cf2.rackcdn.com/734876/1/check/neutron-tempest-plugin-scenario-openvswitch/a31f86b/testr_results.html

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22line%20487%2C%20in%20ensure_nc_listen%5C%22

  
  Traceback:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_qos.py",
 line 263, in test_qos_basic_and_update
  sleep=1)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py",
 line 79, in wait_until_true
  while not predicate():
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_qos.py",
 line 261, in 
  expected_bw=QoSTest.LIMIT_BYTES_SEC * 2),
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_qos.py",
 line 84, in _check_bw
  self.ensure_nc_listen(ssh_client, port, "tcp")
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 487, in ensure_nc_listen
  utils.wait_until_true(spawn_and_check_process)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py",
 line 85, in wait_until_true
  raise WaitTimeout("Timed out after %d seconds" % timeout)
  neutron_tempest_plugin.common.utils.WaitTimeout: Timed out after 60 seconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885899/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1887992] Re: [neutron-tempest-plugin] glance service failing during the installation

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1887992

Title:
  [neutron-tempest-plugin] glance service failing during the
  installation

Status in neutron:
  Fix Released

Bug description:
  In some neutron-tempest-plugin CI jobs, "glance" service is not starting:
  
https://133eaf5aa1613130d74d-11eee2f9a8da0aa272aa38b865c1ef08.ssl.cf5.rackcdn.com/715482/15/check/neutron-tempest-plugin-api/9c57856/job-output.txt

  Error:
  [ERROR] /opt/stack/devstack/lib/glance:361 g-api did not start

  That could be caused because of a recent patch in devstack:
  https://review.opendev.org/#/c/741258

  According to the owner, that should be fixed with
  https://review.opendev.org/#/c/741687

  In those jobs, the service "tls-proxy" is disabled:
  https://github.com/openstack/neutron-tempest-
  plugin/blob/master/zuul.d/base.yaml

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1887992/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1895196] Re: pecan>=1.4.0 is greater than upper-constraints.txt

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1895196

Title:
  pecan>=1.4.0 is greater than upper-constraints.txt

Status in neutron:
  Fix Released

Bug description:
  The pecan min version in requirements.txt is higher than upper-
  constraints.

  upper-constraints.txt currently has:
  pecan===1.3.3

  https://github.com/openstack/requirements/blob/master/upper-
  constraints.txt

  but requirements.txt in neutron has:
  pecan>=1.4.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1895196/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1908711] Re: [neutron-lib] Bump PyYAML to 5.3.1

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1908711

Title:
  [neutron-lib] Bump PyYAML to 5.3.1

Status in neutron:
  Fix Released

Bug description:
  This is the minimum version required by bandit>=1.6.3, which is the
  minimum bandit version installed by this library.

  LOG:
  
https://629fe5679f8dfd9396e8-e24395b1f226ab7bf437be1dd808e069.ssl.cf2.rackcdn.com/767586/2/check/openstack-
  tox-lower-constraints/456ed34/job-output.txt

  Snippet: http://paste.openstack.org/show/801148/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1908711/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1910717] Re: "test_set_igmp_snooping_flood" fails because "ports_other_config" is None

2022-10-26 Thread Bernard Cafarelli
** Tags removed: neutron-proactive-backport-potential

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1910717

Title:
  "test_set_igmp_snooping_flood" fails because "ports_other_config" is
  None

Status in neutron:
  Fix Released

Bug description:
  "test_set_igmp_snooping_flood" fails because "ports_other_config" is
  None after being set in the previous line.

self.ovs.set_igmp_snooping_flood(port_name, True)
ports_other_config = self.ovs.db_get_val('Port', port_name, 'other_config')

  Log:
  
https://b03ae727f46a68c51347-58d582b1ad987403ec8ac78ac19177df.ssl.cf5.rackcdn.com/769880/1/check/neutron-
  functional-with-uwsgi/157d413/testr_results.html

  Snippet: http://paste.openstack.org/show/801508/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1910717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915310] Re: Resource name and collection name for address groups are incorrectly defined in neutron-lib

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915310

Title:
  Resource name and collection name for address groups are incorrectly
  defined in  neutron-lib

Status in neutron:
  Fix Released

Bug description:
  RESOURCE_NAME and COLLECTION_NAME are incorrectly defined in neutron-
  lib: https://github.com/openstack/neutron-
  
lib/blob/2dabcc5cf7d68bf2a4640f07d5e170aa8b911390/neutron_lib/api/definitions/address_group.py#L25-L26

  Their definitions should have '_' instead of '-'. This incorrect
  definition prevents the proper setup in policy FieldCheck, which uses
  the policy engine match to look for the resource in the API attributes
  table:
  
https://github.com/openstack/neutron/blob/a3dc80b509d72c8d1a3ea007cb657a9e217ba66a/neutron/policy.py#L359-L377
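
  For illustration, the corrected constants would look like the sketch below
  (assuming the usual resource naming for the address-group extension);
  underscores let the policy FieldCheck find the resource in the API
  attributes table:

    RESOURCE_NAME = 'address_group'
    COLLECTION_NAME = 'address_groups'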

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915310/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1918274] Re: Driver VLAN do not create the VlanAllocation registers if "network_vlan_ranges" only specify the network

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1918274

Title:
  Driver VLAN do not create the VlanAllocation registers if
  "network_vlan_ranges" only specify the network

Status in neutron:
  Fix Released

Bug description:
  "network_vlan_ranges" allows to define a list of physical networks and
  VLAN ranges [1]. But it is possible to define just the physical
  network without the VLAN ID ranges; in that case, the VLAN driver
  should take all valid VLAN IDs (from 1 to 4094).

  If no VLAN range is defined, when "VlanTypeDriver._sync_vlan_allocations" is 
called, the "ranges" variable is:
ranges = {'physical_net': []}

  We have two options here (see the sketch below):
  - Modify the sync method.
  - Change the parser to populate the whole range of possible VLAN IDs for 
this physnet when no VLAN range is passed (I think the last option is better).
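
  A rough sketch of the second option, assuming a simple "physnet[:min:max]"
  entry format; the function and constant names are only illustrative:

    MIN_VLAN_TAG = 1
    MAX_VLAN_TAG = 4094

    def parse_network_vlan_range(entry):
        # 'physnet' alone means the whole valid VLAN ID range
        parts = entry.split(':')
        if len(parts) == 1:
            return parts[0], [(MIN_VLAN_TAG, MAX_VLAN_TAG)]
        physnet, vlan_min, vlan_max = parts
        return physnet, [(int(vlan_min), int(vlan_max))]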

  
[1]https://github.com/openstack/neutron/blob/ab78d29c68b3cfdc6732d2ed2bcec0677b1a34f3/neutron/conf/plugins/ml2/drivers/driver_type.py#L68-L76

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1918274/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1726434] Re: delete router from horizon causes critical error in neutron logs

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1726434

Title:
  delete router from horizon causes critical error in neutron logs

Status in neutron:
  Fix Released

Bug description:
  While deleting a router from the horizon interface the following error
  is observed in the neutron-agent-container.

  root@infra1-neutron-agents-container-efc7805b:~# tail -45 
/var/log/neutron/neutron.log
  2017-10-23 17:12:15.472 20348 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/openstack/venvs/neutron-16.0.1/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '20349'] create_process 
/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py:92
  2017-10-23 17:12:15.488 20348 CRITICAL neutron [-] Unhandled error: 
AssertionError: do not call blocking functions from the mainloop
  2017-10-23 17:12:15.488 20348 ERROR neutron Traceback (most recent call last):
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/bin/neutron-keepalived-state-change", line 11, 
in 
  2017-10-23 17:12:15.488 20348 ERROR neutron sys.exit(main())
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/cmd/keepalived_state_change.py",
 line 19, in main
  2017-10-23 17:12:15.488 20348 ERROR neutron keepalived_state_change.main()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 156, in main
  2017-10-23 17:12:15.488 20348 ERROR neutron cfg.CONF.monitor_cidr).start()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/daemon.py",
 line 253, in start
  2017-10-23 17:12:15.488 20348 ERROR neutron self.run()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 69, in run
  2017-10-23 17:12:15.488 20348 ERROR neutron for iterable in self.monitor:
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/async_process.py",
 line 261, in _iter_queue
  2017-10-23 17:12:15.488 20348 ERROR neutron yield queue.get(block=block)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get
  2017-10-23 17:12:15.488 20348 ERROR neutron return waiter.wait()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait
  2017-10-23 17:12:15.488 20348 ERROR neutron return get_hub().switch()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  2017-10-23 17:12:15.488 20348 ERROR neutron return self.greenlet.switch()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
  2017-10-23 17:12:15.488 20348 ERROR neutron self.wait(sleep_time)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
  2017-10-23 17:12:15.488 20348 ERROR neutron presult = 
self.do_poll(seconds)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
  2017-10-23 17:12:15.488 20348 ERROR neutron return self.poll.poll(seconds)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 133, in handle_sigterm
  2017-10-23 17:12:15.488 20348 ERROR neutron self._kill_monitor()
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 130, in _kill_monitor
  2017-10-23 17:12:15.488 20348 ERROR neutron run_as_root=True)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 223, in kill_process
  2017-10-23 17:12:15.488 20348 ERROR neutron execute(['kill', '-%d' % 
signal, pid], run_as_root=run_as_root)
  2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 131, in execute
  2017-10-23 17:12:15.488 20348 ERROR neutron 

[Yahoo-eng-team] [Bug 1824477] Re: dhcp: need to reorder classless static route record

2022-10-26 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1824477

Title:
  dhcp: need to reorder classless static route record

Status in neutron:
  Fix Released

Bug description:
  Dear colleagues,

  if there are Subnet1 with Host1 and Subnet2 with Host2, and Host2 serves 
Network2 behind it, then in order to reach Network2 it is required to add 
Network2 to Subnet1's host_routes with Host2 as the nexthop. In this case 
Neutron will offer in Subnet1 the following set of RFC3442 routes:

  Network2 -> Host2
  Subnet2_cidr -> 0.0.0.0

  which will lead to a failure installing the Network2 route on Host1,
  since at the moment the Network2 route appears there is no route to
  Subnet2 installed yet.

  I've submitted for review the patch, which orders routes in such a way
  that 'connected' routes (0.0.0.0) will be placed first and, thus, all
  subsequent routes will be installed successfully.

  https://review.openstack.org/#/c/651994/
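
  As a minimal illustration of the reordering idea (not the code under
  review): emit 'connected' routes (nexthop 0.0.0.0) before the routes that
  depend on them:

    def order_host_routes(routes):
        # routes: list of dicts with 'destination' and 'nexthop' keys
        connected = [r for r in routes if r['nexthop'] == '0.0.0.0']
        via_gateway = [r for r in routes if r['nexthop'] != '0.0.0.0']
        return connected + via_gateway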

  Thank you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1824477/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1986682] [NEW] [stable/stein][stable/rocky][stable/queens] CI jobs failing

2022-08-16 Thread Bernard Cafarelli
Public bug reported:

Backport of I320ac2306e0f25ff933d8271203e192486062d61 showed that gates in 
stein and older branches are not in good shape currently:
[stable/stein] https://review.opendev.org/c/openstack/neutron/+/852752
  neutron-tempest-plugin-api-stein
  neutron-tempest-plugin-scenario-linuxbridge-stein
  openstack-tox-py35
  neutron-grenade
  neutron-functional-python27
  neutron-grenade-multinode
  neutron-grenade-dvr-multinode
  grenade-py3
[stable/rocky] https://review.opendev.org/c/openstack/neutron/+/852753
  all jobs failed with RETRY_LIMIT
  File 
"/tmp/ansible_zuul_console_payload_jmrk60u_/ansible_zuul_console_payload.zip/ansible/modules/zuul_console.py",
 line 188
conn.send(f'{ZUUL_CONSOLE_PROTO_VERSION}\n'.encode('utf-8'))
  ^
SyntaxError: invalid syntax

 this may have been already fixed recently in rocky (general openstack CI issue)
[stable/queens] https://review.opendev.org/c/openstack/neutron/+/852754
  similar sea of red RETRY_LIMIT and same error as rocky


With recent "Debian Buster with OpenStack Rocky will receive LTS support" mail 
from 
https://lists.openstack.org/pipermail/openstack-discuss/2022-August/029910.html 
we should at least get stein and then rocky back in shape

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1986682

Title:
  [stable/stein][stable/rocky][stable/queens] CI jobs failing

Status in neutron:
  New

Bug description:
  Backport of I320ac2306e0f25ff933d8271203e192486062d61 showed that gates in 
stein and older branches are not in good shape currently:
  [stable/stein] https://review.opendev.org/c/openstack/neutron/+/852752
neutron-tempest-plugin-api-stein
neutron-tempest-plugin-scenario-linuxbridge-stein
openstack-tox-py35
neutron-grenade
neutron-functional-python27
neutron-grenade-multinode
neutron-grenade-dvr-multinode
grenade-py3
  [stable/rocky] https://review.opendev.org/c/openstack/neutron/+/852753
all jobs failed with RETRY_LIMIT
File 
"/tmp/ansible_zuul_console_payload_jmrk60u_/ansible_zuul_console_payload.zip/ansible/modules/zuul_console.py",
 line 188
  conn.send(f'{ZUUL_CONSOLE_PROTO_VERSION}\n'.encode('utf-8'))
^
  SyntaxError: invalid syntax

   this may have been already fixed recently in rocky (general openstack CI 
issue)
  [stable/queens] https://review.opendev.org/c/openstack/neutron/+/852754
similar sea of red RETRY_LIMIT and same error as rocky

  
  With recent "Debian Buster with OpenStack Rocky will receive LTS support" 
mail from 
https://lists.openstack.org/pipermail/openstack-discuss/2022-August/029910.html 
we should at least get stein and then rocky back in shape

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1986682/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1984109] Re: "click" library is missing from requirements

2022-08-11 Thread Bernard Cafarelli
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1984109

Title:
  "click" library is missing from requirements

Status in neutron:
  Won't Fix

Bug description:
  "click" library is used in "ml2ovn_trace", "download_gerrit_change"
  and "migrate_names" scripts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1984109/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967893] [NEW] [stable/yoga] tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest fails in neutron-ovs-tempest-multinode-full job

2022-04-05 Thread Bernard Cafarelli
Public bug reported:

tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest.*
tests fail in a reproducible way in the neutron-ovs-tempest-multinode-full
job (only for yoga branch).

Sample log failure:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0ea/835863/1/check/neutron-ovs-tempest-multinode-full/0ea66ae/testr_results.html
from:
https://review.opendev.org/c/openstack/neutron/+/835863/

From Lajos' review, port-resource-request-groups extension is loaded but
it is missing from the api_extensions list

These tests in this job worked in the first days after yoga branching, but are 
failing since around 2022-03-31:
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-multinode-full=openstack%2Fneutron=stable%2Fyoga

At first glance I did not see any potential culprit in recent neutron
backports, or tempest/neutron-tempest-plugin merged changes

** Affects: neutron
 Importance: Critical
 Assignee: Lajos Katona (lajos-katona)
 Status: Confirmed


** Tags: gate-failure qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1967893

Title:
  [stable/yoga]
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest
  fails in neutron-ovs-tempest-multinode-full job

Status in neutron:
  Confirmed

Bug description:
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest.*
  tests fail in a reproducible way in the neutron-ovs-tempest-multinode-full
  job (only for yoga branch).

  Sample log failure:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0ea/835863/1/check/neutron-ovs-tempest-multinode-full/0ea66ae/testr_results.html
  from:
  https://review.opendev.org/c/openstack/neutron/+/835863/

  From Lajos' review, port-resource-request-groups extension is loaded
  but it is missing from the api_extensions list

  These tests in this job worked in the first days after yoga branching, but 
are failing since around 2022-03-31:
  
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-multinode-full=openstack%2Fneutron=stable%2Fyoga

  At first glance I did not see any potential culprit in recent neutron
  backports, or tempest/neutron-tempest-plugin merged changes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1967893/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1950346] [NEW] [stable/train] neutron-tempest-plugin-designate-scenario-train fails with AttributeError: 'Manager' object has no attribute 'zones_client'

2021-11-09 Thread Bernard Cafarelli
Public bug reported:

This is kind of a follow-up to neutron-tempest-plugin jobs failing in
https://bugs.launchpad.net/neutron/+bug/1948804 as I see that designate
jobs all fail on train branch.

Recent backport example and relevant logs:
https://review.opendev.org/c/openstack/neutron/+/816518/
https://zuul.opendev.org/t/openstack/build/a6a1142368b742248be710f902f541f5

Traceback (most recent call last):

  File "/opt/stack/tempest/tempest/test.py", line 181, in setUpClass
raise value.with_traceback(trace)

  File "/opt/stack/tempest/tempest/test.py", line 171, in setUpClass
cls.setup_clients()

  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py",
 line 50, in setup_clients
cls.dns_client = cls.os_tempest.zones_client

AttributeError: 'Manager' object has no attribute 'zones_client'

I did not see it when testing the fix for 1948804 as DNM patch to run it
did not trigger designate job

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dns gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950346

Title:
  [stable/train] neutron-tempest-plugin-designate-scenario-train  fails
  with AttributeError: 'Manager' object has no attribute 'zones_client'

Status in neutron:
  New

Bug description:
  This is kind of a follow-up to neutron-tempest-plugin jobs failing in
  https://bugs.launchpad.net/neutron/+bug/1948804 as I see that
  designate jobs all fail on train branch.

  Recent backport example and relevant logs:
  https://review.opendev.org/c/openstack/neutron/+/816518/
  https://zuul.opendev.org/t/openstack/build/a6a1142368b742248be710f902f541f5

  Traceback (most recent call last):

File "/opt/stack/tempest/tempest/test.py", line 181, in setUpClass
  raise value.with_traceback(trace)

File "/opt/stack/tempest/tempest/test.py", line 171, in setUpClass
  cls.setup_clients()

File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py",
 line 50, in setup_clients
  cls.dns_client = cls.os_tempest.zones_client

  AttributeError: 'Manager' object has no attribute 'zones_client'

  I did not see it when testing the fix for 1948804 as DNM patch to run
  it did not trigger designate job

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1950346/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1948804] [NEW] [stable/train] neutron-tempest-plugin scenario jobs fail "sudo: guestmount: command not found"

2021-10-26 Thread Bernard Cafarelli
Public bug reported:

These jobs started failing a few days ago on stable/train:

neutron-tempest-plugin-scenario-linuxbridge-train
neutron-tempest-plugin-scenario-openvswitch-train
neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-train
neutron-tempest-plugin-designate-scenario-train

So basically all neutron-tempest-plugin scenario jobs

Here is a sample backport:
https://review.opendev.org/c/openstack/neutron/+/811974
with failed test run:
https://zuul.opendev.org/t/openstack/build/56b44cb5072648fc8e5ad67df22f8c03

+ 
/opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:mount_image:108 
:   sudo -E guestmount -i --add 
/tmp/tmp.lUZGwTOkSx/ubuntu-20.04-minimal-cloudimg-amd64.img --pid-file 
/tmp/tmp.lUZGwTOkSx/pid /tmp/tmp.lUZGwTOkSx/mount
sudo: guestmount: command not found
+ /opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:mount_image:1 
:   cleanup
+ /opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:cleanup:166 :  
 error=1

Maybe something changed in the base image recently? Though newer jobs
look OK

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948804

Title:
  [stable/train] neutron-tempest-plugin scenario jobs fail "sudo:
  guestmount: command not found"

Status in neutron:
  New

Bug description:
  These jobs started failing a few days ago on stable/train:

  neutron-tempest-plugin-scenario-linuxbridge-train
  neutron-tempest-plugin-scenario-openvswitch-train
  neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-train
  neutron-tempest-plugin-designate-scenario-train

  So basically all neutron-tempest-plugin scenario jobs

  Here is a sample backport:
  https://review.opendev.org/c/openstack/neutron/+/811974
  with failed test run:
  https://zuul.opendev.org/t/openstack/build/56b44cb5072648fc8e5ad67df22f8c03

  + 
/opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:mount_image:108 
:   sudo -E guestmount -i --add 
/tmp/tmp.lUZGwTOkSx/ubuntu-20.04-minimal-cloudimg-amd64.img --pid-file 
/tmp/tmp.lUZGwTOkSx/pid /tmp/tmp.lUZGwTOkSx/mount
  sudo: guestmount: command not found
  + 
/opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:mount_image:1 :  
 cleanup
  + /opt/stack/neutron-tempest-plugin/tools/customize_ubuntu_image:cleanup:166 
:   error=1

  Maybe something changed in the base image recently? Though newer jobs
  look OK

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1948804/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946748] [NEW] [stable/stein] neutron-tempest-plugin jobs fail with "AttributeError: module 'tempest.common.utils' has no attribute 'is_network_feature_enabled'"

2021-10-12 Thread Bernard Cafarelli
Public bug reported:

These recent backports started to fail on stein:
neutron-tempest-plugin-scenario-linuxbridge-stein 
neutron-tempest-plugin-scenario-openvswitch-stein 
neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-stein 

Sample backport and log failure:
https://review.opendev.org/c/openstack/neutron/+/808501
https://zuul.opendev.org/t/openstack/build/868e5fd3b7c643b7865a7a40f50c5d95

setUpClass (neutron_tempest_plugin.scenario.test_metadata.MetadataTest)
---

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/opt/stack/tempest/tempest/test.py", line 188, in setUpClass'
b'six.reraise(etype, value, trace)'
b'  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/six.py", line 693, 
in reraise'
b'raise value'
b'  File "/opt/stack/tempest/tempest/test.py", line 165, in setUpClass'
b'cls.skip_checks()'
b'  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_metadata.py",
 line 51, in skip_checks'
b"if not utils.is_network_feature_enabled('ipv6_metadata'):"
b"AttributeError: module 'tempest.common.utils' has no attribute 
'is_network_feature_enabled'"
b''

The relevant line in neutron-tempest-plugin has been there for almost a
year (8bbd743, 2020-11-04), so I suspect a recent change in tempest
configuration (for stein branch)? Maybe related to previous failure we
saw in https://bugs.launchpad.net/tempest/+bug/1946321 and now we use a
version of tempest too old?

Train and newer branches are fine, rocky looks OK (though latest
backport was on Oct 7, so maybe before that started to fail)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946748

Title:
  [stable/stein] neutron-tempest-plugin jobs fail with "AttributeError:
  module 'tempest.common.utils' has no attribute
  'is_network_feature_enabled'"

Status in neutron:
  New

Bug description:
  These recent backports started to fail on stein:
  neutron-tempest-plugin-scenario-linuxbridge-stein 
  neutron-tempest-plugin-scenario-openvswitch-stein 
  neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-stein 

  Sample backport and log failure:
  https://review.opendev.org/c/openstack/neutron/+/808501
  https://zuul.opendev.org/t/openstack/build/868e5fd3b7c643b7865a7a40f50c5d95

  setUpClass (neutron_tempest_plugin.scenario.test_metadata.MetadataTest)
  ---

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/opt/stack/tempest/tempest/test.py", line 188, in setUpClass'
  b'six.reraise(etype, value, trace)'
  b'  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/six.py", line 693, 
in reraise'
  b'raise value'
  b'  File "/opt/stack/tempest/tempest/test.py", line 165, in setUpClass'
  b'cls.skip_checks()'
  b'  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_metadata.py",
 line 51, in skip_checks'
  b"if not utils.is_network_feature_enabled('ipv6_metadata'):"
  b"AttributeError: module 'tempest.common.utils' has no attribute 
'is_network_feature_enabled'"
  b''

  The relevant line in neutron-tempest-plugin has been there for almost
  a year (8bbd743, 2020-11-04), so I suspect a recent change in tempest
  configuration (for stein branch)? Maybe related to previous failure we
  saw in https://bugs.launchpad.net/tempest/+bug/1946321 and now we use
  a version of tempest too old?

  Train and newer branches are fine, rocky looks OK (though latest
  backport was on Oct 7, so maybe before that started to fail)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946748/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885262] Re: Add stateless firewall support to OVN

2021-09-16 Thread Bernard Cafarelli
Updating status on this one, neutron and ovsdbapp work was completed:

https://review.opendev.org/c/openstack/neutron/+/789974/
https://review.opendev.org/c/openstack/ovsdbapp/+/794342/
https://review.opendev.org/c/openstack/releases/+/796473/

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
Milestone: None => xena-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885262

Title:
  Add stateless firewall support to OVN

Status in neutron:
  Fix Released

Bug description:
  In Ussuri, we added support for stateless firewall [1]

  This added support for stateful attribute in security group, with
  needed parts in API extensions "stateful-security-group", database,
  ... [2]

  However, the implementation is currently only done for the iptables
  drivers in ML2/OVS; this limitation is noted in the release notes for
  the feature.

  As proposed and discussed in the Victoria PTG [3], we should add
  support for this attribute in the OVN driver.

  It should be easy to do [4] and give feature support parity in OVN

  [1] https://bugs.launchpad.net/neutron/+bug/1753466
  [2] https://review.opendev.org/#/c/572767/
  [3] https://etherpad.opendev.org/p/neutron-victoria-ptg L162
  [4] http://www.openvswitch.org/support/dist-docs/ovn-northd.8.html
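
  As a rough illustration of the idea (not the merged implementation), the OVN
  driver can derive the ACL action from the security group's stateful flag,
  assuming an OVN version that supports the "allow-stateless" ACL verb; the
  helper below is hypothetical:

    def acl_action_for_security_group(stateful, allow_stateless_supported):
        # Stateful SGs keep the conntrack-based 'allow-related'; stateless SGs
        # use 'allow-stateless' when available, otherwise plain 'allow'.
        if stateful:
            return 'allow-related'
        return 'allow-stateless' if allow_stateless_supported else 'allow'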

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885262/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938262] [NEW] [stable/ussuri] Functional jobs timeout, many tests with fixtures._fixtures.timeout.TimeoutException

2021-07-28 Thread Bernard Cafarelli
Public bug reported:

This started recently (the last backport successfully merged was on July 20th); 
functional tests now fail 100% on recent backports with TIMED_OUT. For example:
https://review.opendev.org/c/openstack/neutron/+/801882
https://review.opendev.org/c/openstack/neutron/+/802528

I confirmed it with dummy change:
https://review.opendev.org/c/openstack/neutron/+/802552

Many tests fail with:
2021-07-27 17:34:20.939074 | controller |   File 
"/usr/lib/python3.6/threading.py", line 295, in wait
2021-07-27 17:34:20.939093 | controller | waiter.acquire()
2021-07-27 17:34:20.939107 | controller |
2021-07-27 17:34:20.939120 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/semaphore.py",
 line 115, in acquire
2021-07-27 17:34:20.939134 | controller | hubs.get_hub().switch()
2021-07-27 17:34:20.939147 | controller |
2021-07-27 17:34:20.939161 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py",
 line 298, in switch
2021-07-27 17:34:20.939177 | controller | return self.greenlet.switch()
2021-07-27 17:34:20.939191 | controller |
2021-07-27 17:34:20.939205 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py",
 line 350, in run
2021-07-27 17:34:20.939219 | controller | self.wait(sleep_time)
2021-07-27 17:34:20.939233 | controller |
2021-07-27 17:34:20.939246 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/poll.py",
 line 80, in wait
2021-07-27 17:34:20.939260 | controller | presult = self.do_poll(seconds)
2021-07-27 17:34:20.939316 | controller |
2021-07-27 17:34:20.939338 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/epolls.py",
 line 31, in do_poll
2021-07-27 17:34:20.939352 | controller | return self.poll.poll(seconds)
2021-07-27 17:34:20.939366 | controller |
2021-07-27 17:34:20.939380 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
2021-07-27 17:34:20.939394 | controller | raise TimeoutException()
2021-07-27 17:34:20.939407 | controller |
2021-07-27 17:34:20.939421 | controller | 
fixtures._fixtures.timeout.TimeoutException

The start of the backtrace depends on the test (namespace creation, ip_lib,
...), so it does look like a generic issue in a related package.

Note this is specific to stable/ussuri: the first backport mentioned
passed in newer branches and in stable/train without issue. Functional
tests are passing in train with the same OS and the same python version 3.6.

Nothing suspicious is logged in the functional tests' output itself.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938262

Title:
  [stable/ussuri] Functional jobs timeout, many tests with
  fixtures._fixtures.timeout.TimeoutException

Status in neutron:
  New

Bug description:
  This started recently (the last backport successfully merged was on July 20th); 
functional tests now fail 100% on recent backports with TIMED_OUT. For example:
  https://review.opendev.org/c/openstack/neutron/+/801882
  https://review.opendev.org/c/openstack/neutron/+/802528

  I confirmed it with dummy change:
  https://review.opendev.org/c/openstack/neutron/+/802552

  Many tests fail with:
  2021-07-27 17:34:20.939074 | controller |   File 
"/usr/lib/python3.6/threading.py", line 295, in wait
  2021-07-27 17:34:20.939093 | controller | waiter.acquire()
  2021-07-27 17:34:20.939107 | controller |
  2021-07-27 17:34:20.939120 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/semaphore.py",
 line 115, in acquire
  2021-07-27 17:34:20.939134 | controller | hubs.get_hub().switch()
  2021-07-27 17:34:20.939147 | controller |
  2021-07-27 17:34:20.939161 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py",
 line 298, in switch
  2021-07-27 17:34:20.939177 | controller | return self.greenlet.switch()
  2021-07-27 17:34:20.939191 | controller |
  2021-07-27 17:34:20.939205 | controller |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/eventlet/hubs/hub.py",
 line 350, in run
  2021-07-27 17:34:20.939219 | controller | self.wait(sleep_time)
  2021-07-27 17:34:20.939233 | controller |
  2021-07-27 17:34:20.939246 | controller |   File 

[Yahoo-eng-team] [Bug 1933086] [NEW] [stable/ussuri] neutron-tempest-slow-py3 fails with "ERROR: This script does not work on Python 2.7"

2021-06-21 Thread Bernard Cafarelli
Public bug reported:

This started a few days ago; all ussuri backports fail 100% on this job. A
sample backport and the relevant log are at [0][1].

It seems this job still uses python2? install_pip fails with:
2021-06-21 07:51:08.542161 | controller | ERROR: This script does not work on 
Python 2.7 The minimum supported Python version is 3.6. Please use 
https://bootstrap.pypa.io/pip/2.7/get-pip.py instead.

This job inherits from tempest-slow-py3 but it seems we are the only
consumers in stable/ussuri [2]

It may be fixable with USE_PYTHON3; as per the ussuri devstack doc it looks
like it was not the default yet at that time [3], but it feels strange to
have to set it when the parent job has -py3 in its title.


[0] https://review.opendev.org/c/openstack/neutron/+/797000
[1] 
https://abfd123151705ca58b12-2045be852d43868eb95da6cc3429b40d.ssl.cf1.rackcdn.com/797000/1/check/neutron-tempest-slow-py3/44d649a/job-output.txt
[2] 
https://zuul.opendev.org/t/openstack/builds?job_name=tempest-slow-py3=stable%2Fussuri
[3] https://docs.openstack.org/devstack/ussuri/configuration.html#use-python3

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1933086

Title:
  [stable/ussuri] neutron-tempest-slow-py3 fails with "ERROR: This
  script does not work on Python 2.7"

Status in neutron:
  New

Bug description:
  This started a few days ago; all ussuri backports fail 100% on this
  job. A sample backport and the relevant log are at [0][1].

  It seems this job still uses python2? install_pip fails with:
  2021-06-21 07:51:08.542161 | controller | ERROR: This script does not work on 
Python 2.7 The minimum supported Python version is 3.6. Please use 
https://bootstrap.pypa.io/pip/2.7/get-pip.py instead.

  This job inherits from tempest-slow-py3 but it seems we are the only
  consumers in stable/ussuri [2]

  It may be fixable with USE_PYTHON3; as per the ussuri devstack doc it
  looks like it was not the default yet at that time [3], but it feels
  strange to have to set it when the parent job has -py3 in its title.

  
  [0] https://review.opendev.org/c/openstack/neutron/+/797000
  [1] 
https://abfd123151705ca58b12-2045be852d43868eb95da6cc3429b40d.ssl.cf1.rackcdn.com/797000/1/check/neutron-tempest-slow-py3/44d649a/job-output.txt
  [2] 
https://zuul.opendev.org/t/openstack/builds?job_name=tempest-slow-py3=stable%2Fussuri
  [3] https://docs.openstack.org/devstack/ussuri/configuration.html#use-python3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1933086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1925451] Re: [stable/rocky] grenade job is broken

2021-06-11 Thread Bernard Cafarelli
** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1925451

Title:
  [stable/rocky] grenade job is broken

Status in neutron:
  Fix Released

Bug description:
  I just saw in the neutron-grenade job in stable/rocky branch error
  like:

  2021-04-22 08:11:54.188 | Complete output from command python setup.py 
egg_info:
  2021-04-22 08:11:54.188 | Couldn't find index page for 'pbr' (maybe 
misspelled?)
  2021-04-22 08:11:54.188 | No local packages or download links found for 
pbr>=1.8
  2021-04-22 08:11:54.188 | Traceback (most recent call last):
  2021-04-22 08:11:54.188 |   File "", line 1, in 
  2021-04-22 08:11:54.188 |   File 
"/tmp/pip-build-9jeigq7n/devstack-tools/setup.py", line 29, in 
  2021-04-22 08:11:54.188 | pbr=True)
  2021-04-22 08:11:54.188 |   File "/usr/lib/python3.5/distutils/core.py", 
line 108, in setup
  2021-04-22 08:11:54.188 | _setup_distribution = dist = klass(attrs)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 269, in __init__
  2021-04-22 08:11:54.188 | 
self.fetch_build_eggs(attrs['setup_requires'])
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 313, in 
fetch_build_eggs
  2021-04-22 08:11:54.188 | replace_conflicting=True,
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 826, in resolve
  2021-04-22 08:11:54.188 | dist = best[req.key] = env.best_match(req, 
ws, installer)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1092, in 
best_match
  2021-04-22 08:11:54.188 | return self.obtain(req, installer)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1104, in obtain
  2021-04-22 08:11:54.188 | return installer(requirement)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 380, in 
fetch_build_egg
  2021-04-22 08:11:54.188 | return cmd.easy_install(req)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 657, 
in easy_install
  2021-04-22 08:11:54.188 | raise DistutilsError(msg)
  2021-04-22 08:11:54.188 | distutils.errors.DistutilsError: Could not find 
suitable distribution for Requirement.parse('pbr>=1.8')

  Failure in
  
https://447f476affa473a2-ba0bbef8fa5bd9d33ddbd8694210833c.ssl.cf5.rackcdn.com/777123/4/check
  /neutron-grenade/e900d19/logs/grenade.sh.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1925451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1924315] [NEW] [stable/rocky] neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-rocky job fails

2021-04-15 Thread Bernard Cafarelli
Public bug reported:

With some other failures (pip mirrors, grenade,
https://bugs.launchpad.net/bugs/1923413 , etc) now fixed, rocky
backports are almost back in green except this job.

Sample backports failing:
https://review.opendev.org/c/openstack/neutron/+/779780
https://review.opendev.org/c/openstack/neutron/+/777123

We have 2 separate issues, first the testr_results.html generation fails on 
subunit2html
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 124173: 
ordinal not in range(128)

This is https://review.opendev.org/c/openstack/os-testr/+/700778 but it
is not included in version used in rocky (and newer do not support py2)

This should probably be not needed if tests are all passing (we just
need a bit more digging to find failing tests)

Second problem, some tests fail 100% or most of the time:
neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON.test_port_forwarding_to_2_servers
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_subport_connectivity
(both seen in all rechecks I checked)
neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_router_east_west_traffic
neutron_tempest_plugin.scenario.test_portsecurity.PortSecurityTest.test_port_security_removed_added
(I saw these a few times)

At least one of the tests fails on missing ncat:
2021-04-15 04:18:09.915121 | controller | 2021-04-15 03:39:35,818 25332 
INFO [tempest.lib.common.ssh] ssh connection to cirros@172.24.5.201 
successfully created
2021-04-15 04:18:09.915129 | controller | 2021-04-15 03:39:36,071 25332 
DEBUG[neutron_tempest_plugin.common.shell] Executing command 'ncat 
--version 2>&1' on local host (timeout=None)...
2021-04-15 04:18:09.915141 | controller | 2021-04-15 03:39:36,080 25332 
DEBUG[neutron_tempest_plugin.common.shell] Command 'ncat --version 2>&1' 
failed (exit_status=127):
2021-04-15 04:18:09.915167 | controller | stderr:
2021-04-15 04:18:09.915180 | controller |
2021-04-15 04:18:09.915188 | controller | stdout:
2021-04-15 04:18:09.915197 | controller | /bin/sh: 1: ncat: not found

Maybe something changed in base image we use?

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1924315

Title:
  [stable/rocky] neutron-tempest-plugin-scenario-openvswitch-
  iptables_hybrid-rocky job fails

Status in neutron:
  New

Bug description:
  With some other failures (pip mirrors, grenade,
  https://bugs.launchpad.net/bugs/1923413 , etc) now fixed, rocky
  backports are almost back in green except this job.

  Sample backports failing:
  https://review.opendev.org/c/openstack/neutron/+/779780
  https://review.opendev.org/c/openstack/neutron/+/777123

  We have 2 separate issues, first the testr_results.html generation fails on 
subunit2html
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 124173: 
ordinal not in range(128)

  This is https://review.opendev.org/c/openstack/os-testr/+/700778 but
  it is not included in version used in rocky (and newer do not support
  py2)

  This should probably be not needed if tests are all passing (we just
  need a bit more digging to find failing tests)

  Second problem, some tests fail 100% or most of the time:
  
neutron_tempest_plugin.scenario.test_port_forwardings.PortForwardingTestJSON.test_port_forwarding_to_2_servers
  neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_subport_connectivity
  (both seen in all rechecks I checked)
  
neutron_tempest_plugin.scenario.test_connectivity.NetworkConnectivityTest.test_connectivity_router_east_west_traffic
  
neutron_tempest_plugin.scenario.test_portsecurity.PortSecurityTest.test_port_security_removed_added
  (I saw these a few times)

  At least one of the tests fails on missing ncat:
  2021-04-15 04:18:09.915121 | controller | 2021-04-15 03:39:35,818 25332 
INFO [tempest.lib.common.ssh] ssh connection to cirros@172.24.5.201 
successfully created
  2021-04-15 04:18:09.915129 | controller | 2021-04-15 03:39:36,071 25332 
DEBUG[neutron_tempest_plugin.common.shell] Executing command 'ncat 
--version 2>&1' on local host (timeout=None)...
  2021-04-15 04:18:09.915141 | controller | 2021-04-15 03:39:36,080 25332 
DEBUG[neutron_tempest_plugin.common.shell] Command 'ncat --version 2>&1' 
failed (exit_status=127):
  2021-04-15 04:18:09.915167 | controller | stderr:
  2021-04-15 04:18:09.915180 | controller |
  2021-04-15 04:18:09.915188 | controller | stdout:
  2021-04-15 04:18:09.915197 | controller | /bin/sh: 1: ncat: not found

  Maybe something changed in base image we use?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1924315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1923413] Re: [stable/rocky and older] Tempest jobs fail on alembic dependency

2021-04-12 Thread Bernard Cafarelli
OK, there were some failed tests, but not that specific issue. Maybe the
stackviz fixes from
https://review.opendev.org/q/Ifee04f28ecee52e74803f1623aba5cfe5ee5ec90
helped here too?

Anyway, marking as invalid

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923413

Title:
  [stable/rocky and older] Tempest jobs fail on alembic dependency

Status in neutron:
  Invalid

Bug description:
  alembic apparently dropped support for some Python versions, and we do not 
have an upper cap on it. So tempest jobs fail with POST_FAILURE, like (rocky):
  https://review.opendev.org/c/openstack/neutron/+/783544
  https://zuul.opendev.org/t/openstack/build/f300e1a82627435da71bc133445bc279

  Collecting alembic>=0.8.10 (from subunit2sql>=0.8.0->stackviz==0.0.1.dev320)
Downloading 
http://mirror.gra1.ovh.opendev.org/wheel/ubuntu-16.04-x86_64/alembic/alembic-1.5.5-py2.py3-none-any.whl
 (156kB)

  :stderr: alembic requires Python
  '!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7' but the
  running Python is 3.5.2

  And similar failure in queens, for example in
  https://review.opendev.org/c/openstack/neutron/+/776455

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1923413] [NEW] [stable/rocky and older] Tempest jobs fail on alembic dependency

2021-04-12 Thread Bernard Cafarelli
Public bug reported:

alembic apparently dropped support for some Python versions, and we do not have an 
upper cap on it. So tempest jobs fail with POST_FAILURE, like (rocky):
https://review.opendev.org/c/openstack/neutron/+/783544
https://zuul.opendev.org/t/openstack/build/f300e1a82627435da71bc133445bc279

Collecting alembic>=0.8.10 (from subunit2sql>=0.8.0->stackviz==0.0.1.dev320)
  Downloading 
http://mirror.gra1.ovh.opendev.org/wheel/ubuntu-16.04-x86_64/alembic/alembic-1.5.5-py2.py3-none-any.whl
 (156kB)

:stderr: alembic requires Python
'!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7' but the running
Python is 3.5.2

And similar failure in queens, for example in
https://review.opendev.org/c/openstack/neutron/+/776455
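
For what it's worth, the refusal itself is just the Requires-Python metadata
check; it can be reproduced with the packaging library (a small sketch using
the values from the log above):

    from packaging.specifiers import SpecifierSet

    # Requires-Python published by alembic 1.5.5 vs. the interpreter on the
    # old Xenial-era node (values taken from the failure output above).
    requires_python = SpecifierSet(
        "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7")
    print("3.5.2" in requires_python)   # False -> pip refuses the wheel
    print("2.7.17" in requires_python)  # True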

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923413

Title:
  [stable/rocky and older] Tempest jobs fail on alembic dependency

Status in neutron:
  New

Bug description:
  alembic apparently dropped support for some Python versions, and we do not have 
an upper cap on it. So tempest jobs fail with POST_FAILURE, like (rocky):
  https://review.opendev.org/c/openstack/neutron/+/783544
  https://zuul.opendev.org/t/openstack/build/f300e1a82627435da71bc133445bc279

  Collecting alembic>=0.8.10 (from subunit2sql>=0.8.0->stackviz==0.0.1.dev320)
Downloading 
http://mirror.gra1.ovh.opendev.org/wheel/ubuntu-16.04-x86_64/alembic/alembic-1.5.5-py2.py3-none-any.whl
 (156kB)

  :stderr: alembic requires Python
  '!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7' but the
  running Python is 3.5.2

  And similar failure in queens, for example in
  https://review.opendev.org/c/openstack/neutron/+/776455

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1923412] [NEW] [stable/stein] Tempest fails with unrecognized arguments: --exclude-regex

2021-04-12 Thread Bernard Cafarelli
Public bug reported:

Stein neutron-tempest-plugin jobs with exclude regexes now fail 100%, for 
example:
https://review.opendev.org/c/openstack/neutron/+/774258
https://zuul.opendev.org/t/openstack/build/cf9c6880833041ffabd7726059875090

all run-test: commands[1] | tempest run --regex 
'(^neutron_tempest_plugin.scenario)|(^tempest.api.compute.servers.test_attach_interfaces)|(^tempest.api.compute.servers.test_multiple_create)'
 --concurrency=3 
'--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)'
usage: tempest run [-h] [--workspace WORKSPACE]
   [--workspace-path WORKSPACE_PATH]
   [--config-file CONFIG_FILE] [--smoke | --regex REGEX]
   [--black-regex BLACK_REGEX]
   [--whitelist-file WHITELIST_FILE]
   [--blacklist-file BLACKLIST_FILE] [--load-list LOAD_LIST]
   [--list-tests] [--concurrency CONCURRENCY]
   [--parallel | --serial] [--save-state] [--subunit]
   [--combine]
tempest run: error: unrecognized arguments: 
--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)
ERROR: InvocationError for command /opt/stack/tempest/.tox/tempest/bin/tempest 
run --regex 
'(^neutron_tempest_plugin.scenario)|(^tempest.api.compute.servers.test_attach_interfaces)|(^tempest.api.compute.servers.test_multiple_create)'
 --concurrency=3 
'--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)'
 (exited with code 2)


Most probably caused by the 
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/775257 change; 
jobs for stein should be updated to keep the old parameter name

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923412

Title:
  [stable/stein] Tempest fails with unrecognized arguments: --exclude-
  regex

Status in neutron:
  New

Bug description:
  Stein neutron-tempest-plugin jobs with exclude regexes now fail 100%, for 
example:
  https://review.opendev.org/c/openstack/neutron/+/774258
  https://zuul.opendev.org/t/openstack/build/cf9c6880833041ffabd7726059875090

  all run-test: commands[1] | tempest run --regex 
'(^neutron_tempest_plugin.scenario)|(^tempest.api.compute.servers.test_attach_interfaces)|(^tempest.api.compute.servers.test_multiple_create)'
 --concurrency=3 
'--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)'
  usage: tempest run [-h] [--workspace WORKSPACE]
 [--workspace-path WORKSPACE_PATH]
 [--config-file CONFIG_FILE] [--smoke | --regex REGEX]
 [--black-regex BLACK_REGEX]
 [--whitelist-file WHITELIST_FILE]
 [--blacklist-file BLACKLIST_FILE] [--load-list LOAD_LIST]
 [--list-tests] [--concurrency CONCURRENCY]
 [--parallel | --serial] [--save-state] [--subunit]
 [--combine]
  tempest run: error: unrecognized arguments: 
--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)
  ERROR: InvocationError for command 
/opt/stack/tempest/.tox/tempest/bin/tempest run --regex 
'(^neutron_tempest_plugin.scenario)|(^tempest.api.compute.servers.test_attach_interfaces)|(^tempest.api.compute.servers.test_multiple_create)'
 --concurrency=3 
'--exclude-regex=(^neutron_tempest_plugin.scenario.test_vlan_transparency.VlanTransparencyTest)'
 (exited with code 2)

  
  Most probably caused by the 
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/775257 change; 
jobs for stein should be updated to keep the old parameter name

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1922727] [NEW] [stable] py2 jobs installation fails - setuptools requires Python '>=3.5' but the running Python is 2.7.17

2021-04-06 Thread Bernard Cafarelli
Public bug reported:

It looks like the latest release of setuptools breaks our stable CI for
train and older branches:

+ lib/infra:install_infra:31   :   local 
PIP_VIRTUAL_ENV=/opt/stack/requirements/.venv
+ lib/infra:install_infra:32   :   '[' '!' -d 
/opt/stack/requirements/.venv ']'
+ lib/infra:install_infra:32   :   virtualenv 
/opt/stack/requirements/.venv
New python executable in /opt/stack/requirements/.venv/bin/python2
Also creating executable in /opt/stack/requirements/.venv/bin/python
Installing setuptools, pkg_resources, pip, wheel...
  Complete output from command /opt/stack/requirements/.venv/bin/python2 - 
setuptools pkg_resources pip wheel:
  Collecting setuptools
/opt/stack/requirements/.venv/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/connectionpool.py:860:
 InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/opt/stack/requirements/.venv/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/connectionpool.py:860:
 InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
/opt/stack/requirements/.venv/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/connectionpool.py:860:
 InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  Downloading 
https://mirror-int.ord.rax.opendev.org/pypifiles/packages/af/e7/02db816dc88c598281bacebbb7ccf2c9f1a6164942e88f1a0fded8643659/setuptools-45.0.0-py2.py3-none-any.whl
 (583kB)
setuptools requires Python '>=3.5' but the running Python is 2.7.17

...Installing setuptools, pkg_resources, pip, wheel...done.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 2375, in 
main()
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 724, in main
symlink=options.symlink)
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 992, in 
create_environment
download=download,
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 922, in 
install_wheel
call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT)
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 817, in 
call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /opt/stack/requirements/.venv/bin/python2 - setuptools 
pkg_resources pip wheel failed with error code 1
Running virtualenv with interpreter /usr/bin/python2
+ lib/infra:install_infra:1:   exit_trap

Sample stable/train failure: 
https://review.opendev.org/c/openstack/neutron/+/778701 with log 
https://zuul.opendev.org/t/openstack/build/e565a5ed50514877a8e88bee05a8e50c
Sample stable/stein failure: 
https://review.opendev.org/c/openstack/neutron/+/774258
Sample stable/queens failure: 
https://review.opendev.org/c/openstack/neutron/+/777124 (UT need a cap on zipp 
there too apparently)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1922727

Title:
  [stable] py2 jobs installation fails - setuptools requires Python
  '>=3.5' but the running Python is 2.7.17

Status in neutron:
  New

Bug description:
  It looks like the latest release of setuptools breaks our stable CI for
  train and older branches:

  + lib/infra:install_infra:31   :   local 
PIP_VIRTUAL_ENV=/opt/stack/requirements/.venv
  + lib/infra:install_infra:32   :   '[' '!' -d 
/opt/stack/requirements/.venv ']'
  + lib/infra:install_infra:32   :   virtualenv 
/opt/stack/requirements/.venv
  New python executable in /opt/stack/requirements/.venv/bin/python2
  Also creating executable in /opt/stack/requirements/.venv/bin/python
  Installing setuptools, pkg_resources, pip, wheel...
Complete output from command /opt/stack/requirements/.venv/bin/python2 - 
setuptools pkg_resources pip wheel:
Collecting setuptools
  
/opt/stack/requirements/.venv/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/connectionpool.py:860:
 InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 
https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  
/opt/stack/requirements/.venv/share/python-wheels/urllib3-1.22-py2.py3-none-any.whl/urllib3/connectionpool.py:860:
 InsecureRequestWarning: Unverified HTTPS request is being made. Adding 
certificate verification is strongly advised. See: 

[Yahoo-eng-team] [Bug 1829042] Re: Some API requests (GET networks) fail with "Accept: application/json; charset=utf-8" header and WebOb>=1.8.0

2021-04-06 Thread Bernard Cafarelli
The follow-up is waiting on other packages that are apparently broken by
newer pecan (and that did not receive many patches this cycle...). We are
waiting for
https://review.opendev.org/c/openstack/requirements/+/747419

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829042

Title:
  Some API requests (GET networks) fail with "Accept: application/json;
  charset=utf-8" header and WebOb>=1.8.0

Status in neutron:
  Confirmed

Bug description:
  Original downstream bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1706222

  On versions newer than Rocky, we have WebOb 1.8 in requirements. This causes 
the following API calls to end with 500 error:
  GET http://localhost:9696/v2.0/ports
  GET http://localhost:9696/v2.0/subnets
  GET http://localhost:9696/v2.0/networks

  when setting an Accept header with charset like "Accept:
  application/json; charset=utf-8"

  These calls do not go through neutron.api.v2 and wsgi.request as other
  resources do; is this something that should be fixed too?

  To reproduce (on master too):
  $ curl -s -H "Accept: application/json; charset=utf-8" -H "X-Auth-Token: 
$OS_TOKEN" "http://localhost:9696/v2.0/ports; | python -mjson.tool
  {
  "NeutronError": {
  "detail": "",
  "message": "The server could not comply with the request since it is 
either malformed or otherwise incorrect.",
  "type": "HTTPNotAcceptable"
  }
  }

  mai 14 18:16:19 devstack neutron-server[1519]: DEBUG neutron.wsgi [-] (1533) 
accepted ('127.0.0.1', 47790) {{(pid=1533) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:956}}
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] content type None
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] Controller 'index' defined 
does not support content_type 'None'. Supported type(s): ['application/json']
  mai 14 18:16:19 devstack neutron-server[1519]: INFO 
neutron.pecan_wsgi.hooks.translation [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] GET failed (client error): 
The server could not comply with the request since it is either malformed or 
otherwise incorrect.
  mai 14 18:16:19 devstack neutron-server[1519]: INFO neutron.wsgi [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] 127.0.0.1 "GET /v2.0/ports 
HTTP/1.1" status: 406  len: 360 time: 0.2243972

  Relevant WebOb warning:
  https://github.com/Pylons/webob/blob/master/docs/whatsnew-1.8.txt#L24
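
  One possible direction (purely illustrative, not a tested fix) would be to
  normalize the header before the framework does its content-type matching,
  since the controller does advertise 'application/json':

    # Hedged sketch: strip media-type parameters such as ";charset=utf-8"
    # from the Accept header value before negotiation. Names are illustrative.
    def normalize_accept(value):
        parts = [p.split(';', 1)[0].strip() for p in value.split(',')]
        return ', '.join(p for p in parts if p)

    print(normalize_accept('application/json; charset=utf-8'))
    # -> 'application/json', which is the type pecan reports as supported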

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865889] Re: [RFE] Routed provider networks support in OVN

2021-03-25 Thread Bernard Cafarelli
Doc is also merged, updating status on this one

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865889

Title:
  [RFE] Routed provider networks support in OVN

Status in neutron:
  Fix Released

Bug description:
  The routed provider networks feature doesn't work properly with the OVN
  backend. While the API doesn't return any errors, all the ports are
  allocated to the same OVN Logical Switch and, besides providing no
  Layer 2 isolation whatsoever, it won't work when multiple segments
  using different physnets are added to such a network.

  The reason for the latter is that, currently, in core OVN, only one
  localnet port is supported per Logical Switch so only one physical net
  can be associated to it. I can think of two different approaches:

  1) Change the OVN mech driver to logically separate Neutron segments:

  a) Create an OVN Logical Switch *per Neutron segment*. This has some
  challenges from a consistency point of view as right now there's a 1:1
  mapping between a Neutron Network and an OVN Logical Switch. Revision
  numbers, maintenance task, OVN DB Sync script, etcetera.

  b) Each of those Logical Switches will have a localnet port associated
  to the physnet of the Neutron segment.

  c) The port still belongs to the parent network, so all the CRUD operations 
over a port will require figuring out which underlying OVN LS applies 
(depending on which segment the port lives in).
  The same goes for other objects (e.g. OVN Load Balancers, gw ports - if 
attaching a multisegment network to a Neutron router as a gateway is a valid 
use case at all).

  e) Deferred allocation. A port can be created in a multisegment
  Neutron network but the IP allocation is deferred to the time where a
  compute node is assigned to an instance. In this case the OVN mech
  driver might need to move around the Logical Switch Port from the
  Logical Switch of the parent to that of the segment where it falls
  (can be prone to race conditions :?).

  
  2) Core OVN changes:

  The current limitation is that right now only one localnet port is
  allowed per Logical Switch so we can't map different physnets to it.
  If we add support for multiple localnet ports in core OVN, we can have
  all the segments living in the same OVN Logical Switch.

  My idea here would be:

  a) Per each Neutron segment, we create a localnet port in the single
  OVN Logical Switch with its physnet and vlan id (if any). Eg.

  name: provnet-f7038db6-7376-4b83-b57b-3f456bea2b80
  options : {network_name=segment1}
  parent_name : []
  port_security   : []
  tag : 2016
  tag_request : []
  type: localnet

  
  name: provnet-84487aa7-5ac7-4f07-877e-1840d325e3de
  options : {network_name=segment2}
  parent_name : []
  port_security   : []
  tag : 2017
  tag_request : []
  type: localnet

  And both ports would belong to the LS corresponding to the
  multisegment Neutron network.

  b) In this case, when ovn-controller sees that a port in that network
  has been bound to it, all it needs to create is the patch port to the
  provider bridge that the bridge mappings configuration dictates.

  E.g

  compute1:bridge-mappings = segment1:br-provider1
  compute2:bridge-mappings = segment2:br-provider2

  When a port in the multisegment network gets bound to compute1, ovn-
  controller will create a patch-port between br-int and br-provider1.
  The restriction here is that on a given hypervisor, only ports
  belonging to the same segment will be present, i.e. we can't mix VMs on
  different segments on the same hypervisor.

  
  c) Minor changes are required in the Neutron side (just creating the localnet 
port upon segment creation).

  
  We need to discuss if the restriction mentioned earlier makes sense. If not, 
perhaps we need to drop this approach completely or look for core OVN 
alternatives.

  
  I'd lean toward approach number 2 as it seems the least invasive in terms of code 
changes, but there's the catch described above that may make it a no-go, or we may 
need to explore other ways to eliminate that restriction somehow in core OVN.
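
  To make option 2.c a bit more concrete, the Neutron-side change would roughly 
look like the sketch below (the helper name is made up, purely to illustrate 
"one localnet port per segment" on the single Logical Switch):

    # Rough sketch only - 'create_localnet_port' is a hypothetical helper,
    # not an existing ovsdbapp/neutron API.
    def create_segment_localnet_port(nb_api, network_id, segment):
        port_name = 'provnet-%s' % segment['id']
        nb_api.create_localnet_port(
            switch='neutron-%s' % network_id,          # the single LS of the network
            port=port_name,
            network_name=segment['physical_network'],  # becomes options:network_name
            tag=segment.get('segmentation_id'))        # VLAN tag, if any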

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1911128] Re: Neutron with ovn driver failed to start on Fedora

2021-03-12 Thread Bernard Cafarelli
https://review.opendev.org/c/openstack/neutron/+/779494 merged, next
periodic run should be green

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1911128

Title:
  Neutron with ovn driver failed to start on Fedora

Status in neutron:
  Fix Released

Bug description:
  Our periodic job neutron-ovn-tempest-ovs-master-fedora has been failing
  for some time, 100% of the time. It fails due to a failed neutron-server
  start. The error in the neutron logs is like below:

  Jan 11 06:30:42.031451 fedora-32-limestone-regionone-0022468210 
neutron-server[88725]: 
/usr/local/lib64/python3.8/site-packages/sqlalchemy/orm/relationships.py:1994: 
SAWarning: Setting backref / back_populates on relationship 
QosNetworkPolicyBinding.port to refer to viewonly relationship 
Port.qos_network_policy_binding should include sync_backref=False set on the 
QosNetworkPolicyBinding.port relationship.  (this warning may be suppressed 
after 10 occurrences)
  Jan 11 06:30:42.031451 fedora-32-limestone-regionone-0022468210 
neutron-server[88725]:   util.warn_limited(
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager [-] Error during 
notification for 
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-627086
 process, after_init: Exception: Could not retrieve schema from 
ssl:10.4.70.225:6641
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager Traceback (most 
recent call last):
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py", line 
197, in _notify_loop
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 272, in post_fork_initialize
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager 
self._wait_for_pg_drop_event()
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 340, in _wait_for_pg_drop_event
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager idl = 
ovsdb_monitor.OvnInitPGNbIdl.from_server(
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 582, in from_server
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager helper = 
idlutils.get_schema_helper(connection_string, schema_name)
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/site-packages/ovsdbapp/backend/ovs_idl/idlutils.py", 
line 204, in get_schema_helper
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager raise 
Exception("Could not retrieve schema from %s" % connection)
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager Exception: Could not 
retrieve schema from ssl:10.4.70.225:6641
  Jan 11 06:30:42.032081 fedora-32-limestone-regionone-0022468210 
neutron-server[89301]: ERROR neutron_lib.callbacks.manager 
  Jan 11 06:30:42.033155 fedora-32-limestone-regionone-0022468210 
neutron-server[89302]: ERROR neutron_lib.callbacks.manager [-] Error during 
notification for 
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.post_fork_initialize-627086
 process, after_init: Exception: Could not retrieve schema from 
ssl:10.4.70.225:6641
  Jan 11 06:30:42.033155 fedora-32-limestone-regionone-0022468210 
neutron-server[89302]: ERROR neutron_lib.callbacks.manager Traceback (most 
recent call last):
  Jan 11 06:30:42.033155 fedora-32-limestone-regionone-0022468210 
neutron-server[89302]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py", line 
197, in _notify_loop
  Jan 11 06:30:42.033155 

[Yahoo-eng-team] [Bug 1915530] Re: Openvswitch firewall - removing and adding security group breaks connectivity

2021-03-10 Thread Bernard Cafarelli
Patch merged in master

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915530

Title:
  Openvswitch firewall - removing and adding security group breaks
  connectivity

Status in neutron:
  Fix Released

Bug description:
  How to reproduce the issue:

  1. use neutron-ovs-agent with openvswitch firewall driver,
  2. spawn vm with SG which has some rule to allow some kind of traffic (can be 
e.g. ssh to the instance)
  3. establish connection according to the rule(s) in SG (e.g. connect through 
ssh to the instance)
  4. keep the established connection and remove the security group from the port,
  5. add the security group again to the port
  6. Your connection will not be "restored" because in the conntrack table 
there are entries like:

  tcp  6 296 ESTABLISHED src=10.0.0.2 dst=10.0.0.44 sport=34660
  dport=22 src=10.0.0.44 dst=10.0.0.2 sport=22 dport=34660 [ASSURED]
  mark=1 zone=4 use=1

  The connection will be restored once that entry is deleted.
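
  For reference, connectivity comes back if that stale entry is flushed by hand 
with conntrack-tools, roughly like this (a sketch; needs root, and the addresses, 
protocol and zone are assumptions to be adjusted to the entry shown above):

    import subprocess

    # Delete the leftover TCP entry in conntrack zone 4 for the addresses
    # from the example above ('-w' selects the conntrack zone).
    subprocess.run(
        ['conntrack', '-D', '-p', 'tcp',
         '-s', '10.0.0.2', '-d', '10.0.0.44',
         '-w', '4'],
        check=False)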

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1916041] [NEW] tempest-slow-py3 ipv6 gate job fails on stable/stein

2021-02-18 Thread Bernard Cafarelli
Public bug reported:

tempest-slow-py3 recently started to fail (100% of the backports I checked) on 8 
IPv6 tests, for example https://review.opendev.org/c/openstack/neutron/+/775102 
and https://review.opendev.org/c/openstack/neutron/+/774258 :
tempest.scenario.test_network_v6.TestGettingAddress
test_dhcp6_stateless_from_os[compute,id-d7e1f858-187c-45a6-89c9-bdafde619a9f,network,slow]
test_dualnet_dhcp6_stateless_from_os[compute,id-76f26acd-9688-42b4-bc3e-cd134c4cb09e,network,slow]
test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network,slow]
test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network,slow]
test_dualnet_slaac_from_os[compute,id-b6399d76-4438-4658-bcf5-0d6c8584fde2,network,slow]
test_multi_prefix_dhcpv6_stateless[compute,id-7ab23f41-833b-4a16-a7c9-5b42fe6d4123,network,slow]
test_multi_prefix_slaac[compute,id-dec222b1-180c-4098-b8c5-cc1b8342d611,network,slow]
test_slaac_from_os[compute,id-2c92df61-29f0-4eaa-bee3-7c65bef62a43,network,slow]

Typical error message:
tempest.lib.exceptions.BadRequest: Bad request
Details: {'type': 'BadRequest', 'message': 'Bad router request: Cidr 
2001:db8::/64 of subnet b7a37aa7-22c6-4eed-b4de-98587b434556 overlaps with cidr 
2001:db8::/64 of subnet d6a91c9f-c268-4724-b2aa-ac5187d167da.', 'detail': ''}


(grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b92/77
 5102/1/check/tempest-slow-py3/b924f4d/testr_results.html sample failure)
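
For illustration, the rejection is the plain CIDR overlap check tripping over 
two subnets that carry literally the same prefix (stdlib sketch):

    import ipaddress

    # The two subnet CIDRs from the error message are identical, so any
    # overlap check rejects adding both to the same router.
    a = ipaddress.ip_network('2001:db8::/64')
    b = ipaddress.ip_network('2001:db8::/64')
    print(a.overlaps(b))  # True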

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1916041

Title:
  tempest-slow-py3 ipv6 gate job fails on stable/stein

Status in neutron:
  New

Bug description:
  tempest-slow-py3 recently started to fail (100% of the backports I checked) on 
8 IPv6 tests, for example 
https://review.opendev.org/c/openstack/neutron/+/775102 and 
https://review.opendev.org/c/openstack/neutron/+/774258 :
  tempest.scenario.test_network_v6.TestGettingAddress
  
test_dhcp6_stateless_from_os[compute,id-d7e1f858-187c-45a6-89c9-bdafde619a9f,network,slow]
  
test_dualnet_dhcp6_stateless_from_os[compute,id-76f26acd-9688-42b4-bc3e-cd134c4cb09e,network,slow]
  
test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network,slow]
  
test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network,slow]
  
test_dualnet_slaac_from_os[compute,id-b6399d76-4438-4658-bcf5-0d6c8584fde2,network,slow]
  
test_multi_prefix_dhcpv6_stateless[compute,id-7ab23f41-833b-4a16-a7c9-5b42fe6d4123,network,slow]
  
test_multi_prefix_slaac[compute,id-dec222b1-180c-4098-b8c5-cc1b8342d611,network,slow]
  
test_slaac_from_os[compute,id-2c92df61-29f0-4eaa-bee3-7c65bef62a43,network,slow]

  Typical error message:
  tempest.lib.exceptions.BadRequest: Bad request
  Details: {'type': 'BadRequest', 'message': 'Bad router request: Cidr 
2001:db8::/64 of subnet b7a37aa7-22c6-4eed-b4de-98587b434556 overlaps with cidr 
2001:db8::/64 of subnet d6a91c9f-c268-4724-b2aa-ac5187d167da.', 'detail': ''}

  
  (grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b92/77
 5102/1/check/tempest-slow-py3/b924f4d/testr_results.html sample failure)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1916041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915272] Re: Split logfiles for neutron-functional-with-uwsgi job are empty

2021-02-16 Thread Bernard Cafarelli
https://review.opendev.org/c/openstack/neutron/+/774865 merged

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915272

Title:
  Split logfiles for neutron-functional-with-uwsgi  job are empty

Status in neutron:
  Fix Released

Bug description:
  As noted in a recent meeting, the split logfiles in the functional job are
  mostly empty (the ones in the dsvm-functional-logs/ directory).

  Some recent examples with failures:
  * 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/testr_results.html
 test run, sample empty log 
http://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.common.test_ovs_lib.BaseOVSTestCase.test_get_egress_min_bw_for_port.txt
  * 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/testr_results.html
 test run, sample empty log 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_respawns.txt

  My initial suspicion is that something is different between the -with-uwsgi env
  and the older neutron-functional, as logs are fine in recent backport
  CI runs for this job, for example https://3bc87cc3e5077ab38d1d-
  
5c64cb79e45e2166ffcdcfd3b17f7c1b.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/victoria
  /neutron-functional/b335138/controller/logs/dsvm-functional-
  
logs/neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_impl_idl.TestSbApi.test_set_get_chassis_metadata_networks.txt

  Note that neutron-fullstack-with-uwsgi has proper split log files, so this 
seems specific to functional. Example for fullstack: 
  
https://4748b0ca40390754af3b-5ee617b83cee2a735a2ed8296934818f.ssl.cf1.rackcdn.com/767186/5/check/neutron-fullstack-with-uwsgi/99b7342/controller/logs/dsvm-fullstack-logs/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915272] [NEW] Split logfiles for neutron-functional-with-uwsgi job are empty

2021-02-10 Thread Bernard Cafarelli
Public bug reported:

As noted in a recent meeting, the split logfiles in the functional job are
mostly empty (the ones in the dsvm-functional-logs/ directory).

Some recent examples with failures:
* 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/testr_results.html
 test run, sample empty log 
http://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.common.test_ovs_lib.BaseOVSTestCase.test_get_egress_min_bw_for_port.txt
* 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/testr_results.html
 test run, sample empty log 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_respawns.txt

My initial suspicion is that something is different between the -with-uwsgi env
and the older neutron-functional, as logs are fine in recent backport CI
runs for this job, for example https://3bc87cc3e5077ab38d1d-
5c64cb79e45e2166ffcdcfd3b17f7c1b.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/victoria
/neutron-functional/b335138/controller/logs/dsvm-functional-
logs/neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_impl_idl.TestSbApi.test_set_get_chassis_metadata_networks.txt

Note that neutron-fullstack-with-uwsgi has proper split log files, so this 
seems specific to functional. Example for fullstack: 
https://4748b0ca40390754af3b-5ee617b83cee2a735a2ed8296934818f.ssl.cf1.rackcdn.com/767186/5/check/neutron-fullstack-with-uwsgi/99b7342/controller/logs/dsvm-fullstack-logs/

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915272

Title:
  Split logfiles for neutron-functional-with-uwsgi  job are empty

Status in neutron:
  New

Bug description:
  As noted in a recent meeting, the split logfiles in the functional job are
  mostly empty (the ones in the dsvm-functional-logs/ directory).

  Some recent examples with failures:
  * 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/testr_results.html
 test run, sample empty log 
http://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_24e/774245/1/check/neutron-functional-with-uwsgi/24e5ec7/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.common.test_ovs_lib.BaseOVSTestCase.test_get_egress_min_bw_for_port.txt
  * 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/testr_results.html
 test run, sample empty log 
https://868396e386f8d4e621fb-ed8719f87969200ae690156046a9dd5f.ssl.cf1.rackcdn.com/774621/1/check/neutron-functional-with-uwsgi/e6c177b/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_respawns.txt

  My initial suspicion is that something is different between the -with-uwsgi env
  and the older neutron-functional, as logs are fine in recent backport
  CI runs for this job, for example https://3bc87cc3e5077ab38d1d-
  
5c64cb79e45e2166ffcdcfd3b17f7c1b.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/victoria
  /neutron-functional/b335138/controller/logs/dsvm-functional-
  
logs/neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_impl_idl.TestSbApi.test_set_get_chassis_metadata_networks.txt

  Note that neutron-fullstack-with-uwsgi has proper split log files, so this 
seems specific to functional. Example for fullstack: 
  
https://4748b0ca40390754af3b-5ee617b83cee2a735a2ed8296934818f.ssl.cf1.rackcdn.com/767186/5/check/neutron-fullstack-with-uwsgi/99b7342/controller/logs/dsvm-fullstack-logs/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1912596] Re: neutron-server report 500 error when update floating ip port forwarding

2021-02-09 Thread Bernard Cafarelli
** Changed in: neutron
 Assignee: (unassigned) => yangjianfeng (yangjianfeng)

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912596

Title:
  neutron-server report 500 error when update floating ip port
  forwarding

Status in neutron:
  Fix Released

Bug description:
  I created two floating IP port forwardings, like below:
  # openstack floating ip port forwarding list 
16c83a6f-8ab5-455f-a744-dccec61e408d -f value
  6173dee8-8e7a-422f-bbb3-35e5726bf879 76a1f0d2-08ad-4975-a2ee-02d877960b35 
192.168.5.4 7687 65530 udp None
  fea45432-321b-4362-b2d4-525680a4b6d9 76a1f0d2-08ad-4975-a2ee-02d877960b35 
192.168.5.4 65535 6554 tcp None

  Then I execute the command below to update one of them:
  # openstack floating ip port forwarding set 
16c83a6f-8ab5-455f-a744-dccec61e408d fea45432-321b-4362-b2d4-525680a4b6d9 
--internal-protocol-port 7687

  The neutron server reports a 500 error:
  HttpException: 500: Server Error for url: 
http://10.2.36.148:19696/v2.0/floatingips/16c83a6f-8ab5-455f-a744-dccec61e408d/port_forwardings/fea45432-321b-4362-b2d4-525680a4b6d9,
 Request Failed: internal server error while processing your request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1903689] Re: [stable/ussuri] Functional job fails - AttributeError: module 'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

2021-01-08 Thread Bernard Cafarelli
The ussuri patch just merged, marking this as fixed

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: bgpvpn
   Status: New => Fix Released

** Changed in: networking-sfc
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1903689

Title:
  [stable/ussuri] Functional job fails - AttributeError: module
  'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

Status in networking-bgpvpn:
  Fix Released
Status in networking-sfc:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Spotted with this review:
  https://review.opendev.org/#/c/748135/
  https://zuul.opendev.org/t/openstack/build/cc5ece62f011441cad5c82926eba1466

  Functional job now fails 100% on stable/ussuri:
  =
  Failures during discovery
  =
  --- import errors ---
  Failed to import test module: 
networking_sfc.tests.functional.db.test_migrations
  Traceback (most recent call last):
File "/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.6/unittest/loader.py", line 369, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/networking_sfc/tests/functional/db/test_migrations.py",
 line 17, in 
  from neutron.tests.functional.db import test_migrations
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/tests/functional/db/test_migrations.py",
 line 33, in 
  from neutron.db.migration.models import head as head_models
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/migration/models/head.py",
 line 31, in 
  from neutron.db import l3_dvrscheduler_db  # noqa
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvrscheduler_db.py",
 line 34, in 
  from neutron.db import l3_dvr_db
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvr_db.py",
 line 45, in 
  from neutron.db import l3_db
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_db.py",
 line 56, in 
  from neutron.objects import ports as port_obj
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/objects/ports.py",
 line 23, in 
  from neutron.common import _constants
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/common/_constants.py",
 line 79, in 
  constants.DEVICE_OWNER_DISTRIBUTED]
  AttributeError: module 'neutron_lib.constants' has no attribute 
'DEVICE_OWNER_DISTRIBUTED'
  [...]

  
  Possible fix, capping neutron-lib for this branch?

  
  neutron-fwaas has similar issue:
  https://review.opendev.org/#/c/752882/
  https://zuul.opendev.org/t/openstack/build/7fdfc3ab0f0a458ea296d25444d895b1

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1903689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1902843] Re: [rocky]Rally job is broken in the Neutron stable/rocky branch

2020-11-16 Thread Bernard Cafarelli
*** This bug is a duplicate of bug 1902775 ***
https://bugs.launchpad.net/bugs/1902775

** This bug has been marked a duplicate of bug 1902775
   neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1902843

Title:
  [rocky]Rally job is broken in the Neutron stable/rocky branch

Status in neutron:
  Confirmed

Bug description:
  It fails to deploy with an error like:

  2020-11-04 04:55:48.070581 | controller | Traceback (most recent call last):
  2020-11-04 04:55:48.070657 | controller |   File "/usr/local/bin/rally", line 
7, in 
  2020-11-04 04:55:48.070689 | controller | from rally.cli.main import main
  2020-11-04 04:55:48.070718 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/main.py", line 20, in 
  2020-11-04 04:55:48.070748 | controller | from rally.cli import cliutils
  2020-11-04 04:55:48.070777 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/cliutils.py", line 28, in 

  2020-11-04 04:55:48.070806 | controller | from rally import api
  2020-11-04 04:55:48.070835 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/api.py", line 29, in 
  2020-11-04 04:55:48.070864 | controller | from rally.common import logging
  2020-11-04 04:55:48.070892 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/common/logging.py", line 337
  2020-11-04 04:55:48.070921 | controller | f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1902843/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1902775] Re: neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax

2020-11-16 Thread Bernard Cafarelli
https://review.opendev.org/761391 is merged, marking this fixed

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1902775

Title:
  neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax

Status in neutron:
  Fix Released

Bug description:
  Now that the linuxbridge job is fixed on that branch, it is the rally job's turn. 
Some recent failures:
  https://zuul.opendev.org/t/openstack/build/9d0eb20faf0d47128e123abda6c2f894
  https://zuul.opendev.org/t/openstack/build/f04ac29742cc4750ada366ebba1f2d8b

  Same error:
  2020-11-03 16:47:03.768342 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:17 :   echo_summary 
'Initializing Rally-OpenStack'
  2020-11-03 16:47:03.770975 | controller | ++ ./stack.sh:echo_summary:440  
:   [[ -t 3 ]]
  2020-11-03 16:47:03.773583 | controller | ++ ./stack.sh:echo_summary:446  
:   echo -e Initializing Rally-OpenStack
  2020-11-03 16:47:03.776354 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:18 :   init_rally
  2020-11-03 16:47:03.779017 | controller | ++ 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:135 :   
recreate_database rally utf8
  2020-11-03 16:47:03.781742 | controller | ++ 
lib/database:recreate_database:112   :   local db=rally
  2020-11-03 16:47:03.784508 | controller | ++ 
lib/database:recreate_database:113   :   recreate_database_mysql rally
  2020-11-03 16:47:03.786891 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:55 :   local db=rally
  2020-11-03 16:47:03.789347 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:56 :   mysql -uroot 
-psecretdatabase -h127.0.0.1 -e 'DROP DATABASE IF EXISTS rally;'
  2020-11-03 16:47:03.793603 | controller | mysql: [Warning] Using a password 
on the command line interface can be insecure.
  2020-11-03 16:47:03.827334 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:57 :   mysql -uroot 
-psecretdatabase -h127.0.0.1 -e 'CREATE DATABASE rally CHARACTER SET utf8;'
  2020-11-03 16:47:03.830713 | controller | mysql: [Warning] Using a password 
on the command line interface can be insecure.
  2020-11-03 16:47:03.863165 | controller | ++ 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:137 :   rally 
--config-file /etc/rally/rally.conf db recreate
  2020-11-03 16:47:04.675544 | controller | Traceback (most recent call last):
  2020-11-03 16:47:04.675631 | controller |   File "/usr/local/bin/rally", line 
7, in 
  2020-11-03 16:47:04.675662 | controller | from rally.cli.main import main
  2020-11-03 16:47:04.675691 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/main.py", line 20, in 
  2020-11-03 16:47:04.675719 | controller | from rally.cli import cliutils
  2020-11-03 16:47:04.675746 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/cliutils.py", line 28, in 

  2020-11-03 16:47:04.675774 | controller | from rally import api
  2020-11-03 16:47:04.675801 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/api.py", line 29, in 
  2020-11-03 16:47:04.675829 | controller | from rally.common import logging
  2020-11-03 16:47:04.675856 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/common/logging.py", line 337
  2020-11-03 16:47:04.675883 | controller | f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
  2020-11-03 16:47:04.675910 | controller | 
^
  2020-11-03 16:47:04.675949 | controller | SyntaxError: invalid syntax
  2020-11-03 16:47:04.709052 | controller | + 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap
  2020-11-03 16:47:04.711587 | controller | + ./stack.sh:exit_trap:521  
   :   local r=1

  Looks like we may need to pin some other component?
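
  The SyntaxError itself is simply the f-string in rally's logging module being 
parsed by a Python 2.7 interpreter; a two-line illustration:

    # f-strings only exist from Python 3.6 on; merely compiling a module that
    # contains one fails on older interpreters, before anything runs.
    src = 'msg = f"Module moved since Rally"'
    compile(src, '<rally/common/logging.py>', 'exec')  # SyntaxError on py2.7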

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1902775/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1903689] Re: [stable/ussuri] Functional job fails - AttributeError: module 'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

2020-11-10 Thread Bernard Cafarelli
openstack-tox-py38 job for networking-bgpvpn stable/ussuri shows similar error:
https://review.opendev.org/#/c/743487
https://zuul.opendev.org/t/openstack/build/8628803415a34b6e84fa3635eda6946b

** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1903689

Title:
  [stable/ussuri] Functional job fails - AttributeError: module
  'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

Status in networking-bgpvpn:
  New
Status in networking-sfc:
  New
Status in neutron:
  New

Bug description:
  Spotted with this review:
  https://review.opendev.org/#/c/748135/
  https://zuul.opendev.org/t/openstack/build/cc5ece62f011441cad5c82926eba1466

  Functional job now fails 100% on stable/ussuri:
  =
  Failures during discovery
  =
  --- import errors ---
  Failed to import test module: 
networking_sfc.tests.functional.db.test_migrations
  Traceback (most recent call last):
File "/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.6/unittest/loader.py", line 369, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/networking_sfc/tests/functional/db/test_migrations.py",
 line 17, in 
  from neutron.tests.functional.db import test_migrations
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/tests/functional/db/test_migrations.py",
 line 33, in 
  from neutron.db.migration.models import head as head_models
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/migration/models/head.py",
 line 31, in 
  from neutron.db import l3_dvrscheduler_db  # noqa
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvrscheduler_db.py",
 line 34, in 
  from neutron.db import l3_dvr_db
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvr_db.py",
 line 45, in 
  from neutron.db import l3_db
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_db.py",
 line 56, in 
  from neutron.objects import ports as port_obj
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/objects/ports.py",
 line 23, in 
  from neutron.common import _constants
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/common/_constants.py",
 line 79, in 
  constants.DEVICE_OWNER_DISTRIBUTED]
  AttributeError: module 'neutron_lib.constants' has no attribute 
'DEVICE_OWNER_DISTRIBUTED'
  [...]

  
  Possible fix, capping neutron-lib for this branch?

  
  neutron-fwaas has similar issue:
  https://review.opendev.org/#/c/752882/
  https://zuul.opendev.org/t/openstack/build/7fdfc3ab0f0a458ea296d25444d895b1

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1903689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1903689] [NEW] [stable/ussuri] Functional job fails - AttributeError: module 'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

2020-11-10 Thread Bernard Cafarelli
Public bug reported:

Spotted with this review:
https://review.opendev.org/#/c/748135/
https://zuul.opendev.org/t/openstack/build/cc5ece62f011441cad5c82926eba1466

Functional job now fails 100% on stable/ussuri:
=
Failures during discovery
=
--- import errors ---
Failed to import test module: networking_sfc.tests.functional.db.test_migrations
Traceback (most recent call last):
  File "/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
  File "/usr/lib/python3.6/unittest/loader.py", line 369, in 
_get_module_from_name
__import__(name)
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/networking_sfc/tests/functional/db/test_migrations.py",
 line 17, in 
from neutron.tests.functional.db import test_migrations
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/tests/functional/db/test_migrations.py",
 line 33, in 
from neutron.db.migration.models import head as head_models
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/migration/models/head.py",
 line 31, in 
from neutron.db import l3_dvrscheduler_db  # noqa
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvrscheduler_db.py",
 line 34, in 
from neutron.db import l3_dvr_db
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvr_db.py",
 line 45, in 
from neutron.db import l3_db
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_db.py",
 line 56, in 
from neutron.objects import ports as port_obj
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/objects/ports.py",
 line 23, in 
from neutron.common import _constants
  File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/common/_constants.py",
 line 79, in 
constants.DEVICE_OWNER_DISTRIBUTED]
AttributeError: module 'neutron_lib.constants' has no attribute 
'DEVICE_OWNER_DISTRIBUTED'
[...]


Possible fix, capping neutron-lib for this branch?


neutron-fwaas has similar issue:
https://review.opendev.org/#/c/752882/
https://zuul.opendev.org/t/openstack/build/7fdfc3ab0f0a458ea296d25444d895b1

** Affects: networking-sfc
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1903689

Title:
  [stable/ussuri] Functional job fails - AttributeError: module
  'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED'

Status in networking-sfc:
  New
Status in neutron:
  New

Bug description:
  Spotted with this review:
  https://review.opendev.org/#/c/748135/
  https://zuul.opendev.org/t/openstack/build/cc5ece62f011441cad5c82926eba1466

  Functional job now fails 100% on stable/ussuri:
  =
  Failures during discovery
  =
  --- import errors ---
  Failed to import test module: 
networking_sfc.tests.functional.db.test_migrations
  Traceback (most recent call last):
File "/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  module = self._get_module_from_name(name)
File "/usr/lib/python3.6/unittest/loader.py", line 369, in 
_get_module_from_name
  __import__(name)
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/networking_sfc/tests/functional/db/test_migrations.py",
 line 17, in 
  from neutron.tests.functional.db import test_migrations
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/tests/functional/db/test_migrations.py",
 line 33, in 
  from neutron.db.migration.models import head as head_models
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/migration/models/head.py",
 line 31, in 
  from neutron.db import l3_dvrscheduler_db  # noqa
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvrscheduler_db.py",
 line 34, in 
  from neutron.db import l3_dvr_db
File 
"/home/zuul/src/opendev.org/openstack/networking-sfc/.tox/dsvm-functional/lib/python3.6/site-packages/neutron/db/l3_dvr_db.py",
 line 45, in 
  from neutron.db import l3_db
File 

[Yahoo-eng-team] [Bug 1902775] [NEW] neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax

2020-11-03 Thread Bernard Cafarelli
Public bug reported:

Now that the linuxbridge job is fixed on that branch, it is the rally job's turn. Some recent failures:
https://zuul.opendev.org/t/openstack/build/9d0eb20faf0d47128e123abda6c2f894
https://zuul.opendev.org/t/openstack/build/f04ac29742cc4750ada366ebba1f2d8b

Same error:
2020-11-03 16:47:03.768342 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:17 :   echo_summary 
'Initializing Rally-OpenStack'
2020-11-03 16:47:03.770975 | controller | ++ ./stack.sh:echo_summary:440
  :   [[ -t 3 ]]
2020-11-03 16:47:03.773583 | controller | ++ ./stack.sh:echo_summary:446
  :   echo -e Initializing Rally-OpenStack
2020-11-03 16:47:03.776354 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:18 :   init_rally
2020-11-03 16:47:03.779017 | controller | ++ 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:135 :   
recreate_database rally utf8
2020-11-03 16:47:03.781742 | controller | ++ lib/database:recreate_database:112 
  :   local db=rally
2020-11-03 16:47:03.784508 | controller | ++ lib/database:recreate_database:113 
  :   recreate_database_mysql rally
2020-11-03 16:47:03.786891 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:55 :   local db=rally
2020-11-03 16:47:03.789347 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:56 :   mysql -uroot 
-psecretdatabase -h127.0.0.1 -e 'DROP DATABASE IF EXISTS rally;'
2020-11-03 16:47:03.793603 | controller | mysql: [Warning] Using a password on 
the command line interface can be insecure.
2020-11-03 16:47:03.827334 | controller | ++ 
lib/databases/mysql:recreate_database_mysql:57 :   mysql -uroot 
-psecretdatabase -h127.0.0.1 -e 'CREATE DATABASE rally CHARACTER SET utf8;'
2020-11-03 16:47:03.830713 | controller | mysql: [Warning] Using a password on 
the command line interface can be insecure.
2020-11-03 16:47:03.863165 | controller | ++ 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:137 :   rally 
--config-file /etc/rally/rally.conf db recreate
2020-11-03 16:47:04.675544 | controller | Traceback (most recent call last):
2020-11-03 16:47:04.675631 | controller |   File "/usr/local/bin/rally", line 
7, in 
2020-11-03 16:47:04.675662 | controller | from rally.cli.main import main
2020-11-03 16:47:04.675691 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/main.py", line 20, in 
2020-11-03 16:47:04.675719 | controller | from rally.cli import cliutils
2020-11-03 16:47:04.675746 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/cli/cliutils.py", line 28, in 

2020-11-03 16:47:04.675774 | controller | from rally import api
2020-11-03 16:47:04.675801 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/api.py", line 29, in 
2020-11-03 16:47:04.675829 | controller | from rally.common import logging
2020-11-03 16:47:04.675856 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/rally/common/logging.py", line 337
2020-11-03 16:47:04.675883 | controller | f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
2020-11-03 16:47:04.675910 | controller |   
  ^
2020-11-03 16:47:04.675949 | controller | SyntaxError: invalid syntax
2020-11-03 16:47:04.709052 | controller | + 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap
2020-11-03 16:47:04.711587 | controller | + ./stack.sh:exit_trap:521
 :   local r=1

Looks like we may need to pin some other component?
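
For context, the traceback points at an f-string, which is Python 3.6+ syntax;
under the Python 2.7 interpreter used here the module fails at parse time,
before any rally code runs. A trivial, self-contained illustration (not rally
code):

    # Running this with python2.7 fails with "SyntaxError: invalid syntax" on
    # the f-string line, matching the rally/common/logging.py traceback above.
    target = "rally.common.sshutils"
    print(f"Module `{target}` moved")

So pinning seems plausible: either keep a rally release that still supports
Python 2.7 on this branch, or switch the job to Python 3.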

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1902775

Title:
  neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax

Status in neutron:
  New

Bug description:
  Now that the linuxbridge job is fixed on that branch, it is the rally job's turn. Some recent failures:
  https://zuul.opendev.org/t/openstack/build/9d0eb20faf0d47128e123abda6c2f894
  https://zuul.opendev.org/t/openstack/build/f04ac29742cc4750ada366ebba1f2d8b

  Same error:
  2020-11-03 16:47:03.768342 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:17 :   echo_summary 
'Initializing Rally-OpenStack'
  2020-11-03 16:47:03.770975 | controller | ++ ./stack.sh:echo_summary:440  
:   [[ -t 3 ]]
  2020-11-03 16:47:03.773583 | controller | ++ ./stack.sh:echo_summary:446  
:   echo -e Initializing Rally-OpenStack
  2020-11-03 16:47:03.776354 | controller | ++ 
/opt/stack/rally-openstack/devstack/plugin.sh:source:18 :   init_rally
  2020-11-03 16:47:03.779017 | controller | ++ 
/opt/stack/rally-openstack/devstack/lib/rally:init_rally:135 :   
recreate_database rally utf8
  2020-11-03 16:47:03.781742 | controller | ++ 
lib/database:recreate_database:112   :   local db=rally
  2020-11-03 

[Yahoo-eng-team] [Bug 1902512] [NEW] neutron-ovn-tripleo-ci-centos-8-containers-multinode fails on private network creation (mtu size)

2020-11-02 Thread Bernard Cafarelli
Public bug reported:

I saw the same error on recent runs of this job, so it seems to fail pretty
reliably, for example:
https://16132fc56e10b0b2e527-31515d5a2aacd89ce671710b35dfe46f.ssl.cf2.rackcdn.com/758098/7/check/neutron-ovn-tripleo-ci-centos-8-containers-multinode/4c5218e/job-output.txt
https://zuul.opendev.org/t/openstack/build/f1898afb8970451d9bf48aefe8d3289c/logs

TASK [os_tempest : Ensure private network exists] **
Thursday 29 October 2020  10:21:23 + (0:00:00.091)   1:39:11.917 **
FAILED - RETRYING: Ensure private network exists (5 retries left).
FAILED - RETRYING: Ensure private network exists (4 retries left).
FAILED - RETRYING: Ensure private network exists (3 retries left).
FAILED - RETRYING: Ensure private network exists (2 retries left).
FAILED - RETRYING: Ensure private network exists (1 retries left).
fatal: [undercloud -> 127.0.0.2]: FAILED! => {
"attempts": 5,
"changed": false
}

MSG:

BadRequestException: 400: Client Error for url:
http://192.168.24.30:9696/v2.0/networks.json, Invalid input for
operation: Requested MTU is too big, maximum is 1292.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1902512

Title:
  neutron-ovn-tripleo-ci-centos-8-containers-multinode fails on private
  network creation (mtu size)

Status in neutron:
  New

Bug description:
  I saw the same error on recent runs of this job, so it seems to fail pretty
reliably, for example:
  
https://16132fc56e10b0b2e527-31515d5a2aacd89ce671710b35dfe46f.ssl.cf2.rackcdn.com/758098/7/check/neutron-ovn-tripleo-ci-centos-8-containers-multinode/4c5218e/job-output.txt
  
https://zuul.opendev.org/t/openstack/build/f1898afb8970451d9bf48aefe8d3289c/logs

  TASK [os_tempest : Ensure private network exists] 
**
  Thursday 29 October 2020  10:21:23 + (0:00:00.091)   1:39:11.917 
**
  FAILED - RETRYING: Ensure private network exists (5 retries left).
  FAILED - RETRYING: Ensure private network exists (4 retries left).
  FAILED - RETRYING: Ensure private network exists (3 retries left).
  FAILED - RETRYING: Ensure private network exists (2 retries left).
  FAILED - RETRYING: Ensure private network exists (1 retries left).
  fatal: [undercloud -> 127.0.0.2]: FAILED! => {
  "attempts": 5,
  "changed": false
  }

  MSG:

  BadRequestException: 400: Client Error for url:
  http://192.168.24.30:9696/v2.0/networks.json, Invalid input for
  operation: Requested MTU is too big, maximum is 1292.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1902512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1899917] [NEW] neutron-tempest-plugin-scenario-linuxbridge jobs are broken on rocky/queens branches

2020-10-15 Thread Bernard Cafarelli
Public bug reported:

Sample rocky failure: https://review.opendev.org/#/c/757062/
neutron-tempest-plugin-scenario-linuxbridge-rocky   ERROR Unable to find 
role in 
/var/lib/zuul/builds/c1394a26f4254553ad8253f923e9fe96/ansible/pre_playbook_3/role_0/neutron
 in 1m 12s

Sample queens failure: https://review.opendev.org/#/c/757063/
neutron-tempest-plugin-scenario-linuxbridge-queens  ERROR Unable to find 
role in 
/var/lib/zuul/builds/c7ba6995cd1448d5be2fd0e2373d22e7/ansible/pre_playbook_3/role_0/neutron
 in 1m 05s

Sadly, no additional logs are available, and I do not see that failure on
more recent branches

The job definition itself does not seem to have any specific roles, so I
am not sure what changed (and broke)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899917

Title:
  neutron-tempest-plugin-scenario-linuxbridge jobs are broken on
  rocky/queens branches

Status in neutron:
  New

Bug description:
  Sample rocky failure: https://review.opendev.org/#/c/757062/
  neutron-tempest-plugin-scenario-linuxbridge-rocky ERROR Unable to find 
role in 
/var/lib/zuul/builds/c1394a26f4254553ad8253f923e9fe96/ansible/pre_playbook_3/role_0/neutron
 in 1m 12s

  Sample queens failure: https://review.opendev.org/#/c/757063/
  neutron-tempest-plugin-scenario-linuxbridge-queensERROR Unable to find 
role in 
/var/lib/zuul/builds/c7ba6995cd1448d5be2fd0e2373d22e7/ansible/pre_playbook_3/role_0/neutron
 in 1m 05s

  Sadly, no additional logs are available, and I do not see that failure on
  more recent branches

  The job definition itself does not seem to have any specific roles, so
  I am not sure what changed (and broke)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1897713] Re: networking-ovn-tempest-dsvm-ovs-release job fails on stable/train

2020-09-30 Thread Bernard Cafarelli
https://review.opendev.org/#/c/749955/ passed with 754960 merged

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1897713

Title:
  networking-ovn-tempest-dsvm-ovs-release job fails on stable/train

Status in neutron:
  Fix Released

Bug description:
  Starting recently, this voting check job fails 100% on stable/train branch, 
for example with:
  https://review.opendev.org/#/c/749955/

  Failed test is
  
neutron_tempest_plugin.scenario.test_metadata.MetadataTest.test_metadata_routed

  From https://review.opendev.org/#/c/750355/ this test should be
  skipped before victoria. It is skipped properly in neutron-tempest-
  plugin, and I think in networking-ovn, but not neutron stable/train
  (checking for ussuri).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1897713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1897713] [NEW] networking-ovn-tempest-dsvm-ovs-release job fails on stable/train

2020-09-29 Thread Bernard Cafarelli
Public bug reported:

Starting recently, this voting check job fails 100% on stable/train branch, for 
example with:
https://review.opendev.org/#/c/749955/

Failed test is
neutron_tempest_plugin.scenario.test_metadata.MetadataTest.test_metadata_routed

From https://review.opendev.org/#/c/750355/ this test should be skipped
before victoria. It is skipped properly in neutron-tempest-plugin, and I
think in networking-ovn, but not neutron stable/train (checking for
ussuri).

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1897713

Title:
  networking-ovn-tempest-dsvm-ovs-release job fails on stable/train

Status in neutron:
  Confirmed

Bug description:
  Starting recently, this voting check job fails 100% on stable/train branch, 
for example with:
  https://review.opendev.org/#/c/749955/

  Failed test is
  
neutron_tempest_plugin.scenario.test_metadata.MetadataTest.test_metadata_routed

  From https://review.opendev.org/#/c/750355/ this test should be
  skipped before victoria. It is skipped properly in neutron-tempest-
  plugin, and I think in networking-ovn, but not neutron stable/train
  (checking for ussuri).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1897713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1894857] [NEW] lower-constraints job fail on Focal

2020-09-08 Thread Bernard Cafarelli
Public bug reported:

Requirements now pass after the recent patch was merged, but UT fail with the
versions from lower-constraints, as seen on https://review.opendev.org/#/c/738163/:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1ed/738163/16/check/openstack-tox-lower-constraints/1ed7287/testr_results.html

2 main error types:
RuntimeError: generator raised StopIteration
and
  File "/usr/lib/python3.8/ssl.py", line 720, in verify_mode
super(SSLContext, SSLContext).verify_mode.__set__(self, value)
RecursionError: maximum recursion depth exceeded

As UT pass, this is mostly a task to find and update needed versions in
lower-constraints

At least for the RecursionError one, the fix is apparently a bumped
eventlet version
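
A sketch of the kind of bump implied, with placeholder versions (the actual
minimums have to be found by running the unit tests on Focal):

    # lower-constraints.txt -- raise the pins that no longer work on Python 3.8
    eventlet==0.X.Y   # placeholder: first release fixing the ssl RecursionError
    # requirements.txt -- keep the lower bound in sync
    eventlet>=0.X.Y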

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894857

Title:
  lower-constraints job fail on Focal

Status in neutron:
  Confirmed

Bug description:
  Requirements now pass after the recent patch was merged, but UT fail with 
the versions from lower-constraints, as seen on 
https://review.opendev.org/#/c/738163/:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1ed/738163/16/check/openstack-tox-lower-constraints/1ed7287/testr_results.html

  2 main error types:
  RuntimeError: generator raised StopIteration
  and
File "/usr/lib/python3.8/ssl.py", line 720, in verify_mode
  super(SSLContext, SSLContext).verify_mode.__set__(self, value)
  RecursionError: maximum recursion depth exceeded

  As UT pass, this is mostly a task to find and update needed versions
  in lower-constraints

  At least for the RecursionError one, the fix is apparently a bumped
  eventlet version

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890064] Re: CI failture: ImportError: cannot import decorate

2020-08-03 Thread Bernard Cafarelli
master, stable/ussuri and stable/train in neutron are also affected:
https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-lower-constraints=openstack/neutron
Sample failure log:
https://zuul.opendev.org/t/openstack/build/48f207fdd3b546e5ae99e395d30a16a8

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1890064

Title:
  CI failture: ImportError: cannot import decorate

Status in neutron:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  2020-08-01 08:57:09.910403 | ubuntu-bionic | b'--- import errors ---\nFailed 
to import test module: nova.tests.unit\nTraceback (most recent call last):\n  
File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/unittest2/loader.py",
 line 490, in _find_test_path\npackage = self._get_module_from_name(name)\n 
 File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name\n__import__(name)\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/__init__.py", line 
30, in \nobjects.register_all()\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/objects/__init__.py", line 27, 
in register_all\n__import__(\'nova.objects.agent\')\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/objects/agent.py", line 15, in 
\nfrom nova.db import api as db\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/db/api.py", line 33, in 
\nimport nova.conf\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/conf/__init__.py", line 25, in 
\nfrom nova.conf import cache\n  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/conf/cache.py", line 18, in 
\nfrom oslo_cache import core\n  File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/oslo_cache/__init__.py",
 line 14, in \nfrom oslo_cache.core import *  # noqa\n  File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/oslo_cache/core.py",
 line 39, in \nimport dogpile.cache\n  File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/dogpile/cache/__init__.py",
 line 1, in \nfrom .region import CacheRegion  # noqa\n  File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/lib/python3.6/site-packages/dogpile/cache/region.py",
 line 12, in \nfrom decorator import decorate\nImportError: cannot 
import name \'decorate\'\n'
  2020-08-01 08:57:09.910428 | ubuntu-bionic | 

  2020-08-01 08:57:09.910450 | ubuntu-bionic | The above traceback was 
encountered during test discovery which imports all the found test modules in 
the specified test_path.
  2020-08-01 08:57:09.946349 | ubuntu-bionic | ERROR: InvocationError for 
command 
/home/zuul/src/opendev.org/openstack/nova/.tox/lower-constraints/bin/stestr run 
(exited with code 100)
  2020-08-01 08:57:09.946502 | ubuntu-bionic | lower-constraints finish: 
run-test  after 0.90 seconds
  2020-08-01 08:57:09.947081 | ubuntu-bionic | lower-constraints start: 
run-test-post
  2020-08-01 08:57:09.947119 | ubuntu-bionic | lower-constraints finish: 
run-test-post  after 0.00 seconds
  2020-08-01 08:57:09.947526 | ubuntu-bionic | 
___ summary 
  2020-08-01 08:57:09.947557 | ubuntu-bionic | ERROR:   lower-constraints: 
commands failed
  2020-08-01 08:57:09.947730 | ubuntu-bionic | cleanup 
/home/zuul/src/opendev.org/openstack/nova/.tox/.tmp/package/1/nova-21.1.0.dev338.zip
  2020-08-01 08:57:10.490807 | ubuntu-bionic | ERROR
  2020-08-01 08:57:10.491170 | ubuntu-bionic | {
  2020-08-01 08:57:10.491273 | ubuntu-bionic |   "delta": "0:00:23.042446",
  2020-08-01 08:57:10.491368 | ubuntu-bionic |   "end": "2020-08-01 
08:57:09.973813",
  2020-08-01 08:57:10.491459 | ubuntu-bionic |   "msg": "non-zero return code",
  2020-08-01 08:57:10.491547 | ubuntu-bionic |   "rc": 1,
  2020-08-01 08:57:10.491663 | ubuntu-bionic |   "start": "2020-08-01 
08:56:46.931367"
  2020-08-01 08:57:10.491753 | ubuntu-bionic | }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885262] [NEW] Add stateless firewall support to OVN

2020-06-26 Thread Bernard Cafarelli
Public bug reported:

In Ussuri, we added support for stateless firewall [1]

This added support for stateful attribute in security group, with needed
parts in API extensions "stateful-security-group", database, ... [2]

However implementation is currently only done for the iptables drivers
in ML2/OVS, this limitation is noted in release notes for the feature.

As discussed at the Victoria PTG [3], we should add support for this
attribute in the OVN driver.

It should be easy to do [4] and give feature support parity in OVN

[1] https://bugs.launchpad.net/neutron/+bug/1753466
[2] https://review.opendev.org/#/c/572767/
[3] https://etherpad.opendev.org/p/neutron-victoria-ptg L162
[4] http://www.openvswitch.org/support/dist-docs/ovn-northd.8.html

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885262

Title:
  Add stateless firewall support to OVN

Status in neutron:
  New

Bug description:
  In Ussuri, we added support for stateless firewall [1]

  This added support for stateful attribute in security group, with
  needed parts in API extensions "stateful-security-group", database,
  ... [2]

  However implementation is currently only done for the iptables drivers
  in ML2/OVS, this limitation is noted in release notes for the feature.

  As discussed at the Victoria PTG [3], we should add support for this
  attribute in the OVN driver.

  It should be easy to do [4] and give feature support parity in OVN

  [1] https://bugs.launchpad.net/neutron/+bug/1753466
  [2] https://review.opendev.org/#/c/572767/
  [3] https://etherpad.opendev.org/p/neutron-victoria-ptg L162
  [4] http://www.openvswitch.org/support/dist-docs/ovn-northd.8.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885261] [NEW] Add stateless firewall support to OVS firewall

2020-06-26 Thread Bernard Cafarelli
Public bug reported:

In Ussuri, we added support for stateless firewall [1]

This added support for stateful attribute in security group, with needed
parts in API extensions "stateful-security-group", database, ... [2]

However implementation is currently only done for the iptables drivers,
this limitation is noted in release notes for the feature.

As discussed at the Victoria PTG [3], we should add support for this
attribute in the OVS firewall driver (the default in devstack, and also
needed for hardware offload).

Most changes would be around skipping any parts involving conntrack. An
implementation example also existed in networking-ovs-dpdk [4]

[1] https://bugs.launchpad.net/neutron/+bug/1753466
[2] https://review.opendev.org/#/c/572767/
[3] https://etherpad.opendev.org/p/neutron-victoria-ptg L162
[4] 
https://opendev.org/x/networking-ovs-dpdk/src/branch/stable/rocky/networking_ovs_dpdk/agent/ovs_dpdk_firewall.py
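
A very rough, self-contained sketch of that idea (the names and flow
structures below are illustrative only, not the real OVS firewall driver
code): conntrack-dependent flows are only installed when the security group
is stateful.

    # Illustrative pseudo-driver logic, not neutron code: a stateless group
    # skips every ct()-based flow and accepts matching traffic directly.
    def flows_for_rule(rule_match, stateful=True):
        flows = [{'priority': 70, 'match': rule_match}]
        if stateful:
            # stateful path: commit the connection so replies are tracked
            flows.append({'priority': 70, 'actions': 'ct(commit),normal'})
        else:
            # stateless path: no conntrack involvement at all
            flows.append({'priority': 70, 'actions': 'normal'})
        return flows

    print(flows_for_rule('tcp,tp_dst=22', stateful=False))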

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1885261

Title:
  Add stateless firewall support to OVS firewall

Status in neutron:
  New

Bug description:
  In Ussuri, we added support for stateless firewall [1]

  This added support for stateful attribute in security group, with
  needed parts in API extensions "stateful-security-group", database,
  ... [2]

  However implementation is currently only done for the iptables
  drivers, this limitation is noted in release notes for the feature.

  As discussed at the Victoria PTG [3], we should add support for this
  attribute in the OVS firewall driver (the default in devstack, and also
  needed for hardware offload).

  Most changes would be around skipping any parts involving conntrack.
  An implementation example also existed in networking-ovs-dpdk [4]

  [1] https://bugs.launchpad.net/neutron/+bug/1753466
  [2] https://review.opendev.org/#/c/572767/
  [3] https://etherpad.opendev.org/p/neutron-victoria-ptg L162
  [4] 
https://opendev.org/x/networking-ovs-dpdk/src/branch/stable/rocky/networking_ovs_dpdk/agent/ovs_dpdk_firewall.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1885261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1881297] [NEW] Neutron agents process name changed after neutron-server setproctitle change

2020-05-29 Thread Bernard Cafarelli
Public bug reported:

In bug 1816485 we pushed code [0] to have nice process names for
neutron-server workers (indicating RPC workers, ...). This was done via
setproctitle.

Code itself does not affect other neutron components, but simply loading
the setproctitle module will affect the process environment [1] in
/proc/xx/environ.

This is quite visible when checking "ps -e" output, from:
# ps -e|grep neutron
 4712 ?00:00:02 neutron-openvsw
 4775 ?00:00:00 neutron-rootwra
 4821 ?00:00:02 neutron-dhcp-ag
 4852 ?00:00:01 neutron-l3-agen
 4932 ?00:00:00 neutron-rootwra
 5790 ?00:00:02 neutron-server
 5844 ?00:00:00 neutron-server
 5845 ?00:00:00 neutron-server

to:
# ps -e|grep neutron
28447 ?00:00:00 neutron-rootwra
28805 ?00:00:00 neutron-server:
28806 ?00:00:00 neutron-server:
28807 ?00:00:00 neutron-server:
31253 ?00:00:00 neutron-rootwra

A shorter test, "ps -e | grep $(pgrep -f neutron-openvswitch-agent)"
reported neutron-openvswitch-agent in old systems, and now pythonx.x

Using setproctitle's SPT_NOENV feature to avoid clobbering does not work
as proper environment name is the full "/usr/bin/python3.6
/usr/local/bin/neutron-openvswitch-agent --config-file
/etc/neutron/neutron.conf --config-file
/etc/neutron/plugins/ml2/ml2_conf.in" (or local equivalent)

While using other toosl (or ps options) to find the agent process work
fine, some monitoring solutions only work on env name like "ps -e"
output

As we added process names for neutron-keepalived-state-change [2], I
think the "best of both ways" fix would be to set process names in
agents starting, with a format like "neutron-openvswitch-agent
($original_proc_title)"

Bonus question: I wonder about backportability of such a fix, as it
keeps old process name it should be mostly backwards-compatible and
helps with other use-cases, but it may break for those using exact
matchin

[0] https://review.opendev.org/#/c/637019/
[1] https://pypi.org/project/setproctitle/
[2] https://review.opendev.org/#/c/660905/
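
A minimal sketch of the proposed agent-side change, assuming the setproctitle
module is available (the helper and call site are illustrative, not actual
neutron code):

    import setproctitle

    def rename_agent_process(agent_name):
        # Keep the original title in parentheses so both short-name tools
        # ("ps -e") and exact-match tools still have something to find.
        original = setproctitle.getproctitle()
        setproctitle.setproctitle("%s (%s)" % (agent_name, original))

    # e.g. called early in the agent's main():
    rename_agent_process("neutron-openvswitch-agent")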

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881297

Title:
  Neutron agents process name changed after neutron-server setproctitle
  change

Status in neutron:
  New

Bug description:
  In bug 1816485 we pushed code [0] to have nice process names for
  neutron-server workers (indicating RPC workers, ...). This was done
  via setproctitle.

  Code itself does not affect other neutron components, but simply
  loading the setproctitle module will affect the process environment
  [1] in /proc/xx/environ.

  This is quite visible when checking "ps -e" output, from:
  # ps -e|grep neutron
   4712 ?00:00:02 neutron-openvsw
   4775 ?00:00:00 neutron-rootwra
   4821 ?00:00:02 neutron-dhcp-ag
   4852 ?00:00:01 neutron-l3-agen
   4932 ?00:00:00 neutron-rootwra
   5790 ?00:00:02 neutron-server
   5844 ?00:00:00 neutron-server
   5845 ?00:00:00 neutron-server

  to:
  # ps -e|grep neutron
  28447 ?00:00:00 neutron-rootwra
  28805 ?00:00:00 neutron-server:
  28806 ?00:00:00 neutron-server:
  28807 ?00:00:00 neutron-server:
  31253 ?00:00:00 neutron-rootwra

  A shorter test, "ps -e | grep $(pgrep -f neutron-openvswitch-agent)"
  reported neutron-openvswitch-agent in old systems, and now pythonx.x

  Using setproctitle's SPT_NOENV feature to avoid clobbering does not
  work as proper environment name is the full "/usr/bin/python3.6
  /usr/local/bin/neutron-openvswitch-agent --config-file
  /etc/neutron/neutron.conf --config-file
  /etc/neutron/plugins/ml2/ml2_conf.in" (or local equivalent)

  While using other toosl (or ps options) to find the agent process work
  fine, some monitoring solutions only work on env name like "ps -e"
  output

  As we added process names for neutron-keepalived-state-change [2], I
  think the "best of both ways" fix would be to set process names in
  agents starting, with a format like "neutron-openvswitch-agent
  ($original_proc_title)"

  Bonus question: I wonder about backportability of such a fix, as it
  keeps old process name it should be mostly backwards-compatible and
  helps with other use-cases, but it may break for those using exact
  matchin

  [0] https://review.opendev.org/#/c/637019/
  [1] https://pypi.org/project/setproctitle/
  [2] https://review.opendev.org/#/c/660905/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1877146] Re: neutron-dynamic-plugin tempest jobs fail on stable/train branch

2020-05-27 Thread Bernard Cafarelli
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877146

Title:
  neutron-dynamic-plugin tempest jobs fail on stable/train branch

Status in neutron:
  Fix Released

Bug description:
  Sample review with failure:
  https://review.opendev.org/#/c/725790/
  
https://1220c88f5f3f7dc2943f-5c9bf716eadc2e193f03b25f937f429b.ssl.cf1.rackcdn.com/725790/1/check/neutron-dynamic-routing-dsvm-tempest-api/19f9000/job-output.txt

  2020-05-06 09:12:36.370490 | primary | all-plugin run-test: commands[3] | 
tempest run --regex '^neutron_dynamic_routing.tests.tempest.api\.' 
--concurrency=4
  2020-05-06 09:12:40.817171 | primary |
  2020-05-06 09:12:40.817256 | primary | =
  2020-05-06 09:12:40.817278 | primary | Failures during discovery
  2020-05-06 09:12:40.817303 | primary | =
  2020-05-06 09:12:40.817322 | primary | --- import errors ---
  2020-05-06 09:12:40.817342 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.api.test_bgp_speaker_extensions
  2020-05-06 09:12:40.817362 | primary | Traceback (most recent call last):
  2020-05-06 09:12:40.817382 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  2020-05-06 09:12:40.817401 | primary | module = 
self._get_module_from_name(name)
  2020-05-06 09:12:40.817420 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
  2020-05-06 09:12:40.817440 | primary | __import__(name)
  2020-05-06 09:12:40.817459 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/api/test_bgp_speaker_extensions.py",
 line 21, in 
  2020-05-06 09:12:40.817478 | primary | from neutron_tempest_plugin.api 
import base
  2020-05-06 09:12:40.817498 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
  2020-05-06 09:12:40.817519 | primary |
  2020-05-06 09:12:40.817538 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.api.test_bgp_speaker_extensions_negative
  2020-05-06 09:12:40.817557 | primary | Traceback (most recent call last):
  2020-05-06 09:12:40.817575 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  2020-05-06 09:12:40.817594 | primary | module = 
self._get_module_from_name(name)
  2020-05-06 09:12:40.817613 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
  2020-05-06 09:12:40.817635 | primary | __import__(name)
  2020-05-06 09:12:40.817654 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/api/test_bgp_speaker_extensions_negative.py",
 line 21, in 
  2020-05-06 09:12:40.817673 | primary | from 
neutron_dynamic_routing.tests.tempest.api import test_bgp_speaker_extensions as 
test_base  # noqa
  2020-05-06 09:12:40.817692 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/api/test_bgp_speaker_extensions.py",
 line 21, in 
  2020-05-06 09:12:40.817711 | primary | from neutron_tempest_plugin.api 
import base
  2020-05-06 09:12:40.817730 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
  2020-05-06 09:12:40.817748 | primary |
  2020-05-06 09:12:40.817767 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.scenario.basic.test_4byte_asn
  2020-05-06 09:12:40.817786 | primary | Traceback (most recent call last):
  2020-05-06 09:12:40.817804 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
  2020-05-06 09:12:40.817823 | primary | module = 
self._get_module_from_name(name)
  2020-05-06 09:12:40.817842 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
  2020-05-06 09:12:40.817861 | primary | __import__(name)
  2020-05-06 09:12:40.817879 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/basic/test_4byte_asn.py",
 line 17, in 
  2020-05-06 09:12:40.817898 | primary | from 
neutron_dynamic_routing.tests.tempest.scenario import base
  2020-05-06 09:12:40.817934 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/base.py",
 line 26, in 
  2020-05-06 09:12:40.817954 | primary | from neutron_tempest_plugin.api 
import base
  2020-05-06 09:12:40.817973 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
  2020-05-06 09:12:40.817992 | primary |
  2020-05-06 09:12:40.818011 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.scenario.basic.test_basic
  2020-05-06 09:12:40.818029 | primary | Traceback (most recent call last):
  2020-05-06 09:12:40.818048 | primary 

[Yahoo-eng-team] [Bug 1877146] [NEW] neutron-dynamic-plugin tempest jobs fail on stable/train branch

2020-05-06 Thread Bernard Cafarelli
06 09:12:40.818104 | primary | __import__(name)
2020-05-06 09:12:40.818122 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/basic/test_basic.py",
 line 21, in 
2020-05-06 09:12:40.818142 | primary | from 
neutron_dynamic_routing.tests.tempest.scenario import base as s_base
2020-05-06 09:12:40.818161 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/base.py",
 line 26, in 
2020-05-06 09:12:40.818179 | primary | from neutron_tempest_plugin.api 
import base
2020-05-06 09:12:40.818198 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
2020-05-06 09:12:40.818216 | primary |
2020-05-06 09:12:40.818234 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.scenario.ipv4.test_ipv4
2020-05-06 09:12:40.818253 | primary | Traceback (most recent call last):
2020-05-06 09:12:40.818272 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
2020-05-06 09:12:40.818290 | primary | module = 
self._get_module_from_name(name)
2020-05-06 09:12:40.818313 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
2020-05-06 09:12:40.818332 | primary | __import__(name)
2020-05-06 09:12:40.818350 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/ipv4/test_ipv4.py",
 line 20, in 
2020-05-06 09:12:40.818369 | primary | from 
neutron_dynamic_routing.tests.tempest.scenario import base
2020-05-06 09:12:40.818388 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/base.py",
 line 26, in 
2020-05-06 09:12:40.818407 | primary | from neutron_tempest_plugin.api 
import base
2020-05-06 09:12:40.818425 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
2020-05-06 09:12:40.818443 | primary |
2020-05-06 09:12:40.818475 | primary | Failed to import test module: 
neutron_dynamic_routing.tests.tempest.scenario.ipv6.test_ipv6
2020-05-06 09:12:40.818494 | primary | Traceback (most recent call last):
2020-05-06 09:12:40.818513 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 428, in _find_test_path
2020-05-06 09:12:40.818532 | primary | module = 
self._get_module_from_name(name)
2020-05-06 09:12:40.818551 | primary |   File 
"/usr/lib/python3.6/unittest/loader.py", line 369, in _get_module_from_name
2020-05-06 09:12:40.818569 | primary | __import__(name)
2020-05-06 09:12:40.818588 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/ipv6/test_ipv6.py",
 line 20, in 
2020-05-06 09:12:40.818607 | primary | from 
neutron_dynamic_routing.tests.tempest.scenario import base
2020-05-06 09:12:40.818626 | primary |   File 
"/opt/stack/new/neutron-dynamic-routing/neutron_dynamic_routing/tests/tempest/scenario/base.py",
 line 26, in 
2020-05-06 09:12:40.818644 | primary | from neutron_tempest_plugin.api 
import base
2020-05-06 09:12:40.818663 | primary | ModuleNotFoundError: No module named 
'neutron_tempest_plugin'
2020-05-06 09:12:40.818681 | primary |
2020-05-06 09:12:40.818699 | primary | 

2020-05-06 09:12:40.818718 | primary | The above traceback was encountered 
during test discovery which imports all the found test modules in the specified 
test_path.
2020-05-06 09:12:40.930894 | primary | ERROR: InvocationError for command 
/opt/stack/new/tempest/.tox/all-plugin/bin/tempest run --regex 
'^neutron_dynamic_routing.tests.tempest.api\.' --concurrency=4 (exited with 
code 100)


This may be related to the recent change to skip plugin installation:
https://review.opendev.org/#/c/725774/1

** Affects: neutron
 Importance: Undecided
 Assignee: Bernard Cafarelli (bcafarel)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1877146

Title:
  neutron-dynamic-plugin tempest jobs fail on stable/train branch

Status in neutron:
  In Progress

Bug description:
  Sample review with failure:
  https://review.opendev.org/#/c/725790/
  
https://1220c88f5f3f7dc2943f-5c9bf716eadc2e193f03b25f937f429b.ssl.cf1.rackcdn.com/725790/1/check/neutron-dynamic-routing-dsvm-tempest-api/19f9000/job-output.txt

  2020-05-06 09:12:36.370490 | primary | all-plugin run-test: commands[3] | 
tempest run --regex '^neutron_dynamic_routing.tests.tempest.api\.' 
--concurrency=4
  2020-05-06 09:12:40.817171 | primary |
  2020-05-06 09:12:40.817256 | primary | =
  2020-05-06 09:12:40.817278 | prima

[Yahoo-eng-team] [Bug 1873776] Re: [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong python version

2020-04-20 Thread Bernard Cafarelli
** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
   Status: Won't Fix => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1873776

Title:
  [stable/stein] Neutron-tempest-plugin jobs are failing due to wrong
  python version

Status in neutron:
  In Progress

Bug description:
  Issue like in
  
https://b5dda6b7d9fe09cc6972-899ca8db232a7bf5fd9d6d869408b0bd.ssl.cf1.rackcdn.com/720051/1/check
  /neutron-tempest-plugin-api-
  stein/5a092fb/controller/logs/devstacklog.txt happens in all neutron-
  tempest-plugin jobs in stable/stein branch.

  Error:

  2020-04-20 00:19:32.058 | Ignoring typed-ast: markers 'python_version == 
"3.6"' don't match your environment
  2020-04-20 00:19:32.120 | Obtaining file:///opt/stack/neutron-tempest-plugin
  2020-04-20 00:19:32.987 | neutron-tempest-plugin requires Python '>=3.6' but 
the running Python is 2.7.17
  2020-04-20 00:19:33.080 | You are using pip version 9.0.3, however version 
20.0.2 is available.
  2020-04-20 00:19:33.080 | You should consider upgrading via the 'pip install 
--upgrade pip' command.

  I think it should be run with python 3.6 or we should use tag 0.9.0
  which is last version with support for python 2.7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1873776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871596] Re: Rally job on stable branches is failing

2020-04-15 Thread Bernard Cafarelli
rally passed fine on recent backports from train down to rocky, so we can mark
this one fixed with both fixes mentioned in comment #4

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871596

Title:
  Rally job on stable branches is failing

Status in neutron:
  Fix Released

Bug description:
  It fails with error like:

  2020-04-07 23:46:53.244226 | TASK [snapshot-available-os-resources : Dump all 
available OpenStack resources]
  2020-04-07 23:46:57.520106 | controller | 
/usr/local/lib/python3.6/dist-packages/rally/common/fileutils.py:23: 
UserWarning: Module `rally.common.fileutils` is deprecated since Rally v3.0.0 
and may be removed in further releases.
  2020-04-07 23:46:57.520343 | controller |   f"Module `{__name__}` is 
deprecated since Rally v3.0.0 and may be "
  2020-04-07 23:46:57.520373 | controller | 
/usr/local/lib/python3.6/dist-packages/rally/common/logging.py:337: 
UserWarning: Module `rally.common.sshutils` moved to `rally.utils.sshutils` 
since Rally v3.0.0. The import from old place is deprecated and may be removed 
in further releases.
  2020-04-07 23:46:57.520406 | controller |   f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
  2020-04-07 23:46:57.520437 | controller | 
/usr/local/lib/python3.6/dist-packages/rally/common/logging.py:337: 
UserWarning: Module `rally.plugins.common.scenarios.dummy.dummy` moved to 
`rally.plugins.task.scenarios.dummy.dummy` since Rally v3.0.0. The import from 
old place is deprecated and may be removed in further releases.
  2020-04-07 23:46:57.520460 | controller |   f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
  2020-04-07 23:46:57.520477 | controller | Old fashion plugin configuration 
detected in  `NeutronNetworks.create_and_bind_ports@openstack' scenario. Use 
full name for  `networking_agents' context like networking_agents@platform 
where 'platform' is a name of context platform (openstack, k8s, etc).
  2020-04-07 23:46:57.520518 | controller | Old fashion plugin configuration 
detected in  `NeutronNetworks.create_and_bind_ports@openstack' scenario. Use 
full name for  `network' context like network@platform where 'platform' is a 
name of context platform (openstack, k8s, etc).
  2020-04-07 23:46:57.704959 | controller | 
/usr/local/lib/python3.6/dist-packages/rally/common/logging.py:337: 
UserWarning: Module `rally.common.yamlutils` moved to `rally.cli.yamlutils` 
since Rally v3.0.0. The import from old place is deprecated and may be removed 
in further releases.
  2020-04-07 23:46:57.705030 | controller |   f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
  2020-04-07 23:46:57.705047 | controller | 
/usr/local/lib/python3.6/dist-packages/rally/common/logging.py:337: 
UserWarning: Module `rally.plugins.common.verification.testr` moved to 
`rally.plugins.verification.testr` since Rally v3.0.0. The import from old 
place is deprecated and may be removed in further releases.
  2020-04-07 23:46:57.705061 | controller |   f"Module `{target}` moved to 
`{new_module}` since Rally v{release}. "
  2020-04-07 23:46:57.705074 | controller | usage: osresources.py [-h] 
[--credentials ]
  2020-04-07 23:46:57.705090 | controller |   (--dump-list 
 | --compare-with-list )
  2020-04-07 23:46:57.705124 | controller | osresources.py: error: one of the 
arguments --dump-list --compare-with-list is required
  2020-04-07 23:46:58.438577 | controller | ERROR

  
  I think that this is caused by 
https://github.com/openstack/rally-openstack/commit/c4f0526dd8025956bacfedcd24c8a0e5a1b6005b
 and we should use version 1.7.0 of rally-openstack for stable branches

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871327] Re: stable/stein tempest-full job fails with "tempest requires Python '>=3.6' but the running Python is 2.7.17"

2020-04-07 Thread Bernard Cafarelli
Closing on the neutron side; the fix, as mentioned, is going into devstack by gmann:
https://review.opendev.org/#/q/I60949fb735c82959fb2cfcb6aeef9e33fb0445b6

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871327

Title:
  stable/stein tempest-full job fails with "tempest requires Python
  '>=3.6' but the running Python is 2.7.17"

Status in neutron:
  Invalid
Status in tempest:
  New

Bug description:
  Only seen in stable/stein, tempest-full (and also non-voting 
neutron-tempest-dvr-ha-multinode-full) job fails with:
  Obtaining file:///opt/stack/tempest
  tempest requires Python '>=3.6' but the running Python is 2.7.17

  Example failure on https://review.opendev.org/#/c/717336/2:
  https://zuul.opendev.org/t/openstack/build/f05569c475f44327bff7b7ec58faef8c
  https://zuul.opendev.org/t/openstack/build/651ca00e67ab42fd814ec5edad437997

  While backport on rocky passed both:
  https://review.opendev.org/#/c/717337/2
  https://zuul.opendev.org/t/openstack/build/c9c0139cda4f45cd825e169765e6854c
  https://zuul.opendev.org/t/openstack/build/6f318c4897ea4864b7cd2691dc2a36ab

  
  and on train:
  https://review.opendev.org/#/c/717335/2
  https://zuul.opendev.org/t/openstack/build/f84209f049f2459eabd453058ad11ccf

  For neutron-tempest-dvr-ha-multinode-full parent in stein is tempest-
  multinode-full so it looks like common issue in tempest-full job
  definition for this branch?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1871327] [NEW] stable/stein tempest-full job fails with "tempest requires Python '>=3.6' but the running Python is 2.7.17"

2020-04-07 Thread Bernard Cafarelli
Public bug reported:

Only seen in stable/stein, tempest-full (and also non-voting 
neutron-tempest-dvr-ha-multinode-full) job fails with:
Obtaining file:///opt/stack/tempest
tempest requires Python '>=3.6' but the running Python is 2.7.17

Example failure on https://review.opendev.org/#/c/717336/2:
https://zuul.opendev.org/t/openstack/build/f05569c475f44327bff7b7ec58faef8c
https://zuul.opendev.org/t/openstack/build/651ca00e67ab42fd814ec5edad437997

While backport on rocky passed both:
https://review.opendev.org/#/c/717337/2
https://zuul.opendev.org/t/openstack/build/c9c0139cda4f45cd825e169765e6854c
https://zuul.opendev.org/t/openstack/build/6f318c4897ea4864b7cd2691dc2a36ab


and on train:
https://review.opendev.org/#/c/717335/2
https://zuul.opendev.org/t/openstack/build/f84209f049f2459eabd453058ad11ccf

For neutron-tempest-dvr-ha-multinode-full parent in stein is tempest-
multinode-full so it looks like common issue in tempest-full job
definition for this branch?

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1871327

Title:
  stable/stein tempest-full job fails with "tempest requires Python
  '>=3.6' but the running Python is 2.7.17"

Status in neutron:
  New
Status in tempest:
  New

Bug description:
  Only seen in stable/stein, tempest-full (and also non-voting 
neutron-tempest-dvr-ha-multinode-full) job fails with:
  Obtaining file:///opt/stack/tempest
  tempest requires Python '>=3.6' but the running Python is 2.7.17

  Example failure on https://review.opendev.org/#/c/717336/2:
  https://zuul.opendev.org/t/openstack/build/f05569c475f44327bff7b7ec58faef8c
  https://zuul.opendev.org/t/openstack/build/651ca00e67ab42fd814ec5edad437997

  While backport on rocky passed both:
  https://review.opendev.org/#/c/717337/2
  https://zuul.opendev.org/t/openstack/build/c9c0139cda4f45cd825e169765e6854c
  https://zuul.opendev.org/t/openstack/build/6f318c4897ea4864b7cd2691dc2a36ab

  
  and on train:
  https://review.opendev.org/#/c/717335/2
  https://zuul.opendev.org/t/openstack/build/f84209f049f2459eabd453058ad11ccf

  For neutron-tempest-dvr-ha-multinode-full parent in stein is tempest-
  multinode-full so it looks like common issue in tempest-full job
  definition for this branch?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1871327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868691] Re: neutron-rally-task fails 100% on stable branches

2020-03-26 Thread Bernard Cafarelli
** Changed in: networking-ovn
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

** Changed in: networking-ovn
   Status: New => Fix Released

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868691

Title:
  neutron-rally-task fails 100% on stable branches

Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Since March 23, the job started to fail on train:
  https://zuul.opendev.org/t/openstack/build/0bea3158833944c185b5101f823af9c1
  https://zuul.opendev.org/t/openstack/build/0b24c107a4e44d91addda71bf193ccfa
  https://zuul.opendev.org/t/openstack/build/20b752ab736a45c09ac25921ad19c7d9

  recheck does not help, jobs fail with:
  ++ /opt/stack/rally-openstack/devstack/lib/rally:init_rally:142 :   rally 
--config-file /etc/rally/rally.conf deployment create --name devstack 
--filename /tmp/tmp.U7S1L63idY
  2020-03-23 23:47:41.570 31668 WARNING rally.common.plugin.discover [-]
 Failed to load plugins from module 'rally_openstack' (package: 
'rally-openstack 1.7.1.dev9'): (kubernetes 10.0.1 
(/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0')): pkg_resources.VersionConflict: 
(kubernetes 10.0.1 (/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0'))
  Env manager got invalid spec: 
  ['There is no Platform plugin `existing` in openstack platform.']
  + /opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1868691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868691] Re: neutron-rally-task fails 100% on stable branches

2020-03-24 Thread Bernard Cafarelli
** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868691

Title:
  neutron-rally-task fails 100% on stable branches

Status in networking-ovn:
  New
Status in neutron:
  In Progress

Bug description:
  Since March 23, the job started to fail on train:
  https://zuul.opendev.org/t/openstack/build/0bea3158833944c185b5101f823af9c1
  https://zuul.opendev.org/t/openstack/build/0b24c107a4e44d91addda71bf193ccfa
  https://zuul.opendev.org/t/openstack/build/20b752ab736a45c09ac25921ad19c7d9

  recheck does not help, jobs fail with:
  ++ /opt/stack/rally-openstack/devstack/lib/rally:init_rally:142 :   rally 
--config-file /etc/rally/rally.conf deployment create --name devstack 
--filename /tmp/tmp.U7S1L63idY
  2020-03-23 23:47:41.570 31668 WARNING rally.common.plugin.discover [-]
 Failed to load plugins from module 'rally_openstack' (package: 
'rally-openstack 1.7.1.dev9'): (kubernetes 10.0.1 
(/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0')): pkg_resources.VersionConflict: 
(kubernetes 10.0.1 (/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0'))
  Env manager got invalid spec: 
  ['There is no Platform plugin `existing` in openstack platform.']
  + /opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1868691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868691] [NEW] neutron-rally-task fails 100% on stable/train branch

2020-03-24 Thread Bernard Cafarelli
Public bug reported:

Since March 23, the job started to fail on train:
https://zuul.opendev.org/t/openstack/build/0bea3158833944c185b5101f823af9c1
https://zuul.opendev.org/t/openstack/build/0b24c107a4e44d91addda71bf193ccfa
https://zuul.opendev.org/t/openstack/build/20b752ab736a45c09ac25921ad19c7d9

recheck does not help, jobs fail with:
++ /opt/stack/rally-openstack/devstack/lib/rally:init_rally:142 :   rally 
--config-file /etc/rally/rally.conf deployment create --name devstack 
--filename /tmp/tmp.U7S1L63idY
2020-03-23 23:47:41.570 31668 WARNING rally.common.plugin.discover [-]   Failed 
to load plugins from module 'rally_openstack' (package: 'rally-openstack 
1.7.1.dev9'): (kubernetes 10.0.1 (/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0')): pkg_resources.VersionConflict: 
(kubernetes 10.0.1 (/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0'))
Env manager got invalid spec: 
['There is no Platform plugin `existing` in openstack platform.']
+ /opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap
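A minimal standalone sketch that reproduces the reported dependency check outside of rally (the requirement string is taken from the error above; everything else is illustrative and not part of the job definition):

  import pkg_resources

  try:
      # rally_openstack's plugin discovery requires kubernetes>=11.0.0, but the
      # job image ships kubernetes 10.0.1, so loading fails as logged above.
      pkg_resources.require("kubernetes>=11.0.0")
  except pkg_resources.VersionConflict as exc:
      print("plugin discovery would fail:", exc)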

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868691

Title:
  neutron-rally-task fails 100% on stable/train branch

Status in neutron:
  New

Bug description:
  Since March 23, the job started to fail on train:
  https://zuul.opendev.org/t/openstack/build/0bea3158833944c185b5101f823af9c1
  https://zuul.opendev.org/t/openstack/build/0b24c107a4e44d91addda71bf193ccfa
  https://zuul.opendev.org/t/openstack/build/20b752ab736a45c09ac25921ad19c7d9

  recheck does not help, jobs fail with:
  ++ /opt/stack/rally-openstack/devstack/lib/rally:init_rally:142 :   rally 
--config-file /etc/rally/rally.conf deployment create --name devstack 
--filename /tmp/tmp.U7S1L63idY
  2020-03-23 23:47:41.570 31668 WARNING rally.common.plugin.discover [-]
 Failed to load plugins from module 'rally_openstack' (package: 
'rally-openstack 1.7.1.dev9'): (kubernetes 10.0.1 
(/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0')): pkg_resources.VersionConflict: 
(kubernetes 10.0.1 (/usr/local/lib/python3.6/dist-packages), 
Requirement.parse('kubernetes>=11.0.0'))
  Env manager got invalid spec: 
  ['There is no Platform plugin `existing` in openstack platform.']
  + /opt/stack/rally-openstack/devstack/lib/rally:init_rally:1 :   exit_trap

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1868691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1803745] Re: neutron-dynamic-routing: unit test failures with master branch of neutron

2020-03-04 Thread Bernard Cafarelli
Updating status on old gate-failure bug

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1803745

Title:
  neutron-dynamic-routing: unit test failures with master branch of
  neutron

Status in neutron:
  Fix Released
Status in neutron-dynamic-routing package in Ubuntu:
  Triaged

Bug description:
  neutron-dynamic-routing unit tests currently fail with the tip of the
  master branch of neutron; the project has neutron in its requirements.txt,
  however the latest release version on PyPI is from the rocky release.

  ==
  Failed 3 tests - output below:
  ==

  
neutron_dynamic_routing.tests.unit.services.bgp.scheduler.test_bgp_dragent_scheduler.TestRescheduleBgpSpeaker.test_no_schedule_with_non_available_dragent
  
-

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/neutron/tests/base.py",
 line 151, in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/scheduler/test_bgp_dragent_scheduler.py",
 line 341, in test_no_schedule_with_non_available_dragent'
  b'self.assertEqual(binds, [])'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual'
  b'self.assertThat(observed, matcher, message)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 498, in assertThat'
  b'raise mismatch_error'
  b'testtools.matchers._impl.MismatchError: !=:'
  b"reference = 
[]"
  b'actual= []'
  b''
  b''

  
neutron_dynamic_routing.tests.unit.services.bgp.scheduler.test_bgp_dragent_scheduler.TestRescheduleBgpSpeaker.test_schedule_unbind_bgp_speaker
  
--

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/neutron/tests/base.py",
 line 151, in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/scheduler/test_bgp_dragent_scheduler.py",
 line 349, in test_schedule_unbind_bgp_speaker'
  b'self.assertEqual(binds, [])'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual'
  b'self.assertThat(observed, matcher, message)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 498, in assertThat'
  b'raise mismatch_error'
  b'testtools.matchers._impl.MismatchError: !=:'
  b"reference = 
[]"
  b'actual= []'
  b''
  b''

  
neutron_dynamic_routing.tests.unit.services.bgp.scheduler.test_bgp_dragent_scheduler.TestRescheduleBgpSpeaker.test_reschedule_bgp_speaker_bound_to_down_dragent
  
---

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/neutron/tests/base.py",
 line 151, in func'
  b'return f(self, *args, **kwargs)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/scheduler/test_bgp_dragent_scheduler.py",
 line 333, in test_reschedule_bgp_speaker_bound_to_down_dragent'
  b'self.assertEqual(binds[0].agent_id, agents[1].id)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual'
  b'self.assertThat(observed, matcher, message)'
  b'  File 
"/home/jamespage/src/openstack/neutron-dynamic-routing/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py",
 line 498, in assertThat'
  b'raise mismatch_error'
  b'testtools.matchers._impl.MismatchError: !=:'
  b"reference = '1129a824-aa3f-4a8a-aba3-62fccf1b4d12'"
  b"actual= 

[Yahoo-eng-team] [Bug 1843413] Re: neutron-tempest-iptables_hybrid-fedora job is failing with RETRY_LIMIT constantly

2020-03-04 Thread Bernard Cafarelli
That was fixed some time ago, updating status

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843413

Title:
  neutron-tempest-iptables_hybrid-fedora job is failing with RETRY_LIMIT
  constantly

Status in neutron:
  Fix Released

Bug description:
  It happens since at least couple of days.

  Example of failure
  
https://db6e53ffb305ec0848f1-fa8c367d29960a6ba7cc5d4b52d5b2a7.ssl.cf2.rackcdn.com/638641/13/check
  /neutron-tempest-iptables_hybrid-fedora/2d1ff9a/job-output.txt

  It is failing on:
  2019-09-10 08:35:42.868413 | 
  2019-09-10 08:35:42.868692 | PLAY [tempest]
  2019-09-10 08:35:42.972968 | 
  2019-09-10 08:35:42.973254 | TASK [fetch-subunit-output : Find stestr or 
testr executable]
  2019-09-10 08:35:44.218559 | controller | ERROR
  2019-09-10 08:35:44.418916 | controller | {
  2019-09-10 08:35:44.419116 | controller |   "msg": "non-zero return code",
  2019-09-10 08:35:44.419227 | controller |   "rc": 1
  2019-09-10 08:35:44.419348 | controller | }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864418] Re: has wrong with use apache to start neutron api in docker container

2020-02-24 Thread Bernard Cafarelli
From a quick read it looks like some issue in kolla packages, adding
them

** Also affects: kolla
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864418

Title:
  has wrong with use apache to start neutron api  in docker container

Status in kolla:
  New
Status in neutron:
  New

Bug description:
  I started the neutron api with apache in a docker container using kolla. There 
was no problem with the first three requests (using openstack network list) 
after startup, but the fourth started to be problematic.
  I started the RPC service:
  # /usr/bin/neutron-rpc-server --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  
  I refer to this article : 
https://docs.openstack.org/neutron/latest/admin/config-wsgi.html

  
  /etc/httpd/conf.d/wsgi-neutron.conf

  Listen [fd00:1001::101]:9696

  <Directory "/var/lib/kolla/venv/bin">
      <FilesMatch "^neutron-api$">
          AllowOverride None
          Options None
          Require all granted
      </FilesMatch>
  </Directory>

  <VirtualHost [fd00:1001::101]:9696>
      WSGIDaemonProcess neutron-server processes=1 threads=1 user=neutron group=neutron display-name=%{GROUP}
      WSGIProcessGroup neutron-server
      WSGIScriptAlias / /var/lib/kolla/venv/bin/neutron-api
      WSGIApplicationGroup %{GLOBAL}
      WSGIPassAuthorization On
      <IfVersion >= 2.4>
          ErrorLogFormat "%{cu}t %M"
      </IfVersion>
      ErrorLog "/var/log/neutron/neutron-error.log"
      LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" logformat
      CustomLog "/var/log/neutron/neutron-error.log" logformat
  </VirtualHost>

  Alias /networking /var/lib/kolla/venv/bin/neutron-api
  <Location /networking>
      SetHandler wsgi-script
      Options +ExecCGI
      WSGIProcessGroup neutron-server
      WSGIApplicationGroup %{GLOBAL}
      WSGIPassAuthorization On
  </Location>

  WSGISocketPrefix /var/run/apache2



  
  The following problem occurs after the fourth request is sent 

  var/log/kolla/neutron/neutron-api.log

  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors 
[req-dc4bf26f-7078-4857-978e-60bffa96df16 51a6efff6aa34b44926fcb95c5a01b3b 
e474c27f32bf462d87eecb392f373424 - default default] An error occurred during 
processing the request: GET /v2.0/networks HTTP/1.1^M
  Accept: application/json^M
  Accept-Encoding: gzip, deflate^M
  Connection: keep-alive^M
  Host: [fd00:1001::111]:9696^M
  User-Agent: openstacksdk/0.38.0 keystoneauth1/3.18.0 python-requests/2.22.0 
CPython/2.7.5^M
  X-Auth-Token: *
  X-Forwarded-For: fd00:1001::111^M
  X-Forwarded-Proto: https: AttributeError: 'module' object has no attribute 
'poll'
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors Traceback 
(most recent call last):
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_middleware/catch_errors.py",
 line 40, in __call__
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors response = 
req.get_response(self.application)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/request.py", line 1314, 
in send
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors 
application, catch_exc_info=False)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/request.py", line 1278, 
in call_application
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors app_iter = 
application(self.environ, start_response)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/dec.py", line 129, in 
__call__
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors resp = 
self.call_func(req, *args, **kw)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/dec.py", line 193, in 
call_func
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors return 
self.func(req, *args, **kwargs)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/osprofiler/web.py", line 112, 
in __call__
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors return 
request.get_response(self.application)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/request.py", line 1314, 
in send
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors 
application, catch_exc_info=False)
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/webob/request.py", line 1278, 
in call_application
  2020-02-23 22:04:39.272 861 ERROR oslo_middleware.catch_errors app_iter = 
application(self.environ, 

[Yahoo-eng-team] [Bug 1863982] Re: Upgrade from Rocky to Stein, router namespace disappear

2020-02-20 Thread Bernard Cafarelli
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863982

Title:
  Upgrade from Rocky to Stein, router namespace disappear

Status in kolla-ansible:
  Confirmed
Status in neutron:
  New

Bug description:
  Upgraded an all-in-one deployment from Rocky to Stein.
  The upgrade finished but the router namespace has disappeared.

  
  Before:
  ip netns list
  qrouter-79658dd5-e3b4-4b13-a361-16d696ed1d1c (id: 1)
  qdhcp-4a183162-64f5-49f9-a615-7c0fd63cf2a8 (id: 0)

  After:
  ip netns list
  
  After about 1 minute, the dhcp namespace has appeared and there is no error on
  the dhcp-agent, but the qrouter namespace is still missing until the l3-agent
  docker container is manually restarted.

  l3-agent error after upgrade:
  2020-02-20 02:57:07.306 12 INFO neutron.common.config [-] Logging enabled!
  2020-02-20 02:57:07.308 12 INFO neutron.common.config [-] 
/var/lib/kolla/venv/bin/neutron-l3-agent version 14.0.4
  2020-02-20 02:57:08.616 12 INFO neutron.agent.l3.agent 
[req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Agent HA routers count 0
  2020-02-20 02:57:08.619 12 INFO neutron.agent.agent_extensions_manager 
[req-95654890-dab3-4106-b56d-c2685fb96f29 - - - - -] Loaded agent extensions: []
  2020-02-20 02:57:08.657 12 INFO eventlet.wsgi.server [-] (12) wsgi starting 
up on http:/var/lib/neutron/keepalived-state-change
  2020-02-20 02:57:08.710 12 INFO neutron.agent.l3.agent [-] L3 agent started
  2020-02-20 02:57:10.716 12 INFO oslo.privsep.daemon 
[req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Running privsep helper: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmpg8Ihqa/privsep.sock']
  2020-02-20 02:57:11.750 12 INFO oslo.privsep.daemon 
[req-681aad3f-ae14-4315-b96d-5e95225cdf92 - - - - -] Spawned new privsep daemon 
via rootwrap
  2020-02-20 02:57:11.614 29 INFO oslo.privsep.daemon [-] privsep daemon 
starting
  2020-02-20 02:57:11.622 29 INFO oslo.privsep.daemon [-] privsep process 
running with uid/gid: 0/0
  2020-02-20 02:57:11.627 29 INFO oslo.privsep.daemon [-] privsep process 
running with capabilities (eff/prm/inh): 
CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
  2020-02-20 02:57:11.628 29 INFO oslo.privsep.daemon [-] privsep daemon 
running as pid 29
  2020-02-20 02:57:14.449 12 INFO neutron.agent.l3.agent [-] Starting router 
update for 79658dd5-e3b4-4b13-a361-16d696ed1d1c, action 3, priority 2, 
update_id 49908db7-8a8c-410f-84a7-9e95a3dede16. Wait time elapsed: 0.000
  2020-02-20 02:57:24.160 12 ERROR neutron.agent.linux.utils [-] Exit code: 4; 
Stdin: # Generated by iptables_manager


  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 
self.process_floating_ip_address_scope_rules()
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 
self.gen.next()
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py",
 line 438, in defer_apply
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info raise 
l3_exc.IpTablesApplyException(msg)
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info 
IpTablesApplyException: Failure applying iptables rules
  2020-02-20 02:57:26.388 12 ERROR neutron.agent.l3.router_info
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router: 79658dd5-e3b4-4b13-a361-16d696ed1d1c: 
IpTablesApplyException: Failure applying iptables rules
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 723, in _process_routers_if_compatible
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 567, in _process_router_if_compatible
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/l3/agent.py", 
line 575, in _process_added_router
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent ri.process()
  2020-02-20 02:57:26.389 12 ERROR neutron.agent.l3.agent   File 

[Yahoo-eng-team] [Bug 1864015] [NEW] neutron-tempest-plugin-designate-scenario fails on stable branches with "SyntaxError: invalid syntax" installing dnspython

2020-02-20 Thread Bernard Cafarelli
Public bug reported:

Most probably a follow-up of https://review.opendev.org/#/c/704084/ dropping 
py2.7 testing, stable branches reviews now fail on 
neutron-tempest-plugin-designate-scenario job with failure like:
2020-02-20 08:05:43.987705 | controller | 2020-02-20 08:05:43.987 | Collecting 
dnspython3!=1.13.0,!=1.14.0,>=1.12.0 (from designate-tempest-plugin==0.7.1.dev1)
2020-02-20 08:05:44.204759 | controller | 2020-02-20 08:05:44.204 |   
Downloading 
http://mirror.ord.rax.opendev.org/pypifiles/packages/f0/bb/f41cbc8eaa807afb9d44418f092aa3e4acf0e4f42b439c49824348f1f45c/dnspython3-1.15.0.zip
2020-02-20 08:05:44.734839 | controller | 2020-02-20 08:05:44.734 | 
Complete output from command python setup.py egg_info:
2020-02-20 08:05:44.734956 | controller | 2020-02-20 08:05:44.734 | 
Traceback (most recent call last):
2020-02-20 08:05:44.735018 | controller | 2020-02-20 08:05:44.734 |   File 
"", line 1, in 
2020-02-20 08:05:44.735078 | controller | 2020-02-20 08:05:44.735 |   File 
"/tmp/pip-build-gSLP9k/dnspython3/setup.py", line 25
2020-02-20 08:05:44.735165 | controller | 2020-02-20 08:05:44.735 | 
"""+"="*78, file=sys.stdout)
2020-02-20 08:05:44.735231 | controller | 2020-02-20 08:05:44.735 | 
^
2020-02-20 08:05:44.735290 | controller | 2020-02-20 08:05:44.735 | 
SyntaxError: invalid syntax

Sample reviews:
https://review.opendev.org/#/c/708576/ (queens)
https://review.opendev.org/#/c/705421/ (rocky)
https://review.opendev.org/#/c/708488/ (stein)

As "[ussuri][goal] Drop python 2.7 support and testing" is the only new
commit since tag 0.7.0 in designate-tempest-plugin, we can try to use
this tag for our py2 stable branches jobs

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: dns gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864015

Title:
  neutron-tempest-plugin-designate-scenario fails on stable branches
  with "SyntaxError: invalid syntax" installing dnspython

Status in neutron:
  New

Bug description:
  Most probably a follow-up of https://review.opendev.org/#/c/704084/ dropping 
py2.7 testing, stable branches reviews now fail on 
neutron-tempest-plugin-designate-scenario job with failure like:
  2020-02-20 08:05:43.987705 | controller | 2020-02-20 08:05:43.987 | 
Collecting dnspython3!=1.13.0,!=1.14.0,>=1.12.0 (from 
designate-tempest-plugin==0.7.1.dev1)
  2020-02-20 08:05:44.204759 | controller | 2020-02-20 08:05:44.204 |   
Downloading 
http://mirror.ord.rax.opendev.org/pypifiles/packages/f0/bb/f41cbc8eaa807afb9d44418f092aa3e4acf0e4f42b439c49824348f1f45c/dnspython3-1.15.0.zip
  2020-02-20 08:05:44.734839 | controller | 2020-02-20 08:05:44.734 | 
Complete output from command python setup.py egg_info:
  2020-02-20 08:05:44.734956 | controller | 2020-02-20 08:05:44.734 | 
Traceback (most recent call last):
  2020-02-20 08:05:44.735018 | controller | 2020-02-20 08:05:44.734 |   
File "", line 1, in 
  2020-02-20 08:05:44.735078 | controller | 2020-02-20 08:05:44.735 |   
File "/tmp/pip-build-gSLP9k/dnspython3/setup.py", line 25
  2020-02-20 08:05:44.735165 | controller | 2020-02-20 08:05:44.735 | 
"""+"="*78, file=sys.stdout)
  2020-02-20 08:05:44.735231 | controller | 2020-02-20 08:05:44.735 |   
  ^
  2020-02-20 08:05:44.735290 | controller | 2020-02-20 08:05:44.735 | 
SyntaxError: invalid syntax

  Sample reviews:
  https://review.opendev.org/#/c/708576/ (queens)
  https://review.opendev.org/#/c/705421/ (rocky)
  https://review.opendev.org/#/c/708488/ (stein)

  As "[ussuri][goal] Drop python 2.7 support and testing" is the only
  new commit since tag 0.7.0 in designate-tempest-plugin, we can try to
  use this tag for our py2 stable branches jobs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1859988] [NEW] neutron-tempest-plugin tests fail for stable/queens

2020-01-16 Thread Bernard Cafarelli
Public bug reported:

Recent stable/queens backport fail on neutron-tempest-plugin scenario tests, 
sample here:
https://c896d480cfbd9dee637c-6e2dfe610262db0cf157ed36bc183b08.ssl.cf2.rackcdn.com/688719/5/check/neutron-tempest-plugin-scenario-openvswitch-queens/f080f61/testr_results.html

Traceback (most recent call last):
  File "tempest/common/utils/__init__.py", line 108, in wrapper
return func(*func_args, **func_kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/test_internal_dns.py",
 line 72, in test_dns_domain_and_name
timeout=CONF.validation.ping_timeout * 10)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 309, in check_remote_connectivity
timeout=timeout))
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 299, in _check_remote_connectivity
ping_remote, timeout or CONF.validation.ping_timeout, 1)
  File "tempest/lib/common/utils/test_utils.py", line 107, in call_until_true
if func(*args, **kwargs):
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 283, in ping_remote
fragmentation=fragmentation)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 278, in ping_host
return source.exec_command(cmd)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 311, in wrapped_f
return self.call(f, *args, **kw)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 391, in call
do = self.iter(retry_state=retry_state)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 338, in iter
return fut.result()
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/concurrent/futures/_base.py",
 line 455, in result
return self.__get_result()
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 394, in call
result = fn(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 205, in exec_command
return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "tempest/lib/common/ssh.py", line 153, in exec_command
with transport.open_session() as channel:
AttributeError: 'NoneType' object has no attribute 'open_session'


Queens jobs were pinned to the 0.7.0 version of the plugin in 
a4962ec62808fc469eaad73b1408447d8e3bc7ec; it looks like we now need to also pin 
tempest itself to a "queens version".

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1859988

Title:
  neutron-tempest-plugin tests fail for stable/queens

Status in neutron:
  New

Bug description:
  Recent stable/queens backport fail on neutron-tempest-plugin scenario tests, 
sample here:
  
https://c896d480cfbd9dee637c-6e2dfe610262db0cf157ed36bc183b08.ssl.cf2.rackcdn.com/688719/5/check/neutron-tempest-plugin-scenario-openvswitch-queens/f080f61/testr_results.html

  Traceback (most recent call last):
File "tempest/common/utils/__init__.py", line 108, in wrapper
  return func(*func_args, **func_kwargs)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/test_internal_dns.py",
 line 72, in test_dns_domain_and_name
  timeout=CONF.validation.ping_timeout * 10)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 309, in check_remote_connectivity
  timeout=timeout))
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 299, in _check_remote_connectivity
  ping_remote, timeout or CONF.validation.ping_timeout, 1)
File "tempest/lib/common/utils/test_utils.py", line 107, in call_until_true
  if func(*args, **kwargs):
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 283, in ping_remote
  fragmentation=fragmentation)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 278, in ping_host
  return source.exec_command(cmd)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 311, in wrapped_f
  return self.call(f, *args, **kw)
File 
"/opt/stack/tempest/.tox/tempest/local/lib/python2.7/site-packages/tenacity/__init__.py",
 line 391, in call
  do = 

[Yahoo-eng-team] [Bug 1853171] Re: Deprecate and remove any "ofctl" code in Neutron and related projects

2019-11-20 Thread Bernard Cafarelli
** Also affects: networking-sfc
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853171

Title:
  Deprecate and remove any "ofctl" code in Neutron and related projects

Status in networking-sfc:
  New
Status in neutron:
  Confirmed

Bug description:
  This bug should track all changes related to deprecating and removing all
  "ofctl" CLI application code in Neutron and related projects (e.g.:
  networking-sfc).

  Base function that should be removed:
  
https://github.com/openstack/neutron/blob/0fa7e74ebb386b178d36ae684ff04f03bdd6cb0d/neutron/agent/common/ovs_lib.py#L343

  Any OpenFlow call should use the native implementation, using os-ken
  library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1853171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1847019] Re: test_update_firewall_calls_get_dvr_hosts_for_router failure on rocky

2019-10-11 Thread Bernard Cafarelli
They are (I was surprised recently when I did not find a neutron-fwaas
project trying to move a neutron bug to it), adding back neutron also
for https://review.opendev.org/#/c/687085/

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1847019

Title:
  test_update_firewall_calls_get_dvr_hosts_for_router failure on rocky

Status in networking-midonet:
  New
Status in neutron:
  In Progress

Bug description:
  eg.
  
https://storage.bhs1.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_5f4/686959/1/check
  /openstack-tox-py27/5f405a7/testr_results.html.gz

  ft1.24: 
midonet.neutron.tests.unit.test_extension_fwaas.FirewallTestCaseML2.test_update_firewall_calls_get_dvr_hosts_for_router_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py",
 line 181, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py",
 line 181, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/neutron_fwaas/tests/unit/services/firewall/test_fwaas_plugin.py",
 line 398, in test_update_firewall_calls_get_dvr_hosts_for_router
  'get_l3_agents_hosting_routers') as s_hosts, \
File 
"/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1369, in __enter__
  original, local = self.get_original()
File 
"/home/zuul/src/opendev.org/openstack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1343, in get_original
  "%s does not have the attribute %r" % (target, name)
  AttributeError: 
 does not have the attribute 'get_l3_agents_hosting_routers'
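  A standalone illustration of this failure mode (the class below is a stand-in,
  not the real neutron-fwaas callback class): mock.patch.object raises
  AttributeError on __enter__ when the patched attribute no longer exists on the
  target.

    from unittest import mock

    class FirewallCallbacksStandIn(object):
        """Stand-in for a class that no longer defines the patched method."""

    try:
        with mock.patch.object(FirewallCallbacksStandIn,
                               'get_l3_agents_hosting_routers'):
            pass
    except AttributeError as exc:
        print(exc)  # ... does not have the attribute 'get_l3_agents_hosting_routers'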

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1847019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1846507] Re: ovs VXLAN over IPv6 conflicts with linux native VXLAN over IPv4 using standard port

2019-10-04 Thread Bernard Cafarelli
** Tags added: ipv6

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1846507

Title:
  ovs VXLAN over IPv6 conflicts with linux native VXLAN over IPv4 using
  standard port

Status in kolla-ansible:
  Triaged
Status in neutron:
  Opinion

Bug description:
  This has been observed while testing CI IPv6.

  ovs-agent tries to run VXLAN over IPv6 (default port so 4789)

  linux provides the network for CI hosts via native VXLAN (a kernel interface of 
the vxlan type) over IPv4 using the standard port (4789)
  the kernel is bound to IPv4-only UDP 0.0.0.0:4789

  ovs-agent:

  2019-10-02 19:42:55.073 6 ERROR
  neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-
  103d9f69-76c7-46bc-aeac-81b22af3e6b2 - - - - -] Failed to set-up vxlan
  tunnel port to fd::3

  vswitchd:

  2019-10-02T19:42:55.017Z|00059|dpif_netlink|WARN|Failed to create 
vxlan-srtluqpm3 with rtnetlink: Address already in use
  2019-10-02T19:42:55.053Z|00060|dpif|WARN|system@ovs-system: failed to add 
vxlan-srtluqpm3 as port: Address already in use
  2019-10-02T19:42:55.053Z|00061|bridge|WARN|could not add network device 
vxlan-srtluqpm3 to ofproto (Address already in use)

  For some reason this conflict does *not* arise when ovs is using IPv4
  tunnels (kinda counter-intuitively).

  Worked around by using a different port. This has no real-life significance
  (IMHO) but is undoubtedly an interesting phenomenon.

  Environment description:
  Distro: Ubuntu 18.04 (Bionic)
  Kernel: 4.15.0-65.74 (generic on x86_64)
  Neutron release: current master
  OVS: 2.11.0-0ubuntu2~cloud0 (from UCA - Ubuntu Cloud Archive for Train on 
Bionic)

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1846507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1845900] Re: ha router appear double vip

2019-10-04 Thread Bernard Cafarelli
** Also affects: kolla
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845900

Title:
  ha router appear double vip

Status in kolla:
  New
Status in neutron:
  Invalid

Bug description:
  Environment:
  Deployed with kolla, with three controllers.

  Action:
  Select one router whose VIP is on control01.
  Execute docker stop neutron_l3_agent on control01.

  Found:
  control01 and control02 both have the VIP.

  I suspect that the neutron account does not have permission to send a signal 
to keepalived, which causes keepalived to be terminated forcibly.
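  A standalone sketch of how that suspicion could be checked (illustrative only;
  the pid used below is a placeholder and this is not part of neutron or kolla):

    import os

    def can_signal(pid):
        """Return True if the current user is allowed to signal the given pid."""
        try:
            os.kill(pid, 0)   # signal 0 only performs the permission/existence check
            return True
        except PermissionError:
            return False
        except ProcessLookupError:
            return None       # no such process

    # Run as the neutron user against the keepalived pid of the affected router;
    # pid 1 is used here purely as a placeholder.
    print(can_signal(1))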

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla/+bug/1845900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1846198] Re: packet loss during active L3 HA agent restart

2019-10-04 Thread Bernard Cafarelli
Thanks for the confirmation, marking invalid for neutron then and adding
OSA

** Changed in: neutron
   Status: Incomplete => Invalid

** Project changed: neutron => openstack-ansible

** Changed in: openstack-ansible
   Status: Invalid => New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1846198

Title:
   packet loss during active L3 HA agent restart

Status in neutron:
  Invalid
Status in openstack-ansible:
  New

Bug description:
  Deployment:

  Openstack-ansible 19.0.3(Stein) with two dedicated network 
nodes(is_metal=True) + linuxbridge + vxlan.
  Ubuntu 16.04.6 4.15.0-62-generic

  neutron l3-agent-list-hosting-router R1
  
+--+---++---+--+
  | id   | host  | admin_state_up | 
alive | ha_state |
  
+--+---++---+--+
  | 1b3b1b5d-08e7-48a1-ab8d-256d94099fb6 | test-network2 | True   | :-) 
  | standby  |
  | fa402ada-7716-4ad4-a004-7f8114fb1edf | test-network1 | True   | :-) 
  | active   |
  
+--+---++---+--+

  How to reproduce: Restart the active l3 agent. (systemctl restart
  neutron-l3-agent.service)

  test-network1 server side events:

  systemctl restart neutron-l3-agent: @02:58:56.135635630
  ip monitor terminated (kill -9) @02:58:56.208922038
  vip ips removed @02:58:56.268074480
  keepalived terminated   @02:58:57.318596743
  l3-agent terminated @02:59:07.504366398
  keepalived-state-change terminated  @03:01:07.735281710

  test-network1 journal:
    @02:58:56 test-network1 systemd[1]: Stopping neutron-l3-agent service...
    @02:58:56 test-network1 Keepalived_vrrp[24400]: VRRP_Instance(VR_217) sent 
0 priority
    @02:58:56 test-network1 Keepalived_vrrp[24400]: VRRP_Instance(VR_217) 
removing protocol Virtual Routes
    @02:58:56 test-network1 Keepalived_vrrp[24400]: VRRP_Instance(VR_217) 
removing protocol VIPs.
    @02:58:56 test-network1 Keepalived_vrrp[24400]: VRRP_Instance(VR_217) 
removing protocol E-VIPs.
    @02:58:56 test-network1 Keepalived[24394]: Stopping
    @02:58:56 test-network1 neutron-keepalived-state-change[24278]: 2019-10-01 
02:58:56.193 24278 DEBUG neutron.agent.linux.utils [-] enax_custom_log: pid: 
24283, signal: 9 kill_process 
/openstack/venvs/neutron-19.0.4.dev1/lib/python2.7/site-packages/neutron/agent/linux/utils.py:243
    @02:58:56 test-network1 audit[24089]: USER_END pid=24089 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:session_close acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
    @02:58:56 test-network1 sudo[24089]: pam_unix(sudo:session): session closed 
for user root
    @02:58:56 test-network1 audit[24089]: CRED_DISP pid=24089 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
    @02:58:57 test-network1 Keepalived_vrrp[24400]: Stopped
    @02:58:57 test-network1 Keepalived[24394]: Stopped Keepalived v1.3.9 
(10/21,2017)

  TCPDUMP qrouter-24010932-a0a4-4454-9539-27c1535c5ed8 ha-57528491-1b:
    @02:58:53.130735 IP 169.254.195.168 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 50, authtype simple, intvl 2s, length 20
    @02:58:55.131926 IP 169.254.195.168 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 50, authtype simple, intvl 2s, length 20
    @02:58:56.188558 IP 169.254.195.168 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 0, authtype simple, intvl 2s, length 20
    @02:58:56.215889 IP 169.254.195.168 > 224.0.0.22: igmp v3 report, 1 group 
record(s)
    @02:58:56.539804 IP 169.254.195.168 > 224.0.0.22: igmp v3 report, 1 group 
record(s)
    @02:58:56.995386 IP 169.254.194.242 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 50, authtype simple, intvl 2s, length 20
    @02:58:58.998565 ARP, Request who-has 169.254.0.217 (ff:ff:ff:ff:ff:ff) 
tell 169.254.0.217, length 28
    @02:58:59.000138 ARP, Request who-has 169.254.0.217 (ff:ff:ff:ff:ff:ff) 
tell 169.254.0.217, length 28
    @02:58:59.001063 ARP, Request who-has 169.254.0.217 (ff:ff:ff:ff:ff:ff) 
tell 169.254.0.217, length 28
    @02:58:59.002173 ARP, Request who-has 169.254.0.217 (ff:ff:ff:ff:ff:ff) 
tell 169.254.0.217, length 28
@02:58:59.003018 ARP, Request who-has 169.254.0.217 (ff:ff:ff:ff:ff:ff) 
tell 169.254.0.217, length 28
@02:58:59.003860 IP 169.254.194.242 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 50, authtype simple, intvl 2s, length 20
@02:59:01.004772 IP 169.254.194.242 > 224.0.0.18: VRRPv2, Advertisement, 
vrid 217, prio 50, authtype simple, intvl 2s, 

[Yahoo-eng-team] [Bug 1839595] [NEW] neutron.tests.unit.scheduler.test_dhcp_agent_scheduler.TestNetworksFailover.test_filter_bindings test can fail depending on generated UUIDs

2019-08-09 Thread Bernard Cafarelli
Public bug reported:

On my CentOS system, this test can locally fail 30-50% of the time - tested 
from queens to master:
==
Failed 1 tests - output below:
==

neutron.tests.unit.scheduler.test_dhcp_agent_scheduler.TestNetworksFailover.test_filter_bindings


Captured traceback:
~~~
Traceback (most recent call last):
  File "neutron/tests/base.py", line 177, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py", line 
527, in test_filter_bindings
self.assertIn(network_ids[2], res_ids)
  File 
"/home/stack/neutron/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
self.assertThat(haystack, Contains(needle), message)
  File 
"/home/stack/neutron/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 
'8a6b1ce0-fc2e-40b0-bc98-a59722d96a3f' not in 
['6f6c1774-b351-4763-a8d8-184c3bdf01de', '8cee4a38-63e8-4c71-99e9-611b61aa4c90']


The test creates 4 networks (random UUIDs), then 4 NetworkDhcpAgentBindings.
It then gets a list of these bindings with
NetworkDhcpAgentBinding.get_objects(), which uses a sort based on network_id by
default.
This list is then used with _filter_bindings:
  with mock.patch.object(self, 'agent_starting_up',
                         side_effect=[True, False]):
      res = [b for b in self._filter_bindings(None, bindings_objs)]
  [...]
  res_ids = [b.network_id for b in res]
  self.assertIn(network_ids[2], res_ids)
  self.assertIn(network_ids[3], res_ids)

But as network_ids is not sorted, this can fail depending on the generated 
UUIDs. One example on my system:
network_ids:
['6f6c1774-b351-4763-a8d8-184c3bdf01de', 
'8cee4a38-63e8-4c71-99e9-611b61aa4c90', '8a6b1ce0-fc2e-40b0-bc98-a59722d96a3f', 
'4f476b11-085f-47bf-a27f-e300eb9a85b4']

binding_objs:
NetworkDhcpAgentBinding(dhcp_agent_id=1d6b3b74-afa1-410b-880e-be4f87ce7c6d,network_id=4f476b11-085f-47bf-a27f-e300eb9a85b4)
NetworkDhcpAgentBinding(dhcp_agent_id=07030f55-35cd-4386-853f-70777cdcae2b,network_id=6f6c1774-b351-4763-a8d8-184c3bdf01de)
NetworkDhcpAgentBinding(dhcp_agent_id=1d6b3b74-afa1-410b-880e-be4f87ce7c6d,network_id=8a6b1ce0-fc2e-40b0-bc98-a59722d96a3f)
NetworkDhcpAgentBinding(dhcp_agent_id=07030f55-35cd-4386-853f-70777cdcae2b,network_id=8cee4a38-63e8-4c71-99e9-611b61aa4c90)

which will give a (failing the test) res_ids:
['6f6c1774-b351-4763-a8d8-184c3bdf01de',
'8cee4a38-63e8-4c71-99e9-611b61aa4c90']

I am not sure why upstream gates never seem to hit the problem (as far
as I have checked), but sorting the network_ids makes the test pass
every time.
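A simplified standalone illustration of the ordering issue (this is not the
scheduler or test code itself; it only shows why indexing into the
creation-ordered list is flaky when the compared result follows network_id
order):

  import uuid

  network_ids = [str(uuid.uuid4()) for _ in range(4)]  # creation order, as in the test
  res_ids = sorted(network_ids)[2:]  # stand-in for a result ordered by network_id

  print(network_ids[2] in res_ids)          # flaky: depends on the UUIDs drawn
  print(sorted(network_ids)[2] in res_ids)  # stable once the reference list is sorted too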

** Affects: neutron
     Importance: Undecided
 Assignee: Bernard Cafarelli (bcafarel)
 Status: New


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1839595

Title:
  
neutron.tests.unit.scheduler.test_dhcp_agent_scheduler.TestNetworksFailover.test_filter_bindings
  test can fail depending on generated UUIDs

Status in neutron:
  New

Bug description:
  On my CentOS system, this test can locally fail 30-50% of the time - tested 
from queens to master:
  ==
  Failed 1 tests - output below:
  ==

  
neutron.tests.unit.scheduler.test_dhcp_agent_scheduler.TestNetworksFailover.test_filter_bindings
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/base.py", line 177, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py", line 
527, in test_filter_bindings
  self.assertIn(network_ids[2], res_ids)
File 
"/home/stack/neutron/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 417, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/home/stack/neutron/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'8a6b1ce0-fc2e-40b0-bc98-a59722d96a3f' not in 
['6f6c1774-b351-4763-a8d8-184c3bdf01de', '8cee4a38-63e8-4c71-99e9-611b61aa4c90']

  
  The test creates 4 networks (random UUIDs), then 4 NetworkDhcpAgentBindings,
  It then gets a list of these bindings with 
NetworkDhcpAgentBind

[Yahoo-eng-team] [Bug 1754062] Re: openstack client does not pass prefixlen when creating subnet

2019-08-01 Thread Bernard Cafarelli
Marking as fix released in openstacksdk with
https://review.opendev.org/#/c/550558/

** Changed in: python-openstacksdk
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1754062

Title:
  openstack client does not pass prefixlen when creating subnet

Status in neutron:
  Fix Released
Status in OpenStack SDK:
  Fix Released

Bug description:
  Version: Pike
  OpenStack Client: 3.12.0

  When testing Subnet Pool functionality, I found that the behavior
  between the openstack and neutron clients is different.

  Subnet pool:

  root@controller01:~# openstack subnet pool show MySubnetPool
  +---+--+
  | Field | Value|
  +---+--+
  | address_scope_id  | None |
  | created_at| 2018-03-07T13:18:22Z |
  | default_prefixlen | 8|
  | default_quota | None |
  | description   |  |
  | id| e49703d8-27f4-4a16-9bf4-91a6cf00fff3 |
  | ip_version| 4|
  | is_default| False|
  | max_prefixlen | 32   |
  | min_prefixlen | 8|
  | name  | MySubnetPool |
  | prefixes  | 172.31.0.0/16|
  | project_id| 9233b6b4f6a54386af63c0a7b8f043c2 |
  | revision_number   | 0|
  | shared| False|
  | tags  |  |
  | updated_at| 2018-03-07T13:18:22Z |
  +---+--+

  When attempting to create a /28 subnet from that pool with the
  openstack client, the following error is observed:

  root@controller01:~# openstack subnet create \
  > --subnet-pool MySubnetPool \
  > --prefix-length 28 \
  > --network MyVLANNetwork2 \
  > MyFlatSubnetFromPool
  HttpException: Internal Server Error (HTTP 500) (Request-ID: 
req-61b3f00a-9764-4bcb-899d-e85d66f54e5a), Failed to allocate subnet: 
Insufficient prefix space to allocate subnet size /8.

  However, the same request is successful with the neutron client:

  root@controller01:~# neutron subnet-create --subnetpool MySubnetPool 
--prefixlen 28 --name MySubnetFromPool MyVLANNetwork2
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new subnet:
  +---+---+
  | Field | Value |
  +---+---+
  | allocation_pools  | {"start": "172.31.0.2", "end": "172.31.0.14"} |
  | cidr  | 172.31.0.0/28 |
  | created_at| 2018-03-07T13:35:35Z  |
  | description   |   |
  | dns_nameservers   |   |
  | enable_dhcp   | True  |
  | gateway_ip| 172.31.0.1|
  | host_routes   |   |
  | id| 43cb9dda-1b7e-436d-9dc1-5312866a1b63  |
  | ip_version| 4 |
  | ipv6_address_mode |   |
  | ipv6_ra_mode  |   |
  | name  | MySubnetFromPool  |
  | network_id| e01ca743-607c-4a94-9176-b572a46fba84  |
  | project_id| 9233b6b4f6a54386af63c0a7b8f043c2  |
  | revision_number   | 0 |
  | service_types |   |
  | subnetpool_id | e49703d8-27f4-4a16-9bf4-91a6cf00fff3  |
  | tags  |   |
  | tenant_id | 9233b6b4f6a54386af63c0a7b8f043c2  |
  | updated_at| 2018-03-07T13:35:35Z  |
  +---+---+

  The payload is different between these clients - the openstack client
  fails to send the prefixlen key.
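  A hedged sketch of that difference in the request bodies (the values below are
  placeholders; only the presence or absence of the prefixlen key matters, as the
  captured requests further down show):

    # Body sent by the neutron client - prefixlen reaches the server:
    neutron_client_body = {"subnet": {"network_id": "<net-id>",
                                      "subnetpool_id": "<pool-id>",
                                      "ip_version": 4,
                                      "prefixlen": 28}}

    # Body sent by the openstack client - prefixlen is dropped, so the pool's
    # default_prefixlen of 8 is used and allocation fails as shown above:
    openstack_client_body = {"subnet": {"network_id": "<net-id>",
                                        "subnetpool_id": "<pool-id>",
                                        "ip_version": 4}}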

  openstack client:

  REQ: curl -g -i -X POST http://controller01:9696/v2.0/subnets -H "User-Agent: 
openstacksdk/0.9.17 keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12" 
-H "Content-Type: application/json" -H "X-Auth-Token: 

[Yahoo-eng-team] [Bug 1835344] Re: neutron doesn't check the validity of gateway_ip as a subnet had been created

2019-07-05 Thread Bernard Cafarelli
** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1835344

Title:
  neutron doesn't check the validity of gateway_ip as a subnet had been
  created

Status in neutron:
  Opinion

Bug description:
  neutron doesn't check the validity of gateway_ip when a subnet is created.
  Then, when we attach an interface to a router for this subnet, the neutron-server 
reports an error like: "IP address 10.10.13.254 is not a valid IP for the 
specified subnet."

  How to reproduce:
  1. create a subnet, specify the gateway_ip which isn't in the cidr range.
  # neutron subnet-create  --name -subnet12 --gateway 10.10.13.254 xxx-net1 
10.10.13.0/25

  2. create a router:
  # neutron router-create xxx-router1

  3. attach interface into a router for this subnet.
  # neutron router-interface-add xxx-router1 xxx-subnet123

  result:
  expected: success
  real: unsuccessful, "IP address 10.10.13.254 is not a valid IP for the 
specified subnet."

  Improvement:
  So, I think we should check the validity of the gateway_ip when creating 
a subnet.
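  A minimal standalone sketch of the suggested check (illustrative only: the
  function name and the use of the stdlib ipaddress module are assumptions, not
  actual neutron validation code, and whether such a check is desirable is
  exactly what this report is asking):

    import ipaddress

    def gateway_inside_cidr(gateway_ip, cidr):
        # True when the proposed gateway falls inside the subnet's address range
        return ipaddress.ip_address(gateway_ip) in ipaddress.ip_network(cidr)

    print(gateway_inside_cidr("10.10.13.254", "10.10.13.0/25"))  # False -> would be rejected
    print(gateway_inside_cidr("10.10.13.1", "10.10.13.0/25"))    # True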

  
  tests:
  [root@]#  neutron subnet-create  --name xxx-subnet12 --gateway 
10.10.13.254 xxx-pool-net1 10.10.13.0/25
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Created a new subnet:
  +-++
  | Field   | Value  |
  +-++
  | allocation_pools| {"start": "10.10.13.1", "end": "10.10.13.126"} |
  | available_ip_number | 126|
  | available_ips   | {"start": "10.10.13.1", "end": "10.10.13.126"} |
  | cidr| 10.10.13.0/25  |
  | created_at  | 2019-07-04T02:37:07Z   |
  | description ||
  | dns_nameservers ||
  | enable_dhcp | True   |
  | gateway_ip  | 10.10.13.254   |
  | host_routes ||
  | id  | 16dc9a28-f4d2-4b1e-9922-d78b4453147a   |
  | ip_version  | 4  |
  | ipv6_address_mode   ||
  | ipv6_ra_mode||
  | name| xxx-subnet12|
  | network_id  | 2fa614ec-8532-46a4-a23a-d599d1c1aaf8   |
  | project_id  | f0208ec2708e436fa02bb79bb3851f86   |
  | revision_number | 0  |
  | service_types   ||
  | subnetpool_id   ||
  | tags||
  | tenant_id   | f0208ec2708e436fa02bb79bb3851f86   |
  | updated_at  | 2019-07-04T02:37:07Z   |
  +-++
  [root@xxx]#  neutron router-interface-add xxx-router123 xxx-subnet12
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  IP address 10.10.13.254 is not a valid IP for the specified subnet.
  Neutron server returns request_ids: 
['req-c58bab6f-2152-4cd5-8cbd-e6f8cb7052ed']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1835344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706592] Re: Fix for 1694897 broke existing agent function

2019-07-03 Thread Bernard Cafarelli
Marking as fixed, Ocata has been maintenance-only for some time now (and
Pike too) and newer releases are not affected

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1706592

Title:
  Fix for 1694897 broke existing agent function

Status in neutron:
  Fix Released

Bug description:
  The change made for 1694897
  (https://github.com/openstack/neutron/commit/b50fed17fa338f4137646c9c8d8e47634b7f5ff7#diff-47274e5d515466fd8a373b382009b419)
  broke existing functionality.
  #diff-47274e5d515466fd8a373b382009b419) broke existing functionality.

  From neutron server log :
  2017-07-26 08:17:35.737 4946 INFO neutron.wsgi 
[req-13b37d24-f287-44e1-964b-89a740148970 
09190e437c2fd33046047c5458396ba91f2468cc22a1c33bbc12a0148d0885e4 
5f0805bffd4349c4a53a81a5516c5631 - - -] 127.0.0.1 - - [26/Jul/2017 08:17:35] 
"GET /v2.0/networks.json HTTP/1.1" 200 936 19.272352
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource 
[req-8669ee8b-eb35-4178-8a30-bd99bdd9efa7 
09190e437c2fd33046047c5458396ba91f2468cc22a1c33bbc12a0148d0885e4 
5f0805bffd4349c4a53a81a5516c5631 - - -] index failed: No details.
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 91, in 
resource
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource args['id'] = 
'.'.join([args['id'], fmt])
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource KeyError: 'id'
  2017-07-26 08:17:46.385 4949 ERROR neutron.api.v2.resource

  The args variable is an empty dictionary: {}

  It is due to this code in the same resource.py resource method:
  route_args = request.environ.get('wsgiorg.routing_args')
  if route_args:
      args = route_args[1].copy()
  else:
      args = {}

  args can be an empty dictionary, so args['id'] raises the KeyError.

  This problem is found in ocata release of OpenStack.
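  A minimal sketch of one possible guard (illustrative only; this is not the
  patch that was actually merged for this bug):

    def append_format_suffix(args, fmt):
        # Only rewrite 'id' when the routing args actually carried one; an empty
        # dict (the failing case above) is returned untouched instead of raising
        # KeyError.
        if fmt and 'id' in args:
            args['id'] = '.'.join([args['id'], fmt])
        return args

    print(append_format_suffix({}, 'json'))              # {} - no KeyError
    print(append_format_suffix({'id': 'net1'}, 'json'))  # {'id': 'net1.json'}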

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1706592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833125] [NEW] Remaining neutron-lbaas relevant code and documentation

2019-06-17 Thread Bernard Cafarelli
Public bug reported:

neutron-lbaas was deprecated for some time and is now completely retired
in Train cycle [0]

From a quick grep in neutron repository, we still have references to it
as of June 17.

Some examples:
* Admin guide page [1] on configuration and usage
* LBaaS related policies in neutron/conf/policies/agent.py
* L3 DVR checking device_owner names DEVICE_OWNER_LOADBALANCER and 
DEVICE_OWNER_LOADBALANCERV2
* Relevant unit tests (mostly related to previous feature)

We should drop all of these from neutron repository

[0] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006142.html
[1] https://docs.openstack.org/neutron/latest/admin/config-lbaas.html

** Affects: neutron
 Importance: Low
 Status: New


** Tags: doc lbaas low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833125

Title:
  Remaining neutron-lbaas relevant code and documentation

Status in neutron:
  New

Bug description:
  neutron-lbaas was deprecated for some time and is now completely
  retired in Train cycle [0]

  From a quick grep in neutron repository, we still have references to
  it as of June 17.

  Some examples:
  * Admin guide page [1] on configuration and usage
  * LBaaS related policies in neutron/conf/policies/agent.py
  * L3 DVR checking device_owner names DEVICE_OWNER_LOADBALANCER and 
DEVICE_OWNER_LOADBALANCERV2
  * Relevant unit tests (mostly related to previous feature)

  We should drop all of these from neutron repository

  [0] 
http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006142.html
  [1] https://docs.openstack.org/neutron/latest/admin/config-lbaas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1833122] [NEW] FWaaS admin documentation is outdated

2019-06-17 Thread Bernard Cafarelli
Public bug reported:

https://docs.openstack.org/neutron/latest/admin/fwaas.html

This page has some issues:
* still references FWaaS v1
* mentions upcoming features in Ocata (did we implement it in the end?)
* may not be up-to-date for v2 API (features implemented in the meantime)

** Affects: neutron
 Importance: Low
 Status: New


** Tags: doc fwaas low-hanging-fruit

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833122

Title:
  FWaaS admin documentation is outdated

Status in neutron:
  New

Bug description:
  https://docs.openstack.org/neutron/latest/admin/fwaas.html

  This page has some issues:
  * still references FWaaS v1
  * mentions upcoming features in Ocata (did we implement it in the end?)
  * may not be up-to-date for v2 API (features implemented in the meantime)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1829042] [NEW] Some API requests (GET networks) fail with "Accept: application/json; charset=utf-8" header and WebOb>=1.8.0

2019-05-14 Thread Bernard Cafarelli
Public bug reported:

Original downstream bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1706222

On versions newer than Rocky, we have WebOb 1.8 in requirements. This causes
the following API calls to fail with a 500 error:
GET http://localhost:9696/v2.0/ports
GET http://localhost:9696/v2.0/subnets
GET http://localhost:9696/v2.0/networks

when setting an Accept header with charset like "Accept:
application/json; charset=utf-8"

These calls do not go through neutron.api.v2 and wsgi.request as other
resources do; is that something that should be fixed too?

To reproduce (on master too):
$ curl -s -H "Accept: application/json; charset=utf-8" -H "X-Auth-Token: 
$OS_TOKEN" "http://localhost:9696/v2.0/ports; | python -mjson.tool
{
"NeutronError": {
"detail": "",
"message": "The server could not comply with the request since it is 
either malformed or otherwise incorrect.",
"type": "HTTPNotAcceptable"
}
}

mai 14 18:16:19 devstack neutron-server[1519]: DEBUG neutron.wsgi [-] (1533) 
accepted ('127.0.0.1', 47790) {{(pid=1533) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:956}}
mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] content type None
mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] Controller 'index' defined 
does not support content_type 'None'. Supported type(s): ['application/json']
mai 14 18:16:19 devstack neutron-server[1519]: INFO 
neutron.pecan_wsgi.hooks.translation [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] GET failed (client error): 
The server could not comply with the request since it is either malformed or 
otherwise incorrect.
mai 14 18:16:19 devstack neutron-server[1519]: INFO neutron.wsgi [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] 127.0.0.1 "GET /v2.0/ports 
HTTP/1.1" status: 406  len: 360 time: 0.2243972

Relevant WebOb warning:
https://github.com/Pylons/webob/blob/master/docs/whatsnew-1.8.txt#L24
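
For completeness, the same reproduction can be scripted instead of using
curl; this is a minimal sketch assuming python-requests is available and
that OS_TOKEN holds a valid token, mirroring the curl command above:

import os

import requests

resp = requests.get(
    "http://localhost:9696/v2.0/ports",
    headers={
        # The charset parameter is what triggers the failure; a plain
        # "application/json" Accept header behaves as expected.
        "Accept": "application/json; charset=utf-8",
        "X-Auth-Token": os.environ["OS_TOKEN"],
    },
)
print(resp.status_code)  # 406 (HTTPNotAcceptable) on affected versions
print(resp.json())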

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1829042

Title:
  Some API requests (GET networks) fail with "Accept: application/json;
  charset=utf-8" header and WebOb>=1.8.0

Status in neutron:
  New

Bug description:
  Original downstream bug:
  https://bugzilla.redhat.com/show_bug.cgi?id=1706222

  On versions newer than Rocky, we have WebOb 1.8 in requirements. This causes
the following API calls to fail with a 500 error:
  GET http://localhost:9696/v2.0/ports
  GET http://localhost:9696/v2.0/subnets
  GET http://localhost:9696/v2.0/networks

  when setting an Accept header with charset like "Accept:
  application/json; charset=utf-8"

  These calls do not go through neutron.api.v2 and wsgi.request as other
  resources do; is that something that should be fixed too?

  To reproduce (on master too):
  $ curl -s -H "Accept: application/json; charset=utf-8" -H "X-Auth-Token: 
$OS_TOKEN" "http://localhost:9696/v2.0/ports; | python -mjson.tool
  {
  "NeutronError": {
  "detail": "",
  "message": "The server could not comply with the request since it is 
either malformed or otherwise incorrect.",
  "type": "HTTPNotAcceptable"
  }
  }

  mai 14 18:16:19 devstack neutron-server[1519]: DEBUG neutron.wsgi [-] (1533) 
accepted ('127.0.0.1', 47790) {{(pid=1533) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:956}}
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] content type None
  mai 14 18:16:19 devstack neutron-server[1519]: ERROR pecan.core [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] Controller 'index' defined 
does not support content_type 'None'. Supported type(s): ['application/json']
  mai 14 18:16:19 devstack neutron-server[1519]: INFO 
neutron.pecan_wsgi.hooks.translation [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] GET failed (client error): 
The server could not comply with the request since it is either malformed or 
otherwise incorrect.
  mai 14 18:16:19 devstack neutron-server[1519]: INFO neutron.wsgi [None 
req-0848fbc9-5c8a-4713-b436-029814f89a32 None demo] 127.0.0.1 "GET /v2.0/ports 
HTTP/1.1" status: 406  len: 360 time: 0.2243972

  Relevant WebOb warning:
  https://github.com/Pylons/webob/blob/master/docs/whatsnew-1.8.txt#L24

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1829042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821925] Re: Filter out failing tempest volume test on stable Ocata branch

2019-05-14 Thread Bernard Cafarelli
** Summary changed:

- Limit test coverage for Extended Maintenance stable branches
+ Filter out failing tempest volume test on stable Ocata branch

** Changed in: neutron
   Status: Incomplete => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821925

Title:
  Filter out failing tempest volume test on stable Ocata branch

Status in neutron:
  Fix Released

Bug description:
  Per [0] "There is no statement about the level of testing and upgrades
  from Extended Maintenance are not supported within the Community."

  In the Ocata (currently in EM) and Pike (soon to be) branches, we see Zuul
  check failures from time to time on unstable tests, requiring a few
  rechecks before the backport gets in.

  For some issues it is better to fix the test/setup itself when it is
  easy (see [1] and [2] for recent examples), but for some failing tests
  (testing exotic cases or not directly related to networking), we
  should start filtering them out.

  An initial example is
tempest.api.volume.test_volumes_extend.VolumesExtendTest.test_volume_extend_when_volume_has_snapshot,
which fails regularly on Ocata and often on Pike:
  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22test_volume_extend_when_volume_has_snapshot%5C%22%20AND%20project:%5C%22openstack%2Fneutron%5C%22%20AND%20build_status:%5C%22FAILURE%5C%22

  We can use this bug to track later similar additions

  [0] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance
  [1] https://bugs.launchpad.net/neutron/+bug/1821815 fixes cover jobs
  [2] https://review.openstack.org/#/c/648046/ fixes a functional test failure 
(simple backport)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1821925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

