[Yahoo-eng-team] [Bug 1916041] Re: tempest-slow-py3 ipv6 gate job fails on stable/stein

2021-03-02 Thread Slawek Kaplonski
Fix https://review.opendev.org/c/openstack/neutron/+/777389 merged. This
bug should be solved now.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1916041

Title:
  tempest-slow-py3 ipv6 gate job fails on stable/stein

Status in neutron:
  Fix Released

Bug description:
  tempest-slow-py3 recently started to fail (100% of the backports I checked) on 8 IPv6 tests, for example https://review.opendev.org/c/openstack/neutron/+/775102 and https://review.opendev.org/c/openstack/neutron/+/774258 :
  tempest.scenario.test_network_v6.TestGettingAddress
  test_dhcp6_stateless_from_os[compute,id-d7e1f858-187c-45a6-89c9-bdafde619a9f,network,slow]
  test_dualnet_dhcp6_stateless_from_os[compute,id-76f26acd-9688-42b4-bc3e-cd134c4cb09e,network,slow]
  test_dualnet_multi_prefix_dhcpv6_stateless[compute,id-cf1c4425-766b-45b8-be35-e2959728eb00,network,slow]
  test_dualnet_multi_prefix_slaac[compute,id-9178ad42-10e4-47e9-8987-e02b170cc5cd,network,slow]
  test_dualnet_slaac_from_os[compute,id-b6399d76-4438-4658-bcf5-0d6c8584fde2,network,slow]
  test_multi_prefix_dhcpv6_stateless[compute,id-7ab23f41-833b-4a16-a7c9-5b42fe6d4123,network,slow]
  test_multi_prefix_slaac[compute,id-dec222b1-180c-4098-b8c5-cc1b8342d611,network,slow]
  test_slaac_from_os[compute,id-2c92df61-29f0-4eaa-bee3-7c65bef62a43,network,slow]

  Typical error message:
  tempest.lib.exceptions.BadRequest: Bad request
  Details: {'type': 'BadRequest', 'message': 'Bad router request: Cidr 2001:db8::/64 of subnet b7a37aa7-22c6-4eed-b4de-98587b434556 overlaps with cidr 2001:db8::/64 of subnet d6a91c9f-c268-4724-b2aa-ac5187d167da.', 'detail': ''}

  
  (grabbed from a sample failure: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b92/775102/1/check/tempest-slow-py3/b924f4d/testr_results.html)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1916041/+subscriptions


[Yahoo-eng-team] [Bug 1837908] Re: Dashboard Hangs while trying to create and associate floating IP to instance at the same time

2021-03-02 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1824484 ***
https://bugs.launchpad.net/bugs/1824484

As mentioned in #2, this issue was fixed as bug 1824484. Marking it as
duplicate.

** This bug has been marked a duplicate of bug 1824484
   workflow modal add_item_link is broken

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1837908

Title:
  Dashboard Hangs while trying to create and associate floating IP to
  instance at the same time

Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  OS: Ubuntu 18.04
  Horizon Installation: Manual via apt
  Package repo: ubuntu cloud archive
  Openstack release: Stein
  Architecture: 3 node, controller, network, compute.

  Services:
  Controller: keystone, nova, glance, neutron-server/ml2-plugin, designate, horizon, heat
  Network: neutron-l3-agent, neutron-metadata-agent, neutron-dhcp-agent, neutron-linuxbridge-agent
  Compute: nova-compute, neutron-linuxbridge-agent
  All services behind HA proxy with TLS.

  
  Description: 
  I have tested this exact bug twice, and have the exact same issue on both 
installations, one production, one test.

  The UI does not allow the creation and association of a floating IP at
  the same time.

  However, it succeeds using the CLI: creating a floating IP manually and attaching it to the instance in one command works.

  openstack floating ip create --dns-domain=stack.lon.example.com. --dns-name=testy101 --fixed-ip-address=192.1.2.51 --port 59934e2e-6c6f-4c8f-b0cf-933bcf3497c0 91217bab-6250-4ff5-ae61-0accd79a5d41

  The ability to ping the host afterwards was verified.

  When using the UI, the following steps were carried out.

  Reproducible steps:
  Log into the UI and select ( Associate Floating IP ) on an instance, then select the ( + ) button, then select ( Allocate IP ). This is the step where a new floating IP should be created and attached to the instance. I had this working in Ocata, but now it goes into a continuous loop with the spinner never stopping.

  Refreshing the page, the floating IP is confirmed as being created but
  never actually attached.

  No errors in the logs.

  
  tail -f -n1000 /var/log/apache2/horizon/error.log
  [Thu Jul 25 13:47:44.477285 2019] [wsgi:error] [pid 11933:tid 140288635377408] [remote 172.30.0.2:50814] DEBUG:stevedore.extension:found extension EntryPoint.parse('http = oslo_policy._external:HttpCheck')
  [Thu Jul 25 13:47:44.477908 2019] [wsgi:error] [pid 11933:tid 140288635377408] [remote 172.30.0.2:50814] DEBUG:stevedore.extension:found extension EntryPoint.parse('https = oslo_policy._external:HttpsCheck')
  [Thu Jul 25 13:47:58.151732 2019] [wsgi:error] [pid 11932:tid 140288710911744] [remote 172.30.0.2:51324] DEBUG:stevedore.extension:found extension EntryPoint.parse('http = oslo_policy._external:HttpCheck')
  [Thu Jul 25 13:47:58.152439 2019] [wsgi:error] [pid 11932:tid 140288710911744] [remote 172.30.0.2:51324] DEBUG:stevedore.extension:found extension EntryPoint.parse('https = oslo_policy._external:HttpsCheck')


  tail -f -n1000 /var/log/apache2/horizon/access.log
  172.30.0.2 - - [25/Jul/2019:14:47:22 +0100] "GET /horizon/project/instances/ HTTP/1.1" 200 8564 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/static/dashboard/css/361cca58bb99.css HTTP/1.1" 200 4729 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/i18n/js/horizon+openstack_dashboard/ HTTP/1.1" 200 3612 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/static/dashboard/js/b2bb2963e6de.js HTTP/1.1" 200 37926 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/static/dashboard/css/7b50ccce00d0.css HTTP/1.1" 200 60951 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/static/dashboard/js/787a5a315d99.js HTTP/1.1" 200 122901 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:47:45 +0100] "GET /horizon/static/dashboard/js/c927fd827a6d.js HTTP/1.1" 200 429891 "https://openstack.lon.example.com/horizon/project/instances/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0"
  172.30.0.2 - - [25/Jul/2019:14:

[Yahoo-eng-team] [Bug 1771559] Re: error while loading icon with pyscss 1.3.5 or later

2021-03-02 Thread Akihiro Motoki
https://review.opendev.org/c/openstack/horizon/+/730288 has been merged.

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1771559

Title:
  error while loading icon with pyscss 1.3.5 or later

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-pyscss package in Ubuntu:
  Confirmed

Bug description:
  Hello,

  Font Awesome icons cannot be fetched with the Material Design theme (provided in the package).
  There is no error in the web development console, but the browser shows symbol codes like "xf44f" instead of the icons.

  STEPS TO REPRODUCE:
  Use Chrome web browser
  Login to horizon
  Change theme to "Material"
  Try to use Horizon as usual

  Regards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1771559/+subscriptions


[Yahoo-eng-team] [Bug 1917508] [NEW] Router create fails when router with same name already exists

2021-03-02 Thread Carlos Goncalves
Public bug reported:

Test
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_preserve_preexisting_port
failed with what appears to be an issue in the OVN ML2 driver. According
to Terry, "it looks like the create_lrouter call in impl_idl_ovn.py
passes may_exist, but the one in ovn_client.py which you are hitting
does not."
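
As a rough illustration only (not the actual neutron code; the helper and its
arguments are assumptions based on the ovsdbapp-style API mentioned in the
quote), passing may_exist would look roughly like this:

    # Sketch: make logical router creation idempotent. With may_exist=True a
    # pre-existing row with the same name is tolerated instead of triggering
    # the constraint violation shown in the traceback below.
    def create_lrouter(nb_api, txn, router_id):
        lrouter_name = 'neutron-%s' % router_id
        txn.add(nb_api.lr_add(lrouter_name, may_exist=True))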

https://64e4c686e8a5385bf7e9-3a9e3dcf5065ad1abf1d1a27741d8ba4.ssl.cf5.rackcdn.com/775444/9/check/tripleo-ci-centos-8-containers-multinode/e812d2f/logs/undercloud/var/log/tempest/tempest_run.log

https://zuul.opendev.org/t/openstack/build/e812d2fb618b45118fca269af335d0f4/log/logs/subnode-1/var/log/containers/neutron/server.log#9668-9736

2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
[req-c24e459e-3cd0-457f-8111-4d6bd5e07d05 3cb9d2fb8de645868d440abcd947ea0c 
c2be1e148cc3473e9f880590eb3ae771 - default default] create failed: No details.: 
RuntimeError: OVSDB Error: {"details":"Transaction causes multiple rows in 
\"Logical_Router_Port\" table to have identical values 
(lrp-ffeb33eb-75bf-4330-9d96-188c1529bf18) for index on column \"name\".  First 
row, with UUID c4340403-af73-46c4-a27c-77969d0522fd, existed in the database 
before this transaction and was not modified by the transaction.  Second row, 
with UUID 42b137d3-14ad-4ead-bcdc-b2da5684511b, was inserted by this 
transaction.","error":"constraint violation"}
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource Traceback (most recent 
call last):
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/resource.py", line 98, in 
resource
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 437, in create
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 139, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 135, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 154, in wrapper
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 142, in wrapper
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 183, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource LOG.debug("Retry 
wrapper got retriable exception: %s", e)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 179, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 561, in _create
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource obj = 
do

[Yahoo-eng-team] [Bug 1602081] Re: Use oslo.context's policy dict

2021-03-02 Thread Goutham Pacha Ravi
** Changed in: manila
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1602081

Title:
  Use oslo.context's policy dict

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a cross project goal to standardize the values available to
  policy writers and to improve the basic oslo.context object. It is
  part of the follow up work to bug #1577996 and bug #968696.

  There has been an ongoing problem for how we define the 'admin' role.
  Because tokens are project scoped having the 'admin' role on any
  project granted you the 'admin' role on all of OpenStack. As a
  solution to this keystone defined an is_admin_project field so that
  keystone defines a single project that your token must be scoped to to
  perform admin operations. This has been implemented.

  The next phase of this is to make all the projects understand the
  X-Is-Admin-Project header from keystonemiddleware and pass it to
  oslo_policy. However this pattern of keystone changes something and
  then goes to every project to fix it has been repeated a number of
  times now and we would like to make it much more automatic.

  Ongoing work has enhanced the base oslo.context object to include both
  the load_from_environ and to_policy_values methods. The
  load_from_environ classmethod takes an environment dict with all the
  standard auth_token and oslo middleware headers and loads them into
  their standard place on the context object.

  The to_policy_values() then creates a standard credentials dictionary
  with all the information that should be required to enforce policy
  from the context. The combination of these two methods means in future
  when authentication information needs to be passed to policy it can be
  handled entirely by oslo.context and does not require changes in each
  individual service.

  Note that in future a similar pattern will hopefully be employed to
  simplify passing authentication information over RPC to solve the
  timeout issues. This is a prerequisite for that work.

  There are a few common problems in services that are required to make
  this work:

  1. Most service context.__init__ functions take and discard **kwargs.
  This is so if the context.from_dict receives arguments it doesn't know
  how to handle (possibly because new things have been added to the base
  to_dict) it ignores them. Unfortunately to make the load_from_environ
  method work we need to pass parameters to __init__ that are handled by
  the base class.

  To make this work we simply have to do a better job of using
  from_dict. Instead of passing everything to __init__ and ignoring what
  we don't know we have from_dict extract only the parameters that
  context knows how to use and call __init__ with those.

  2. The parameters passed to the base context.__init__ are old.
  Typically they are user and tenant where most services expect user_id
  and project_id. There is ongoing work to improve this in oslo.context
  but for now we have to ensure that the subclass correctly sets and
  uses the right variable names.

  3. Some services provide additional information to the policy
  enforcement method. To continue to make this function we will simply
  override the to_policy_values method in the subclasses, as sketched
  below.
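
  As a rough, hedged sketch of the pattern described in points 1-3 above (this
  is not any particular service's actual code; the service_catalog field is a
  made-up example and the exact keyword names accepted by the base class vary
  between oslo.context releases):

    from oslo_context import context

    class ServiceRequestContext(context.RequestContext):
        def __init__(self, service_catalog=None, **kwargs):
            # Hand everything the base class understands up to it instead of
            # silently discarding unknown kwargs here (problem 1).
            super(ServiceRequestContext, self).__init__(**kwargs)
            self.service_catalog = service_catalog or []

        @classmethod
        def from_dict(cls, values):
            # Extract only the keys this context knows how to use rather than
            # feeding the whole dict to __init__ (problems 1 and 2).
            known = ('user_id', 'project_id', 'request_id', 'roles', 'is_admin')
            kwargs = {k: values[k] for k in known if k in values}
            return cls(service_catalog=values.get('service_catalog'), **kwargs)

        def to_policy_values(self):
            # Start from the standard credentials dict and add the
            # service-specific extras policy writers rely on (problem 3).
            policy = super(ServiceRequestContext, self).to_policy_values()
            policy['service_catalog'] = self.service_catalog
            return policy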

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602081/+subscriptions


[Yahoo-eng-team] [Bug 1917498] [NEW] Cold migration/resize failure with encrypted volumes can leave instance in error and volumes attaching

2021-03-02 Thread Mark Goddard
Public bug reported:

Description
===
Due to the differences in nova, cinder and barbican policies described in bug 
1895848, a user cannot migrate an instance with an encrypted volume (using 
barbican) that belongs to a user in a different project. Furthermore, if a cold 
migration or resize is attempted and fails when accessing the encryption key, 
the instance will go to an 'error' state, and the volumes will get stuck in the 
'attaching' state.

Steps to reproduce
==
Prerequisites: users A & B, where B has the admin role.

As user A in project A, create an instance with an encrypted volume.
As user B in project B, attempt to cold migrate the instance.

Expected result
===
Cold migration is unsuccessful. Instance remains active with volume attached.

Actual result
=
Cold migration is unsuccessful. Instance is in ERROR state and shutoff. The 
volume appears to be attached from the nova perspective, but in Cinder its 
status is attaching. The volume has lost the attachment record.

Environment
===
Seen in Stein, CentOS 7, deployed via Kolla Ansible.

Logs

Will follow up with more info.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1917498

Title:
  Cold migration/resize failure with encrypted volumes can leave
  instance in error and volumes attaching

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Due to the differences in nova, cinder and barbican policies described in bug 
1895848, a user cannot migrate an instance with an encrypted volume (using 
barbican) that belongs to a user in a different project. Furthermore, if a cold 
migration or resize is attempted and fails when accessing the encryption key, 
the instance will go to an 'error' state, and the volumes will get stuck in the 
'attaching' state.

  Steps to reproduce
  ==
  Prerequisites: users A & B, where B has the admin role.

  As user A in project A, create an instance with an encrypted volume.
  As user B in project B, attempt to cold migrate the instance.

  Expected result
  ===
  Cold migration is unsuccessful. Instance remains active with volume attached.

  Actual result
  =
  Cold migration is unsuccessful. Instance is in ERROR state and shutoff. The 
volume appears to be attached from the nova perspective, but in Cinder its 
status is attaching. The volume has lost the attachment record.

  Environment
  ===
  Seen in Stein, CentOS 7, deployed via Kolla Ansible.

  Logs
  
  Will follow up with more info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1917498/+subscriptions


[Yahoo-eng-team] [Bug 1917487] [NEW] [FT] "IpNetnsCommand.add" command fails frequently

2021-03-02 Thread Rodolfo Alonso
Public bug reported:

"IpNetnsCommand.add" command fails frequently, as reported in:
* 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_018/772460/13/check/neutron-functional-with-uwsgi/0181e4f/testr_results.html
* 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_018/772460/13/check/neutron-functional-with-uwsgi/0181e4f/testr_results.html
* 
https://3d423a08ba57e3349bef-667e59a55d2239af414b0984e42f005a.ssl.cf5.rackcdn.com/771621/7/check/neutron-functional-with-uwsgi/c8d3396/testr_results.html
* 
https://05d521de885e88e86394-43c5916241b7d2ec064c20be68c7ab53.ssl.cf1.rackcdn.com/774626/4/check/neutron-functional-with-uwsgi/06c04c5/testr_results.html

There are two failing points:
1) During the linux namespace creation: http://paste.openstack.org/show/803147/
2) During the sysctl command execution: http://paste.openstack.org/show/803146/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1917487

Title:
  [FT] "IpNetnsCommand.add" command fails frequently

Status in neutron:
  New

Bug description:
  "IpNetnsCommand.add" command fails frequently, as reported in:
  * 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_018/772460/13/check/neutron-functional-with-uwsgi/0181e4f/testr_results.html
  * 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_018/772460/13/check/neutron-functional-with-uwsgi/0181e4f/testr_results.html
  * 
https://3d423a08ba57e3349bef-667e59a55d2239af414b0984e42f005a.ssl.cf5.rackcdn.com/771621/7/check/neutron-functional-with-uwsgi/c8d3396/testr_results.html
  * 
https://05d521de885e88e86394-43c5916241b7d2ec064c20be68c7ab53.ssl.cf1.rackcdn.com/774626/4/check/neutron-functional-with-uwsgi/06c04c5/testr_results.html

  There are two failing points:
  1) During the linux namespace creation: http://paste.openstack.org/show/803147/
  2) During the sysctl command execution: http://paste.openstack.org/show/803146/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1917487/+subscriptions


[Yahoo-eng-team] [Bug 1917483] [NEW] Horizon loads policy files partially when one of the policy files is written in a wrong format

2021-03-02 Thread Takashi Kajinami
Public bug reported:

Currently horizon stops loading policy rules as soon as it detects any file in a
wrong format.
This results in a situation where horizon works with very incomplete policy
rules because of a single invalid policy file.
For example, when horizon succeeds in loading the keystone policy first and then
fails to load the nova policy, it recognizes policy rules only for keystone and
recognizes none for nova or for the remaining services such as neutron, cinder
and so on.
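
A hedged sketch of the per-file error handling this implies (illustrative only,
assuming JSON policy files for simplicity):

    import json
    import logging

    LOG = logging.getLogger(__name__)

    def load_policy_rules(policy_files):
        """Load each policy file independently so one malformed file does not
        prevent the remaining services' rules from being registered."""
        rules = {}
        for service, path in policy_files.items():
            try:
                with open(path) as f:
                    rules[service] = json.load(f)
            except (OSError, ValueError):
                LOG.warning("Skipping malformed or unreadable policy file "
                            "%s for service %s", path, service)
        return rules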

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1917483

Title:
  Horizon loads policy files partially when one of the policy files is
  written in a wrong format

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently horizon stops loading policy rules as soon as it detects any file
  in a wrong format.
  This results in a situation where horizon works with very incomplete policy
  rules because of a single invalid policy file.
  For example, when horizon succeeds in loading the keystone policy first and
  then fails to load the nova policy, it recognizes policy rules only for
  keystone and recognizes none for nova or for the remaining services such as
  neutron, cinder and so on.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1917483/+subscriptions


[Yahoo-eng-team] [Bug 1917472] [NEW] Duplicate attempts to destroy instance when calling os-stop and os-start

2021-03-02 Thread Lee Yarwood
Public bug reported:

Description
===

The libvirt driver's implementation of power_on currently uses
_hard_reboot under the hood to recreate the underlying domain, volume
connections and vifs. As part of this we attempt to destroy the domain
but in the context of power_on that has already been handled by an
earlier call to power_off. AFAICT the existing exception handling within
destroy stops this ever being an issue but it is still inefficient and
should be removed.
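
A simplified sketch of the call chain being described (stub methods standing in
for the real driver code, which does much more):

    class LibvirtDriverSketch(object):
        """Illustrates the redundant destroy; not nova's actual libvirt driver."""

        def power_off(self, instance):
            # The domain is already torn down here ...
            self._destroy(instance)

        def power_on(self, context, instance, network_info, block_device_info):
            # ... but power_on delegates to _hard_reboot to recreate the
            # domain, volume connections and vifs ...
            self._hard_reboot(context, instance, network_info,
                              block_device_info)

        def _hard_reboot(self, context, instance, network_info,
                         block_device_info):
            # ... which starts with another destroy - harmless thanks to the
            # existing exception handling, but redundant in the power_on path.
            self._destroy(instance)
            self._create_domain_and_network(context, instance, network_info,
                                            block_device_info)

        def _destroy(self, instance):
            pass  # stub standing in for the real domain teardown

        def _create_domain_and_network(self, context, instance, network_info,
                                       block_device_info):
            pass  # stub standing in for domain/volume/vif recreation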

Steps to reproduce
==

* Stop an instance
* Start an instance
* Note the duplicate attempts to destroy the instance

Expected result
===

No attempt is made to destroy the instance while starting it.

Actual result
=

An attempt to destroy the instance is made and while this doesn't result
in an error it is inefficient.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   master

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   N/A

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1917472

Title:
  Duplicate attempts to destroy instance when calling os-stop and os-
  start

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  The libvirt driver's implementation of power_on currently uses
  _hard_reboot under the hood to recreate the underlying domain, volume
  connections and vifs. As part of this we attempt to destroy the domain
  but in the context of power_on that has already been handled by an
  earlier call to power_off. AFAICT the existing exception handling
  within destroy stops this ever being an issue but it is still
  inefficient and should be removed.

  Steps to reproduce
  ==

  * Stop an instance
  * Start an instance
  * Note the duplicate attempts to destroy the instance

  Expected result
  ===

  No attempt is made to destroy the instance while starting it.

  Actual result
  =

  An attempt to destroy the instance is made and while this doesn't
  result in an error it is inefficient.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 libvirt

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1917472/+subscriptions


[Yahoo-eng-team] [Bug 1917469] [NEW] non-admins can delete in-use images

2021-03-02 Thread Felix Huettner
Public bug reported:

Hello everyone,

we have an issue in the following (simplified) setup:

* we have a user who is uploading images. This user only has default member rights and is not an admin
* the user uploads new versions of the images regularly
* the user tries to delete previous versions of the images to clean up space. Some of the deletes fail if the image is still in use
* the user then disables such images to ensure that no new instances are spawned from them

When the user now tries to delete such a disabled image, the delete will always
succeed, regardless of whether the image is actually in use or not. The
deletion only happens in the glance database; the image is still present in
Ceph.

Note that this issue does not happen if an admin tries to delete the
disabled image. In that case the image is correctly checked for whether it is
still in use.


Some general information regarding the environment:

* Openstack release queens
* ceph as a backend of cinder and glance
* show_image_direct_url is enabled to allow direct clones


In order to reproduce the issue the following steps are necessary (please run 
with a non-admin user):

[root@openstackclient-5fc7564495-vstnc /]# openstack image create --file 
img.raw testimage
+--+---+
| Field| Value  
   |
+--+---+
| checksum | 01e7d1515ee776be3228673441d449e6   
   |
| container_format | bare   
   |
| created_at   | 2021-03-02T14:09:38Z   
   |
| disk_format  | raw
   |
| file | /v2/images/b8a48536-4b46-4a7b-b0ed-2e818ace11a2/file   
   |
| id   | b8a48536-4b46-4a7b-b0ed-2e818ace11a2   
   |
| min_disk | 0  
   |
| min_ram  | 0  
   |
| name | testimage  
   |
| owner| 4e6fb48327204e94b0021d17f1544e08   
   |
| properties   | 
direct_url='rbd://2a38b93e-cfd9-403c-b5fd-6fa26a58898e/glance-pool/b8a48536-4b46-4a7b-b0ed-2e818ace11a2/snap'
 |
| protected| False  
   |
| schema   | /v2/schemas/image  
   |
| size | 117440512  
   |
| status   | active 
   |
| tags |
   |
| updated_at   | 2021-03-02T14:09:44Z   
   |
| virtual_size | None   
   |
| visibility   | shared 
   |
+--+---+

[root@openstackclient-5fc7564495-vstnc /]# openstack volume create
--image b8a48536-4b46-4a7b-b0ed-2e818ace11a2 --size 10 testvol

[root@openstackclient-5fc7564495-vstnc /]# openstack image delete 
b8a48536-4b46-4a7b-b0ed-2e818ace11a2
Failed to delete image with name or ID 'b8a48536-4b46-4a7b-b0ed-2e818ace11a2': 
409 Conflict: Image b8a48536-4b46-4a7b-b0ed-2e818ace11a2 could not be deleted 
because it is in use: The image cannot be deleted because it is in use through 
the backend store outside of Glance. (HTTP 409)
Failed to delete 1 of 1 images.

[root@openstackclient-5fc7564495-vstnc /]# openstack image set
--deactivate b8a48536-4b46-4a7b-b0ed-2e818ace11a2

[root@openstackclie

[Yahoo-eng-team] [Bug 1917448] [NEW] GRE tunnels over IPv6 have wrong packet_type set in OVS

2021-03-02 Thread Slawek Kaplonski
Public bug reported:

In https://review.opendev.org/c/openstack/neutron/+/763204 we added
support for creating GRE tunnels using IPv6 addresses. But it seems that
by mistake we set the wrong packet_type on such GRE interfaces in OVS.
Instead of "legacy" it should be "legacy_l2" - see
https://github.com/openvswitch/ovs/blob/master/Documentation/faq/configuration.rst
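
For illustration, the shape of the fix on the OVS interface options (a sketch
following the OVS documentation above, not neutron's actual code):

    def gre6_tunnel_interface_attrs(local_ip, remote_ip):
        """Illustrative OVS interface attributes for a GRE tunnel over IPv6."""
        return [
            ('type', 'gre'),
            ('options', {
                'local_ip': local_ip,
                'remote_ip': remote_ip,
                # GRE over IPv6 must carry Ethernet frames, so packet_type
                # has to be "legacy_l2" rather than "legacy".
                'packet_type': 'legacy_l2',
            }),
        ]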

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gre ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1917448

Title:
  GRE tunnels over IPv6 have wrong packet_type set in OVS

Status in neutron:
  Confirmed

Bug description:
  In https://review.opendev.org/c/openstack/neutron/+/763204 we added
  support for creating GRE tunnels using IPv6 addresses. But it seems
  that by mistake we set the wrong packet_type on such GRE interfaces in OVS.
  Instead of "legacy" it should be "legacy_l2" - see
  https://github.com/openvswitch/ovs/blob/master/Documentation/faq/configuration.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1917448/+subscriptions


[Yahoo-eng-team] [Bug 1890432] Re: Create subnet is failing under high load with OVN

2021-03-02 Thread James Page
Backports

https://review.opendev.org/c/openstack/neutron/+/774256
https://review.opendev.org/c/openstack/neutron/+/774135

** No longer affects: charm-neutron-api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890432

Title:
  Create subnet is failing under high load with OVN

Status in neutron:
  Fix Committed
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Triaged
Status in neutron source package in Groovy:
  Triaged

Bug description:
  Under a high concurrency level create subnet is starting to fail.
  (12-14% failure rate) The bundle is OVN / Ussuri.

  neutronclient.common.exceptions.Conflict: Unable to complete operation
  on subnet  This subnet is being modified by another concurrent
  operation.

  Stacktrace: https://pastebin.ubuntu.com/p/sQ5CqD6NyS/
  Rally task:

  {% set flavor_name = flavor_name or "m1.medium" %}
  {% set image_name = image_name or "bionic-kvm" %}

  ---
  NeutronNetworks.create_and_delete_subnets:
    -
      args:
        network_create_args: {}
        subnet_create_args: {}
        subnet_cidr_start: "1.1.0.0/30"
        subnets_per_network: 2
      runner:
        type: "constant"
        times: 100
        concurrency: 10
      context:
        network: {}
        users:
          tenants: 30
          users_per_tenant: 1
        quotas:
          neutron:
            network: -1
            subnet: -1

  Concurrency level set to 1 instead of 10 is not triggering the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890432/+subscriptions


[Yahoo-eng-team] [Bug 1890432] Re: Create subnet is failing under high load with OVN

2021-03-02 Thread James Page
Hirsute/Wallaby packages include the fix from:

https://review.opendev.org/c/openstack/neutron/+/745330/

So marked "Fix Released" for this target.

Focal/Ussuri and Groovy/Wallaby - the fix has been merged into the neutron
stable branch for each release, however there are no new point releases from
Neutron for these two release targets yet.


** Changed in: neutron
   Status: In Progress => Fix Committed

** Changed in: neutron (Ubuntu)
   Status: Triaged => Invalid

** Changed in: neutron (Ubuntu)
   Status: Invalid => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890432

Title:
  Create subnet is failing under high load with OVN

Status in neutron:
  Fix Committed
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Triaged
Status in neutron source package in Groovy:
  Triaged

Bug description:
  Under a high concurrency level create subnet is starting to fail.
  (12-14% failure rate) The bundle is OVN / Ussuri.

  neutronclient.common.exceptions.Conflict: Unable to complete operation
  on subnet  This subnet is being modified by another concurrent
  operation.

  Stacktrace: https://pastebin.ubuntu.com/p/sQ5CqD6NyS/
  Rally task:

  {% set flavor_name = flavor_name or "m1.medium" %}
  {% set image_name = image_name or "bionic-kvm" %}

  ---
  NeutronNetworks.create_and_delete_subnets:
    -
      args:
        network_create_args: {}
        subnet_create_args: {}
        subnet_cidr_start: "1.1.0.0/30"
        subnets_per_network: 2
      runner:
        type: "constant"
        times: 100
        concurrency: 10
      context:
        network: {}
        users:
          tenants: 30
          users_per_tenant: 1
        quotas:
          neutron:
            network: -1
            subnet: -1

  Concurrency level set to 1 instead of 10 is not triggering the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890432/+subscriptions


[Yahoo-eng-team] [Bug 1917437] [NEW] Enable querier for multicast (IGMP) in OVN

2021-03-02 Thread Kamil Sambor
Public bug reported:

Core OVN supports an IGMP querier; we should also add support for it in the
OVN driver in Neutron.

In order to enable it via the OVN driver in neutron one should set:

other_config:mcast_querier='true',
other_config:mcast_eth_src=,
other_config:mcast_ip4_src=

For the smac and sip I would use the values from the logical router port
connected to the switch if any. Otherwise, just set mcast_snoop
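
For illustration, a hedged sketch of setting these options through an
ovsdbapp-style northbound API (the helper name and exact call shape are
assumptions, not the neutron OVN driver code):

    def enable_igmp_querier(nb_api, switch_name, querier_mac, querier_ip):
        """Enable IGMP snooping and querier on one OVN logical switch."""
        nb_api.db_set(
            'Logical_Switch', switch_name,
            ('other_config', {
                'mcast_snoop': 'true',
                'mcast_querier': 'true',
                # Source MAC/IP for the queries: taken from the logical
                # router port attached to the switch, as suggested above.
                'mcast_eth_src': querier_mac,
                'mcast_ip4_src': querier_ip,
            })).execute(check_error=True)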

** Affects: neutron
 Importance: Undecided
 Assignee: Kamil Sambor (ksambor)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kamil Sambor (ksambor)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1917437

Title:
  Enable querier for multicast (IGMP) in OVN

Status in neutron:
  In Progress

Bug description:
  Core OVN supports an IGMP querier; we should also add support for it in
  the OVN driver in Neutron.

  In order to enable it via the OVN driver in neutron one should set:

  other_config:mcast_querier='true',
  other_config:mcast_eth_src=,
  other_config:mcast_ip4_src=

  For the smac and sip I would use the values from the logical router
  port connected to the switch if any. Otherwise, just set mcast_snoop

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1917437/+subscriptions
