[Yahoo-eng-team] [Bug 1968645] Re: Concurrent migration of vms with the same multiattach volume fails

2022-04-11 Thread haobing1
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1968645

Title:
  Concurrent migration of vms with the same multiattach volume fails

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  reproduce:
  1. Create multiple vms
  2. Create a multiattach volume
  3. Attach the volume to all vms
  4. Shut down all vms and migrate all vms at the same time
  5. One of the vm migrations may fail (the failure is intermittent)

  The nova-compute log is as follows:
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager 
[req-95d6268a-95eb-4ea2-98e0-a9e973b8f19c cb6c975e503c4b1ca741f64a42d09d50 
68dd5eeecb434da0aa5ebcdda19a8db6 - default default] [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] Setting instance vm_state to ERROR: 
nova.exception.InvalidInput: Invalid input received: Invalid volume: Volume 
e269257b-831e-4be0-a1e6-fbb2aac922a6 status must be available or in-use or 
downloading to reserve, but the current status is attaching. (HTTP 400) 
(Request-ID: req-3515d919-aee2-40f4-887e-d5abb34a9d2e)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] Traceback (most recent call last):
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/nova/volume/cinder.py", line 396, in 
wrapper
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] res = method(self, ctx, *args, 
**kwargs)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/nova/volume/cinder.py", line 432, in 
wrapper
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] res = method(self, ctx, volume_id, 
*args, **kwargs)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/nova/volume/cinder.py", line 807, in 
attachment_create
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] instance_uuid=instance_id)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in 
__exit__
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] self.force_reraise()
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] raise self.value
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/nova/volume/cinder.py", line 795, in 
attachment_create
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] volume_id, _connector, instance_id)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/cinderclient/api_versions.py", line 
423, in substitution
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] return method.func(obj, *args, 
**kwargs)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/cinderclient/v3/attachments.py", line 
39, in create
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] retval = self._create('/attachments', 
body, 'attachment')
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/cinderclient/base.py", line 300, in 
_create
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] resp, body = 
self.api.client.post(url, body=body)
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d]   File 
"/usr/local/lib/python3.6/site-packages/cinderclient/client.py", line 217, in 
post
  2022-04-11 16:49:46.685 23871 ERROR nova.compute.manager [instance: 
17fc694e-284a-43f0-b6c6-c640a02db23d] return self._cs_request(url, 'POST', 
**kwargs)
  2022-04-11 16:49:46.685 238
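
For context, the HTTP 400 above comes from Cinder rejecting attachment_create
while the multiattach volume is transiently in the 'attaching' state. Below is
a minimal, hedged sketch (not the Nova or Cinder fix) of how a caller using
python-cinderclient could retry around that transient state; the session
setup, microversion and retry parameters are illustrative assumptions.

    # Hedged sketch only -- not the upstream fix. Assumes an authenticated
    # keystoneauth session; retry count and delay are illustrative.
    import time

    from cinderclient import client as cinder_client
    from cinderclient import exceptions as cinder_exc

    def attachment_create_with_retry(cinder, volume_id, connector, instance_id,
                                     retries=5, delay=2.0):
        """Retry attachment creation while Cinder reports the multiattach
        volume as 'attaching' (the transient state hit by the concurrent
        migrations above)."""
        for attempt in range(retries):
            try:
                return cinder.attachments.create(volume_id, connector, instance_id)
            except cinder_exc.BadRequest:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)

    # Usage (session setup omitted):
    # cinder = cinder_client.Client('3.44', session=keystone_session)
    # attachment_create_with_retry(cinder, volume_uuid, connector_dict, instance_uuid)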

[Yahoo-eng-team] [Bug 1951872] Re: OVN: Missing reverse DNS for instances

2022-04-11 Thread Felipe Alencastro
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951872

Title:
  OVN: Missing reverse DNS for instances

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released

Bug description:
  When using OVN, reverse DNS for instances is not working. With dhcp-
  agent:

  ubuntu@vm1:~$ host 10.0.0.11 10.0.0.3
  Using domain server:
  Name: 10.0.0.3
  Address: 10.0.0.3#53
  Aliases: 

  11.0.0.10.in-addr.arpa domain name pointer vm3.openstackgate.local.

  With OVN:

  ubuntu@vm1:~$ host 10.0.0.11 8.8.8.8
  Using domain server:
  Name: 8.8.8.8
  Address: 8.8.8.8#53
  Aliases: 

  Host 11.0.0.10.in-addr.arpa. not found: 3(NXDOMAIN)

  Expected result: Get the same answer as with ML2/OVS.
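
  A quick way to check the PTR behaviour from a client is sketched below. This
is only an illustrative verification snippet using dnspython (not part of the
bug report); the resolver address and instance IP are taken from the example
above.

    # Illustrative check only; assumes the dnspython package is installed.
    import dns.resolver
    import dns.reversename

    def ptr_lookup(address, nameserver):
        """Return the PTR records for 'address' as answered by 'nameserver'."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        rev_name = dns.reversename.from_address(address)  # 11.0.0.10.in-addr.arpa.
        answer = resolver.resolve(rev_name, 'PTR')
        return [str(rdata) for rdata in answer]

    # With ML2/OVS this returns e.g. ['vm3.openstackgate.local.'];
    # with OVN it currently raises dns.resolver.NXDOMAIN.
    # print(ptr_lookup('10.0.0.11', '10.0.0.3'))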

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1951872/+subscriptions




[Yahoo-eng-team] [Bug 1968618] [NEW] get-vnc-console returns 500 if instance in invalid task_state

2022-04-11 Thread Ksenia Kargina
Public bug reported:

Description
===
Nova returns 500 when you create a console for an instance that is in a
transition state such as deleting or migrating.


Steps to reproduce
==
1. Instance in task_state deleting or migrating

mysql> select task_state from instances where uuid='35a5c36e-5464-4747-97a8-c160da093101';
+------------+
| task_state |
+------------+
| migrating  |
+------------+

2. Try to get console

# openstack console url show --novnc 35a5c36e-5464-4747-97a8-c160da093101
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-8dd68e78-1956-4e0c-90ee-d2a5e1be54bf)

Same error with "nova get-vnc-console"


Expected result
===
Nova usually returns HTTP 409 when it catches InstanceInvalidState.
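
For illustration, the kind of handling the reporter expects is sketched below.
This is not an actual Nova patch, only a hedged example of catching
InstanceInvalidState at the API layer and mapping it to a 409; the wrapper
function name is made up.

    # Hedged sketch only -- not merged Nova code.
    from webob import exc

    from nova import exception

    def get_console_or_409(handler, context, instance, console_type):
        try:
            return handler(context, instance, console_type)
        except exception.InstanceInvalidState as state_error:
            # Report a conflict instead of letting the exception bubble up
            # to the WSGI layer as an unexpected 500.
            raise exc.HTTPConflict(explanation=str(state_error))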


Environment
===
Currently seen in Stein, but it seems nova doesn't catch this exception in
other versions either.


Logs & Configs
==
2022-04-11 18:59:24,404.404 11 ERROR nova.api.openstack.wsgi 
[req-f057dae1-9a9a-4db5-8faf-db655b904a0e b6ba9c75146a49829a7427a3e8cc3c10 
192796e61c174f718d6147b129f3f2ff] Unexpected exception in API method: 
nova.exception.InstanceInvalidState: Instance 
35a5c36e-5464-4747-97a8-c160da093101 in task_state migrating. Cannot 
get_vnc_console while the instance is in this state.
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/api/openstack/wsgi.py", 
line 671, in wrapped
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi return 
f(*args, **kwargs)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi return 
func(*args, **kwargs)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/api/validation/__init__.py",
 line 110, in wrapper
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi return 
func(*args, **kwargs)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/api/openstack/compute/remote_consoles.py",
 line 168, in create
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi output = 
handler(context, instance, console_type)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/compute/api.py", line 199, 
in wrapped
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi   File 
"/var/lib/openstack/lib/python3.6/site-packages/nova/compute/api.py", line 187, 
in inner
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi 
attr='task_state', state=instance.task_state)
2022-04-11 18:59:24,404.404 11 TRACE nova.api.openstack.wsgi 
nova.exception.InstanceInvalidState: Instance 
35a5c36e-5464-4747-97a8-c160da093101 in task_state migrating. Cannot 
get_vnc_console while the instance is in this state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1968618

Title:
  get-vnc-console returns 500 if instance in invalid task_state

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Nova returns 500 when you create a console for an instance that is in a
transition state such as deleting or migrating.

  
  Steps to reproduce
  ==
  1. Instance in task_state deleting or migrating

  mysql> select task_state from instances where uuid='35a5c36e-5464-4747-97a8-c160da093101';
  +------------+
  | task_state |
  +------------+
  | migrating  |
  +------------+

  2. Try to get console

  # openstack console url show --novnc 35a5c36e-5464-4747-97a8-c160da093101
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-8dd68e78-1956-4e0c-90ee-d2a5e1be54bf)

  Same error with "nova get-vnc-console"

  
  Expected result
  ===
  Nova usually returns HTTP 409 when it catches InstanceInvalidState.

  
  Environment
  ===
  Currently seen in Stein, but it seems nova doesn't catch this exception in
other versions either.

  
  Logs & Configs
  ==
  2022-04-11 18:59:24,404.404 11 ERROR nova.api.openstack.wsgi 
[req-f057dae1-9a9a-4db5-8faf-db655b904a0e b6ba9c75146a49829a7427a3e8cc3c10 
192796e61c174f718d6147b129f3f2ff] Unexpected exception in API method: 
nova.exception.InstanceInvalidS

[Yahoo-eng-team] [Bug 1967893] Re: [stable/yoga] tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest fails in neutron-ovs-tempest-multinode-full job

2022-04-11 Thread Lajos Katona
The port-resource-request-groups extension was missing from the devstack api_extensions list; fix:
https://review.opendev.org/c/openstack/devstack/+/836671

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1967893

Title:
  [stable/yoga]
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest
  fails in neutron-ovs-tempest-multinode-full job

Status in neutron:
  Fix Released

Bug description:
  tempest.scenario.test_network_qos_placement.MinBwAllocationPlacementTest.*
  tests fail in a reproducible way in the neutron-ovs-tempest-multinode-full
  job (only on the yoga branch).

  Sample log failure:
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0ea/835863/1/check/neutron-ovs-tempest-multinode-full/0ea66ae/testr_results.html
  from:
  https://review.opendev.org/c/openstack/neutron/+/835863/

  From Lajos' review, the port-resource-request-groups extension is loaded
  but it is missing from the api_extensions list.

  These tests in this job worked in the first days after the yoga branching, but
have been failing since around 2022-03-31:
  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-multinode-full&project=openstack%2Fneutron&branch=stable%2Fyoga

  At first glance I did not see any potential culprit in recent neutron
  backports or in merged tempest/neutron-tempest-plugin changes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1967893/+subscriptions




[Yahoo-eng-team] [Bug 1968605] [NEW] Nova Server api broken (error 500)

2022-04-11 Thread Dr. Clemens Hardewig
Public bug reported:

Description:

Updating to Yoga broke the Nova server API. Using default policies and
running

# openstack server list

Expected result:

List of servers in project

Shown result:

/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
  from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
  from cryptography.utils import int_from_bytes
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-ee8ebc01-e78c-41be-b449-af3994e88ec3)


Nova API log shows

2022-04-11 19:49:41.073 3391739 WARNING oslo_policy.policy 
[req-ee8ebc01-e78c-41be-b449-af3994e88ec3 9fce36209f42437bb9d4e5d4423586ae - - 
default default] Policies ['rule:admin_or_owner', 
'os_compute_api:os-attach-interfaces', 'os_compute_api:os-deferred-delete', 
'os_compute_api:os-floating-ips', 'os_compute_api:os-instance-actions', 
'os_compute_api:os-multinic', 'os_compute_api:os-networks:view', 
'os_compute_api:os-rescue', 'os_compute_api:os-security-groups', 
'os_compute_api:os-server-password', 
'os_compute_api:os-flavor-extra-specs:index', 
'os_compute_api:os-tenant-networks', 'os_compute_api:os-volumes', 
'project_member_api', 'project_reader_api', 'project_reader_or_admin', 
'os_compute_api:os-admin-password', 'os_compute_api:os-attach-interfaces:list', 
'os_compute_api:os-attach-interfaces:show', 
'os_compute_api:os-attach-interfaces:create', 
'os_compute_api:os-attach-interfaces:delete', 
'os_compute_api:os-console-output', 'os_compute_api:os-create-backup', 
'os_compute_api:os-deferred-delete:restore', 
'os_compute_api:os-deferred-delete:force', 
'os_compute_api:os-flavor-extra-specs:show', 
'os_compute_api:os-floating-ips:add', 'os_compute_api:os-floating-ips:remove', 
'os_compute_api:os-floating-ips:list', 'os_compute_api:os-floating-ips:create', 
'os_compute_api:os-floating-ips:show', 'os_compute_api:os-floating-ips:delete', 
'os_compute_api:os-instance-actions:list', 
'os_compute_api:os-instance-actions:show', 'os_compute_api:ips:show', 
'os_compute_api:ips:index', 'os_compute_api:os-lock-server:lock', 
'os_compute_api:os-lock-server:unlock', 'os_compute_api:os-multinic:add', 
'os_compute_api:os-multinic:remove', 'os_compute_api:os-networks:list', 
'os_compute_api:os-networks:show', 'os_compute_api:os-pause-server:pause', 
'os_compute_api:os-pause-server:unpause', 'os_compute_api:os-quota-sets:show', 
'os_compute_api:os-quota-sets:detail', 'os_compute_api:os-remote-consoles', 
'os_compute_api:os-unrescue', 'os_compute_api:os-security-groups:get', 
'os_compute_api:os-security-groups:show', 
'os_compute_api:os-security-groups:create', 
'os_compute_api:os-security-groups:update', 
'os_compute_api:os-security-groups:delete', 
'os_compute_api:os-security-groups:rule:create', 
'os_compute_api:os-security-groups:rule:delete', 
'os_compute_api:os-security-groups:list', 
'os_compute_api:os-security-groups:add', 
'os_compute_api:os-security-groups:remove', 
'os_compute_api:os-server-groups:create', 
'os_compute_api:os-server-groups:delete', 
'os_compute_api:os-server-groups:index', 
'os_compute_api:os-server-groups:show', 'os_compute_api:server-metadata:index', 
'os_compute_api:server-metadata:show', 'os_compute_api:server-metadata:create', 
'os_compute_api:server-metadata:update_all', 
'os_compute_api:server-metadata:update', 
'os_compute_api:server-metadata:delete', 
'os_compute_api:os-server-password:show', 
'os_compute_api:os-server-password:clear', 
'os_compute_api:os-server-tags:delete_all', 
'os_compute_api:os-server-tags:index', 
'os_compute_api:os-server-tags:update_all', 
'os_compute_api:os-server-tags:delete', 'os_compute_api:os-server-tags:update', 
'os_compute_api:os-server-tags:show', 'compute:server:topology:index', 
'os_compute_api:servers:index', 'os_compute_api:servers:detail', 
'os_compute_api:servers:show', 
'os_compute_api:servers:show:flavor-extra-specs', 
'os_compute_api:servers:create', 'os_compute_api:servers:create:attach_volume', 
'os_compute_api:servers:create:attach_network', 
'os_compute_api:servers:create:trusted_certs', 'os_compute_api:servers:delete', 
'os_compute_api:servers:update', 'os_compute_api:servers:confirm_resize', 
'os_compute_api:servers:revert_resize', 'os_compute_api:servers:reboot', 
'os_compute_api:servers:resize', 'os_compute_api:servers:rebuild', 
'os_compute_api:servers:rebuild:trusted_certs', 
'os_compute_api:servers:create_image', 
'os_compute_api:servers:create_image:allow_volume_backed', 
'os_compute_api:servers:start', 'os_compute_api:servers:stop', 
'os_compute_api:servers:trigger_crash_dump', 'os_compute_api:os-shelve:shelve', 
'os_compute_api:os-shelve:unshelve', 
'os_compute_api:os-simple-tenant-usage:show', 
'os_compute_api:os-suspend-server:re

[Yahoo-eng-team] [Bug 1968606] [NEW] Importing neutron.common.config module registers config options

2022-04-11 Thread Jakub Libosvar
Public bug reported:

When neutron.common.config is imported, some config options are
registered as a side effect, without any function being called. This
causes errors for projects that also import other modules that end up
importing neutron.common.config - such as neutron.db.models_v2. If a
project needs Neutron's basic DB models and uses the same config options,
there is no way to make it work. Performing any action on import is
an anti-pattern in Python.
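
To illustrate the difference, here is a small sketch using oslo.config
directly; the option and function names are illustrative, not Neutron's real
ones.

    # Illustration only; the option below is a stand-in.
    from oslo_config import cfg

    _OPTS = [cfg.StrOpt('core_plugin', help='Example option.')]

    # Anti-pattern: runs as a side effect of importing this module, so any
    # project that (transitively) imports it also registers the options.
    # cfg.CONF.register_opts(_OPTS)

    # Import-safe alternative: callers register the options explicitly.
    def register_common_opts(conf=cfg.CONF):
        conf.register_opts(_OPTS)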

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968606

Title:
  Importing neutron.common.config module registers config options

Status in neutron:
  New

Bug description:
  When neutron.common.config is imported, some config options are
  registered as a side effect, without any function being called. This
  causes errors for projects that also import other modules that end up
  importing neutron.common.config - such as neutron.db.models_v2. If a
  project needs Neutron's basic DB models and uses the same config options,
  there is no way to make it work. Performing any action on import is
  an anti-pattern in Python.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968606/+subscriptions




[Yahoo-eng-team] [Bug 1967144] Re: [OVN] Live migration can fail due to wrong revision id during setting requested chassis in ovn

2022-04-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/836618
Committed: https://opendev.org/openstack/neutron/commit/4f75c6a616d3cb549153fcc496926358dfc9178a
Submitter: "Zuul (22348)"
Branch: master

commit 4f75c6a616d3cb549153fcc496926358dfc9178a
Author: Slawek Kaplonski 
Date:   Tue Apr 5 11:26:32 2022 +0200

Retry port_update in the OVN if revision mismatch during live-migration

This is a terrible hack, but it seems that there is no other way to
fix/work around the race which may happen during live-migration between:
- the port update event coming from the OVN db (port DOWN on the src node),
- the API call from nova-compute to activate port binding on the destination
node.

If those 2 events are executed in a specific order by different
workers, it may happen that the port binding activation will not update
"requested_chassis" of the port in OVN northd due to a revision mismatch
(ovn_revision and neutron_revision will already have been bumped by the worker
which processes the "port update" OVN event).
If "requested_chassis" is not updated, OVN will not claim the port on
the dest node, thus connectivity to the vm will be broken.

To work around that issue, the port_update_postcommit method of the OVN
mechanism driver will catch the RevisionMismatch exception raised by the
ovn_client and, in case this was a port_update after live-migration,
will get the port data from the neutron db and try to update the port in
OVN northd once again.

Closes-bug: #1967144
Change-Id: If6e1c6e0fc772101bcd3427601800aaae84381dd
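
A simplified sketch of the retry described above follows; it is not the merged
Neutron code, and the exception class, client object and plugin calls are
placeholders.

    # Placeholder types; the real ones live in the OVN mechanism driver.
    class RevisionMismatch(Exception):
        """Stand-in for the revision-conflict error raised by the OVN client."""

    def port_update_postcommit(plugin, ovn_client, admin_context, port):
        try:
            ovn_client.update_port(port)
        except RevisionMismatch:
            # Live-migration race: the "port DOWN" OVN event already bumped the
            # revision, so re-read the port from the Neutron DB and push the
            # update again so that requested_chassis points at the dest chassis.
            refreshed = plugin.get_port(admin_context, port['id'])
            ovn_client.update_port(refreshed)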


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1967144

Title:
  [OVN] Live migration can fail due to wrong revision id during setting
  requested chassis in ovn

Status in neutron:
  Fix Released

Bug description:
  During the live-migration of a vm, when Nova calls the /binding/activate API to
activate the port binding on the destination node, Neutron calls the mechanism
drivers' port_update_postcommit() method, and at that point the ovn mechanism
driver should update the "requested chassis" field of the LSP.
  Unfortunately, we recently saw in our d/s ci a race condition where one worker
was processing such a port binding activation request while another worker was
processing an OVN event related to the same port.
  As a result, there was a mismatch of the revision numbers in the ovn db and in
neutron, and the requested chassis wasn't updated for the LSP. Because of that,
the port wasn't claimed by OVN on the destination node, thus connectivity to the
vm was broken.

  Some more details can be found in our d/s bugzilla
  https://bugzilla.redhat.com/show_bug.cgi?id=2068065

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1967144/+subscriptions




[Yahoo-eng-team] [Bug 1959333] Re: Error 500 after request with Invalid Scope

2022-04-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/835234
Committed: https://opendev.org/openstack/neutron-lib/commit/da2baf389c85548b5c0b475a651aaf22bd7d9701
Submitter: "Zuul (22348)"
Branch: master

commit da2baf389c85548b5c0b475a651aaf22bd7d9701
Author: Slawek Kaplonski 
Date:   Fri Mar 25 14:50:33 2022 +0100

Add oslo_policy.InvalidScope exception to the api faults map

With enforcing scopes enabled in Neutron, oslo_policy can raise the
InvalidScope exception while enforcing policy rules. So this exception
type should be handled in the same way as PolicyNotAuthorized.
Otherwise neutron returns 500 if the InvalidScope exception is raised
by the policy enforcer.

Closes-Bug: #1959333
Change-Id: Iad1e2c9f797091d728d419c6b9dc67d861d4214a
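
As a hedged illustration of the mapping the commit describes (the real change
adds the exception to neutron-lib's api faults map; the names below are only
for the example):

    # Illustration only; not the literal neutron-lib code.
    from oslo_policy import policy
    from webob import exc

    FAULT_MAP = {
        policy.PolicyNotAuthorized: exc.HTTPForbidden,
        # New entry: an out-of-scope token is a client error, not a 500.
        policy.InvalidScope: exc.HTTPForbidden,
    }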


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959333

Title:
  Error 500 after request with Invalid Scope

Status in neutron:
  Fix Released

Bug description:
  After patch https://review.opendev.org/c/openstack/neutron/+/821208
  was merged, when scope enforcement is enabled and an API request with
  the wrong scope is made, an unhandled InvalidScope exception is raised
  and error 500 is returned to the user. It should be properly handled and
  a better error returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959333/+subscriptions




[Yahoo-eng-team] [Bug 1968343] Re: Security Group Rule create with forged integer security_group_id causes exceptions

2022-04-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/837168
Committed: https://opendev.org/openstack/neutron/commit/c0bf560fa36aac798ad8783749fa78ddf766bdec
Submitter: "Zuul (22348)"
Branch: master

commit c0bf560fa36aac798ad8783749fa78ddf766bdec
Author: Andrew Karpow 
Date:   Fri Apr 8 18:32:03 2022 +0200

Force security_group_id uuid validation of sg rules

security_groups_db._check_security_group is supposed to check the
security_group_id of the _create_security_group_rule payload.
When using an integer, e.g. 0, as security_group_id, the check
succeeds because mysql accepts the following query:

SELECT * FROM securitygroups WHERE id in (0)

Forcing validation of security_group_id as a uuid fixes the problem.

Closes-Bug: #1968343
Change-Id: I7c36b09309c1ef66608afacfb281b6f4b06ea5b8


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968343

Title:
  Security Group Rule create with forged integer security_group_id
  causes exceptions

Status in neutron:
  Fix Released

Bug description:
  Assuming a project xyz has Security Groups, the following POST request
  fails with an HTTP 500 ValueError:

  /v2.0/security-group-rules
  {
"security_group_rule": {
"direction": "egress",
"ethertype": "IPv4",
"port_range_max": 443,
"port_range_min": 443,
"project_id": "xyz",
"protocol": "tcp",
"remote_ip_prefix": "34.231.24.224/32",
"security_group_id": 0
}
  }

  The ValueError is raised by python uuid with `badly formed hexadecimal UUID
string`.
  This is because the prior validation, _check_security_group in
securitygroups_db.py, uses sg_obj.SecurityGroup.objects_exist(context, id=id),
which yields true with MySQL, e.g.:

  MariaDB [neutron]> SELECT count(*) FROM securitygroups WHERE securitygroups.id IN (0);
  +----------+
  | count(*) |
  +----------+
  |       15 |
  +----------+
  1 row in set, 46 warnings (0.001 sec)

  MariaDB [neutron]> SHOW WARNINGS LIMIT 1;
  +---------+------+--------------------------------------------------------------------------+
  | Level   | Code | Message                                                                  |
  +---------+------+--------------------------------------------------------------------------+
  | Warning | 1292 | Truncated incorrect DOUBLE value: '77dd53b2-59c0-4208-b03c-9f9f65bf9a28' |
  +---------+------+--------------------------------------------------------------------------+
  1 row in set (0.000 sec)

  Thus, the validation succeeds and the code path is followed until the
  id is converted to a UUID, which causes the unexpected exception.
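
  A hedged sketch of the kind of up-front validation the fix enforces is shown
below; it is not the exact Neutron code, and the exception class is a
placeholder.

    # Sketch only; Neutron raises its own NotFound-style error here.
    import uuid

    class SecurityGroupNotFound(Exception):
        """Placeholder for the error returned for an unknown security group."""

    def validated_security_group_id(sg_id):
        # uuid.UUID() raises ValueError for inputs such as 0 or "abc", which is
        # exactly the forged-integer case above, so reject the value up front
        # instead of letting MySQL coerce it and failing later with a 500.
        try:
            return str(uuid.UUID(str(sg_id)))
        except ValueError:
            raise SecurityGroupNotFound(sg_id)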

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968343/+subscriptions




[Yahoo-eng-team] [Bug 1968555] [NEW] evacuate after network issue will cause vm running on two hosts

2022-04-11 Thread shews
Public bug reported:

Environment
===
OpenStack Queens + libvirt 4.5.0 + qemu 2.12 running on CentOS 7, with Ceph RBD
storage

Description
===
If the management network of the compute host fails, nova-compute may be
reported as down while the openstack-nova-compute.service is still running on
that host. If you now evacuate a vm from that host, the evacuation will succeed,
but the vm will be running on both the old host and the new host, even after the
management network of the old host recovers, which may cause vm errors.

Steps to reproduce
==
1. Manually turn down the management network port of the compute host, e.g.
ifconfig eth0 down
2. After the nova-compute service of that host is shown as down by openstack
compute service list, evacuate one vm from that host:
nova evacuate 
3. After the evacuation succeeds, you can find the vm running on two hosts.
4. Manually turn the management network port of the old compute host back up,
e.g. ifconfig eth0 up; you can find the vm still running on this host, and it
cannot be destroyed automatically unless you restart the
openstack-nova-compute.service on that host.

Expected result
===
Maybe we can add a periodic task to auto destroy this vm?
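
A very rough sketch of such a periodic task is shown below. This is not
existing Nova behaviour (today the stale guest is only cleaned up when the
service restarts, as noted in step 4), and the driver/DB helper methods used
here are placeholders.

    # Rough sketch; the methods on self.driver and self.lookup_instance are
    # placeholders, not real Nova APIs.
    from oslo_service import periodic_task

    class EvacuationCleanup(object):
        @periodic_task.periodic_task(spacing=600)
        def _cleanup_evacuated_guests(self, context):
            for guest_uuid in self.driver.list_local_guests():
                instance = self.lookup_instance(context, guest_uuid)
                if instance is not None and instance.host != self.host:
                    # The instance was evacuated elsewhere while this host was
                    # unreachable; power off the stale local copy.
                    self.driver.power_off_local_guest(guest_uuid)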

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1968555

Title:
  evacuate after network issue will cause vm running on two hosts

Status in OpenStack Compute (nova):
  New

Bug description:
  Environment
  ===
  OpenStack Queens + libvirt 4.5.0 + qemu 2.12 running on CentOS 7, with Ceph RBD
storage

  Description
  ===
  If the management network of the compute host fails, nova-compute may be
reported as down while the openstack-nova-compute.service is still running on
that host. If you now evacuate a vm from that host, the evacuation will succeed,
but the vm will be running on both the old host and the new host, even after the
management network of the old host recovers, which may cause vm errors.

  Steps to reproduce
  ==
  1. Manually turn down the management network port of the compute host, e.g.
ifconfig eth0 down
  2. After the nova-compute service of that host is shown as down by openstack
compute service list, evacuate one vm from that host:
  nova evacuate 
  3. After the evacuation succeeds, you can find the vm running on two hosts.
  4. Manually turn the management network port of the old compute host back up,
e.g. ifconfig eth0 up; you can find the vm still running on this host, and it
cannot be destroyed automatically unless you restart the
openstack-nova-compute.service on that host.

  Expected result
  ===
  Maybe we can add a periodic task to auto destroy this vm?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1968555/+subscriptions




[Yahoo-eng-team] [Bug 1967839] Re: [L3] NDP extension not handling "ha_state_change" correctly

2022-04-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/836581
Committed: https://opendev.org/openstack/neutron/commit/d73ec5000bcb3a6add158eb50057af0a619c7f7f
Submitter: "Zuul (22348)"
Branch: master

commit d73ec5000bcb3a6add158eb50057af0a619c7f7f
Author: Rodolfo Alonso Hernandez 
Date:   Mon Mar 21 04:48:11 2022 +

[L3] Fix "NDPProxyAgentExtension.ha_state_change" call

The parameter "data" passed to the method "ha_state_change" is not
a router but a dictionary with "router_id" info.

The method "NDPProxyAgentExtension._process_router" requires the
router ID and the "enable_ndp_proxy" value, stored in the agent
router cache.

Closes-Bug: #1967839
Related-Bug: #1877301
Change-Id: Iab163e69f7e3641e2e1a451374231b6ccfa74c3e
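
A minimal sketch of the corrected handling follows; it is not the merged
patch, and the per-router cache structure is a simplified placeholder.

    # Sketch only; the real extension keeps richer per-router state.
    class NdpProxyHaSketch(object):
        def __init__(self):
            self.router_cache = {}  # router_id -> {'enable_ndp_proxy': bool}

        def ha_state_change(self, context, data):
            # 'data' is a dict like {'router_id': ..., 'state': ...}, not a
            # router object, so read 'router_id' (the old code read 'id').
            router_id = data['router_id']
            cached = self.router_cache.get(router_id)
            if cached is None:
                return
            self._process_router(context, router_id, cached['enable_ndp_proxy'])

        def _process_router(self, context, router_id, enable_ndp_proxy):
            """Placeholder for the real NDP proxy processing."""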


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1967839

Title:
  [L3] NDP extension not handling "ha_state_change" correctly

Status in neutron:
  Fix Released

Bug description:
  The L3 extension NDP proxy is not handling the "ha_state_change" call
  correctly. From the functional test logs: [1].

  The method "NDPProxyAgentExtension._process_router" is retrieving the
  router ID from "data" dictionary using "id" key [2], instead of
  "router_id" [3].

  
  [1] https://paste.opendev.org/show/bkBUKu9WfB1u84BX5IvV/
  [2] https://github.com/openstack/neutron/blob/3615cd85a4cc6aeecf7f066c4eb21c3cdca71d4c/neutron/agent/l3/extensions/ndp_proxy.py#L351
  [3] https://github.com/openstack/neutron/blob/3615cd85a4cc6aeecf7f066c4eb21c3cdca71d4c/neutron/agent/l3/ha.py#L185

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1967839/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp