[Yahoo-eng-team] [Bug 2086205] Re: [OVN] The security group create command is not creating the SG rules revision number registers

2024-11-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/933969
Committed: 
https://opendev.org/openstack/neutron/commit/e0ee8bd7726a24747ee5028cb31f9b62cfcfcc29
Submitter: "Zuul (22348)"
Branch:master

commit e0ee8bd7726a24747ee5028cb31f9b62cfcfcc29
Author: Rodolfo Alonso Hernandez 
Date:   Thu Oct 31 23:33:58 2024 +

[OVN] Create the SG rules revision number registers

When a security group is created, its default security group rules are
added as well. This patch creates the revision number registers for
those security group rules and bumps them to their first revision.

Closes-Bug: #2086205
Change-Id: Idc6ad29bcac23c2397e32f290addfd1877b8b3e0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2086205

Title:
  [OVN] The security group create command is not creating the SG rules
  revision number registers

Status in neutron:
  Fix Released

Bug description:
  When a security group is created, a set of default security group
  rules is added too. The ML2/OVN extension method for security group
  creation does not create the revision number registers for those
  security group rules. These registers are needed whenever a tracked
  resource [1] is created, in order to keep the parity state between
  the OVN DB and the Neutron DB.

  
[1]https://github.com/openstack/neutron/blob/f2d76280dc58e78a7fdc0eb4810174a2e3dd8481/neutron/common/ovn/constants.py#L258-L267
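
  A hedged sketch of the expected bookkeeping, assuming the helpers in
  neutron/db/ovn_revision_numbers_db.py (exact names and signatures may
  differ):

    # Hedged sketch: create and bump a revision register for each of the
    # default SG rules created together with the security group.
    from neutron.common.ovn import constants as ovn_const
    from neutron.db import ovn_revision_numbers_db as db_rev

    def _bump_sg_rule_revisions(context, sg_rules):
        for rule in sg_rules:
            db_rev.bump_revision(
                context, rule, ovn_const.TYPE_SECURITY_GROUP_RULES)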

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2086205/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085975] Re: Compute fails to clean up after evacuated instance if the evacuation still in progress

2024-11-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/933734
Committed: 
https://opendev.org/openstack/nova/commit/2c76fd3bafc90b23ed9d9e6a7f84919082dc0076
Submitter: "Zuul (22348)"
Branch:master

commit 2c76fd3bafc90b23ed9d9e6a7f84919082dc0076
Author: Balazs Gibizer 
Date:   Wed Oct 30 13:24:41 2024 +0100

Route shared storage RPC to evac dest at startup

If a compute host is started up while an evacuation of an instance from
this host is still in progress, the destroy_evacuated_instances call
will try to check if the instance is on shared storage, to decide
whether the local disk needs to be deleted from the source node or not.
However, this call uses instance.host to target the RPC call. If the
evacuation is still ongoing, instance.host might still be set to the
source node. This means the source node, during init_host, tries to
call the RPC on itself. This will always time out, as the RPC server is
only started after init_host. It is also wrong, as the shared storage
check RPC should be called on another host. Moreover, when this wrongly
routed RPC times out, the source compute logs the exception, ignores
it, and assumes the disk is on shared storage, so it won't clean it up.
This means that a later evacuation of this VM targeting this node will
fail, as the instance directory is already present on the node.

The fix is simple: the destroy_evacuated_instances call should always
send the shared storage check RPC call to the destination node of the
evacuation, based on the migration record. This is correct even if the
evacuation is still in progress, and also if it is already finished.

Closes-Bug: #2085975
Change-Id: If5ad213649d68da995dad146f0a0c3cacc369309


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2085975

Title:
  Compute fails to clean up after evacuated instance if the evacuation
  still in progress

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Reproduce:
  * have a two-node devstack, hostA and hostB, both with simple local
    storage
  * start an instance on hostA
  * inject a sleep in nova.virt.driver.rebuild to simulate that the
    rebuild takes time
  * stop hostA
  * evacuate the VM
  * while the evacuation is still in progress on hostB, start up hostA

  Actual:
  hostA will try to check if the VM is using shared storage and sends an
  RPC call to instance.host. As that is not yet set to the destination,
  the RPC call hits hostA, which is still in init_host, so the RPC is
  never answered and hostA's destroy_evacuated_instances call gets a
  MessagingTimeout exception. That is logged and then ignored. But nova
  defaults the shared_storage flag to true, so in this case the local
  instance dir is not cleaned.

  Expected:
  hostA sends the RPC call to hostB, which responds, and the local
  instance dir on hostA is cleaned up.
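
  The fix routes the shared-storage check using the migration record
  rather than instance.host. A hedged sketch of the idea (method and
  attribute names follow nova's conventions but are illustrative):

    # Hedged sketch: target the RPC at the evacuation destination taken
    # from the migration record, which is correct whether the evacuation
    # is still in progress or already finished.
    def _check_shared_storage(self, ctxt, instance, migration, data):
        return self.compute_rpcapi.check_instance_shared_storage(
            ctxt, data, instance=instance, host=migration.dest_compute)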

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2085975/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085543] Re: [OVN] Port device_owner is not set in the Trunk subport

2024-11-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/933836
Committed: 
https://opendev.org/openstack/neutron/commit/c0bdb0c8a33286acb4d44ad865f309fc79b6
Submitter: "Zuul (22348)"
Branch:master

commit c0bdb0c8a33286acb4d44ad865f309fc79b6
Author: Rodolfo Alonso Hernandez 
Date:   Wed Oct 30 18:08:15 2024 +

[OVN] Check LSP.up status before setting the port host info

Before updating the Logical_Switch_Port host information, the current
status of the port needs to be checked. If it doesn't match the event
triggering this update, the host information is not updated.

Closes-Bug: #2085543
Change-Id: I92afb190375caf27c815f9fe1cb627e87c49d4ca


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2085543

Title:
  [OVN] Port device_owner is not set in the Trunk subport

Status in neutron:
  Fix Released

Bug description:
  This issue was found in the test
  
``neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle``.
  The subport "463f7c45-3c06-4340-a509-e96b2faae525" [1] is created and
  then assigned to a trunk. When the port is assigned to the trunk, the
  device owner is not updated to "trunk:subport"

  UPDATE: the description is not accurate. The problem is that the port
  deactivation is executed before the port activation has finished.
  When the port is activated (the VM starts and binds the parent port
  and subports), the method ``set_port_status_up`` is called from the
  event ``LogicalSwitchPortUpdateUpEvent``. The problem is that the
  event actions are executed in a loop thread
  (``RowEventHandler.notify_loop``) that is not synchronous with the
  API call. The API call exits before ``set_port_status_up`` finishes.

  The tempest test checks that the subport is ACTIVE and proceeds to
  unbind it (remove it from the trunk). That removes the port
  device_owner and binding host. That's a problem because the method
  ``set_port_status_up`` is still being executed and needs the "older"
  values (device_owner="trunk:subport").

  In a nutshell, this is a race condition because the OVN event
  processing is done asynchronously to the API call.
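
  The committed fix guards the update with a status check; a minimal
  sketch of the idea (names illustrative, not neutron's exact code):

    # Hedged sketch: only apply the host-info update if the LSP status
    # still matches the event that triggered it.
    def update_lsp_host_info(nb_idl, port_id, event_says_up):
        lsp = nb_idl.lookup('Logical_Switch_Port', port_id, default=None)
        if lsp is None or bool(lsp.up and lsp.up[0]) != event_says_up:
            return  # port state changed under us; skip the stale update
        # ... proceed to write external_ids:neutron:host_id ...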

  Logs:
  
https://f918f4eca95000e5dd6c-6bcda3a769a6c31ee12f465dd60bb9a2.ssl.cf5.rackcdn.com/933210/3/check/neutron-
  tempest-plugin-ovn-10/43a1557/testr_results.html

  [1]https://paste.opendev.org/show/bzmtiytDBKKkgi4IgZ15/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2085543/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085946] Re: [OVN] Revision number registers must be filtered by resource ID and type

2024-11-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/933752
Committed: 
https://opendev.org/openstack/neutron/commit/a298a37fe7ee41d25db02fdde36e134b01ef5d9a
Submitter: "Zuul (22348)"
Branch:master

commit a298a37fe7ee41d25db02fdde36e134b01ef5d9a
Author: Rodolfo Alonso Hernandez 
Date:   Wed Oct 30 00:58:16 2024 +

[OVN] Fix the revision number retrieval method

The "ovn_revision_numbers" table has a unique constraint that is a
combination of "resource_uuid" and "resource_type". There is a case
where the resource_uuid can be the same for two registers: a router
interface creates a single Neutron DB register ("ports") but requires
two OVN DB registers ("Logical_Switch_Port" and "Logical_Router_Port").
In this case the "resource_type" must be specified when retrieving the
revision number.

The exception "RevisionNumberNotDefined" is thrown if only the
"resource_uuid" is provided in that case.

Closes-Bug: #2085946
Change-Id: I12079de78773f7409503392d4791848aea90cb7b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2085946

Title:
  [OVN] Revision number registers must be filtered by resource ID and
  type

Status in neutron:
  Fix Released

Bug description:
  The OVN revision numbers have a multicolumn index: (resource_uuid,
  resource_type) [1]. This is needed in particular because of Neutron
  ports that belong to a router: a router interface is a single Neutron
  register ("ports"), but in OVN two registers are created
  ("Logical_Switch_Port" and "Logical_Router_Port").

  When retrieving an "ovn_revision_numbers" register from the Neutron
  database, both the resource_uuid and the resource_type must be
  provided [2].

  
[1]https://github.com/openstack/neutron/blob/febdfb5d8b1cf261c13b40e330d91a5bcb6c7642/neutron/db/models/ovn.py#L41-L46
  
[2]https://github.com/openstack/neutron/blob/febdfb5d8b1cf261c13b40e330d91a5bcb6c7642/neutron/db/ovn_revision_numbers_db.py#L159-L167
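
  A hedged SQLAlchemy sketch of the filtered lookup, assuming the model
  from [1] (field names per that model, otherwise illustrative):

    # Hedged sketch: a register is uniquely identified only by the pair
    # (resource_uuid, resource_type), never by the UUID alone.
    from neutron.db.models.ovn import OVNRevisionNumbers

    def get_revision_row(session, resource_uuid, resource_type):
        return (session.query(OVNRevisionNumbers)
                .filter_by(resource_uuid=resource_uuid,
                           resource_type=resource_type)
                .one_or_none())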

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2085946/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085824] Re: The documentation of [pci]alias numa_policy does not state the socket option

2024-11-04 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/933636
Committed: 
https://opendev.org/openstack/nova/commit/df4cb00b719b819ab95a4b80deb598d79f34b6e8
Submitter: "Zuul (22348)"
Branch:master

commit df4cb00b719b819ab95a4b80deb598d79f34b6e8
Author: Balazs Gibizer 
Date:   Tue Oct 29 11:04:38 2024 +0100

[doc]Add `socket` option to [pci]alias numa_policy

The numa_policy field in the pci alias supports the same values as the
flavor extra spec hw:pci_numa_affinity_policy, but the config doc was
not updated when the socket value was implemented.

Closes-Bug: #2085824
Change-Id: I997d10638020fc9d60e784e64e395e6e0a9c9430


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2085824

Title:
  The documentation of [pci]alias numa_policy does not state the socket
  option

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The doc
  https://docs.openstack.org/nova/latest/configuration/config.html#pci.alias
  states:

  numa_policy
  Required NUMA affinity of device. Valid values are: legacy, preferred and 
required.

  But the code uses a json schema[1][2] to validate the field value:

  "numa_policy": {
      "type": "string",
      "enum": list(obj_fields.PCINUMAAffinityPolicy.ALL),

  where the enum contains[3] socket as well:

  class PCINUMAAffinityPolicy(BaseNovaEnum):

      REQUIRED = "required"
      LEGACY = "legacy"
      PREFERRED = "preferred"
      SOCKET = "socket"

      ALL = (REQUIRED, LEGACY, PREFERRED, SOCKET)

  
  However the original spec does not mention that the change affects the
  [pci]alias as well[4]. But our PCI passthrough documentation[5] does
  state that the value of the flavor extra spec can be used for the
  [pci]alias config as well:

  You can also configure this for PCI passthrough devices by specifying
  the policy in the alias configuration via pci.alias. For more
  information, refer to the documentation.

  So I conclude that this is a config doc bug.

  
[1]https://github.com/openstack/nova/blob/a8733bae3c1e27ae30de30cfc6f4c9a72d7c5ca1/nova/pci/request.py#L120-L136
  
[2]https://github.com/openstack/nova/blob/a8733bae3c1e27ae30de30cfc6f4c9a72d7c5ca1/nova/pci/request.py#L105-L107
  
[3]https://github.com/openstack/nova/blob/a8733bae3c1e27ae30de30cfc6f4c9a72d7c5ca1/nova/objects/fields.py#L813-L820
  
[4]https://specs.openstack.org/openstack/nova-specs/specs/wallaby/implemented/pci-socket-affinity.html
  
[5]https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#pci-numa-affinity-policies
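
  For illustration, a nova.conf alias using the socket policy might look
  like this (the vendor/product IDs here are made up):

    [pci]
    alias = {"name": "a1", "vendor_id": "10de", "product_id": "1db4", "numa_policy": "socket"}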

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2085824/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578401] Re: tokens in memcache have no/improper expiration

2024-10-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/oslo.cache/+/932301
Committed: 
https://opendev.org/openstack/oslo.cache/commit/58b06c40d227af7ea5f70d61b485ba7392c343d1
Submitter: "Zuul (22348)"
Branch:master

commit 58b06c40d227af7ea5f70d61b485ba7392c343d1
Author: Takashi Kajinami 
Date:   Sun Oct 13 23:49:07 2024 +0900

Support expiration time in backend

The current implementation of expiration time relies on the generation
time stored in the actual cache data, thus expired cache records are
not removed from the backends automatically.

Add a new option to additionally set the expiration time supported by
the cache backend, so that operators can limit the amount of space
(especially memory) used for cache data.

Closes-Bug: #1578401
Change-Id: If61871f030560079482ecbbefeb940d8d3c18968


** Changed in: oslo.cache
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1578401

Title:
  tokens in memcache have no/improper expiration

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.cache:
  Fix Released

Bug description:
  tokens stored in memcache have no (improper) expiration data when set.

  Found on stable/mitaka and stable/liberty using the cachepool backend
  and the non-pooled backend.

  When you store a value in memcache you can optionally pass in a time
  at which the value is no longer good, a TTL. Keystone should be doing
  this for its local token cache, but it doesn't look like it is.
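
  For reference, this is the backend-side TTL the report is talking
  about; a minimal python-memcached sketch (key and value are
  placeholders):

    # Hedged sketch: passing a TTL lets memcached itself expire the
    # entry, instead of relying on the expiry embedded in the value.
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    mc.set('token-cache-key', 'serialized-token', time=7200)  # 2 hours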

  Here is a dump of some slabs that have tokens in them, last field is
  expiration time:

  stats cachedump 11 10
  ITEM 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a [724 b; 1460583765 s]
  ITEM 2ffe5d0821302a8501068a8411ce1749cea0645b [776 b; 1460583765 s]
  ITEM eb6e6f7e9118133a9a98944da874ac1b59c5675b [724 b; 1460583765 s]
  ITEM ee076b853dd5e5956366854abf6c49dbdf5ee4c2 [723 b; 1460583765 s]

  Let's see if these are really tokens:

  get 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a
  VALUE 8302fc81f2ffb552d5ba8d3e5f0e182ee285786a 1 724
  cdogpile.cache.api
  CachedValue
  p0
  ((dp1
  S'access'
  p2
  (dp3
  S'token'
  p4
  (dp5
  S'issued_at'
  p6
  S'2016-05-04T21:20:27.00Z'
  p7
  sS'expires'
  p8
  S'2016-05-04T23:20:27.146911Z'
  p9
  sS'id'
  p10
  V

  Yep, that's a Fernet token.

  Dumping older and older stuff, I can find cached tokens that are 24
  hours old in here, 22 hours past our valid token deadline.

  
  So let's compare that to some tokens that the keystone_authtoken
  middleware is caching for control services:

  stats cachedump 21 100
  ITEM tokens/418b2c5a0e67d022b0578fbc4c96abf4a4509e94aca4ca1595167f8f01448957 
[8463 b; 1462397763 s]
  ITEM tokens/2b5a26e3bdf4dec0caae141353297f0316b55daf683b4bc0fcd1ab7bf4ba8f9b 
[8312 b; 1462397539 s]
  ITEM tokens/778329eb53545cbd664fa67e6429f48692679f428077b48baa4991f13cc1817c 
[8312 b; 1462397538 s]
  ITEM tokens/b80b06cf688c37f8688c368a983c2fd533c662b7b3063c6a2665c59def958cdd 
[8312 b; 1462397537 s]
  ITEM tokens/61cd52b0654641a21c62831f6e5be9f0328898d05026d6bb91c787d79cb8b460 
[8463 b; 1462397536 s]

  All have valid and different expiration times so it's respecting my
  settings.

  So what's that timestamp in the earlier list? Well it's 4/13/2016,
  3:42:45 PM GMT-6:00 DST. That happens to be the last time memcache
  restarted and so I assume it's just filler.

  What's the impact?

  I've not figured out if there is one yet for sure. I have a token
  valid time of 2 hours and I had set the cache time to the same. I did
  try to dig out an old token but it would not validate, so I don't
  think there's a security issue. I suspect the main issue is that my
  keystone memcache always runs completely 100% full. We max memcache
  at 20% of RAM on a box, and that's a lot (20% of 256G). I suspect
  that with no valid expiration time memcache is lazily evicting old
  tokens when it runs out of RAM rather than replacing expired ones and
  not allocating more RAM.

  [PROD] mfisch@east-keystone-001:~$ cat /proc/3937/status
  Name:      memcached
  State:     S (sleeping)
  Tgid:      3937
  Ngid:      3937
  Pid:       3937
  PPid:      1
  TracerPid: 0
  Uid:       65534   65534   65534   65534
  Gid:       65534   65534   65534   65534
  FDSize:    1024
  Groups:    0
  VmPeak:    54500552 kB
  VmSize:    54500552 kB   <-- that's a lot of twinkies

  I feel this merits deeper investigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1578401/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085447] Re: Fast exit ``_ensure_external_network_default_value_callback`` if network not external

2024-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/933132
Committed: 
https://opendev.org/openstack/neutron/commit/a03afc1bc2d855e66633228fdfd5797c7414b39c
Submitter: "Zuul (22348)"
Branch:master

commit a03afc1bc2d855e66633228fdfd5797c7414b39c
Author: Rodolfo Alonso Hernandez 
Date:   Mon Oct 21 22:21:55 2024 +

Exit fast checking the external network default

If the network is not external, there is no need to check
whether there is an associated ``externalnetwork`` register.

Closes-Bug: #2085447
Change-Id: I54de12dd8df99c605bf3da6dea4c6c5a074e3b86


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2085447

Title:
  Fast exit ``_ensure_external_network_default_value_callback`` if
  network not external

Status in neutron:
  Fix Released

Bug description:
  The method ``_ensure_external_network_default_value_callback`` [1]
  can exit early if the network request is not for an external network,
  saving one DB query [2].

  
[1]https://github.com/openstack/neutron/blob/9347c427b5354b608c61b11d29aebff889cd0213/neutron/services/auto_allocate/db.py#L43-L79
  
[2]https://github.com/openstack/neutron/blob/9347c427b5354b608c61b11d29aebff889cd0213/neutron/services/auto_allocate/db.py#L72-L73
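
  The shape of the optimization is a guard at the top of the callback;
  a hedged sketch (payload handling is illustrative):

    # Hedged sketch: return before touching the DB when the request is
    # not for an external network.
    def _ensure_external_network_default_value_callback(
            resource, event, trigger, payload):
        network = payload.latest_state or {}
        if not network.get('router:external'):
            return  # saves the externalnetwork DB query
        # ... existing default-external-network handling ...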

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2085447/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2071596] Re: netifaces is archived, please remove from dependecies

2024-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/oslo.utils/+/931583
Committed: 
https://opendev.org/openstack/oslo.utils/commit/2ec1ce3661099eb73134d36eb8a4437938b304e9
Submitter: "Zuul (22348)"
Branch:master

commit 2ec1ce3661099eb73134d36eb8a4437938b304e9
Author: Takashi Kajinami 
Date:   Sun Oct 6 20:15:47 2024 +0900

Drop dependency on netifaces

The netifaces library was abandoned and archived. Replace it with our
own parsing logic based on proc files plus psutil.

Closes-Bug: #2071596
Change-Id: I334e10b869694eaa8c6afd842ce8d4dc606a4f5b


** Changed in: oslo.utils
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2071596

Title:
  netifaces is archived, please remove from dependecies

Status in ironic-python-agent:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in openstacksdk:
  In Progress
Status in oslo.utils:
  Fix Released

Bug description:
  The oslo.utils python package uses netifaces as a dependency.

  This python package has not been maintained since 2021:

  https://github.com/al45tair/netifaces/issues/78

  Please remove it as a dependency and find an alternative.
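
  For most interface-enumeration needs, psutil already covers what
  netifaces provided; a small sketch:

    # Hedged sketch: listing interface addresses with psutil instead of
    # the abandoned netifaces library.
    import psutil

    for name, addrs in psutil.net_if_addrs().items():
        for addr in addrs:
            print(name, addr.family.name, addr.address)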

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic-python-agent/+bug/2071596/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2067239] Re: Security group rule quota is not working well with default security group rule.

2024-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/921909
Committed: 
https://opendev.org/openstack/neutron/commit/1a440dd61b04b37d0e2a9434e802f5a1ee3c198b
Submitter: "Zuul (22348)"
Branch:master

commit 1a440dd61b04b37d0e2a9434e802f5a1ee3c198b
Author: kyu0 
Date:   Thu Jun 13 12:46:54 2024 +0900

Modify the default SG rule count logic when creating SG

During SG creation, in order not to exceed the SG rule quota, the
number of default SG rules that will be automatically created must be
counted. This count was assumed to always be 2 (4 in the case of the
default SG), but that is wrong, since it depends on the configured
default SG rules.

Closes-Bug: #2067239
Change-Id: Ic86826b71c1160a6891f09ca1e40135049a8948a


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2067239

Title:
  Security group rule quota is not working well with default security
  group rule.

Status in neutron:
  Fix Released

Bug description:
  OpenStack Version: 2023.2

  How to reproduce:
  1. Remove all of the default-security-group-rules.
  2. Create a new project and set its quota of security-group-rules
     to 5.
  3. Create a new security-group, and create 4 security-group-rules in
     this security-group.
  4. Create another new security-group.

  Expected:
  At step 4, the security-group will be created without any
  security-group-rules, since I removed all of the
  default-security-group-rules at step 1. There will be no problem with
  the security-group-rules quota. (I have 4 rules, and the quota is 5.)

  Actual:
  Failed to create the security-group at step 4 with the message below.
  - Error: Unable to create security group: %s Details
  - Quota exceeded for resources: ['security_group_rule'].

  It seems the security-group-rules quota validation logic in the
  security group creation code has to be modified.
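
  A hedged sketch of the corrected counting, assuming the rule
  templates carry the used_in_default_sg/used_in_non_default_sg flags
  from the default-security-group-rules API (helper shape is
  illustrative):

    # Hedged sketch: count the rules that will actually be auto-created
    # for the new SG, instead of hardcoding 2 (or 4 for the default SG).
    def _auto_created_rule_count(templates, is_default_sg):
        flag = ('used_in_default_sg' if is_default_sg
                else 'used_in_non_default_sg')
        return sum(1 for tpl in templates if tpl[flag])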

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2067239/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2081732] Re: oslo_utils.secretutils.constant_time_compare is redundant

2024-10-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/octavia/+/931150
Committed: 
https://opendev.org/openstack/octavia/commit/119b14ed93b856bada54eaea9155f44b8b0629ff
Submitter: "Zuul (22348)"
Branch:master

commit 119b14ed93b856bada54eaea9155f44b8b0629ff
Author: Takashi Kajinami 
Date:   Wed Oct 2 18:32:34 2024 +0900

Replace deprecated constant_time_compare

The method is being deprecated now[1].

[1] https://review.opendev.org/c/openstack/oslo.utils/+/930198

Closes-Bug: #2081732
Change-Id: Icf9f8086e7f413247532d3f234a036b2474b7ef3


** Changed in: octavia
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2081732

Title:
  oslo_utils.secretutils.constant_time_compare is redundant

Status in Ceilometer:
  New
Status in keystonemiddleware:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  Fix Released
Status in oslo.utils:
  Fix Released
Status in osprofiler:
  In Progress

Bug description:
  The constant_time_compare function is equivalent to
  hmac.compare_digest, which has been available in the standard library
  since Python 3.3.

  [1] https://docs.python.org/3/library/hmac.html#hmac.compare_digest

  We can get rid of the redundant wrapper and use the built-in
  implementation instead.
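
  The replacement is a drop-in; a tiny sketch:

    # Hedged sketch: the stdlib equivalent of the deprecated wrapper.
    import hmac

    # instead of: oslo_utils.secretutils.constant_time_compare(a, b)
    equal = hmac.compare_digest(b'expected-token', b'provided-token')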

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/2081732/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2083832] Re: [OVN] Host_id value on lsp for router gateway is not updated on failover

2024-10-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/931632
Committed: 
https://opendev.org/openstack/neutron/commit/4b032bdbb2a6843b776c367486d1620ea6ae71a5
Submitter: "Zuul (22348)"
Branch:master

commit 4b032bdbb2a6843b776c367486d1620ea6ae71a5
Author: Aleksandr 
Date:   Mon Oct 7 13:06:59 2024 +0300

[OVN] Update lsp host id when cr port is updated with chassis

When a chassisredirect port is updated with a chassis, the
PortBindingChassisEvent event would only update the binding host id in
the neutron database, while it is also useful to keep the host
information in the OVN database up to date.

Similar to change [1], but for routers' gateway ports.

[1] https://review.opendev.org/c/openstack/neutron/+/896883

Other plugins that connect to the OVN database can then also rely on
the information stored in the OVN DB.

Closes-Bug: #2083832

Change-Id: Ibe8bda2f81bda7a89e3a994db55cd394a18decb8


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2083832

Title:
  [OVN] Host_id value on lsp for router gateway is not updated on
  failover

Status in neutron:
  Fix Released

Bug description:
  Neutron fills neutron:host_id within external_ids for router gateway
  OVN logical switch ports with the hostname of the chassis. If a
  failover happens, binding_host_id for the router gateway port is
  updated with the new host in the neutron DB, while
  external_ids:neutron:host_id in the OVN database keeps the old value.

  Steps to reproduce:
  1. Create a router with an external gateway set
  2. Check binding_host_id for the router gateway port in neutron and
     external_ids:neutron:host_id on the LSP in the OVN database
  3. Initiate a failover (for example, turn off the host where the port
     was bound)
  4. Check binding_host_id for the router gateway port and
     external_ids:neutron:host_id again
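
  The fix, conceptually, writes the new chassis host back to the LSP
  when the chassisredirect port moves; a hedged ovsdbapp-style sketch
  (names illustrative):

    # Hedged sketch: mirror the new binding host into the OVN NB DB so
    # external consumers of external_ids stay consistent after failover.
    def _sync_lsp_host_id(nb_idl, gw_port_id, new_host):
        nb_idl.db_set(
            'Logical_Switch_Port', gw_port_id,
            ('external_ids', {'neutron:host_id': new_host}),
        ).execute(check_error=True)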

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2083832/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2065198] Re: Allow ml2 MechanismDrivers to start own rpc listeners

2024-10-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/919590
Committed: 
https://opendev.org/openstack/neutron/commit/6bb9535c452e14deefc50d563100418656a24f63
Submitter: "Zuul (22348)"
Branch:master

commit 6bb9535c452e14deefc50d563100418656a24f63
Author: Sebastian Lohff 
Date:   Tue May 14 16:55:00 2024 +0200

Allow ml2 drivers to start their own RPC listeners

To allow MechanismDrivers to start their own RPC listeners (e.g. for
communication with custom agents), the MechanismManager will now call
start_rpc_listeners() on each driver. This is done as part of
Ml2Plugin.start_rpc_listeners(). It is added as an alternative to
creating the backends in initialize(): when a driver is split into an
API part and an RPC part, these backends should only be started in the
RPC part of neutron.

This patch depends on MechanismDrivers.start_rpc_listeners() in
neutron-lib [0].

[0] https://review.opendev.org/c/openstack/neutron-lib/+/919589

Change-Id: I31e253180f474abf6d266d23c50f9dc89f17f687
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/919589
Closes-Bug: #2065198


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2065198

Title:
  Allow ml2 MechanismDrivers to start own rpc listeners

Status in neutron:
  Fix Released

Bug description:
  Some MechanismDrivers need their own RPC backend that can be used by
  an agent to talk to the driver. When running in uwsgi/rpc mode we
  need to make sure these RPC listeners are started as part of the
  neutron-rpc-server. Currently this is only done for service plugins,
  but MechanismDrivers are part of the Ml2Plugin (which is itself a
  service plugin).

  To allow drivers to start their own RPC listener, I propose that the
  Ml2Plugin takes care of this. Ml2Plugin can call
  MechanismManager.start_driver_rpc_listeners(), which then calls
  start_rpc_listeners() on each MechanismDriver. The resulting RPC
  servers could then also be returned as part of
  Ml2Plugin.start_rpc_listeners().

  I have a PoC for this running in my dev environment, and if we agree
  on a rough concept I would be willing to provide a patch for this.
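
  A hedged sketch of what such a driver hook could look like (topic and
  endpoint names are made up):

    # Hedged sketch: a mechanism driver starting its own RPC listener.
    import oslo_messaging
    from oslo_config import cfg
    from neutron_lib.plugins.ml2 import api

    class CustomAgentEndpoint(object):
        def ping(self, context):  # example RPC method for a custom agent
            return 'pong'

    class CustomMechDriver(api.MechanismDriver):
        def initialize(self):
            pass

        def start_rpc_listeners(self):
            transport = oslo_messaging.get_rpc_transport(cfg.CONF)
            target = oslo_messaging.Target(
                topic='custom-mech-driver', server=cfg.CONF.host)
            server = oslo_messaging.get_rpc_server(
                transport, target, [CustomAgentEndpoint()])
            server.start()
            return [server]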

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2065198/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2085462] Re: [OVN] "test_trunk_subport_lifecycle" unstable

2024-10-24 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/933212
Committed: 
https://opendev.org/openstack/neutron/commit/63d14a3ff225faa75a825991cf0b33b2fd745b9b
Submitter: "Zuul (22348)"
Branch:master

commit 63d14a3ff225faa75a825991cf0b33b2fd745b9b
Author: Rodolfo Alonso Hernandez 
Date:   Tue Oct 22 14:15:13 2024 +

Skip LSP host info update for trunk subports

In ML2/OVN, subport bindings are not updated with the host
information. This patch skips the LSP update in that case.

Currently the method ``update_lsp_host_info`` can get stuck executing
``_wait_for_port_bindings_host``. During that time the subport can be
deleted or removed from the trunk. That clashes with the newer
operation that tries to remove the LSP host info, and is the cause of
the related bug.

Closes-Bug: #2085462
Change-Id: Ic68f9b5aa3b06bc4e1cbfbe577efc33b4b617b45


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2085462

Title:
  [OVN] "test_trunk_subport_lifecycle" unstable

Status in neutron:
  Fix Released

Bug description:
  Related to https://bugs.launchpad.net/neutron/+bug/1874447

  Test
  
``neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle``
  fails randomly.

  Logs:
  
https://8a3a5dd881cf95e2c6a0-6d94365b9652dc6406843da38342bcca.ssl.cf5.rackcdn.com/932677/2/check/neutron-
  tempest-plugin-ovn/42e33e7/testr_results.html

  Snippet: https://paste.opendev.org/show/bPDxKOky8cMWJBqQYaOF/
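
  The committed fix (see the commit message above) boils down to an
  early return for trunk subports; a hedged sketch (constant name per
  neutron conventions, otherwise illustrative):

    # Hedged sketch: trunk subports carry no host information in their
    # binding, so there is nothing to wait for or to update on the LSP.
    TRUNK_SUBPORT_OWNER = 'trunk:subport'

    def update_lsp_host_info(self, context, db_port, up=True):
        if db_port.device_owner == TRUNK_SUBPORT_OWNER:
            return  # skip: avoids racing against subport removal
        # ... existing wait-for-binding and LSP update logic ...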

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2085462/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024258] Fix included in openstack/nova 27.5.1

2024-10-24 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.1 release.

** Changed in: nova/antelope
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2024258

Title:
  Performance degradation archiving DB with large numbers of FK related
  records

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive ussuri series:
  Fix Committed
Status in Ubuntu Cloud Archive yoga series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) wallaby series:
  Won't Fix
Status in OpenStack Compute (nova) xena series:
  Won't Fix
Status in OpenStack Compute (nova) yoga series:
  Won't Fix
Status in OpenStack Compute (nova) zed series:
  Won't Fix
Status in nova package in Ubuntu:
  Won't Fix
Status in nova source package in Focal:
  Fix Released
Status in nova source package in Jammy:
  Fix Released

Bug description:
  [Impact]
  Originally, Nova archives deleted rows in batches consisting of a maximum 
number of parent rows (max_rows) plus their child rows, all within a single 
database transaction.
  This approach limits the maximum value of max_rows that can be specified by 
the caller due to the potential size of the database transaction it could 
generate.
  Additionally, this behavior can cause the cleanup process to frequently 
encounter the following error:
  oslo_db.exception.DBError: (pymysql.err.InternalError) (3100, "Error on 
observer while running replication hook 'before_commit'.")

  The error arises when the transaction exceeds the group replication 
transaction size limit, a safeguard implemented to prevent potential MySQL 
crashes [1].
  The default value for this limit is approximately 143MB.

  [Fix]
  An upstream commit has changed the logic to archive one parent row and its 
related child rows in a single database transaction.
  This change allows operators to choose more predictable values for max_rows 
and achieve more progress with each invocation of archive_deleted_rows.
  Additionally, this commit reduces the chances of encountering the issue where 
the transaction size exceeds the group replication transaction size limit.

  commit 697fa3c000696da559e52b664c04cbd8d261c037
  Author: melanie witt 
  CommitDate: Tue Jun 20 20:04:46 2023 +

  database: Archive parent and child rows "trees" one at a time

  [Test Plan]
  1. Create an instance and delete it in OpenStack.
  2. Log in to the Nova database and confirm that there is an entry with a 
deleted_at value that is not NULL.
  select display_name, deleted_at from instances where deleted_at <> 0;
  3. Execute the following command, ensuring that the timestamp specified in 
--before is later than the deleted_at value:
  nova-manage db archive_deleted_rows --before "XXX-XX-XX XX:XX:XX" --verbose 
--until-complete
  4. Log in to the Nova database again and confirm that the entry has been 
archived and removed.
  select display_name, deleted_at from instances where deleted_at <> 0;

  [Where problems could occur]
  The commit changes the logic for archiving deleted entries to reduce the size 
of transactions generated during the operation.
  If the patch contains errors, it will only impact the archiving of deleted 
entries and will not affect other functionalities.

  [1] https://bugs.mysql.com/bug.php?id=84785

  [Original Bug Description]

  Observed downstream in a large scale cluster with constant create/delete
  server activity and hundreds of thousands of deleted instances rows.

  Currently, we archive deleted rows in batches of max_rows parents +
  their child rows in a single database transaction. Doing it that way
  limits how high a value of max_rows can be specified by the caller
  because of the size of the database transaction it could generate.

  For example, in a large scale deployment with hundreds of thousands of
  deleted rows and constant server creation and deletion activity, a
  value of max_rows=1000 might exceed the database's configured maximum
  packet size or timeout due to a database deadlock, forcing the operator
  to use a much lower max_rows value like 100 or 50.

  And when the operator has e.g. 500,000 deleted instances rows (and
  millions of deleted rows total) they are trying to archive, being
  forced to use a max_rows value several orders of magnitude lower than
  the number of rows they need to archive is a poor user experience and
  makes it unclear if archive progress is actually being made.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2024258/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 2065927] Fix included in openstack/nova 27.5.1

2024-10-24 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.1 release.

** Changed in: nova/antelope
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2065927

Title:
  cpu power management can fail  with OSError: [Errno 16] Device or
  resource busy

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) 2024.1 series:
  Fix Committed
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) bobcat series:
  Triaged

Bug description:
  as reported downstream in https://issues.redhat.com/browse/OSPRH-7103

  if you create a VM, reboot the host, start the VM, and finally delete
  it, the delete may fail:

  May 16 15:54:26 edpm-compute-0 nova_compute[3396]: Traceback (most recent 
call last):
  May 16 15:54:26 edpm-compute-0 nova_compute[3396]:   File 
"/usr/lib/python3.9/site-packages/nova/filesystem.py", line 57, in write_sys
  May 16 15:54:26 edpm-compute-0 nova_compute[3396]: fd.write(data)
  May 16 15:54:26 edpm-compute-0 nova_compute[3396]: OSError: [Errno 16] Device 
or resource busy

  This prevents the VM from being deleted on the initial request, but
  it can then be deleted if you try again.

  This race condition with the kernel is unlikely to happen and
  appeared to be timing related, i.e. there is a short period of time
  during which onlining or offlining a CPU may not be possible.

  To mitigate this, nova should retry the operation with a backoff and
  then eventually squash the error, allowing the VM to be deleted
  without failing even if the core cannot be offlined.

  Power management of the core should never block or cause the VM
  delete to fail.
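
  A hedged sketch of the suggested retry-and-squash behaviour around
  the sysfs write (paths and limits illustrative):

    # Hedged sketch: retry an EBUSY sysfs write with backoff; on final
    # failure, give up quietly so the instance delete can proceed.
    import errno
    import time

    def write_sys_with_retry(path, data, attempts=5, delay=0.5):
        for attempt in range(attempts):
            try:
                with open(path, 'w') as fd:
                    fd.write(data)
                return True
            except OSError as exc:
                if exc.errno != errno.EBUSY:
                    raise
                time.sleep(delay * (2 ** attempt))
        return False  # squash: do not block the VM delete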

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2065927/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2084794] Re: Volume tab/volume quotas completely missing in Zuul deployed UI.

2024-10-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/932603
Committed: 
https://opendev.org/openstack/horizon/commit/179c2e3771942f435d9857da8ae38d41d5cc5ea3
Submitter: "Zuul (22348)"
Branch:master

commit 179c2e3771942f435d9857da8ae38d41d5cc5ea3
Author: Takashi Kajinami 
Date:   Thu Oct 17 20:57:45 2024 +0900

cinder: Use 'block-storage' service type to detect cinder

The official service type name for cinder is not volume (or volumevN)
but block-storage. Use the block-storage type to detect the
availability of cinder, in addition to the legacy volume/volumev3
service types.

'block-store' is also a valid alias and should be added as well.

Closes-Bug: #2084794
Change-Id: Ifbeaba033c6dae0fa704a2be568b2f4e2cb7426a


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2084794

Title:
  Volume tab/volume quotas completely missing in Zuul deployed UI.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-cinderclient:
  New

Bug description:
  Hello,
  We (the Horizon UI team) are facing an issue in the Zuul deployment:
  the deployed UI is completely missing the Volume tab, volume quotas,
  etc. So it looks like Cinder does not work at all in the Zuul
  deployment.

  Our last patch was merged October 1; tests then passed without any
  issue for a few days, but since October 10 our tests have been
  failing because the Volume tab is missing in the UI.
  Horizon opendev:
  https://review.opendev.org/q/project:openstack/horizon+branch:master

  Failing tests because of the missing volume tab:
  https://zuul.opendev.org/t/openstack/build/bb060700efa84a9ab2e6bf6c4c70162e

  It blocks us from merging any new patch for Horizon.

  Screenshot of missing Volume tab in attachment.
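
  The fix described in the commit above widens the service-type check;
  a hedged sketch of the detection logic (catalog shape simplified):

    # Hedged sketch: treat any known cinder service type as "cinder is
    # available", not just the legacy 'volume'/'volumev3' names.
    CINDER_SERVICE_TYPES = ('block-storage', 'block-store',
                            'volumev3', 'volume')

    def cinder_enabled(service_catalog):
        available = {svc.get('type') for svc in service_catalog}
        return any(t in available for t in CINDER_SERVICE_TYPES)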

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2084794/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062425] Re: Nova/Placement creating x86 trait for ARM Compute node

2024-10-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926521
Committed: 
https://opendev.org/openstack/nova/commit/ab18f3763c096d1f4c0da6ad825d670dd5a06b94
Submitter: "Zuul (22348)"
Branch:master

commit ab18f3763c096d1f4c0da6ad825d670dd5a06b94
Author: Amit Uniyal 
Date:   Mon Aug 19 07:42:43 2024 +

Libvirt: updates resource provider trait list

This change updates the resource provider trait list for the hw
architecture and hw emulation architecture.

Closes-Bug: #2062425
Change-Id: Ia571c5e5e881162d331b638ae2d4a332807d17f5


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2062425

Title:
  Nova/Placement creating x86 trait for ARM Compute node

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  I have a 2023.2 based deployment with both x86 and aarch64 based
  compute nodes. For the arm node, placement shows it as having an x86
  HW trait, causing scheduling of arm architecture images onto it to
  fail. It also causes placement to try to schedule x86 images onto it,
  which will fail.

  Steps to reproduce
  ==
  1. I deployed a new 2023.2 deployment with Kolla-ansible. 
  2. Add hw_architecture=aarch64 to a valid glance image
  3. Ensure that image_metadata_prefilter = True in nova.conf on all nova 
services 
  4. Try and deploy an instance with that image, it will fail with no valid 
host found
  5. Observe the following in the placement-api logs:

  placement-api.log:41054:2024-04-18 20:39:04.271 21 DEBUG
  placement.requestlog [req-0114c318-5dfd-4588-807b-e591a82ce098 req-
  bd588ea0-5700-4b8e-a43f-0eb15a7275e8 - - - - - -] Starting request:
  10.27.10.33 "GET
  
/allocation_candidates?limit=1000&member_of=in%3Aceceb7fb-e0ed-4304-a69f-b327da7ca63f&resources=DISK_GB%3A60%2CMEMORY_MB%3A8192%2CVCPU%3A4&root_required=HW_ARCH_AARCH64%2C%21COMPUTE_STATUS_DISABLED"
  __call__ /var/lib/kolla/venv/lib/python3.10/site-
  packages/placement/requestlog.py:55

  placement-api.log:41055:2024-04-18 20:39:04.317 21 DEBUG
  placement.objects.research_context
  [req-0114c318-5dfd-4588-807b-e591a82ce098 req-
  bd588ea0-5700-4b8e-a43f-0eb15a7275e8 8ce24731fb34492c9354f05050216395
  c48da85ca48f4296b59bacb7b3c2fdfd - - default default] found no
  providers satisfying required traits: {'HW_ARCH_AARCH64'} and
  forbidden traits: {'COMPUTE_STATUS_DISABLED'} _process_anchor_traits
  /var/lib/kolla/venv/lib/python3.10/site-
  packages/placement/objects/research_context.py:243
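
  As the log shows, the prefilter turns the image property into a
  required root trait (root_required=HW_ARCH_AARCH64). Roughly, the
  mapping looks like this (a sketch, not nova's exact code):

    # Hedged sketch: how hw_architecture becomes a required placement
    # trait when [scheduler]image_metadata_prefilter is enabled.
    def arch_trait(image_props):
        arch = image_props.get('hw_architecture')
        return 'HW_ARCH_%s' % arch.upper() if arch else None

    # arch_trait({'hw_architecture': 'aarch64'}) -> 'HW_ARCH_AARCH64'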


  Resource providers:
  openstack resource provider list
  
  +--------------------------------------+-----------------------+------------+--------------------------------------+----------------------+
  | uuid                                 | name                  | generation | root_provider_uuid                   | parent_provider_uuid |
  +--------------------------------------+-----------------------+------------+--------------------------------------+----------------------+
  | a6aa43fb-c819-4dae-b172-b5ed76901591 | infra-prod-compute-04 |          7 | a6aa43fb-c819-4dae-b172-b5ed76901591 | None                 |
  | 2a019b35-25ac-4085-a13d-07802bda6828 | infra-prod-compute-03 |         10 | 2a019b35-25ac-4085-a13d-07802bda6828 | None                 |
  | a008c58b-d16c-4b80-8f58-ca96d1fce2a3 | infra-prod-compute-05 |          7 | a008c58b-d16c-4b80-8f58-ca96d1fce2a3 | None                 |
  | e97340aa-5848-4939-a409-701e5ad52396 | infra-prod-compute-02 |         31 | e97340aa-5848-4939-a409-701e5ad52396 | None                 |
  | 9345e4d0-fc49-4e51-9f38-faeabec1b053 | infra-prod-compute-01 |         18 | 9345e4d0-fc49-4e51-9f38-faeabec1b053 | None                 |
  | 41611dae-3006-4449-9c8b-3369d9b0feb8 | infra-prod-compile-01 |          5 | 41611dae-3006-4449-9c8b-3369d9b0feb8 | None                 |
  | 7fecff4c-9e2d-4d89-a345-91ab4d8c1857 | infra-prod-compile-02 |          5 | 7fecff4c-9e2d-4d89-a345-91ab4d8c1857 | None                 |
  | fbd4030a-1cc9-455a-bca2-2b606fcb3c4d | infra-prod-compile-03 |          5 | fbd4030a-1cc9-455a-bca2-2b606fcb3c4d | None                 |
  | 4d3b29fd-0048-4768-93fa-b7a98f81c125 | infra-prod-compute-06 |          9 | 4d3b29fd-0048-4768-93fa-b7a98f81c125 | None                 |
  | f888bda6-8fb7-4f84-8b87-c9af3b36a6ae | infra-prod-compute-07 |          7 | f888bda6-8fb7-4f84-8b87-c9af3b36a6ae | None                 |
  | 4f53c8d0-bf1d-44d3-89d5-b8f5436ee66a | infra-prod-compile-04 |          5 | 4f53c8d0-bf1d-44d3-89d5-b8f5436ee66a | None                 |
  | 7b6a42c8-b9b4-44a6-9111-2f732c7074e1 | infra-prod-compile-05 |          5 | 7b6a42c8-b9b4-44a6-9111-2f732c7074e1 | None                 |
  | 8312a824-8d88-4646-9eb5-c4937329dab9 | infra-prod-compu

[Yahoo-eng-team] [Bug 1668791] Re: document glance IPv6 support

2024-10-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/932427
Committed: 
https://opendev.org/openstack/glance/commit/f640f5e2dd8745de0e962d7fa9ed684569321d21
Submitter: "Zuul (22348)"
Branch:master

commit f640f5e2dd8745de0e962d7fa9ed684569321d21
Author: Cyril Roelandt 
Date:   Tue Oct 15 17:22:43 2024 +0200

Remove no-longer valid comment about IPv6 workaround

This has not been true since we merged
3988a9956e40a5b2f739eb8851ccb1d0b431a2e8 .

Closes-Bug: #1668791
Change-Id: I217d998a63d3d31ead0432e9661a422ee1913a88


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1668791

Title:
  document glance IPv6 support

Status in Glance:
  Fix Released

Bug description:
  Following up on Change-Id: Ic2bdc9a780ee98df87a7cdd5413a9db42e5e7131
  (https://review.openstack.org/#/c/421162/), we need to document that
  Glance supports IPv6, and in particular whether there are any steps
  an operator needs to take to run Glance on IPv6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1668791/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2081009] Re: oslo_config.cfg.NotInitializedError when switching default policy_file in oslo.policy

2024-10-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/929720
Committed: 
https://opendev.org/openstack/glance/commit/562a2eb48b4fc10cf97bd65d9a2f2e6d2a739eba
Submitter: "Zuul (22348)"
Branch:master

commit 562a2eb48b4fc10cf97bd65d9a2f2e6d2a739eba
Author: Takashi Kajinami 
Date:   Wed Sep 18 13:49:47 2024 +0900

Do not call Enforcer.__call__ at module level

... because the method may need to use some functionality that is only
available after the CONF instance is initialized, and a module level
import makes it difficult to guarantee that order.

Closes-Bug: #2081009
Change-Id: Id40ceab2a84bb7047dfd130bf8c1ac4c8073b79b


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2081009

Title:
  oslo_config.cfg.NotInitializedError when switching default policy_file
  in oslo.policy

Status in Glance:
  Fix Released

Bug description:
  While we attempted to update the default policy file in
  https://review.opendev.org/c/openstack/oslo.policy/+/929714 , we
  observed the glance-api can't start and complains the error below.

  ```
  Traceback (most recent call last):
File "/opt/stack/data/venv/bin/glance-wsgi-api", line 6, in 
  from glance.common.wsgi_app import init_app
File "/opt/stack/glance/glance/common/wsgi_app.py", line 24, in 
  from glance.common import config
File "/opt/stack/glance/glance/common/config.py", line 643, in 
  policy.Enforcer(CONF)
File "/opt/stack/oslo.policy/oslo_policy/policy.py", line 543, in __init__
  self.policy_file = policy_file or pick_default_policy_file(
File "/opt/stack/oslo.policy/oslo_policy/policy.py", line 378, in 
pick_default_policy_file
  if conf.find_file(conf.oslo_policy.policy_file):
File 
"/opt/stack/data/venv/lib/python3.10/site-packages/oslo_config/cfg.py", line 
2782, in find_file
  Traceback (most recent call last):
File "/opt/stack/data/venv/bin/glance-wsgi-api", line 6, in 
  from glance.common.wsgi_app import init_app
File "/opt/stack/glance/glance/common/wsgi_app.py", line 24, in 
  from glance.common import config
File "/opt/stack/glance/glance/common/config.py", line 643, in 
  policy.Enforcer(CONF)
File "/opt/stack/oslo.policy/oslo_policy/policy.py", line 543, in __init__
  self.policy_file = policy_file or pick_default_policy_file(
File "/opt/stack/oslo.policy/oslo_policy/policy.py", line 378, in 
pick_default_policy_file
  raise NotInitializedError()
  oslo_config.cfg.NotInitializedError: call expression on parser has not been 
invoked
  ```

  The problem here is that Enforcer() is called directly at the module
  level in glance.common.config, and we cannot guarantee that the
  module is imported after the CONF instance is initialized.
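
  The usual remedy, and roughly what the fix above does, is to defer
  construction until first use; a hedged sketch:

    # Hedged sketch: build the Enforcer lazily, after CONF is parsed,
    # instead of as a side effect of importing the module.
    from oslo_config import cfg
    from oslo_policy import policy

    _ENFORCER = None

    def get_enforcer():
        global _ENFORCER
        if _ENFORCER is None:
            _ENFORCER = policy.Enforcer(cfg.CONF)
        return _ENFORCER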

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2081009/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2083682] Re: Slowness of security groups list API

2024-10-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/932041
Committed: 
https://opendev.org/openstack/neutron/commit/adbc3e23b7d2251cc7de088e2a757674a41c2f6a
Submitter: "Zuul (22348)"
Branch:master

commit adbc3e23b7d2251cc7de088e2a757674a41c2f6a
Author: Rodolfo Alonso Hernandez 
Date:   Thu Oct 10 08:49:44 2024 +

Optimize the SG rule retrieval

There are some operations where the SG DB object can be used instead of
the SG OVO. That saves conversion time, including the conversion of the
SG rule OVOs, which are child resources of the SG OVO.

This optimization applies to the following methods:
* SecurityGroupDbMixin.get_security_groups
* SecurityGroupDbMixin.update_security_group (partially)

The Nova query used to retrieve the SG list in the "server list"
command has been benchmarked. The testing environment had a single SG
with 250 SG rules. Call:
  "GET /networking/v2.0/security-groups?id=81f64aa4-2cea-46db-8fea-cd944f106aab
   &fields=id&fields=name HTTP/1.1"

* Without this patch: around 1.25 seconds
* With this patch: around 0.025 seconds (50x improvement)

Closes-bug: #2083682
Change-Id: Ibd032ea77c5bfbc1fa80b3b3ee9ba7d5c36bb1bc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2083682

Title:
  Slowness of security groups list API

Status in neutron:
  Fix Released

Bug description:
  Input:
  - OpenStack cluster on the 2024.1 release
  - Total number of VMs = 9k
  - Total number of security groups = 6.4k
  - Total number of security group rules = 122k

  Problem description:

  The Nova server list API exceeded the 60s timeout while processing a
  request to retrieve detailed information for 1k servers (the default
  limit). The OpenStack SDK equivalent call is
  `conn.compute.servers(all_projects=True, paginated=False)`.
  Debugging showed that it takes <5s to retrieve all info from Nova's
  DB, and all the remaining time is spent calling Neutron to retrieve
  information about security groups.

  Nova's logic to retrieve security group info -
  https://github.com/openstack/nova/blob/stable/2024.1/nova/network/security_group_api.py#L532 :
  - retrieving all ports for the servers. Nova does a separate call to
    neutron for every 150 items so as not to exceed the URL size limit -
    https://github.com/openstack/nova/blob/stable/2024.1/nova/network/security_group_api.py#L471-L497.
    Each such call takes less than 0.5s to complete.
  - retrieving the discovered security groups. Same here, a separate
    call for every 150 items -
    https://github.com/openstack/nova/blob/stable/2024.1/nova/network/security_group_api.py#L500-L529 .
    Nova passes a fields=["id", "name"] filter to the neutron API -
    https://github.com/openstack/nova/blob/stable/2024.1/nova/network/security_group_api.py#L547
    - to avoid neutron fetching security group rules, which can be a
    heavy operation. Each such call takes ~9s.

  https://review.opendev.org/c/openstack/neutron/+/929967 is applied to
  the neutron servers. It improved the situation, but has not resolved
  it.

  Additional info: Nova uses the python-neutronclient library, which in
  my experiments behaves quicker than openstacksdk.
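
  For context, the batched neutron query nova issues looks roughly like
  this (a sketch using python-neutronclient, batch size per the
  report):

    # Hedged sketch: fetch SGs 150 ids at a time, asking only for the
    # fields nova needs, to keep URLs short and responses light.
    def fetch_security_groups(neutron, sg_ids, batch=150):
        groups = []
        for i in range(0, len(sg_ids), batch):
            resp = neutron.list_security_groups(
                id=sg_ids[i:i + batch], fields=['id', 'name'])
            groups.extend(resp['security_groups'])
        return groups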

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2083682/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074056] Re: Invalid documented security group rule protocol "any"

2024-10-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926498
Committed: 
https://opendev.org/openstack/neutron/commit/81375f0b2be1727e2223393562b309f23ae4fa49
Submitter: "Zuul (22348)"
Branch:master

commit 81375f0b2be1727e2223393562b309f23ae4fa49
Author: Brian Haley 
Date:   Sat Aug 17 19:37:36 2024 -0400

Add special treatment for 'any' in SG rule API

The openstack client changes the protocol to None in
the case that 'any' is given as an argument when creating
a security group rule. But using 'any' in a POST call
will return an error saying it is invalid.

Add special treatment for 'any' as a protocol value in
the API by treating it the same as None, but do not
use the 'any' string when creating the DB entry; it is
only treated as an alias.

Closes-bug: #2074056
Change-Id: Ic88ae2c249eb2cd1af1ebbf6707c707f51a52638
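
A hedged sketch of the alias handling described above (illustrative only,
not Neutron's actual validator): normalize the API-level value so 'any'
behaves exactly like None and is never stored in the database.

```
def normalize_protocol(value):
    """Treat the string 'any' purely as an alias for all protocols."""
    if isinstance(value, str) and value.lower() == 'any':
        return None
    return value


assert normalize_protocol('any') is None
assert normalize_protocol('ANY') is None
assert normalize_protocol('tcp') == 'tcp'
assert normalize_protocol(0) == 0
```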


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2074056

Title:
  Invalid documented security group rule protocol "any"

Status in neutron:
  Fix Released

Bug description:
  The Networking API specification v2.0 for security group rule
  creation[1] states that:

  > The string any (or integer 0) means all IP protocols.

  However, attempting to create a security group rule with protocol
  "any" results in a 400 Bad Request:

  ```
  curl -g -i -X POST \
  'https://neutron.example:13696/v2.0/security-group-rules' \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: valid-token" \
  -d '{"security_group_rule": {"ethertype": "IPv4", 
"security_group_id": "f2746bac-1c1f-42b6-8791-fc1b1448fa0e", 
"remote_ip_prefix": "0.0.0.0/0", "direction": "ingress", "protocol": "any"}}'

  HTTP/1.1 400 Bad Request
  content-type: application/json
  content-length: 450
  x-openstack-request-id: req-a2d167b4-5d7f-4bf3-9c60-7823b2122efc
  date: Thu, 25 Jul 2024 08:11:49 GMT

  {"NeutronError": {"type": "SecurityGroupRuleInvalidProtocol", "message": 
"Security group rule protocol any not supported. Only protocol values [None, 
'ah', 'dccp', 'egp', 'esp', 'gre', 'hopopt', 'icmp', 'igmp', 'ip', 'ipip', 
'ipv6-encap', 'ipv6-frag', 'ipv6-icmp', 'icmpv6', 'ipv6-nonxt', 'ipv6-opts', 
'ipv6-route', 'ospf', 'pgm', 'rsvp', 'sctp', 'tcp', 'udp', 'udplite', 'vrrp'] 
and integer representations [0 to 255] are supported.", "detail": ""}}
  ```

  Tested on RHOSP 17.1, which is based on Wallaby according to its
  docs[2].

  There appear to be multiple ways to create security group rules that apply 
regardless of the protocol:
  - protocol value set to number zero or string zero: `"protocol": 0` 
`"protocol": "0"`
  - protocol value set to null or unset: `"protocol": null`
  - protocol value set to the empty string: `"protocol": ""`

  I have grouped them by how they conflict. In other words: you can have
  a security group containing three of these rules (zero, null, empty)
  that won't conflict with each other at creation.

  My questions:
  - These three "protocol" values are stored differently. Do they behave 
exactly the same?
  - Is there a preferred way to create a rule that applies to any protocol?
  - Is the documentation effectively wrong about the value "any", or am I 
missing something?

  Thank you.

  [1]: https://docs.openstack.org/api-ref/network/v2/#create-security-group-rule
  [2]: 
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/chap-introduction#about-this-release_relnotes-intro

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2074056/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2081945] Re: [postgresql] AttributeError: 'NoneType' object has no attribute 'id'

2024-10-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/930408
Committed: 
https://opendev.org/openstack/neutron/commit/144e140e750987a286e6adc74ff0ffad1da474d6
Submitter: "Zuul (22348)"
Branch:master

commit 144e140e750987a286e6adc74ff0ffad1da474d6
Author: Rodolfo Alonso Hernandez 
Date:   Wed Sep 25 07:17:07 2024 +

Use the declarative attribute ``standard_attr_id``

In those Neutron objects and DB definitions where the declarative
attribute ``standard_attr_id`` is defined, use it instead of accessing
the ``standard_attr`` child object.

Closes-Bug: #2081945
Change-Id: Iadfbeff79c0200c3a6b90f785b910dc391f9deb3
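
A minimal SQLAlchemy sketch of the idea, with illustrative model names
rather than Neutron's real definitions: the foreign key column is mapped
declaratively, so reading ``obj.standard_attr_id`` never triggers a lazy
load of the child row, while ``obj.standard_attr.id`` does (and can raise
AttributeError if the relationship resolves to None).

```
import sqlalchemy as sa
from sqlalchemy import orm


class Base(orm.DeclarativeBase):
    pass


class StandardAttribute(Base):
    __tablename__ = 'standardattributes'
    id = sa.Column(sa.BigInteger, primary_key=True)


class Network(Base):
    __tablename__ = 'networks'
    id = sa.Column(sa.String(36), primary_key=True)
    # Declarative FK column: cheap to read, always present locally.
    standard_attr_id = sa.Column(
        sa.BigInteger, sa.ForeignKey('standardattributes.id'))
    # Child object: reading standard_attr.id loads the related row first.
    standard_attr = orm.relationship(StandardAttribute)
```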


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081945

Title:
  [postgresql] AttributeError: 'NoneType' object has no attribute 'id'

Status in neutron:
  Fix Released

Bug description:
  Neutron API failing with the following error:
  ```
  AttributeError: 'NoneType' object has no attribute 'id'
  ```

  Log: https://e22ff80d6617e0aebdbb-
  
fab3512704550cb902a3eea3d4491f2b.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
  ovn-tempest-postgres-full/6b355b7/controller/logs/screen-q-svc.txt

  Snippet: https://paste.opendev.org/show/bx6fuJBVC2RvjVHDH7Ug/

  This bug is related to
  https://bugs.launchpad.net/neutron/+bug/2078787.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2081945/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2080199] Re: Functional jobs failing with "Too many open files"

2024-10-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/928759
Committed: 
https://opendev.org/openstack/neutron/commit/6970f39a49b83f279b9e0479f7637d03a123a40e
Submitter: "Zuul (22348)"
Branch:master

commit 6970f39a49b83f279b9e0479f7637d03a123a40e
Author: elajkat 
Date:   Tue Sep 10 09:36:32 2024 +0200

[CI] Functional: Increase Ulimit to 4096

Functional tests started to fail randomly with
"Too many open files". The default ulimit in the
OS is configured to 1024; increase it to 4096
to avoid these random failures.

Closes-Bug: #2080199
Change-Id: Iff86599678ebdd5189d5b56d11f3373c9b138562
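
For reference, a minimal sketch of raising the open-files limit from
within a Python process (the CI fix does the equivalent via the job's
ULIMIT_NOFILE setting instead); the soft limit can only be raised up to
the hard limit:

```
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
wanted = 4096
if soft < wanted:
    # An unprivileged process may raise its soft limit,
    # but never above the hard limit.
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(wanted, hard), hard))
```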


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2080199

Title:
  Functional jobs failing with "Too many open files"

Status in neutron:
  Fix Released

Bug description:
  Neutron functional jobs started to fail with "Too many open files", example:
  
https://14d65eceddbce78ddf51-8bfb5d70b83a273fa97d15d51d14f1ae.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-pyroute2-master/82341c6/testr_results.html

  Opensearch query:
  
https://opensearch.logs.openstack.org/_dashboards/app/data-explorer/discover/?security_tenant=global#?_a=(discover:(columns:!(build_name),interval:auto,sort:!()),metadata:(indexPattern:'94869730-aea8-11ec-9e6a-83741af3fdcd',view:discover))&_q=(filters:!(),query:(language:kuery,query:'%20message:%22Too%20many%20open%20files%22'))&_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))

  As I can see from the job-output.txt, the ulimit is set to 2048:
  2024-09-07 02:22:28.005219 | controller | +++ stackrc:source:935: 
ULIMIT_NOFILE=2048

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2080199/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2079996] Re: [OVN] OVN metadata agent check to restart the HAProxy container

2024-10-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/929604
Committed: 
https://opendev.org/openstack/neutron/commit/7b7f8d986a4f818d289149c6960c9eb8b62b432d
Submitter: "Zuul (22348)"
Branch:master

commit 7b7f8d986a4f818d289149c6960c9eb8b62b432d
Author: Rodolfo Alonso Hernandez 
Date:   Sat Sep 14 16:17:18 2024 +

[OVN] Check metadata HA proxy configuration before restart

Since [1], the OVN Metadata agent has support for IPv6. If the agent
is updated, the HA proxy instances need to be reconfigured and
restarted. However, that needs to be done only once; the next time
the OVN agent is restarted, if the HA proxy instances are updated
(have IPv6 support), they won't be restarted.

[1]https://review.opendev.org/c/openstack/neutron/+/894026

Closes-Bug: #2079996
Change-Id: Id0f678c7ffe162df42e18dfebb97dce677fc79fc
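
An illustrative sketch of that check (the config path layout and the
IPv6 marker string are assumptions, not the agent's actual
implementation): inspect the HAProxy instance's on-disk config and
restart only when the IPv6 bind entry is missing.

```
import os


def haproxy_needs_restart(cfg_path, ipv6_marker='::'):
    """Return True if the HAProxy instance must be reconfigured."""
    if not os.path.exists(cfg_path):
        return True  # no config yet: (re)create and start
    with open(cfg_path) as f:
        config = f.read()
    # If the IPv6 bind address is already present, the config was
    # generated by the updated agent and the process can keep running.
    return ipv6_marker not in config
```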


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079996

Title:
  [OVN] OVN metadata agent check to restart the HAProxy container

Status in neutron:
  Fix Released

Bug description:
  Since [1], we restart the HAProxy process of each network (datapath)
  in order to "honor any potential changes in their configuration." [2].

  This process could slow down the OVN Metadata agent restart and could
  potentially interfere with a VM boot-up if the HAProxy process is
  restarted in the middle.

  This bug proposes an optimization that checks the IPv6 support of the
  running HAProxy process to decide whether to restart it.

  
[1]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8
  
[2]https://github.com/openstack/neutron/commit/d9c8731af36d4eb53d9266733fec24659f2dc5a8#diff-95903c989a1d043a90abe006cedd7ec20bd7a36855c3219cd74580cfa125c82fR349-R351

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079996/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2071596] Re: netifaces is archived, please remove from dependencies

2024-10-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ironic-python-agent/+/931584
Committed: 
https://opendev.org/openstack/ironic-python-agent/commit/42ea1dbd1a106147c8ed332782e942ee745d6c74
Submitter: "Zuul (22348)"
Branch:master

commit 42ea1dbd1a106147c8ed332782e942ee745d6c74
Author: Takashi Kajinami 
Date:   Sun Oct 6 22:08:13 2024 +0900

Drop dependency on netifaces

The netifaces library was abandoned and archived. Replace it by psutil
which is already part of the requirements.

Closes-Bug: #2071596
Change-Id: Ibca206ec2af1374199d0c0cfad897dded1298733
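
A rough equivalence sketch between the archived netifaces API and
psutil ('eth0' is just an example interface name):

```
import socket

import psutil

# netifaces.interfaces() -> list of interface names
interfaces = list(psutil.net_if_addrs())

# netifaces.ifaddresses('eth0')[netifaces.AF_INET] -> IPv4 addresses
ipv4_addrs = [snic.address
              for snic in psutil.net_if_addrs().get('eth0', [])
              if snic.family == socket.AF_INET]
```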


** Changed in: ironic-python-agent
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2071596

Title:
  netifaces is archived, please remove from dependencies

Status in ironic-python-agent:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in openstacksdk:
  In Progress
Status in oslo.utils:
  In Progress

Bug description:
  The oslo.utils Python package uses netifaces as a dependency.

  That package has not been maintained since 2021:

  https://github.com/al45tair/netifaces/issues/78

  Please remove it as a dependency and find an alternative

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic-python-agent/+bug/2071596/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074045] Re: keystone dev environment setup document is out of date

2024-10-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/925010
Committed: 
https://opendev.org/openstack/keystone/commit/9fd7b952d3f837a49ea3969d6d6d440e1243d733
Submitter: "Zuul (22348)"
Branch:master

commit 9fd7b952d3f837a49ea3969d6d6d440e1243d733
Author: Artem Goncharov 
Date:   Fri Jul 26 14:00:06 2024 +0200

Update development setup doc

Python evolves and certain things from the development setup simply
stopped working. This was also mentioned a few times by people
struggling to start a development version of Keystone locally. In
addition, certain steps are not ordered properly, which confuses
people who do not know the technical details.

This change modifies the doc by explicitly using Python 3.11, reordering
steps as required and adding clarifications on the database setup.

Closes-Bug: 2074045
Change-Id: I2eefc91594bac516e076cc60ae1fcdd7e704eab4


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2074045

Title:
  keystone dev environment setup document is out of date

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc 
https://docs.openstack.org/keystone/latest/contributor/set-up-keystone.html is 
inaccurate in this way:
    1. It mentions python3.6 is required but this doesn't seem to be the case. 
Maybe it should simply point here ? 
https://governance.openstack.org/tc/reference/runtimes/
    2. The layout of the document is somewhat confusing as well. It would be 
helpful to outline the steps clearly in a numbered fashion.
    3. Following the instructions the step : keystone-manage bootstrap command 
seems to fail with the error below.
    4. keystone-manage db_sync fails with the same error as well.
5. This command : uwsgi --http 127.0.0.1:5000 --wsgi-file $(which 
keystone-wsgi-public) : from the above link for running locally seems to be out 
of date as well. The flag --wsgi-file for one isn't supported anymore.

  ERROR keystone sqlalchemy.exc.NoSuchModuleError: Can't load plugin:
  sqlalchemy.plugins:dbcounter

  It will be helpful to newcomers to keep this up to date.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2074045/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2083456] Re: Unable to detect from OVN DB if Neutron uses distributed floating IPs or not

2024-10-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/931067
Committed: 
https://opendev.org/openstack/neutron/commit/1300110ccb9963e48a7c19e70599194d5c7da92c
Submitter: "Zuul (22348)"
Branch:master

commit 1300110ccb9963e48a7c19e70599194d5c7da92c
Author: Jakub Libosvar 
Date:   Tue Oct 1 16:54:18 2024 -0400

Set distributed flag to NB_Global

The patch introduces a new maintenance routine that always sets the
NB_Global.external_ids:fip-distributed value in the Northbound OVN DB to
the same value as the enable_distributed_floating_ip config option.

This is useful for projects that do not use RPC and rely on data only in
the OVN database.

Closes-Bug: #2083456
Change-Id: I7f30e6e030292b762dc9fc785c494c0dc215c749
Signed-off-by: Jakub Libosvar 
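
A hedged sketch of that maintenance step using an ovsdbapp-style
command; the option and key names mirror the commit message, while the
surrounding function is illustrative:

```
def sync_fip_distributed_flag(nb_idl, conf):
    """Mirror the config option into NB_Global.external_ids."""
    value = str(conf.enable_distributed_floating_ip).lower()
    # NB_Global is a single-row table; '.' addresses that row.
    nb_idl.db_set(
        'NB_Global', '.',
        ('external_ids', {'fip-distributed': value})).execute(
            check_error=True)
```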


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2083456

Title:
  Unable to detect from OVN DB if Neutron uses distributed floating IPs
  or not

Status in neutron:
  Fix Released

Bug description:
  Actually I don't know if this should be an RFE or a bug.

  Previously Neutron set external_mac in a NAT entry if the floating IP
  was distributed and it was left unset if the traffic was centralized.
  Nowadays, the behavior is inconsistent and external_mac is set after
  the port associated with the FIP transitions to the UP state - meaning
  that while the port is down, we are not able to say if the traffic
  will be distributed or not.

  It would be good to have this configuration option stored in one place
  - for example, the NB_Global.external_ids column can store a value for
  distributed routing, since it is a global option for the whole Neutron
  deployment. There are projects, like ovn-bgp-agent, that could use this
  information beforehand, so the agent knows if the floating IP should be
  exposed on the compute node or on the network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2083456/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055411] Re: Nova VMwareapi Resize of Volume Backed server fails

2024-10-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/910627
Committed: 
https://opendev.org/openstack/nova/commit/6a3ca95a36f4cae7a5b73b9f5663ef2c605f8bbb
Submitter: "Zuul (22348)"
Branch:master

commit 6a3ca95a36f4cae7a5b73b9f5663ef2c605f8bbb
Author: Fabian Wiesel 
Date:   Thu Mar 3 13:04:22 2022 +0100

Vmware: Remove uuid parameter from get_vmdk_info call

We changed the code to ignore the file-name as
- a vmotion will result in renaming of the files
- booting from a volume names the volume by its uuid,
both breaking the heuristic to detect the root disk.

We simply take the first hard disk in the default boot-order.
If we boot from an ISO, it will be attached as a CD-ROM, not as a
disk. Any snapshots would be taken from the first actual disk, so it
still can be used as a method to install an OS to an empty disk.

Disks for rescue operations are attached later in the default
boot-order, but the boot order will be changed to allow booting from
that disk.

Closes-Bug: #2055411
Change-Id: Ib3088cfce4f7a0b24f05d45e7830b011c4a39f42
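
An illustrative pyVmomi-style sketch of the heuristic (Nova's driver
uses its own session and property-collector plumbing, so this is not the
actual code): take the first virtual disk in the device list instead of
matching the VMDK file name, which vMotion or boot-from-volume can
change.

```
from pyVmomi import vim


def get_root_disk(vm):
    """Return the first hard disk in the default boot order, if any."""
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            return device
    return None  # e.g. booted from an ISO with no disk attached
```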


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055411

Title:
  Nova VMwareapi Resize of Volume Backed server fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  More specifically the following tempest test in master fails:
  
tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server

  
  Steps to reproduce
  ==
  * Install Devstack from master
  * Run tempest test 
`tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server`

  Expected result
  ===
  The test succeeds.

  Actual result
  =
  The test fails: the resize errors out with the AttributeError shown in
  the traceback below.

  Environment
  ===
  1. Git 1858cf18b940b3636e54eb5aafaf4050bdd02939 (master). So essentially this:
   https://review.opendev.org/c/openstack/nova/+/909474
  As instance creation is impossible without that patch.

  2. Which hypervisor did you use? What's the version of that?

  vmwareapi (VSphere 7.0.3 & ESXi 7.0.3)

  2. Which storage type did you use?

  vmdk on NFS 4.1

  3. Which networking type did you use?

  networking-nsx-t (https://github.com/sapcc/networking-nsx-t)

  Logs & Configs
  ==

  Can be found here: http://openstack-ci-
  
logs.global.cloud.sap/openstack/nova/1858cf18b940b3636e54eb5aafaf4050bdd02939/index.html

  The critical exception for this bug report is (abbreviated and reformatted 
for clarity):
  
   req-7aa5ded6-ea97-4010-93c8-9e39389cbfe0 
tempest-ServerActionsTestOtherA-839537081
  [  865.017199] env[58735]: ERROR nova.compute.manager [instance: 
b4d9131c-fc91-4fd4-813b-13b4bdfe1647] 
  Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 10856, in 
_error_out_instance_on_exception
  yield
File "/opt/stack/nova/nova/compute/manager.py", line 6096, in 
_resize_instance
  disk_info = self.driver.migrate_disk_and_power_off(
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 263, in 
migrate_disk_and_power_off
  return self._vmops.migrate_disk_and_power_off(context, instance,
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1467, in 
migrate_disk_and_power_off
  self._resize_disk(instance, vm_ref, vmdk, flavor)
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1398, in 
_resize_disk
  self._volumeops.detach_disk_from_vm(vm_ref, instance, vmdk.device)
File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 121, in 
detach_disk_from_vm
  disk_key = device.key
  AttributeError: 'NoneType' object has no attribute 'key'

  ---

  
  The bug is actually in the function 
`nova.virt.vmwareapi.vm_util.get_vmdk_info` here:
  
https://opendev.org/openstack/nova/src/branch/master/nova/virt/vmwareapi/vm_util.py#L690

  The code works on the assumption that the root disk is named after the 
instance.
  This assumption breaks in several cases; most relevantly for this test case, 
the root volume is actually a cinder volume.
  It will also break when the disk gets migrated to another datastore, either 
through a live-migration with no shared storage, or simply automatically 
with SDRS.

  I have an alternative implementation here: 
https://github.com/sapcc/nova/blob/stable/xena-m3/nova/virt/vmwareapi/vm_util.py#L997-L1034
  I'll provide a bug fix from it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2055411/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-t

[Yahoo-eng-team] [Bug 1953170] Re: [RFE] Unify quota engine API

2024-10-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926725
Committed: 
https://opendev.org/openstack/neutron/commit/a859760323a878ced58c2c01b6da8d0f803028be
Submitter: "Zuul (22348)"
Branch:master

commit a859760323a878ced58c2c01b6da8d0f803028be
Author: Rodolfo Alonso Hernandez 
Date:   Thu Aug 8 18:23:28 2024 +

Neutron quota engine checks the resource usage by default

Now the Neutron quota engine always checks the current resource usage
before updating the quota limits. This check is skipped only when the
CLI "--force" parameter is passed. That aligns the Neutron quota
engine behaviour with other projects.

The Neutron quota commands now always check the resource limits. The
CLI parameter "--check-limits" is no longer needed, as this is the
default behaviour.

Depends-On: 
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/928770

Closes-Bug: #1953170
Change-Id: I2a9cd89cfe40ef635892cefeb61264272fe7bf16
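
A minimal sketch of the target semantics (names are illustrative): the
usage check is the default, and only an explicit force skips it.

```
def update_quota_limit(resource, new_limit, current_usage, force=False):
    """Set a quota limit, refusing values below current usage by default."""
    if not force and new_limit < current_usage:
        raise ValueError(
            '%s: requested limit %d is below current usage %d' %
            (resource, new_limit, current_usage))
    return new_limit
```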


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953170

Title:
  [RFE] Unify quota engine API

Status in neutron:
  Fix Released

Bug description:
  DESCRIPTION
  ---
  For the sake of simplicity, Neutron should adopt the same quota API behaviour 
as Nova and Cinder. Current behaviour when a new quota limit is set:
  1) Neutron:
 $ openstack quota set --network 100 
   # Quota is set WITHOUT checking the current network usage.

 $ openstack quota set --network 100 --check-limit 
   # Quota is set AFTER checking the current network usage.

  2) Nova: (just the opposite)
 $ openstack quota set --ram 100 
   # Quota is set AFTER checking the current RAM usage.

 $ openstack quota set --ram 100 --force 
   # Quota is set WITHOUT checking the current RAM usage.

  That means Neutron forces the quota update by default, while in Nova
  you need to specify the "--force" parameter.

  The goal of this RFE is to plan a smooth migration to the Nova quota
  API behaviour: by default, always check the resource limit; if "--
  force" is provided, no check will be done.

  
  STEPS
  -
  1) Implement in OSC both "--force" and "--check-limit" parameters for "quota 
set" command. The "--force" parameter is already present. "--check-limit" will 
be merged in [1]. The functionality in Neutron quota system to check the 
resource usage is merged in Neutron [2]. This step can be considered as DONE.

  2) Modify the quota engine to accept "--force" parameter. We'll
  discard it because this is the default behaviour.

  3) Write a warning message in the logs. If no parameter is passed
  (force, check-limit), that means the user is using the "old" API. In
  this case, we'll inform in this message about the future change we'll
  make in the API (with references to this LP bug).

  4) In 2 or 3 releases, change the behaviour in the Neutron quota
  engine: by default, we'll always check the resource limits. Remove the
  warning message.

  5) Remove from OSC the parameter "--check-limit" (unnecessary now).

  6) Remove from Neutron quota engine the "--check-limit" input
  parameter.

  
  [1]https://review.opendev.org/c/openstack/python-openstackclient/+/806016
  [2]https://review.opendev.org/c/openstack/neutron/+/801470

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1953170/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1804327] Re: occasional connection reset on SNATed after tcp retries

2024-10-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/618208
Committed: 
https://opendev.org/openstack/neutron/commit/ab94c6b02116ccc24c67c1ed8e09c6840d092424
Submitter: "Zuul (22348)"
Branch:master

commit ab94c6b02116ccc24c67c1ed8e09c6840d092424
Author: Dirk Mueller 
Date:   Thu Nov 15 17:19:35 2018 +0100

Enable liberal TCP connection tracking for SNAT namespaces

This avoids connections occasionally hanging due to TCP window
scaling not being correctly observed by the TCP connection
tracking; this seems to happen when retransmits occur
occasionally.
Setting this parameter turns off the window scaling validation
when deciding whether a packet matches an existing connection
tracked flow. That prevents the SNAT namespace from interfering
and lets the connection peers recover the connection via
retransmits/selective ACKs, instead of the SNAT terminating one
side of the connection and leaving it stalled permanently.

Closes-Bug: #1804327
Change-Id: I5e58bb2850bfa8e974e62215af0b4d7bc0592c13
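
A minimal sketch, assuming a root shell on the network node, of the
equivalent manual tweak inside a SNAT namespace (the agent applies it
through its own sysctl helpers):

```
import subprocess


def enable_liberal_conntrack(namespace):
    """Relax conntrack TCP window validation inside a namespace."""
    subprocess.check_call([
        'ip', 'netns', 'exec', namespace,
        'sysctl', '-w', 'net.netfilter.nf_conntrack_tcp_be_liberal=1'])
```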


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1804327

Title:
  occasional connection reset on SNATed after tcp retries

Status in neutron:
  Fix Released

Bug description:
  When neutron ports are connected to DVR routers that are without
  floating ip, the traffic is going via SNAT on the network node.

  In some cases, when the NAT'ed TCP connections end up
  retransmitting, a packet is sometimes retransmitted by the
  remote side that falls outside what the Linux kernel connection
  tracking considers part of a valid TCP window. When this happens, the
  flow receives an RST, terminating the connection on the sender side,
  while leaving the receiver side (the VM attached to the neutron port)
  hanging.

  A similar issue is described elsewhere, e.g.
  https://github.com/docker/libnetwork/issues/1090, and the workaround
  documented there of setting ip_conntrack_tcp_be_liberal seems to help:
  it stops conntrack from dismissing packets outside the observed TCP
  window size, which lets the TCP retransmit logic eventually recover
  the connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1804327/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2081859] Re: Nova not initializing os-brick

2024-09-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/849328
Committed: 
https://opendev.org/openstack/nova/commit/8c1a47c9cf6e1001fbefd6ff3b76314e39c81d71
Submitter: "Zuul (22348)"
Branch:master

commit 8c1a47c9cf6e1001fbefd6ff3b76314e39c81d71
Author: Gorka Eguileor 
Date:   Thu Jul 7 16:22:42 2022 +0200

Support os-brick specific lock_path

Note: initially this patch was related to a new feature, but it has
now become a bug fix, since os-brick's `setup` method is not being
called and that can create problems if os-brick changes.

As a new feature, os-brick now supports setting the location of file
locks in a different location from the locks of the service.

The functionality is intended for HCI deployments and hosts that are
running Cinder and Glance using the Cinder backend.  In those scenarios
the service can use a service-specific location for its file locks while
sharing only the os-brick lock location with the other services.

To leverage this functionality the new os-brick code is needed and
method ``os_brick.setup`` needs to be called once the service
configuration options have been loaded.

The default value of the os-brick ``lock_path`` is the one set in
``oslo_concurrency``.

This patch adds support for this new feature in a non backward
compatible way, so it requires an os-brick version bump in the
requirements.

The patch also ensures that ``tox -egenconfig`` includes the os-brick
configuration options when generating the sample config.

Closes-Bug: #2081859
Change-Id: I1b81eb65bd145869e8cf6f3aabc6ade58f832a19
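
A hedged sketch of the initialization order the fix enforces; the
bootstrap function is illustrative, while ``os_brick.setup`` is the real
entry point named in the commit message:

```
from oslo_config import cfg
import os_brick

CONF = cfg.CONF


def main():
    # 1. Load the service configuration first...
    CONF(project='nova')
    # 2. ...then initialize os-brick, so it can pick up its own
    #    lock_path (defaulting to the oslo_concurrency one).
    os_brick.setup(CONF)
```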


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2081859

Title:
  Nova not initializing os-brick

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the Zed release os-brick started needing to be initialized by
  calling a `setup` method before the library could be used.

  At that time there was only 1 feature that depended on it and it was
  possible to introduce a failsafe for that instance so things wouldn't
  break.

  In the Antelope release that failsafe should have been removed from
  os-brick and all projects should have been calling the `setup` method.

  Currently nova is not initializing os-brick, so if os-brick removes
  the failsafe the behavior in os-brick locks will break backward
  compatibility.

  Related os-brick patch: https://review.opendev.org/c/openstack/os-
  brick/+/849324

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2081859/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2083227] Re: [neutron-lib] pep8 job failing with pylint=3.3.1

2024-09-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/930886
Committed: 
https://opendev.org/openstack/neutron-lib/commit/939839cb838db12c589f6fe54bc5907cb6303590
Submitter: "Zuul (22348)"
Branch:master

commit 939839cb838db12c589f6fe54bc5907cb6303590
Author: Rodolfo Alonso Hernandez 
Date:   Mon Sep 30 09:57:24 2024 +

Skip pylint recommendation "too-many-positional-arguments"

This warning was introduced in [1] and is present in pylint==3.3.0


[1]https://github.com/pylint-dev/pylint/commit/de6e6fae34cccd2e7587a46450c833258e3000cb

Closes-Bug: #2083227
Change-Id: I124d5ff7d34dd868dd2861b72e55d62190dcc3f7
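
For reference, a per-scope pragma can silence the same check in code;
the repository-wide fix instead adds the check to the "disable" list in
the pylint configuration (the function below is illustrative):

```
# pylint: disable=too-many-positional-arguments
def subscribe(callback, resource, event, priority, cancellable, retries):
    """A 6-positional-argument signature that pylint 3.3 would flag."""
    return (callback, resource, event, priority, cancellable, retries)
```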


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2083227

Title:
  [neutron-lib] pep8 job failing with pylint=3.3.1

Status in neutron:
  Fix Released

Bug description:
  pylint==3.3.1 was released on Set 24, 2024 [1]. Prior to this,
  neutron-lib pep8 was using pylint=3.2.7.

  Current output (failing checks):
  * Module neutron_lib.context
  neutron_lib/context.py:36:4: R0917: Too many positional arguments (8/5) 
(too-many-positional-arguments)
  * Module neutron_lib.placement.client
  neutron_lib/placement/client.py:337:4: R0917: Too many positional arguments 
(6/5) (too-many-positional-arguments)
  * Module neutron_lib.callbacks.manager
  neutron_lib/callbacks/manager.py:36:4: R0917: Too many positional arguments 
(6/5) (too-many-positional-arguments)
  * Module neutron_lib.callbacks.events
  neutron_lib/callbacks/events.py:73:4: R0917: Too many positional arguments 
(6/5) (too-many-positional-arguments)
  neutron_lib/callbacks/events.py:114:4: R0917: Too many positional arguments 
(7/5) (too-many-positional-arguments)
  neutron_lib/callbacks/events.py:154:4: R0917: Too many positional arguments 
(9/5) (too-many-positional-arguments)
  * Module neutron_lib.db.model_query
  neutron_lib/db/model_query.py:74:0: R0917: Too many positional arguments 
(6/5) (too-many-positional-arguments)
  neutron_lib/db/model_query.py:302:0: R0917: Too many positional arguments 
(9/5) (too-many-positional-arguments)
  neutron_lib/db/model_query.py:350:0: R0917: Too many positional arguments 
(10/5) (too-many-positional-arguments)
  * Module neutron_lib.services.qos.base
  neutron_lib/services/qos/base.py:30:4: R0917: Too many positional arguments 
(6/5) (too-many-positional-arguments)
  * Module neutron_lib.agent.linux.interface
  neutron_lib/agent/linux/interface.py:24:4: R0917: Too many positional 
arguments (10/5) (too-many-positional-arguments)

  [1]https://pypi.org/project/pylint/#history

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2083227/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2082344] Re: [unmaintained] ``NetworkWritableMtuTest`` tests don't work in ML2/OVN

2024-09-27 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/930594
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/68e1139b8c721d760d2d1dfc1d67a359478400ea
Submitter: "Zuul (22348)"
Branch:master

commit 68e1139b8c721d760d2d1dfc1d67a359478400ea
Author: Rodolfo Alonso Hernandez 
Date:   Thu Sep 26 15:01:32 2024 +

Exclude ``NetworkWritableMtuTest`` test class in ML2/OVN

E/W packet fragmentation is not supported in ML2/OVN [1].

[1]https://docs.openstack.org/neutron/latest/ovn/gaps.html

Closes-Bug: #2082344
Change-Id: Ieeed73b2a5fc8319b3c199cdd2888e0090139077


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2082344

Title:
  [unmaintained] ``NetworkWritableMtuTest`` tests don't work in ML2/OVN

Status in neutron:
  Fix Released

Bug description:
  This bug is related to https://review.opendev.org/c/openstack/neutron-
  tempest-plugin/+/929633.

  The test cases in ``NetworkWritableMtuTest`` cannot be executed in
  ML2/OVN: E/W packet fragmentation is not supported [1].

  This test class is failing in several unmaintained CI branches:
  * Yoga: 
https://zuul.opendev.org/t/openstack/build/e9534f8759c24bb5bfe8ac452d5cb931
  * Zed: 
https://zuul.opendev.org/t/openstack/build/739a64647ad146138f37652c1af8d47e

  [1]https://docs.openstack.org/neutron/latest/ovn/gaps.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2082344/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2082066] Re: Logging not supported in unmaintained branches

2024-09-27 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/930546
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/a107a258a6f389c6e1b240aeb85027701660f594
Submitter: "Zuul (22348)"
Branch:master

commit a107a258a6f389c6e1b240aeb85027701660f594
Author: Rodolfo Alonso Hernandez 
Date:   Thu Sep 26 09:20:45 2024 +

Disable the execution of ``LoggingTestJSON`` in older branches

The OVN logging feature needed for Neutron was implemented in [1].
This patch is provided in v20.12.0.

The unmaintained branches running with ML2/OVN on Focal (Ubuntu 20.04)
use the provided OVN package, v20.03, which doesn't have this patch.


[1]https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf

Closes-Bug: #2082066
Change-Id: Id3ecf975516d459358bb5fcd01085ec6a3bdbd26


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2082066

Title:
  Logging not supported in unmaintained branches

Status in neutron:
  Fix Released

Bug description:
  Network logging is not supported in unmaintained branches when running with 
ML2/OVN. The neutron-tempest-plugin ``LoggingTestJSON`` test class cannot be 
executed in unmaintained branches:
  * yoga: 
https://zuul.opendev.org/t/openstack/build/18e52813deff45d7a4e0d705e6b22edc
  * xena: 
https://zuul.opendev.org/t/openstack/build/917f7eae275a46d4a8e371fecf7498e5

  These jobs are running in Focal. The OVN version is 20.03. The support
  was provided in [1] (v20.12.0).

  [1]https://github.com/ovn-
  org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2082066/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2079850] Re: Ephemeral with vfat format fails inspection

2024-09-24 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/oslo.utils/+/928448
Committed: 
https://opendev.org/openstack/oslo.utils/commit/3c33e37d64e44addc9a818bd556f5919ed2e9002
Submitter: "Zuul (22348)"
Branch:master

commit 3c33e37d64e44addc9a818bd556f5919ed2e9002
Author: Dan Smith 
Date:   Fri Sep 6 07:51:13 2024 -0700

Avoid detecting FAT VBR as an MBR

The 1980s FAT filesystem has a VBR in the first sector, which looks
almost exactly like an MBR with zero partitions. To avoid detecting
these as MBRs, look for some extra attributes that indicate that the
structure is a VBR and avoid matching it as a GPT/MBR in that case.

We can add an inspector for this as a separate thing, but at the
moment we don't have that immediate need.

Closes-Bug: #2079850
Change-Id: Ibad87743b5a3b6469bd708d4caafe7911b045855
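
A simplified sketch of the distinction, based on the on-disk FAT layout
(the real oslo.utils inspector checks more attributes than this): a FAT
VBR carries an x86 jump instruction and a FAT filesystem-type string,
which a genuine MBR does not.

```
def looks_like_fat_vbr(sector0: bytes) -> bool:
    """Heuristically detect a FAT volume boot record in sector 0."""
    if len(sector0) < 512 or sector0[510:512] != b'\x55\xaa':
        return False
    # A VBR starts with a jump to the boot code (0xEB xx 0x90, or 0xE9).
    has_jump = sector0[0] in (0xEB, 0xE9)
    # FAT12/16 place the FS type string at offset 54, FAT32 at offset 82.
    has_fat_label = (sector0[54:62].startswith(b'FAT') or
                     sector0[82:90].startswith(b'FAT'))
    return has_jump and has_fat_label
```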


** Changed in: oslo.utils
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2079850

Title:
  Ephemeral with vfat format fails inspection

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.utils:
  Fix Released

Bug description:
  When configured to format ephemerals as vfat, we get this failure:

  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.358 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Signature KDMV not found: b'\xebX\x90m' _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.365 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Region signature not found at 3 _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Safety check mbr on gpt 
failed because GPT MBR has no partitions defined: 
oslo_utils.imageutils.format_inspector.SafetyViolation: GPT MBR has no 
partitions defined
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING nova.virt.libvirt.imagebackend [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Base image 
/var/lib/nova/instances/_base/ephemeral_1_0706d66 failed safety check: Safety 
checks failed: mbr: oslo_utils.imageutils.format_inspector.SafetyCheckFailed: 
Safety checks failed: mbr
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [None req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 
60ed4d3e522640b6ad19633b28c5b5bb ae43aec9c3c242a785c8256abdda1747 - - default 
default] [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] Instance failed to 
spawn: nova.exception.InvalidDiskInfo: Disk info file is invalid: Base image 
failed safety check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
Traceback (most recent call last):
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py", line 
685, in create_image
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
inspector.safety_check()
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File 
"/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py", 
line 430, in safety_check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
raise SafetyCheckFailed(failures)
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
oslo_utils.imageutils.format_inspector.SafetyCheckFailed: Safety checks failed: 
mbr
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
  Sep 03 17:34:28 compute-2 nova_c

[Yahoo-eng-team] [Bug 2079831] Re: [tempest] VM ports have status=DOWN when calling ``TestNetworkBasicOps._setup_network_and_servers``

2024-09-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/tempest/+/928471
Committed: 
https://opendev.org/openstack/tempest/commit/d6437c9dd175371cd13d0a5d305a8863bda5
Submitter: "Zuul (22348)"
Branch:master

commit d6437c9dd175371cd13d0a5d305a8863bda5
Author: Brian Haley 
Date:   Fri Sep 6 16:09:26 2024 -0400

Wait for all instance ports to become ACTIVE

get_server_port_id_and_ip4() gets a list of neutron ports
for an instance, but it could be that one or more of those have
not completed provisioning at the time of the call, so they are
still marked DOWN.

Wait for all ports to become active, since it could just be that
neutron has not completed its work yet.

Added new waiter function and tests to verify it worked.

Closes-bug: #2079831
Change-Id: I758e5eeb8ab05e79d6bdb2b560aa0f9f38c5992c
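
A hedged sketch of the waiter pattern (names are illustrative; tempest's
real helper lives in tempest.common.waiters):

```
import time


def wait_for_ports_active(client, server_id, timeout=60, interval=2):
    """Poll until every port of a server reports status ACTIVE."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        ports = client.list_ports(device_id=server_id)['ports']
        if ports and all(p['status'] == 'ACTIVE' for p in ports):
            return ports
        time.sleep(interval)
    raise TimeoutError('ports of server %s never became ACTIVE' % server_id)
```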


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079831

Title:
  [tempest] VM ports have status=DOWN when calling
  ``TestNetworkBasicOps._setup_network_and_servers``

Status in neutron:
  Invalid
Status in tempest:
  Fix Released

Bug description:
  The method ``TestNetworkBasicOps._setup_network_and_servers`` is used
  in several tempest tests. It creates a set of resources (network,
  servers, FIPs, etc). This method has a race condition when the config
  option "project_networks_reachable" is False (the default).

  The server is created [1] but there is no connectivity test [2] (due
  to project_networks_reachable=False). The next step is to create a FIP
  [3]. Because we are not passing the port_id, we first retrieve all the
  VM ports [4]. The issue happens at [5]: the ports are created but are
  still down.

  An example of this can be seen in [5][6]:
  1) The tempest test lists the VM ports (only one in this case) but the port is 
down: https://paste.opendev.org/show/bSLi4joS6blqipbwa7Pq/

  2) The Neutron API finishes processing the port activation at the same
  time the port list call was made:
  https://paste.opendev.org/show/brRqntkQYdDoVeEqCeXF/

  
  An active wait needs to be added to the method 
``get_server_port_id_and_ip4`` in order to wait for all ports to become active.

  
  
[1]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L120
  
[2]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L124
  
[3]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L128
  
[4]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/manager.py#L1143
  
[5]https://3fdd3adccbbbca8893fe-55e7a9d33a731efe4f7611907a31a4a1.ssl.cf1.rackcdn.com/924317/10/experimental/neutron-ovn-tempest-ovs-master/038956b/controller/logs/screen-neutron-api.txt
  
[6]https://3fdd3adccbbbca8893fe-55e7a9d33a731efe4f7611907a31a4a1.ssl.cf1.rackcdn.com/924317/10/experimental/neutron-ovn-tempest-ovs-master/038956b/controller/logs/tempest_log.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079831/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998268] Re: Fernet uid/gid logic issue

2024-09-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/866096
Committed: 
https://opendev.org/openstack/keystone/commit/1cf7d94d6eb27aff92d3a612ee05efcc19e08917
Submitter: "Zuul (22348)"
Branch:master

commit 1cf7d94d6eb27aff92d3a612ee05efcc19e08917
Author: Sam Morrison 
Date:   Wed Nov 30 12:16:40 2022 +1100

Fix logic of fernet creation when running as root

Running `keystone-manage fernet_rotate
--keystone-user root --keystone-group keystone`

Will cause the group to be root, not keystone, due to
checking the uid (0) for truthiness, as opposed to against None.

Closes-Bug: #1998268

Change-Id: Ib20550bf698f4fab381b48571ff8d096a2ae3335
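
The bug in miniature: root's uid is 0, which is falsy, so a truthiness
check conflates "running as root" with "no user given".

```
uid, gid = 0, 1000  # root user, keystone group

# Buggy pattern: the branch is skipped for uid 0.
if uid and gid:
    print('chown to %s:%s' % (uid, gid))  # never reached for root

# Fixed pattern: only skipped when a value is actually absent.
if uid is not None and gid is not None:
    print('chown to %s:%s' % (uid, gid))  # reached: root:keystone
```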


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998268

Title:
  Fernet uid/gid logic issue

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Running

  keystone-manage fernet_rotate --keystone-user root --keystone-group
  keystone

  Will not work as expected due to incorrect logic when the uid is set to 0,
  because 0 == False in a truthiness check.

  The new 0 key will have ownership of root:root, not root:keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1998268/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078999] Re: nova_manage: Image property restored after migration

2024-09-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/924319
Committed: 
https://opendev.org/openstack/nova/commit/2a1fad41453ca7ce15b1cd9b517055c4ccdd12cf
Submitter: "Zuul (22348)"
Branch:master

commit 2a1fad41453ca7ce15b1cd9b517055c4ccdd12cf
Author: zhong.zhou 
Date:   Wed Jul 17 18:29:46 2024 +0800

nova-manage: modify image properties in request_spec

At present, we can modify the properties in the instance
system_metadata through the image_property subcommand of
nova-manage, but there may be inconsistencies between their
values and those in request_specs.

And since the migration is based on request_specs, the same image
properties are now also written to request_specs.

Closes-Bug: 2078999
Change-Id: Id36ecd022cb6f7f9a0fb131b0d202b79715870a9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2078999

Title:
  nova_manage: Image property restored after migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  I use "nova-manage image_property" to modify image meta of an
  instance, however, the change lost and restored to the older prop
  after migration.

  Steps to reproduce
  ==

  1. Create an instance and set an image property such as 
hw_qemu_guest_agent=False.
  2. Use nova-manage image_property set to modify the instance, expecting the 
property to become True.
  3. Migrate the instance.

  Expected result
  ===
  hw_qemu_guest_agent is always True after migration

  Actual result
  =
  hw_qemu_guest_agent was restored to False after migration

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2078999/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2081087] Re: Performance regression in neutron-server from 2023.1 to 2024.1 when fetching a Security Group

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/929941
Committed: 
https://opendev.org/openstack/neutron/commit/c1b05e29adf9d0d68c1ac636013a8a363a92eb85
Submitter: "Zuul (22348)"
Branch:master

commit c1b05e29adf9d0d68c1ac636013a8a363a92eb85
Author: Rodolfo Alonso Hernandez 
Date:   Thu Sep 19 14:00:57 2024 +

Change the load method of SG rule "default_security_group"

Since [1], the SG rule SQL view also retrieves the table
"default_security_group", using a complex relationship [2].
When the number of SG rules of a SG is high (above 50 the
performance degradation is clearly noticeable), the
API call can take several seconds. For example, for 100
SG rules it can take up to one minute.

This patch changes the load method of the SG rule
"default_security_group" relationship to "selectin".
Benchmarks with a single default SG and 100 rules,
doing "openstack security group show $sg":
* 2023.2 (without this feature): around 0.05 seconds
* master: between 45-50 seconds (1000x time increase)
* loading method "selectin" or "dynamic": around 0.5 seconds.

NOTE: this feature [1] was implemented in 2024.1. At this
time, SQLAlchemy version was <2.0 and "selectin" method was
not available. For this version, "dynamic" can be used instead.

[1]https://review.opendev.org/q/topic:%22bug/2019960%22

[2]https://github.com/openstack/neutron/blob/08fff4087dc342be40db179fca0cd9bbded91053/neutron/db/models/securitygroup.py#L120-L121

Closes-Bug: #2081087
Change-Id: I46af1179f6905307c0d60b5c0fdee264a40a4eac
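
A minimal SQLAlchemy sketch of the change with illustrative models:
``lazy='selectin'`` loads the related rows through one extra IN-based
SELECT instead of folding them into the main query, which is what made
rule-heavy security groups so slow.

```
import sqlalchemy as sa
from sqlalchemy import orm


class Base(orm.DeclarativeBase):
    pass


class DefaultSecurityGroup(Base):
    __tablename__ = 'default_security_group'
    security_group_id = sa.Column(sa.String(36), primary_key=True)


class SecurityGroupRule(Base):
    __tablename__ = 'securitygrouprules'
    id = sa.Column(sa.String(36), primary_key=True)
    security_group_id = sa.Column(
        sa.String(36),
        sa.ForeignKey('default_security_group.security_group_id'))
    # 'selectin' issues a second SELECT ... WHERE id IN (...) instead of
    # joining this table into every SG rule row of the main query.
    default_security_group = orm.relationship(
        DefaultSecurityGroup, lazy='selectin')
```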


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081087

Title:
  Performance regression in neutron-server from 2023.1 to 2024.1 when
  fetching a Security Group

Status in neutron:
  Fix Released

Bug description:
  With the upgrade from 2023.1 to 2024.1 with the ML2/OVS driver, we've
  spotted a significant (10x) performance regression on some operations.

  As the best example, we can take security group operations.

  Neutron is running in eventlet, since uWSGI is not yet fully
  functional for 2024.1 (see
  https://review.opendev.org/c/openstack/neutron/+/926922).

  So neutron-server is just being launched with exactly the same database
  and config, just from different venvs.

  ```
  # cat /etc/systemd/system/neutron-server.service 
  [Unit]
  Description = neutron-server service
  After = network-online.target
  After = syslog.target

  [Service]
  Type = simple
  User = neutron
  Group = neutron
  ExecStart = /openstack/venvs/neutron-29.0.2/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  ExecReload = /bin/kill -HUP $MAINPID
  # Give a reasonable amount of time for the server to start up/shut down
  TimeoutSec = 120
  Restart = on-failure
  RestartSec = 2
  # This creates a specific slice which all services will operate from
  #  The accounting options give us the ability to see resource usage through
  #  the `systemd-cgtop` command.
  Slice = neutron.slice
  # Set Accounting
  CPUAccounting = True
  BlockIOAccounting = True
  MemoryAccounting = True
  TasksAccounting = True
  # Set Sandboxing
  PrivateTmp = False
  PrivateDevices = False
  PrivateNetwork = False
  PrivateUsers = False

  [Install]
  WantedBy = multi-user.target

  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups?project_id=${OS_PROJECT_ID} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real0m24.450s
  user0m0.008s
  sys 0m0.010s
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups/${security_group_uuid} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real0m54.841s
  user0m0.010s
  sys 0m0.012s
  # sed -i 's/29.0.2/27.4.0/g' /etc/systemd/system/neutron-server.service
  # systemctl daemon-reload
  # systemctl restart neutron-server
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups?project_id=${OS_PROJECT_ID} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real0m1.040s
  user0m0.011s
  sys 0m0.007s
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups/${security_group_uuid} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real0m0.589s
  user0m0.012s
  sys 0m0.007s
  ```

  So as you can see, the difference in response time is very significant,
  while the only change I've made is to use the previous codebase for the
  service.

  I am also providing pip freeze for both venvs for comparison, though both of 
them were using upper-constraints:
  # /openstack/venvs/neutron-27.4.0/bin/pip freeze
  alembic==1.8.1
  amqp==5.1.1
  appdirs==1.4.4
  attrs==22.1.0
  autopage==0.5.1
  bcrypt==4.0.0
  cachetools==5.2.0
  certifi==2023.11.17
  cffi==1.15.1
  charset-normalizer==2.1.1
  cliff==4.2.0
  cmd2==2.4.2
  cryptography==38.0.2
  debtcollector==2.5.0
  decor

[Yahoo-eng-team] [Bug 2081174] Re: Handle EndpointNotFound in nova notifier

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/929920
Committed: 
https://opendev.org/openstack/neutron/commit/7d1a20ed4d458c6682a52679b71b6bc8dea20d07
Submitter: "Zuul (22348)"
Branch:master

commit 7d1a20ed4d458c6682a52679b71b6bc8dea20d07
Author: yatinkarel 
Date:   Thu Sep 19 18:32:11 2024 +0530

Handle EndpointNotFound in nova notifier

Currently, if the nova endpoint does not exist, an
exception is raised. Even after the endpoint gets created,
notifications keep failing until the session
expires.
If the endpoint does not exist, the session is not useful,
so mark it as invalid; this ensures that if the endpoint is
created later, notifications no longer fail.

Closes-Bug: #2081174
Change-Id: I1f7fd1d1371ca0a3c4edb409cffd2177d44a1f23
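
A hedged sketch of the idea behind the fix (illustrative names, not the
merged patch): catch the missing-endpoint error and invalidate the cached
session so a later retry rebuilds the service catalog.

```
from keystoneauth1 import exceptions as ks_exc


def send_events(novaclient, session, batched_events):
    try:
        return novaclient.server_external_events.create(batched_events)
    except ks_exc.EndpointNotFound:
        # the catalog cached with the token has no nova endpoint, so the
        # session is useless until re-authenticated; invalidate it so the
        # next notification re-discovers the (possibly new) endpoint
        session.invalidate()
        raise
```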


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081174

Title:
  Handle EndpointNotFound in nova notifier

Status in neutron:
  Fix Released

Bug description:
  When the nova endpoint for the endpoint_type (public/internal/admin)
  does not exist, the following traceback is raised:

  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova [-] Failed to notify 
nova on events: [{'name': 'network-changed', 'server_uuid': 
'3c634df2-eb78-4f49-bb01-ae1c546411af', 'tag': 
'feaa6ca6-7c33-4778-a33f-cd065112cc99'}]: 
keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for 
compute service in regionOne region not found
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova Traceback (most 
recent call last):
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/neutron/notifiers/nova.py", line 282, in 
send_events
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova response = 
novaclient.server_external_events.create(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/v2/server_external_events.py", 
line 38, in create
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self._create('/os-server-external-events', body, 'events',
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/base.py", line 363, in _create
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp, body = 
self.api.client.post(url, body=body)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 401, in post
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self.request(url, 'POST', **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/client.py", line 69, in request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp, body = 
super(SessionClient, self).request(url,
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 554, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 257, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self.session.request(url, method, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 811, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova base_url = 
self.get_endpoint(auth, allow=allow,
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1243, in 
get_endpoint
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
auth.get_endpoint(self, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 375, in 
get_endpoint
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova endpoint_data = 
self.get_endpoint_data(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 275, in 
get_endpoint_data
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova endpoint_data = 
service_catalog.endpoint_data_for(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/access/service_catalog.py", 
line 462, in endpoint_data_for
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova raise 
exceptions.EndpointNotFound(msg)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova 
keystoneauth1.exceptions.catalog.EndpointNotFound: inte

[Yahoo-eng-team] [Bug 2080933] Re: neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase is broken

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-fwaas/+/929658
Committed: 
https://opendev.org/openstack/neutron-fwaas/commit/caca5ae4a0adbf5a2f2eeabbd746dac9d3ac37e6
Submitter: "Zuul (22348)"
Branch:master

commit caca5ae4a0adbf5a2f2eeabbd746dac9d3ac37e6
Author: Brian Haley 
Date:   Tue Sep 17 10:58:57 2024 -0400

Account for iptables-save output spacing differences

There are places where the iptables-save output is not
exactly as the input, for example:

1) extra space after '-j NFLOG --nflog-prefix'
2) '#/sec' instead of '#/s' for limit-burst
3) '-j REJECT --reject-with icmp-port-unreachable' instead
   of '-j REJECT'

Account for that in the code so when iptables debug is
enabled the functional tests pass.

Related-bug: #2079048
Closes-bug: #2080933

Change-Id: I98fe93019b7d1b84d0622b4430e56b37b7cc0250
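
A rough Python sketch of the normalization idea (the exact substitutions in
the merged patch may differ):

```
import re


def normalize_rule(rule):
    """Map iptables-save output spelling back to the input spelling."""
    # 1) collapse the extra space after '-j NFLOG --nflog-prefix'
    rule = re.sub(r'(--nflog-prefix)\s+', r'\1 ', rule)
    # 2) iptables-save prints '100/sec' where the input said '100/s'
    rule = re.sub(r'(\d+)/sec\b', r'\1/s', rule)
    # 3) the explicit reject reason replaces the bare '-j REJECT'
    return rule.replace('-j REJECT --reject-with icmp-port-unreachable',
                        '-j REJECT')


print(normalize_rule('-I run.py-rejected 1 -m limit --limit 100/sec '
                     '-j REJECT --reject-with icmp-port-unreachable'))
```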


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2080933

Title:
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
  is broken

Status in neutron:
  Fix Released

Bug description:
  The test cases in
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
  are consistently failing now, which blocks the neutron-fwaas-
  functional job.

  Example build:
  https://zuul.opendev.org/t/openstack/build/05d7f31ef63c449d9de275e9a121704b

  Example failure:

  ```
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase.test_start_logging_when_create_log
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/tests/base.py",
 line 178, in func
  return f(self, *args, **kwargs)

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 301, in test_start_logging_when_create_log
  self.run_start_logging(ipt_mgr,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 250, in run_start_logging
  self.log_driver.start_logging(self.context,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/services/logapi/agents/drivers/iptables/log.py",
 line 241, in start_logging
  self._create_firewall_group_log(context, resource_type,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/services/logapi/agents/drivers/iptables/log.py",
 line 309, in _create_firewall_group_log
  ipt_mgr.defer_apply_off()

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/agent/linux/iptables_manager.py",
 line 451, in defer_apply_off
  self._apply()

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/agent/linux/iptables_manager.py",
 line 478, in _apply
  raise l3_exc.IpTablesApplyException(msg)

  neutron_lib.exceptions.l3.IpTablesApplyException: IPTables Rules did not 
converge. Diff: # Generated by iptables_manager
  *filter
  -D run.py-accepted 1
  -I run.py-accepted 1 -i qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 2
  -I run.py-accepted 2 -o qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 3
  -I run.py-accepted 3 -i qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-accepted 4
  -I run.py-accepted 4 -o qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-rejected 1
  -I run.py-rejected 1 -j REJECT
  COMMIT
  # Completed by iptables_manager
  # Generated by iptables_manager
  *filter
  -D run.py-accepted 1
  -I run.py-accepted 1 -i qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 2
  -I run.py-accepted 2 -o qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 3
  -I run.py-accepted 3 -i qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-accepted 4
  -I run.py-accepted 4 -o qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 137

[Yahoo-eng-team] [Bug 2068644] Re: Issue associating floating IP with OVN load balancer

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/921663
Committed: 
https://opendev.org/openstack/neutron/commit/d8a4ad9167afd824a3f823d86a8fd33fb67c4abd
Submitter: "Zuul (22348)"
Branch:master

commit d8a4ad9167afd824a3f823d86a8fd33fb67c4abd
Author: Will Szumski 
Date:   Mon Jun 10 13:44:14 2024 +0100

Correct logic error when associating FIP with OVN LB

Fixes a logic error which meant that we didn't iterate over all logical
switches when associating a FIP to an OVN loadbalancer. The symptom was
that the FIP would show in neutron, but would not exist in OVN.

Closes-Bug: #2068644
Change-Id: I6d1979dfb4d6f455ca419e64248087047fbf73d7
Co-Authored-By: Brian Haley 
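
The shape of that logic error, in a hedged Python sketch (helper names are
illustrative, not the actual driver code):

```
def build_commands(logical_switch):
    # stand-in for the real per-switch OVN command construction
    return [f"associate-fip-on-{logical_switch}"]


def handle_lb_fip_cmds_buggy(logical_switches):
    for ls in logical_switches:
        return build_commands(ls)  # returns after the FIRST switch only


def handle_lb_fip_cmds_fixed(logical_switches):
    commands = []
    for ls in logical_switches:
        commands.extend(build_commands(ls))  # visit every switch
    return commands


print(handle_lb_fip_cmds_buggy(["ls1", "ls2"]))  # only ls1 handled
print(handle_lb_fip_cmds_fixed(["ls1", "ls2"]))  # both switches handled
```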


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2068644

Title:
  Issue associating floating IP with OVN load balancer

Status in neutron:
  Fix Released

Bug description:
  Version: yoga

  I'm seeing this failure when trying to associate a floating IP to a
  OVN based loadbalancer:

  Maintenance task: Failed to fix resource 
990f1d44-2401-49ba-b8c5-aedf7fb0c1ec (type: floatingips): TypeError: 'NoneType' 
object is not iterable
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance Traceback (most 
recent call last):
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 400, in check_for_inconsistencies
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._fix_create_update(admin_context, row)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 239, in _fix_create_update
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
res_map['ovn_create'](context, n_obj)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 467, in _create_floatingip_and_pf
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._ovn_client.create_floatingip(context, floatingip)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1201, in create_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
LOG.error('Unable to create floating ip in gateway '
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self.force_reraise()
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
200, in force_reraise
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance raise 
self.value
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1197, in create_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._create_or_update_floatingip(floatingip, txn=txn)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1007, in _create_or_update_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
commands.extend(
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance TypeError: 
'NoneType' object is not iterable
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 

  Unsure if that is masking another issue, but it seems like even in master
  _handle_lb_fip_cmds can return None, e.g.:

  

[Yahoo-eng-team] [Bug 2080556] Re: old nova instances cant be started on post victoria deployments

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/929187
Committed: 
https://opendev.org/openstack/nova/commit/2a870323c3d44d2056b326c184c435a484513532
Submitter: "Zuul (22348)"
Branch:master

commit 2a870323c3d44d2056b326c184c435a484513532
Author: Sean Mooney 
Date:   Thu Sep 12 21:05:54 2024 +0100

allow upgrade of pre-victoria InstanceNUMACells

This change ensures that if we are upgrading an
InstanceNUMACell object created before victoria
(<1.5) we properly set pcpuset=set() when
loading the object from the db.

This is required to support instances with a numa
topology that do not use cpu pinning.

Depends-On: 
https://review.opendev.org/c/openstack/python-openstackclient/+/929236
Closes-Bug: #2080556
Change-Id: Iea55aabe71c250d8c8e93c61421450b909a7fa3d
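
A hedged sketch of the compatibility rule the fix enforces (plain Python,
not the actual versioned-object code):

```
def load_instance_numa_cell(primitive, version):
    """Deserialize a cell, defaulting pcpuset for pre-1.5 objects."""
    cell = dict(primitive)
    if version < (1, 5) and 'pcpuset' not in cell:
        # unpinned guests created between liberty and victoria carry no
        # dedicated CPUs, so the new non-nullable field becomes empty
        cell['pcpuset'] = set()
    return cell


print(load_instance_numa_cell({'cpuset': {0, 1}}, (1, 4)))
```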


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2080556

Title:
  old nova instances cant be started on post victoria deployments

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Downstream we had an interesting bug report
  https://bugzilla.redhat.com/show_bug.cgi?id=2311875

  Instances created after liberty but before victoria
  that request a numa topology but do not have CPU pinning
  cannot be started on post victoria nova.

  as part of the
  https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/cpu-resources.html
  spec we started tracking cpus as PCPU and VCPU resource classes, but since
  a given instance would either have pinned cpus or floating cpus, no changes
  to the instance numa topology object were required.

  with the introduction of mixed cpus in a single instance

  https://specs.openstack.org/openstack/nova-
  specs/specs/victoria/implemented/use-pcpu-vcpu-in-one-instance.html

  the instance numa topology object was extended with a new pcpuset
  field.

  as part of that work the _migrate_legacy_object function was extended to 
default pcpuset to an empty set
  
https://github.com/openstack/nova/commit/867d4471013bf6a70cd3e9e809daf80ea358df92#diff-ed76deb872002cf64931c6d3f2d5967396240dddcb93da85f11886afc7dc4333R212
  for numa topologies that predate ovo

  and

  a new _migrate_legacy_dedicated_instance_cpuset function was added to
  migrate existing pinned instances and instances with ovo in the db.

  What we missed in the review is that unpinned guests should have had
  cell.pcpuset set to the empty set here:
  
https://github.com/openstack/nova/commit/867d4471013bf6a70cd3e9e809daf80ea358df92#diff-ed76deb872002cf64931c6d3f2d5967396240dddcb93da85f11886afc7dc4333R178

  The new field is not nullable and is not present in the existing json
  serialised object. As a result, accessing cell.pcpuset on an object
  returned from the db will raise a NotImplementedError because it is unset
  if the VM was created between liberty and victoria.
  This only applies to non-pinned vms with a numa topology, i.e.
  hw:mem_page_size= or hw:numa_nodes=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2080556/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2080436] Re: Live migration breaks VM on NUMA enabled systems with shared storage

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/928970
Committed: 
https://opendev.org/openstack/nova/commit/035b8404fce878b0a88c4741bea46135b6af51e8
Submitter: "Zuul (22348)"
Branch:master

commit 035b8404fce878b0a88c4741bea46135b6af51e8
Author: Matthew N Heler 
Date:   Wed Sep 11 12:28:15 2024 -0500

Fix regression with live migration on shared storage

The commit c1ccc1a3165ec1556c605b3b036274e992b0a09d introduced
a regression when NUMA live migration was done on shared storage

The live migration support for the power mgmt feature means we need to
call driver.cleanup() for all NUMA instances to potentially offline
pcpus that are not used any more after the instance is migrated away.
However this change exposed an issue with the disk cleanup logic. Nova
should never delete the instance directory if that directory is on
shared storage (e.g. the nova instances path is backed by NFS).

This patch will fix that behavior so live migration will function

Closes-Bug: #2080436
Change-Id: Ia2bbb5b4ac728563a8aabd857ed0503449991df1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2080436

Title:
  Live migration breaks VM on NUMA enabled systems with shared storage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The commit c1ccc1a3165ec1556c605b3b036274e992b0a09d introduced
  a regression when NUMA live migration was done on shared storage

  power_management_possible = (
      'dst_numa_info' in migrate_data and
      migrate_data.dst_numa_info is not None)
  # No instance booting at source host, but instance dir
  # must be deleted for preparing next block migration
  # must be deleted for preparing next live migration w/o shared
  # storage
  # vpmem must be cleaned
  do_cleanup = (not migrate_data.is_shared_instance_path or
                has_vpmem or has_mdevs or power_management_possible)

  Based on the commit, if any type of NUMA system is used with shared
  storage, live migration will delete the backing folder for the VM,
  making the VM unusable for future operations.
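
A hedged sketch of the invariant the fix restores (an assumption based on
this report, not the merged patch): cleanup may still run for
power-management bookkeeping, but the instance directory may only be
removed when the instance path is not on shared storage.

```
def cleanup_flags(is_shared_instance_path, has_vpmem, has_mdevs,
                  power_management_possible):
    # cleanup may run for power-mgmt/vpmem/mdev bookkeeping...
    do_cleanup = (not is_shared_instance_path or has_vpmem or
                  has_mdevs or power_management_possible)
    # ...but the backing directory must survive on shared storage
    destroy_disks = not is_shared_instance_path
    return do_cleanup, destroy_disks


print(cleanup_flags(True, False, False, True))  # (True, False)
```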

  My team is experiencing this issue on 2024.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2080436/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2079850] Re: Ephemeral with vfat format fails inspection

2024-09-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/928829
Committed: 
https://opendev.org/openstack/nova/commit/8de15e9a276dc4261dd0656e26ca5a917825f441
Submitter: "Zuul (22348)"
Branch:master

commit 8de15e9a276dc4261dd0656e26ca5a917825f441
Author: Sean Mooney 
Date:   Tue Sep 10 14:41:15 2024 +0100

only safety check bootable files created from glance

For blank files that are created by nova, such as swap
disks and ephemeral disks, we do not need to safety
check them as they are always just bare filesystems.

In the future we should refactor the qcow imagebackend to
not require backing files for swap and ephemeral disks,
but for now we simply disable the check to work around
the addition of the gpt image inspector and the incompatibility
with vfat. Future versions of oslo will account for the vfat boot
record. This is a minimal patch to avoid needing a new oslo
release for 2024.2.

Closes-Bug: #2079850
Change-Id: I7df3d9859aa4be3a012ff919f375a7a3d9992af4
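
A hedged sketch of the resulting rule (the helper shape is an assumption):
only images that originate from glance go through the format inspector.

```
def maybe_safety_check(inspector, created_from_glance):
    """Skip the gpt/mbr safety check for nova-created blank files."""
    if not created_from_glance:
        # swap and ephemeral disks are bare filesystems written by nova
        # itself; there is no untrusted image content to inspect
        return
    inspector.safety_check()
```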


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2079850

Title:
  Ephemeral with vfat format fails inspection

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.utils:
  In Progress

Bug description:
  When configured to format ephemerals as vfat, we get this failure:

  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.358 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Signature KDMV not found: b'\xebX\x90m' _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.365 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Region signature not found at 3 _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Safety check mbr on gpt 
failed because GPT MBR has no partitions defined: 
oslo_utils.imageutils.format_inspector.SafetyViolation: GPT MBR has no 
partitions defined
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING nova.virt.libvirt.imagebackend [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Base image 
/var/lib/nova/instances/_base/ephemeral_1_0706d66 failed safety check: Safety 
checks failed: mbr: oslo_utils.imageutils.format_inspector.SafetyCheckFailed: 
Safety checks failed: mbr
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [None req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 
60ed4d3e522640b6ad19633b28c5b5bb ae43aec9c3c242a785c8256abdda1747 - - default 
default] [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] Instance failed to 
spawn: nova.exception.InvalidDiskInfo: Disk info file is invalid: Base image 
failed safety check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
Traceback (most recent call last):
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py", line 
685, in create_image
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
inspector.safety_check()
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File 
"/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py", 
line 430, in safety_check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
raise SafetyCheckFailed(failures)
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
oslo_utils.imageutils.format_inspector.SafetyCheckFailed: Safety checks failed: 
mbr
  Sep 03 17:34

[Yahoo-eng-team] [Bug 2075349] Re: JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC auth endpoint

2024-09-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/puppet-keystone/+/928755
Committed: 
https://opendev.org/openstack/puppet-keystone/commit/fdf2a2b31a6de76973a35a2494455ef176eee936
Submitter: "Zuul (22348)"
Branch:master

commit fdf2a2b31a6de76973a35a2494455ef176eee936
Author: Takashi Kajinami 
Date:   Tue Sep 10 13:39:46 2024 +0900

Fix default OIDCRedirectURI hiding keystone federation auth endpoint

This updates the default OIDCRedirectURI according to the change made
in the example file in keystone repo[1].

[1] https://review.opendev.org/925553

Closes-Bug: #2075349
Change-Id: Ia0f3cbb842a4c01e6a3ca44ca66dc9a8a731720c


** Changed in: puppet-keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075349

Title:
  JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC
  auth endpoint

Status in OpenStack Keystone OIDC Integration Charm:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  Fix Released

Bug description:
  This bug is about test failures for jammy-caracal, jammy-bobcat, and
  jammy-antelope in cherry-pick commits from this change:

  https://review.opendev.org/c/openstack/charm-keystone-openidc/+/922049

  That change fixed some bugs in the Keystone OpenIDC charm and added
  some additional configuration options to help with proxies.

  The tests all fail with a JSONDecodeError during the Zaza tests for
  the Keystone OpenIDC charm. Here is an example of the error:

  Expecting value: line 1 column 1 (char 0)
  Traceback (most recent call last):
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
974, in json
  return complexjson.loads(self.text, **kwargs)
    File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/cliff/app.py", line 
414, in run_subcommand
  self.prepare_to_run_command(cmd)
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/shell.py", 
line 516, in prepare_to_run_command
  self.client_manager.auth_ref
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/clientmanager.py", 
line 208, in auth_ref
  self._auth_ref = self.auth.get_auth_ref(self.session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/federation.py",
 line 62, in get_auth_ref
  auth_ref = self.get_unscoped_auth_ref(session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/oidc.py",
 line 293, in get_unscoped_auth_ref
  return access.create(resp=response)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/access/access.py",
 line 36, in create
  body = resp.json()
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
978, in json
  raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
  requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
  clean_up ListServer: Expecting value: line 1 column 1 (char 0)
  END return value: 1

  According to debug output, the failure happens during the OIDC
  authentication flow. Testing using the OpenStack CLI shows the failure
  happening right after this request:

  REQ: curl -g -i --insecure -X POST 
https://10.70.143.111:5000/v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
 -H "Authorization: 
{SHA256}45dbb29ea555e0bd24995cbb1481c8ac66c2d03383bc0c335be977d0daaf6959" -H 
"User-Agent: openstacksdk/3.3.0 keystoneauth1/5.7.0 python-requests/2.32.3 
CPython/3.10.12"
  Starting new HTTPS connection (1): 10.70.143.111:5000
  RESP: [200] Connection: Keep-Alive Content-Length: 0 Date: Tue, 30 Jul 2024 
19:28:17 GMT Keep-Alive: timeout=75, max=1000 Server: Apache/2.4.52 (Ubuntu)
  RESP BODY: Omitted, Content-Type is set to None. Only text/plain, 
application/json responses have their bodies logged.

  This request is unusual in that it is a POST request with no request
  body, and the response is empty. The empty response causes the
  JSONDecodeError because the keystoneauth package expects a JSON document
  to be returned from the request for a Keystone token, and an empty
  string is not a valid JSON document.
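
The failure itself is easy to reproduce in isolation; a short illustration
of why an empty body raises exactly this error:

```
import json

body = ""  # what the misconfigured auth endpoint returns
try:
    json.loads(body)
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```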

  This strange beh

[Yahoo-eng-team] [Bug 2079813] Re: [ovn-octavia-provider] Fully populated LB wrong member subnet id when not specified

2024-09-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/928335
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/6e2ba02339cdb06a63abc74a2b58f993d0560d9c
Submitter: "Zuul (22348)"
Branch:master

commit 6e2ba02339cdb06a63abc74a2b58f993d0560d9c
Author: Fernando Royo 
Date:   Fri Sep 6 12:27:56 2024 +0200

Fix member subnet id on a fully populated LB

When a fully populated LB is created, if the member is not created
indicating the subnet_id to which it belongs, the LB vip_network_id
is erroneously inherited as member.subnet_id.

This patch fixes this behaviour to inherit the member.subnet_id from
the loadbalancer.vip_subnet_id that is always passed from the Octavia
API when the call is redirected to the OVN provider.

Closes-Bug: #2079813
Change-Id: I098afab053119d1a6eac86a12c1a20cc312b06ef
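
A hedged sketch of the corrected fallback (key names follow the bug text,
not necessarily the driver code):

```
def member_subnet_id(member, loadbalancer):
    # before the fix the fallback was loadbalancer['vip_network_id'],
    # which is a network id, not a subnet id
    return member.get('subnet_id') or loadbalancer['vip_subnet_id']


lb = {'vip_network_id': 'net-1', 'vip_subnet_id': 'subnet-1'}
print(member_subnet_id({'address': '10.0.0.5'}, lb))  # subnet-1
```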


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079813

Title:
  [ovn-octavia-provider] Fully populated LB  wrong member subnet id when
  not specified

Status in neutron:
  Fix Released

Bug description:
  When a fully populated LB is created, if the member is not created
  indicating the subnet_id to which it belongs, the LB vip_network_id is
  erroneously inherited as member.subnet_id [1]

  If the member subnet_id is indicated in the call, or added after LB
  creation in a later step, this issue does not happen.

  [1] https://opendev.org/openstack/ovn-octavia-
  
provider/blame/commit/0673f16fc68d80c364ed8907b26c061be9b8dec1/ovn_octavia_provider/driver.py#L118

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079813/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078476] Re: rbd_store_chunk_size defaults to 8M not 4M

2024-09-09 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/927844
Committed: 
https://opendev.org/openstack/glance/commit/39e407e9ffe956d40a261905ab98c13b5455e27d
Submitter: "Zuul (22348)"
Branch:master

commit 39e407e9ffe956d40a261905ab98c13b5455e27d
Author: Cyril Roelandt 
Date:   Tue Sep 3 17:25:54 2024 +0200

Documentation: fix default value for rbd_store_chunk_size

Closes-Bug: #2078476
Change-Id: I3b83e57eebf306c4de28fd58589522970e62cf42


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2078476

Title:
  rbd_store_chunk_size defaults to 8M not 4M

Status in Glance:
  Fix Released

Bug description:
  Versions affected: from current master to at least Antelope.

  The documentation
  
(https://docs.openstack.org/glance/2024.1/configuration/configuring.html#configuring-
  the-rbd-storage-backend) states that the default rbd_store_chunk_size
  defaults to 4M while in reality it's 8M. This could have been 'only' a
  documentation bug, but there are two concerns here:

  1) Was it the original intention to have 8M chunk size (which is
  different from Ceph's defaults = 4M) or was it an inadvertent effect
  of other changes?

  2) Cinder defaults to rbd_store_chunk_size=4M. Having volumes created
  from Glance images results in an inherited chunk size of 8M (due to
  snapshotting) and could have unpredicted performance consequences. It
  feels like this scenario should at least be documented, if not
  avoided.

  Steps to reproduce:
  - deploy Glance with RBD backend enabled and default config;
  - query stores information for the configured chunk size 
(/v2/info/stores/detail)
  Optional:
  - have an image created in Ceph pool and validate its chunk size with rbd 
info command.
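
A rough Python version of the stores query step above (the URL, token and
exact response layout are assumptions):

```
import requests

resp = requests.get("http://glance.example:9292/v2/info/stores/detail",
                    headers={"X-Auth-Token": "<token>"})
for store in resp.json().get("stores", []):
    if store.get("type") == "rbd":
        # expected today: 8388608 (8M), not the documented 4M
        print(store.get("properties", {}).get("chunk_size"))
```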

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2078476/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073836] Re: "Tagging" extension cannot add tags with character "/"

2024-09-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924724
Committed: 
https://opendev.org/openstack/neutron/commit/5a558b7d132b6d5cdda2720a1b345643e08246e2
Submitter: "Zuul (22348)"
Branch:master

commit 5a558b7d132b6d5cdda2720a1b345643e08246e2
Author: Rodolfo Alonso Hernandez 
Date:   Sat Jul 20 20:01:40 2024 +

Add new "tagging" API method: create (POST)

This new method allows creating multiple tags for a single resource.
The tags are passed as arguments in the ``POST`` call. That solves
the issue with the usage of URI reserved characters in the name of
the tags.

Bumped neutron-lib library to version 3.15.0, that contains [1].

[1]https://review.opendev.org/c/openstack/neutron-lib/+/924700

APIImpact add create method for service pluging "tagging"
Closes-Bug: #2073836

Change-Id: I9709da13c321695f324fe8d6c1cdc03756660a03
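
A hedged usage sketch of the new method (the body key follows the tagging
API's existing conventions and should be treated as an assumption): the tag
travels in the request body, so reserved characters never hit the URL path.

```
import requests

url = ("https://neutron.example:13696/v2.0/security-groups/"
       "51d6c739-dc9e-454e-bf72-54beb2afc5f8/tags")
resp = requests.post(url, json={"tags": ["one/two"]},
                     headers={"X-Auth-Token": "<token>"})
print(resp.status_code)
```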


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073836

Title:
  "Tagging" extension cannot add tags with charater "/"

Status in neutron:
  Fix Released

Bug description:
  The calls to add and remove an individual tag accept the tag as part
  of the URL path. However, a url-encoded slash character (as `%2F`) is
  interpreted as a literal slash (`/`) BEFORE path splitting:

  ```
  curl -g -i -X PUT \
  
'https://neutron.example:13696/v2.0/security-groups/51d6c739-dc9e-454e-bf72-54beb2afc5f8/tags/one%2Ftwo'
 \
  -H "X-Auth-Token: "
  HTTP/1.1 404 Not Found
  content-length: 103
  content-type: application/json
  x-openstack-request-id: req-3d5911e5-10be-41e2-b83f-5b6ea5b0bbdf
  date: Mon, 22 Jul 2024 08:57:16 GMT

  {"NeutronError": {"type": "HTTPNotFound", "message": "The resource
  could not be found.", "detail": ""}}

  
  Bugzilla reference: https://bugzilla.redhat.com/show_bug.cgi?id=2299208

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073836/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076916] Re: VMs cannot access metadata when connected to a network with only IPv6 subnets with the ML2/OVS and ML2/LB backends in the Neutron zuul gate

2024-09-06 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/926503
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/4a0b2343d723ea1227e85e0776fc58988a6b9e35
Submitter: "Zuul (22348)"
Branch:master

commit 4a0b2343d723ea1227e85e0776fc58988a6b9e35
Author: Miguel Lavalle 
Date:   Sun Aug 18 17:20:51 2024 -0500

Test metadata query over IPv6 only network with OVS and LB

This change enables the testing of querying the metadata service over an
IPv6 only network

Depends-On: https://review.opendev.org/c/openstack/neutron/+/922264

Change-Id: I56b1b7e5ca69e2fb01d359ab302e676773966aca
Related-Bug: #2069482
Closes-Bug: 2076916


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076916

Title:
  VMs cannot access metadata when connected to a network with only IPv6
  subnets with the ML2/OVS and ML2/LB backends in the Neutron zuul gate

Status in neutron:
  Fix Released

Bug description:
  While fixing https://bugs.launchpad.net/neutron/+bug/2069482 "[OVN]
  VMs cannot access metadata when connected to a network with only IPv6
  subnets", a neutron-tempest-plugin test case was proposed to make sure
  in the CI system that VMs can access the metadata service over an
  IPv6 only network: https://review.opendev.org/c/openstack/neutron-
  tempest-plugin/+/925928. While the new test case succeeds with the
  ML2/OVN backend thanks to this fix
  https://review.opendev.org/c/openstack/neutron/+/922264, it also
  showed the following failure with the ML2/OVS and ML2/LB backends:

  curl: (28) Failed to connect to fe80::a9fe:a9fe port 80: Connection
  timed out

  Steps to reproduce:

  Recheck https://review.opendev.org/c/openstack/neutron-tempest-
  plugin/+/925928 and see the logs for the ML2/OVS and ML2/LB jobs

  
  How reproducible: 100%

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2076916/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2060916] Re: [RFE] Add 'trusted_vif' field to the port attributes

2024-09-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926068
Committed: 
https://opendev.org/openstack/neutron/commit/104cbf9e60001329968bcab2e6d95ef38168cbc5
Submitter: "Zuul (22348)"
Branch:master

commit 104cbf9e60001329968bcab2e6d95ef38168cbc5
Author: Slawek Kaplonski 
Date:   Fri Aug 9 16:47:04 2024 +0200

Add trusted vif api extension for the port

This patch adds an implementation of the "port_trusted_vif" API extension
as an ML2 extension.
With this extension enabled, it is now possible for ADMIN users to set
port as trusted without modifying directly 'binding:profile' field
which is supposed to be just for machine to machine communication.

Value set in the 'trusted' attribute of the port is included in the
port's binding:profile so that it is still in the same place where e.g.
Nova expects it.

For now, setting this flag directly in the port's binding:profile field
is not forbidden and only a warning is generated in that case, but in
future releases it should be forbidden and only allowed to be done using
this new attribute of the port resource.

This patch implements also definition of the new API extension directly
in Neutron. It is temporary and will be removed once patch [1] in
neutron-lib will be merged and released.

[1] https://review.opendev.org/c/openstack/neutron-lib/+/923860

Closes-Bug: #2060916
Change-Id: I69785c5d72a5dc659c5a2f27e043c686790b4d2b
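
A hedged sketch of the difference for API callers (request bodies only;
the field placement follows the commit text):

```
# old, machine-to-machine field (still accepted, warns for now)
legacy_body = {"port": {"binding:profile": {"trusted": True}}}

# new admin-facing attribute added by the extension; neutron copies the
# value into binding:profile, where e.g. Nova already expects it
new_body = {"port": {"trusted": True}}
```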


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060916

Title:
  [RFE] Add 'trusted_vif' field to the port attributes

Status in neutron:
  Fix Released

Bug description:
  Currently 'trusted=true' can be passed to Neutron by an admin user
  through the port's "binding:profile" field, but this field originally
  was intended to be used only for machine-to-machine communication,
  and not to be used by any cloud user. There is even info about that in
  the api-ref:

  "A dictionary that enables the application running on the specific
  host to pass and receive vif port information specific to the
  networking back-end. This field is only meant for machine-machine
  communication for compute services like Nova, Ironic or Zun to pass
  information to a Neutron back-end. It should not be used by multiple
  services concurrently or by cloud end users. The existing
  counterexamples (capabilities: [switchdev] for Open vSwitch hardware
  offload and trusted=true for Trusted Virtual Functions) are due to be
  cleaned up. The networking API does not define a specific format of
  this field. ..."

  
  This will be even worse with the new S-RBAC policies, where the
  "binding:profile" field is allowed to be changed only for SERVICE role
  users, not even for admins.

  So this small RFE is a proposal to add a new API extension which will add
  a field, like "trusted_vif", to the port object. This field would then be
  accessible to ADMIN role users in the Secure-RBAC policies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060916/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078787] Re: [postgresql] CI job randomly failing during "get_ports" command

2024-09-04 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927801
Committed: 
https://opendev.org/openstack/neutron/commit/c7d07b7421034c2722fb0d0cfd2371e052928b97
Submitter: "Zuul (22348)"
Branch:master

commit c7d07b7421034c2722fb0d0cfd2371e052928b97
Author: Rodolfo Alonso Hernandez 
Date:   Tue Sep 3 10:31:24 2024 +

Protect the "standardattr" retrieval from a concurrent deletion

The method ``_extend_tags_dict`` can be called from a "list" operation.
If one resource and its "standardattr" register is deleted concurrently,
the "standard_attr" field retrieval will  fail.

The "list" operation is protected with a READER transaction context;
however this is failing with the DB PostgreSQL backend.

Closes-Bug: #2078787
Change-Id: I55142ce21cec8bd8e2d6b7b8b20c0147873699da
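
A hedged Python sketch of the defensive pattern described (names follow the
bug text, not the merged patch):

```
def extend_tags_dict(resource_db):
    # the resource may have been deleted between the listing query and
    # this attribute access; a vanished register simply means "no tags"
    standard_attr = getattr(resource_db, 'standard_attr', None)
    if standard_attr is None:
        return {'tags': []}
    return {'tags': [t.tag for t in standard_attr.tags]}
```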


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078787

Title:
  [postgresql] CI job randomly failing during "get_ports" command

Status in neutron:
  Fix Released

Bug description:
  This issue is happening in master and stable branches.

  The Neutron API fails during a "get_ports" command with the following error:
  * Logs: 
https://1a2314758f28e1d7bdcb-9b5b0c3ad08d4708e738c2961a946a92.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-postgres-full/ac172f5/testr_results.html
  * Snippet: https://paste.opendev.org/show/boCN2S0gesS1VldBuxpj/

  It seems that, during the port retrieval in the "get_ports" command,
  one of the ports is concurrently deleted along with the
  "standard_attr" related register. This is happening despite of the
  reader context that should protect the "get_ports" command. This is
  not happening with MySQL/MariaDB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078787/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075178] Re: test_snapshot_running test fails if qemu-img binary is missing

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/925208
Committed: 
https://opendev.org/openstack/nova/commit/0809f75d7921fe01a6832211081e756a11b3ad4e
Submitter: "Zuul (22348)"
Branch:master

commit 0809f75d7921fe01a6832211081e756a11b3ad4e
Author: Julien Le Jeune 
Date:   Tue Jul 30 15:45:48 2024 +0200

Skip snapshot test when missing qemu-img

Since the commit that removed the AMI snapshot format special casing
has merged, we're now running the libvirt snapshot tests as expected.
However, for those tests the qemu-img binary needs to be installed.
Because these tests had been silently and incorrectly skipped for so long,
they didn't receive the same maintenance as other tests, as the failures
went unnoticed.

Change-Id: Ia90eedbe35f4ab2b200bdc90e0e35e5a86cc2110
Closes-bug: #2075178
Signed-off-by: Julien Le Jeune 
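
The guard itself is small; a hedged sketch (the merged change may express
it differently):

```
import shutil
import unittest


class SnapshotTestBase(unittest.TestCase):
    def setUp(self):
        super().setUp()
        if shutil.which('qemu-img') is None:
            # skip loudly instead of failing (or silently passing)
            self.skipTest('qemu-img binary is not installed')
```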


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2075178

Title:
  test_snapshot_running test fails if qemu-img binary is missing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When qemu-img binary is not present on the system, this test fails
  like we can see on that log:

  ==
  ERROR: 
nova.tests.unit.virt.test_virt_drivers.LibvirtConnTestCase.test_snapshot_running
  --
  pythonlogging:'': {{{
  2024-07-30 15:47:15,058 INFO [nova.db.migration] Applying migration(s)
  2024-07-30 15:47:15,170 INFO [nova.db.migration] Migration(s) applied
  2024-07-30 15:47:15,245 INFO [nova.db.migration] Applying migration(s)
  2024-07-30 15:47:15,901 INFO [nova.db.migration] Migration(s) applied
  2024-07-30 15:47:15,997 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-30 15:47:15,998 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-30 15:47:16,000 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  2024-07-30 15:47:17,245 INFO [os_vif] Loaded VIF plugins: linux_bridge, noop, 
ovs
  2024-07-30 15:47:17,426 INFO [nova.virt.libvirt.driver] Creating image(s)
  2024-07-30 15:47:17,560 INFO [nova.virt.libvirt.host] kernel doesn't support 
AMD SEV
  2024-07-30 15:47:17,642 INFO [nova.virt.libvirt.driver] Instance spawned 
successfully.
  2024-07-30 15:47:17,711 INFO [nova.virt.libvirt.driver] Beginning live 
snapshot process
  }}}

  Traceback (most recent call last):
File "/home/jlejeune/dev/pci_repos/stash/nova/nova/virt/libvirt/driver.py", 
line 3110, in snapshot
  metadata['location'] = root_disk.direct_snapshot(
File "/usr/lib/python3.10/unittest/mock.py", line 1114, in __call__
  return self._mock_call(*args, **kwargs)
File "/usr/lib/python3.10/unittest/mock.py", line 1118, in _mock_call
  return self._execute_mock_call(*args, **kwargs)
File "/usr/lib/python3.10/unittest/mock.py", line 1173, in 
_execute_mock_call
  raise effect
  NotImplementedError: direct_snapshot() is not implemented

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/virt/test_virt_drivers.py",
 line 60, in wrapped_func
  return f(self, *args, **kwargs)
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/virt/test_virt_drivers.py"

[Yahoo-eng-team] [Bug 2078518] Re: neutron designate scenario job failing with new RBAC

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/designate/+/927792
Committed: 
https://opendev.org/openstack/designate/commit/4388f00d267c4090b7de6bc94da9e2970abdf0cc
Submitter: "Zuul (22348)"
Branch:master

commit 4388f00d267c4090b7de6bc94da9e2970abdf0cc
Author: Slawek Kaplonski 
Date:   Tue Sep 3 10:49:04 2024 +0200

Add "admin" role to the designate user created by devstack plugin

The service user named "designate" had only the "service" role up to now,
but with oslo.policy 4.4.0, where "enforce_new_defaults" is set to True by
default, this breaks the integration between Neutron and Designate: e.g.
Neutron's creation of the recordset fails with a Forbidden exception, as
this seems to be allowed only for an admin user or a shared or primary
zone.

This patch also adds the "admin" role for this "designate" service user to
work around that issue, at least until Designate supports "service" role
usage with Secure RBAC policies.

Closes-Bug: #2078518
Change-Id: I477cc96519e7396a614f92d10986707ec388


** Changed in: designate
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078518

Title:
  neutron designate scenario job failing with new RBAC

Status in Designate:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  Oslo.policy 4.4.0 enabled the new RBAC defaults by default, which does
  not change any config on the neutron side because neutron already
  enabled the new defaults, but it enabled the new Designate RBAC. That
  is causing the neutron-tempest-plugin-designate-scenario job to fail.

  It is failing here
  - https://review.opendev.org/c/openstack/neutron/+/926085

  And this is a debugging change
  - https://review.opendev.org/c/openstack/neutron/+/926945/7

  I see from the log that the admin designate client is getting the
  error. If you look at the log below, it is designate_admin that gets an
  error while creating the recordset in Designate:

  Aug 09 19:08:30.539307 np0038166723 neutron-server[86674]: ERROR
  neutron_lib.callbacks.manager
  designate_admin.recordsets.create(in_addr_zone_name,

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-
  q-svc.txt#7665

  
https://github.com/openstack/neutron/blob/b847d89ac1f922362945ad610c9787bc28f37457/neutron/services/externaldns/drivers/designate/driver.py#L92

  which is caused by the GET Zone returning 403 in designateclient

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-q-svc.txt#7674
  I compared the designate Zone RBAC defaults to see if any change in them is causing it:

  Old policy: admin or owner
  New policy: admin or project reader

  
https://github.com/openstack/designate/blob/50f686fcffd007506e0cd88788a668d4f57febc3/designate/common/policies/zone.py
  The only difference in the policy is that, if the user is not admin, the
  role is also checked: member and reader only need to have access. But here
  neutron tries to access with the admin role only.

  I tried to query designate with "'all_projects': True" in the admin
  designate client request, but it still fails

  
https://zuul.opendev.org/t/openstack/build/25be97774e3a4d72a39eb6b2d2bed4a0/log/controller/logs/screen-
  q-svc.txt#7716

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/2078518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078789] Re: [SR-IOV] The "auto" VF status has precedence over the "enable"/"disable" status

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927795
Committed: 
https://opendev.org/openstack/neutron/commit/8211c29158d6fc8a1af938c326dfbaa685428a4a
Submitter: "Zuul (22348)"
Branch:master

commit 8211c29158d6fc8a1af938c326dfbaa685428a4a
Author: Rodolfo Alonso Hernandez 
Date:   Tue Sep 3 09:30:54 2024 +

[SR-IOV] The port status=DOWN has precedence in the VF link status

If an ML2/SR-IOV port is disabled (status=DOWN), it takes precedence
over the "auto" value for the VF link state. That will stop any
transmission from the VF.

Closes-Bug: #2078789
Change-Id: I11d973d245dd391623e501aa14b470daa780b4db
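
A hedged sketch of the precedence rule after the fix (the state names are
illustrative):

```
def vf_link_state(port_status, propagate_uplink_status):
    if port_status == 'DOWN':
        return 'disable'  # an explicit disable always wins: no traffic
    return 'auto' if propagate_uplink_status else 'enable'


print(vf_link_state('DOWN', True))  # disable, even if "auto" was requested
```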


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078789

Title:
  [SR-IOV] The "auto" VF status has precedence over the
  "enable"/"disable" status

Status in neutron:
  Fix Released

Bug description:
  This bug only applies to ML2/SR-IOV.

  The port field "propagate_uplink_status" defines if the port (VF) will
  follow the parent port (PF) status (enabled/disabled). The "auto"
  status has precedence over the "enable"/"disable" status. However,
  this could be a security issue: if the port owner wants to stop the VF
  (VM port) from transmitting any traffic, it is first necessary to unset
  the "propagate_uplink_status" field [1] and then set the port status to
  "disabled".

  Scope of this bug: The "disabled" status must have precedence over the
  "auto" or "enabled" statuses, for security reasons.

  
  [1]https://bugs.launchpad.net/neutron/+bug/2078661

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078382] Re: [OVN] User defined router flavor with no LSP associated to router interfaces

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/917800
Committed: 
https://opendev.org/openstack/neutron/commit/44cbbba369ad12bfdc8276319c7bcea173ddaa96
Submitter: "Zuul (22348)"
Branch:master

commit 44cbbba369ad12bfdc8276319c7bcea173ddaa96
Author: Miguel Lavalle 
Date:   Tue Apr 30 20:15:23 2024 -0500

User defined router flavor driver with no LSP

There is a use case where a user defined router flavor requires router
interfaces that don't have a corresponding OVN LSP. In this use case,
Neutron acts only as an IP address manager for the router interfaces.

This change adds a user defined router flavor driver that implements
the described use case. The new functionality is completely contained in
the new driver, with no logic added to the rest of ML2/OVN. This is
accomplished as follows:

1) When an interface is added to a router, the driver deletes the LSP
and the OVN revision number.

2) When an interface is about to be removed from a router, the driver
re-creates the LSP and the OVN revision number. In this way, ML2/OVN
can later delete the port normally.

Closes-Bug: #2078382

Change-Id: I14d675af2da281cc5cd435cae947ccdb13ece12b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078382

Title:
  [OVN] User defined router flavor with no LSP associated to router
  interfaces

Status in neutron:
  Fix Released

Bug description:
  There is at least one OpenStack operator that requires the ability to
  create an ML2/OVN user-defined router flavor whose router interfaces
  have no associated Logical Switch Ports. This allows a user-defined
  flavor driver to process traffic while bypassing the OVN pipeline.
  In this use case, Neutron acts only as an IP address manager for the
  router interfaces.

  The associated functionality shouldn't conflict with the ML2/OVN
  mechanism manager when deleting the associated Neutron ports, which
  would happen if the LSP is just removed without making provisions for
  the eventual removal of the router interface.
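
  A rough sketch of the two driver hooks described above; every class,
  method and helper name below is assumed for illustration and will differ
  from the merged driver:

  ```python
  # Hedged sketch only: names are illustrative, not the merged driver's API.
  class NoLspRouterFlavorDriver(object):
      """User-defined router flavor whose interfaces get no OVN LSP."""

      def add_router_interface(self, context, ovn_client, port):
          # Neutron keeps the port purely for IPAM; drop the LSP and its
          # OVN revision-number register so ML2/OVN stops tracking it.
          ovn_client.delete_logical_switch_port(context, port['id'])
          ovn_client.delete_revision_number(context, port['id'])

      def remove_router_interface(self, context, ovn_client, port):
          # Re-create the LSP and revision register first, so the normal
          # ML2/OVN deletion path can afterwards remove the port cleanly.
          ovn_client.create_logical_switch_port(context, port)
          ovn_client.create_initial_revision_number(context, port['id'])
  ```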

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078382/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075723] Re: Wrong token expiration time format with expiring application credentials

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/925596
Committed: 
https://opendev.org/openstack/keystone/commit/d01cde5a19d83736c9be235b27af8cc84ee01ed6
Submitter: "Zuul (22348)"
Branch:master

commit d01cde5a19d83736c9be235b27af8cc84ee01ed6
Author: Boris Bobrov 
Date:   Fri Aug 2 15:16:10 2024 +0200

Correct format for token expiration time

Tokens with expiration time limited by application credentials had an
incorrect format.

Fix the format, control it with the test.

Closes-Bug: 2075723
Change-Id: I09fe34541615090766a5c4a010a3f39756debedc


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075723

Title:
  Wrong token expiration time format with expiring application
  credentials

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In bug #1992183, token expiration time was limited to the application
  credentials expiration time. Unfortunately, the format used in the
  token is not the one specified in api-ref.

  Steps to reproduce:
  1. Create application credentials expiring very soon
  2. Issue a token with the application credentials
  3. Validate the token and check token expiration time

  Observed:
  "expires_at": "2024-08-02T13:47:05",

  Expected:
  "expires_at": "2024-08-02T13:47:05.00Z",

  I expect this, because:
  1. 
https://docs.openstack.org/api-ref/identity/v3/#validate-and-show-information-for-token
 - our docs say so
  2. The format is with Z in the end in all other authentication plugins

  This is also expected by tools that parse the token and convert it to
  objects, and that are stricter about formats.
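
  The difference between the two formats can be reproduced in plain Python
  (this only illustrates the formats, it is not keystone's actual code):

  ```python
  import datetime

  dt = datetime.datetime(2024, 8, 2, 13, 47, 5)
  dt.isoformat()                         # '2024-08-02T13:47:05'  (observed)
  dt.strftime('%Y-%m-%dT%H:%M:%S.%fZ')   # '2024-08-02T13:47:05.000000Z' (api-ref)
  ```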

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2075723/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075349] Re: JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC auth endpoint

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/925553
Committed: 
https://opendev.org/openstack/keystone/commit/7ac0c3cd33214ff3c926e2b5316b637892d701fb
Submitter: "Zuul (22348)"
Branch:master

commit 7ac0c3cd33214ff3c926e2b5316b637892d701fb
Author: Jadon Naas 
Date:   Thu Aug 1 21:10:43 2024 -0400

Update OIDC Apache config to avoid masking Keystone API endpoint

The current configuration for the OIDCRedirectURI results in
mod_auth_openidc masking the Keystone federation authentication
endpoint, which results in incorrect responses to requests for
Keystone tokens. This change updates the documentation to
recommend using a vanity URL that does not match a Keystone
API endpoint.

Closes-Bug: 2075349
Change-Id: I1dfba5c71da68522fdb6059f0dc03cddc74cb07d
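
As an illustration of the recommended configuration (the hostname and path
are placeholders, not taken from the change):

```
# Use a vanity redirect URI that does not shadow a Keystone API path:
OIDCRedirectURI "https://keystone.example.org:5000/oidc_redirect"
# rather than the federation auth endpoint itself:
#   .../v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
```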


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075349

Title:
  JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC
  auth endpoint

Status in OpenStack Keystone OIDC Integration Charm:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug is about test failures for jammy-caracal, jammy-bobcat, and
  jammy-antelope in cherry-pick commits from this change:

  https://review.opendev.org/c/openstack/charm-keystone-openidc/+/922049

  That change fixed some bugs in the Keystone OpenIDC charm and added
  some additional configuration options to help with proxies.

  The tests all fail with a JSONDecodeError during the Zaza tests for
  the Keystone OpenIDC charm. Here is an example of the error:

  Expecting value: line 1 column 1 (char 0)
  Traceback (most recent call last):
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
974, in json
  return complexjson.loads(self.text, **kwargs)
    File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/cliff/app.py", line 
414, in run_subcommand
  self.prepare_to_run_command(cmd)
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/shell.py", 
line 516, in prepare_to_run_command
  self.client_manager.auth_ref
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/clientmanager.py", 
line 208, in auth_ref
  self._auth_ref = self.auth.get_auth_ref(self.session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/federation.py",
 line 62, in get_auth_ref
  auth_ref = self.get_unscoped_auth_ref(session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/oidc.py",
 line 293, in get_unscoped_auth_ref
  return access.create(resp=response)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/access/access.py",
 line 36, in create
  body = resp.json()
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
978, in json
  raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
  requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
  clean_up ListServer: Expecting value: line 1 column 1 (char 0)
  END return value: 1

  According to debug output, the failure happens during the OIDC
  authentication flow. Testing using the OpenStack CLI shows the failure
  happens right after this request:

  REQ: curl -g -i --insecure -X POST 
https://10.70.143.111:5000/v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
 -H "Authorization: 
{SHA256}45dbb29ea555e0bd24995cbb1481c8ac66c2d03383bc0c335be977d0daaf6959" -H 
"User-Agent: openstacksdk/3.3.0 keystoneauth1/5.7.0 python-requests/2.32.3 
CPython/3.10.12"
  Starting new HTTPS connection (1): 10.70.143.111:5000
  RESP: [200] Connection: Keep-Alive Content-Length: 0 Date: Tue, 30 Jul 2024 
19:28:17 GMT Keep-Alive: timeout=75, max=1000 Server: Apache/2.4.52 (Ubuntu)
  RESP BODY: Omitted, Content-Type is set to None. Only text/plain, 
application/json responses have their bodies logged.

  This request is unusual in that it is a POST request with no request
  body, and the response is empty. The empty response causes the
  JSONDecodeError because the keystoneauth package expects a JSON document
  to be returned from the request for a Keystone token. The empty res

[Yahoo-eng-team] [Bug 2056195] Re: Return 409 at neutron-client conflict

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/918048
Committed: 
https://opendev.org/openstack/nova/commit/88b661b0780ee534630c2d345ffd4545158db806
Submitter: "Zuul (22348)"
Branch:master

commit 88b661b0780ee534630c2d345ffd4545158db806
Author: Rajesh Tailor 
Date:   Sat Apr 20 15:37:50 2024 +0530

Handle neutron-client conflict

When a user tries to add stateless and stateful security
groups on the same port, neutron raises SecurityGroupConflict (409),
but nova does not handle it and raises InternalServerError (500).

Since this is an invalid operation by the user, the user should get
a message explaining what they are doing wrong.

This change catches SecurityGroupConflict from the neutron
client and raises the newly added nova exception
SecurityGroupConnectionStateConflict with a 409 error code.

Closes-Bug: #2056195
Change-Id: Ifad28fdd536ff0a4b30e786b2fcbc5a55987a13a
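
A minimal sketch of that handling; the function shape and the exception's
constructor arguments are assumed, only the exception name comes from the
commit message:

```python
from neutronclient.common import exceptions as neutron_exc

from nova import exception

def _update_port_with_sg(neutron, port_id, updated_port):
    try:
        neutron.update_port(port_id, {'port': updated_port})
    except neutron_exc.Conflict as exc:
        # Invalid user operation: surface neutron's conflict as HTTP 409.
        raise exception.SecurityGroupConnectionStateConflict(
            reason=str(exc))  # kwargs assumed
```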


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2056195

Title:
  Return 409 at neutron-client conflict

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When attaching a stateless and stateful security group to a VM, nova returns 
a 500 error but it's a user issue and a 409 conflict error should be returned.

  Steps to reproduce
  ==

  1. create network
  2. create VM "test-vm" attached to the network
  3. optionally create a stateful security group; the default group should already do
  4. openstack security group create --stateless stateless-group
  5. openstack server add security group test-vm stateless-group

  Expected result
  ===
  Nova forwards the 409 error from Neutron with the error description from 
Neutron.

  Actual result
  =
  Nova returns: 
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-c6bbaf50-99b7-4108-98f0-808dfee84933)
   

  Environment
  ===

  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

  # nova-api --version
  26.2.2 (Zed)

  
  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

  Neutron with OVN

  
  Logs & Configs
  ==
  Stacktrace:

  Traceback (most recent call last):
    File "/usr/local/lib/python3.10/site-packages/nova/api/openstack/wsgi.py", line 658, in wrapped
      return f(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/nova/api/openstack/compute/security_groups.py", line 437, in _addSecurityGroup
      return security_group_api.add_to_instance(context, instance,
    File "/usr/local/lib/python3.10/site-packages/nova/network/security_group_api.py", line 653, in add_to_instance
      raise e
    File "/usr/local/lib/python3.10/site-packages/nova/network/security_group_api.py", line 648, in add_to_instance
      neutron.update_port(port['id'], {'port': updated_port})
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 828, in update_port
      return self._update_resource(self.port_path % (port), body=body,
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 2548, in _update_resource
      return self.put(path, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 365, in put
      return self.retry_request("PUT", action, body=body,
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 333, in retry_request
      return self.do_request(method, action, body=body,
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 297, in do_request
      self._handle_fault_response(status_code, replybody, resp)
    File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
      ret = obj(*args, **kwargs)
    File "/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 272, in _handle_fault_response
 

[Yahoo-eng-team] [Bug 2078432] Re: Port_hardware_offload_type API extension is reported as available but attribute is not set for ports

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch:master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078432

Title:
  Port_hardware_offload_type API extension is reported as available but
  attribute is not set for ports

Status in neutron:
  Fix Released

Bug description:
  This API extension is implemented as an ML2 plugin extension, but the
  API extension alias is also added directly to the
  _supported_extension_aliases list in the ML2 plugin. Because of that,
  even if the ML2 extension is not actually loaded, the API extension is
  reported as available. As a result, the 'hardware_offload_type'
  attribute sent by the client is accepted by neutron but not saved in
  the DB at all:

  
  $ openstack port create --network private --extra-property \
    type=str,name=hardware_offload_type,value=switchdev test-port-hw-offload

  +-----------------------+-------+
  | Field                 | Value |
  +-----------------------+-------+
  | admin_state_up        | UP    |
  | allowed_address_pairs |       |
  | binding_host_id       |       |
  | binding_profile       |       |
  | binding_vif_details   |       |
  | binding_vif_type

[Yahoo-eng-team] [Bug 2078434] Re: Creating port with hardware_offload_type attribute set fails with error 500

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch:master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078434

Title:
  Creating port with hardware_offload_type attribute set fails with
  error 500

Status in neutron:
  Fix Released

Bug description:
  When the port_hardware_offload_type ML2 extension is enabled and a port
  is created with the hardware_offload_type attribute, the request may
  fail with error 500 if no binding:profile field is provided (it is then
  of type 'Sentinel'). The error is:

  ...
  ERROR neutron.pecan_wsgi.hooks.translation with excutils.save_and_reraise_exception():
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
  ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
  ERROR neutron.pecan_wsgi.hooks.translation     raise self.value
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 1132, in _call_on_ext_drivers
  ERROR neutron.pecan_wsgi.hooks.translation     getattr(driver.obj, method_name)(plugin_context, data, result)
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_hardware_offload_type.py", line 44, in process_create_port
  ERROR neutron.pecan_wsgi.hooks.translation     self._process_create_port(context, data, result)
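
  A minimal sketch of the guard the fix implies; the attribute handling is
  assumed, but ATTR_NOT_SPECIFIED is the Sentinel neutron-lib uses for
  unset attributes:

  ```python
  from neutron_lib.api.definitions import portbindings
  from neutron_lib import constants

  # Hedged sketch: treat an unset binding:profile as an empty mapping.
  profile = data.get(portbindings.PROFILE)
  if profile is constants.ATTR_NOT_SPECIFIED or profile is None:
      profile = {}
  ```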


  

[Yahoo-eng-team] [Bug 1929805] Re: Can't remove records in 'Create Record Set' form in DNS dashboard

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/793420
Committed: 
https://opendev.org/openstack/horizon/commit/3b222c85c1e07ad0f55da93460520e1a07713a54
Submitter: "Zuul (22348)"
Branch:master

commit 3b222c85c1e07ad0f55da93460520e1a07713a54
Author: Vadym Markov 
Date:   Wed May 26 16:01:49 2021 +0300

CSS fix makes "Delete item" button active

Currently used in designate-dashboard in the DNS Zones - Create Record Set
modal window

Closes-Bug: #1929805
Change-Id: Ibcc97927df4256298a5c8d5e9834efa9ee498291


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1929805

Title:
  Can't remove records in 'Create Record Set' form in DNS dashboard

Status in Designate Dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Reproduced on devstack with master, but it seems that any setup with
  Designate since Mitaka is affected.

  Steps to reproduce:

  1. Go to Project/DNS/Zones page 
  2. Create a Zone
  3. Click on ‘Create Record Set’ button at the right of the Zone record
  4. Try to fill several ‘Record’ fields in the ‘Records’ section of the form, 
then to delete data in the field with 'x' button

  Expected behavior:
  Record deleted

  Actual behavior:
  'x' button is inactive

  It is a bug in the CSS used by the array widget in Horizon, but currently
  this array widget is used only in designate-dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1929805/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075147] Re: "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925376
Committed: 
https://opendev.org/openstack/neutron/commit/76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Submitter: "Zuul (22348)"
Branch:master

commit 76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Author: Rodolfo Alonso Hernandez 
Date:   Wed Jul 31 10:53:14 2024 +

Monkey patch the system libraries before calling them

The Neutron API with WSGI module, and specifically when using ML2/OVN,
was importing some system libraries before patching them. That was
leading to a recursion error, as reported in the related LP bug.
By calling ``eventlet_utils.monkey_patch()`` at the very beginning
of the WSGI entry point [1], this issue is fixed.

[1] WSGI entry point:
  $ cat /etc/neutron/neutron-api-uwsgi.ini
  ...
  module = neutron.wsgi.api:application

Closes-Bug: #2075147
Change-Id: If2aa37b2a510a85172da833ca20564810817d246


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2075147

Title:
  "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

Status in neutron:
  Fix Released

Bug description:
  The Neutron CI job "neutron-tempest-plugin-api-ovn-wsgi" is not
  working because TLS is enabled. There is an issue in the SSL library
  that throws a recursive exception.

  Snippet https://paste.opendev.org/show/briEIdk5z5SwYg25axnf/

  Log:
  
https://987c691fdc28f24679c7-001d480fc44810e6cf7b18a72293f87e.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
  tempest-plugin-api-ovn-wsgi/8e01634/
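
  The pattern of the fix, sketched as a WSGI entry module; the application
  factory import below is illustrative, not neutron's exact layout:

  ```python
  # Patch the standard library before anything else can import ssl/socket.
  from neutron.common import eventlet_utils
  eventlet_utils.monkey_patch()

  # Imports below must come after the patching (hence the noqa).
  from neutron.server import wsgi_app  # noqa: E402  (module name assumed)

  application = wsgi_app.init_application()
  ```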

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2075147/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051935] Re: [OVN] SNAT only happens for subnets directly connected to a router

2024-08-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926495
Committed: 
https://opendev.org/openstack/neutron/commit/dbf53b7bbfa27cb74b1d0b0e47629bf3e1403645
Submitter: "Zuul (22348)"
Branch:master

commit dbf53b7bbfa27cb74b1d0b0e47629bf3e1403645
Author: Ihar Hrachyshka 
Date:   Fri Aug 16 22:22:24 2024 +

Support nested SNAT for ml2/ovn

When ovn_router_indirect_snat = True, ml2/ovn will set a catch-all snat
rule for each external ip, instead of a snat rule per attached subnet.

NB: This option is global to cluster and cannot be controlled per
project or per router.

NB2: this patch assumes that 0.0.0.0/0 snat rules are properly handled
by OVN. Some (e.g. 22.03 and 24.03) OVN versions may have this scenario
broken. See: https://issues.redhat.com/browse/FDP-744 for details.

--

A long time ago, nested SNAT behavior was unconditionally enabled for
ml2/ovs, see: https://bugs.launchpad.net/neutron/+bug/1386041

Since this behavior has potential security implications, and since it
may not be desired in all environments, a new flag is introduced.

Since OVN was deployed without nested SNAT enabled in multiple
environments, the flag is set to False by default (meaning: no nested
SNAT).

In theory, instead of a config option, neutron could introduce a new API
to allow users to control the behavior per router. This would require
more work though. This granular API is left out of the patch. Interested
parties are welcome to start a discussion about adding the new API as a
new neutron extension to routers.

--

Before this patch, there was an alternative implementation proposed that
was not relying on 0.0.0.0/0 snat behavior implemented properly in OVN.
The implementation was abandoned because it introduced non-negligible
complexity in the neutron code and the OVN NB database.

See: https://review.opendev.org/c/openstack/neutron/+/907504

--

Closes-Bug: #2051935
Co-Authored-By: Brian Haley 
Change-Id: I28fae44edc122fae389916e25b3321550de001fd
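
For operators, enabling the new behavior should look roughly like this (the
[ovn] section name is assumed from the other ML2/OVN options):

  $ cat /etc/neutron/neutron.conf
  ...
  [ovn]
  ovn_router_indirect_snat = true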


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051935

Title:
  [OVN] SNAT only happens for subnets directly connected to a router

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  I am trying to achieve the following scenario:

  I have a VM attached to a router w/o external gateway (called project-
  router) but with a default route which sends all the traffic to another
  router (transit-router), which has an external gateway with SNAT
  enabled and is connected to a transit network 192.168.100.0/24.

  My VM is on 172.16.100.0/24; traffic hits the project-router, gets
  redirected to the transit-router correctly thanks to the default route,
  and there it reaches the external gateway but leaves without being
  SNATed.

  This is because in OVN I see that SNAT on this router is only enabled
  for logical IPs in 192.168.100.0/24, which is the subnet directly
  connected to the router:

  # ovn-nbctl lr-nat-list neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  TYPE EXTERNAL_IPEXTERNAL_PORTLOGICAL_IP
EXTERNAL_MAC LOGICAL_PORT
  snat 147.22.16.207   192.168.100.0/24

  But I would like this router to SNAT all the traffic that hits it,
  even when coming from a subnet not directly connected to it.

  I can achieve this by setting in ovn the snat for 0.0.0.0/0

  # ovn-nbctl lr-nat-add neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  snat 147.22.16.207 0.0.0.0/0

  # ovn-nbctl lr-nat-list neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  TYPE EXTERNAL_IPEXTERNAL_PORTLOGICAL_IP
EXTERNAL_MAC LOGICAL_PORT
  snat 147.22.16.207   0.0.0.0/0
  snat 147.22.16.207   192.168.100.0/24

  But this workaround can be wiped out if I run the neutron-ovn-db-sync-util
  on any of the neutron-api units.

  Is there a way to achieve this via OpenStack? If not does it make
  sense to have this as a new feature?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051935/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077228] Re: libvirt reports powered down CPUs as being on socket 0 regardless of their real socket

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926218
Committed: 
https://opendev.org/openstack/nova/commit/79d1f06094599249e6e30ebba2488b8b7a10834e
Submitter: "Zuul (22348)"
Branch:master

commit 79d1f06094599249e6e30ebba2488b8b7a10834e
Author: Artom Lifshitz 
Date:   Tue Aug 13 11:29:10 2024 -0400

libvirt: call get_capabilities() with all CPUs online

While we do cache the host's capabilities in self._caps in the
libvirt Host object, if we happen to first call get_capabilities() with
some of our dedicated CPUs offline, libvirt erroneously reports them
as being on socket 0 regardless of their real socket. We would then
cache that topology, thus breaking pretty much all of our NUMA
accounting.

To fix this, this patch makes sure to call get_capabilities()
immediately upon host init, and to power up all our dedicated CPUs
before doing so. That way, we cache their real socket ID.

For testing, because we don't really want to implement a libvirt bug
in our Python libvirt fixture, we make do with a simple unit test
that asserts that init_host() has powered on the correct CPUs.

Closes-bug: 2077228
Change-Id: I9a2a7614313297f11a55d99fb94916d3583a9504


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2077228

Title:
  libvirt reports powered down CPUs as being on socket 0 regardless of
  their real socket

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is more of a libvirt (or maybe even lower down in the kernel)
  bug, but the consequence of $topic's reporting is that if libvirt CPU
  power management is enabled, we mess up our NUMA accounting because we
  have the wrong socket for some/all of our dedicated CPUs, depending on
  whether they were online or not when we called get_capabilities().

  Initially found by internal Red Hat testing, and reported here:
  https://issues.redhat.com/browse/OSPRH-8712
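
  On Linux, "powering up" a dedicated CPU before querying capabilities
  boils down to the sysfs toggle below; nova's own helper is not shown,
  this is just an illustration:

  ```python
  def set_cpu_online(cpu_id: int, online: bool = True) -> None:
      # cpu0 usually has no 'online' file and cannot be offlined.
      path = f"/sys/devices/system/cpu/cpu{cpu_id}/online"
      with open(path, "w") as f:
          f.write("1" if online else "0")
  ```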

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2077228/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045974] Re: RFE: Create a role for domain-scoped self-service identity management by end users

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/924132
Committed: 
https://opendev.org/openstack/keystone/commit/69d1897d0974aafc5f41b851ce61f62ab879c805
Submitter: "Zuul (22348)"
Branch:master

commit 69d1897d0974aafc5f41b851ce61f62ab879c805
Author: Markus Hentsch 
Date:   Mon Jul 15 11:09:55 2024 +0200

Implement the Domain Manager Persona for Keystone

Introduces domain-scoped policies for the 'manager' role to permit
domain-wide management capabilities in regards to users, groups,
projects and role assignments.
Defines a new base policy rule to restrict the roles assignable by
domain managers.

Closes-Bug: #2045974
Change-Id: I62742ed7d906c92d1132251080758bb54d0fc8e1


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2045974

Title:
  RFE: Create a role for domain-scoped self-service identity management
  by end users

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When assigning individual domains to customers of an OpenStack cloud,
  customer-side self-service identity management (i.e. managing users,
  projects and groups) within a domain is a popular use case but hard to
  implement with the current default role model.

  With its current architecture, assigning the "admin" role to end users is 
very risky even if scoped [1] and usually not an option.
  Furthermore, the "admin" role already has an implicit meaning associated with 
it that goes beyond identity management according to operator feedback [2].

  The Consistent and Secure RBAC rework introduced a "manager" role for 
projects [3].
  Having a similar role model on domain-level for identity management would be 
a good complement to that and enable self-service capabilities for end users.

  Request: introduce a new "domain-manager" role in Keystone and associated 
policy rules.
  The new "domain-manager" role - once assigned to an end user in a domain 
scope - would enable them to manage projects, groups, users and associated role 
assignments within the limitations of the domain.

  [1] https://bugs.launchpad.net/keystone/+bug/968696

  [2] https://governance.openstack.org/tc/goals/selected/consistent-and-
  secure-rbac.html#the-issues-we-are-facing-with-scope-concept

  [3] https://governance.openstack.org/tc/goals/selected/consistent-and-
  secure-rbac.html#project-manager
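
  For a flavor of what such domain-scoped rules look like (illustration
  only; the merged policies are more involved and the exact rule strings
  below are assumed):

  ```yaml
  "identity:create_user": "role:admin or (role:manager and domain_id:%(target.user.domain_id)s)"
  "identity:create_project": "role:admin or (role:manager and domain_id:%(target.project.domain_id)s)"
  ```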

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2045974/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077790] Re: [eventlet] RPC handler thread model is incompatible with eventlet

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926922
Committed: 
https://opendev.org/openstack/neutron/commit/ae90e2ccbfa45a8e864ec6f7fca2f28fa90d8062
Submitter: "Zuul (22348)"
Branch:master

commit ae90e2ccbfa45a8e864ec6f7fca2f28fa90d8062
Author: Rodolfo Alonso Hernandez 
Date:   Sat Aug 24 10:35:03 2024 +

Make RPC event cast synchronous with the event

Sometimes, the methods ``NeutronObject.get_object`` and
``ResourcesPushRpcApi.push`` yield the GIL during execution.
Because of that, the thread in charge of sending the RPC information
doesn't finish until another operation is pushed (implemented in [1]).

By making the RPC cast synchronous with the update/delete events, it
is ensured that both operations will finish and the agents will receive
the RPC event on time, just after the event happens.

This issue is hitting more frequently in the migration to the WSGI
server, due to [2]. Once the eventlet library has been deprecated from
OpenStack, it will be possible to use the previous model (using a long
thread to handle the RPC updates to the agents). It is commented in the
code as a TODO.

This patch is temporarily reverting [3]. This code should be restored
too.

[1]https://review.opendev.org/c/openstack/neutron/+/788510
[2]https://review.opendev.org/c/openstack/neutron/+/925376
[3]https://review.opendev.org/c/openstack/neutron/+/824508

Closes-Bug: #2077790
Related-Bug: #2075147
Change-Id: I7b806e6de74164ad9730480a115a76d30e7f15fc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077790

Title:
  [eventlet] RPC handler thread model is incompatible with eventlet

Status in neutron:
  Fix Released

Bug description:
  The RPC handler class ``_ObjectChangeHandler``, which is instantiated
  in ``OVOServerRpcInterface``, is not eventlet compatible.

  The ``OVOServerRpcInterface`` class is in charge of receiving the
  resource events (port, network, SG, etc.) and sending these updates via
  RPC to the listeners (agents like the OVS agent or DHCP agent). Since
  [1], we create a single long-running thread that reads the stored
  events and sends the RPC message (``RPCClient.cast``, because no reply
  is expected).

  Although this architecture is correct, it is not fully compatible with
  eventlet. Since [2] and the patches on top testing it, the OVS jobs
  (which use RPC between the server and the agents) are randomly failing.
  This happens more frequently with the SG API operations (SG rule
  addition and deletion).

  This bug proposes to make the event RPC cast synchronous with the API
  call, avoiding using a thread to collect and send the RPC messages.
  Once eventlet is removed from the OpenStack project, we'll be able to
  use the previous model.

  POC patch: https://review.opendev.org/c/openstack/neutron/+/926922
  Testing patch: https://review.opendev.org/c/openstack/neutron/+/926788

  [1]https://review.opendev.org/c/openstack/neutron/+/788510
  [2]https://review.opendev.org/c/openstack/neutron/+/925376
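
  The proposed synchronous model, sketched below; the handler and attribute
  names are assumed, though ``ResourcesPushRpcApi.push(context,
  resource_list, event_type)`` is the real cast entry point:

  ```python
  # Cast inline with the event instead of queueing it for a long-lived
  # worker thread that eventlet may starve.
  def handle_event(self, context, resource_type, event_type, updated_obj):
      self._resource_push_api.push(context, [updated_obj], event_type)
  ```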

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2077790/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2018737] Re: neutron-dynamic-routing announces routes for disabled routers

2024-08-28 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/882560
Committed: 
https://opendev.org/openstack/neutron-dynamic-routing/commit/06232f0b2c78bb983c5cefcd8a573761f87a
Submitter: "Zuul (22348)"
Branch:master

commit 06232f0b2c78bb983c5cefcd8a573761f87a
Author: Felix Huettner 
Date:   Mon May 8 11:53:55 2023 +0200

Ignore disabled routers for advertising

Currently, if a router is set to disabled, the dragents still advertise
the routes. This causes the upstream routers to still know these routes
and try to forward packets to a non-existing router.

By removing these routes we allow these upstream routers to directly
drop the traffic to these addresses instead of trying to forward it to
neutron routers.

Closes-Bug: 2018737

Change-Id: Icd6803769f37a04bf7581afb9722c78a44737374


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2018737

Title:
  neutron-dynamic-routing announces routes for disabled routers

Status in neutron:
  Fix Released

Bug description:
  neutron routers can be disabled, thereby basically removing them from their 
l3 agents.
  They will no longer accept, process or forward packets once they are disabled.

  Currently, if a router is set to disabled, the dragents still advertise the 
routes to its networks and floating IPs, even though the router is actually 
not active and cannot handle these packets.
  This causes the upstream routers to still know these routes and try to 
forward packets to this disabled router.

  For example for internet network this causes unneeded traffic on the upstream 
routers and the network nodes.
  They will receive traffic that they forward to the network node which then 
will drop this traffic as the router is gone.

  It would be ideal if routes for disabled routers were no longer advertised by 
the dragents.
  This would cause upstream routers to lose the routes to these networks/FIPs 
and allow them to drop the traffic as early as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2018737/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076670] Re: Default Roles in keystone: wrong format in example

2024-08-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/926291
Committed: 
https://opendev.org/openstack/keystone/commit/112331d9e95f7b7035f3f818716c2a5111baeb3e
Submitter: "Zuul (22348)"
Branch:master

commit 112331d9e95f7b7035f3f818716c2a5111baeb3e
Author: Artem Goncharov 
Date:   Wed Aug 14 17:37:46 2024 +0200

Fix role statement in admin doc

Closes-Bug: 2076670
Change-Id: I843dcce351d664124c769d815f72cd57caa5e429


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2076670

Title:
  Default Roles in keystone: wrong format in example

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 

  ```yaml
  "identity:create_foo": "role:service" or "role:admin"
  ```

  has to be

  ```yaml
  "identity:create_foo": "role:service or role:admin"
  ```

  ---
  Release: 25.1.0.dev52 on 2022-11-02 15:54:51
  SHA: a0cc504543e639c90212d69f3bcf91665648e71a
  Source: 
https://opendev.org/openstack/keystone/src/doc/source/admin/service-api-protection.rst
  URL: 
https://docs.openstack.org/keystone/latest/admin/service-api-protection.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2076670/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073894] Re: IPv6 dns nameservers described with their scope on the IP are not supported

2024-08-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926079
Committed: 
https://opendev.org/openstack/neutron/commit/1ed8609a6818d99133bf56483adb9bce8c886fd6
Submitter: "Zuul (22348)"
Branch:master

commit 1ed8609a6818d99133bf56483adb9bce8c886fd6
Author: Elvira García 
Date:   Fri Aug 9 18:16:59 2024 +0200

Get ips from system dns resolver without scope

Currently, is_valid_ipv6 accepts IPv6 addresses with a scope. However,
the netaddr library won't accept an address with a scope. Now,
get_noscope_ipv6() can be used to avoid this situation. In the future we
will be able to use the same function, which is also being defined in
oslo.utils. https://review.opendev.org/c/openstack/oslo.utils/+/925469

Closes-Bug: #2073894
Signed-off-by: Elvira García 
Change-Id: I27f25f90c54d7aaa3c4a7b5317b4b8a4122e4068


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073894

Title:
  IPv6 dns nameservers described with their scope on the IP are not
  supported

Status in neutron:
  Fix Released
Status in oslo.utils:
  In Progress

Bug description:
  When updating a port, we sometimes need to check dns nameserver ips.
  When this happens, if the DNS resolver file (resolv.conf) includes an
  address with scope like fe80::5054:ff:fe96:8af7%eth2, oslo_utils
  is_valid_ipv6 detects this as valid ipv6 input, but netaddr will raise
  an exception since this is not strictly just the IPv6 address, and
  therefore the port update fails with a raised exception and the port
  is deleted.

  In a normal scenario, this means that the metadata port cannot be
  spawned and therefore no VMs can be properly configured using
  metadata.

  [resolv.conf example]
  # Generated by NetworkManager
  nameserver 10.0.0.1
  nameserver fe80::5054:ff:fe96:8af7%eth2
  nameserver 2620:52:0:13b8::fe
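
  The helper the fix introduces behaves roughly like this (only the
  semantics are shown; the real implementation lives in neutron and,
  eventually, oslo.utils):

  ```python
  def get_noscope_ipv6(address: str) -> str:
      """Drop an RFC 4007 zone ID: 'fe80::1%eth2' -> 'fe80::1'."""
      return address.split('%', 1)[0]
  ```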

  This was found on an environment using Train, but affects every
  version.

  100% Reproducible, just need to try to spawn a VM on an environment
  with the resolv.conf similar to the example.

  Traceback found on controller logs:
  https://paste.opendev.org/show/bzqgpsJRifX0uovHw5nJ/

  From the compute logs we see the metadata port was deleted after the
  exception:

  2024-07-18 04:38:06.036 49524 DEBUG
  networking_ovn.agent.metadata.agent [-] There is no metadata port for
  network 75b73d16-cb05-42d1-84c5-19eccf3a252d or it has no MAC or IP
  addresses configured, tearing the namespace down if needed
  _get_provision_params /usr/lib/python3.6/site-
  packages/networking_ovn/agent/metadata/agent.py:474

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073894/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077351] Re: "Error formatting log line" sometimes seen in l3-agent log

2024-08-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926566
Committed: 
https://opendev.org/openstack/neutron/commit/b9ca288a5d387acf01464e80b3d8b7b42ce9a9ae
Submitter: "Zuul (22348)"
Branch:master

commit b9ca288a5d387acf01464e80b3d8b7b42ce9a9ae
Author: Brian Haley 
Date:   Mon Aug 19 13:48:55 2024 -0400

Log a warning if pid file could not be read in l3-agent

A formatting error can sometimes be seen in the l3-agent
log while spawning the state change monitor if the pid
file is empty. Log a warning to that effect instead so
an admin is aware in case there is an issue observed
with the router.

Closes-bug: #2077351
Change-Id: Ic599c2419ca204a5e10654cb4bef66e6770cbcd7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077351

Title:
  "Error formatting log line" sometimes seen in l3-agent log

Status in neutron:
  Fix Released

Bug description:
  While looking at another issue, I noticed this error in the l3-agent
  log:

  Aug 19 14:53:06.339086 np0038216951 neutron-keepalived-state-change[103857]: 
INFO neutron.common.config [-] Logging enabled!
  Aug 19 14:53:06.339318 np0038216951 neutron-keepalived-state-change[103857]: 
INFO neutron.common.config [-] 
/opt/stack/data/venv/bin/neutron-keepalived-state-change version 
25.0.0.0b2.dev120
  Aug 19 14:53:06.339942 np0038216951 neutron-keepalived-state-change[103857]: 
DEBUG neutron.common.config [-] command line: 
/opt/stack/data/venv/bin/neutron-keepalived-state-change 
--router_id=2a06f3a4-8964-4200-97e8-a9d635f31fba 
--namespace=qrouter-2a06f3a4-8964-4200-97e8-a9d635f31fba 
--conf_dir=/opt/stack/data/neutron/ha_confs/2a06f3a4-8964-4200-97e8-a9d635f31fba
 
--log-file=/opt/stack/data/neutron/ha_confs/2a06f3a4-8964-4200-97e8-a9d635f31fba/neutron-keepalived-state-change.log
 --monitor_interface=ha-b1ac3293-17 --monitor_cidr=169.254.0.132/24 
--pid_file=/opt/stack/data/neutron/external/pids/2a06f3a4-8964-4200-97e8-a9d635f31fba.monitor.pid.neutron-keepalived-state-change-monitor
 --state_path=/opt/stack/data/neutron --user=1001 --group=1001 {{(pid=103857) 
setup_logging /opt/stack/neutron/neutron/common/config.py:123}}
  Aug 19 14:53:06.352377 np0038216951 neutron-l3-agent[62158]: ERROR 
neutron.agent.linux.utils [-] Unable to convert value in 
/opt/stack/data/neutron/external/pids/2a06f3a4-8964-4200-97e8-a9d635f31fba.monitor.pid.neutron-keepalived-state-change-monitor
  Aug 19 14:53:06.352377 np0038216951 neutron-l3-agent[62158]: DEBUG 
neutron.agent.l3.ha_router [-] Error formatting log line msg='Router 
*(router_id)s *(process)s pid *(pid)d' err=TypeError('*d format: a real number 
is required, not NoneType') {{(pid=62158) spawn_state_change_monitor 
/opt/stack/neutron/neutron/agent/l3/ha_router.py:453}}

  The code in question is printing the PID as %(pid)d so when it is None
  it generates a TypeError.
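
  The failure mode is easy to reproduce in isolation:

  ```python
  >>> 'Router %(router_id)s %(process)s pid %(pid)d' % {
  ...     'router_id': 'r1', 'process': 'monitor', 'pid': None}
  TypeError: %d format: a real number is required, not NoneType
  ```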

  I think in this case the pid file has simply not been written yet and
  the process is still spawning, so we should print a warning to that
  effect. That way if the admin does see an issue with that router there
  is something to indicate why.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2077351/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055784] Re: Resource MEMORY_MB Unable to retrieve providers information

2024-08-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/926160
Committed: 
https://opendev.org/openstack/horizon/commit/30888edfd52bfaadad241aa2fcdf44151d0aed96
Submitter: "Zuul (22348)"
Branch:master

commit 30888edfd52bfaadad241aa2fcdf44151d0aed96
Author: Tatiana Ovchinnikova 
Date:   Mon Aug 12 14:40:45 2024 -0500

Fix Placement statistics display

For some inventories MEMORY_MB and DISK_GB are optional,
so we need to check before displaying them.

Closes-Bug: #2055784
Change-Id: I2ef63caf72f0f8f72fe8af87b21742088221578c


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2055784

Title:
  Resource MEMORY_MB Unable to retrieve providers information

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Hello
  When I click on Admin -> Compute -> Hypervisors I get an error and nothing is 
displayed in the resource providers summary.
  This happens because I have an environment in which I have activated PCI 
passthrough in nova, and these devices are tracked in placement: 
https://docs.openstack.org/nova/2023.1/admin/pci-passthrough.html#pci-tracking-in-placement

  
  and this inventory doesn't have MEMORY_MB or DISK_GB (I looked at 
https://opendev.org/openstack/horizon/src/branch/master/openstack_dashboard/api/placement.py#L117-L127)

  e.g. one compute node where I have activated PCI passthrough:
  ```
  (openstack) [osc@ansible-3 ~]$ openstack resource provider show 1fe9d32f-43cd-445a-8a49-a68f9ff5158f
  +----------------------+--------------------------------------+
  | Field                | Value                                |
  +----------------------+--------------------------------------+
  | uuid                 | 1fe9d32f-43cd-445a-8a49-a68f9ff5158f |
  | name                 | compute-19_:04:00.0                  |
  | generation           | 44                                   |
  | root_provider_uuid   | e9cb6a8d-e638-4245-bf79-981211c5a232 |
  | parent_provider_uuid | e9cb6a8d-e638-4245-bf79-981211c5a232 |
  +----------------------+--------------------------------------+
  (openstack) [osc@ansible-3 ~]$ openstack resource provider usage show 1fe9d32f-43cd-445a-8a49-a68f9ff5158f
  +----------------------+-------+
  | resource_class       | usage |
  +----------------------+-------+
  | CUSTOM_PCI_10DE_2330 | 1     |
  +----------------------+-------+
  ```

  A simple compute node shows as:
  ```
  (openstack) [osc@ansible-3 ~]$ openstack resource provider usage show f7af998e-1563-4a55-9145-4ee5f527d12b
  +----------------+--------+
  | resource_class | usage  |
  +----------------+--------+
  | VCPU           | 412    |
  | MEMORY_MB      | 824320 |
  | DISK_GB        | 0      |
  +----------------+--------+

  ```
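
  The fix therefore has to treat these inventory fields as optional,
  roughly as below (data access is assumed, illustration only):

  ```python
  inventories = resource_provider.get('inventories', {})
  memory = inventories.get('MEMORY_MB')  # may be absent for PCI-only providers
  disk = inventories.get('DISK_GB')
  if memory is not None:
      memory_total = memory['total']
  ```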

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2055784/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072978] Re: Show some error in logs when failing to load nb connection certificate

2024-08-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/924059
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/efd63d1721742400e7ba2c0bfc55249ef15fc549
Submitter: "Zuul (22348)"
Branch:master

commit efd63d1721742400e7ba2c0bfc55249ef15fc549
Author: Chris Buggy 
Date:   Mon Jul 29 15:16:30 2024 +0200

Error log for missing certs with NB and SB DBs

When the ovn-provider starts up,
it attempts to connect to the NB and SB databases
by retrieving SSL and cert files.
To avoid errors, the code will now check if these
files exist before using them.
If the files are missing,
connections will be skipped and an error message
will be displayed in the logs.

Refactor the _check_and_set_ssl_files method to be
public and reusable. It will now check whether a
string value is set, verify the path, and log an
error message if the file is not found.

Adding unit tests for ovsdb_monitor to bring up test coverage.
Updated ovsdb_tests to improve code.

Closes-Bug: #2072978
Change-Id: I2a21b94fee03767a5f703486bdab2908cda18746


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2072978

Title:
  Show some error in logs when failing to load nb connection certificate

Status in neutron:
  Fix Released

Bug description:
  When the ovn-provider (API or driver-agent) starts up, it should
  connect to the OVN NB/SB DB using certificates if they are configured
  in the config file. Currently, if any of those files are not found, the
  connection is skipped and no message is shown in the logs.
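
  A minimal sketch of the existence check described above (logging and
  option wiring assumed):

  ```python
  import os

  from oslo_log import log as logging

  LOG = logging.getLogger(__name__)

  def check_ssl_file(path, what):
      """Return True only if the configured SSL/cert file is usable."""
      if not path or not os.path.exists(path):
          LOG.error("OVN %s file not found at %r; skipping DB connection.",
                    what, path)
          return False
      return True
  ```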

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2072978/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076328] Re: SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links fails sporadically

2024-08-21 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/926201
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/0274381d31b7a5e6dff7a8e3ce8ff53d5c97d443
Submitter: "Zuul (22348)"
Branch:master

commit 0274381d31b7a5e6dff7a8e3ce8ff53d5c97d443
Author: yatinkarel 
Date:   Tue Aug 13 18:14:37 2024 +0530

Filter resources in pagination tests to avoid random failures

When running tempest with higher concurrency, pagination tests
randomly fail because the returned resources also include resources
created by other concurrent tests.
Filtering the returned results by name should help.

Closes-Bug: #2076328
Change-Id: I72de57cc382bb06606187c62b51ebb613f76291c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076328

Title:
  SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links
  fails sporadically

Status in neutron:
  Fix Released

Bug description:
  
neutron_tempest_plugin.api.test_subnets.SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links
  fails from time to time with something like this:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/test_subnets.py",
 line 62, in test_list_pagination_page_reverse_with_href_links
  self._test_list_pagination_page_reverse_with_href_links()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1413, in inner
  return f(self, *args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1404, in inner
  return f(self, *args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1629, in _test_list_pagination_page_reverse_with_href_links
  self.assertSameOrder(expected_resources, reversed(resources))
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1441, in assertSameOrder
  self.assertEqual(len(original), len(actual))
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 419, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 509, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 7 != 8

  Similar bug for pagination from the past:
  https://bugs.launchpad.net/neutron/+bug/1881311

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2076328/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072754] Re: Restarting octavia breaks IPv4 Load Balancers with health checks

2024-08-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/923196
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/ae1540bb1a04464c7065e542ec5e981947247f3b
Submitter: "Zuul (22348)"
Branch:master

commit ae1540bb1a04464c7065e542ec5e981947247f3b
Author: Vasyl Saienko 
Date:   Mon Jul 1 10:37:14 2024 +0300

Maintenance task: do not change IPv4 ip_port_mappings

IPv4 port mappings would get cleared by format_ip_port_mappings_ipv6(),
breaking load balancers with health monitors.

Change-Id: Ia29fd3c533b40f6eb13278a163ebb95465d77a99
Closes-Bug: #2072754
Co-Authored-By: Pierre Riteau 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2072754

Title:
  Restarting octavia breaks IPv4 Load Balancers with health checks

Status in neutron:
  Fix Released

Bug description:
  After implementing IPv6 health check support in change #919229 for the
  ovn-octavia-provider, it appears that the maintenance task is
  inadvertently deleting the `ip_port_mappings` of IPv4 load balancers.
  This issue results in the load balancers ceasing to function upon the
  restart of Octavia.

  I found this as a potential fix for this issue: [Proposed
  Fix](https://review.opendev.org/c/openstack/ovn-octavia-
  provider/+/923196).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2072754/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2069482] Re: [OVN] VMs cannot access metadata when connected to a network with only IPv6 subnets

2024-08-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/922264
Committed: 
https://opendev.org/openstack/neutron/commit/f7000f3d57bc59732522c4943d6ff2e9dfcf7d31
Submitter: "Zuul (22348)"
Branch:master

commit f7000f3d57bc59732522c4943d6ff2e9dfcf7d31
Author: Miguel Lavalle 
Date:   Tue Jun 18 19:36:13 2024 -0500

Fix support of IPv6 only networks in OVN metadata agent

When an IPv6 only network is used as the sole network for a VM and
there are no other bound ports on the same network in the same chassis,
the OVN metadata agent concludes that the associated namespace is not
needed and deletes it. As a consequence, the VM cannot access the
metadata service. With this change, the namespace is preserved if there
is at least one bound port on the chassis with either IPv4 or IPv6
addresses.

Closes-Bug: #2069482

Change-Id: Ie15c3344161ad521bf10b98303c7bb730351e2d8
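
The fix boils down to keeping the namespace whenever any bound port on the
chassis carries an address of either family. A rough sketch of that
decision, with made-up port objects standing in for the agent's real data
structures:

```python
def namespace_needed(chassis_ports):
    # Before the fix, ports carrying only IPv6 addresses were effectively
    # ignored, so IPv6-only networks lost their metadata namespace.
    # `chassis_ports` is assumed to be an iterable of objects whose
    # `addresses` list may hold IPv4 and/or IPv6 addresses.
    return any(port.addresses for port in chassis_ports)
```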


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2069482

Title:
  [OVN] VMs cannot access metadata when connected to a network with only
  IPv6 subnets

Status in neutron:
  Fix Released

Bug description:
  VMs cannot access the metadata service when connected to a network
  with only IPv6 subnets.

  Neutron branch: master

  Steps to reproduce:

  1) Create a network with a single IPv6 subnet:

  $ openstack network create ipv6-net-dhcpv6-slaac
  $ openstack subnet create --subnet-range fdba:e036:9e22::/64 --ip-version 6 
--gateway fdba:e036:9e22::1 --ipv6-ra-mode slaac --ipv6-address-mode slaac 
--network ipv6-net-dhcpv6-slaac ipv6-subnet-dhcpv6-slaac

  2) Create a VM using this network:

  $ openstack server create --key-name my_key --flavor m1.small --image
  ubuntu-20.04-minimal-cloudimg-amd64 --network ipv6-net-dhcpv6-slaac
  --security-group sg1 my-vm-slaac

  3) The following message is added to the metadata agent log file:

  Jun 14 22:00:32 central neutron-ovn-metadata-agent[89379]: DEBUG
  neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for
  network 191a0539-edbc-4037-b973-dfa77e3208f6, tearing the namespace
  down if needed {{(pid=89379) _get_provision_params
  /opt/stack/neutron/neutron/agent/ovn/metadata/agent.py:720}}

  which is produced here:

  
https://github.com/openstack/neutron/blob/79b2d709c80217830fed8ad73dcf6fbd3eea91b4/neutron/agent/ovn/metadata/agent.py#L719-L723

  4) When an IPv4 subnet is added to the network and the VM is
  recreated, the metadata service is accessible to it over IPv6:

  $ openstack subnet create --network ipv6-net-dhcpv6-slaac 
ipv4-subnet-dhcpv6-slaac --subnet-range 10.2.0.0/24
  $ openstack server delete my-vm-slaac
  $ openstack server create --key-name my_key --flavor m1.small --image 
ubuntu-20.04-minimal-cloudimg-amd64 --network ipv6-net-dhcpv6-slaac 
--security-group sg1 my-vm-slaac

  From the VM:

  ubuntu@my-vm-slaac:~$ curl http://[fe80::a9fe:a9fe%ens3]
  1.0
  2007-01-19
  2007-03-01
  2007-08-29
  2007-10-10
  2007-12-15
  2008-02-01
  2008-09-01
  2009-04-04
  latest

  ubuntu@my-vm-slaac:~$ curl http://[fe80::a9fe:a9fe%ens3]/openstack
  2012-08-10
  2013-04-04
  2013-10-17
  2015-10-15
  2016-06-30
  2016-10-06
  2017-02-22
  2018-08-27
  2020-10-14
  latest

  
  How reproducible: 100%

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2069482/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1860555] Re: PCI passthrough reschedule race condition

2024-08-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926407
Committed: 
https://opendev.org/openstack/nova/commit/f8b98390dc99f6cb0101c88223eb840e0d1c7124
Submitter: "Zuul (22348)"
Branch:master

commit f8b98390dc99f6cb0101c88223eb840e0d1c7124
Author: Balazs Gibizer 
Date:   Thu Aug 15 13:06:39 2024 +0200

Fix PCI passthrough cleanup on reschedule

The resource tracker Claim object works on a copy of the instance object
obtained from the compute manager. But the PCI claim logic does not use the
copy; it uses the original instance object. However, the abort claim logic,
including the abort PCI claim logic, worked on the copy only. Therefore the
claimed PCI devices are visible to the compute manager in the
instance.pci_devices list even after the claim is aborted.

There was another bug in the PCIDevice object where the instance object
wasn't passed to the free() function and therefore the
instance.pci_devices list wasn't updated when the device was freed.

Closes-Bug: #1860555
Change-Id: Iff343d4d78996cd17a6a584fefa7071c81311673


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860555

Title:
  PCI passthrough reschedule race condition

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce
  --

  Create multiple instances concurrently using a flavor with a PCI
  passthrough request (--property
  "pci_passthrough:alias"=":"), and a scheduler hint with
  some anti-affinity constraint.

  Expected result
  ---

  The instances are created successfully, and each have the expected
  number of PCI devices attached.

  Actual result
  -

  Sometimes, instances may fail during creation, or may be created with
  more PCI devices than requested.

  Environment
  ---

  Nova 18.2.2 (rocky), CentOS 7, libvirt, deployed by kolla-ansible.

  Analysis
  

  If an instance with PCI passthrough devices is rescheduled (e.g. due to
  affinity violation), the instance can end up with extra PCI devices attached.
  If the devices selected on the original and subsequent compute nodes have the
  same address, the instance will fail to create, with the following error:

  libvirtError: internal error: Device :89:00.0 is already in use

  However, if the devices are different, and all available on the first and
  second compute nodes, the VM may end up with additional hostdevs.

  On investigation, when the node is rescheduled, the instance object passed to
  the conductor RPC API contains the PCI devices that should have been freed.
  This is because the claim object holds a clone of the instance that is used to
  perform the abort on failure [1][2], and the PCI devices removed from its 
list are not
  reflected in the original object. There is a secondary issue that the PCI
  manager was not passing through the instance to the PCI object's free() method
  in all cases [3], resulting in the PCI device not being removed from the
  instance.pci_devices list.

  I have two alternative fixes for this issue, but they will need a
  little time to work their way out of an organisation. Essentially:

  1. pass the original instance (not the clone) to the abort function in the 
Claim (a sketch of this option follows the references below).
  2. refresh the instance from DB when rescheduling

  The former is a more general solution, but I don't know the reasons
  for using a clone in the first place. The second works for
  reschedules, but may leave a hole for resize or migration. I haven't
  reproduced the issue in those cases but it seems possible that it
  would be present.

  [1] 
https://opendev.org/openstack/nova/src/branch/master/nova/compute/claims.py#L64
  [2] 
https://opendev.org/openstack/nova/src/branch/master/nova/compute/claims.py#L83
  [3] 
https://opendev.org/openstack/nova/src/branch/master/nova/pci/manager.py#L309
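
A sketch of fix option 1 above (abort against the original instance rather
than the clone); the class and method names are illustrative, not nova's
actual Claim implementation:

```python
import copy

class Claim:
    """Illustrative only. The bug: abort() worked on a deep copy, so PCI
    devices freed during abort stayed in the compute manager's
    instance.pci_devices list and leaked into the reschedule."""

    def __init__(self, instance, pci_manager):
        self._original_instance = instance
        self.instance = copy.deepcopy(instance)  # clone used for bookkeeping
        self._pci_manager = pci_manager

    def abort(self):
        # Free PCI devices on the *original* object so the compute manager
        # sees them as released when it reschedules the instance.
        self._pci_manager.free_instance(self._original_instance)
```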

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1860555/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075959] Re: NUMATopologyFilter pagesize logs are misleading

2024-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926223
Committed: 
https://opendev.org/openstack/nova/commit/4678bcbb064da580500b1dbeddb0bdfdeac074ef
Submitter: "Zuul (22348)"
Branch:master

commit 4678bcbb064da580500b1dbeddb0bdfdeac074ef
Author: Stephen Finucane 
Date:   Tue Aug 13 17:24:31 2024 +0100

hardware: Correct log

We currently get the following error message if attempting to fit a
guest with hugepages on a node that doesn't have enough:

  Host does not support requested memory pagesize, or not enough free
  pages of the requested size. Requested: -2 kB

Correct this, removing the kB suffix and adding a note on the meaning of
the negative values, like we have for the success path.

Change-Id: I247dc0ec03cd9e5a7b41f5c5534bdfb1af550029
Signed-off-by: Stephen Finucane 
Closes-Bug: #2075959
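
The negative values are internal placeholders for the symbolic page-size
requests; a standalone sketch of the mapping the improved log note relies
on (the constants mirror nova's internals and should be treated as
assumptions here):

```python
# Placeholders for symbolic page sizes: small / large / any.
MEMPAGES_SMALL, MEMPAGES_LARGE, MEMPAGES_ANY = -1, -2, -3

_SYMBOLIC = {
    MEMPAGES_SMALL: 'small',
    MEMPAGES_LARGE: 'large',
    MEMPAGES_ANY: 'any',
}

def describe_pagesize(pagesize):
    # Render symbolic placeholders by name and real sizes in kB, so a
    # request for "large" pages is never logged as "-2 kB".
    if pagesize in _SYMBOLIC:
        return _SYMBOLIC[pagesize]
    return '%d kB' % pagesize

assert describe_pagesize(-2) == 'large'
assert describe_pagesize(2048) == '2048 kB'
```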


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2075959

Title:
  NUMATopologyFilter pagesize logs are misleading

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the instance requests memory pages via symbolic names (e.g. "large"
  instead of specifying the exact size) and the instance does not fit into
  a NUMA cell due to the memory requirements, the nova logs are confusing:

  ./nova-scheduler-scheduler.log:2024-07-31 23:37:28.428 1 DEBUG
  nova.virt.hardware [None req-c3efb10b-641c-4066-a569-206226315366
  f05a486d957b4e6082293ce5e707009d 8c8a6763e6924cd3a94427af5f8ef6ee - -
  default default] Host does not support requested memory pagesize, or
  not enough free pages of the requested size. Requested: -2 kB
  _numa_fit_instance_cell /usr/lib/python3.9/site-
  packages/nova/virt/hardware.py:944

  This happens because the symbolic name is translated to a negative
  integer placeholder inside nova. So when the field is printed, it
  should be translated back to the symbolic name instead.

  
  
https://github.com/openstack/nova/blob/bb2d7f9cad577f3a32cb9523e2b1d9a6d6db3407/nova/virt/hardware.py#L943-L946

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2075959/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938571] Re: vpnaas problem:ipsec pluto not running centos 8 victoria wallaby

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-vpnaas/+/895824
Committed: 
https://opendev.org/openstack/neutron-vpnaas/commit/8e8f3b5a1d0108771d712b699e87839146a3
Submitter: "Zuul (22348)"
Branch:master

commit 8e8f3b5a1d0108771d712b699e87839146a3
Author: Bodo Petermann 
Date:   Tue Sep 19 15:58:56 2023 +0200

Support for libreswan 4

With libreswan 4 some command line option changed, the rundir is now
/run/pluto instead of /var/run/pluto, and nat_traversal must not be set
in ipsec.conf.
Adapt the libreswan device driver accordingly.
Users will require libreswan v4.0 or higher, compatibility with v3.x is
not maintained.

Closes-Bug: #1938571
Change-Id: Ib55e3c3f9cfbe3dfe1241ace8c821256d7fc174a
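
The rundir change is the easiest part to picture; a simplified sketch of
the version gate (the real driver handles more than the control socket
path, and the version parsing here is deliberately naive):

```python
def pluto_ctl_path(libreswan_version):
    # libreswan >= 4 moved the rundir from /var/run/pluto to /run/pluto.
    major = int(libreswan_version.split('.')[0])
    if major >= 4:
        return '/run/pluto/pluto.ctl'
    # v3.x layout; note the merged fix drops v3.x compatibility entirely.
    return '/var/run/pluto/pluto.ctl'
```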


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938571

Title:
  vpnaas problem:ipsec pluto not running centos 8 victoria wallaby

Status in neutron:
  Fix Released

Bug description:
  Hello.
  I apologize if I don't explain the bug quite right.
  I am using CentOS 8 and I install OpenStack with kolla-ansible. Whether it is 
Ussuri, Victoria or Wallaby, when establishing the connection between the 2 
networks (with vpnaas), the error message is as follows:
  "ipsec whack --status" fails (no "/run/pluto/pluto.ctl")

  The problem is present with Libreswan 4.x, which does not include the 
"--use-netkey" option used by the "ipsec pluto" command.
  This option was present in Libreswan 3.x.
  So the "ipsec pluto" command fails, and "/run/pluto/pluto.ctl" is never created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938571/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2070486] Re: XStatic-JQuery.quicksearch is not updated in horizon

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/926134
Committed: 
https://opendev.org/openstack/horizon/commit/fd1fa88c680d2068eb47ecfa8dbfd74caf194140
Submitter: "Zuul (22348)"
Branch:master

commit fd1fa88c680d2068eb47ecfa8dbfd74caf194140
Author: manchandavishal 
Date:   Mon Aug 12 17:12:06 2024 +0530

Update XStatic-JQuery.quicksearch min. version to include latest CVE fix

This patch updates XStatic-JQuery.quicksearch minimum version to ensure
the latest security vulnerabilities are addressed.

Closes-Bug: 2070486
Change-Id: Id8d00b325ad563ca7c720c758f4da928fed176cd


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2070486

Title:
  XStatic-JQuery.quicksearch is not updated in horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently, horizon is using XStatic-JQuery.quicksearch version 2.0.3.1, which 
is very old (released in May 2014) and doesn't include the latest bug fixes.
  We should use the latest version of XStatic-JQuery.quicksearch, 2.0.3.2 [1].

  [1] https://pypi.org/project/XStatic-JQuery.quicksearch/#history

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2070486/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2034035] Re: neutron allowed address pair with same ip address causes ValueError

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/893650
Committed: 
https://opendev.org/openstack/horizon/commit/9c75ebba01cc58c77a7114226ebaeedbe033962a
Submitter: "Zuul (22348)"
Branch:master

commit 9c75ebba01cc58c77a7114226ebaeedbe033962a
Author: Tobias Urdin 
Date:   Mon Sep 4 13:03:15 2023 +

Fix allowed address pair row unique ID

This fixes so that the ID for the allowed
address pair rows is unique if it's the
same ip_address range but different
mac_address.

Closes-Bug: 2034035
Change-Id: I49e84568ef7cfbc1547258305f2101bffe5bdea5


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2034035

Title:
  neutron allowed address pair with same ip address causes ValueError

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When managing allowed address pairs in Horizon for a neutron port,
  creating two entries with the same ip_address but different mac_address
  values makes Horizon crash, because the ID used for each table row is
  identical; see the traceback further below.

  The solution is to concatenate the mac_address, when set, into the ID
  for that row, as sketched next.
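
A sketch of that ID scheme (illustrative only, not Horizon's exact code):

```python
def address_pair_row_id(ip_address, mac_address=None):
    # Two rows with the same ip_address but different MACs must not share
    # an ID, otherwise get_object_by_id() finds multiple matches.
    if mac_address:
        return '%s-%s' % (ip_address, mac_address)
    return ip_address

assert (address_pair_row_id('10.0.0.0/24', 'fa:16:3e:aa:bb:cc')
        != address_pair_row_id('10.0.0.0/24', 'fa:16:3e:dd:ee:ff'))
```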

  Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/django/core/handlers/exception.py", 
line 47, in inner
  response = get_response(request)
File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 
181, in _get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 51, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 35, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 35, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 111, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 83, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/views/generic/base.py", line 
70, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/views/generic/base.py", line 
98, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 156, in 
post
  return self.get(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 135, in 
get
  handled = self.handle_table(self._table_dict[table_name])
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 116, in 
handle_table
  handled = tab._tables[table_name].maybe_handle()
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1802, 
in maybe_handle
  return self.take_action(action_name, obj_id)
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1644, 
in take_action
  response = action.multiple(self, self.request, obj_ids)
File "/usr/lib/python3.6/site-packages/horizon/tables/actions.py", line 
305, in multiple
  return self.handle(data_table, request, object_ids)
File "/usr/lib/python3.6/site-packages/horizon/tables/actions.py", line 
760, in handle
  datum = table.get_object_by_id(datum_id)
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1480, 
in get_object_by_id
  % matches)
  ValueError: Multiple matches were returned for that id: 
[, ].

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2034035/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047132] Re: floating ip on inactive port not shown in Horizon UI floating ip details

2024-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/904172
Committed: 
https://opendev.org/openstack/horizon/commit/53c82bbff75f646654585f66f666cfd1f1b53987
Submitter: "Zuul (22348)"
Branch:master

commit 53c82bbff75f646654585f66f666cfd1f1b53987
Author: Tobias Urdin 
Date:   Thu Dec 21 11:36:26 2023 +0100

Fix floating IP associated to unbound port

This fixes a bug where a floating IP associated with an
unbound port would not show the fixed IP of that port.

Closes-Bug: 2047132
Change-Id: I4fbbcc4c0509e74ce3c46fa55e006c5bc3837be3
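
Conceptually the fix is a display fallback; a sketch with field names
following the neutron port API (the helper itself is illustrative, not
Horizon's code):

```python
def fip_port_display(port):
    # An unbound port has no attached instance to take the address from,
    # so fall back to the port's own first fixed IP instead of "-".
    fixed_ips = (port or {}).get('fixed_ips') or []
    if fixed_ips:
        return fixed_ips[0]['ip_address']
    return '-'

assert fip_port_display(
    {'fixed_ips': [{'ip_address': '192.168.56.30'}]}) == '192.168.56.30'
```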


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2047132

Title:
  floating ip on inactive port not shown in Horizon UI floating ip
  details

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When setting up a port that is not bound and assigning a Floating IP
  (FIP) to it, the FIP gets associated but the Horizon UI does not show
  the IP of the port; instead it shows a "-".

  The terraform/tofu snippet for the setup:

  resource "openstack_networking_floatingip_associate_v2" "fip_1" {
floating_ip = 
data.openstack_networking_floatingip_v2.fip_1.address
port_id = openstack_networking_port_v2.port_vip.id
  }
  resource "openstack_networking_port_v2" "port_vip" {
name   = "port_vip"
network_id = 
data.openstack_networking_network_v2.network_1.id
fixed_ip {
  subnet_id  = 
data.openstack_networking_subnet_v2.subnet_1.id
  ip_address = "192.168.56.30"
}
  }

  Example from UI :

185.102.215.242 floatit 
stack1-config-barssl-3-hostany-bootstrap-1896c992-3e17-4fab-b084-bb642c517cbe 
192.168.56.20 europe-se-1-1a-net0 Active  
193.93.250.171  -   europe-se-1-1a-net0 Active  

  The top one is a port that is assigned to a host and looks as
  expected; the second is not and corresponds to the terraform snippet
  (it is being used as an internal floating IP for load balancing).

  Expected is to see the IP 192.168.56.30 that is set at creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2047132/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076430] Re: rally job broken with recent keystone change and fails with ValueError: Cannot convert datetime.date(2024, 8, 7) to primitive

2024-08-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/oslo.serialization/+/926172
Committed: 
https://opendev.org/openstack/oslo.serialization/commit/f6e879db55465e6d5f17f054ed2757cbfcfc43bc
Submitter: "Zuul (22348)"
Branch:master

commit f6e879db55465e6d5f17f054ed2757cbfcfc43bc
Author: yatinkarel 
Date:   Tue Aug 13 11:35:05 2024 +0530

[jsonutils] Add handling of datetime.date format

Recent patch from keystone[1] do not work when
osprofiler is enabled as osprofiler does jsonutils.dumps
and datetime.date is not handled so it fails.
This patch adds the handling for it.

[1] https://review.opendev.org/c/openstack/keystone/+/924892

Needed-By: 
https://review.opendev.org/q/I1b71fb3881dc041db01083fbb4f2592400096a31
Related-Bug: #2074018
Closes-Bug: #2076430
Change-Id: Ifbcf5a1b3d42516bdf73f7ca6b2a7338f3985283
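
The handling itself is tiny; a self-contained sketch of the idea (not
oslo.serialization's actual code path):

```python
import datetime
import json

def to_primitive(value):
    # datetime.datetime is a subclass of datetime.date, so test it first;
    # the fix adds a branch like the plain-date one below to jsonutils.
    if isinstance(value, datetime.datetime):
        return value.isoformat(sep=' ')
    if isinstance(value, datetime.date):
        return value.isoformat()
    return value

print(json.dumps(to_primitive(datetime.date(2024, 8, 7))))  # "2024-08-07"
```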


** Changed in: oslo.serialization
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076430

Title:
  rally job broken with recent keystone change and fails with
  ValueError: Cannot convert datetime.date(2024, 8, 7) to primitive

Status in neutron:
  New
Status in oslo.serialization:
  Fix Released

Bug description:
  Test fails as:-
  2024-08-07 17:06:05.650302 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients [-] Unable to authenticate for user 
c_rally_927546a8_h6aDLbnK in project c_rally_927546a8_ahUUmlp9: 
keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 
500)
  2024-08-07 17:06:05.650941 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients Traceback (most recent call last):
  2024-08-07 17:06:05.651113 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/rally-openstack/rally_openstack/common/osclients.py", line 269, in 
auth_ref
  2024-08-07 17:06:05.651156 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients self.cache["keystone_auth_ref"] = 
plugin.get_access(sess)
  2024-08-07 17:06:05.651193 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/base.py",
 line 131, in get_access
  2024-08-07 17:06:05.651229 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients self.auth_ref = self.get_auth_ref(session)
  2024-08-07 17:06:05.651263 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py",
 line 205, in get_auth_ref
  2024-08-07 17:06:05.651334 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients return self._plugin.get_auth_ref(session, 
**kwargs)
  2024-08-07 17:06:05.651348 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/base.py",
 line 185, in get_auth_ref
  2024-08-07 17:06:05.651356 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients resp = session.post(token_url, json=body, 
headers=headers,
  2024-08-07 17:06:05.651363 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/session.py", 
line 1162, in post
  2024-08-07 17:06:05.651370 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients return self.request(url, 'POST', **kwargs)
  2024-08-07 17:06:05.651377 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/session.py", 
line 985, in request
  2024-08-07 17:06:05.651384 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients raise exceptions.from_response(resp, 
method, url)
  2024-08-07 17:06:05.651391 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients 
keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 
500)

  From the keystone log:-
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: 
CRITICAL keystone [None req-96647e8e-2585-4279-80fa-c4fa97b8c455 None None] 
Unhandled error: ValueError: Cannot convert datetime.date(2024, 8, 7) to 
primitive
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: ERROR 
keystone Traceback (most recent call last):
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: ERROR 
keystone   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/flask/app.py", line 1498, in 
__call__
  Aug 07 17:03:06.408072 np0038147114 devstack

[Yahoo-eng-team] [Bug 1981165] Re: Edit Instance - description box should not accept multi-line input

2024-08-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914838
Committed: 
https://opendev.org/openstack/horizon/commit/55e9db65282f124041dc66cfa0d51b2901db7c29
Submitter: "Zuul (22348)"
Branch:master

commit 55e9db65282f124041dc66cfa0d51b2901db7c29
Author: flokots 
Date:   Tue Apr 2 03:04:35 2024 +0200

Add help text for edit instance form

This commit adds help text to the Edit Instance form to describe the
limitations on the text allowed in the name and description. The help text
states the maximum length for the instance name and description
and advises against using special characters or leading or trailing spaces.
By providing this information, users will be better informed when modifying
instance details, reducing the likelihood of encountering errors.

Closes-Bug: #1981165
Change-Id: If8879c20b2842c3dd769e4cdef80834219c637cd
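
A minimal sketch of the help-text approach in a Django form; the field
names and wording are illustrative, not Horizon's exact Edit Instance form:

```python
from django import forms
from django.utils.translation import gettext_lazy as _

class UpdateInstanceForm(forms.Form):
    name = forms.CharField(
        max_length=255,
        help_text=_("Instance names may be at most 255 characters long "
                    "and should avoid leading or trailing spaces."))
    description = forms.CharField(
        required=False,
        max_length=255,
        # A single-line widget also discourages multi-line input, which
        # the Nova API rejects (see the bug description below).
        widget=forms.TextInput,
        help_text=_("Descriptions are limited to a single line; newlines "
                    "and certain special characters are rejected."))
```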


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1981165

Title:
  Edit Instance - description box should not accept multi-line input

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently, the text box for the Description is a multi-line text area.
  But if you try to enter more than one line it fails saying "Error:
  Unable to modify instance "name of instance".

  It turns out that the Nova / Nova client will reject any description
  that doesn't match a gnarly regex.  And newlines are rejected by that
  regex.  The error message you get from the CLI is like this:

  
  Invalid input for field/attribute description. Value: helloworld. 
'hello\nworld' does not match '^[\\ 
-\\~\xa0-¬®-ͷͺ-Ϳ΄-ΊΌΎ-ΡΣ-ԯԱ-Ֆՙ-֊֍-֏-א-תׯ-״؆-؛؞-۞-܍ܐ-ݍ-ޱ߀-ߺ-࠰-࠾ࡀ-࡞ࡠ-ࡪࢠ-ࢴࢶ-ࢽ--ঃঅ-ঌএ-ঐও-নপ-রলশ-হ-ে-ৈো-ৎৗড়-ঢ়য়-০--ਃਅ-ਊਏ-ਐਓ-ਨਪ-ਰਲ-ਲ਼ਵ-ਸ਼ਸ-ਹਾ---ਖ਼-ੜਫ਼੦-੶-ઃઅ-ઍએ-ઑઓ-નપ-રલ-ળવ-હ--ૉો-ૐૠ-૦-૱ૹ--ଃଅ-ଌଏ-ଐଓ-ନପ-ରଲ-ଳଵ-ହ-େ-ୈୋ--ୗଡ଼-ଢ଼ୟ-୦-୷-ஃஅ-ஊஎ-ஐஒ-கங-சஜஞ-டண-தந-பம-ஹா-ூெ-ைொ-ௐௗ௦-௺-ఌఎ-ఐఒ-నప-హఽ-ౄ---ౘ-ౚౠ-౦-౯౷-ಌಎ-ಐಒ-ನಪ-ಳವ-ಹ-ೄ-ೈೊ-ೕ-ೖೞೠ-೦-೯ೱ-ೲ-ഃഅ-ഌഎ-ഐഒ-െ-ൈൊ-൏ൔ-൦-ൿං-ඃඅ-ඖක-නඳ-රලව-ෆා-ෘ-ෟ෦-෯ෲ-෴ก-฿-๛ກ-ຂຄຆ-ຊຌ-ຣລວ-ຽເ-ໄໆ-໐-໙ໜ-ໟༀ-ཇཉ-ཬ--྾-࿌࿎-࿚က-ჅჇჍა-ቈቊ-ቍቐ-ቖቘቚ-ቝበ-ኈኊ-ኍነ-ኰኲ-ኵኸ-ኾዀዂ-ዅወ-ዖዘ-ጐጒ-ጕጘ-ፚ-፼ᎀ-᎙Ꭰ-Ᏽᏸ-ᏽ᐀-᚜ᚠ-ᛸᜀ-ᜌᜎ-ᜠ-᜶ᝀ-ᝠ-ᝬᝮ-ᝰ-ក-០-៩៰-៹᠀-᠐-᠙ᠠ-ᡸᢀ-ᢪᢰ-ᣵᤀ-ᤞ-ᤫᤰ-᥀᥄-ᥭᥰ-ᥴᦀ-ᦫᦰ-ᧉ᧐-᧚᧞-᨞---᪉᪐-᪙᪠-᪭--ᭋ᭐-᭼-᯳᯼-᰻-᱉ᱍ-ᲈᲐ-ᲺᲽ-᳇-ᳺᴀ--ἕἘ-Ἕἠ-ὅὈ-Ὅὐ-ὗὙὛὝὟ-ώᾀ-ᾴᾶ-ῄῆ-ΐῖ-Ί῝-`ῲ-ῴῶ-῾\u2000-\u200a‐-‧\u202f-\u205f⁰-ⁱ⁴-₎ₐ-ₜ₠-₿-℀-↋←-␦⑀-⑊①-⭳⭶-⮕⮘-Ⱞⰰ-ⱞⱠ-ⳳ⳹-ⴥⴧⴭⴰ-ⵧⵯ-⵰-ⶖⶠ-ⶦⶨ-ⶮⶰ-ⶶⶸ-ⶾⷀ-ⷆⷈ-ⷎⷐ-ⷖⷘ-ⷞ-⹏⺀-⺙⺛-⻳⼀-⿕⿰-⿻\u3000-〿ぁ-ゖ-ヿㄅ-ㄯㄱ-ㆎ㆐-ㆺ㇀-㇣ㇰ-㈞㈠-䶵䷀-鿯ꀀ-ꒌ꒐-꓆ꓐ-ꘫꙀ-꛷꜀-ꞿꟂ-Ᶎꟷ-꠫꠰-꠹ꡀ-꡷ꢀ-꣎-꣙-꥓꥟-ꥼ-꧍ꧏ-꧙꧞-ꧾꨀ-ꩀ-ꩍ꩐-꩙꩜-ꫂꫛ-ꬁ-ꬆꬉ-ꬎꬑ-ꬖꬠ-ꬦꬨ-ꬮꬰ-ꭧꭰ-꯰-꯹가-힣--豈-舘並-龎ff-stﬓ-ﬗיִ-זּטּ-לּמּנּ-סּףּ-פּצּ-﯁ﯓ-﴿ﵐ-ﶏﶒ-ﷇﷰ-﷽-︙-﹒﹔-﹦﹨-﹫ﹰ-ﹴﹶ-ﻼ!-하-ᅦᅧ-ᅬᅭ-ᅲᅳ-ᅵ¢-₩│-○-�]*$'

  ... which you would NOT want to show to an end user!

  Possible fixes for the problem would include:

  - Add help text to the Edit Instance form to describe the limitations on the 
text allowed in the Description.
  - Change the "look and feel" of the box to avoid giving the impression that 
multi-line descriptions are OK.
  - Change the UI to reject descriptions with bad characters ... before the get 
sent to nova / nova client
  - Detect the specific response message and translate it into a meaningful 
user error message; e.g. something like "Error: Description contains one or 
more unacceptable characters".

  At least the first one ... please.

  This syndrome may apply to name, description and other text
  fields elsewhere in the UI.  I didn't go looking for examples.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1981165/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1984736] Fix included in openstack/nova 27.5.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.0 release.

** Changed in: nova/antelope
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1984736

Title:
  "TypeError: catching classes that do not inherit from BaseException is
  not allowed" is raised if volume mount fails in python3

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) wallaby series:
  In Progress
Status in OpenStack Compute (nova) xena series:
  In Progress
Status in OpenStack Compute (nova) yoga series:
  In Progress
Status in OpenStack Compute (nova) zed series:
  In Progress

Bug description:
  Saw this on a downstream CI run where a volume mount failed:

  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager 
[req-67e1cef8-e30a-4a47-8010-9e966fd30fce 8882186b6a324a0e9fb6fd268d337cce 
8b290d651e9b42fd89c95b5e2a9a25fb - default default] [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Failed to attach 
5a6a5f37-0888-44b2-9456-cf087ae8c356 at /dev/vdb: TypeError: catching classes 
that do not inherit from BaseException is not allowed
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Traceback (most recent call last):
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py", line 305, 
in mount
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] nova.privsep.fs.mount(fstype, export, 
mountpoint, options)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 247, in 
_wrap
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] return self.channel.remote_call(name, 
args, kwargs)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 224, in 
remote_call
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] raise exc_type(*result[2])
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while 
running command.
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Command: mount -t nfs 
192.168.1.50:/vol_cinder /var/lib/nova/mnt/724dab229d80c6a1a1e49a71c8356eed
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Exit code: 32
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Stdout: ''
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Stderr: 'Failed to connect to bus: No 
data available\nmount.nfs: Operation not permitted\n'
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] During handling of the above exception, 
another exception occurred:
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Traceback (most recent call last):
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7023, in 
_attach_volume
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] bdm.attach(context, instance, 
self.volume_api, self.driver,
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 46, in 
wrapped
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] ret_val = method(obj, context, *args, 
**kwargs)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 672, in 
attach
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] self._do_attach(context, instance, 
volume, volume_api,
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manag

[Yahoo-eng-team] [Bug 2073862] Fix included in openstack/nova 27.5.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.0 release.

** Changed in: nova/antelope
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2073862

Title:
  test_vmdk_bad_descriptor_mem_limit and
  test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img
  binary is missing

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) 2024.1 series:
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) bobcat series:
  Fix Released

Bug description:
  When the qemu-img binary is not present on the system, these tests fail
  as we can see in these logs:

  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit
  --
  pythonlogging:'': {{{
  2024-07-23 11:44:54,011 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,012 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,015 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  }}}

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 408, in test_vmdk_bad_descriptor_mem_limit
  self._test_vmdk_bad_descriptor_mem_limit()
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 382, in _test_vmdk_bad_descriptor_mem_limit
  img = self._create_allocated_vmdk(image_size // units.Mi,
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 183, in _create_allocated_vmdk
  subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 421, in check_output
  return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 526, in run
  raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command 'qemu-img convert -f raw -O vmdk -o 
subformat=monolithicSparse -S 0 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-wz0i4kj1.raw 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-qpo78jee.vmdk' 
returned non-zero exit status 127.


  
  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit_stream_optimized
  --
  pythonlogging:'': {{{
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to

[Yahoo-eng-team] [Bug 2073862] Fix included in openstack/nova 28.3.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 28.3.0 release.

** Changed in: nova/bobcat
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2073862

Title:
  test_vmdk_bad_descriptor_mem_limit and
  test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img
  binary is missing

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) 2024.1 series:
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) bobcat series:
  Fix Released

Bug description:
  When the qemu-img binary is not present on the system, these tests fail
  as we can see in these logs:

  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit
  --
  pythonlogging:'': {{{
  2024-07-23 11:44:54,011 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,012 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,015 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  }}}

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 408, in test_vmdk_bad_descriptor_mem_limit
  self._test_vmdk_bad_descriptor_mem_limit()
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 382, in _test_vmdk_bad_descriptor_mem_limit
  img = self._create_allocated_vmdk(image_size // units.Mi,
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 183, in _create_allocated_vmdk
  subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 421, in check_output
  return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 526, in run
  raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command 'qemu-img convert -f raw -O vmdk -o 
subformat=monolithicSparse -S 0 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-wz0i4kj1.raw 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-qpo78jee.vmdk' 
returned non-zero exit status 127.


  
  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit_stream_optimized
  --
  pythonlogging:'': {{{
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to Y

[Yahoo-eng-team] [Bug 2035375] Re: Detaching multiple NVMe-oF volumes may leave the subsystem in connecting state

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/895192
Committed: 
https://opendev.org/openstack/nova/commit/18163761d02fc02d5484f91bf52cd4f25536f95e
Submitter: "Zuul (22348)"
Branch:master

commit 18163761d02fc02d5484f91bf52cd4f25536f95e
Author: Gorka Eguileor 
Date:   Tue Sep 12 20:53:15 2023 +0200

Fix guard for NVMeOF volumes

When detaching multiple NVMe-oF volumes from the same host we may end
up with an NVMe subsystem in "connecting" state, and we'll see a bunch
of nvme errors in dmesg.

This happens on storage systems that share the same subsystem for
multiple volumes because Nova has not been updated to support the
tri-state "shared_targets" option that groups the detach and unmap of
volumes to prevent race conditions.

This is related to the issue mentioned in an os-brick commit message [1]

For the guard_connection method of os-brick to work as expected for
NVMe-oF volumes we need to use microversion 3.69 when retrieving the
cinder volume.

In microversion 3.69 we started reporting 3 states for shared_targets:
True, False, and None.

- True is to guard iSCSI volumes and will only be used if the iSCSI
  initiator running on the host doesn't have the manual scans feature.

- False is that no target/subsystem is being shared so no guard is
  necessary.

- None is to force guarding, and it's currenly used for NVMe-oF volumes
  when sharing the subsystem.

[1]: https://review.opendev.org/c/openstack/os-brick/+/836062/12//COMMIT_MSG

Closes-Bug: #2035375
Change-Id: I4def1c0f20118d0b8eb7d3bbb09af2948ffd70e1
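
The tri-state semantics are easiest to see as a small decision function; a
sketch mirroring the commit message (the function is illustrative, not
os-brick's guard_connection API):

```python
def needs_guard(shared_targets, host_has_manual_scans=False):
    if shared_targets is None:
        # Force guarding, e.g. NVMe-oF volumes sharing one subsystem.
        return True
    if shared_targets is False:
        # Nothing shared between volumes, so no race to serialize.
        return False
    # shared_targets is True: iSCSI, guard only when the host's iSCSI
    # initiator lacks the manual-scans feature.
    return not host_has_manual_scans
```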


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2035375

Title:
  Detaching multiple NVMe-oF volumes may leave the subsystem in
  connecting state

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When detaching multiple NVMe-oF volumes from the same host we may end
  up with an NVMe subsystem in "connecting" state, and we'll see a bunch
  of nvme errors in dmesg.

  This happens on storage systems that share the same subsystem for
  multiple volumes because Nova has not been updated to support the tri-
  state "shared_targets" option that groups the detach and unmap of
  volumes to prevent race conditions.

  This is related to the issue mentioned in an os-brick commit message:
  https://review.opendev.org/c/openstack/os-
  brick/+/836062/12//COMMIT_MSG

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2035375/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052761] Re: libvirt: swtpm_ioctl is required for vTPM support

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/908546
Committed: 
https://opendev.org/openstack/nova/commit/9a11bb25238288139c4473d9d91bf365ed88f435
Submitter: "Zuul (22348)"
Branch:master

commit 9a11bb25238288139c4473d9d91bf365ed88f435
Author: Takashi Kajinami 
Date:   Fri Feb 9 12:16:45 2024 +0900

libvirt: Ensure swtpm_ioctl is available for vTPM support

Libvirt uses swtpm_ioctl to terminate swtpm processes. If the binary
does not exist, swtpm processes are kept running after the associated
VM terminates, because QEMU does not send shutdown to swtpm.

Closes-Bug: #2052761
Change-Id: I682f71512fc33a49b8dfe93894f144e48f33abe6
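
A sketch of the startup check this implies, using the binary set named in
the bug report (the real check lives in nova's libvirt driver):

```python
import shutil

# swtpm_ioctl joins the previously checked swtpm and swtpm_setup.
REQUIRED_SWTPM_BINARIES = ('swtpm', 'swtpm_setup', 'swtpm_ioctl')

def vtpm_support_available():
    # Refuse to advertise vTPM support if any required binary is missing
    # from PATH; otherwise swtpm processes outlive their VMs.
    missing = [b for b in REQUIRED_SWTPM_BINARIES
               if shutil.which(b) is None]
    return not missing, missing
```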


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052761

Title:
  libvirt: swtpm_ioctl is required for vTPM support

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Libvirt uses swtpm_ioctl to shut down the swtpm process at VM termination, 
because QEMU does not send a shutdown command.
  However, the binary is not included in the required binaries (swtpm and 
swtpm_setup, at the time of writing) checked by the libvirt driver. So users can 
enable vTPM support without the binary, which leaves swtpm processes running.

  Steps to reproduce
  ==
  * Deploy nova-compute with vTPM support
  * Move swtpm_ioctl from PATH
  * Restart nova-compute

  Expected result
  ===
  nova-compute fails to start because swtpm_ioctl is missing

  Actual result
  =
  nova-compute starts without error and reports TPM traits.

  Environment
  ===
  This issue was initially found in master, but would be present in stable 
branches.

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2052761/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076163] Re: Persistent mdev support does not work with < libvirt 8.10 due to missing VIR_NODE_DEVICE_CREATE_XML_VALIDATE

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/925826
Committed: 
https://opendev.org/openstack/nova/commit/f63029b461b81ad93e0681973ed9b5bfca405d5a
Submitter: "Zuul (22348)"
Branch:master

commit f63029b461b81ad93e0681973ed9b5bfca405d5a
Author: melanie witt 
Date:   Tue Aug 6 20:29:22 2024 +

libvirt: Remove node device XML validate flags

Node device XML validation flags [1]:

  VIR_NODE_DEVICE_(CREATE|DEFINE)_XML_VALIDATE

were added in libvirt 8.10.0 but we support older libvirt versions
which will raise an AttributeError when flag access is attempted.

We are not currently using the flags (nothing calling with
validate=True) so this removes the flags from the code entirely. If the
flags are needed in the future, they can be added again at that time.

Closes-Bug: #2076163

[1] 
https://github.com/libvirt/libvirt/commit/d8791c3c7caa6e3cadaf98a5a2c94b232ac30fed

Change-Id: I015d9b7cad413986058da4d29ca7711c844bfa84


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2076163

Title:
  Persistent mdev support does not work with < libvirt 8.10 due to
  missing VIR_NODE_DEVICE_CREATE_XML_VALIDATE

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The persistent mdev feature passes that flag, but the flag has only been
  supported since libvirt 8.10.0, so with older libvirt such as 7.3.0 (the
  minimum for persistent mdev) or 8.0.0 (Ubuntu 22.04) the persistent mdev
  feature cannot be enabled, as nova-compute will fail due to the missing
  constant.

  XML validation is just a nice-to-have feature, so we can make that flag
  optional and only pass it if libvirt is >= 8.10.0, as sketched below.

  
https://github.com/openstack/nova/commit/74befb68a79f8bff823fe067e0054504391ee179#diff-67d0163175a798156def4ec53c18fa2ce6eba79b6400fa833a9219d3669e9a11R1267
  
https://github.com/libvirt/libvirt/commit/d8791c3c7caa6e3cadaf98a5a2c94b232ac30fed
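
A sketch of that "make it optional" approach (the merged fix removed the
flag entirely instead); passing 0 to libvirt means no flags:

```python
def node_device_create_flags(libvirt_module):
    # Use the validation flag only when the loaded python-libvirt
    # bindings expose it, i.e. libvirt >= 8.10.0.
    return getattr(libvirt_module,
                   'VIR_NODE_DEVICE_CREATE_XML_VALIDATE', 0)
```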

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2076163/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2059236] Re: Add a RBAC action field in the query hooks

2024-08-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/915370
Committed: 
https://opendev.org/openstack/neutron/commit/f22f7ae012e75b34051945fcac29f955861896ab
Submitter: "Zuul (22348)"
Branch:master

commit f22f7ae012e75b34051945fcac29f955861896ab
Author: Rodolfo Alonso Hernandez 
Date:   Mon Apr 8 22:19:50 2024 +

Use the RBAC actions field for "network" and "subnet"

Since [1], it is possible to define a set of RBAC actions to filter the
model query. For "network" and "subnet" models, it is necessary to add the
RBAC action "access_as_external" to the query. Instead of adding an
additional filter (as is done now), this patch replaces the default RBAC
actions used in the model query, adding this extra one.

The neutron-lib library is bumped to version 3.14.0.

[1]https://review.opendev.org/c/openstack/neutron-lib/+/914473

Closes-Bug: #2059236
Change-Id: Ie3e77e2f812bd5cddf1971bc456854866843d4f3
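
Purely illustrative: how a per-resource hook registry might carry a set of
RBAC actions so the base query builder consumes it directly; none of these
names are neutron-lib's actual API:

```python
_model_hooks = {}

def register_hook(model, name, query_hook=None, filter_hook=None,
                  result_filters=None, rbac_actions=None):
    _model_hooks.setdefault(model, {})[name] = {
        'query': query_hook,
        'filter': filter_hook,
        'result_filters': result_filters,
        # The new piece: resources such as networks and subnets register
        # an extended action set (e.g. including "access_as_external")
        # here instead of re-filtering the finished query a second time.
        'rbac_actions': rbac_actions,
    }
```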


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2059236

Title:
  Add a RBAC action field in the query hooks

Status in neutron:
  Fix Released

Bug description:
  Any Neutron resource (that is not only a single database table but a
  view, a combination of several tables) can register a set of hooks
  that will be used during DB query creation [1]. These hooks
  include a query hook (to modify the query depending on the database
  relationships), a filter hook (to add extra filtering steps to the
  final query) and a results filter hook (that can be used to join
  other tables with other dependencies).

  This bug proposes adding an extra field to these hooks to be able to
  filter the RBAC actions. Some resources, like networks [2] and subnets
  [3], need to add an extra RBAC action "ACCESS_EXTERNAL" to the query
  filter. This is currently done by adding again the same RBAC filter included
  in ``query_with_hooks`` [4], but with the "ACCESS_EXTERNAL" action.

  If, instead of this, ``query_with_hooks`` could include a
  configurable set of RBAC actions, the resulting query would be shorter,
  less complex and faster.

  
[1]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L86-L90
  
[2]https://github.com/openstack/neutron/blob/bcf1f707bc9169e8f701613214516e97f039d730/neutron/db/external_net_db.py#L75-L80
  
[3]https://review.opendev.org/c/openstack/neutron/+/907313/15/neutron/db/external_net_db.py
  
[4]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L127-L132
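
  For context, a minimal sketch of how such hooks are registered today,
  assuming the hook signatures from neutron-lib's ``model_query`` module;
  ``MyModel`` is a placeholder and the proposed RBAC-actions field would
  become one more argument of this registration:

  ~~~
  from neutron_lib.db import model_query

  class MyModel(object):
      """Placeholder for the resource's SQLAlchemy model."""

  def query_hook(context, original_model, query):
      # modify the query depending on the database relationships
      return query

  def filter_hook(context, original_model, conditions):
      # add extra filtering steps to the final query
      return conditions

  def result_filters(query, filters):
      # join other tables with other dependencies
      return query

  model_query.register_hook(
      MyModel, 'my_resource_hook',
      query_hook=query_hook,
      filter_hook=filter_hook,
      result_filters=result_filters)
  ~~~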

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2059236/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074018] Re: disable_user_account_days_inactive option locks out all users

2024-08-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/924892
Committed: 
https://opendev.org/openstack/keystone/commit/e9513f8e4f25e1f20bc6fcab71d917712abf
Submitter: "Zuul (22348)"
Branch:master

commit e9513f8e4f25e1f20bc6fcab71d917712abf
Author: Douglas Mendizábal 
Date:   Fri Jul 19 17:10:11 2024 -0400

Add keystone-manage reset_last_active command

This patch adds the `reset_last_active` subcommand to the
`keystone-manage` command line tool.

This subcommand will update every user in the database that has a null
value in the `last_active_at` property to the current server time. This
is necessary to prevent user lockout in deployments that have been
running for a long time without `disable_user_account_days_inactive`
and that later turn it on.

This patch also includes a change to the logic that sets
`last_active_at` to fix the root issue of the lockout.

Closes-Bug: 2074018
Change-Id: I1b71fb3881dc041db01083fbb4f2592400096a31


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2074018

Title:
  disable_user_account_days_inactive option locks out all users

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Enabling the option `[security_compliance]
  disable_user_account_days_inactive = X` disables all user accounts in
  deployments that have been running for longer than X.

  The root cause seems to be the way that the values of the
  `last_active_at` column in the `user` table are set.  When the option
  is disabled, the `last_active_at` column is never updated, so it is
  null for all users.

  If you later decide to turn on this option for compliance reasons, the
  current logic in Keystone will use the value of `created_at` as the
  last time the user was active. In any deployment where the users were
  created more than `disable_user_account_days_inactive` days ago, this
  results in all users being disabled, including the admin user,
  regardless of when each user actually last logged in.
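
  A rough sketch of what the new `keystone-manage reset_last_active`
  subcommand does conceptually; the table handle and session wiring are
  assumed for illustration, not Keystone's actual code:

  ~~~
  import datetime

  import sqlalchemy as sa

  def reset_last_active(session, user_table):
      # Users with no recorded activity are stamped "active now", so the
      # inactivity window starts today instead of at `created_at`.
      now = datetime.datetime.utcnow()
      session.execute(
          sa.update(user_table)
          .where(user_table.c.last_active_at.is_(None))
          .values(last_active_at=now))
      session.commit()
  ~~~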

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2074018/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2024-08-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/blazar/+/925743
Committed: 
https://opendev.org/openstack/blazar/commit/3999bf1fb7b51ae6eb8e313cfc8526a57336677a
Submitter: "Zuul (22348)"
Branch:master

commit 3999bf1fb7b51ae6eb8e313cfc8526a57336677a
Author: Pierre Riteau 
Date:   Tue Aug 6 10:42:19 2024 +0200

Replace deprecated assertDictContainsSubset

This deprecated method was removed in Python 3.12 [1].

[1] https://docs.python.org/3/whatsnew/3.12.html#id3

Closes-Bug: #1938103
Change-Id: Ic5fcf58bfb6bea0cff669feadbe8fee5b01b1ce0


** Changed in: blazar
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Blazar:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Mistral:
  Fix Released
Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest
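
  One portable replacement pattern (a sketch; projects picked various
  equivalents): assert that merging the expected subset into the actual
  dict is a no-op.

  ~~~
  import unittest

  class SubsetExample(unittest.TestCase):
      def test_subset(self):
          expected = {'name': 'net1'}
          actual = {'name': 'net1', 'id': 'abc123'}
          # Every expected key/value pair must already be present in
          # `actual`, so merging `expected` over it changes nothing.
          self.assertEqual(actual, {**actual, **expected})

  if __name__ == '__main__':
      unittest.main()
  ~~~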

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2066115] Re: Prevent KeyError getting value of optional data

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/919430
Committed: 
https://opendev.org/openstack/horizon/commit/fcce68a914f49938137785a4635d781b5a1741df
Submitter: "Zuul (22348)"
Branch:master

commit fcce68a914f49938137785a4635d781b5a1741df
Author: MinhNLH2 
Date:   Sun May 19 20:58:47 2024 +0700

Prevent KeyError when getting value of optional key

Closes-Bug: #2066115
Change-Id: Ica10eb749b48410583cb34bfa2fda0433a26c664
Signed-off-by: MinhNLH2 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2066115

Title:
  Prevent KeyError getting value of optional data

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Problem: Some optional data is retrieved in this way:
  backup_name = data['backup_name'] or None
  volume_id = data['volume_id'] or None

  etc...

  When the key does not exist, a KeyError is raised.
  Moreover, the `or None` here is meaningless.

  Solution:
  Change to
  backup_name = data.get('backup_name')
  volume_id = data.get('volume_id')
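
  A quick plain-Python demonstration of the difference (nothing
  Horizon-specific here):

  ~~~
  data = {'volume_id': ''}

  print(data.get('backup_name'))  # -> None; the missing key is handled

  try:
      backup_name = data['backup_name'] or None
  except KeyError as exc:
      # The subscript raises before `or None` is ever evaluated.
      print('KeyError:', exc)
  ~~~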

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2066115/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072483] Re: Revert image status to queued if image conversion fails

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/923624
Committed: 
https://opendev.org/openstack/glance/commit/ea131dd1442861cb5884f99b6bb9e47e397605ce
Submitter: "Zuul (22348)"
Branch:master

commit ea131dd1442861cb5884f99b6bb9e47e397605ce
Author: Abhishek Kekane 
Date:   Mon Jul 8 09:49:55 2024 +

Revert image state to queued if conversion fails

Made changes to revert the image state to `queued` and delete the image
data from the staging area if image conversion fails. If the image is
being imported to multiple stores at a time, the image properties
`os_glance_importing_to_stores` and `os_glance_failed_imports` are
reset to reflect the actual result of the operation.

Closes-Bug: 2072483
Change-Id: I373dde3a07332184c43d9605bad7a59c70241a71


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2072483

Title:
  Revert image status to queued if image conversion fails

Status in Glance:
  Fix Released

Bug description:
  When glance has the import plugin `image_conversion` enabled and this
  plugin fails to convert the image to the desired format, the image
  remains in the `importing` state forever and the image data remains in
  the staging area unless you delete that image.

  Ideally the image data should be deleted from the staging area and the
  image state rolled back to `queued` so that the user can rectify the
  error of the previous attempt and import the image again.

  Environment settings:
  Ensure you have the glance-direct and web-download methods enabled in your
  glance-api.conf
  Ensure you have the image_conversion plugin enabled in your
  glance-image-import.conf

  How to reproduce:
  1. Create bad image file with below command
 qemu-img create -f qcow2 -o data_file=abcdefghigh,data_file_raw=on 
disk.qcow2 1G
  2. Use above file to create image using import workflow
 glance image-create-via-import --disk-format qcow2 --container-format bare 
--import-method glance-direct --file disk.qcow2 --name 
test-glance-direct-conversion_1

  Expected result:
  The operation fails, the image ends up in the `queued` state and the
  image data is deleted from the staging area.

  Actual result:
  The operation fails, the image remains in the `importing` state and
  the image data remains in the staging area.
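
  A hedged sketch of the shape of the fix; the helper names are assumed
  for illustration, the real change lives in Glance's import flow:

  ~~~
  def on_conversion_failure(image_repo, staging_store, image):
      # Drop the partially staged data so it does not leak disk space.
      staging_store.delete(image.image_id)  # assumed staging helper
      # Return the image to a state from which the import can be retried.
      image.status = 'queued'
      image_repo.save(image)
  ~~~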

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2072483/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073987] Re: Switch from distributed to centralized Floating IPs breaks connectivity to the existing FIPs

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925007
Committed: 
https://opendev.org/openstack/neutron/commit/4b1bfb93e380b8dce78935395b2cda57076e5476
Submitter: "Zuul (22348)"
Branch:master

commit 4b1bfb93e380b8dce78935395b2cda57076e5476
Author: Slawek Kaplonski 
Date:   Fri Jul 26 12:02:27 2024 +0200

Fix setting correct 'reside-on-redirect-chassis' in the maintenance task

Setting 'reside-on-redirect-chassis' was skipped for the LRPs of
provider tenant networks in patch [1], and the later patch [2] removed
this limitation from the ovn_client but not from the maintenance task.
Because of that, this option wasn't updated after e.g. a change of the
'enable_distributed_floating_ip' config option, and connectivity to
existing Floating IPs associated with ports in vlan tenant networks
was broken.

This patch removes that limitation, and the option is now updated for
all of the Logical_Router_Ports of vlan networks, not only for external
gateways.

[1] https://review.opendev.org/c/openstack/neutron/+/871252
[2] https://review.opendev.org/c/openstack/neutron/+/878450

Closes-bug: #2073987
Change-Id: I56e791847c8f4f3a07f543689bf22fde8160c9b7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073987

Title:
  Switch from distributed to centralized Floating IPs breaks
  connectivity to the existing FIPs

Status in neutron:
  Fix Released

Bug description:
  This affects only ML2/OVN deployments. I was checking it with
  distributed floating IPs initially enabled
  (enable_distributed_floating_ip=True in the neutron ml2 plugin's
  config file).

  Steps to reproduce the issue:

  1. Create vlan tenant network -- THIS IS VERY IMPORTANT, USING TUNNEL 
NETWORKS WILL NOT CAUSE THAT PROBLEM AT ALL
  2. Create external network - can be vlan or flat
  3. Create router and attach vlan tenant network to that router
  4. Set external network as router's gateway
  5. Create vm connected to that vlan tenant network and add Floating IP to it,
  6. Check connectivity to the VM using Floating IP - all works fine until 
now...

  7. Change the 'enable_distributed_floating_ip' config option in Neutron to
False
  8. Restart neutron-server
  9. The FIP is not working anymore - the SNAT_AND_DNAT entry was changed to
be centralized (external_mac is no longer set in ovn-nb) but the
Logical_Router_Port still has the option "reside-on-redirect-chassis" set to
"false". After updating it manually to "true" (see the sketch below),
connectivity over the centralized gateway chassis works again.
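
  A sketch of that manual update using the ovsdbapp API that ML2/OVN is
  built on; `nb_api` is assumed to be an existing Northbound API
  connection and the LRP name is a placeholder:

  ~~~
  nb_api.db_set(
      'Logical_Router_Port', 'lrp-<router-port-uuid>',
      ('options', {'reside-on-redirect-chassis': 'true'})).execute(
          check_error=True)
  ~~~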

  The reside-on-redirect-chassis option was added with patch
  https://review.opendev.org/c/openstack/neutron/+/871252. Additionally,
  patch https://review.opendev.org/c/openstack/neutron/+/878450 added a
  maintenance task to set the correct value of the redirect-type in the
  Logical_Router's gateway port. But it seems that we are missing an
  update of the 'reside-on-redirect-chassis' option for the existing
  Logical_Router_Ports when this config option is changed. Maybe we
  should have a maintenance task for that as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073987/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073782] Re: "Tagging" extension does not initialize the policy enforcer

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924656
Committed: 
https://opendev.org/openstack/neutron/commit/776178e90763d004ccb595b131cdd4dd617cd34f
Submitter: "Zuul (22348)"
Branch:master

commit 776178e90763d004ccb595b131cdd4dd617cd34f
Author: Rodolfo Alonso Hernandez 
Date:   Sat Jul 20 00:46:04 2024 +

Initialize the policy enforcer for the "tagging" service plugin

The "tagging" service plugin API extension does use the policy enforcer
since [1]. If a tag API call is done just after the Neutron server has
been initialized and the policy enforcer, that is a global variable per
API worker, has not been initialized, the API call will fail.

This patch initializes the policy enforcer as is done in the
``PolicyHook``, that is called by many other API resources that inherit
from the ``APIExtensionDescriptor`` class.

[1]https://review.opendev.org/q/I9f3e032739824f268db74c5a1b4f04d353742dbd

Closes-Bug: #2073782
Change-Id: Ia35c51fb81cfc0a55c5a2436fc5c55f2b4c9bd01


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073782

Title:
  "Tagging" extension does not initialize the policy enforcer

Status in neutron:
  Fix Released

Bug description:
  The "tagging" service plugin extension uses its own controller. This
  controller doesn't call the WSGI hooks like the policy hook. Instead
  of this, the controller implements the policy enforcer directly on the
  WSGI methods (create, update, delete, etc.).

  It is needed to initialize the policy enforcer before any enforcement
  is done. If a tag API call is done just after the Neutron server has
  been restarted, the server will fail with the following error: [1].

  The policy enforcement was implemented in [2]. The fix for this bug
  should be backported up to 2023.2.

  [1]https://paste.opendev.org/show/bIeSoD2Y0vrTpJb4uYQ5/
  [2]https://review.opendev.org/q/I9f3e032739824f268db74c5a1b4f04d353742dbd
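
  A minimal sketch of the fix's idea, assuming Neutron's ``policy``
  module helpers; the action name is illustrative:

  ~~~
  from neutron import policy

  def _enforce_tag_policy(context, action, target):
      # Make sure the global per-worker enforcer exists before using it;
      # policy.init() is a no-op when it is already initialized.
      policy.init()
      policy.enforce(context, action, target)
  ~~~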

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073782/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074209] Re: OVN maintenance tasks may be delayed 10 minutes in the podified deployment

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925194
Committed: 
https://opendev.org/openstack/neutron/commit/04c217bcd0eda07d52a60121b6f86236ba6e26ee
Submitter: "Zuul (22348)"
Branch:master

commit 04c217bcd0eda07d52a60121b6f86236ba6e26ee
Author: Slawek Kaplonski 
Date:   Tue Jul 30 14:17:44 2024 +0200

Lower spacing time of the OVN maintenance tasks which should be run once

Some of the OVN maintenance tasks are expected to run just once; they
then raise periodic.NeverAgain() so they are not run anymore. Those
tasks also require having acquired the OVN DB lock, so that only one of
the maintenance workers really runs them.
All those tasks had 600 seconds set as their spacing time, so they were
run every 600 seconds. This usually works fine, but it may cause a
small issue in environments where Neutron runs in a pod as a
k8s/openshift application. In such a case, when e.g. the Neutron
configuration is updated, it may happen that the new pod with Neutron
is spawned first, and only once it is already running does k8s stop the
old pod. Because of that, the maintenance worker running in the new
neutron-server pod will not acquire the lock on the OVN DB (the old pod
still holds it) and will not run all those maintenance tasks
immediately. After the old pod is terminated, one of the new pods will
at some point acquire the lock and then run all those maintenance
tasks, but this causes a 600-second delay in running them.

To avoid such a long wait for those maintenance tasks, this patch
lowers their spacing time from 600 to just 5 seconds.
Additionally, maintenance tasks which are supposed to run only once,
and only by the maintenance worker which has acquired the OVN DB lock,
will now be stopped (periodic.NeverAgain will be raised) after 100 run
attempts.
This avoids running them every 5 seconds forever on the workers which
never acquire the lock on the OVN DB.

Closes-bug: #2074209
Change-Id: Iabb4bb427588c1a5da27a5d313f75b5bd23805b2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2074209

Title:
  OVN maintenance tasks may be delayed 10 minutes in the podified
  deployment

Status in neutron:
  Fix Released

Bug description:
  When running the Neutron server on a K8s (or OpenShift) cluster, it
  may happen that OVN maintenance periodic tasks which are supposed to
  run immediately are delayed by about 10 minutes, e.g. when Neutron's
  configuration is changed and K8s restarts the neutron pods. What
  happens in such a case is:

  1. pods with the neutron-api application are running,
  2. the configuration is updated and k8s first starts new pods; only after
the new ones are ready does it terminate the old pods,
  3. during that time, the neutron-server process running in the new pod
starts the maintenance task and immediately tries to run the tasks defined
with the "periodics.periodic(spacing=600, run_immediately=True)" decorator,
  4. this new pod doesn't yet have the lock on the OVN northbound DB, so
each such maintenance task is stopped immediately,
  5. a few seconds later the OLD neutron-server pod is terminated by k8s and
the new pod (the one started in point 3) gets the lock on the OVN database,
  6. now all maintenance tasks are run again by the maintenance worker after
the time defined in the "spacing" parameter, which is 600 seconds. That is a
pretty long time to wait for e.g. some options in the OVN database to be
adjusted to the new Neutron configuration.

  We could reduce this spacing time to e.g. 5 seconds. This would
  decrease the additional waiting time significantly in the case
  described in this bug. It would make all those methods be called much
  more often in neutron-server processes which don't have the lock
  granted, but we could introduce an additional parameter for that and
  e.g. raise NeverAgain() after 100 attempts to run such a periodic
  task. A sketch of that pattern follows below.
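
  A hedged sketch of that pattern using the futurist periodics API the
  maintenance worker is built on; the two helpers are placeholders:

  ~~~
  from futurist import periodics

  MAX_ATTEMPTS = 100

  class MaintenanceTaskExample(object):
      def __init__(self):
          self._attempts = 0

      @periodics.periodic(spacing=5, run_immediately=True)
      def adjust_ovn_options_once(self):
          self._attempts += 1
          if self.has_ovn_db_lock():      # placeholder helper
              self.do_one_time_fix()      # placeholder helper
              raise periodics.NeverAgain()
          if self._attempts >= MAX_ATTEMPTS:
              # Give up on workers that never get the OVN DB lock.
              raise periodics.NeverAgain()
  ~~~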

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2074209/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073745] Re: [eventlet-deprecation] Reduce the ``IpConntrackManager`` process pool to a single thread

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924582
Committed: 
https://opendev.org/openstack/neutron/commit/23b9077df53d2d61a3749ea8631ce4c7fe277b35
Submitter: "Zuul (22348)"
Branch:master

commit 23b9077df53d2d61a3749ea8631ce4c7fe277b35
Author: Rodolfo Alonso Hernandez 
Date:   Fri Jul 19 18:25:39 2024 +

Reduce to 1 thread the processing of ``IpConntrackManager`` events

The multithreaded processing does not add any speed improvement to the
event processing. The aim of this patch is to reduce to 1 the number of
threads processing the ``IpConntrackManager`` events.

Closes-Bug: #2073745
Change-Id: I190d842349a86868578d6b6ee2ff53cfcd6fb1cc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073745

Title:
  [eventlet-deprecation] Reduce the ``IpConntrackManager`` process pool
  to a single thread

Status in neutron:
  Fix Released

Bug description:
  This bug has the same justification as [1]. The multithreaded
  processing does not add any speed improvement to the event processing.
  The aim of this bug is to reduce to 1 the number of threads processing
  the ``IpConntrackManager`` events.

  [1]https://bugs.launchpad.net/neutron/+bug/2070376

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073745/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073567] Re: [master][ml2-ovn] Multiple Unexpected exception in notify_loop: neutron_lib.exceptions.PortNotFound

2024-07-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925039
Committed: 
https://opendev.org/openstack/neutron/commit/c1b88fc5f52283380261f4fdc1562ff56ea06a29
Submitter: "Zuul (22348)"
Branch:master

commit c1b88fc5f52283380261f4fdc1562ff56ea06a29
Author: Miro Tomaska 
Date:   Fri Jul 26 10:50:40 2024 -0400

Only query for port, do not get directly

It was observed in the tempest tests that the port could already be
deleted by some other concurrent event by the time `run` is called.
This caused a flood of exception logs. Thus, with this patch we only
query for the port and call update_router_port only when the port is
found.

Closes-Bug: #2073567
Change-Id: I54d027f7cb5014d296a99029cfa0a13a7800da0a
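
A hedged sketch of the pattern (identifiers assumed, not the literal
diff): a filter query returns an empty list for a vanished port, while
a direct get raises PortNotFound and floods the notify loop logs.

~~~
from oslo_log import log as logging

LOG = logging.getLogger(__name__)

def maybe_update_router_port(plugin, context, port_id, update_cb):
    # get_ports() with a filter returns [] when the port is already
    # gone; get_port() would raise PortNotFound instead.
    ports = plugin.get_ports(context, filters={'id': [port_id]})
    if ports:
        update_cb(context, ports[0])
    else:
        LOG.debug("Port %s already deleted; skipping update", port_id)
~~~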


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073567

Title:
  [master][ml2-ovn] Multiple Unexpected exception in notify_loop:
  neutron_lib.exceptions.PortNotFound

Status in neutron:
  Fix Released

Bug description:
  Multiple traces like the one below can be seen in the OVN job:
  Jul 18 19:35:46.623330 np0038010972 neutron-server[84540]: WARNING 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client [None 
req-fbcd2914-d4f5-4f87-a685-96f16cc4f5f2 None None] No port found with ID 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19: RuntimeError: No port found with ID 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event [None req-fbcd2914-d4f5-4f87-a685-96f16cc4f5f2 None None] 
Unexpected exception in notify_loop: neutron_lib.exceptions.PortNotFound: Port 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19 could not be found.
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event Traceback (most recent call last):
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File "/opt/stack/neutron/neutron/db/db_base_plugin_common.py", 
line 295, in _get_port
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event port = model_query.get_by_id(context, models_v2.Port, id,
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/neutron_lib/db/model_query.py",
 line 178, in get_by_id
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return query.filter(model.id == object_id).one()
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/orm/query.py", 
line 2778, in one
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return self._iter().one()  # type: ignore
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 1810, in one
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return self._only_one_row(
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 752, in _only_one_row
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event raise exc.NoResultFound(
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event sqlalchemy.exc.NoResultFound: No row was found when one was 
required
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event 
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event During handling of the above exception, another exception 
occurred:
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event 
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event Traceback (most recent call last):
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/ovsdbapp/event.py", line 
177, in notify_loop
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event match.run(event, row, updates)
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 581, in run
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event port = self.driver._plugin.get_port(self.admin_context, 
row.name)
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/neutron_lib/db/api.py", line 
223, in wrapped
  Jul 18 19:35:46.630294 np0038010972 neutron-server[8
