[Yahoo-eng-team] [Bug 2028895] Re: Interoperable Image Import in glance documented format for inject not working as expected

2024-02-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/890423
Committed: https://opendev.org/openstack/glance/commit/dd9b3156cd1c5341b01c5befe2a2d13bab6e8d01
Submitter: "Zuul (22348)"
Branch: master

commit dd9b3156cd1c5341b01c5befe2a2d13bab6e8d01
Author: Cyril Roelandt 
Date:   Wed Dec 13 04:13:40 2023 +0100

inject_image_metadata plugin: Fix documentation

The properties and values given to the "inject" option must not be
quoted, otherwise the quotes become part of the values themselves.

Change-Id: Ibcb8b8488253f459f40e6d34f4221832b7ff3839
Closes-Bug: #2028895


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2028895

Title:
  Interoperable Image Import in glance documented format for inject not
  working as expected

Status in Glance:
  Fix Released

Bug description:
  According to the documentation, the correct way to specify custom import image metadata properties is:
  "inject is a comma-separated list of properties and values that will be injected into the image record for the imported image. Each property and value should be quoted and separated by a colon (‘:’) as shown in the example above."

  With the example being:
  inject = "property1":"value1","property2":"value2",...

  When specifying properties in this way, the resulting properties in the imported image look like this:
  properties   | "property2"='"value2"', "property3"='"value3', os_glance_failed_import='', os_glance_importing_to_stores='', os_hash_algo='sha512', os_hash_value='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e', os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/proptest1', owner_specified.openstack.sha256='', property1"='"value1"', stores='local'

  If you look closely at each of the properties, the quotes are inconsistent:
  "property2"='"value2"'
  "property3"='"value3
  property1"='"value1"'

  Conversely, if you use the following (no quotes):
  inject = property1:value1,property2:value2,property3:value3

  properties   | os_glance_failed_import='', os_glance_importing_to_stores='', os_hash_algo='sha512', os_hash_value='cf83e1357eefb8bdf1542850d66d8007d620e4050b5715dc83f4a921d36ce9ce47d0d13c5d85f2b0ff8318d2877eec2f63b931bd47417a81a538327af927da3e', os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/proptest2', owner_specified.openstack.sha256='', property1='value1', property2='value2', property3='value3', stores='local'

  Now it looks better:
  property1='value1'
  property2='value2'
  property3='value3'

  The resulting quotes using this format match the other standard
  properties, i.e. key='value', and are, I suspect, what we are going
  for. I'm unclear if this is a parser issue or a documentation issue.
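
  To see why the quotes leak through, here is a minimal sketch of the
  kind of splitting the option parsing does (an illustration only, not
  the actual oslo.config/glance code):

    def parse_inject(value):
        # Split on ',' and then on the first ':'. Nothing strips quote
        # characters, so they become part of the keys and values.
        props = {}
        for pair in value.split(','):
            key, _, val = pair.partition(':')
            props[key.strip()] = val.strip()
        return props

    print(parse_inject('"property1":"value1","property2":"value2"'))
    # {'"property1"': '"value1"', '"property2"': '"value2"'}
    print(parse_inject('property1:value1,property2:value2'))
    # {'property1': 'value1', 'property2': 'value2'}

  Hence the documentation fix merged above: the working form is the
  unquoted one, e.g. inject = property1:value1,property2:value2 in the
  [inject_metadata_properties] section of glance-image-import.conf.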

  ---
  Release: 27.0.0.0b3.dev5 on 2022-08-30 13:35:51
  SHA: 46c30f0b6db6ed6a86b1b84e69748025ad9050c6
  Source: https://opendev.org/openstack/glance/src/doc/source/admin/interoperable-image-import.rst
  URL: https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2028895/+subscriptions




[Yahoo-eng-team] [Bug 1884762] Re: Unhandled error: RuntimeError: dictionary changed size during iteration

2024-02-29 Thread Takashi Kajinami
I'm closing this because of inactivity. I've never seen this problem,
either.

** Changed in: oslo.config
   Status: New => Invalid

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1884762

Title:
  Unhandled error: RuntimeError: dictionary changed size during
  iteration

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  Observed in CentOS8 nova-scheduler logs:

  2020-06-23 13:37:46.779 23 ERROR nova Traceback (most recent call last):
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/bin/nova-scheduler", line 10, in <module>
  2020-06-23 13:37:46.779 23 ERROR nova     sys.exit(main())
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/nova/cmd/scheduler.py", line 53, in main
  2020-06-23 13:37:46.779 23 ERROR nova     service.serve(server, workers=workers)
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/nova/service.py", line 489, in serve
  2020-06-23 13:37:46.779 23 ERROR nova     restart_method='mutate')
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_service/service.py", line 842, in launch
  2020-06-23 13:37:46.779 23 ERROR nova     launcher.launch_service(service, workers=workers)
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_service/service.py", line 606, in launch_service
  2020-06-23 13:37:46.779 23 ERROR nova     self._start_child(wrap)
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_service/service.py", line 575, in _start_child
  2020-06-23 13:37:46.779 23 ERROR nova     self.launcher.restart()
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_service/service.py", line 311, in restart
  2020-06-23 13:37:46.779 23 ERROR nova     self.conf.mutate_config_files()
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 3013, in mutate_config_files
  2020-06-23 13:37:46.779 23 ERROR nova     self._warn_immutability()
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 3040, in _warn_immutability
  2020-06-23 13:37:46.779 23 ERROR nova     for info, group in self._all_opt_infos():
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2502, in _all_opt_infos
  2020-06-23 13:37:46.779 23 ERROR nova     for info in self._opts.values():
  2020-06-23 13:37:46.779 23 ERROR nova RuntimeError: dictionary changed size during iteration
  2020-06-23 13:37:46.779 23 ERROR nova
  2020-06-23 13:37:46.780 20 CRITICAL nova [req-739357c3-918b-4778-ab8b-282fc7fad943 - - - - -] Unhandled error: RuntimeError: dictionary changed size during iteration
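
  The failure mode itself is the generic CPython one, independent of
  oslo.config: a dict is mutated (here, presumably by another thread
  registering options) while it is being iterated. A minimal standalone
  reproduction, together with the usual fix of iterating over a
  snapshot:

    opts = {"a": 1, "b": 2}

    try:
        for key in opts:          # iterating the live dict view
            opts["x" + key] = 0   # mutating it during iteration
    except RuntimeError as exc:
        print(exc)                # dictionary changed size during iteration

    # The usual fix: iterate over a snapshot, so concurrent inserts into
    # the real dict cannot invalidate the iteration.
    for key in list(opts):
        _ = opts[key]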

  OpenStack installed using RDO packages
  Package versions:
  (nova-scheduler)[nova@spsrc-controller-2 /]$ rpm -vqa | grep nova
  python3-nova-20.2.0-1.el8.noarch
  python3-novaclient-15.1.0-1.el8.noarch
  openstack-nova-common-20.2.0-1.el8.noarch
  openstack-nova-scheduler-20.2.0-1.el8.noarch
  (nova-scheduler)[nova@spsrc-controller-2 /]$ rpm -vqa | grep oslo
  python-oslo-utils-lang-3.41.5-1.el8.noarch
  python-oslo-i18n-lang-3.24.0-2.el8.noarch
  python-oslo-cache-lang-1.37.0-2.el8.noarch
  python3-oslo-concurrency-3.30.0-2.el8.noarch
  python3-oslo-messaging-10.2.0-2.el8.noarch
  python-oslo-versionedobjects-lang-1.36.1-1.el8.noarch
  python-oslo-policy-lang-2.3.3-1.el8.noarch
  python-oslo-log-lang-3.44.2-1.el8.noarch
  python3-oslo-i18n-3.24.0-2.el8.noarch
  python-oslo-concurrency-lang-3.30.0-2.el8.noarch
  python3-oslo-serialization-2.29.2-2.el8.noarch
  python3-oslo-config-6.11.2-1.el8.noarch
  python3-oslo-log-3.44.2-1.el8.noarch
  python3-oslo-service-1.40.2-2.el8.noarch
  python3-oslo-middleware-3.38.1-2.el8.noarch
  python-oslo-vmware-lang-2.34.1-1.el8.noarch
  python3-oslo-privsep-1.33.3-1.el8.noarch
  python3-oslo-vmware-2.34.1-1.el8.noarch
  python3-oslo-db-5.0.2-2.el8.noarch
  python3-oslo-policy-2.3.3-1.el8.noarch
  python3-oslo-reports-1.30.0-1.el8.noarch
  python3-oslo-rootwrap-5.16.1-1.el8.noarch
  python-oslo-privsep-lang-1.33.3-1.el8.noarch
  python-oslo-middleware-lang-3.38.1-2.el8.noarch
  python-oslo-db-lang-5.0.2-2.el8.noarch
  python3-oslo-utils-3.41.5-1.el8.noarch
  python3-oslo-context-2.23.0-2.el8.noarch
  python3-oslo-cache-1.37.0-2.el8.noarch
  python3-oslo-versionedobjects-1.36.1-1.el8.noarch
  python3-oslo-upgradecheck-0.3.2-1.el8.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1884762/+subscriptions


[Yahoo-eng-team] [Bug 2055409] Re: [SRU] config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply to instance detail page

2024-02-29 Thread Rodrigo Barbieri
Fix has merged in the Caracal/Noble cycle and has been backported
upstream to Bobcat, Antelope and Zed. It still needs to be SRU'ed back
to Ussuri.

Bobcat, Antelope and Zed could get point releases (as long as there is
a new tag upstream), but for Yoga, Xena, Wallaby, Victoria and Ussuri
it will be necessary to merge the diff directly into the SRU code.

For simplicity, I believe it is better not to wait for point releases
of Bobcat, Antelope and Zed, because just this "waiting for a new tag"
period could take months, and all the other releases that don't have
point releases would have to wait on that to get their SRUs started.

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Mantic)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Jammy)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2055409

Title:
  [SRU] config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply
  to instance detail page

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive bobcat series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Focal:
  New
Status in horizon source package in Jammy:
  New
Status in horizon source package in Mantic:
  New

Bug description:
  Setting the config option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to
  False skips the neutron calls when loading the instance list page,
  speeding up page loading. However, when clicking on an instance to
  load the instance details page, Horizon still makes the neutron calls,
  which takes a very long time.

  The code could be adjusted to also honor the config option when
  loading the instance details page, thus speeding up that page as well.
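
  For reference, the relevant setting is a plain boolean in Horizon's
  settings file (the path below is the one used in the test case; it may
  differ per deployment):

    # /etc/openstack-dashboard/local_settings.py
    # Skip the per-instance neutron lookups (floating IPs, ports,
    # networks, subnets) when rendering instance pages.
    OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES = False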

  ===
  SRU Description
  ===

  [Impact]

  Environments with a large number of neutron ports struggle to load
  the instance list and instance detail pages. The existing config
  option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES speeds up the instance
  list, but it is not used when loading a single instance detail page.
  By honoring the config option when loading the instance detail page as
  well, we speed up that page too, with minimal side effects, which are
  the same as those already seen when displaying the list (more info
  about the side effects at [1])

  [Test case]

  1. Setting up the env

  1a. Deploy openstack env with horizon/openstack-dashboard

  1b. Declare and set OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to False
  in /etc/openstack-dashboard/local_settings.py and restart apache2

  2. Prepare to reproduce the bug

  2a. Create a single VM successfully

  2b. As we cannot easily create enough ports in the lab to replicate
  the slowness, we will rely on the messages being present in the logs.
  Therefore, at this step we enable debug in horizon to see the
  messages: set DEBUG to True in
  /etc/openstack-dashboard/local_settings.py and restart apache2 (see
  the grep one-liner after these steps).

  3. Reproducing the bug

  3a. Load the instance list page and verify that the following messages
  are not present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets

  3b. Click on the instance to load the detail page and verify that the
  following messages ARE present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets

  4. Install the package that contains the fixed code

  5. Confirm fix

  5a. Repeat step 3a.

  5b. Click on the instance to load the detail page and verify that the
  following messages are NOT present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets
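
  The log checks in steps 3a/3b and 5b can be done with a grep over the
  web server log (path assumed for an apache2-based install; adjust to
  the deployment):

    $ grep -E "GET /v2.0/(floatingips|ports|networks|subnets)" /var/log/apache2/error.log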

  [Regression Potential]

  The code has been tested in upstream CI (without the addition of
  bug-specific functional tests) from master (Caracal) to stable/zed
  without any issue captured. Side effects are documented at [1]. The
  change itself is a simple two-liner with minimal to no chance of
  regression, given the narrow scope of its impact.

  [Other Info]

  None.

  [1]
  https://github.com/openstack/horizon/blob/2b03b44f3adeea7e7a8aaabcccfa00614301/doc/source/configuration/settings.rst#L2410

[Yahoo-eng-team] [Bug 2055409] Re: [SRU] config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply to instance detail page

2024-02-29 Thread Rodrigo Barbieri
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/bobcat
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/antelope
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Tags added: sts sts-sru-needed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2055409

Title:
  [SRU] config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply
  to instance detail page

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive bobcat series:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  Setting the config option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to
  False skips the neutron calls when loading the instance list page,
  speeding up page loading. However, when clicking on an instance to
  load the instance details page, Horizon still makes the neutron calls,
  which takes a very long time.

  The code could be adjusted to also honor the config option when
  loading the instance details page, thus speeding up that page as well.

  ===
  SRU Description
  ===

  [Impact]

  Environments with a large number of neutron ports struggle to load
  the instance list and instance detail pages. The existing config
  option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES speeds up the instance
  list, but it is not used when loading a single instance detail page.
  By honoring the config option when loading the instance detail page as
  well, we speed up that page too, with minimal side effects, which are
  the same as those already seen when displaying the list (more info
  about the side effects at [1])

  [Test case]

  1. Setting up the env

  1a. Deploy openstack env with horizon/openstack-dashboard

  1b. Declare and set OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to False
  in /etc/openstack-dashboard/local_settings.py and restart apache2

  2. Prepare to reproduce the bug

  2a. Create a single VM successfully

  2b. As we cannot easily create enough ports in the lab to replicate
  the slowness, we will rely on the messages being present in the logs.
  Therefore, at this step we enable debug in horizon to see the
  messages: set DEBUG to True in
  /etc/openstack-dashboard/local_settings.py and restart apache2.

  3. Reproducing the bug

  3a. Load the instance list page and verify that the following messages
  are not present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets

  3b. Click on the instance to load the detail page and verify that the
  following messages ARE present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets

  4. Install the package that contains the fixed code

  5. Confirm fix

  5a. Repeat step 3a.

  5b. Click on the instance to load the detail page and verify that the
  following messages are NOT present in the logs:

  GET /v2.0/floatingips?port_id=...
  GET /v2.0/ports?tenant_id=...
  GET /v2.0/networks?id=...
  GET /v2.0/subnets

  [Regression Potential]

  The code has been tested in upstream CI (without the addition of
  bug-specific functional tests) from master (Caracal) to stable/zed
  without any issue captured. Side effects are documented at [1]. The
  change itself is a simple two-liner with minimal to no chance of
  regression, given the narrow scope of its impact.

  [Other Info]

  None.

  [1]
  https://github.com/openstack/horizon/blob/2b03b44f3adeea7e7a8aaabcccfa00614301/doc/source/configuration/settings.rst#L2410

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2055409/+subscriptions




[Yahoo-eng-team] [Bug 2055419] [NEW] network autoallocation fails for non-admin user

2024-02-29 Thread Cyprian Kleist
Public bug reported:

Description
===========
Automatic allocation of network topologies
(https://docs.openstack.org/neutron/latest/admin/config-auto-allocation.html)
causes an unexpected API error when requested by a user without the
admin role.

The following tempest test is failing:

tempest.api.compute.admin.test_auto_allocate_network.AutoAllocateNetworkTest.test_server_multi_create_auto_allocate

Steps to reproduce
==================

* Request server creation with network autoallocation as a user without
the admin role:

$ openstack --os-compute-api-version 2.37 server create --flavor
<flavor> --image <image> --nic auto vm1
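
To isolate whether neutron itself rejects the non-admin dry-run check,
the same validation call nova makes (see the traceback below) can be
issued directly. A sketch, with endpoint and credentials as
placeholders:

    # pip install keystoneauth1 python-neutronclient
    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client as neutron_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone:5000/v3',   # placeholder endpoint
        username='demo', password='secret',   # non-admin credentials
        project_name='demo',
        user_domain_id='default', project_domain_id='default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # Performs GET /v2.0/auto-allocated-topology/{project_id}?fields=dry-run,
    # the call that returns NotFound in the traceback below.
    neutron.validate_auto_allocated_topology_requirements('<project-uuid>')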

Expected result
===============
A Forbidden response (if I understand the documentation correctly), or
creation of the network and router (if it is allowed).

Actual result
=============
Unexpected API Error.

 ERROR nova.api.openstack.wsgi [None req-   - - default default] Unexpected exception in API method: neutronclient.common.exceptions.NotFound: The resource could not be found.
Neutron server returns request_ids: ['req-']
 ERROR nova.api.openstack.wsgi Traceback (most recent call last):
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/api/openstack/wsgi.py", line 658, in wrapped
 ERROR nova.api.openstack.wsgi     return f(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/api/validation/__init__.py", line 110, in wrapper
 ERROR nova.api.openstack.wsgi     return func(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   [Previous line repeated 11 more times]
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/api/openstack/compute/servers.py", line 786, in create
 ERROR nova.api.openstack.wsgi     instances, resv_id = self.compute_api.create(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/api.py", line 2207, in create
 ERROR nova.api.openstack.wsgi     return self._create_instance(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/api.py", line 1683, in _create_instance
 ERROR nova.api.openstack.wsgi     ) = self._validate_and_build_base_options(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/api.py", line 1081, in _validate_and_build_base_options
 ERROR nova.api.openstack.wsgi     max_network_count = self._check_requested_networks(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/api.py", line 543, in _check_requested_networks
 ERROR nova.api.openstack.wsgi     return self.network_api.validate_networks(context, requested_networks,
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/network/neutron.py", line 2648, in validate_networks
 ERROR nova.api.openstack.wsgi     ports_needed_per_instance = self._ports_needed_per_instance(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/network/neutron.py", line 2509, in _ports_needed_per_instance
 ERROR nova.api.openstack.wsgi     if not self._can_auto_allocate_network(context, neutron):
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/network/neutron.py", line 2438, in _can_auto_allocate_network
 ERROR nova.api.openstack.wsgi     neutron.validate_auto_allocated_topology_requirements(
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
 ERROR nova.api.openstack.wsgi     ret = obj(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/debtcollector/renames.py", line 41, in decorator
 ERROR nova.api.openstack.wsgi     return wrapped(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 2160, in validate_auto_allocated_topology_requirements
 ERROR nova.api.openstack.wsgi     return self.get_auto_allocated_topology(project_id, fields=['dry-run'])
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/network/neutron.py", line 196, in wrapper
 ERROR nova.api.openstack.wsgi     ret = obj(*args, **kwargs)
 ERROR nova.api.openstack.wsgi   File "/var/lib/kolla/venv/lib/python3.10/site-packages/debtcollector/renames.py", line 41, in decorator
 ERROR nova.api.openstack.wsgi     return wrapped(*args, **kwargs)
 

[Yahoo-eng-team] [Bug 2055411] [NEW] Nova VMwareapi Resize of Volume Backed server fails

2024-02-29 Thread Fabian Wiesel
Public bug reported:

Description
===========
More specifically, the following tempest test in master fails:
tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server


Steps to reproduce
==================
* Install Devstack from master
* Run tempest test
`tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server`

Expected result
===============
The test succeeds.

Actual result
=============
The test fails: the resize errors out with AttributeError: 'NoneType'
object has no attribute 'key' in the vmwareapi driver (see the
traceback below).

Environment
===========
1. Git 1858cf18b940b3636e54eb5aafaf4050bdd02939 (master). So essentially this:
 https://review.opendev.org/c/openstack/nova/+/909474
As instance creation is impossible without that patch.

2. Which hypervisor did you use? What's the version of that?

vmwareapi (vSphere 7.0.3 & ESXi 7.0.3)

3. Which storage type did you use?

vmdk on NFS 4.1

4. Which networking type did you use?

networking-nsx-t (https://github.com/sapcc/networking-nsx-t)

Logs & Configs
==============

Can be found here:
http://openstack-ci-logs.global.cloud.sap/openstack/nova/1858cf18b940b3636e54eb5aafaf4050bdd02939/index.html

The critical exception for this bug report is (abbreviated and reformatted for clarity):

 req-7aa5ded6-ea97-4010-93c8-9e39389cbfe0 tempest-ServerActionsTestOtherA-839537081
[  865.017199] env[58735]: ERROR nova.compute.manager [instance: b4d9131c-fc91-4fd4-813b-13b4bdfe1647]
Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 10856, in _error_out_instance_on_exception
    yield
  File "/opt/stack/nova/nova/compute/manager.py", line 6096, in _resize_instance
    disk_info = self.driver.migrate_disk_and_power_off(
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 263, in migrate_disk_and_power_off
    return self._vmops.migrate_disk_and_power_off(context, instance,
  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1467, in migrate_disk_and_power_off
    self._resize_disk(instance, vm_ref, vmdk, flavor)
  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1398, in _resize_disk
    self._volumeops.detach_disk_from_vm(vm_ref, instance, vmdk.device)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 121, in detach_disk_from_vm
    disk_key = device.key
AttributeError: 'NoneType' object has no attribute 'key'

---


The bug is actually in the function
`nova.virt.vmwareapi.vm_util.get_vmdk_info`, here:
https://opendev.org/openstack/nova/src/branch/master/nova/virt/vmwareapi/vm_util.py#L690

The code works on the assumption that the root disk is named after the
instance. This assumption breaks in several cases; most relevantly for
this test case, the root volume is actually a cinder volume. It will
also break when the disk gets migrated to another datastore, either
through a live-migration with no shared storage, or simply
automatically with SDRS.
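
To illustrate the difference with plain Python (hypothetical device
dicts, not the real vSphere objects): matching the root disk by file
name finds nothing for a cinder-backed or relocated disk, while
selecting by bus address does not depend on the name (assuming, for
this sketch, that the boot disk sits at the lowest controller/unit
position):

    def find_root_disk_by_name(devices, instance_name):
        # Fragile: assumes the root disk's backing file is named after
        # the instance. A cinder root volume or an SDRS-migrated disk
        # has a different name, so this returns None, and the later
        # device access blows up with the AttributeError above.
        for dev in devices:
            if dev['type'] == 'VirtualDisk' and instance_name in dev['file_name']:
                return dev
        return None

    def find_root_disk_by_bus_address(devices):
        # Sturdier: pick the disk at the lowest (controller, unit)
        # position instead of relying on its name.
        disks = [d for d in devices if d['type'] == 'VirtualDisk']
        return min(disks, key=lambda d: (d['controller_key'], d['unit_number']),
                   default=None)

    devices = [{'type': 'VirtualDisk', 'file_name': '[ds1] volume-9f3c.vmdk',
                'controller_key': 1000, 'unit_number': 0}]
    assert find_root_disk_by_name(devices, 'instance-0001') is None
    assert find_root_disk_by_bus_address(devices) is devices[0]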

I have an alternative implementation here:
https://github.com/sapcc/nova/blob/stable/xena-m3/nova/virt/vmwareapi/vm_util.py#L997-L1034
I'll provide a bug fix based on it.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055411

Title:
  Nova VMwareapi Resize of Volume Backed server fails

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===========
  More specifically, the following tempest test in master fails:
  tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server

  
  Steps to reproduce
  ==================
  * Install Devstack from master
  * Run tempest test
  `tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server`

  Expected result
  ===============
  The test succeeds.

  Actual result
  =============
  The test fails: the resize errors out with AttributeError: 'NoneType'
  object has no attribute 'key' in the vmwareapi driver (see the
  traceback below).

  Environment
  ===========
  1. Git 1858cf18b940b3636e54eb5aafaf4050bdd02939 (master). So essentially this:
   https://review.opendev.org/c/openstack/nova/+/909474
  As instance creation is impossible without that patch.

  2. Which hypervisor did you use? What's the version of that?

  vmwareapi (vSphere 7.0.3 & ESXi 7.0.3)

  3. Which storage type did you use?

  vmdk on NFS 4.1

  4. Which networking type did you use?

  networking-nsx-t (https://github.com/sapcc/networking-nsx-t)

  Logs & Configs
  ==============

  Can be found here:
  http://openstack-ci-logs.global.cloud.sap/openstack/nova/1858cf18b940b3636e54eb5aafaf4050bdd02939/index.html

  The critical exception for this bug report is (abbreviated and reformatted for clarity):

   req-7aa5ded6-ea97-4010-93c8-9e39389cbfe0 tempest-ServerActionsTestOtherA-839537081
  [  865.017199] env[58735]: ERROR nova.compute.manager

[Yahoo-eng-team] [Bug 2055409] [NEW] config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply to instance detail page

2024-02-29 Thread Rodrigo Barbieri
Public bug reported:

Setting the config option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to
False skips the neutron calls when loading the instance list page,
speeding up page loading. However, when clicking on an instance to load
the instance details page, Horizon still makes the neutron calls, which
takes a very long time.

The code could be adjusted to also honor the config option when loading
the instance details page, thus speeding up that page as well.

** Affects: horizon
 Importance: Undecided
 Status: Fix Committed

** Changed in: horizon
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2055409

Title:
  config OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES does not apply to
  instance detail page

Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  Setting the config option OPENSTACK_INSTANCE_RETRIEVE_IP_ADDRESSES to
  False skips the neutron calls when loading the instance list page,
  speeding up page loading. However, when clicking on an instance to
  load the instance details page, Horizon still makes the neutron calls,
  which takes a very long time.

  The code could be adjusted to also honor the config option when
  loading the instance details page, thus speeding up that page as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2055409/+subscriptions

