[Yahoo-eng-team] [Bug 2013473] Re: default_catalog.templates is outdated

2024-05-02 Thread Takashi Kajinami
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2013473

Title:
  default_catalog.templates is outdated

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  It seems the catalog template file is horribly outdated and contains
  the following problems (a sketch of corrected entries follows the list):

   - keystone v2 was removed long ago
   - cinder no longer provides the v2 API; the v3 API should be used
   - cinder and nova no longer require tenant_id templates in the URL;
     tenant_id templates prevent API access with domain/system-scoped tokens
   - the telemetry endpoint was removed
   - placement is now required by nova
   - the EC2 API was split out from nova and is now an independent,
     optional service
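
  For illustration only, modernized template entries could look like this
  (hosts and ports are placeholders, not the actual proposed patch):

```
catalog.RegionOne.identity.publicURL = http://localhost:5000/v3
catalog.RegionOne.identity.internalURL = http://localhost:5000/v3
catalog.RegionOne.identity.name = Identity Service

catalog.RegionOne.compute.publicURL = http://localhost:8774/v2.1
catalog.RegionOne.compute.name = Compute Service

catalog.RegionOne.placement.publicURL = http://localhost:8778
catalog.RegionOne.placement.name = Placement Service

catalog.RegionOne.volumev3.publicURL = http://localhost:8776/v3
catalog.RegionOne.volumev3.name = Volume Service V3
```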

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2013473/+subscriptions




[Yahoo-eng-team] [Bug 2062573] [NEW] pysendfile library is unmaintained

2024-04-19 Thread Takashi Kajinami
Public bug reported:

The pysendfile library[1] was added as an optional dependency for
zero-copy image upload[2], but the library has had no release for 10 years.

We should consider replacing it with os.sendfile, or removing the
feature rather than keeping the unmaintained library.

[1] https://pypi.org/project/pysendfile/
[2] https://review.opendev.org/c/openstack/glance/+/3863
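
As a rough sketch of the suggested direction (not the actual glance code),
os.sendfile from the standard library, available on Linux since Python 3.3,
can cover the same zero-copy path:

```
import os

def sendfile_copy(src_path, dst_path, chunk=1024 * 1024):
    """Illustrative zero-copy file transfer via the stdlib os.sendfile()."""
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        offset = 0
        while True:
            sent = os.sendfile(dst.fileno(), src.fileno(), offset, chunk)
            if sent == 0:  # reached EOF
                break
            offset += sent
```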

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2062573

Title:
  pysendfile library is unmaintained

Status in Glance:
  In Progress

Bug description:
  The pysendfile library[1] was added as an optional dependency for
  zero-copy image upload[2], but the library has had no release for 10 years.

  We should consider replacing it with os.sendfile, or removing the
  feature rather than keeping the unmaintained library.

  [1] https://pypi.org/project/pysendfile/
  [2] https://review.opendev.org/c/openstack/glance/+/3863

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2062573/+subscriptions




[Yahoo-eng-team] [Bug 2062045] [NEW] Domain admin is not allowed to create credentials when scope enforcement is enabled

2024-04-17 Thread Takashi Kajinami
Public bug reported:

Currently when [oslo_policy] enforce_scope is set to True along with
[oslo_policy] enforce_new_defaults = True, domain admins are not allowed
to manage credentials.
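
For reference, this is the keystone.conf combination that triggers the
behaviour:

```
[oslo_policy]
enforce_new_defaults = True
enforce_scope = True
```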

However, this limitation breaks heat, because heat needs to create
credentials (used, for example, by its notification mechanism) with its
own stack domain admin credential.

```

Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: ERROR 
heat.engine.check_resource [None req-f3f9047b-8ac5-46f0-b8df-eafa473cb252 demo 
None] Unexpected exception in resource check.: 
keystoneauth1.exceptions.http.Forbidden: You are not authorized to perform the 
requested action: identity:create_credential. (HTTP 403) (Request-ID: 
req-ff29e4ea-c6bc-48c5-88f4-fa4cb1893a87)
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource Traceback (most recent call last):
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/check_resource.py", line 311, in check
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource 
self._initiate_propagate_resource(cnxt, resource_id,
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/check_resource.py", line 251, in 
_initiate_propagate_resource
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource input_data = 
_get_input_data(req_node, input_forward_data)
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/check_resource.py", line 233, in _get_input_data
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource return rsrc.node_data().as_dict()
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 1154, in node_data
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource self.FnGetRefId(), attribute_values,
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 2378, in FnGetRefId
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource return self.get_reference_id()
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resources/aws/cfn/wait_condition_handle.py", line 
40, in get_reference_id
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource return 
str(self._get_ec2_signed_url(signal_type=wc))
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resources/wait_condition.py", line 48, in 
_get_ec2_signed_url
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource 
self)._get_ec2_signed_url(signal_type)
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resources/signal_responder.py", line 138, in 
_get_ec2_signed_url
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource self._create_keypair()
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/resources/stack_user.py", line 128, in 
_create_keypair
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource kp = 
self.keystone().create_stack_domain_user_keypair(
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/opt/stack/heat/heat/engine/clients/os/keystone/heat_keystoneclient.py", line 
551, in create_stack_domain_user_keypair
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource creds = 
self.domain_admin_client.credentials.create(
Apr 17 16:19:02.207067 np0037308484 heat-engine[89702]: 2024-04-17 16:19:02.204 
89702 TRACE heat.engine.check_resource   File 
"/usr/local/lib/python3.10/dist-packages/keystoneclient/v3/credentials.py", 
line 62, in create
Apr 17 16:19:02.207067 

[Yahoo-eng-team] [Bug 2059821] Re: Deprecated glanceclient exceptions are still used

2024-03-30 Thread Takashi Kajinami
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2059821

Title:
  Deprecated glanceclient exceptions are still used

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The exceptions in glanceclient.exc were deprecated in glanceclient 0.4.0[1].
   glanceclient.exc.Forbidden
   glanceclient.exc.NotFound
   glanceclient.exc.ServiceUnavailable
   glanceclient.exc.Unauthorized

  [1] https://github.com/openstack/python-glanceclient/commit/354c98b087515dc4303a07d1ff0d9a9d7b4dd48b

  But these are still used in the code.

  We should replace these with the new HTTP* exceptions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2059821/+subscriptions




[Yahoo-eng-team] [Bug 2059821] [NEW] Deprecated glanceclient exceptions are still used

2024-03-29 Thread Takashi Kajinami
Public bug reported:

Description
===
The exceptions in glanceclient.exc were deprecated in glanceclient 0.4.0[1].
 glanceclient.exc.Forbidden
 glanceclient.exc.NotFound
 glanceclient.exc.ServiceUnavailable
 glanceclient.exc.Unauthorized

[1] https://github.com/openstack/python-glanceclient/commit/354c98b087515dc4303a07d1ff0d9a9d7b4dd48b

But these are still used in the code.

We should replace these with the new HTTP* exceptions.
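
A sketch of the intended replacement (the client object and image_id here
are illustrative placeholders, not code from the affected projects):

```
from glanceclient import exc

def get_image(client, image_id):
    """client: an illustrative glanceclient Client instance."""
    try:
        return client.images.get(image_id)
    except exc.HTTPNotFound:    # instead of the deprecated exc.NotFound
        return None
    except exc.HTTPForbidden:   # instead of the deprecated exc.Forbidden
        raise
```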

** Affects: cinder
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2059821

Title:
  Deprecated glanceclient exceptions are still used

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The exceptions in glanceclient.exc were deprecated in glanceclient 0.4.0[1].
   glanceclient.exc.Forbidden
   glanceclient.exc.NotFound
   glanceclient.exc.ServiceUnavailable
   glanceclient.exc.Unauthorized

  [1] https://github.com/openstack/python-glanceclient/commit/354c98b087515dc4303a07d1ff0d9a9d7b4dd48b

  But these are still used in the code.

  We should replace these with the new HTTP* exceptions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2059821/+subscriptions




[Yahoo-eng-team] [Bug 2059800] [NEW] Image download immediately fails when glance returns 500

2024-03-29 Thread Takashi Kajinami
Public bug reported:

Description
===
nova-compute downloads a VM image from glance when launching an instance. It
retries requests when it gets a 503, but it does not when it gets a 500.
When glance uses the cinder backend and an image volume is still in use,
glance returns a 500 and this results in immediate instance creation failure.

Steps to reproduce
==
* Deploy glance with cinder image store
* Upload an image
* Create an image-booted instance from the image while the image is being
downloaded in the background (so the image volume is still in use)

Expected result
===
Instance creation succeeds

Actual result
=
Instance creation fails because of a 500 error from glance

Environment
===
This has been seen in Puppet OpenStack integration job, which uses RDO master.

Logs & Configs
==
Example failure can be found in 
https://zuul.opendev.org/t/openstack/build/fc0e584a70f947d988ac057a8cc991c2
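
The missing behaviour amounts to treating 500 like 503 in the retry loop; a
standalone sketch under that assumption (not nova's actual implementation):

```
import time

RETRYABLE_STATUSES = {500, 503}  # today only 503 is retried

def download_with_retries(fetch, retries=3, delay=1.0):
    """fetch() is an illustrative callable returning (status, body)."""
    for attempt in range(retries + 1):
        status, body = fetch()
        if status == 200:
            return body
        if status not in RETRYABLE_STATUSES or attempt == retries:
            raise RuntimeError('image download failed with HTTP %d' % status)
        time.sleep(delay)
```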

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2059800

Title:
  Image download immediately fails when glance returns 500

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  nova-compute downloads a VM image from glance when launching an instance.
It retries requests when it gets a 503, but it does not when it gets a 500.
  When glance uses the cinder backend and an image volume is still in use,
glance returns a 500 and this results in immediate instance creation failure.

  Steps to reproduce
  ==
  * Deploy glance with cinder image store
  * Upload an image
  * Create an image-booted instance from the image while the image is being
downloaded in the background (so the image volume is still in use)

  Expected result
  ===
  Instance creation succeeds

  Actual result
  =
  Instance creation fails because of a 500 error from glance

  Environment
  ===
  This has been seen in Puppet OpenStack integration job, which uses RDO master.

  Logs & Configs
  ==
  Example failure can be found in 
https://zuul.opendev.org/t/openstack/build/fc0e584a70f947d988ac057a8cc991c2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2059800/+subscriptions




[Yahoo-eng-team] [Bug 2059780] [NEW] Domain admin can't view roles while it can manage domain/project roles

2024-03-29 Thread Takashi Kajinami
Public bug reported:

Currently a domain admin is allowed to manage role assignments for a project
or domain.
However, a domain admin currently can't view roles.

To allow a domain admin to actually manipulate role assignments, keystone
should allow it to view roles.
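
A reproduction sketch with a domain-scoped admin token (resource names are
illustrative):

```
# allowed: managing a role assignment within the domain
openstack role add --project demo-project --user demo-user member

# denied with HTTP 403, although the role name used above has to be
# discovered somehow:
openstack role list
```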

** Affects: keystone
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2059780

Title:
  Domain admin can't view roles while it can manage domain/project
  roles

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Currently a domain admin is allowed to manage role assignments for a
project or domain.
  However, a domain admin currently can't view roles.

  To allow a domain admin to actually manipulate role assignments,
  keystone should allow it to view roles.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2059780/+subscriptions




[Yahoo-eng-team] [Bug 1832164] Re: SADeprecationWarning: The joinedload_all() function is deprecated, and will be removed in a future release. Please use method chaining with joinedload() instead

2024-03-22 Thread Takashi Kajinami
** Changed in: cinder
   Status: Fix Committed => Fix Released

** Changed in: nova/stein
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1832164

Title:
  SADeprecationWarning: The joinedload_all() function is deprecated, and
  will be removed in a future release.  Please use method chaining with
  joinedload() instead

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  Fix Released

Bug description:
  The following warning is output in the unit tests.

  b'/tmp/nova/nova/db/sqlalchemy/api.py:1871: SADeprecationWarning: The 
joinedload_all() function is deprecated, and will be removed in a future 
release.  Please use method chaining with joinedload() instead'
  b"  options(joinedload_all('security_groups.rules')).\\"

  * http://logs.openstack.org/53/566153/43/check/openstack-tox-py36/b7edf77/job-output.txt.gz
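
  The replacement the warning asks for is plain method chaining; a sketch
  against the quoted nova query:

```
from sqlalchemy.orm import joinedload

# deprecated:
#   query.options(joinedload_all('security_groups.rules'))
# replacement with method chaining:
query = query.options(
    joinedload('security_groups').joinedload('rules'))
```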

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1832164/+subscriptions




[Yahoo-eng-team] [Bug 1937261] Re: python3-msgpack package broken due to outdated cython

2024-03-21 Thread Takashi Kajinami
This is a problem with python-msgpack in Ubuntu, and does not look like
a bug in oslo.privsep or neutron.

** No longer affects: oslo.privsep

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937261

Title:
  python3-msgpack package broken due to outdated cython

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive ussuri series:
  Fix Released
Status in neutron:
  New

Bug description:
  After a successful upgrade of the control plane from Train -> Ussuri
  on Ubuntu Bionic, we upgraded a first compute / network node and
  immediately ran into issues with Neutron:

  We noticed that Neutron is extremely slow in setting up and wiring the
  network ports, so slow it would never finish and throw all sorts of
  errors (RabbitMQ connection timeouts, full sync required, ...)

  We were now able to reproduce the error on our Ussuri DEV cloud as
  well:

  1) First we used strace -p $PID_OF_NEUTRON_LINUXBRIDGE_AGENT and
noticed that the data exchange on the unix socket between the rootwrap-daemon
and the main process is really, really slow.
  One could actually read the read() calls on the socket's fd line by line.

  2) We then (after adding lots of log lines and other intensive manual
  debugging) used py-spy (https://github.com/benfred/py-spy) via "py-spy
  top --pid $PID" on the running neutron-linuxbridge-agent process and
  noticed all the CPU time (process was at 100% most of the time) was
  spent in msgpack/fallback.py

  3) Since the issue was not observed in TRAIN we compared the msgpack
  version used and noticed that TRAIN was using version 0.5.6 while
  Ussuri upgraded this dependency to 0.6.2.

  4) We then downgraded to version 0.5.6 of msgpack (ignoring the actual
  dependencies)

  --- cut ---
  apt policy python3-msgpack
  python3-msgpack:
    Installed: 0.6.2-1~cloud0
    Candidate: 0.6.2-1~cloud0
    Version table:
   *** 0.6.2-1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
   0.5.6-1 500
  500 http://de.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status
  --- cut ---

  
  vs.

  --- cut ---
  apt policy python3-msgpack
  python3-msgpack:
Installed: 0.5.6-1
Candidate: 0.6.2-1~cloud0
Version table:
   0.6.2-1~cloud0 500
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
   *** 0.5.6-1 500
  500 http://de.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
  100 /var/lib/dpkg/status
  --- cut ---

  
  and et voila: The Neutron-Linuxbridge-Agent worked just like before (building 
one port every few seconds) and all network ports eventually converged to 
ACTIVE.
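
  For anyone who needs to keep that downgrade in place across package
  upgrades, an apt pin along these lines (illustrative) would hold it:

```
# /etc/apt/preferences.d/python3-msgpack
Package: python3-msgpack
Pin: version 0.5.6-1
Pin-Priority: 1001
```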

  I could not yet spot which commit of msgpack changes
  (https://github.com/msgpack/msgpack-python/compare/0.5.6...v0.6.2)
  might have caused this issue, but I am really certain that this is a
  major issue for Ussuri on Ubuntu Bionic.

  There are "similar" issues with
   * https://bugs.launchpad.net/oslo.privsep/+bug/1844822
   * https://bugs.launchpad.net/oslo.privsep/+bug/1896734

  both related to msgpack or the size of messages exchanged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1937261/+subscriptions




[Yahoo-eng-team] [Bug 2058400] [NEW] [ovn-octavia-provider] functional jobs are unstable

2024-03-19 Thread Takashi Kajinami
Public bug reported:

Functional tests of ovn-octavia-provider are very unstable and fail quite
often.
This has been caught in
https://review.opendev.org/c/openstack/ovn-octavia-provider/+/907582 .

According to the job history, -release is more unstable than -master:

https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-provider-functional-release&project=openstack%2Fovn-octavia-provider&change=907582&skip=0

https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-provider-functional-master&project=openstack%2Fovn-octavia-provider&change=907582&skip=0

The failures seen there are quite random and I could not find any test
case (or error) consistently failing.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn-octavia-provider

** Tags added: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058400

Title:
  [ovn-octavia-provider] functional jobs are unstable

Status in neutron:
  New

Bug description:
  Functional tests of ovn-octavia-provider are very unstable and fail quite
often.
  This has been caught in
https://review.opendev.org/c/openstack/ovn-octavia-provider/+/907582 .

  According to the job history, -release is more unstable than -master:

  https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-provider-functional-release&project=openstack%2Fovn-octavia-provider&change=907582&skip=0

  https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-provider-functional-master&project=openstack%2Fovn-octavia-provider&change=907582&skip=0

  The failures seen there are quite random and I could not find any test
  case (or error) consistently failing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058400/+subscriptions




[Yahoo-eng-team] [Bug 1884762] Re: Unhandled error: RuntimeError: dictionary changed size during iteration

2024-02-29 Thread Takashi Kajinami
I'm closing this because of inactivity. I've never seen this problem,
either.

** Changed in: oslo.config
   Status: New => Invalid

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1884762

Title:
  Unhandled error: RuntimeError: dictionary changed size during
  iteration

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  Observed in CentOS8 nova-scheduler logs:

  2020-06-23 13:37:46.779 23 ERROR nova Traceback (most recent call last):
  2020-06-23 13:37:46.779 23 ERROR nova   File "/usr/bin/nova-scheduler", line 
10, in 
  2020-06-23 13:37:46.779 23 ERROR nova sys.exit(main())
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/nova/cmd/scheduler.py", line 53, in main
  2020-06-23 13:37:46.779 23 ERROR nova service.serve(server, 
workers=workers)
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/nova/service.py", line 489, in serve
  2020-06-23 13:37:46.779 23 ERROR nova restart_method='mutate')
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_service/service.py", line 842, in launch
  2020-06-23 13:37:46.779 23 ERROR nova launcher.launch_service(service, 
workers=workers)
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_service/service.py", line 606, in 
launch_service
  2020-06-23 13:37:46.779 23 ERROR nova self._start_child(wrap)
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_service/service.py", line 575, in 
_start_child
  2020-06-23 13:37:46.779 23 ERROR nova self.launcher.restart()
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_service/service.py", line 311, in restart
  2020-06-23 13:37:46.779 23 ERROR nova self.conf.mutate_config_files()
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 3013, in 
mutate_config_files
  2020-06-23 13:37:46.779 23 ERROR nova self._warn_immutability()
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 3040, in 
_warn_immutability
  2020-06-23 13:37:46.779 23 ERROR nova for info, group in 
self._all_opt_infos():
  2020-06-23 13:37:46.779 23 ERROR nova   File 
"/usr/lib/python3.6/site-packages/oslo_config/cfg.py", line 2502, in 
_all_opt_infos
  2020-06-23 13:37:46.779 23 ERROR nova for info in self._opts.values():
  2020-06-23 13:37:46.779 23 ERROR nova RuntimeError: dictionary changed size 
during iteration
  2020-06-23 13:37:46.779 23 ERROR nova
  2020-06-23 13:37:46.780 20 CRITICAL nova 
[req-739357c3-918b-4778-ab8b-282fc7fad943 - - - - -] Unhandled error: 
RuntimeError: dictionary changed size during iteration
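
  The traceback is the generic CPython guard against mutating a dict while
  iterating over it; a minimal standalone illustration (not oslo.config
  code):

```
opts = {'a': 1}

def walk():
    for info in opts.values():   # like _all_opt_infos()
        yield info
        opts['b'] = 2            # e.g. an option registered mid-iteration

try:
    list(walk())
except RuntimeError as e:
    print(e)  # dictionary changed size during iteration

# A common defensive pattern is to snapshot before iterating:
for info in list(opts.values()):
    pass
```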

  OpenStack installed using RDO packages
  Package versions:
  (nova-scheduler)[nova@spsrc-controller-2 /]$ rpm -vqa | grep nova
  python3-nova-20.2.0-1.el8.noarch
  python3-novaclient-15.1.0-1.el8.noarch
  openstack-nova-common-20.2.0-1.el8.noarch
  openstack-nova-scheduler-20.2.0-1.el8.noarch
  (nova-scheduler)[nova@spsrc-controller-2 /]$ rpm -vqa | grep oslo
  python-oslo-utils-lang-3.41.5-1.el8.noarch
  python-oslo-i18n-lang-3.24.0-2.el8.noarch
  python-oslo-cache-lang-1.37.0-2.el8.noarch
  python3-oslo-concurrency-3.30.0-2.el8.noarch
  python3-oslo-messaging-10.2.0-2.el8.noarch
  python-oslo-versionedobjects-lang-1.36.1-1.el8.noarch
  python-oslo-policy-lang-2.3.3-1.el8.noarch
  python-oslo-log-lang-3.44.2-1.el8.noarch
  python3-oslo-i18n-3.24.0-2.el8.noarch
  python-oslo-concurrency-lang-3.30.0-2.el8.noarch
  python3-oslo-serialization-2.29.2-2.el8.noarch
  python3-oslo-config-6.11.2-1.el8.noarch
  python3-oslo-log-3.44.2-1.el8.noarch
  python3-oslo-service-1.40.2-2.el8.noarch
  python3-oslo-middleware-3.38.1-2.el8.noarch
  python-oslo-vmware-lang-2.34.1-1.el8.noarch
  python3-oslo-privsep-1.33.3-1.el8.noarch
  python3-oslo-vmware-2.34.1-1.el8.noarch
  python3-oslo-db-5.0.2-2.el8.noarch
  python3-oslo-policy-2.3.3-1.el8.noarch
  python3-oslo-reports-1.30.0-1.el8.noarch
  python3-oslo-rootwrap-5.16.1-1.el8.noarch
  python-oslo-privsep-lang-1.33.3-1.el8.noarch
  python-oslo-middleware-lang-3.38.1-2.el8.noarch
  python-oslo-db-lang-5.0.2-2.el8.noarch
  python3-oslo-utils-3.41.5-1.el8.noarch
  python3-oslo-context-2.23.0-2.el8.noarch
  python3-oslo-cache-1.37.0-2.el8.noarch
  python3-oslo-versionedobjects-1.36.1-1.el8.noarch
  python3-oslo-upgradecheck-0.3.2-1.el8.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1884762/+subscriptions



[Yahoo-eng-team] [Bug 1789351] Re: Glance deployment with python3 + "keystone" paste_deploy flavor Fails

2024-02-28 Thread Takashi Kajinami
This was fixed in keystonemiddleware so I'll close oslo.config bug.

** Changed in: oslo.config
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1789351

Title:
  Glance deployment with python3 + "keystone" paste_deploy flavor Fails

Status in Glance:
  Invalid
Status in keystonemiddleware:
  Fix Released
Status in oslo.config:
  Invalid
Status in python-keystonemiddleware package in Ubuntu:
  Fix Released

Bug description:
This happens with oslo.config >= 6.3.0 [1] + python3 + the "keystone"
paste_deploy flavor + current glance (before
https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30
it worked).
Testing in devstack: https://review.openstack.org/#/c/596380/

The glance API service fails to start with the error below, reproduced in
https://review.openstack.org/#/c/596380/ :
ERROR: dictionary changed size during iteration (see logs below)

  Failure logs from the job:
  http://logs.openstack.org/80/596380/2/check/tempest-full-py3/514fa29/controller/logs/screen-g-api.txt.gz#_Aug_27_07_26_10_698243

  
  The RuntimeError is raised in keystonemiddleware:
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L551
  Code snippet:
      if self._conf.oslo_conf_obj != cfg.CONF:   # <-- fails here
          oslo_cache.configure(self._conf.oslo_conf_obj)

  With pdb we found that an additional key (fatal_deprecations) was added
  to cfg.CONF at the line above, so the error is raised under python3. Under
  python2 the same key is added but no error occurs.

  There are multiple ways to avoid it: use a paste_deploy configuration that
works (e.g. keystone+cachemanagement), use oslo.config <= 6.2.0, use python2,
or update glance
(https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30,
as use_user_token has long been deprecated).
  With keystone+cachemanagement, all the config items were added before
reaching the failure point in keystonemiddleware, so
self._conf.oslo_conf_obj != cfg.CONF didn't raise an error and returned a
boolean. We don't know why.
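
  A sketch of the python2/python3 difference behind "no Error" on python2:

```
d = {'a': 1}
it = iter(d.values())  # python3: a live view over the dict
d['b'] = 2             # any insertion invalidates the iterator
try:
    next(it)
except RuntimeError as e:
    print(e)  # dictionary changed size during iteration
# On python2, d.values() returned a list snapshot, so this pattern passed.
```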

  But it seems a real issue to me, as it may happen under python3 in other
places too. So it would be good if the teams from the affected projects
(oslo.config, keystonemiddleware, glance) could look at it and fix it (not
just avoid it) in the best place.
  To me it looks like keystonemiddleware is not handling (comparing) the dict
properly under python3, as the conf is dynamically updated (how? and when?).

  - Can the oslo.config team check whether glance and keystonemiddleware are
handling/using oslo.config properly?
  - keystone+cachemanagement has been the default in devstack for the last 6
years; is the "keystone" flavor supported? If yes, it should be fixed. It
would also be good to clean up the options that have been deprecated since
Mitaka.
  - If oslo.config is wrongly used in keystonemiddleware/glance, it would be
good to fix it there.

  
  Initially detected while testing with Fedora[2]; we later dug into why it
  works in CI with Ubuntu and started [3].

  
  [1] https://review.openstack.org/#/c/560094/
  [2] https://review.rdoproject.org/r/#/c/14921/
  [3] https://review.openstack.org/#/c/596380/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1789351/+subscriptions




[Yahoo-eng-team] [Bug 1284431] Re: nova-compute doesn't reconnect properly after control plane outage

2024-02-20 Thread Takashi Kajinami
** Changed in: oslo.messaging
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284431

Title:
  nova-compute doesn't reconnect properly after control plane outage

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Invalid
Status in tripleo:
  Fix Released

Bug description:
  We had to reboot the control node for ci-overcloud, after that, and
  ensuring it was online again properly...

  
  +------------------+-------------------------------------------------+----------+---------+-------+------------------------+
  | Binary           | Host                                            | Zone     | Status  | State | Updated_at             |
  +------------------+-------------------------------------------------+----------+---------+-------+------------------------+
  | nova-conductor   | ci-overcloud-notcompute0-gxezgcvv4v2q           | internal | enabled | down  | 2014-02-24T19:42:26.0  |
  | nova-cert        | ci-overcloud-notcompute0-gxezgcvv4v2q           | internal | enabled | down  | 2014-02-24T19:42:18.0  |
  | nova-scheduler   | ci-overcloud-notcompute0-gxezgcvv4v2q           | internal | enabled | down  | 2014-02-24T19:42:26.0  |
  | nova-consoleauth | ci-overcloud-notcompute0-gxezgcvv4v2q           | internal | enabled | down  | 2014-02-24T19:42:18.0  |
  | nova-compute     | ci-overcloud-novacompute4-5aywwwqlmtv3          | nova     | enabled | down  | 2014-02-25T02:07:37.0  |
  | nova-compute     | ci-overcloud-novacompute7-mosbehy6ikhz          | nova     | enabled | down  | 2014-02-25T02:07:44.0  |
  | nova-compute     | ci-overcloud-novacompute0-vidddfuaauhw          | nova     | enabled | down  | 2014-02-25T02:07:36.0  |
  | nova-compute     | ci-overcloud-novacompute6-6fnuizd4n4gv          | nova     | enabled | down  | 2014-02-25T02:07:36.0  |
  | nova-compute     | ci-overcloud-novacompute1-4q2dbhdklrkq          | nova     | enabled | down  | 2014-02-25T02:07:43.0  |
  | nova-compute     | ci-overcloud-novacompute5-y27zvc4o5fps          | nova     | enabled | down  | 2014-02-25T02:07:36.0  |
  | nova-compute     | ci-overcloud-novacompute3-sxibwe5v5gpw          | nova     | enabled | down  | 2014-02-25T02:08:40.0  |
  | nova-compute     | ci-overcloud-novacompute8-4qu2kxq4e6pb          | nova     | enabled | down  | 2014-02-25T02:08:41.0  |
  | nova-compute     | ci-overcloud-novacompute2-tvsutghnaofq          | nova     | enabled | down  | 2014-02-25T02:07:36.0  |
  | nova-compute     | ci-overcloud-novacompute9-qt7sqeqcexjh          | nova     | enabled | down  | 2014-02-25T02:08:45.0  |
  | nova-scheduler   | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up    | 2014-02-25T03:24:53.0  |
  | nova-conductor   | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up    | 2014-02-25T03:24:59.0  |
  | nova-consoleauth | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up    | 2014-02-25T03:24:53.0  |
  | nova-cert        | ci-overcloud-notcompute0-gxezgcvv4v2q.novalocal | internal | enabled | up    | 2014-02-25T03:24:51.0  |
  +------------------+-------------------------------------------------+----------+---------+-------+------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284431/+subscriptions




[Yahoo-eng-team] [Bug 1393182] Re: nova compute failed to report health when nova conductor started

2024-02-20 Thread Takashi Kajinami
** Changed in: oslo.messaging
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1393182

Title:
  nova compute failed to report health when nova conductor started

Status in Compass:
  Confirmed
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Invalid

Bug description:
  I have an icehouse openstack deployment that includes one controller and 3
computes. The controller was the last to come up. When rabbitmq started,
nova-compute tried to connect to it. As the log shows, it seemed that it
finally got connected, but when doing nova service-list, the nova-compute
service was still down. I later restarted nova-compute, and this time nova
service-list showed that nova-compute became UP.
  Two other nova-compute services remain in down status; probably, if
restarted, they would become UP as well. If any more info is needed, let me
know.

  
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
622, in ensure
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return method(*args, **kwargs)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
702, in _consume
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return self.connection.drain_events(timeout=timeout)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/kombu/connection.py", line 139, in 
drain_events
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return self.transport.drain_events(self.connection, **kwargs)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/kombu/transport/pyamqplib.py", line 223, in 
drain_events
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return connection.drain_events(**kwargs)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/kombu/transport/pyamqplib.py", line 56, in 
drain_events
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return self.wait_multi(self.channels.values(), timeout=timeout)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/kombu/transport/pyamqplib.py", line 81, in 
wait_multi
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
return amqp_method(channel, args)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/connection.py", line 365, 
in _close
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
self._x_close_ok()
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/connection.py", line 384, 
in _x_close_ok
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
self._send_method((10, 61))
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/abstract_channel.py", line 
70, in _send_method
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
method_sig, args, content)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/method_framing.py", line 
233, in write_method
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
self.dest.write_frame(1, channel, payload)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/amqplib/client_0_8/transport.py", line 125, 
in write_frame
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
frame_type, channel, size, payload, 0xce))
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 359, in sendall
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
tail = self.send(data, flags)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.6/site-packages/eventlet/greenio.py", line 342, in send
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit 
total_sent += fd.send(data[total_sent:], flags)
  2014-11-14 21:34:43.449 4314 TRACE oslo.messaging._drivers.impl_rabbit error: 
[Errno 104] Connection reset by peer
  2014-11-14 21:34:43.449 4314 TRACE 

[Yahoo-eng-team] [Bug 1404241] Re: nova-compute state not updated

2024-02-20 Thread Takashi Kajinami
I'll close this because of inactivity. Please reopen this if you still
see the problem.

** Changed in: oslo.messaging
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404241

Title:
  nova-compute state not updated

Status in OpenStack Compute (nova):
  Expired
Status in oslo.messaging:
  Invalid

Bug description:
  I'm running 2014.2.1 on CentOS7. 1 controller and 5 compute nodes are
  deployed using packstack.

  Whenever I reboot the controller node, some nova-compute services report
  state=XXX even 60 minutes after the reboot completed and the controller
  node is up and running again:

  [root@juno1 ~(keystone_admin)]# nova-manage service list
  Binary            Host   Zone      Status   State  Updated_At
  nova-consoleauth  juno1  internal  enabled  :-)    2014-12-19 13:17:48
  nova-scheduler    juno1  internal  enabled  :-)    2014-12-19 13:17:47
  nova-conductor    juno1  internal  enabled  :-)    2014-12-19 13:17:47
  nova-cert         juno1  internal  enabled  :-)    2014-12-19 13:17:48
  nova-compute      juno4  nova      enabled  XXX    2014-12-19 12:26:56
  nova-compute      juno5  nova      enabled  :-)    2014-12-19 13:17:47
  nova-compute      juno6  nova      enabled  :-)    2014-12-19 13:17:46
  nova-compute      juno3  nova      enabled  :-)    2014-12-19 13:17:46
  nova-compute      juno2  nova      enabled  XXX    2014-12-19 12:21:52

  Here is a chunk of the nova-compute log from juno4:

  2014-12-19 15:46:02.082 5193 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2014-12-19 15:46:02.083 5193 ERROR oslo.messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: Socket closed
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
655, in ensure
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return method()
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
735, in _consume
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.connection.drain_events(timeout=timeout)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 281, in 
drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.transport.drain_events(self.connection, **kwargs)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 94, in 
drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return connection.drain_events(**kwargs)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 299, in drain_events
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
chanmap, None, timeout=timeout,
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 362, in 
_wait_multiple
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
channel, method_sig, args, content = read_timeout(timeout)
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 333, in read_timeout
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
return self.method_reader.read_method()
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 189, in 
read_method
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
raise m
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
IOError: Socket closed
  2014-12-19 15:46:02.083 5193 TRACE oslo.messaging._drivers.impl_rabbit 
  2014-12-19 15:46:02.084 5193 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 1.0 seconds...
  2014-12-19 15:46:03.084 5193 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP 

[Yahoo-eng-team] [Bug 1604479] Re: tenantId/default_project_id missing on Keystone service user in Mitaka

2024-02-20 Thread Takashi Kajinami
Closing this because of inactivity.

** Changed in: puppet-keystone
 Assignee: Kam Nasim (knasim-wrs) => (unassigned)

** Changed in: keystone
 Assignee: Kam Nasim (knasim-wrs) => (unassigned)

** Changed in: puppet-keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1604479

Title:
  tenantId/default_project_id missing on Keystone service user in Mitaka

Status in OpenStack Identity (keystone):
  Invalid
Status in puppet-keystone:
  Invalid

Bug description:
  On upgrading to Mitaka, we saw that the user ref in Keystone does not have
a tenantId or default_project_id field. This breaks:
  1) the detailed view in Horizon's Identity pane, where the Project ID is
shown as "None"
  2) any services-project-based RBAC policies that we have in place.

  Noticed a new local_user DB table for all the services users (no 
project/tenantId field in here):
  keystone=# select * from local_user;
   id | user_id                          | domain_id | name
  ----+----------------------------------+-----------+------------
    1 | 3c1bd8c0f6324dcc938900d8eb801aa5 | default   | admin
    2 | d1c4f7a244f74892b612b9b2ded6d602 | default   | neutron
    3 | a481a1f43ec0463083b7a30d20493d38 | default   | nova
    4 | 951068b3372f47ac827ade8f67cc19b4 | default   | glance
    6 | 4b76763e375946998445b65b11c8db73 | default   | ceilometer
    7 | 15c8e1e463cc4370ad369eaf8504b727 | default   | cinder
    8 | 5c3ea23eb8e14070bc562951bb266073 | default   | sysinv
    9 | 2b62ced877244e74ba90b546225740d0 | default   | heat
   10 | 5a506282b45c4064b262f3f414f1f699 | default   | kam
  (9 rows)

  
  Note that an admin role is assigned for these services users in the services 
project. It is just not present within the user reference or keystone user-get:

  $ keystone user-role-list

  
  +----------------------------------+-------+----------------------------------+----------------------------------+
  | id                               | name  | user_id                          | tenant_id                        |
  +----------------------------------+-------+----------------------------------+----------------------------------+
  | f9985117736b4684904b4eb55476f30a | admin | a481a1f43ec0463083b7a30d20493d38 | c211dda10c9a4b2db16f239dccf65acd |
  +----------------------------------+-------+----------------------------------+----------------------------------+

  $ keystone user-get

  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    | nova@localhost                   |
  | enabled  | True                             |
  | id       | a481a1f43ec0463083b7a30d20493d38 |
  | name     | nova                             |
  | username | nova                             |
  +----------+----------------------------------+

  
  Contrast this to Kilo/Liberty where tenantId is visible within user reference:

  
  $ keystone user-get b7a3bcd588b5482ab9741efcf3f9bb33

  +----------+----------------------------------+
  | Property | Value                            |
  +----------+----------------------------------+
  | email    | nova@localhost                   |
  | enabled  | True                             |
  | id       | b7a3bcd588b5482ab9741efcf3f9bb33 |
  | name     | nova                             |
  | tenantId | 2e4a21e1a37840879321320107c74f86 | <<
  | username | nova                             |
  +----------+----------------------------------+
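
  For anyone hitting this, a default project can still be set explicitly so
  the field is populated again (OSC syntax; names illustrative):

```
openstack user set --project services nova
openstack user show nova   # default_project_id should now be populated
```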

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1604479/+subscriptions




[Yahoo-eng-team] [Bug 2054203] [NEW] Default flag change in netaddr breaks (at least) unit tests

2024-02-17 Thread Takashi Kajinami
Public bug reported:

netaddr 1.0.0 changed the default parsing mode from INET_ATON to
INET_PTON[1]. This was initially applied to netaddr.IPAddress and then
later to netaddr.IPNetwork in 1.1.0 [2].

While we attempted to bump netaddr to 1.0.1, we noticed this change
broke some of the unit tests in neutron.

https://zuul.opendev.org/t/openstack/build/8cfad48dcfb84be893fe78a1f965c5e6

(example)
```
neutron.tests.unit.agent.l3.extensions.test_ndp_proxy.NDPProxyExtensionDVRTestCase.test__get_snat_idx_ipv4
--

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 346, in __init__
self._value = self._module.str_to_int(addr, flags)
  
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/strategy/ipv4.py",
 line 120, in str_to_int
raise error
netaddr.core.AddrFormatError: '101.12.13.00' is not a valid IPv4 address 
string!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 1019, in __init__
value, prefixlen = parse_ip_network(module, addr, flags)
   ^
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 899, in parse_ip_network
ip = IPAddress(val1, module.version, flags=INET_PTON)
 
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 348, in __init__
raise AddrFormatError(
netaddr.core.AddrFormatError: base address '101.12.13.00' is not IPv4

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, 
in func
return f(self, *args, **kwargs)
   
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 178, 
in func
return f(self, *args, **kwargs)
   
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/agent/l3/test_dvr_local_router.py",
 line 549, in test__get_snat_idx_ipv4
snat_idx = ri._get_snat_idx(ip_cidr)
   ^
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/l3/dvr_local_router.py",
 line 412, in _get_snat_idx
net = netaddr.IPNetwork(ip_cidr)
  ^^
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 1028, in __init__
raise AddrFormatError('invalid IPNetwork %s' % (addr,))
netaddr.core.AddrFormatError: invalid IPNetwork 101.12.13.00/24
```

An "easy" fix is adding flags=INET_ATON to all affected places (note that
INET_ATON was added in netaddr 0.10.0), but I'd like to ask someone from the
neutron team to look into this and check whether we really have to use
INET_ATON, or instead fix the unit tests to satisfy the stricter rule.

[1] https://netaddr.readthedocs.io/en/latest/changes.html#release-1-0-0
[2] https://netaddr.readthedocs.io/en/latest/changes.html#release-1-1-0
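
A sketch of the "easy" fix mentioned above (opting back into the permissive
parser per call site):

```
import netaddr
from netaddr import INET_ATON

# The new default (INET_PTON) rejects the zero-padded octet:
#   netaddr.IPNetwork('101.12.13.00/24')  -> AddrFormatError

# The old permissive behaviour, requested explicitly:
net = netaddr.IPNetwork('101.12.13.00/24', flags=INET_ATON)
```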

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2054203

Title:
  Default flag change in netaddr breaks (at least) unit tests

Status in neutron:
  New

Bug description:
  netaddr 1.0.0 changed the default parsing mode from INET_ATON to
  INET_PTON[1]. This was initially applied to netaddr.IPAddress and then
  later to netaddr.IPNetwork in 1.1.0 [2].

  While we attempted to bump netaddr to 1.0.1, we noticed this change
  broke some of the unit tests in neutron.

  https://zuul.opendev.org/t/openstack/build/8cfad48dcfb84be893fe78a1f965c5e6

  (example)
  ```
  
neutron.tests.unit.agent.l3.extensions.test_ndp_proxy.NDPProxyExtensionDVRTestCase.test__get_snat_idx_ipv4
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py311/lib/python3.11/site-packages/netaddr/ip/__init__.py",
 line 346, in __init__
  self._value = self._module.str_to_int(addr, flags)

[Yahoo-eng-team] [Bug 2053227] [NEW] [ovn-octavia-provider] ovn-octavia-provider-functional-master is broken by ovn build failure

2024-02-15 Thread Takashi Kajinami
Public bug reported:

The ovn-octavia-provider-functional-master job consistently fails during
setup.
The log indicates the OVN build is failing.

example: 
https://zuul.opendev.org/t/openstack/build/65fafcb26fdb4b9b97d9ce481f70037e
```
...
gcc -DHAVE_CONFIG_H -I.   -I ./include  -I ./include -I ./ovn -I ./include -I 
./lib -I ./lib -I /home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/lib -I 
/home/zuul/src/opendev.org/openstack/ovs/lib -I 
/home/zuul/src/opendev.org/openstack/ovs -I 
/home/zuul/src/opendev.org/openstack/ovs-Wstrict-prototypes -Wall -Wextra 
-Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security -Wswitch-enum 
-Wunused-parameter -Wbad-function-cast -Wcast-align -Wstrict-prototypes 
-Wold-style-definition -Wmissing-prototypes -Wmissing-field-initializers 
-fno-strict-aliasing -Wswitch-bool -Wlogical-not-parentheses 
-Wsizeof-array-argument -Wbool-compare -Wshift-negative-value -Wduplicated-cond 
-Wshadow -Wmultistatement-macros -Wcast-align=strict   -g -O2 -MT 
controller/physical.o -MD -MP -MF $depbase.Tpo -c -o controller/physical.o 
controller/physical.c &&\
mv -f $depbase.Tpo $depbase.Po
controller/ofctrl.c: In function ‘ofctrl_inject_pkt’:   
controller/ofctrl.c:3048:5: error: too many arguments to function 
‘flow_compose’
 3048 | flow_compose(, , NULL, 64, false);
  | ^~~~
In file included from 
/home/zuul/src/opendev.org/openstack/ovs/lib/dp-packet.h:34,
 from controller/ofctrl.c:21:
/home/zuul/src/opendev.org/openstack/ovs/lib/flow.h:129:6: note: declared here  
   
  129 | void flow_compose(struct dp_packet *, const struct flow *,
  |  ^~~~
make[1]: *** [Makefile:2369: controller/ofctrl.o] Error 1  
make[1]: *** Waiting for unfinished jobs
controller/pinctrl.c: In function ‘pinctrl_ip_mcast_handle_igmp’:   
   
controller/pinctrl.c:5488:54: error: ‘MCAST_GROUP_IGMPV1’ undeclared (first 
use in this function)
 5488 |   port_key_data, 
MCAST_GROUP_IGMPV1);
  |  ^~
controller/pinctrl.c:5488:54: note: each undeclared identifier is reported only 
once for each function it appears in
controller/pinctrl.c:5487:13: error: too many arguments to function 
‘mcast_snooping_add_group4’
 5487 | mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
  | ^
In file included from controller/ip-mcast.h:19,
 from controller/pinctrl.c:64:
/home/zuul/src/opendev.org/openstack/ovs/lib/mcast-snooping.h:190:6: note: 
declared here
  190 | bool mcast_snooping_add_group4(struct mcast_snooping *ms, ovs_be32 ip4,
  |  ^
controller/pinctrl.c:5493:54: error: ‘MCAST_GROUP_IGMPV2’ undeclared (first 
use in this function)
 5493 |   port_key_data, 
MCAST_GROUP_IGMPV2);
  |  ^~
controller/pinctrl.c:5492:13: error: too many arguments to function 
‘mcast_snooping_add_group4’
 5492 | mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
  | ^
In file included from controller/ip-mcast.h:19,
 from controller/pinctrl.c:64:
/home/zuul/src/opendev.org/openstack/ovs/lib/mcast-snooping.h:190:6: note: 
declared here
  190 | bool mcast_snooping_add_group4(struct mcast_snooping *ms, ovs_be32 ip4,
  |  ^
make[1]: *** [Makefile:2369: controller/pinctrl.o] Error 1  
   
make[1]: Leaving directory '/home/zuul/src/opendev.org/openstack/ovn'   
   
make: *** [Makefile:1528: all] Error 2
```
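
The errors are typical OVN-vs-OVS skew: the OVS tree checked out by the job
no longer matches the function signatures OVN expects. One way to avoid this
when building locally is to compile OVN against the OVS revision it pins as
a submodule (a sketch, not necessarily the job's fix):

```
cd ovn
git submodule update --init ovs   # the OVS revision OVN expects
./boot.sh
./configure --with-ovs-source=$PWD/ovs
make -j$(nproc)
```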

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Tags added: ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2053227

Title:
  [ovn-octavia-provider] ovn-octavia-provider-functional-master is
  broken by ovn build failure

Status in neutron:
  In Progress

Bug description:
  The ovn-octavia-provider-functional-master job consistently fails during
setup.
  The log indicates the OVN build is failing.

  
  example: 
https://zuul.opendev.org/t/openstack/build/65fafcb26fdb4b9b97d9ce481f70037e
  ```
  ...
  gcc -DHAVE_CONFIG_H -I.   -I ./include  -I ./include -I ./ovn -I ./include -I 
./lib -I ./lib -I /home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/include -I 
/home/zuul/src/opendev.org/openstack/ovs/lib -

[Yahoo-eng-team] [Bug 1461251] Re: Stop using deprecated oslo_utils.timeutils.isotime

2024-02-11 Thread Takashi Kajinami
The interface in question was removed by
https://review.opendev.org/c/openstack/oslo.utils/+/842344

** Changed in: oslo.utils
   Status: In Progress => Fix Released

** Changed in: oslo.utils
 Assignee: Chenghui Yu (chenghuiyu) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1461251

Title:
  Stop using deprecated oslo_utils.timeutils.isotime

Status in cloudkitty:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in oslo.utils:
  Fix Released
Status in python-keystoneclient:
  Fix Released

Bug description:
  oslo_utils.timeutils.isotime() is deprecated as of 1.6 so we need to
  stop using it.

  This breaks unit tests in keystone since we've got a check for calling
  deprecated functions.
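
  For reference, a minimal stdlib-only sketch of the usual replacement
  (an assumption about the general approach, not the exact patch merged
  in each project):

```
import datetime

def isotime(at=None):
    # Stand-in for the removed oslo_utils.timeutils.isotime(). Note the
    # offset is rendered as '+00:00' rather than the old 'Z' suffix, so
    # callers comparing raw timestamp strings may need adjusting.
    at = at or datetime.datetime.now(datetime.timezone.utc)
    return at.isoformat()
```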

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloudkitty/+bug/1461251/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1857114] Re: zuul job dsvm-grenade error

2024-02-10 Thread Takashi Kajinami
This does not look like an actual problem in oslo.cache, but very likely
a deployment problem: the traceback shows the memcache client library is
simply not installed.

** Changed in: oslo.cache
   Status: New => Incomplete

** Changed in: oslo.cache
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1857114

Title:
  zuul job dsvm-grenade error

Status in networking-ovn:
  Incomplete
Status in OpenStack Compute (nova):
  Incomplete
Status in oslo.cache:
  Invalid

Bug description:
  Many tasks have the following errors

  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage [None 
req-12e00068-1f43-43ea-9608-98b5c83b16bc None None] 
Error attempting to run : ModuleNotFoundError: No module named 'memcache'
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage Traceback (most 
recent call last):
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/cmd/manage.py", line 504, in _run_migration
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage found, done 
= migration_meth(ctxt, count)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/usr/local/lib/python3.6/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 1015, in wrapper
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage return 
fn(*args, **kwargs)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/virtual_interface.py", line 279, in 
fill_virtual_interface_list
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
_set_or_delete_marker_for_migrate_instances(cctxt, marker)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 223, in wrapped
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage return 
f(context, *args, **kwargs)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/virtual_interface.py", line 305, in 
_set_or_delete_marker_for_migrate_instances
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
instance.create()
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/usr/local/lib/python3.6/dist-packages/oslo_versionedobjects/base.py", line 
226, in wrapper
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage return 
fn(self, *args, **kwargs)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 629, in create
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
self._load_ec2_ids()
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 949, in _load_ec2_ids
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage self.ec2_ids 
= objects.EC2Ids.get_by_instance(self._context, self)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/usr/local/lib/python3.6/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage result = 
fn(cls, context, *args, **kwargs)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/ec2.py", line 231, in get_by_instance
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage ec2_ids = 
cls._get_ec2_ids(context, instance)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/ec2.py", line 217, in _get_ec2_ids
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
ec2_ids['instance_id'] = id_to_ec2_inst_id(context, instance.uuid)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/ec2.py", line 57, in id_to_ec2_inst_id
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage int_id = 
get_int_id_from_instance_uuid(context, instance_id)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/objects/ec2.py", line 36, in memoizer
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage _CACHE = 
cache_utils.get_client(expiration_time=_CACHE_TIME)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/cache_utils.py", line 54, in get_client
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
_get_default_cache_region(expiration_time=expiration_time))
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage   File 
"/opt/stack/new/nova/nova/cache_utils.py", line 65, in _get_default_cache_region
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 
cache.configure_cache_region(CONF, region)
  2019-12-19 05:32:35.240 | ERROR nova.cmd.manage 

[Yahoo-eng-team] [Bug 1657452] Re: Incompatibility with python-webob 1.7.0

2024-02-10 Thread Takashi Kajinami
I think this was fixed in oslo.middleware by
https://review.opendev.org/c/openstack/oslo.middleware/+/453712

** Changed in: oslo.middleware
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657452

Title:
  Incompatibility with python-webob 1.7.0

Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.middleware:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in keystone package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in python-oslo.middleware package in Ubuntu:
  Fix Released

Bug description:
  
  
keystone.tests.unit.test_v3_federation.WebSSOTests.test_identity_provider_specific_federated_authentication
  
---

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_v3_federation.py", line 4067, in 
test_identity_provider_specific_federated_authentication
  self.PROTOCOL)
File "keystone/federation/controllers.py", line 345, in 
federated_idp_specific_sso_auth
  return self.render_html_response(host, token_id)
File "keystone/federation/controllers.py", line 357, in 
render_html_response
  headerlist=headers)
File "/usr/lib/python2.7/dist-packages/webob/response.py", line 310, in 
__init__
  "You cannot set the body to a text value without a "
  TypeError: You cannot set the body to a text value without a charset
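
  For context, a minimal sketch of the incompatibility and the usual fix:
  on webob >= 1.7.0 a text (unicode) body needs an explicit charset so
  webob can encode it.

```
import webob

body = u'<html><body>hello</body></html>'  # text, not bytes

# Raises on webob >= 1.7.0:
#   TypeError: You cannot set the body to a text value without a charset
# resp = webob.Response(body=body, content_type='text/html')

# Works: declare the charset so webob can encode the text body itself.
resp = webob.Response(body=body, content_type='text/html', charset='utf-8')
```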

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1657452/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052761] [NEW] libvirt: swtpm_ioctl is required for vTPM support

2024-02-08 Thread Takashi Kajinami
Public bug reported:

Description
===
Libvirt uses swtpm_ioctl to shut down the swtpm process at VM termination,
because QEMU does not send the shutdown command.
However the binary is not included in the required binaries (swtpm and
swtpm_setup, at the time of writing) checked by the libvirt driver. So users
can enable vTPM support without the binary, which leaves swtpm processes
running after VM termination.

Steps to reproduce
==
* Deploy nova-compute with vTPM support
* Move swtpm_ioctl from PATH
* Restart nova-compute
* Check capabilities reported by nova-compute

Expected result
===
The report shows no swtpm support

Actual result
=
The report shows swtpm support

Environment
===
This issue was initially found in master, but would be present in stable 
branches.

Logs & Configs
==
N/A
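
For illustration, a minimal sketch of the kind of capability check described
above (hypothetical helper and constant names, not the actual driver code):

```
import shutil

# libvirt needs all three helpers: swtpm to run the emulated TPM,
# swtpm_setup to initialize it, and swtpm_ioctl to shut it down at VM
# termination (QEMU does not send the shutdown command itself).
REQUIRED_SWTPM_BINARIES = ('swtpm', 'swtpm_setup', 'swtpm_ioctl')

def swtpm_supported():
    # shutil.which() mirrors the PATH lookup for each required binary.
    return all(shutil.which(binary) for binary in REQUIRED_SWTPM_BINARIES)
```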

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Description changed:

  Description
  ===
- 
  Libvirt uses swtpm_ioctl to shutdown the swtpm process at VM termination, 
because QEMU does not send shutdown command.
  However the binary is not included in the required binaries (swtpm and 
swtpm_setup, at the time of writing) checked by libvirt driver. So users can 
use vTPM support without binaries, which leaves swtpm processes kept running.
  
  Steps to reproduce
  ==
  * Deploy nova-compute with vTPM support
  * Move swtpm_ioctl from PATH
  * Restart nova-compute
  * Check capabilities reported by nova-compute
  
  Expected result
  ===
  The report shows no swtpm support
  
  Actual result
  =
  The report shows swtpm support
  
  Environment
  ===
  This issue was initially found in master, but would be present in stable 
branches.
  
  Logs & Configs
  ==
  N/A

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052761

Title:
  libvirt: swtpm_ioctl is required for vTPM support

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  Libvirt uses swtpm_ioctl to shut down the swtpm process at VM termination,
because QEMU does not send the shutdown command.
  However the binary is not included in the required binaries (swtpm and
swtpm_setup, at the time of writing) checked by the libvirt driver. So users
can enable vTPM support without the binary, which leaves swtpm processes
running after VM termination.

  Steps to reproduce
  ==
  * Deploy nova-compute with vTPM support
  * Move swtpm_ioctl from PATH
  * Restart nova-compute
  * Check capabilities reported by nova-compute

  Expected result
  ===
  The report shows no swtpm support

  Actual result
  =
  The report shows swtpm support

  Environment
  ===
  This issue was initially found in master, but would be present in stable 
branches.

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2052761/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052760] [NEW] libvirt: swtpm_setup and swtpm are BOTH required for vtpm support

2024-02-08 Thread Takashi Kajinami
Public bug reported:

Description
===
Currently the libvirt driver only ensures that either swtpm_setup or swtpm is
present for vTPM support.
However libvirt requires BOTH of these to set up and run swtpm.


Steps to reproduce
==
* Deploy nova-compute with swtpm support
* Remove swtpm_setup from PATH
* Restart nova-compute
* Check capabilities reported by nova-compute

Expected result
===
The report shows no swtpm support

Actual result
=
The report shows swtpm support

Environment
===
This issue was initially found in master, but would be present in stable 
branches.

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052760

Title:
  libvirt: swtpm_setup and swtpm are BOTH required for vtpm support

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Currently the libvirt driver only ensures that either swtpm_setup or swtpm
is present for vTPM support.
  However libvirt requires BOTH of these to set up and run swtpm.

  
  Steps to reproduce
  ==
  * Deploy nova-compute with swtpm support
  * Remove swtpm_setup from PATH
  * Restart nova-compute
  * Check capabilities reported by nova-compute

  Expected result
  ===
  The report shows no swtpm support

  Actual result
  =
  The report shows swtpm support

  Environment
  ===
  This issue was initially found in master, but would be present in stable 
branches.

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2052760/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052484] [NEW] [DEFAULT] rpc_worker=0 leaves one rpc worker

2024-02-05 Thread Takashi Kajinami
Public bug reported:

Since https://review.opendev.org/c/openstack/neutron/+/823637 was
merged, neutron-server allows disabling rpc worker by setting::

[DEFAULT]
rpc_worker=0


However, I observe one rpc worker is still kept even with this setting.

From neutron-server log, rpc_workers and rpc_state_report_workers are
set to 0.

2024-02-05 13:05:33.159 70458 DEBUG neutron.common.config [-] 
rpc_state_report_workers   = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
2024-02-05 13:05:33.159 70458 DEBUG neutron.common.config [-] rpc_workers   
 = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602

ps shows there is one rpc worker running.

neutron70458   1   70458  0.3  1.8 133664 144496 /usr/bin/python3 -s 
/usr/bin/neutron-server ... 
neutron70499   70458   70499 11.4  3.1 246792 248240 neutron-server: api 
worker (...)
neutron70500   70458   70500 11.0  3.1 243640 249488 neutron-server: api 
worker (...)
neutron70502   70458   70502  0.3  1.7 141196 142132 neutron-server: rpc 
worker (...)
neutron70503   70458   70503  0.3  1.8 145256 146356 neutron-server: 
MaintenanceWorker (...)
neutron70504   70458   70504  0.0  1.7 135472 135604 neutron-server: 
periodic worker (...)

I've noticed this in Puppet OpenStack jobs which uses RDO master
packages.

The package versions currently used are::

openstack-neutron-24.0.0-0.20240131211457.b85b19e.el9.noarch
openstack-neutron-common-24.0.0-0.20240131211457.b85b19e.el9.noarch
openstack-neutron-ml2-24.0.0-0.20240131211457.b85b19e.el9.noarch
openstack-neutron-ovn-agent-24.0.0-0.20240131211457.b85b19e.el9.noarch
openstack-neutron-ovn-metadata-agent-24.0.0-0.20240131211457.b85b19e.el9.noarch
openstack-neutron-rpc-server-24.0.0-0.20240131211457.b85b19e.el9.noarch
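
One classic way an explicit 0 gets turned back into 1 (illustrative only,
not necessarily neutron's actual code) is a truthiness-based default:

```
# Illustrative only: 0 is falsy, so `or` silently replaces it.
configured_workers = 0
workers = configured_workers or 1
assert workers == 1  # the operator's explicit 0 is lost

# An explicit None check keeps 0 meaningful:
workers = 1 if configured_workers is None else configured_workers
assert workers == 0
```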

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052484

Title:
  [DEFAULT] rpc_worker=0 leaves one rpc worker

Status in neutron:
  New

Bug description:
  Since https://review.opendev.org/c/openstack/neutron/+/823637 was
  merged, neutron-server allows disabling rpc worker by setting::

  [DEFAULT]
  rpc_worker=0

  
  However, I observe one rpc worker is still kept even with this setting.

  From neutron-server log, rpc_workers and rpc_state_report_workers are
  set to 0.

  2024-02-05 13:05:33.159 70458 DEBUG neutron.common.config [-] 
rpc_state_report_workers   = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
  2024-02-05 13:05:33.159 70458 DEBUG neutron.common.config [-] rpc_workers 
   = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602

  ps shows there is one rpc worker running.

  neutron70458   1   70458  0.3  1.8 133664 144496 /usr/bin/python3 -s 
/usr/bin/neutron-server ... 
  neutron70499   70458   70499 11.4  3.1 246792 248240 neutron-server: api 
worker (...)
  neutron70500   70458   70500 11.0  3.1 243640 249488 neutron-server: api 
worker (...)
  neutron70502   70458   70502  0.3  1.7 141196 142132 neutron-server: rpc 
worker (...)
  neutron70503   70458   70503  0.3  1.8 145256 146356 neutron-server: 
MaintenanceWorker (...)
  neutron70504   70458   70504  0.0  1.7 135472 135604 neutron-server: 
periodic worker (...)

  I've noticed this in Puppet OpenStack jobs which uses RDO master
  packages.

  The package versions currently used are::

  openstack-neutron-24.0.0-0.20240131211457.b85b19e.el9.noarch
  openstack-neutron-common-24.0.0-0.20240131211457.b85b19e.el9.noarch
  openstack-neutron-ml2-24.0.0-0.20240131211457.b85b19e.el9.noarch
  openstack-neutron-ovn-agent-24.0.0-0.20240131211457.b85b19e.el9.noarch
  
openstack-neutron-ovn-metadata-agent-24.0.0-0.20240131211457.b85b19e.el9.noarch
  openstack-neutron-rpc-server-24.0.0-0.20240131211457.b85b19e.el9.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052484/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1944043] Re: Wrong exception is expected to retry volume detachment API calls

2024-02-01 Thread Takashi Kajinami
** Changed in: nova/yoga
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1944043

Title:
  Wrong exception is expected to retry volume detachment API calls

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New
Status in OpenStack Compute (nova) train series:
  In Progress
Status in OpenStack Compute (nova) ussuri series:
  In Progress
Status in OpenStack Compute (nova) victoria series:
  Fix Committed
Status in OpenStack Compute (nova) wallaby series:
  Fix Committed
Status in OpenStack Compute (nova) xena series:
  Fix Released
Status in OpenStack Compute (nova) yoga series:
  Fix Released

Bug description:
  Description
  ===
  The following change introduced the logic to retry cinder API calls to detach 
volumes.

  https://review.opendev.org/c/openstack/nova/+/669674

  The logic detects the InternalServerError class from
  cinderclient.apiclient.exceptions.

  However this is wrong: these API calls raise the ClientException
  class from cinderclient.exceptions instead.

  Steps to reproduce
  ==
  N/A

  Actual result
  =
  N/A

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A
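
  A minimal sketch of the fix described above (a hypothetical wrapper,
  not the merged nova patch): retry on
  cinderclient.exceptions.ClientException, which carries the HTTP status
  code, instead of the InternalServerError class.

```
from cinderclient import exceptions as cinder_exception

def call_cinder_with_retry(func, *args, max_attempts=3, **kwargs):
    # Hypothetical helper: the detachment calls raise ClientException,
    # not cinderclient.apiclient.exceptions.InternalServerError.
    for attempt in range(1, max_attempts + 1):
        try:
            return func(*args, **kwargs)
        except cinder_exception.ClientException as exc:
            # Retry only server-side (5xx) failures, and give up
            # after the last attempt.
            code = getattr(exc, 'code', None) or 0
            if attempt == max_attempts or code < 500:
                raise
```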

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1944043/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607395] Re: Traceback in dynamic metadata driver: unexpected keyword argument 'extra_md'

2024-01-23 Thread Takashi Kajinami
The vendordata_driver option was already removed, so there is nothing
left to "fix" here.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607395

Title:
  Traceback in dynamic metadata driver: unexpected keyword argument
  'extra_md'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Using new dynamic metadata driver fails with a traceback:

  ERROR nova.api.metadata.handler [req-d4df1623-dc4a-4e9c-b129-1e5dd76c59ac 
None None] Failed to get metadata for IP 10.0.0.3
  TRACE nova.api.metadata.handler Traceback (most recent call last):
  TRACE nova.api.metadata.handler   File 
"/home/stack/openstack/nova/nova/api/metadata/handler.py", line 134, in 
_handle_remote_ip_request
  TRACE nova.api.metadata.handler meta_data = 
self.get_metadata_by_remote_address(remote_address)
  TRACE nova.api.metadata.handler   File 
"/home/stack/openstack/nova/nova/api/metadata/handler.py", line 61, in 
get_metadata_by_remote_address
  TRACE nova.api.metadata.handler data = 
base.get_metadata_by_address(address)
  TRACE nova.api.metadata.handler   File 
"/home/stack/openstack/nova/nova/api/metadata/base.py", line 660, in 
get_metadata_by_address
  TRACE nova.api.metadata.handler ctxt)
  TRACE nova.api.metadata.handler   File 
"/home/stack/openstack/nova/nova/api/metadata/base.py", line 670, in 
get_metadata_by_instance_id
  TRACE nova.api.metadata.handler return InstanceMetadata(instance, address)
  TRACE nova.api.metadata.handler   File 
"/home/stack/openstack/nova/nova/api/metadata/base.py", line 195, in __init__
  TRACE nova.api.metadata.handler extra_md=extra_md, 
network_info=network_info)
  TRACE nova.api.metadata.handler TypeError: __init__() got an unexpected 
keyword argument 'extra_md'

  This is the configuration:

  vendordata_providers = StaticJSON, DynamicJSON
  vendordata_dynamic_targets = 'join@http://127.0.0.1:/v1/'
  vendordata_driver = nova.api.metadata.vendordata_dynamic.DynamicVendorData
  vendordata_dynamic_connect_timeout = 5
  vendordata_dynamic_read_timeout = 30
  vendordata_jsonfile_path = /etc/nova/cloud-config.json

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607395/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259760] Re: Spice console isn't working when ssl_only=True is set

2024-01-23 Thread Takashi Kajinami
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259760

Title:
  Spice console isn't working when ssl_only=True is set

Status in OpenStack Nova Cloud Controller Charm:
  Invalid
Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in spice-html5 package in Ubuntu:
  Fix Released

Bug description:
  OpenStack installation: 2013.2
  OS: Ubuntu 13.10
  Repo: standard Ubuntu repository

  
  When using ssl_only in nova.conf, the browser gets an error:
  [Exception... "The operation is insecure." code: "18" nsresult: "0x80530012 
(SecurityError)" location: "https://api.region.domain.tld:6082/spiceconn.js 
Line: 34"]

  Problem: the client tries to connect using the ws:// scheme, not wss://.

  Temporarily fixed by changing the scheme to "wss://" at line 82 of
  /usr/share/spice-html5/spice_auto.html.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1259760/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051066] [NEW] Traceback is dumped when nova-api is run by apache + mod_wsgi

2024-01-23 Thread Takashi Kajinami
Public bug reported:

Description
===

We noticed that the following traceback is dumped to the httpd error log when
nova-api is run by httpd + mod_wsgi.
This is because the nova-api WSGI application attempts to register signal
handlers for GMR, but this is blocked by httpd.
This does not cause any functional problem, but it is annoying for operators,
and we should consider a way to suppress these warnings.


[Mon Jan 22 06:29:49.889120 2024] [wsgi:warn] [pid 82455:tid 82557] mod_wsgi 
(pid=82455): Callback registration for signal 12 ignored.
[Mon Jan 22 06:29:49.889918 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/var/www/cgi-bin/nova/nova-api", line 52, in 
[Mon Jan 22 06:29:49.889937 2024] [wsgi:warn] [pid 82455:tid 82557] 
application = init_application()
[Mon Jan 22 06:29:49.889955 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/nova/api/openstack/compute/wsgi.py", line 20, 
in init_application
[Mon Jan 22 06:29:49.889967 2024] [wsgi:warn] [pid 82455:tid 82557] return 
wsgi_app.init_application(NAME)
[Mon Jan 22 06:29:49.889983 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/nova/api/openstack/wsgi_app.py", line 128, in 
init_application
[Mon Jan 22 06:29:49.889994 2024] [wsgi:warn] [pid 82455:tid 82557] 
init_global_data(conf_files, name)
[Mon Jan 22 06:29:49.890027 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/nova/utils.py", line 1133, in wrapper
[Mon Jan 22 06:29:49.890039 2024] [wsgi:warn] [pid 82455:tid 82557] return 
func(*args, **kwargs)
[Mon Jan 22 06:29:49.890054 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/nova/api/openstack/wsgi_app.py", line 105, in 
init_global_data
[Mon Jan 22 06:29:49.890065 2024] [wsgi:warn] [pid 82455:tid 82557] 
gmr.TextGuruMeditation.setup_autorun(
[Mon Jan 22 06:29:49.890080 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/oslo_reports/guru_meditation_report.py", line 
155, in setup_autorun
[Mon Jan 22 06:29:49.890091 2024] [wsgi:warn] [pid 82455:tid 82557] 
cls._setup_signal(signal.SIGUSR2,
[Mon Jan 22 06:29:49.890106 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/oslo_reports/guru_meditation_report.py", line 
188, in _setup_signal
[Mon Jan 22 06:29:49.890117 2024] [wsgi:warn] [pid 82455:tid 82557] 
signal.signal(signum,

Steps to reproduce
==
* Install httpd
* Add vhost to run nova-api
* Start httpd

Expected result
===
No traceback appears in error.log

Actual result
=
Traceback appears in error.log

Environment
===
This issue was initially found in CentOS Stream 9 + RDO master.

httpd-2.4.57-6.el9.x86_64
openstack-nova-api-28.1.0-0.20240111050756.fed1230.el9.noarch
openstack-nova-common-28.1.0-0.20240111050756.fed1230.el9.noarch
openstack-nova-compute-28.1.0-0.20240111050756.fed1230.el9.noarch
openstack-nova-conductor-28.1.0-0.20240111050756.fed1230.el9.noarch
openstack-nova-novncproxy-28.1.0-0.20240111050756.fed1230.el9.noarch
openstack-nova-scheduler-28.1.0-0.20240111050756.fed1230.el9.noarch


Logs & Configs
==
Example log:
https://cab2ad659632c7fadcca-8cee698db2ecce5ea7fdb78c34542529.ssl.cf1.rackcdn.com/906237/1/check/puppet-openstack-integration-7-scenario001-tempest-centos-9-stream/29e7836/logs/apache/nova_api_wsgi_error_ssl.txt
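
For what it's worth, the warning text matches mod_wsgi's signal restriction,
which can be relaxed per vhost if the handlers are actually wanted (a sketch,
assuming mod_wsgi's WSGIRestrictSignal directive; the alternative is for the
application to skip GMR signal autorun when running under WSGI):

```
# Sketch: lift mod_wsgi's default restriction so WSGI applications may
# register signal handlers, silencing the "Callback registration for
# signal N ignored" warnings.
WSGIRestrictSignal Off
```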

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
     Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2051066

Title:
  Traceback is dumped when nova-api is run by apache + mod_wsgi

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  We noticed that the following traceback is dumped to the httpd error log
when nova-api is run by httpd + mod_wsgi.
  This is because the nova-api WSGI application attempts to register signal
handlers for GMR, but this is blocked by httpd.
  This does not cause any functional problem, but it is annoying for
operators, and we should consider a way to suppress these warnings.

  
  [Mon Jan 22 06:29:49.889120 2024] [wsgi:warn] [pid 82455:tid 82557] mod_wsgi 
(pid=82455): Callback registration for signal 12 ignored.
  [Mon Jan 22 06:29:49.889918 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/var/www/cgi-bin/nova/nova-api", line 52, in 
  [Mon Jan 22 06:29:49.889937 2024] [wsgi:warn] [pid 82455:tid 82557] 
application = init_application()
  [Mon Jan 22 06:29:49.889955 2024] [wsgi:warn] [pid 82455:tid 82557]   File 
"/usr/lib/python3.9/site-packages/nova/api/openstack/compute/wsgi.py", l

[Yahoo-eng-team] [Bug 2050090] Re: doc build is broken with pillow>=10.0.0

2024-01-22 Thread Takashi Kajinami
*** This bug is a duplicate of bug 2026345 ***
https://bugs.launchpad.net/bugs/2026345

** This bug has been marked a duplicate of bug 2026345
   Sphinx raises 'ImageDraw' object has no attribute 'textsize' error
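
For context, a minimal sketch of the Pillow 10 API change behind the
'textsize' errors (the seqdiag/actdiag diagram libraries used by the docs
appear to still call the removed method):

```
from PIL import Image, ImageDraw

draw = ImageDraw.Draw(Image.new('RGB', (100, 100)))

# Pillow < 10:  width, height = draw.textsize("label")  # removed in 10.0
left, top, right, bottom = draw.textbbox((0, 0), "label")
width, height = right - left, bottom - top
```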

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2050090

Title:
  doc build is broken with pillow>=10.0.0

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Since pillow in upper-constraints was bumped to >=10.0.0, doc build (tox -e 
docs) consistently fails with the following error.

  ```
  $ tox -e docs
  ...
  done
  WARNING: dot code 'seqdiag {\nAPI; Conductor; Scheduler; Source; 
Destination;\nedge_length = 300;\nspan_height = 15;\nactivation = 
none;\ndefault_note_color = white;\n\nAPI ->> Conductor [label = 
"cast", note = "resize_instance/migrate_server"];\nConductor => Scheduler 
[label = "MigrationTask", note = "select_destinations"];\nConductor -> 
Conductor [label = "TargetDBSetupTask"];\nConductor => Destination [label = 
"PrepResizeAtDestTask", note = "prep_snapshot_based_resize_at_dest"];\n
Conductor => Source [label = "PrepResizeAtSourceTask", note = 
"prep_snapshot_based_resize_at_source"];\nConductor => Destination [label = 
"FinishResizeAtDestTask", note = "finish_snapshot_based_resize_at_dest"];\n
Conductor -> Conductor [label = "FinishResizeAtDestTask", note = "update 
instance mapping"];\n}': 'ImageDraw' object has no attribute 'textsize'
  WARNING: dot code 'seqdiag {\nAPI; Conductor; Source;\nedge_length = 
300;\nspan_height = 15;\nactivation = none;\ndefault_note_color = 
white;\n\nAPI ->> Conductor [label = "cast (or call if deleting)", note = 
"confirm_snapshot_based_resize"];\n\n// separator to indicate everything 
after this is driven by ConfirmResizeTask\n=== ConfirmResizeTask ===\n\n
Conductor => Source [label = "call", note = 
"confirm_snapshot_based_resize_at_source"];\nConductor -> Conductor [note = 
"hard delete source cell instance"];\nConductor -> Conductor [note = 
"update target cell instance status"];\n\n}': 'ImageDraw' object has no 
attribute 'textsize'
  WARNING: dot code 'seqdiag {\nAPI; Conductor; Source; Destination;\n
edge_length = 300;\nspan_height = 15;\nactivation = none;\n
default_note_color = white;\n\nAPI ->> Conductor [label = "cast", note = 
"revert_snapshot_based_resize"];\n\n// separator to indicate everything 
after this is driven by RevertResizeTask\n=== RevertResizeTask ===\n\n
Conductor -> Conductor [note = "update records from target to source cell"];\n  
  Conductor -> Conductor [note = "update instance mapping"];\nConductor => 
Destination [label = "call", note = "revert_snapshot_based_resize_at_dest"];\n  
  Conductor -> Conductor [note = "hard delete target cell instance"];\n
Conductor => Source [label = "call", note = 
"finish_revert_snapshot_based_resize_at_source"];\n\n}': 'ImageDraw' object has 
no attribute 'textsize'
  WARNING: dot code 'seqdiag {\nAPI; Conductor; Scheduler; Source; 
Destination;\nedge_length = 300;\nspan_height = 15;\nactivation = 
none;\ndefault_note_color = white;\n\nAPI -> Conductor [label = "cast", 
note = "resize_instance/migrate_server"];\n   Conductor => Scheduler 
[label = "call", note = "select_destinations"];\n   Conductor -> 
Destination [label = "cast", note = "prep_resize"];\n   Source <- 
Destination [label = "cast", leftnote = "resize_instance"];\n   
Source -> Destination [label = "cast", note = "finish_resize"];\n}': 
'ImageDraw' object has no attribute 'textsize'
  WARNING: dot code 'seqdiag {\nAPI; Source;\nedge_length = 300;\n
span_height = 15;\nactivation = none;\ndefault_note_color = white;\n\n  
  API -> Source [label = "cast (or call if deleting)", note = 
"confirm_resize"];\n}': 'ImageDraw' object has no attribute 'textsize'
  WARNING: dot code 'seqdiag {\nAPI; Source; Destination;\nedge_length 
= 300;\nspan_height = 15;\nactivation = none;\ndefault_note_color = 
white;\n\nAPI -> Destination [label = "cast", note = "revert_resize"];\n
   Source <- Destination [label = "cast", leftnote = 
"finish_revert_resize"];\n}': 'ImageDraw' object has no attribute 'textsize'
  WARNING: dot code 'actdiag {\nbuild-spec -> send-spec -> send-reqs -> 
query -> return-rps ->\ncreate -> filter -> claim -> return-hosts -> 
send-hosts;\n\nlane conductor {\nlabel = "Conductor";\n
build-spec [label = "Build request spec object", height = 38];\n
send-spec [label = "Submit request spec to scheduler", height = 38];\n
send-hosts [label = "Submit list of suitable hosts to target cell", height = 
51];\n}\n\nlane scheduler {\nlabel = 

[Yahoo-eng-team] [Bug 2050090] [NEW] doc build is broken with pillow>=10.0.0

2024-01-22 Thread Takashi Kajinami
Public bug reported:

Description
===
Since pillow in upper-constraints was bumped to >=10.0.0, doc build (tox -e 
docs) consistently fails with the following error.

```
$ tox -e docs
...
done
WARNING: dot code 'seqdiag {\nAPI; Conductor; Scheduler; Source; 
Destination;\nedge_length = 300;\nspan_height = 15;\nactivation = 
none;\ndefault_note_color = white;\n\nAPI ->> Conductor [label = 
"cast", note = "resize_instance/migrate_server"];\nConductor => Scheduler 
[label = "MigrationTask", note = "select_destinations"];\nConductor -> 
Conductor [label = "TargetDBSetupTask"];\nConductor => Destination [label = 
"PrepResizeAtDestTask", note = "prep_snapshot_based_resize_at_dest"];\n
Conductor => Source [label = "PrepResizeAtSourceTask", note = 
"prep_snapshot_based_resize_at_source"];\nConductor => Destination [label = 
"FinishResizeAtDestTask", note = "finish_snapshot_based_resize_at_dest"];\n
Conductor -> Conductor [label = "FinishResizeAtDestTask", note = "update 
instance mapping"];\n}': 'ImageDraw' object has no attribute 'textsize'
WARNING: dot code 'seqdiag {\nAPI; Conductor; Source;\nedge_length = 
300;\nspan_height = 15;\nactivation = none;\ndefault_note_color = 
white;\n\nAPI ->> Conductor [label = "cast (or call if deleting)", note = 
"confirm_snapshot_based_resize"];\n\n// separator to indicate everything 
after this is driven by ConfirmResizeTask\n=== ConfirmResizeTask ===\n\n
Conductor => Source [label = "call", note = 
"confirm_snapshot_based_resize_at_source"];\nConductor -> Conductor [note = 
"hard delete source cell instance"];\nConductor -> Conductor [note = 
"update target cell instance status"];\n\n}': 'ImageDraw' object has no 
attribute 'textsize'
WARNING: dot code 'seqdiag {\nAPI; Conductor; Source; Destination;\n
edge_length = 300;\nspan_height = 15;\nactivation = none;\n
default_note_color = white;\n\nAPI ->> Conductor [label = "cast", note = 
"revert_snapshot_based_resize"];\n\n// separator to indicate everything 
after this is driven by RevertResizeTask\n=== RevertResizeTask ===\n\n
Conductor -> Conductor [note = "update records from target to source cell"];\n  
  Conductor -> Conductor [note = "update instance mapping"];\nConductor => 
Destination [label = "call", note = "revert_snapshot_based_resize_at_dest"];\n  
  Conductor -> Conductor [note = "hard delete target cell instance"];\n
Conductor => Source [label = "call", note = 
"finish_revert_snapshot_based_resize_at_source"];\n\n}': 'ImageDraw' object has 
no attribute 'textsize'
WARNING: dot code 'seqdiag {\nAPI; Conductor; Scheduler; Source; 
Destination;\nedge_length = 300;\nspan_height = 15;\nactivation = 
none;\ndefault_note_color = white;\n\nAPI -> Conductor [label = "cast", 
note = "resize_instance/migrate_server"];\n   Conductor => Scheduler 
[label = "call", note = "select_destinations"];\n   Conductor -> 
Destination [label = "cast", note = "prep_resize"];\n   Source <- 
Destination [label = "cast", leftnote = "resize_instance"];\n   
Source -> Destination [label = "cast", note = "finish_resize"];\n}': 
'ImageDraw' object has no attribute 'textsize'
WARNING: dot code 'seqdiag {\nAPI; Source;\nedge_length = 300;\n
span_height = 15;\nactivation = none;\ndefault_note_color = white;\n\n  
  API -> Source [label = "cast (or call if deleting)", note = 
"confirm_resize"];\n}': 'ImageDraw' object has no attribute 'textsize'
WARNING: dot code 'seqdiag {\nAPI; Source; Destination;\nedge_length = 
300;\nspan_height = 15;\nactivation = none;\ndefault_note_color = 
white;\n\nAPI -> Destination [label = "cast", note = "revert_resize"];\n
   Source <- Destination [label = "cast", leftnote = 
"finish_revert_resize"];\n}': 'ImageDraw' object has no attribute 'textsize'
WARNING: dot code 'actdiag {\nbuild-spec -> send-spec -> send-reqs -> query 
-> return-rps ->\ncreate -> filter -> claim -> return-hosts -> 
send-hosts;\n\nlane conductor {\nlabel = "Conductor";\n
build-spec [label = "Build request spec object", height = 38];\n
send-spec [label = "Submit request spec to scheduler", height = 38];\n
send-hosts [label = "Submit list of suitable hosts to target cell", height = 
51];\n}\n\nlane scheduler {\nlabel = "Scheduler";\n
send-reqs [label = "Submit resource requirements to placement", height = 64];\n 
   create [label = "Create a HostState object for each RP returned from 
Placement", height = 64];\nfilter [label = "Filter and weigh results", 
height = 38];\nreturn-hosts [label = "Return a list of selected host & 
alternates, along with their allocations, to the conductor", height = 89];\n
}\n\nlane placement {\nlabel = "Placement";\nquery [labe
 l = "Query to 

[Yahoo-eng-team] [Bug 2049064] [NEW] Unit/functional test failures with oslo.limit 2.3.0

2024-01-11 Thread Takashi Kajinami
Public bug reported:

Description
===
The new oslo.limit 2.3.0 release introduced validation to ensure the
[oslo_limit] endpoint_id option is set.
However this change broke some unit/functional test cases which enable the
unified quota implementation without setting this option.

Steps to reproduce
==
- Download global upper constraints

- Bump oslo.limit version to 2.3.0

- Run unit tests with the modified upper constraints
 $ TOX_CONSTRAINTS_FILE= tox -e py310

Expected result
===
- No test case fails

Actual result
=
- Some of the test cases fail because the new endpoint_id validation raises
an error

Environment
===
N/A

Logs & Configs
==
cross-nova-py310: 
https://zuul.opendev.org/t/openstack/build/79d2c815f0e04d4b8f6838b1d1ec026f
cross-nova-functional: 
https://zuul.opendev.org/t/openstack/build/159f06a88d7948209a111a9f45306e0a
cross-glance-py310: 
https://zuul.opendev.org/t/openstack/build/7634390991ab4442b4230b09563ef26b
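
For reference, a minimal sketch of the configuration the new validation
expects (the UUID below is a placeholder):

```
[oslo_limit]
# oslo.limit 2.3.0 refuses to initialize its Enforcer when this is
# unset; the value is the keystone endpoint UUID of the limited service.
endpoint_id = 9f2b0b1dbe7c4d1fb5e5f2b2c3d4e5f6
```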

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Also affects: glance
   Importance: Undecided
   Status: New

** Description changed:

  Description
  ===
  The new oslo.limit 2.3.0 release introduced the validation to ensure the 
[oslo_limit] endpoint_id option is set.
  However this change broke some unit/functional test cases which enable 
unified quota implementation without setting this option.
  
  Steps to reproduce
  ==
  - Download global upper constraints
  
  - Bump oslo.limit version to 2.3.0
  
  - Run unit tests with the modified upper constraints
- 
+  $ TOX_CONSTRAINTS_FILE= tox -e py310
  
  Expected result
  ===
  - No test case fails
  
  Actual result
  =
  - Some of the test cases fail because of the following error
  
  Environment
  ===
  N/A
  
  Logs & Configs
  ==
  cross-nova-py310: 
https://zuul.opendev.org/t/openstack/build/79d2c815f0e04d4b8f6838b1d1ec026f
  cross-nova-functional: 
https://zuul.opendev.org/t/openstack/build/159f06a88d7948209a111a9f45306e0a
  cross-glance-py310: 
https://zuul.opendev.org/t/openstack/build/7634390991ab4442b4230b09563ef26b

** Summary changed:

- Unit test fails with oslo.limit 2.3.0
+ Unit/functional test failures with oslo.limit 2.3.0

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2049064

Title:
  Unit/functional test failures with oslo.limit 2.3.0

Status in Glance:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The new oslo.limit 2.3.0 release introduced validation to ensure the
[oslo_limit] endpoint_id option is set.
  However this change broke some unit/functional test cases which enable
the unified quota implementation without setting this option.

  Steps to reproduce
  ==
  - Download global upper constraints

  - Bump oslo.limit version to 2.3.0

  - Run unit tests with the modified upper constraints
   $ TOX_CONSTRAINTS_FILE= tox -e py310

  Expected result
  ===
  - No test case fails

  Actual result
  =
  - Some of the test cases fail because the new endpoint_id validation
raises an error

  Environment
  ===
  N/A

  Logs & Configs
  ==
  cross-nova-py310: 
https://zuul.opendev.org/t/openstack/build/79d2c815f0e04d4b8f6838b1d1ec026f
  cross-nova-functional: 
https://zuul.opendev.org/t/openstack/build/159f06a88d7948209a111a9f45306e0a
  cross-glance-py310: 
https://zuul.opendev.org/t/openstack/build/7634390991ab4442b4230b09563ef26b

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2049064/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047399] [NEW] nova api returns 500 when resizing an instance with memory encryption enabled

2023-12-26 Thread Takashi Kajinami
Public bug reported:

Description
===
When a user attempts to resize an instance with memory encryption enabled,
the API returns a 500 error consistently.
Looking into nova-api.log, it seems the issue is caused by a mechanism similar 
to https://bugs.launchpad.net/nova/+bug/2041511 .

Steps to reproduce
==
* Create an image with hw_mem_encryption=True
 $ openstack image create encrypted ...
 $ openstack image set encrypted --property hw_mem_encryption=True

* Create an instance
 $ openstack server create testinstance --image encrypted --flavor flavor1 ...

* Resize the instance
 $ openstack server resize testinstance --flavor flavor2 

Expected result
===
Instance resize is accepted and processed by nova, without errors

Actual result
=
Nova api returns 500 error and does not accept the request

Environment
===

1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

Ubuntu 22.04 and UCA bobcat.

# dpkg -l | grep nova
ii nova-api 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - API frontend
ii nova-common 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - common files
ii nova-compute 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - compute node 
base
ii nova-compute-kvm 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - compute 
node (KVM)
ii nova-compute-libvirt 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - 
compute node libvirt support
ii nova-conductor 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - conductor 
service
ii nova-novncproxy 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - NoVNC proxy
ii nova-scheduler 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute - virtual 
machine scheduler
ii python3-nova 3:28.0.0-0ubuntu1~cloud0 all OpenStack Compute Python 3 
libraries
ii python3-novaclient 2:18.4.0-0ubuntu1~cloud0 all client library for OpenStack 
Compute API - 3.x

2. Which hypervisor did you use?
Libvirt + KVM

3. Which storage type did you use?
LVM

4. Which networking type did you use?
ml2 + ovs

Logs & Configs
==
The following traceback is found in nova-api.log

```
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi [None 
req-20b9b69c-a792-45cd-8520-7e9cd3387c0d 838cd42e04884ddfa8ec4ac11e2f8818 
baf003aa0202430a92edd003f98794a3 - - default default] Unexpected exception in 
API method: NotImplementedError: Cannot load 'id' in the base class
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py", line 658, in 
wrapped
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return f(*args, 
**kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/compute/servers.py", line 
1146, in _action_resize
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi 
self._resize(req, id, flavor_ref, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/compute/servers.py", line 
1060, in _resize
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi 
self.compute_api.resize(context, instance, flavor_id,
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 389, in inner
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 374, in wrapper
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return 
func(self, context, instance, *args, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 357, in wrapper
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return 
func(self, context, instance, *args, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 242, in inner
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return 
function(self, context, instance, *args, **kwargs)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 168, in inner
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi return f(self, 
context, instance, *args, **kw)
2023-12-26 08:02:19.371 30791 ERROR nova.api.openstack.wsgi   File 

[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2023-12-21 Thread Takashi Kajinami
** Changed in: senlin
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  Fix Released
Status in Rally:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in senlin:
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
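
  A minimal example of the change being asked for here:

```
import logging

LOG = logging.getLogger(__name__)

LOG.warn("something happened")     # deprecated alias
LOG.warning("something happened")  # non-deprecated spelling
```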

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2023-12-21 Thread Takashi Kajinami
This was fixed in python-watcherclient by
https://review.opendev.org/c/openstack/python-watcherclient/+/280026 .

** Changed in: python-watcherclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  New
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  Fix Released
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  Invalid
Status in Monasca:
  New
Status in networking-arista:
  Fix Released
Status in networking-calico:
  Fix Released
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  Fix Committed
Status in networking-ofagent:
  Fix Committed
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  Fix Released
Status in Rally:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045287] Re: Error in charm when bootstrapping OpenStack with Sunbeam

2023-11-30 Thread Takashi Kajinami
This does not really appear to be a neutron problem now, and should be
investigated from Sunbeam's perspective until the problem is narrowed
down to something wrong with Neutron.

** Project changed: neutron => charm-neutron-api

** Project changed: charm-neutron-api => charm-ops-sunbeam

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045287

Title:
  Error in charm when bootstrapping OpenStack with Sunbeam

Status in Ops Sunbeam:
  New

Bug description:
  When trying to bootstrap OpenStack using Sunbeam, there is an error in
  the neutron charm, which looks like the following:

  * In app view:

  neutron  waiting  1  neutron-k8s  2023.1/stable  53  10.152.183.36  no  installing agent

  * In unit view:

  neutron/0*   blocked   idle   10.1.163.29
  (workload) Error in charm (see logs): timed out waiting for change 2
  (301 seconds)

  * Looking at the logs for the error, they look like this:

  2023-11-30T09:27:16.903Z [container-agent] 2023-11-30 09:27:16 INFO juju-log 
identity-service:85: Syncing database...
  2023-11-30T09:32:18.000Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log 
identity-service:85: Exception raised in section 'Bootstrapping': timed out 
waiting for change 2 (301 seconds)
  2023-11-30T09:32:18.008Z [container-agent] 2023-11-30 09:32:18 ERROR juju-log 
identity-service:85: Traceback (most recent call last):
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/guard.py", line 91, 
in guard
  2023-11-30T09:32:18.008Z [container-agent] yield
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 
265, in configure_charm
  2023-11-30T09:32:18.008Z [container-agent] self.configure_unit(event)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 
479, in configure_unit
  2023-11-30T09:32:18.008Z [container-agent] self.run_db_sync()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 
74, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent] f(charm, *args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/./src/charm.py", line 302, in 
run_db_sync
  2023-11-30T09:32:18.008Z [container-agent] super().run_db_sync()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/job_ctrl.py", line 
74, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent] f(charm, *args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 
549, in run_db_sync
  2023-11-30T09:32:18.008Z [container-agent] self._retry_db_sync(cmd)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 
289, in wrapped_f
  2023-11-30T09:32:18.008Z [container-agent] return self(f, *args, **kw)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 
379, in __call__
  2023-11-30T09:32:18.008Z [container-agent] do = 
self.iter(retry_state=retry_state)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 
314, in iter
  2023-11-30T09:32:18.008Z [container-agent] return fut.result()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
  2023-11-30T09:32:18.008Z [container-agent] return self.__get_result()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
  2023-11-30T09:32:18.008Z [container-agent] raise self._exception
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/tenacity/__init__.py", line 
382, in __call__
  2023-11-30T09:32:18.008Z [container-agent] result = fn(*args, **kwargs)
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops_sunbeam/charm.py", line 
529, in _retry_db_sync
  2023-11-30T09:32:18.008Z [container-agent] out, warnings = 
process.wait_output()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1354, in 
wait_output
  2023-11-30T09:32:18.008Z [container-agent] exit_code: int = self._wait()
  2023-11-30T09:32:18.008Z [container-agent]   File 
"/var/lib/juju/agents/unit-neutron-0/charm/venv/ops/pebble.py", line 1294, in 
_wait
  

[Yahoo-eng-team] [Bug 2044896] [NEW] [metadata_rate_limiting] options are absent from sample config files

2023-11-27 Thread Takashi Kajinami
Public bug reported:

The metadata_rate_limiting options were added as part of the metadata
rate limiting feature, which was added a few cycles ago [1].

However, the change did not add these options to the config entry points,
thus these options are not added to the sample config files generated by
oslo-config-generator.

[1] https://review.opendev.org/c/openstack/neutron/+/858879
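
For reference, options appear in oslo-config-generator output only when a
module listed under the oslo.config.opts entry point returns them. A
minimal sketch of that pattern (module layout and option names here are
illustrative, not the actual neutron code):

```python
from oslo_config import cfg

METADATA_RATE_LIMITING_OPTS = [
    cfg.BoolOpt('rate_limit_enabled', default=False,
                help='Enable rate limiting of metadata requests.'),
]


def list_opts():
    # oslo-config-generator discovers options for the sample config by
    # calling this function through the oslo.config.opts entry point.
    return [('metadata_rate_limiting', METADATA_RATE_LIMITING_OPTS)]
```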

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2044896

Title:
  [metadata_rate_limiting] options are absent from sample config files

Status in neutron:
  In Progress

Bug description:
  The metadata_rate_limiting options were added as part of the metadata
  rate limiting feature, which was added a few cycles ago [1].

  However, the change did not add these options to the config entry points,
  thus these options are not added to the sample config files generated
  by oslo-config-generator.

  [1] https://review.opendev.org/c/openstack/neutron/+/858879

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2044896/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035168] Re: Remaining db migrations for unmaintained Nuage plugin

2023-11-15 Thread Takashi Kajinami
*** This bug is a duplicate of bug 2038555 ***
https://bugs.launchpad.net/bugs/2038555

** This bug has been marked a duplicate of bug 2038555
   Remove unused tables

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035168

Title:
  Remaining db migrations for unmaintained Nuage plugin

Status in neutron:
  New

Bug description:
  (This is not a functional bug but is a potential cleanup opportunity)

  The latest master still contains database migration code for tables
  used by Nuage plugin.

  
https://github.com/openstack/neutron/tree/8cba9a2ee86cb3b65645674ef315c14cfb261143/neutron/db/migration/alembic_migrations
   -> nuage_init_opts.py

  However I noticed the nuage plugin is no longer maintained.

  https://github.com/nuagenetworks/nuage-openstack-neutron/tree/master

  AFAIU we could not remove these tables because plugins that were split
  out from the neutron repo early on rely on tables/databases created by
  neutron, but it's no longer useful to maintain these in case the plugin
  is already unmaintained.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035168/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2043116] Re: Unit test fails with oslo.utils 6.3.0

2023-11-09 Thread Takashi Kajinami
** Also affects: ironic
   Importance: Undecided
   Status: New

** Description changed:

  Description
  ===
  
- We recently created oslo.utils 6.3.0 release.
- However when we attempt to bump the version in u-c file[1], unit tests of 
ironic and nova fail with the following errors.
+ We recently created oslo.utils 6.3.0 release which includes the fix for
+ the bug[1] affecting sqlalchemy-master jobs.
+ 
+ However when we attempt to bump the version in u-c file[2], unit tests
+ of ironic and nova fail with the following errors.
  
  ```
  
nova.tests.unit.objects.test_objects.TestRegistry.test_hook_keeps_newer_properly
  

  
  Captured traceback:
  ~~~
- Traceback (most recent call last):
+ Traceback (most recent call last):
  
-   File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
- return func(*newargs, **newkeywargs)
+   File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
+ return func(*newargs, **newkeywargs)
  
-   File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/objects/test_objects.py",
 line 1060, in test_hook_keeps_newer_properly
- reg.registration_hook(MyObj, 0)
+   File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/objects/test_objects.py",
 line 1060, in test_hook_keeps_newer_properly
+ reg.registration_hook(MyObj, 0)
  
-   File "/home/zuul/src/opendev.org/openstack/nova/nova/objects/base.py", 
line 72, in registration_hook
- cur_version = versionutils.convert_version_to_tuple(
+   File "/home/zuul/src/opendev.org/openstack/nova/nova/objects/base.py", 
line 72, in registration_hook
+ cur_version = versionutils.convert_version_to_tuple(
  
-   File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 91, in convert_version_to_tuple
- version_str = re.sub(r'(\d+)(a|alpha|b|beta|rc)\d+$', '\\1', version_str)
+   File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 91, in convert_version_to_tuple
+ version_str = re.sub(r'(\d+)(a|alpha|b|beta|rc)\d+$', '\\1', version_str)
  
-   File "/usr/lib/python3.10/re.py", line 209, in sub
- return _compile(pattern, flags).sub(repl, string, count)
+   File "/usr/lib/python3.10/re.py", line 209, in sub
+ return _compile(pattern, flags).sub(repl, string, count)
  
- TypeError: expected string or bytes-like object
+ TypeError: expected string or bytes-like object
  ```
  
- 
- [1] https://review.opendev.org/c/openstack/requirements/+/900517
+ [1] https://bugs.launchpad.net/oslo.utils/+bug/2042886
+ [2] https://review.opendev.org/c/openstack/requirements/+/900517

** Changed in: ironic
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2043116

Title:
  Unit test fails with oslo.utils 6.3.0

Status in Ironic:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  We recently created oslo.utils 6.3.0 release which includes the fix
  for the bug[1] affecting sqlalchemy-master jobs.

  However when we attempt to bump the version in u-c file[2], unit tests
  of ironic and nova fail with the following errors.

  ```
  
nova.tests.unit.objects.test_objects.TestRegistry.test_hook_keeps_newer_properly
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
  return func(*newargs, **newkeywargs)

    File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/objects/test_objects.py",
 line 1060, in test_hook_keeps_newer_properly
  reg.registration_hook(MyObj, 0)

    File "/home/zuul/src/opendev.org/openstack/nova/nova/objects/base.py", 
line 72, in registration_hook
  cur_version = versionutils.convert_version_to_tuple(

    File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 91, in convert_version_to_tuple
  version_str = re.sub(r'(\d+)(a|alpha|b|beta|rc)\d+$', '\\1', version_str)

    File "/usr/lib/python3.10/re.py", line 209, in sub
  return _compile(pattern, flags).sub(repl, string, count)

  TypeError: expected string or bytes-like object
  ```

  [1] https://bugs.launchpad.net/oslo.utils/+bug/2042886
  [2] https://review.opendev.org/c/openstack/requirements/+/900517

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2043116/+subscriptions

[Yahoo-eng-team] [Bug 2043116] [NEW] Unit test fails with oslo.utils 6.3.0

2023-11-09 Thread Takashi Kajinami
Public bug reported:

Description
===

We recently created oslo.utils 6.3.0 release which includes the fix for
the bug[1] affecting sqlalchemy-master jobs.

However when we attempt to bump the version in u-c file[2], unit tests
of ironic and nova fail with the following errors.

```
nova.tests.unit.objects.test_objects.TestRegistry.test_hook_keeps_newer_properly


Captured traceback:
~~~
Traceback (most recent call last):

  File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)

  File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/objects/test_objects.py",
 line 1060, in test_hook_keeps_newer_properly
reg.registration_hook(MyObj, 0)

  File "/home/zuul/src/opendev.org/openstack/nova/nova/objects/base.py", 
line 72, in registration_hook
cur_version = versionutils.convert_version_to_tuple(

  File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 91, in convert_version_to_tuple
version_str = re.sub(r'(\d+)(a|alpha|b|beta|rc)\d+$', '\\1', version_str)

  File "/usr/lib/python3.10/re.py", line 209, in sub
return _compile(pattern, flags).sub(repl, string, count)

TypeError: expected string or bytes-like object
```

[1] https://bugs.launchpad.net/oslo.utils/+bug/2042886
[2] https://review.opendev.org/c/openstack/requirements/+/900517
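
The traceback indicates a non-string (an integer here) reaches
convert_version_to_tuple(), which now runs re.sub() on its input
unconditionally. A minimal sketch of a caller-side fix, coercing the
version to a string first (the variable name is illustrative):

```python
from oslo_utils import versionutils

obj_version = 0  # illustrative non-string input

# convert_version_to_tuple() expects a string such as '1.2'; coerce
# non-string inputs before calling it to avoid the TypeError.
cur_version = versionutils.convert_version_to_tuple(str(obj_version))
```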

** Affects: ironic
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2043116

Title:
  Unit test fails with oslo.utils 6.3.0

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  We recently created oslo.utils 6.3.0 release which includes the fix
  for the bug[1] affecting sqlalchemy-master jobs.

  However when we attempt to bump the version in u-c file[2], unit tests
  of ironic and nova fail with the following errors.

  ```
  
nova.tests.unit.objects.test_objects.TestRegistry.test_hook_keeps_newer_properly
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

    File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
  return func(*newargs, **newkeywargs)

    File 
"/home/zuul/src/opendev.org/openstack/nova/nova/tests/unit/objects/test_objects.py",
 line 1060, in test_hook_keeps_newer_properly
  reg.registration_hook(MyObj, 0)

    File "/home/zuul/src/opendev.org/openstack/nova/nova/objects/base.py", 
line 72, in registration_hook
  cur_version = versionutils.convert_version_to_tuple(

    File 
"/home/zuul/src/opendev.org/openstack/nova/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 91, in convert_version_to_tuple
  version_str = re.sub(r'(\d+)(a|alpha|b|beta|rc)\d+$', '\\1', version_str)

    File "/usr/lib/python3.10/re.py", line 209, in sub
  return _compile(pattern, flags).sub(repl, string, count)

  TypeError: expected string or bytes-like object
  ```

  [1] https://bugs.launchpad.net/oslo.utils/+bug/2042886
  [2] https://review.opendev.org/c/openstack/requirements/+/900517

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/2043116/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2042647] [NEW] doc: Options described in "DHCP High-availability" are outdated

2023-11-03 Thread Takashi Kajinami
Public bug reported:

The "DHCP High-availability" chapter in admin guide[1] contains multiple
outdated options.

 - the linuxbridge core_plugin is used instead of ml2 + linuxbridge
 - the [database] options should be added to neutron.conf
 - the [DEFAULT] rabbit_host and [DEFAULT] rabbit_password options no longer exist

 - [DEFAULT] use_neutron and [DEFAULT] firewall_driver were removed from nova
 - the [neutron] admin_* options were removed from nova

[1] https://docs.openstack.org/neutron/latest/admin/config-dhcp-ha.html

Although we can fix these, it probably makes more sense to refer to the
installation guide for most options and then describe only the specific
options (dhcp_agents_per_network), so that we don't have to maintain
basic options in multiple chapters.
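
For reference, a minimal sketch of the present-day equivalents of the
removed messaging/database options (hosts and passwords are placeholders):

```
# neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
```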

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042647

Title:
  doc: Options described in "DHCP High-availability" are outdated

Status in neutron:
  New

Bug description:
  The "DHCP High-availability" chapter in admin guide[1] contains
  multiple outdated options.

   - the linuxbridge core_plugin is used instead of ml2 + linuxbridge
   - the [database] options should be added to neutron.conf
   - the [DEFAULT] rabbit_host and [DEFAULT] rabbit_password options no longer exist

   - [DEFAULT] use_neutron and [DEFAULT] firewall_driver were removed from nova
   - the [neutron] admin_* options were removed from nova

  [1] https://docs.openstack.org/neutron/latest/admin/config-dhcp-
  ha.html

  Although we can fix these, it probably makes more sense to refer to the
  installation guide for most options and then describe only the specific
  options (dhcp_agents_per_network), so that we don't have to maintain
  basic options in multiple chapters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042647/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2041511] [NEW] nova api returns 500 when creating a volume booted instance with memory enryption enabled

2023-10-27 Thread Takashi Kajinami
Public bug reported:

Description
===

When creating an instance with a volume created from an image with
hw_mem_encryption: true, nova-api returns 500 and the creation request
is not accepted.

Steps to reproduce
==
* Create an image with hw_mem_encryption=True
 $ openstack image create encrypted ...
 $ openstack image set encrypted --property hw_mem_encryption=True

* Create a volume from the image
 $ openstack volume create bootvolume --image encrypted ...

* Create an instance
 $ openstack server create --volume bootvolume ...

Expected result
===
Instance creation is accepted and processed by nova, without errors

Actual result
=
Nova api returns 500 error and does not accept the request

Environment
===

1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

Ubuntu 22.04 and UCA bobcat.

# dpkg -l | grep nova
ii  nova-api3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - API frontend
ii  nova-common 3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - common files
ii  nova-compute3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - compute node base
ii  nova-compute-kvm3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - compute node libvirt 
support
ii  nova-conductor  3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - conductor service
ii  nova-novncproxy 3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler  3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute - virtual machine scheduler
ii  python3-nova3:28.0.0-0ubuntu1~cloud0
 all  OpenStack Compute Python 3 libraries
ii  python3-novaclient  2:18.4.0-0ubuntu1~cloud0
 all  client library for OpenStack Compute API - 3.x

2. Which hypervisor did you use?
Libvirt + KVM

3. Which storage type did you use?
LVM

4. Which networking type did you use?
ml2 + ovs

Logs & Configs
==
The following traceback is found in nova-api.log

```
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi [None 
req-f55255c7-5829-4f89-bee9-ab34a6c02faf 69d6ccfef7e240398970c80f0be8ccf7 
5a2803c4cdb1412fa1e83738d7821904 - - default default] Unexpected exception in 
API method: NotImplementedError: Cannot load 'id' in the base class
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi Traceback (most 
recent call last):
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py", line 658, in 
wrapped
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi return 
f(*args, **kwargs)
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/validation/__init__.py", line 110, in 
wrapper
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi return 
func(*args, **kwargs)
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   [Previous line 
repeated 11 more times]
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/api/openstack/compute/servers.py", line 
786, in create
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi instances, 
resv_id = self.compute_api.create(
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 2207, in create
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi return 
self._create_instance(
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi   File 
"/usr/lib/python3/dist-packages/nova/compute/api.py", line 1725, in 
_create_instance
2023-10-27 07:46:56.878 381436 ERROR nova.api.openstack.wsgi 
self._checks_for_create_and_rebuild(context, 

[Yahoo-eng-team] [Bug 2040449] [NEW] Instance with memory encryption enabled can't be launched when [libvirt] cpu_mode is custom

2023-10-25 Thread Takashi Kajinami
Public bug reported:

Description
===
When a user tries to launch an instance with memory encryption enabled, the
instance always ends up in error state if nova-compute has [libvirt] cpu_mode
= custom.

Steps to reproduce
==
1. Set the following options in nova.conf and restart nova-compute

[libvirt]
cpu_mode = custom
cpu_models = EPYC

2. Prepare a flavor with memory encryption enabled

$ openstack flavor show m1.small-enc -f yaml
OS-FLV-DISABLED:disabled: false
OS-FLV-EXT-DATA:ephemeral: 0
access_project_ids: null
description: null
disk: 20
id: ee97652f-8948-4cdd-a5cd-71411cf9c8e4
name: m1.small-enc
os-flavor-access:is_public: true
properties:
  hw:mem_encryption: 'true'
ram: 2048
rxtx_factor: 1.0
swap: 0
vcpus: 1

3. Create an image with hw_firmware_type property set to 'uefi'

$ openstack image show cirros-uefi -f yaml
checksum: c8fc807773e5354afe61636071771906
container_format: bare
created_at: '2023-10-25T02:46:57Z'
disk_format: qcow2
file: /v2/images/d6353363-f580-464c-9909-93212298a58a/file
id: d6353363-f580-464c-9909-93212298a58a
min_disk: 0
min_ram: 0
name: cirros-uefi
owner: 5a2803c4cdb1412fa1e83738d7821904
properties:
  hw_disk_bus: scsi
  hw_firmware_type: uefi
  hw_scsi_model: virtio-scsi
  os_hash_algo: sha512
  os_hash_value: 
1103b92ce8ad966e41235a4de260deb791ff571670c0342666c8582fbb9caefe6af07ebb11d34f44f8414b609b29c1bdf1d72ffa6faa39c88e8721d09847952b
  os_hidden: false
  owner_specified.openstack.md5: ''
  owner_specified.openstack.object: images/cirros-uefi
  owner_specified.openstack.sha256: ''
  stores: fs
protected: false
schema: /v2/schemas/image
size: 21430272
status: active
tags: []
updated_at: '2023-10-25T06:00:15Z'
virtual_size: 117440512
visibility: public

4. Launch an instance using the flavor and the image
$ openstack server create --image cirros-uefi --flavor m1.small-enc --network 
private cirros-enc

Expected result
===
The instance becomes active state

Actual result
=
Instance becomes error state. The following traceback is found in 
nova-compute.log

```
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [None 
req-104288bc-7bf5-4bcd-a728-cd85ac72416f 69d6ccfef7e240398970c80f0be8ccf7 
5a2803c4cdb1412fa1e83738d7821904 - - default default] [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] Failed to build and run instance: 
nova.exception.FlavorImageConflict: Memory encryption requested by 
hw:mem_encryption extra spec in m1.small-enc flavor but image None doesn't have 
'hw_firmware_type' property set to 'uefi' or volume-backed instance was 
requested
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] Traceback (most recent call last):
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2615, in 
_build_and_run_instance
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] self.driver.spawn(context, instance, 
image_meta,
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4413, in 
spawn
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] xml = self._get_guest_xml(context, 
instance, network_info,
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7565, in 
_get_guest_xml
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] conf = 
self._get_guest_config(instance, network_info, image_meta,
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7045, in 
_get_guest_config
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] guest.cpu = 
self._get_guest_cpu_config(
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5451, in 
_get_guest_cpu_config
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] cpu = 
self._get_guest_cpu_model_config(flavor, arch)
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542]   File 
"/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 5398, in 
_get_guest_cpu_model_config
2023-10-25 06:33:20.674 38337 ERROR nova.compute.manager [instance: 
000b22bc-6b28-4adb-a3af-44b1f090c542] flags = 
libvirt_utils.get_flags_by_flavor_specs(flavor)
2023-10-25 

[Yahoo-eng-team] [Bug 1566622] Re: live migration fails with xenapi virt driver and SRs with old-style naming convention

2023-10-16 Thread Takashi Kajinami
I don't understand what the reason for changing the affected project
could be. If the change was appropriate and intentional, please describe
the reasoning.

** Project changed: ilh-facebook => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566622

Title:
  live migration fails with xenapi virt driver and SRs with old-style
  naming convention

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  version: commit ce5a2fb419f999bec0fb2c67413387c8b67a691a

  1. create a boot-from-volume instance prior to deploying commit 
5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  2. upgrade nova to commit 5bd222e8d854ca7f03ee6936454ee57e0d6e1a78
  3. live-migrate instance
  4. observe live-migrate action fail

  based on my analysis of logs and code:
  1. destination uses new-style SR naming convention in sr_uuid_map.
  2. source tries to use new-style SR naming convention in talking to XenAPI 
(in nova.virt.xenapi.vmops.py:VMOps.live_migrate() -> 
_call_live_migrate_command())
  3. xenapi throws XenAPI.Failure exception because it "Got exception 
UUID_INVALID" because it only knows the SR by the old-style naming convention

  example destination nova-compute, source nova-compute, and xenapi logs
  from a live-migrate request to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1566622/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035168] [NEW] Remaining db migrations for unmaintained Nuage plugin

2023-09-11 Thread Takashi Kajinami
Public bug reported:

(This is not a functional bug but is a potential cleanup opportunity)

The latest master still contains database migration code for tables used
by Nuage plugin.

https://github.com/openstack/neutron/tree/8cba9a2ee86cb3b65645674ef315c14cfb261143/neutron/db/migration/alembic_migrations
 -> nuage_init_opts.py

However I noticed the nuage plugin is no longer maintained.

https://github.com/nuagenetworks/nuage-openstack-neutron/tree/master

AFAIU we can't remove these tables because plugins that were split out from
the neutron repo early on rely on tables/databases created by neutron, but
it's no longer useful to maintain these in case the plugin is already
unmaintained.

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Remaining db migrations for Nuage plugin 
+ Remaining db migrations for unmaintained Nuage plugin

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035168

Title:
  Remaining db migrations for unmaintained Nuage plugin

Status in neutron:
  New

Bug description:
  (This is not a functional bug but is a potential cleanup opportunity)

  The latest master still contains database migration code for tables
  used by Nuage plugin.

  
https://github.com/openstack/neutron/tree/8cba9a2ee86cb3b65645674ef315c14cfb261143/neutron/db/migration/alembic_migrations
   -> nuage_init_opts.py

  However I noticed the nuage plugin is no longer maintained.

  https://github.com/nuagenetworks/nuage-openstack-neutron/tree/master

  AFAIU we can't remove these tables because plugins that were split out from
  the neutron repo early on rely on tables/databases created by neutron, but
  it's no longer useful to maintain these in case the plugin is already
  unmaintained.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035168/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2033683] Re: openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore', '-n']

2023-09-11 Thread Takashi Kajinami
We are facing this issue in Puppet OpenStack CI, which uses RDO stable/yoga and
c8s, so this looks like a legit bug in iptables.
I don't think this is related to TripleO either, so I'll close this as invalid.

** Changed in: tripleo
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033683

Title:
  openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore',
  '-n']

Status in neutron:
  Invalid
Status in tripleo:
  Invalid

Bug description:
  Description
  ===
  Wallaby deployment via undercloud/overcloud started to fail recently on
  overcloud node provision.
  Neutron constantly reports an inability to update iptables, which in turn
  makes baremetal nodes fail to boot from PXE.
  From the review it seems that setting /usr/bin/update-alternatives to legacy
  fails since the neutron user doesn't have sudo rights to run it.
  In the info I can see that the neutron user has the following subset of
  commands it's able to run:
  ...
  (root) NOPASSWD: /usr/bin/update-alternatives --set iptables 
/usr/sbin/iptables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --set ip6tables 
/usr/sbin/ip6tables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --auto iptables
  (root) NOPASSWD: /usr/bin/update-alternatives --auto ip6tables

  But the issue is the fact that the command isn't found, as it was moved
  to /usr/sbin/update-alternatives
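
  If so, the sudoers entries would presumably need the updated path, along
  the lines of (a sketch, mirroring the sudo -l output above):

  (root) NOPASSWD: /usr/sbin/update-alternatives --set iptables /usr/sbin/iptables-legacy
  (root) NOPASSWD: /usr/sbin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
  (root) NOPASSWD: /usr/sbin/update-alternatives --auto iptables
  (root) NOPASSWD: /usr/sbin/update-alternatives --auto ip6tables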

  Steps to reproduce
  ==
  1. Deploy undercloud
  2. Deploy networks and VIP
  3. Add and introspect a node
  4. Execute overcloud node provision ... that will timeout 

  Expected result
  ===
  Successful overcloud node baremetal provisioning

  Logs & Configs
  ==
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-18d52177-9c93-401c-b97d-0334e488a257 - - - - -] Error while processing VIF 
ports: neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: 
['iptables-restore', '-n']; Stdin: # Generated by iptables_manager

  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent COMMIT
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent # Completed by 
iptables_manager
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; 
Stderr: iptables-restore: line 23 failed

  Environment
  ===
  Centos 9 Stream and undercloud deployment tool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2013473] [NEW] default_catalog.templates is outdated

2023-03-31 Thread Takashi Kajinami
Public bug reported:

It seems the catalog template file is horribly outdated and contains the
following problems.

 - keystone v2 was removed long ago
 - cinder no longer provides the v2 api and the v3 api should be used
 - cinder and nova no longer require tenant_id templates in the url; tenant_id templates prevent API access with domain/system scope tokens
 - the telemetry endpoint was removed
 - placement is now required by nova
 - the ec2 api was split out from nova and is now an independent and optional service
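
A minimal sketch of what updated entries could look like in the templated
catalog (region and URLs are illustrative):

```
catalog.RegionOne.identity.publicURL = http://localhost:5000/v3
catalog.RegionOne.volumev3.publicURL = http://localhost:8776/v3
catalog.RegionOne.compute.publicURL = http://localhost:8774/v2.1
catalog.RegionOne.placement.publicURL = http://localhost:8778
```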

** Affects: keystone
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2013473

Title:
  default_catalog.templates is outdated

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  It seems the catalog template file is horribly outdated and contains
  the following problems.

   - keystone v2 was removed long ago
   - cinder no longer provides the v2 api and the v3 api should be used
   - cinder and nova no longer require tenant_id templates in the url; tenant_id templates prevent API access with domain/system scope tokens
   - the telemetry endpoint was removed
   - placement is now required by nova
   - the ec2 api was split out from nova and is now an independent and optional service

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2013473/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007924] [NEW] stevedore always shows error if boto3 is not installed

2023-02-21 Thread Takashi Kajinami
Public bug reported:

Currently stevedore always dumps the following error in case boto3 is not
installed in the system.

ERROR stevedore.extension [-] Could not load 'glance.store.s3.Store': No module 
named 'boto3': ModuleNotFoundError: No module named 'boto3'
ERROR stevedore.extension [-] Could not load 's3': No module named 'boto3': 
ModuleNotFoundError: No module named 'boto3'

This error is a red herring because a missing boto3 does no harm unless
the s3 backend is actually used.

The other stores, such as the swift store, ignore a missing library while
loading drivers and fail only when the store is actually requested. It'd
be helpful to follow that strategy for the s3 backend to avoid the
confusing error.
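
A minimal sketch of that deferred-failure pattern (the class and exception
here are illustrative, not the actual glance_store code):

```python
try:
    import boto3
except ImportError:
    boto3 = None


class Store(object):
    def configure(self):
        # Fail only when the s3 store is actually configured/used,
        # instead of logging an error at driver-load time.
        if boto3 is None:
            raise RuntimeError('boto3 is required for the s3 store')
```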

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2007924

Title:
  stevedore always shows error if boto3 is not installed

Status in Glance:
  New

Bug description:
  Currently stevedore always dumps the following error in case boto3 is
  not installed in the system.

  ERROR stevedore.extension [-] Could not load 'glance.store.s3.Store': No 
module named 'boto3': ModuleNotFoundError: No module named 'boto3'
  ERROR stevedore.extension [-] Could not load 's3': No module named 'boto3': 
ModuleNotFoundError: No module named 'boto3'

  This error is a red herring because a missing boto3 does no harm unless
  the s3 backend is actually used.

  The other stores, such as the swift store, ignore a missing library
  while loading drivers and fail only when the store is actually
  requested. It'd be helpful to follow that strategy for the s3 backend
  to avoid the confusing error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2007924/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007532] [NEW] Wrong description about minimum value of live_migration_downtime(_delay)

2023-02-16 Thread Takashi Kajinami
Public bug reported:

Description
===
The parameter descriptions say live_migration_downtime_steps and
live_migration_downtime_delay are rounded if too small values are given.
However, these parameters actually have minimums defined, and oslo.config
does not accept smaller values and fails to load the config.
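
For reference, a minimal sketch of how such a minimum is declared with
oslo.config (name, default and minimum here are illustrative); values below
min are rejected when the config is loaded, rather than rounded:

```python
from oslo_config import cfg

opts = [
    cfg.IntOpt('live_migration_downtime', default=500, min=100,
               help='Maximum permitted downtime, in milliseconds.'),
]
```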


Steps to reproduce
==
Configure
 [libvirt] live_migration_downtime = 50
 [libvirt] live_migration_downtime_steps = 2

Expected result
===
Values are rounded to the described minimum values

Actual result
=
nova-compute fails to start because these do not meet the minimum requirement

Environment
===
N/A

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2007532

Title:
  Wrong description about minimum value of
  live_migration_downtime(_delay)

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The parameter descriptions say live_migration_downtime_steps and
  live_migration_downtime_delay are rounded if too small values are given.
  However, these parameters actually have minimums defined, and oslo.config
  does not accept smaller values and fails to load the config.

  
  Steps to reproduce
  ==
  Configure
   [libvirt] live_migration_downtime = 50
   [libvirt] live_migration_downtime_steps = 2

  Expected result
  ===
  Values are rounded to the described minimum values

  Actual result
  =
  nova-compute fails to start because these do not meet the minimum
  requirement

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2007532/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1638368] Re: new keystone db migrations require either SUPER or log_bin_trust_function_creators=1

2023-01-11 Thread Takashi Kajinami
It's not clear what the pending item on puppet-keystone is, and as I no
longer see the problem in recent versions, I'll mark this as won't fix
from our side.

** Changed in: puppet-keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1638368

Title:
  new keystone db migrations require either SUPER or
  log_bin_trust_function_creators=1

Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  Won't Fix

Bug description:
  Upgrade Process Docs:
  http://docs.openstack.org/developer/keystone/upgrading.html#upgrading-
  without-downtime

  The new keystone upgrade features (keystone-manage db_sync --expand)
  require either that the keystone user has SUPER or that

  set global log_bin_trust_function_creators=1; is run.

  I'm not sure which is the better option but logging this anyway.

  Without that you get this error:

  root@dev01-keystone-001:/var/log/mysql# keystone-manage db_sync --expand
  2016-11-01 19:56:17.803 1 INFO migrate.versioning.api [-] 97 -> 98...
  2016-11-01 19:56:17.821 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:17.821 1 INFO migrate.versioning.api [-] 98 -> 99...
  2016-11-01 19:56:17.839 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:17.839 1 INFO migrate.versioning.api [-] 99 -> 100...
  2016-11-01 19:56:17.855 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:17.856 1 INFO migrate.versioning.api [-] 100 -> 101...
  2016-11-01 19:56:17.897 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:17.897 1 INFO migrate.versioning.api [-] 101 -> 102...
  2016-11-01 19:56:17.961 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:17.961 1 INFO migrate.versioning.api [-] 102 -> 103...
  2016-11-01 19:56:18.108 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:18.109 1 INFO migrate.versioning.api [-] 103 -> 104...
  2016-11-01 19:56:18.132 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:18.132 1 INFO migrate.versioning.api [-] 104 -> 105...
  2016-11-01 19:56:18.454 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:18.455 1 INFO migrate.versioning.api [-] 105 -> 106...
  2016-11-01 19:56:18.680 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:18.680 1 INFO migrate.versioning.api [-] 106 -> 107...
  2016-11-01 19:56:18.968 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:18.968 1 INFO migrate.versioning.api [-] 107 -> 108...
  2016-11-01 19:56:19.324 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:19.325 1 INFO migrate.versioning.api [-] 108 -> 109...
  2016-11-01 19:56:19.477 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:19.534 1 INFO migrate.versioning.api [-] 0 -> 1...
  2016-11-01 19:56:19.550 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:19.550 1 INFO migrate.versioning.api [-] 1 -> 2...
  2016-11-01 19:56:19.569 1 INFO migrate.versioning.api [-] done
  2016-11-01 19:56:19.569 1 INFO migrate.versioning.api [-] 2 -> 3...
  2016-11-01 19:56:19.881 1 CRITICAL keystone [-] OperationalError: 
(_mysql_exceptions.OperationalError) (1419, 'You do not have the SUPER 
privilege and binary logging is enabled (you *might* want to use the less safe 
log_bin_trust_function_creators variable)') [SQL: "\nCREATE TRIGGER 
credential_insert_read_only BEFORE INSERT ON credential\nFOR EACH ROW\nBEGIN\n  
SIGNAL SQLSTATE '45000'\nSET MESSAGE_TEXT = 'Credential migration in 
progress. Cannot perform writes to credential table.';\nEND;\n"]
  2016-11-01 19:56:19.881 1 ERROR keystone Traceback (most recent call last):
  2016-11-01 19:56:19.881 1 ERROR keystone   File "/usr/bin/keystone-manage", 
line 10, in 
  2016-11-01 19:56:19.881 1 ERROR keystone sys.exit(main())
  2016-11-01 19:56:19.881 1 ERROR keystone   File 
"/venv/local/lib/python2.7/site-packages/keystone/cmd/manage.py", line 44, in 
main
  2016-11-01 19:56:19.881 1 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-01 19:56:19.881 1 ERROR keystone   File 
"/venv/local/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1254, in 
main
  2016-11-01 19:56:19.881 1 ERROR keystone CONF.command.cmd_class.main()
  2016-11-01 19:56:19.881 1 ERROR keystone   File 
"/venv/local/lib/python2.7/site-packages/keystone/cmd/cli.py", line 438, in main
  2016-11-01 19:56:19.881 1 ERROR keystone migration_helpers.expand_schema()
  2016-11-01 19:56:19.881 1 ERROR keystone   File 
"/venv/local/lib/python2.7/site-packages/keystone/common/sql/migration_helpers.py",
 line 233, in expand_schema
  2016-11-01 19:56:19.881 1 ERROR keystone 
_sync_repo(repo_name='expand_repo')
  2016-11-01 19:56:19.881 1 ERROR keystone   File 
"/venv/local/lib/python2.7/site-packages/keystone/common/sql/migration_helpers.py",
 line 144, in _sync_repo
  2016-11-01 19:56:19.881 1 ERROR keystone init_version=init_version, 

[Yahoo-eng-team] [Bug 1998274] [NEW] interface detach does not progress because libvirt does not complete the operation

2022-11-29 Thread Takashi Kajinami
Public bug reported:

Description
===
# This might not be a nova bug but I'm wondering whether anyone in the team has 
any idea about $topic.

Currently some tests in heat CI are consistently failing. Looking at the
failures, we found that detaching a port from an instance does not progress.

example:
https://zuul.opendev.org/t/openstack/build/301ed642a2374caf9a4f807952702a6a
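
For context, nova issues the live detach and then waits for libvirt's
device-removed event, as the log below shows. A standalone sketch of that
pattern with the libvirt python bindings (domain name and interface XML are
placeholders):

~~~
import libvirt

libvirt.virEventRegisterDefaultImpl()
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # placeholder domain

interface_xml = '<interface type="bridge">...</interface>'  # placeholder

def device_removed(conn, dom, alias, opaque):
    # Fired by libvirt once the guest has actually released the device.
    print('detached: %s' % alias)

conn.domainEventRegisterAny(
    dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, device_removed, None)

dom.detachDeviceFlags(interface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

while True:
    libvirt.virEventRunDefaultImpl()  # dispatch pending event callbacks
~~~

If the guest never acknowledges the unplug, the callback never fires, which
matches the repeated timeouts in the log.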

~~~
Nov 29 10:14:44.823945 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Attempting to detach device tape01452d4-15 from instance 
5cf9dfbe-e6cd-414b-a868-888aef23f733 from the persistent domain config. 
{{(pid=81163) _detach_from_persistent 
/opt/stack/nova/nova/virt/libvirt/driver.py:2445}}
Nov 29 10:14:44.824612 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] detach device xml: 
Nov 29 10:14:44.829211 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] looking for interface given config:  {{(pid=81163) 
get_interface_by_cfg /opt/stack/nova/nova/virt/libvirt/guest.py:257}}
Nov 29 10:14:44.832316 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] No interface of type:  found in domain 
{{(pid=81163) get_interface_by_cfg 
/opt/stack/nova/nova/virt/libvirt/guest.py:261}}
Nov 29 10:14:44.832606 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
INFO nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Successfully detached device tape01452d4-15 from instance 
5cf9dfbe-e6cd-414b-a868-888aef23f733 from the persistent domain config.
Nov 29 10:14:44.832978 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] (1/8): Attempting to detach device tape01452d4-15 with device alias 
net0 from instance 5cf9dfbe-e6cd-414b-a868-888aef23f733 from the live domain 
config. {{(pid=81163) _detach_from_live_with_retry 
/opt/stack/nova/nova/virt/libvirt/driver.py:2481}}
Nov 29 10:14:44.833403 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] detach device xml: 
Nov 29 10:14:49.838217 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Start waiting for the detach event from libvirt for device 
tape01452d4-15 with device alias net0 for instance 
5cf9dfbe-e6cd-414b-a868-888aef23f733 {{(pid=81163) 
_detach_from_live_and_wait_for_event 
/opt/stack/nova/nova/virt/libvirt/driver.py:2557}}
Nov 29 10:15:09.840289 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
WARNING nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Waiting for libvirt event about the detach of device tape01452d4-15 
with device alias net0 from instance 5cf9dfbe-e6cd-414b-a868-888aef23f733 is 
timed out.
Nov 29 10:15:09.841061 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] looking for interface given config:  {{(pid=81163) 
get_interface_by_cfg /opt/stack/nova/nova/virt/libvirt/guest.py:257}}
Nov 29 10:15:09.846547 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Failed to detach device tape01452d4-15 with device alias net0 from 
instance 5cf9dfbe-e6cd-414b-a868-888aef23f733 from the live domain config. 
Libvirt did not report any error but the device is still in the config. 
{{(pid=81163) _detach_from_live_with_retry 
/opt/stack/nova/nova/virt/libvirt/driver.py:2499}}
Nov 29 10:15:09.846792 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] (2/8): Attempting to detach device tape01452d4-15 with device alias 
net0 from instance 5cf9dfbe-e6cd-414b-a868-888aef23f733 from the live domain 
config. {{(pid=81163) _detach_from_live_with_retry 
/opt/stack/nova/nova/virt/libvirt/driver.py:2481}}
Nov 29 10:15:09.847211 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.guest [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] detach device xml: 
Nov 29 10:15:09.851474 ubuntu-jammy-rax-dfw-0032320806 nova-compute[81163]: 
DEBUG nova.virt.libvirt.driver [None req-3a1d745d-ca71-4dd3-9f75-d9adac3c5cdb 
demo demo] Libvirt returned error while detaching device tape01452d4-15 from 
instance 5cf9dfbe-e6cd-414b-a868-888aef23f733. Libvirt error code: 1, error 
message: internal error: unable to execute QEMU command 'device_del': Device 
net0 is already in the process of unplug. 

[Yahoo-eng-team] [Bug 1931558] Re: LFI vulnerability in "Create Workbook"

2022-08-12 Thread Takashi Kajinami
** Changed in: python-mistralclient
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1931558

Title:
  LFI vulnerability in "Create Workbook"

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Mistral:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in python-mistralclient:
  Fix Released

Bug description:
  Hello,
  I've found a Local File Inclusion (LFI) vulnerability in creating a workbook 
on OpenStack Dashboard.
  This vulnerability allows the attacker to read a sensitive file on the server 
like /etc/password, config file, etc. Tested version: Victoria Horizon 18.6.3
  I do not an opportunity to test the other version, but I think those versions 
also vulnerable.

  Steps to reproduce:
  1. Create a text file datnt78.txt with content: "/etc/passwd"
  2. Select Workflow -> Workbooks -> Create Workbook
  3. In "Definition Source" select "File", browse to the datnt78.txt file,
  then click Validate; the /etc/passwd content is returned.

  This is the request: http://paste.openstack.org/show/806520/
  This is the response: http://paste.openstack.org/show/806521/
  Please find the sample file and POC image in the attachment.

  Thank you,
  DatNT78 at FTEL CSOC

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1931558/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1984238] [NEW] neutron-dynamic-routing: Continuous warning because of missing context wrapper

2022-08-10 Thread Takashi Kajinami
Public bug reported:

It seems the fix for https://bugs.launchpad.net/neutron/+bug/1980671 was
incomplete.


In the Puppet OpenStack jobs we observe the following WARNING when
neutron-dynamic-routing is enabled:

https://zuul.opendev.org/t/openstack/build/9033dba56cd843e8a022a9f11a11f69a

https://597ec4b4da1176461210-f1e10e895ca9a37e1229881907aff07c.ssl.cf1.rackcdn.com/845984/2/check/puppet-
openstack-integration-7-scenario004-tempest-
centos-9-stream/9033dba/logs/neutron/l2gw-agent.txt


```
2022-08-10 01:35:26.723 102553 WARNING neutron.objects.base [None 
req-4a9650ba-3041-46db-98c1-af8c625df7c9 85c0a6ad83f24770b3cf565b324d33fd 
414c0188246e442c8f0baf6ae86e1d7d - - default default] ORM session: SQL 
execution without transacti
on in progress, traceback:
  File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, in 
main
result = function(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/eventlet/wsgi.py", line 837, in 
process_request
proto.__init__(conn_state, self)
...
  File "/usr/lib/python3.9/site-packages/neutron/db/l3_dvr_db.py", line 1411, 
in update_floatingip
old_floatingip, floatingip = self._update_floatingip(
  File "/usr/lib/python3.9/site-packages/neutron/db/l3_db.py", line 1519, in 
_update_floatingip
registry.publish(
  File "/usr/lib/python3.9/site-packages/neutron_lib/callbacks/registry.py", 
line 54, in publish
_get_callback_manager().publish(resource, event, trigger, payload=payload)
  File "/usr/lib/python3.9/site-packages/neutron_lib/db/utils.py", line 105, in 
_wrapped
return function(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", 
line 150, in publish
errors = self._notify_loop(resource, event, trigger, payload)
  File "/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", 
line 181, in _notify_loop
callback(resource, event, trigger, payload=payload)
  File 
"/usr/lib/python3.9/site-packages/neutron_dynamic_routing/services/bgp/bgp_plugin.py",
 line 257, in floatingip_update_callback
next_hop = self._get_fip_next_hop(
  File "/usr/lib/python3.9/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 1090, in _get_fip_next_hop
router = self._get_router(context, router_id)
  File "/usr/lib/python3.9/site-packages/neutron_dynamic_routing/db/bgp_db.py", 
line 1084, in _get_router
router = model_query.get_by_id(context, l3_db.Router, router_id)
  File "/usr/lib/python3.9/site-packages/neutron_lib/db/model_query.py", line 
169, in get_by_id
return query.filter(model.id == object_id).one()
  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2865, 
in one
return self._iter().one()
  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2903, 
in _iter
result = self.session.execute(
  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 
1693, in execute
result = fn(orm_exec_state)
```
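
The usual remedy is to wrap such reads in an explicit reader transaction.
A minimal sketch of the pattern (function shape is illustrative, not the
actual neutron-dynamic-routing code):

```python
from neutron.db import l3_db
from neutron_lib.db import api as db_api
from neutron_lib.db import model_query


def get_router(context, router_id):
    # Run the lookup inside an explicit reader transaction so the ORM
    # session never executes SQL without a transaction in progress.
    with db_api.CONTEXT_READER.using(context):
        return model_query.get_by_id(context, l3_db.Router, router_id)
```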

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Description changed:

  It seems the fix for https://bugs.launchpad.net/neutron/+bug/1980671 was
  incomplete.
  
- In Puppet OpenStack job we observe the following WARNING when neutron-
- dynamic-routing is enabled
+ 
+ In Puppet OpenStack job we observe the following WARNING when 
neutron-dynamic-routing is enabled
+ 
+ https://zuul.opendev.org/t/openstack/build/9033dba56cd843e8a022a9f11a11f69a
+ 
+ 
https://597ec4b4da1176461210-f1e10e895ca9a37e1229881907aff07c.ssl.cf1.rackcdn.com/845984/2/check/puppet-
+ openstack-integration-7-scenario004-tempest-
+ centos-9-stream/9033dba/logs/neutron/l2gw-agent.txt
+ 
  
  ```
  2022-08-10 01:35:26.723 102553 WARNING neutron.objects.base [None 
req-4a9650ba-3041-46db-98c1-af8c625df7c9 85c0a6ad83f24770b3cf565b324d33fd 
414c0188246e442c8f0baf6ae86e1d7d - - default default] ORM session: SQL 
execution without transacti
  on in progress, traceback:
-   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, 
in main
- result = function(*args, **kwargs)
-   File "/usr/lib/python3.9/site-packages/eventlet/wsgi.py", line 837, in 
process_request
- proto.__init__(conn_state, self)
+   File "/usr/lib/python3.9/site-packages/eventlet/greenthread.py", line 221, 
in main
+ result = function(*args, **kwargs)
+   File "/usr/lib/python3.9/site-packages/eventlet/wsgi.py", line 837, in 
process_request
+ proto.__init__(conn_state, self)
  ...
-   File "/usr/lib/python3.9/site-packages/neutron/db/l3_dvr_db.py", line 1411, 
in update_floatingip
- old_floatingip, floatingip = self._update_floatingip(
-   File "/usr/lib/python3.9/site-packages/neutron/db/l3_db.py", line 1519, in 
_update_floatingip
- registr

[Yahoo-eng-team] [Bug 1937904] Re: imp module is deprecated

2022-07-30 Thread Takashi Kajinami
This was fixed by
https://review.opendev.org/c/openstack/neutron/+/842450 in neutron.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937904

Title:
  imp module is deprecated

Status in neutron:
  Fix Released
Status in os-win:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in tripleo:
  New

Bug description:
  The imp module is deprecated since Python 3.4 and should be replaced by the 
importlib module.
  Usage of the imp module now shows the following deprecation warning.
  ~~~
  DeprecationWarning: the imp module is deprecated in favour of importlib; see 
the module's documentation for alternative uses
  ~~~
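
  For reference, a minimal sketch of the usual migration, assuming the code
  loads a module from a file path (the path and module name here are
  illustrative):
  ~~~
  # Before (deprecated since Python 3.4):
  #   import imp
  #   mod = imp.load_source('my_module', '/path/to/module.py')

  # After, using importlib from the standard library:
  import importlib.util

  spec = importlib.util.spec_from_file_location('my_module', '/path/to/module.py')
  mod = importlib.util.module_from_spec(spec)
  spec.loader.exec_module(mod)  # runs the module body, like imp.load_source
  ~~~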

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1937904/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1982720] [NEW] stable/train: neutron-grenade job consistently fails in requirements repo

2022-07-24 Thread Takashi Kajinami
Public bug reported:

Currently the neutron-grenade job in the stable/train branch of the
requirements repo consistently fails.

Example:
https://zuul.opendev.org/t/openstack/build/ab7522d64b2349858c57b746ee063b91

Looking at the job-output.txt, it seems the installation gets stuck at some
point, but I could not find the actual cause because no logs were captured.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1982720

Title:
  stable/train: neutron-grenade job consistently fails in requirements
  repo

Status in neutron:
  New

Bug description:
  Currently the neutron-grenade job in the stable/train branch of the
  requirements repo consistently fails.

  Example:
  https://zuul.opendev.org/t/openstack/build/ab7522d64b2349858c57b746ee063b91

  Looking at the job-output.txt, it seems the installation gets stuck at some
  point, but I could not find the actual cause because no logs were captured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982720/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978444] [NEW] Volume can't be detached if attachment delete api call fails with 504 gateway timeout

2022-06-12 Thread Takashi Kajinami
Public bug reported:

Description
===
When cinder-api is running behind a load balancer such as haproxy, the load
balancer can return 504 if it does not receive a response from cinder-api
within its timeout.
When this timeout occurs while detaching a volume, the volume becomes
un-detachable.

 - nova-compute calls delete attachment api in cinder
 - haproxy detects server timeout and returns 504
 - cinder continues processing the API and removes the attachment
 - nova-compute immediately aborts the volume detachment and leaves the bdm
 - when a client tries to detach the volume again, the detachment fails because 
the attachment no longer exists in Nova

See for details https://bugzilla.redhat.com/show_bug.cgi?id=2002643

Steps to reproduce
==
* Stop cinder-volume
* Detach a volume from an instance
* Start cinder-volume
* Detach the volume again

Expected result
===
* Volume can be detached after cinder-volume is recovered

Actual result
===
* Volume can't be detached

Environment
===
* The issue was initially found in stable/train

Logs & Configs
==
* See https://bugzilla.redhat.com/show_bug.cgi?id=2002643#c1
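
A sketch of the kind of idempotent retry handling that avoids the stuck state
(names and endpoints here are illustrative, not nova's actual code): treat a
404 on a retried delete as success, since cinder may have completed the
original, timed-out request.

```
import requests

def delete_attachment(base_url, attachment_id, retries=3):
    url = '%s/attachments/%s' % (base_url, attachment_id)
    for attempt in range(retries):
        resp = requests.delete(url)
        if resp.status_code in (200, 202, 204):
            return
        if resp.status_code == 404 and attempt > 0:
            # Deleted by the earlier request that timed out at the proxy.
            return
        if resp.status_code != 504:
            resp.raise_for_status()
    raise RuntimeError('attachment %s still not deleted' % attachment_id)
```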

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1978444

Title:
  Volume can't be detached if attachment delete api call fails with 504
  gateway timeout

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When cinder-api is running behind a load balancer such as haproxy, the load
  balancer can return 504 if it does not receive a response from cinder-api
  within its timeout.
  When this timeout occurs while detaching a volume, the volume becomes
  un-detachable.

   - nova-compute calls delete attachment api in cinder
   - haproxy detects server timeout and returns 504
   - cinder continues processing the API and removes the attachment
   - nova-compute immediately aborts the volume detachment and leaves the bdm
   - when a client tries to detach the volume again, the detachment fails 
because the attachment no longer exists in Nova

  See for details https://bugzilla.redhat.com/show_bug.cgi?id=2002643

  Steps to reproduce
  ==
  * Stop cinder-volume
  * Detach a volume from an instance
  * Start cinder-volume
  * Detach the volume again

  Expected result
  ===
  * Volume can be detached after cinder-volume is recovered

  Actual result
  ===
  * Volume can't be detached

  Environment
  ===
  * The issue was initially found in stable/train

  Logs & Configs
  ==
  * See https://bugzilla.redhat.com/show_bug.cgi?id=2002643#c1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1978444/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1968468] Re: metering-agent: Duplicate report_interval parameters

2022-04-10 Thread Takashi Kajinami
After digging into the implementation again, I noticed [DEFAULT]
report_interval is actually used to determine the interval between metering
reports, instead of the agent status report.

** Changed in: neutron
 Assignee: Takashi Kajinami (kajinamit) => (unassigned)

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968468

Title:
  metering-agent: Duplicate report_interval parameters

Status in neutron:
  Invalid

Bug description:
  The neutron-metering-agent service has the following two
  report_interval parameters, and these two look like duplicates.

  (1)
  [DEFAULT]
  report_interval=300

  (2)
  [AGENT]
  report_interval=30

  Looking at the implementation, (1) is used for configuration.report_interval
  in the agent status report while (2) determines the actual interval on the
  agent side.
  Considering consistency with the other agents, we should use (2) only and 
deprecate (1).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968468/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1968468] [NEW] metering-agent: Duplicate report_interval parameters

2022-04-10 Thread Takashi Kajinami
Public bug reported:

The neutron-metering-agent service has the following two report_interval
parameters, and these two look like duplicates.

(1)
[DEFAULT]
report_interval=300

(2)
[AGENT]
report_interval=30

Looking at the implementation, (1) is used to determine the
configuration.report_interval reported in the agent status while (2) is used to
determine the actual interval on the agent side.
Considering consistency with the other agents, we should use (2) only and 
deprecate (1).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1968468

Title:
  metering-agent: Duplicate report_interval parameters

Status in neutron:
  New

Bug description:
  The neutron-metering-agent service has the following two
  report_interval parameters, and these two look like duplicates.

  (1)
  [DEFAULT]
  report_interval=300

  (2)
  [AGENT]
  report_interval=30

  Looking at the implementation, (1) is used to determine the
  configuration.report_interval reported in the agent status while (2) is used
  to determine the actual interval on the agent side.
  Considering consistency with the other agents, we should use (2) only and 
deprecate (1).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1968468/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967686] Re: [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI middleware

2022-04-03 Thread Takashi Kajinami
** Changed in: ec2-api
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** No longer affects: ec2-api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967686

Title:
  [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI
  middleware

Status in Cinder:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The [DEFAULT] use_forwarded_for parameter enables detection of the remote
  address via the X-Forwarded-For request header.
  However this functionality is a duplicate of the HTTPProxyToWSGI middleware
  in the oslo.middleware library.

  Now that the HTTPProxyToWSGI middleware is enabled in the API pipeline by
  default and is used globally by multiple components, the service's own
  use_forwarded_for parameter can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967686/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967686] Re: [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI middleware

2022-04-03 Thread Takashi Kajinami
** Also affects: ec2-api
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967686

Title:
  [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI
  middleware

Status in Cinder:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The [DEFAULT] use_forwarded_for parameter enables detection of the remote
  address via the X-Forwarded-For request header.
  However this functionality is a duplicate of the HTTPProxyToWSGI middleware
  in the oslo.middleware library.

  Now that the HTTPProxyToWSGI middleware is enabled in the API pipeline by
  default and is used globally by multiple components, the service's own
  use_forwarded_for parameter can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967686/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967686] [NEW] [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI middleware

2022-04-03 Thread Takashi Kajinami
Public bug reported:

The [DEFAULT] use_forwarded_for parameter enables detection of the remote
address via the X-Forwarded-For request header.
However this functionality is a duplicate of the HTTPProxyToWSGI middleware
in the oslo.middleware library.

Now that the HTTPProxyToWSGI middleware is enabled in the API pipeline by
default and is used globally by multiple components, the service's own
use_forwarded_for parameter can be removed.
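
For reference, a sketch of the middleware configuration that supersedes the
option, using the entry points and option name provided by oslo.middleware:

```
# api-paste.ini
[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory

# service configuration file
[oslo_middleware]
enable_proxy_headers_parsing = True
```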

** Affects: cinder
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: manila
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: manila
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Description changed:

  The [DEFAULT] use_forwarded_for parameter enables detection of remote address 
by the X-Forwarded-For request header.
- However this functionality is duplicate of the HTTPProxyToWSGI middleware.
+ However this functionality is duplicate of the HTTPProxyToWSGI middleware in 
the oslo.middleware library.
  
  Now the HTTPProxyToWSGI middleware is enabled in api pipeline by
  default, and also is globally used by multiple components, the own
  use_forwarded_for parameter can be removed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967686

Title:
  [DEFAULT] use_forwarded_for is duplicate of the HTTPProxyToWSGI
  middleware

Status in Cinder:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The [DEFAULT] use_forwarded_for parameter enables detection of the remote
  address via the X-Forwarded-For request header.
  However this functionality is a duplicate of the HTTPProxyToWSGI middleware
  in the oslo.middleware library.

  Now that the HTTPProxyToWSGI middleware is enabled in the API pipeline by
  default and is used globally by multiple components, the service's own
  use_forwarded_for parameter can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967686/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967683] Re: Wrong property to look up remote address

2022-04-03 Thread Takashi Kajinami
** Also affects: masakari
   Importance: Undecided
   Status: New

** No longer affects: masakari

** Changed in: cinder
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: manila
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: ec2-api
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967683

Title:
  Wrong property to look up remote address

Status in Cinder:
  In Progress
Status in ec2-api:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently, the remote_address attribute of the Request object is used to
  look up the client address in multiple places.

  eg.
  
https://github.com/openstack/cinder/blob/7086157de07b77e8b67bbb767bc2ce25e86c2f51/cinder/api/middleware/auth.py#L64

  ~~~
  def _set_request_context(req, **kwargs):
  """Sets request context based on parameters and request."""
  remote_address = getattr(req, 'remote_address', '127.0.0.1')
  ~~~

  However, webob.Request has no remote_address attribute but only remote_addr 
attribute.
   
https://docs.pylonsproject.org/projects/webob/en/stable/api/request.html#webob.request.BaseRequest.remote_addr

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967683] Re: Wrong property to look up remote address

2022-04-03 Thread Takashi Kajinami
** Also affects: ec2-api
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967683

Title:
  Wrong property to look up remote address

Status in Cinder:
  In Progress
Status in ec2-api:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently, the remote_address attribute of the Request object is used to
  look up the client address in multiple places.

  eg.
  
https://github.com/openstack/cinder/blob/7086157de07b77e8b67bbb767bc2ce25e86c2f51/cinder/api/middleware/auth.py#L64

  ~~~
  def _set_request_context(req, **kwargs):
  """Sets request context based on parameters and request."""
  remote_address = getattr(req, 'remote_address', '127.0.0.1')
  ~~~

  However, webob.Request has no remote_address attribute but only remote_addr 
attribute.
   
https://docs.pylonsproject.org/projects/webob/en/stable/api/request.html#webob.request.BaseRequest.remote_addr

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967683] Re: Wrong property to look up remote address

2022-04-03 Thread Takashi Kajinami
** Also affects: manila
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967683

Title:
  Wrong property to look up remote address

Status in Cinder:
  In Progress
Status in ec2-api:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently, the remote_address attribute of the Request object is used to
  look up the client address in multiple places.

  eg.
  
https://github.com/openstack/cinder/blob/7086157de07b77e8b67bbb767bc2ce25e86c2f51/cinder/api/middleware/auth.py#L64

  ~~~
  def _set_request_context(req, **kwargs):
  """Sets request context based on parameters and request."""
  remote_address = getattr(req, 'remote_address', '127.0.0.1')
  ~~~

  However, webob.Request has no remote_address attribute but only remote_addr 
attribute.
   
https://docs.pylonsproject.org/projects/webob/en/stable/api/request.html#webob.request.BaseRequest.remote_addr

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967683] [NEW] Wrong property to look up remote address

2022-04-03 Thread Takashi Kajinami
Public bug reported:

Currently, the remote_address attribute of the Request object is used to
look up the client address in multiple places.

eg.
https://github.com/openstack/cinder/blob/7086157de07b77e8b67bbb767bc2ce25e86c2f51/cinder/api/middleware/auth.py#L64

~~~
def _set_request_context(req, **kwargs):
"""Sets request context based on parameters and request."""
remote_address = getattr(req, 'remote_address', '127.0.0.1')
~~~

However, webob.Request has no remote_address attribute but only remote_addr 
attribute.
 
https://docs.pylonsproject.org/projects/webob/en/stable/api/request.html#webob.request.BaseRequest.remote_addr
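
The fix is essentially a one-word change; a minimal sketch:

~~~
import webob

def _get_remote_address(req):
    # webob exposes the client address as `remote_addr`; the misspelled
    # `remote_address` lookup always fell back to the default.
    return getattr(req, 'remote_addr', None) or '127.0.0.1'
~~~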

** Affects: cinder
 Importance: Undecided
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1967683

Title:
  Wrong property to look up remote address

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, the remote_address attribute of the Request object is used to
  look up the client address in multiple places.

  eg.
  
https://github.com/openstack/cinder/blob/7086157de07b77e8b67bbb767bc2ce25e86c2f51/cinder/api/middleware/auth.py#L64

  ~~~
  def _set_request_context(req, **kwargs):
  """Sets request context based on parameters and request."""
  remote_address = getattr(req, 'remote_address', '127.0.0.1')
  ~~~

  However, webob.Request has no remote_address attribute but only remote_addr 
attribute.
   
https://docs.pylonsproject.org/projects/webob/en/stable/api/request.html#webob.request.BaseRequest.remote_addr

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1967683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525439] Re: Glance V2 API is not backwards compatible and breaks Cinder solidfire driver

2022-03-04 Thread Takashi Kajinami
** Changed in: puppet-cinder
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1525439

Title:
  Glance V2 API is not backwards compatible and breaks Cinder solidfire
  driver

Status in Cinder:
  Won't Fix
Status in Glance:
  Won't Fix
Status in puppet-cinder:
  Won't Fix

Bug description:
  In stable/kilo

  The Glance API V2 change of the image-metadata is_public flag to
  visibility = Public breaks the SolidFire (and maybe other, NetApp?) drivers
  that depend on the is_public flag. Specifically this breaks the ability to
  efficiently handle images by caching images in the SolidFire cluster.

  Changing the API back to V1 through the cinder.conf file then breaks
  Ceph, which depends on V2 and the image-metadata direct_url and
  locations to determine if it can clone an image to a volume.  So this
  breaks Ceph's ability to efficiently handle images

  This version mismatch does not allow for SolidFire and Ceph to both be
  used efficiently in the same OpenStack cloud.

  NOTE: openstack/puppet-cinder defaults to glance-api-version = 2 which
  allows Ceph efficiency to work and not SolidFire (and others).

  Mainly opening this bug to document this problem; since no changes are
  allowed to Kilo, there is probably no way to fix this.

  Code locations:

  cinder/cinder/image/glance.py line 250-256
  cinder/cinder/volume/drivers/rbd.py line 827
  cinder/cinder/volume/drivers/solidfire.py line 647
  puppet-cinder/manifests/glance.pp line 59

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1525439/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1962726] [NEW] ssh-rsa key is no longer allowed by recent openssh

2022-03-02 Thread Takashi Kajinami
Public bug reported:

Description
===
Currently the create key-pair API, when called without actual key content,
returns a key generated at server side which is formatted in ssh-rsa.

However ssh-rsa is no longer supported by default since openssh 8.8

https://www.openssh.com/txt/release-8.8

```
This release disables RSA signatures using the SHA-1 hash algorithm
by default. This change has been made as the SHA-1 hash algorithm is
cryptographically broken, and it is possible to create chosen-prefix
hash collisions for <USD$50K [1]
```
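
For comparison, a key type that recent OpenSSH still accepts can be generated
with the cryptography library; a minimal sketch (illustrative, not nova's
actual implementation):

```
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# ssh-ed25519 remains enabled in the default OpenSSH 8.8+ configuration.
private_key = ed25519.Ed25519PrivateKey.generate()
public_openssh = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)
print(public_openssh.decode())  # "ssh-ed25519 AAAA..."
```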

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1962726

Title:
  ssh-rsa key is no longer allowed by recent openssh

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Currently the create key-pair API, when called without actual key content,
  returns a key generated at server side which is formatted in ssh-rsa.

  However ssh-rsa is no longer supported by default since openssh 8.8

  https://www.openssh.com/txt/release-8.8

  ```
  This release disables RSA signatures using the SHA-1 hash algorithm
  by default. This change has been made as the SHA-1 hash algorithm is
  cryptographically broken, and it is possible to create chosen-prefix
  hash collisions for <USD$50K [1]
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1962726/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1960247] [NEW] server suspend action allows authorization by user_id while server resume action does not

2022-02-07 Thread Takashi Kajinami
Public bug reported:

Description
===
Since the following change was merged, nova allows authorization by user_id for 
server suspend action.

https://review.opendev.org/c/openstack/nova/+/353344

However the same is not yet implemented for the resume action, and this
results in inconsistent policy rules for the two corresponding operations.

Steps to reproduce
==
* Define policy rules like the following example
  "os_compute_api:os-suspend-server:suspend": "rule:admin_api or 
user_id:%(user_id)s"
  "os_compute_api:os-suspend-server:resume": "rule:admin_api or 
user_id:%(user_id)s"
* Create a server by a non-admin user
* Suspend the server by the user
* Resume the server by the user

Expected result
===
Both suspend and resume are accepted

Actual result
=
Only suspend is accepted and resume fails with

ERROR (Forbidden): Policy doesn't allow os_compute_api:os-suspend-
server:suspend to be performed. (HTTP 403) (Request-ID: req-...)

Environment
===
This issue was initially reported as one found in stable/xena deployment.
 
http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027078.html

Logs & Configs
==
N/A

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1960247

Title:
  server suspend action allows authorization by user_id while server
  resume action does not

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Since the following change was merged, nova allows authorization by user_id
  for the server suspend action.

  https://review.opendev.org/c/openstack/nova/+/353344

  However the same is not yet implemented for the resume action, and this
  results in inconsistent policy rules for the two corresponding operations.

  Steps to reproduce
  ==
  * Define policy rules like the following example
    "os_compute_api:os-suspend-server:suspend": "rule:admin_api or 
user_id:%(user_id)s"
    "os_compute_api:os-suspend-server:resume": "rule:admin_api or 
user_id:%(user_id)s"
  * Create a server by a non-admin user
  * Suspend the server by the user
  * Resume the server by the user

  Expected result
  ===
  Both suspend and resume are accepted

  Actual result
  =
  Only suspend is accepted and resume fails with

  ERROR (Forbidden): Policy doesn't allow os_compute_api:os-suspend-
  server:suspend to be performed. (HTTP 403) (Request-ID: req-...)

  Environment
  ===
  This issue was initially reported as one found in stable/xena deployment.
   
http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027078.html

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1960247/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1958075] [NEW] The fixtures library is missing from requirements.txt

2022-01-16 Thread Takashi Kajinami
Public bug reported:

Description
===
The following change[1] made the nova.utils module depend on the fixtures 
library.
[1] https://review.opendev.org/c/openstack/nova/+/824280

However the fixtures library is listed only in test-requirements.txt and is not
yet listed in requirements.txt.
Because of this the library is not installed in a normal deployment and the
`nova-manage api_db sync` command fails with the following error.

Traceback (most recent call last):
  File "/usr/bin/nova-manage", line 6, in <module>
    from nova.cmd.manage import main
  File "/usr/lib/python3.6/site-packages/nova/cmd/manage.py", line 49, in <module>
    from nova.cmd import common as cmd_common
  File "/usr/lib/python3.6/site-packages/nova/cmd/common.py", line 26, in <module>
    import nova.db.main.api
  File "/usr/lib/python3.6/site-packages/nova/db/main/api.py", line 45, in <module>
    from nova import block_device
  File "/usr/lib/python3.6/site-packages/nova/block_device.py", line 26, in <module>
    from nova import utils
  File "/usr/lib/python3.6/site-packages/nova/utils.py", line 32, in <module>
    import fixtures
ModuleNotFoundError: No module named 'fixtures'

This issue was initially detected in litmus jobs in puppet repos[2].
These jobs use RDO packages, which define dependencies based on requirements.txt

[2] example:
https://zuul.opendev.org/t/openstack/build/e086ca3375714860ae463b7a1d9b1bab
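
The straightforward fix is to promote the library to runtime requirements,
unless the import is dropped from nova.utils instead; a sketch, with the
version bound illustrative (it should follow global-requirements):

```
# requirements.txt
fixtures>=3.0.0  # Apache-2.0/BSD
```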

Steps to reproduce
==

Expected result
===

Actual result
=

Environment
===

Logs & Configs
======

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1958075

Title:
  The fixtures library is missing from requirements.txt

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The following change[1] made the nova.utils module depend on the fixtures 
library.
  [1] https://review.opendev.org/c/openstack/nova/+/824280

  However the fixtures library is listed only in test-requirements.txt and is
  not yet listed in requirements.txt.
  Because of this the library is not installed in a normal deployment and the
`nova-manage api_db sync` command fails with the following error.

  Traceback (most recent call last):
    File "/usr/bin/nova-manage", line 6, in <module>
      from nova.cmd.manage import main
    File "/usr/lib/python3.6/site-packages/nova/cmd/manage.py", line 49, in <module>
      from nova.cmd import common as cmd_common
    File "/usr/lib/python3.6/site-packages/nova/cmd/common.py", line 26, in <module>
      import nova.db.main.api
    File "/usr/lib/python3.6/site-packages/nova/db/main/api.py", line 45, in <module>
      from nova import block_device
    File "/usr/lib/python3.6/site-packages/nova/block_device.py", line 26, in <module>
      from nova import utils
    File "/usr/lib/python3.6/site-packages/nova/utils.py", line 32, in <module>
      import fixtures
  ModuleNotFoundError: No module named 'fixtures'

  This issue was initially detected in litmus jobs in puppet repos[2].
  These jobs use RDO packages, which define dependencies based on
  requirements.txt

  [2] example:
  https://zuul.opendev.org/t/openstack/build/e086ca3375714860ae463b7a1d9b1bab

  Steps to reproduce
  ==

  Expected result
  ===

  Actual result
  =

  Environment
  ===

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1958075/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1957180] [NEW] The [AGENT] veth_mtu parameter is no longer used

2022-01-12 Thread Takashi Kajinami
Public bug reported:

Since the [ovs] use_veth_interconnection parameter was removed by [1],
the [AGENT] veth_mtu parameter is no longer used.

[1] https://review.opendev.org/c/openstack/neutron/+/759947

We should deprecate and remove the parameter since it has no effect now.
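
A sketch of how the deprecation could look with oslo.config (default and help
text illustrative):

```
from oslo_config import cfg

veth_mtu = cfg.IntOpt(
    'veth_mtu',
    default=9000,
    deprecated_for_removal=True,
    deprecated_reason='This option has no effect since veth '
                      'interconnection support was removed.',
    help='MTU size of veth interfaces')
```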

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Description changed:

  Since the [ovs] use_veth_interconnection parameter was removed by [1],
  the [AGENT] veth_mtu parameter is no longer used.
  
  [1] https://review.opendev.org/c/openstack/neutron/+/759947
+ 
+ We should deprecate and remove the parameter since it has no effect now.

** Changed in: neutron
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1957180

Title:
  The [AGENT] veth_mtu parameter is no longer used

Status in neutron:
  In Progress

Bug description:
  Since the [ovs] use_veth_interconnection parameter was removed by [1],
  the [AGENT] veth_mtu parameter is no longer used.

  [1] https://review.opendev.org/c/openstack/neutron/+/759947

  We should deprecate and remove the parameter since it has no effect
  now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1957180/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-11-29 Thread Takashi Kajinami
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Designate:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in tempest:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest
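
  The usual replacement relies on dict view subset comparison, which Python 3
  supports natively; a minimal sketch:

  ~~~
  # Before (deprecated):
  #   self.assertDictContainsSubset(expected, actual)

  # After: dict item views support subset comparison in Python 3.
  self.assertLessEqual(expected.items(), actual.items())
  ~~~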

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946100] [NEW] [oslo_limit] parameters are missing from glance-api.conf

2021-10-05 Thread Takashi Kajinami
Public bug reported:

The following change[1] introduced a dependency on the oslo.limit library
to implement the unified quota, but the parameters of the library are
still missing from the example glance-api.conf as well as the conf file
generated by oslo-config-generator.

We should add the missing oslo.config.opts entrypoint so that the
library parameters are rendered into the conf file by oslo-config-
generator.

[1] https://review.opendev.org/c/openstack/glance/+/788054
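
For context, a sketch of the mechanism referred to above (module and function
names illustrative, not glance's actual layout); oslo-config-generator only
renders option groups exposed through such an entrypoint:

```
# setup.cfg
[entry_points]
oslo.config.opts =
    glance = glance.opts:list_opts

# glance/opts.py must then include the oslo.limit option group in the
# list returned by list_opts().
```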

** Affects: glance
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1946100

Title:
  [oslo_limit] parameters are missing from glance-api.conf

Status in Glance:
  In Progress

Bug description:
  The following change[1] introduced a dependency on the oslo.limit
  library to implement the unified quota, but the parameters of the
  library are still missing from the example glance-api.conf as well as
  the conf file generated by oslo-config-generator.

  We should add the missing oslo.config.opts entrypoint so that the
  library parameters are rendered into the conf file by oslo-config-
  generator.

  [1] https://review.opendev.org/c/openstack/glance/+/788054

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1946100/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1944043] [NEW] Wrong exception type is expected to retry volume detachment API calls

2021-09-18 Thread Takashi Kajinami
Public bug reported:

Description
===
The following change introduced the logic to retry cinder API calls to detach 
volumes.

https://review.opendev.org/c/openstack/nova/+/669674

The logic detects the InternalServerError class from
cinderclient.apiclient.exceptions.

However this is wrong and these API calls raise the ClientException
class from cinderclient.exceptions instead.

Steps to reproduce
==
N/A

Actual result
=
N/A

Environment
===
N/A

Logs & Configs
==
N/A
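
A sketch of the corrected handler (the client object and retry helper are
illustrative, not nova's actual code):

```
from cinderclient import exceptions as cinder_exception

try:
    cinder_client.attachments.delete(attachment_id)
except cinder_exception.ClientException as ex:
    # cinderclient raises ClientException subclasses carrying the HTTP
    # status in `code`; only a 500 response should trigger a retry.
    if getattr(ex, 'code', None) == 500:
        schedule_retry()
    else:
        raise
```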

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1944043

Title:
  Wrong exception type is expected to retry volume detachment API calls

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  The following change introduced the logic to retry cinder API calls to detach 
volumes.

  https://review.opendev.org/c/openstack/nova/+/669674

  The logic detects the InternalServerError class from
  cinderclient.apiclient.exceptions.

  However this is wrong and these API calls raise the ClientException
  class from cinderclient.exceptions instead.

  Steps to reproduce
  ==
  N/A

  Actual result
  =
  N/A

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1944043/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-09-04 Thread Takashi Kajinami
** Changed in: mistral
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-09-04 Thread Takashi Kajinami
** Also affects: mistral
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-09-04 Thread Takashi Kajinami
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-09-04 Thread Takashi Kajinami
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in neutron:
  New
Status in python-neutronclient:
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1941020] [NEW] [memcache] options are no longer used

2021-08-24 Thread Takashi Kajinami
Public bug reported:

Keystone provides some options under the [memcache] section, but these
parameters are not used anywhere.
Looking at the history, it seems these parameters were used by the
memcache_pool backend to persist tokens, but that backend was removed during
Pike.

https://opendev.org/openstack/keystone/src/tag/newton-
eol/keystone/token/persistence/backends/memcache_pool.py

** Affects: keystone
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1941020

Title:
  [memcache] options are no longer used

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone provides some options under the [memcache] section, but these
  parameters are not used anywhere.
  Looking at the history, it seems these parameters were used by the
  memcache_pool backend to persist tokens, but that backend was removed during
  Pike.

  https://opendev.org/openstack/keystone/src/tag/newton-
  eol/keystone/token/persistence/backends/memcache_pool.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1941020/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940790] [NEW] oslo.cache options are missing from neutron.conf generated by "tox -e genconfig"

2021-08-22 Thread Takashi Kajinami
Public bug reported:

# We could include this in bug 1940009 but the fix for the bug was already 
merged.
# So I'll create a separate bug to avoid two Closes-Bug commits, which I
believe is confusing.

Neutron uses the oslo.cache library for caching and registers its options.
However these options are missing from neutron.conf generated by "tox -e 
genconfig".

[1]
https://github.com/openstack/neutron/blob/84d9bb1e0e5cde34acd9f4ee7f54baf9c89c7d81/neutron/common/cache_utils.py#L28-L30
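
The usual fix is to add the library namespace to the config-generator input
file; a sketch (file path illustrative):

```
# etc/oslo-config-generator/neutron.conf
[DEFAULT]
output_file = etc/neutron.conf.sample
namespace = neutron
namespace = oslo.cache
```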

** Affects: neutron
 Importance: Undecided
 Status: In Progress

** Description changed:

  # We could include this in bug 1940009 but the fix for the bug was already 
merged.
- # So I'll crate a separate bug to avoid two Closes-Bug commit which is 
confusing.
+ # So I'll crate a separate bug to avoid two Closes-Bug commits which I 
believe is confusing.
  
  Neutron uses oslo.cache library for caching and registers its options.
  However these options are missing from neutron.conf generated by "tox -e 
genconfig".
  
  [1]
  
https://github.com/openstack/neutron/blob/84d9bb1e0e5cde34acd9f4ee7f54baf9c89c7d81/neutron/common/cache_utils.py#L28-L30

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940790

Title:
  oslo.cache options are missing from neutron.conf generated by "tox -e
  genconfig"

Status in neutron:
  In Progress

Bug description:
  # We could include this in bug 1940009 but the fix for the bug was already 
merged.
  # So I'll create a separate bug to avoid two Closes-Bug commits, which I
  believe is confusing.

  Neutron uses the oslo.cache library for caching and registers its options.
  However these options are missing from neutron.conf generated by "tox -e 
genconfig".

  [1]
  
https://github.com/openstack/neutron/blob/84d9bb1e0e5cde34acd9f4ee7f54baf9c89c7d81/neutron/common/cache_utils.py#L28-L30

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940790/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940733] Re: [oslo_reports] options are missing from the config file generated by oslo-config-generator

2021-08-21 Thread Takashi Kajinami
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: ceilometer
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1940733

Title:
  [oslo_reports] options are missing from the config file generated by
  oslo-config-generator

Status in Ceilometer:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The oslo.reports library[1] introduced the ability to generate an error
  report, usually called a "guru meditation report".
  This library provides several config options but currently none of them are
  rendered into the .conf file generated by oslo-config-generator.

  [1] https://github.com/openstack/oslo.reports

  Steps to reproduce
  ==
  * Generate .conf file by `tox -e genconfig`
  * Review options described in the generated .conf file

  Expected result
  ===
  The [oslo_reports] section is included

  Actual result
  =
  The [oslo_reports] section is missing

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1940733/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940733] Re: [oslo_reports] options are missing from the config file generated by oslo-config-generator

2021-08-21 Thread Takashi Kajinami
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1940733

Title:
  [oslo_reports] options are missing from the config file generated by
  oslo-config-generator

Status in Ceilometer:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The oslo.reports library[1] introduced the ability to generate an error
  report, usually called a "guru meditation report".
  This library provides several config options but currently none of them are
  rendered into the .conf file generated by oslo-config-generator.

  [1] https://github.com/openstack/oslo.reports

  Steps to reproduce
  ==
  * Generate .conf file by `tox -e genconfig`
  * Review options described in the generated .conf file

  Expected result
  ===
  The [oslo_reports] section is included

  Actual result
  =
  The [oslo_reports] section is missing

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1940733/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940733] [NEW] [oslo_reports] options are missing from the config file generated by oslo-config-generator

2021-08-21 Thread Takashi Kajinami
Public bug reported:

Description
===
The oslo.reports library[1] introduced the ability to generate an error
report, usually called a "guru meditation report".
This library provides several config options but currently none of them are
rendered into the .conf file generated by oslo-config-generator.

[1] https://github.com/openstack/oslo.reports

Steps to reproduce
==
* Generate .conf file by `tox -e genconfig`
* Review options described in the generated .conf file

Expected result
===
The [oslo_reports] section is included

Actual result
=
The [oslo_reports] section is missing

Environment
===
N/A

Logs & Configs
==
N/A
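
The options surface once the generator input lists the library's namespace; a
sketch using nova as an example (paths illustrative):

```
# etc/oslo-config-generator/nova.conf
[DEFAULT]
output_file = etc/nova/nova.conf.sample
namespace = nova.conf
namespace = oslo.reports
```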

** Affects: ceilometer
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: designate
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: manila
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: manila
   Importance: Undecided
   Status: New

** Changed in: manila
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1940733

Title:
  [oslo_reports] options are missing from the config file generated by
  oslo-config-generator

Status in Ceilometer:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The oslo.reports library[1] introduced the ability to generate an error
  report, usually called a "guru meditation report".
  This library provides several config options but currently none of them are
  rendered into the .conf file generated by oslo-config-generator.

  [1] https://github.com/openstack/oslo.reports

  Steps to reproduce
  ==
  * Generate .conf file by `tox -e genconfig`
  * Review options described in the generated .conf file

  Expected result
  ===
  The [oslo_reports] section is included

  Actual result
  =
  The [oslo_reports] section is missing

  Environment
  ===
  N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1940733/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940090] [NEW] options of the castellan library are missing from glance-api.conf

2021-08-16 Thread Takashi Kajinami
Public bug reported:

Glance loads the castellan library for encryption, but options for that
library (such as those under [key_manager] and [barbican]) are missing
from the example glance-api.conf.

I've regenerated the conf file using `tox -e genconfig`, which internally
calls oslo-config-generator, but even in the generated config file the
options are still missing.
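
Same pattern as the other generator gaps; a sketch assuming castellan's
standard oslo.config.opts namespace, castellan.config (glance's actual
generator file may differ):

```
# etc/oslo-config-generator/glance-api.conf
[DEFAULT]
output_file = etc/glance-api.conf.sample
namespace = glance.api
namespace = castellan.config
```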

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1940090

Title:
  options of the castellan library are missing from glance-api.conf

Status in Glance:
  In Progress

Bug description:
  Glance loads the castellan library for encryption, but options for that
  library (such as those under [key_manager] and [barbican]) are missing
  from the example glance-api.conf.

  I've regenerated the conf file using `tox -e genconfig`, which internally
  calls oslo-config-generator, but even in the generated config file
  the options are still missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1940090/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937904] Re: imp module is deprecated

2021-08-15 Thread Takashi Kajinami
** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937904

Title:
  imp module is deprecated

Status in neutron:
  In Progress
Status in os-win:
  Fix Released
Status in python-novaclient:
  In Progress
Status in tripleo:
  New

Bug description:
  The imp module is deprecated since Python 3.4 and should be replaced by the 
importlib module.
  Now usage of the imp module shows the following deprecation warning.
  ~~~
  DeprecationWarning: the imp module is deprecated in favour of importlib; see 
the module's documentation for alternative uses
  ~~~
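
  For reference, a minimal sketch of the usual migration for the common
  load-a-module-from-path case (the module name and path here are
  illustrative):

  ~~~
  # deprecated: emits the DeprecationWarning above
  import imp
  mod = imp.load_source('plugin', '/opt/plugins/plugin.py')

  # replacement using importlib from the standard library
  import importlib.util
  spec = importlib.util.spec_from_file_location(
      'plugin', '/opt/plugins/plugin.py')
  mod = importlib.util.module_from_spec(spec)
  spec.loader.exec_module(mod)
  ~~~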

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1937904/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937904] Re: imp module is deprecated

2021-08-15 Thread Takashi Kajinami
** Changed in: os-win
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: neutron
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937904

Title:
  imp module is deprecated

Status in neutron:
  In Progress
Status in os-win:
  Fix Released
Status in python-novaclient:
  New

Bug description:
  The imp module is deprecated since Python 3.4 and should be replaced by the 
importlib module.
  Now usage of the imp module shows the following deprecation warning.
  ~~~
  DeprecationWarning: the imp module is deprecated in favour of importlib; see 
the module's documentation for alternative uses
  ~~~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1937904/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937904] Re: imp module is deprecated

2021-08-15 Thread Takashi Kajinami
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1937904

Title:
  imp module is deprecated

Status in neutron:
  In Progress
Status in os-win:
  Fix Released

Bug description:
  The imp module is deprecated since Python 3.4 and should be replaced by the 
importlib module.
  Now usage of the imp module shows the following deprecation warning.
  ~~~
  DeprecationWarning: the imp module is deprecated in favour of importlib; see 
the module's documentation for alternative uses
  ~~~

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1937904/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940009] [NEW] The healthcheck middleware options are not included in neutron.conf generated by "tox -e genconfig"

2021-08-15 Thread Takashi Kajinami
Public bug reported:

The healthcheck middleware was added to the API pipeline during the
Victoria cycle.
 https://review.opendev.org/c/openstack/neutron/+/724676

However, the options of the middleware are not included in the neutron.conf
generated by "tox -e genconfig", which internally executes
oslo-config-generator.
This is because the oslo.middleware.healthcheck endpoint is not added to the
configuration file used by the generator command.
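
A minimal sketch of the expected fix, assuming neutron's generator input
file at etc/oslo-config-generator/neutron.conf (the path and the existing
namespace entries are illustrative):

```
[DEFAULT]
output_file = etc/neutron.conf.sample
namespace = neutron
# endpoint provided by oslo.middleware; listing it makes the generator
# render the [healthcheck] options
namespace = oslo.middleware.healthcheck
```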

** Affects: neutron
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Description changed:

- The healthcheck middleware was added during the Victoria cycle.
-  https://review.opendev.org/c/openstack/neutron/+/724676
+ The healthcheck middleware was added to the API pipeline during the
+ Victoria cycle.
+  https://review.opendev.org/c/openstack/neutron/+/724676
  
  However, the options of the middleware are not included in the
  neutron.conf generated by "tox -e genconfig", which internally executes
  oslo-config-generator.
  This is because the oslo.middleware.healthcheck endpoint is not added
  to the configuration file used by the generator command.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940009

Title:
  The healthcheck middleware options are not included in neutron.conf
  generated by "tox -e genconfig"

Status in neutron:
  In Progress

Bug description:
  The healthcheck middleware was added to the API pipeline during the
  Victoria cycle.
   https://review.opendev.org/c/openstack/neutron/+/724676

  However, the options of the middleware are not included in the
  neutron.conf generated by "tox -e genconfig", which internally executes
  oslo-config-generator.
  This is because the oslo.middleware.healthcheck endpoint is not added
  to the configuration file used by the generator command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940009/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1939944] [NEW] The parameters of the healthcheck middleware are missing from glance-api.conf

2021-08-14 Thread Takashi Kajinami
Public bug reported:

Since https://review.opendev.org/c/openstack/glance/+/148595 was merged,
the healthcheck middleware has been enabled in the default API
pipelines.

However, the parameters of the middleware are missing from the example
glance-api.conf file, and are not rendered if we regenerate the file
using oslo-config-generator.
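
For context, a rough sketch of what the missing section would roughly
look like once rendered (the option names come from oslo.middleware's
healthcheck options; the help text and defaults shown here are
abbreviated and illustrative):

```
[healthcheck]
# Additional backends that can perform health checks (list value)
#backends =

# Show more detailed information as part of the response (boolean value)
#detailed = false

# Check the presence of a file to determine if an application is
# running on a port (string value)
#disable_by_file_path = <None>
```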

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1939944

Title:
  The parameters of the healthcheck middleware are missing from glance-
  api.conf

Status in Glance:
  In Progress

Bug description:
  Since https://review.opendev.org/c/openstack/glance/+/148595 was
  merged, the healthcheck middleware has been enabled in the default API
  pipelines.

  However, the parameters of the middleware are missing from the example
  glance-api.conf file, and are not rendered if we regenerate the file
  using oslo-config-generator.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1939944/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938120] Re: keystone-protection-functional is failing because of missing demo project

2021-07-30 Thread Takashi Kajinami
** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938120

Title:
  keystone-protection-functional is failing because of missing demo
  project

Status in devstack:
  New

Bug description:
  The keystone-protection-functional job is repeatedly failing because
  the demo project is not found.

  ```
  + ./stack.sh:main:1294 :   echo_summary 'Creating initial 
neutron network elements'
  + ./stack.sh:echo_summary:422  :   [[ -t 3 ]]
  + ./stack.sh:echo_summary:428  :   echo -e Creating initial 
neutron network elements
  + ./stack.sh:main:1297 :   type -p 
neutron_plugin_create_initial_networks
  + ./stack.sh:main:1300 :   create_neutron_initial_network
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:164 :   
local project_id
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
grep ' demo '
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
oscwrap project list
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
get_field 1
  ++ functions-common:get_field:726   :   local data field
  ++ functions-common:get_field:727   :   read data
  ++ functions-common:oscwrap:2349:   return 0
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
project_id=
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   
die_if_not_set 166 project_id 'Failure retrieving project_id for demo'
  + functions-common:die_if_not_set:216  :   local exitcode=0
  [Call Trace]
  ./stack.sh:1300:create_neutron_initial_network
  /opt/stack/devstack/lib/neutron_plugins/services/l3:166:die_if_not_set
  /opt/stack/devstack/functions-common:223:die
  [ERROR] /opt/stack/devstack/functions-common:166 Failure retrieving 
project_id for demo
  exit_trap: cleaning up child processes
  Error on exit
  *** FINISHED ***
  ```

  Example can be found here;
   https://zuul.opendev.org/t/openstack/build/90628c08f0f84927a0e547e5c9fc409e

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1938120/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938120] Re: keystone-protection-functional is failing because of missing demo project

2021-07-27 Thread Takashi Kajinami
After looking into this in more detail, I now believe this issue was
introduced by a change[1] in devstack, and that we should fix devstack.

[1]
https://opendev.org/openstack/devstack/commit/9dc2b88eb42a5f98f43bc8ad3dfa3962a4d44d74

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938120

Title:
  keystone-protection-functional is failing because of missing demo
  project

Status in devstack:
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  The keystone-protection-functional job is repeatedly failing because
  the demo project is not found.

  ```
  + ./stack.sh:main:1294 :   echo_summary 'Creating initial 
neutron network elements'
  + ./stack.sh:echo_summary:422  :   [[ -t 3 ]]
  + ./stack.sh:echo_summary:428  :   echo -e Creating initial 
neutron network elements
  + ./stack.sh:main:1297 :   type -p 
neutron_plugin_create_initial_networks
  + ./stack.sh:main:1300 :   create_neutron_initial_network
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:164 :   
local project_id
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
grep ' demo '
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
oscwrap project list
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
get_field 1
  ++ functions-common:get_field:726   :   local data field
  ++ functions-common:get_field:727   :   read data
  ++ functions-common:oscwrap:2349:   return 0
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
project_id=
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   
die_if_not_set 166 project_id 'Failure retrieving project_id for demo'
  + functions-common:die_if_not_set:216  :   local exitcode=0
  [Call Trace]
  ./stack.sh:1300:create_neutron_initial_network
  /opt/stack/devstack/lib/neutron_plugins/services/l3:166:die_if_not_set
  /opt/stack/devstack/functions-common:223:die
  [ERROR] /opt/stack/devstack/functions-common:166 Failure retrieving 
project_id for demo
  exit_trap: cleaning up child processes
  Error on exit
  *** FINISHED ***
  ```

  Example can be found here;
   https://zuul.opendev.org/t/openstack/build/90628c08f0f84927a0e547e5c9fc409e

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1938120/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938120] [NEW] keystone-protection-functional is failing because of missing demo project

2021-07-26 Thread Takashi Kajinami
Public bug reported:

The keystone-protection-functional job is repeatedly failing because the
demo project is not found.

```
+ ./stack.sh:main:1294 :   echo_summary 'Creating initial 
neutron network elements'
+ ./stack.sh:echo_summary:422  :   [[ -t 3 ]]
+ ./stack.sh:echo_summary:428  :   echo -e Creating initial neutron 
network elements
+ ./stack.sh:main:1297 :   type -p 
neutron_plugin_create_initial_networks
+ ./stack.sh:main:1300 :   create_neutron_initial_network
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:164 :   local 
project_id
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   grep 
' demo '
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
oscwrap project list
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
get_field 1
++ functions-common:get_field:726   :   local data field
++ functions-common:get_field:727   :   read data
++ functions-common:oscwrap:2349:   return 0
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
project_id=
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   
die_if_not_set 166 project_id 'Failure retrieving project_id for demo'
+ functions-common:die_if_not_set:216  :   local exitcode=0
[Call Trace]
./stack.sh:1300:create_neutron_initial_network
/opt/stack/devstack/lib/neutron_plugins/services/l3:166:die_if_not_set
/opt/stack/devstack/functions-common:223:die
[ERROR] /opt/stack/devstack/functions-common:166 Failure retrieving project_id 
for demo
exit_trap: cleaning up child processes
Error on exit
*** FINISHED ***
```

Example can be found here;
 https://zuul.opendev.org/t/openstack/build/90628c08f0f84927a0e547e5c9fc409e

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938120

Title:
  keystone-protection-functional is failing because of missing demo
  project

Status in OpenStack Identity (keystone):
  New

Bug description:
  The keystone-protection-functional job is repeatedly failing because
  the demo project is not found.

  ```
  + ./stack.sh:main:1294 :   echo_summary 'Creating initial 
neutron network elements'
  + ./stack.sh:echo_summary:422  :   [[ -t 3 ]]
  + ./stack.sh:echo_summary:428  :   echo -e Creating initial 
neutron network elements
  + ./stack.sh:main:1297 :   type -p 
neutron_plugin_create_initial_networks
  + ./stack.sh:main:1300 :   create_neutron_initial_network
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:164 :   
local project_id
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
grep ' demo '
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
oscwrap project list
  ++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
get_field 1
  ++ functions-common:get_field:726   :   local data field
  ++ functions-common:get_field:727   :   read data
  ++ functions-common:oscwrap:2349:   return 0
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   
project_id=
  + lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   
die_if_not_set 166 project_id 'Failure retrieving project_id for demo'
  + functions-common:die_if_not_set:216  :   local exitcode=0
  [Call Trace]
  ./stack.sh:1300:create_neutron_initial_network
  /opt/stack/devstack/lib/neutron_plugins/services/l3:166:die_if_not_set
  /opt/stack/devstack/functions-common:223:die
  [ERROR] /opt/stack/devstack/functions-common:166 Failure retrieving 
project_id for demo
  exit_trap: cleaning up child processes
  Error on exit
  *** FINISHED ***
  ```

  Example can be found here;
   https://zuul.opendev.org/t/openstack/build/90628c08f0f84927a0e547e5c9fc409e

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1938120/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python 3.2

2021-07-26 Thread Takashi Kajinami
** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: keystone
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: designate
   Importance: Undecided
   Status: New

** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python 3.2

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest
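
  A minimal sketch of one common drop-in replacement (the expected/actual
  names are illustrative, and this is just one idiom among several):

  ~~~
  # deprecated:
  #   self.assertDictContainsSubset(expected, actual)
  # equivalent check: merging the expected subset into the full dict
  # must leave the full dict unchanged
  self.assertEqual(actual, {**actual, **expected})
  ~~~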

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python 3.2

2021-07-26 Thread Takashi Kajinami
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python 3.2

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  New

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] [NEW] assertDictContainsSubset is deprecated since Python 3.2

2021-07-26 Thread Takashi Kajinami
Public bug reported:

unittest.TestCase.assertDictContainsSubset is deprecated since Python
3.2[1] and shows the following warning.

~~~
/usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
  warnings.warn('assertDictContainsSubset is deprecated',
~~~

[1] https://docs.python.org/3/whatsnew/3.2.html#unittest

** Affects: glance
 Importance: Undecided
 Status: In Progress

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python 3.2

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  New

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938045] [NEW] OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade

2021-07-26 Thread Takashi Kajinami
Public bug reported:

The following deprecation warning is continuously observed in unit test
jobs.

/home/zuul/src/opendev.org/openstack/glance/.tox/py39/lib/python3.9/site-
packages/oslo_db/sqlalchemy/enginefacade.py:1366:
OsloDBDeprecationWarning: EngineFacade is deprecated; please use
oslo_db.sqlalchemy.enginefacade

Example can be found here.
https://zuul.opendev.org/t/openstack/build/744e2be61f5f459b9b2bcf7f046cd31e

Usage of EngineFacade is deprecated since oslo.db 1.12.0
https://github.com/openstack/oslo.db/commit/fdbd928b1fdf0334e1740e565ab8206fff54eaa6

However there is one usage left in glance code.
https://github.com/openstack/glance/blob/fa558885503121813bd7d9bacb63754ad5b61676/glance/db/sqlalchemy/api.py#L88
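
A minimal sketch of the migration direction, following the oslo.db
enginefacade documentation (the function and model names are illustrative
and not glance's actual code):

```
from oslo_db.sqlalchemy import enginefacade

# the decorator injects a session as context.session for the duration
# of the call, replacing the old EngineFacade.get_session() pattern
@enginefacade.reader
def image_get(context, image_id):
    return context.session.query(Image).filter_by(id=image_id).one()
```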

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1938045

Title:
  OsloDBDeprecationWarning: EngineFacade is deprecated; please use
  oslo_db.sqlalchemy.enginefacade

Status in Glance:
  New

Bug description:
  The following deprecation warning is continuously observed in unit
  test jobs.

  /home/zuul/src/opendev.org/openstack/glance/.tox/py39/lib/python3.9/site-
  packages/oslo_db/sqlalchemy/enginefacade.py:1366:
  OsloDBDeprecationWarning: EngineFacade is deprecated; please use
  oslo_db.sqlalchemy.enginefacade

  Example can be found here.
  https://zuul.opendev.org/t/openstack/build/744e2be61f5f459b9b2bcf7f046cd31e

  Usage of EngineFacade is deprecated since oslo.db 1.12.0
  
https://github.com/openstack/oslo.db/commit/fdbd928b1fdf0334e1740e565ab8206fff54eaa6

  However there is one usage left in glance code.
  
https://github.com/openstack/glance/blob/fa558885503121813bd7d9bacb63754ad5b61676/glance/db/sqlalchemy/api.py#L88

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1938045/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937901] [NEW] healthcheck middleware should be deployed as app instead of filter

2021-07-24 Thread Takashi Kajinami
Public bug reported:

Since oslo.middleware 3.22.0[1], deploying the healthcheck middleware as
a filter is deprecated; it should be deployed as an app.

[1] https://review.opendev.org/c/openstack/oslo.middleware/+/403734
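
A minimal sketch of the paste-deploy change, following the oslo.middleware
documentation (the backend settings shown are illustrative):

```
# deprecated filter-style deployment:
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory

# app-style deployment:
[app:healthcheck]
paste.app_factory = oslo_middleware:Healthcheck.app_factory
backends = disable_by_file
disable_by_file_path = /etc/glance/healthcheck_disable
```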

** Affects: glance
 Importance: Undecided
 Assignee: Takashi Kajinami (kajinamit)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1937901

Title:
  healthcheck middleware should be deployed as app instead of filter

Status in Glance:
  In Progress

Bug description:
  Since oslo.middleware 3.22.0[1], deploying the healthcheck middleware
  as a filter is deprecated; it should be deployed as an app.

  [1] https://review.opendev.org/c/openstack/oslo.middleware/+/403734

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1937901/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-22 Thread Takashi Kajinami
** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: cinder

** Also affects: tacker
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tacker:
  In Progress
Status in taskflow:
  Fix Released
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in taskflow:
  In Progress
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
The following command would detect what we should replace...

egrep -r -e
'collections\.(Awaitable|Coroutine|AsyncIterable|AsyncIterator|AsyncGenerator|Hashable|Iterable|Iterator|Generator|Reversible|Sized|Container|Callable|Collection|Set|MutableSet|Mapping|MutableMapping|MappingView|KeysView|ItemsView|ValuesView|Sequence|MutableSequence|ByteString)'
--exclude-dir .tox --exclude-dir doc .

** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in taskflow:
  In Progress
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in taskflow:
  In Progress
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-16 Thread Takashi Kajinami
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  In Progress
Status in taskflow:
  In Progress
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' 
instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 
it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

