[Yahoo-eng-team] [Bug 1938127] [NEW] Horizon Error 403 Forbidden after clean install of Wallaby Openstack-Dashboard

2021-07-26 Thread Stephen
Public bug reported:


[Tue Jul 27 02:31:00.785821 2021] [mpm_event:notice] [pid 81880:tid 140655686478144] AH00489: Apache/2.4.37 (Red Hat Enterprise Linux) mod_wsgi/4.6.4 Python/3.6 configured -- resuming normal operations
[Tue Jul 27 02:31:00.785865 2021] [core:notice] [pid 81880:tid 140655686478144] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Jul 27 02:31:42.115483 2021] [authz_core:error] [pid 81891:tid 140655048824576] [client 10.24.1.254:57432] AH01630: client denied by server configuration: /usr/share/openstack-dashboard/openstack_dashboard/wsgi
[Tue Jul 27 02:36:44.314313 2021] [authz_core:error] [pid 81893:tid 140654642444032] [client 10.24.1.254:50747] AH01630: client denied by server configuration: /usr/share/openstack-dashboard/openstack_dashboard/wsgi

This happens when trying to access the http://openstack/dashboard/ URL.
The environment is httpd 2.4 on RHEL 8.4.
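
The AH01630 denials above suggest httpd is not granting access to the dashboard's WSGI entry point. A minimal sketch of the kind of directive that clears such a denial (the config file name, alias, and script path are assumptions inferred from the log, not a confirmed fix):

```
# Hypothetical excerpt, e.g. /etc/httpd/conf.d/openstack-dashboard.conf
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py

<Directory /usr/share/openstack-dashboard/openstack_dashboard>
  # Without an explicit grant, authz_core denies the request (AH01630).
  Require all granted
</Directory>
```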

** Affects: horizon
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1938127


[Yahoo-eng-team] [Bug 1938120] [NEW] keystone-protection-functional is failing because of missing demo project

2021-07-26 Thread Takashi Kajinami
Public bug reported:

The keystone-protection-functional job is repeatedly failing because the
demo project is not found.

```
+ ./stack.sh:main:1294 :   echo_summary 'Creating initial neutron network elements'
+ ./stack.sh:echo_summary:422  :   [[ -t 3 ]]
+ ./stack.sh:echo_summary:428  :   echo -e Creating initial neutron network elements
+ ./stack.sh:main:1297 :   type -p neutron_plugin_create_initial_networks
+ ./stack.sh:main:1300 :   create_neutron_initial_network
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:164 :   local project_id
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   grep ' demo '
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   oscwrap project list
++ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   get_field 1
++ functions-common:get_field:726   :   local data field
++ functions-common:get_field:727   :   read data
++ functions-common:oscwrap:2349:   return 0
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:165 :   project_id=
+ lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   die_if_not_set 166 project_id 'Failure retrieving project_id for demo'
+ functions-common:die_if_not_set:216  :   local exitcode=0
[Call Trace]
./stack.sh:1300:create_neutron_initial_network
/opt/stack/devstack/lib/neutron_plugins/services/l3:166:die_if_not_set
/opt/stack/devstack/functions-common:223:die
[ERROR] /opt/stack/devstack/functions-common:166 Failure retrieving project_id for demo
exit_trap: cleaning up child processes
Error on exit
*** FINISHED ***
```

An example can be found here:
 https://zuul.opendev.org/t/openstack/build/90628c08f0f84927a0e547e5c9fc409e

** Affects: keystone
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1938120


[Yahoo-eng-team] [Bug 1937292] Re: All overcloud VM's powered off on hypervisor when nova_libvirt is restarted

2021-07-26 Thread Brendan Shephard
This is an issue related to the container runtime used by podman. In my case crun caused the issue; runc works fine:
https://paste.opendev.org/show/807670/

For whatever reason I can't post that comment directly here.
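
For anyone hitting the same thing, a minimal sketch of pinning podman to runc (assuming the stock /etc/containers/containers.conf location; a TripleO deployment may manage this setting elsewhere):

```
# /etc/containers/containers.conf (excerpt; file location assumed)
[engine]
# Default to runc instead of crun as the OCI runtime.
runtime = "runc"
```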

** Changed in: tripleo
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1937292

Title:
  All overcloud VM's powered off on hypervisor when nova_libvirt is
  restarted

Status in OpenStack Compute (nova):
  Incomplete
Status in tripleo:
  Invalid

Bug description:
  Description:

  Using TripleO. Noted that all VM's on a Hypervisor are powered off
  during the overcloud deployment. (I only have one Hypervisor sorry, I
  can't tell you if it would happen to more than one hypervisor).

  Seems to happen when the nova_libvirt container is restarted.

  Environment:
  TripleO - Master
  # podman exec -it nova_libvirt rpm -qa | grep nova
  python3-nova-23.1.0-0.20210625160814.1f6c351.el8.noarch
  openstack-nova-compute-23.1.0-0.20210625160814.1f6c351.el8.noarch
  openstack-nova-common-23.1.0-0.20210625160814.1f6c351.el8.noarch
  openstack-nova-migration-23.1.0-0.20210625160814.1f6c351.el8.noarch
  python3-novaclient-17.5.0-0.20210601131008.f431295.el8.noarch

  Reproducer:
  At least for me:
  1. Start a VM
  2. Restart tripleo_nova_libvirt.service:
  systemctl restart tripleo_nova_libvirt.service
  3. All VM's are stopped

  Relevant logs:
  2021-07-22 16:31:05.532 3 DEBUG nova.compute.manager [req-19a38d0b-e019-472b-95c4-03c796040767 d2ab1d5792604ba094af82d7447e88cf c4740b2aba4147adb7f101a2782003c3 - default default] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] No waiting events found dispatching network-vif-plugged-d9b29fef-cd87-41db-ba79-8b8c65b74efb pop_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:319
  2021-07-22 16:31:05.532 3 WARNING nova.compute.manager [req-19a38d0b-e019-472b-95c4-03c796040767 d2ab1d5792604ba094af82d7447e88cf c4740b2aba4147adb7f101a2782003c3 - default default] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Received unexpected event network-vif-plugged-d9b29fef-cd87-41db-ba79-8b8c65b74efb for instance with vm_state active and task_state None.
  2021-07-22 16:31:30.583 3 DEBUG nova.compute.manager [req-7be814ae-0e3d-4631-8a4c-348ead46c213 - - - - -] Triggering sync for uuid b28cc3ae-6442-40cf-9d66-9d4938a567c7 _sync_power_states /usr/lib/python3.6/site-packages/nova/compute/manager.py:9695
  2021-07-22 16:31:30.589 3 DEBUG oslo_concurrency.lockutils [-] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
  2021-07-22 16:31:30.746 3 INFO nova.compute.manager [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
  2021-07-22 16:31:30.930 3 WARNING nova.compute.manager [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
  2021-07-22 16:31:30.931 3 DEBUG nova.compute.api [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Going to try to stop instance force_stop /usr/lib/python3.6/site-packages/nova/compute/api.py:2584
  2021-07-22 16:31:31.135 3 DEBUG oslo_concurrency.lockutils [-] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" released by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.547s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
  2021-07-22 16:31:31.161 3 DEBUG oslo_concurrency.lockutils [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" acquired by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
  2021-07-22 16:31:31.162 3 DEBUG nova.compute.manager [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Checking state _get_power_state /usr/lib/python3.6/site-packages/nova/compute/manager.py:1561
  2021-07-22 16:31:31.165 3 DEBUG nova.compute.manager [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 4, current VM power_state: 4 do_stop_instance /usr/lib/p

[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-07-26 Thread Takashi Kajinami
** Changed in: glance
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Changed in: keystone
 Assignee: (unassigned) => Takashi Kajinami (kajinamit)

** Also affects: designate
   Importance: Undecided
   Status: New

** No longer affects: designate

https://bugs.launchpad.net/bugs/1938103


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2021-07-26 Thread Takashi Kajinami
** Also affects: keystone
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1938103


[Yahoo-eng-team] [Bug 1938103] [NEW] assertDictContainsSubset is deprecated since Python3.2

2021-07-26 Thread Takashi Kajinami
Public bug reported:

unittest.TestCase.assertDictContainsSubset is deprecated since Python
3.2[1] and shows the following warning.

~~~
/usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: assertDictContainsSubset is deprecated
  warnings.warn('assertDictContainsSubset is deprecated',
~~~

[1] https://docs.python.org/3/whatsnew/3.2.html#unittest
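
A drop-in replacement keeps the subset assertion without the deprecated helper; a minimal sketch (the test class and dicts are illustrative, not code from any affected project):

```
import unittest

class ExampleTest(unittest.TestCase):
    def test_subset(self):
        expected = {'a': 1}
        actual = {'a': 1, 'b': 2}
        # dict items views are set-like, so <= asserts that every
        # (key, value) pair of `expected` is also present in `actual`.
        self.assertLessEqual(expected.items(), actual.items())

if __name__ == '__main__':
    unittest.main()
```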

** Affects: glance
 Importance: Undecided
 Status: In Progress

** Affects: keystone
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1938103


[Yahoo-eng-team] [Bug 1938093] [NEW] Inconsistent return code for "Feature not implemented' API

2021-07-26 Thread Ghanshyam Mann
Public bug reported:

There is an inconsistency in the return codes the nova API uses for "Feature not implemented". The current return codes are 400, 409, and 403.

- 400 case. Example: Multiattach Swap Volume Not Supported:
https://github.com/openstack/nova/blob/0c64f4c3eae8e2654ec11f60682c0fa5eda30c1a/nova/exception.py#L295
https://github.com/openstack/nova/blob/788035add9b32fa841389d906a0e307c231456ba/nova/api/openstack/compute/volumes.py#L429

- 403 case: Cyborg integration.
https://github.com/openstack/nova/blob/0c64f4c3eae8e2654ec11f60682c0fa5eda30c1a/nova/exception.py#L158
https://github.com/openstack/nova/blob/0e7cd9d1a95a30455e3c91916ece590454235e0e/nova/api/openstack/compute/suspend_server.py#L47

- 409 case. Examples: Operation Not Supported For SEV, Operation Not Supported For VTPM:
https://github.com/openstack/nova/blob/0c64f4c3eae8e2654ec11f60682c0fa5eda30c1a/nova/exception.py#L528-L537

At the Xena PTG we agreed to fix this by returning 400 in all cases and to backport the fix (L446 of https://etherpad.opendev.org/p/nova-xena-ptg).
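
A sketch of that direction, standardizing on a common HTTP 400 base exception (class and attribute names here are illustrative, modeled loosely on nova's exception conventions; the actual patch may differ):

```
# Illustrative only; nova's real hierarchy lives in nova/exception.py.
class Invalid(Exception):
    """Base class whose `code` becomes the HTTP status of the API fault."""
    code = 400
    msg_fmt = "Bad Request"

class FeatureNotSupported(Invalid):
    """All 'feature not implemented' errors inherit the 400 mapping."""
    msg_fmt = "The requested feature is not supported: %(reason)s"
```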

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: Triaged


** Tags: api

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

** Tags added: api

https://bugs.launchpad.net/bugs/1938093


[Yahoo-eng-team] [Bug 1938051] [NEW] Verify operation in glance

2021-07-26 Thread amin shahidi
Public bug reported:

Hi,
I'm following this documentation: https://docs.openstack.org/glance/stein/install/verify.html
When I run the command:

openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

it gives me this:

HTTP 500 Internal Server Error: The server has either erred or is
incapable of performing the requested operation.


---
Release:  on 2018-02-13 11:52:51
SHA: 149ea050cc58f39eaf9b4660bb8f0271b99d03da
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/verify.rst
URL: https://docs.openstack.org/glance/stein/install/verify.html

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: documentation

** Description changed:

+ Hii,
+ I'm using https://docs.openstack.org/glance/stein/install/verify.html this documentation.
+ When i run the command:
  
- This bug tracker is for errors with the documentation, use the following
- as a template and remove or add fields as you see fit. Convert [ ] into
- [x] to check boxes:
+ openstack image create "cirros" \
+   --file cirros-0.4.0-x86_64-disk.img \
+   --disk-format qcow2 --container-format bare \
+   --public
  
- - [ ] This doc is inaccurate in this way: __
- - [ ] This is a doc addition request.
- - [ ] I have a fix to the document that I can paste below including example: input and output.
+ it gives me this:
  
- If you have a troubleshooting or support issue, use the following
- resources:
+ HTTP 500 Internal Server Error: The server has either erred or is
+ incapable of performing the requested operation.
  
-  - Ask OpenStack: http://ask.openstack.org
-  - The mailing list: http://lists.openstack.org
-  - IRC: 'openstack' channel on Freenode
  
  ---
  Release:  on 2018-02-13 11:52:51
  SHA: 149ea050cc58f39eaf9b4660bb8f0271b99d03da
  Source: https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/verify.rst
  URL: https://docs.openstack.org/glance/stein/install/verify.html

https://bugs.launchpad.net/bugs/1938051


[Yahoo-eng-team] [Bug 1853376] Re: fix debian packaging warnings/errors

2021-07-26 Thread Paride Legovini
Lintian is now rather happy with the package (both source and binary),
with just one warning remaining:

W: cloud-init: command-with-path-in-maintainer-script postinst:296 /usr/sbin/grub-install

Explanation: https://lintian.debian.org/tags/command-with-path-in-maintainer-script

I'll submit a PR for this, but given that there are no errors anymore I
think we can mark this as Fix Released.

** Changed in: cloud-init
   Status: Triaged => Fix Released

https://bugs.launchpad.net/bugs/1853376

Title:
  fix debian packaging warnings/errors

Status in cloud-init:
  Fix Released

Bug description:
  E: cloud-init source: untranslatable-debconf-templates cloud-init.templates: 6
  W: cloud-init source: missing-file-from-potfiles-in grub.templates
  W: cloud-init source: build-depends-on-obsolete-package build-depends: dh-systemd => use debhelper (>= 9.20160709)
  W: cloud-init source: timewarp-standards-version (2011-12-16 < 2014-09-17)
  W: cloud-init source: ancient-standards-version 3.9.6 (released 2014-09-17) (current is 4.4.1)
  W: cloud-init source: binary-nmu-debian-revision-in-source 19.3-244-gbee7e918-1~bddeb~20.04.1
  W: cloud-init: binary-without-manpage usr/bin/cloud-id
  W: cloud-init: binary-without-manpage usr/bin/cloud-init
  W: cloud-init: binary-without-manpage usr/bin/cloud-init-per
  W: cloud-init: command-with-path-in-maintainer-script postinst:141 /usr/sbin/grub-install
  W: cloud-init: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/cloud-config.service cloud-init.target
  W: cloud-init: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/cloud-final.service cloud-init.target
  W: cloud-init: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/cloud-init-local.service cloud-init.target
  W: cloud-init: systemd-service-file-refers-to-unusual-wantedby-target lib/systemd/system/cloud-init.service cloud-init.target
  W: cloud-init: systemd-service-file-shutdown-problems lib/systemd/system/cloud-init.service
  N: 1 tag overridden (1 error)



[Yahoo-eng-team] [Bug 1831632] Re: "Error: retrieving gpg key timed out." during integration test setup

2021-07-26 Thread Paride Legovini
This was merged long ago.

** Changed in: cloud-init
 Assignee: Paride Legovini (paride) => (unassigned)

** Changed in: cloud-init
   Status: Triaged => Fix Released

https://bugs.launchpad.net/bugs/1831632

Title:
  "Error: retrieving gpg key timed out." during integration test setup

Status in cloud-init:
  Fix Released

Bug description:
  Looking at the code, I believe the error is produced by the call to
  `add-apt-repository` in download() in daily_deb.sh (from server-test-scripts).

  This could probably be improved by retrying the deb creation step a
  couple of times, as this may have been caused by a temporary network
  glitch.

  (Full console text attached, as jenkins.u.c isn't publicly-accessible
  ATM.)



[Yahoo-eng-team] [Bug 1938023] Re: Cannot delete vlan type network via network generic switch (ngs)

2021-07-26 Thread Rodolfo Alonso
Hello:

The provider information is added to the network dictionary in [1]. If
the number of segments in a network is 1, the provider information will
be set in the network dictionary. If the number of segments is >1, the
provider information will be written in network['segments'] = [...].

You are probably hitting this issue.

I'll add "networking_generic_switch" to this bug to make this project
aware of this problem.

Regards.

[1]https://github.com/openstack/neutron/blob/84ba0a9aebcb22ce8ebb8131bb114dbb035c0d50/neutron/plugins/ml2/managers.py#L167-L184
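
A mechanism driver that needs the type regardless of the segment count could therefore use a defensive lookup along these lines (a sketch only; the function name is illustrative and this is not the actual networking-generic-switch patch):

```
# `network` is the dict a mechanism driver sees in delete_network_postcommit.
def get_network_type(network):
    if 'provider:network_type' in network:
        return network['provider:network_type']
    # Multi-segment networks carry the provider details per segment instead.
    segments = network.get('segments') or []
    return segments[0].get('provider:network_type') if segments else None
```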

** Also affects: networking-generic-switch
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1938023



[Yahoo-eng-team] [Bug 1938045] [NEW] OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade

2021-07-26 Thread Takashi Kajinami
Public bug reported:

The following deprecation warning is continuously observed in unit test jobs.

/home/zuul/src/opendev.org/openstack/glance/.tox/py39/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py:1366: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade

An example can be found here:
https://zuul.opendev.org/t/openstack/build/744e2be61f5f459b9b2bcf7f046cd31e

Usage of EngineFacade is deprecated since oslo.db 1.12.0
https://github.com/openstack/oslo.db/commit/fdbd928b1fdf0334e1740e565ab8206fff54eaa6

However, one usage remains in the glance code:
https://github.com/openstack/glance/blob/fa558885503121813bd7d9bacb63754ad5b61676/glance/db/sqlalchemy/api.py#L88
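
For reference, a minimal sketch of the modern pattern the warning points to (illustrative; the connection URL and query are placeholders, `context` is expected to be an oslo.context-style request context, and this is not the actual glance patch):

```
import sqlalchemy as sa
from oslo_db.sqlalchemy import enginefacade

# One transaction context for the application, configured once at startup.
context_manager = enginefacade.transaction_context()
context_manager.configure(connection='sqlite://')  # placeholder URL

@context_manager.reader
def image_count(context):
    # enginefacade injects `context.session` for the duration of the call.
    return context.session.execute(sa.text('SELECT COUNT(*) FROM images')).scalar()
```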

** Affects: glance
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1938045


[Yahoo-eng-team] [Bug 1938030] [NEW] neutron deployed with httpd does not work with ovn mech driver

2021-07-26 Thread Rabi Mishra
*** This bug is a duplicate of bug 1912359 ***
https://bugs.launchpad.net/bugs/1912359

Public bug reported:

Deploying neutron with httpd with ovn driver fails with the 500 with the
below traceback[1] when creating networks.

'openstack network agent list'  does not list any controller agents or
metadata agents.

Looks like ovn driver pre_fork_initialize/post_fork_initialize are not
triggered that initializes ovn db connections.

neutron.conf
-

auth_strategy=keystone
core_plugin=ml2
host=oc0-controller-0.mydomain.tld
dns_domain=openstacklocal
dhcp_agent_notification=True
allow_overlapping_ips=True
global_physnet_mtu=1500
vlan_transparent=False
service_plugins=qos,ovn-router,trunk,segments,port_forwarding,log
l3_ha=False
max_l3_agents_per_router=3
api_workers=2
rpc_workers=1
router_scheduler_driver=neutron.scheduler.l3_agent_scheduler.ChanceScheduler
router_distributed=False
enable_dvr=False
allow_automatic_l3agent_failover=True

ml2_conf.ini

[ml2]
type_drivers=geneve,vxlan,vlan,flat
tenant_network_types=geneve,vlan
mechanism_drivers=ovn
path_mtu=0
extension_drivers=qos,port_security,dns
overlay_ip_version=4

[ml2_type_geneve]
max_header_size=38
vni_ranges=1:65536

[ml2_type_vxlan]
vxlan_group=224.0.0.1
vni_ranges=1:65536

[ml2_type_vlan]
network_vlan_ranges=datacentre:1:1000

[ml2_type_flat]
flat_networks=datacentre

[ovn]
ovn_nb_connection=tcp:172.16.13.9:6641
ovn_sb_connection=tcp:172.16.13.9:6642
ovsdb_connection_timeout=180
neutron_sync_mode=log
ovn_metadata_enabled=True
additional_worker_classes_with_ovn_idl=[MaintenanceWorker, RpcWorker]
enable_distributed_floating_ip=True
dns_servers=
ovn_emit_need_to_frag=False


[1]

2021-07-26 10:52:26.530 19 DEBUG neutron.api.v2.base [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Request body: {'network': {'name': 'test_net', 'admin_state_up': True}} prepare_request_body /usr/lib/python3.6/site-packages/neutron/api/v2/base.py:729
2021-07-26 10:52:26.532 19 INFO neutron.quota [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Loaded quota_driver: .
2021-07-26 10:52:26.560 19 DEBUG neutron.pecan_wsgi.hooks.quota_enforcement [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Made reservation on behalf of 20ad786fb60d459a9ac43bea8623d8b3 for: {'network': 1} before /usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/quota_enforcement.py:55
2021-07-26 10:52:26.615 19 DEBUG neutron.policy [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Loaded default policies from ['neutron'] under neutron.policies entry points register_rules /usr/lib/python3.6/site-packages/neutron/policy.py:75
2021-07-26 10:52:26.671 19 DEBUG neutron_lib.callbacks.manager [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks ['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler--9223372036846591125'] for network, before_create _notify_loop /usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
2021-07-26 10:52:26.732 19 DEBUG neutron.plugins.ml2.drivers.helpers [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] geneve segment allocate from pool success with {'geneve_vni': 32248}  allocate_partially_specified_segment /usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/helpers.py:155
2021-07-26 10:52:26.747 19 DEBUG neutron_lib.callbacks.manager [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks ['neutron.services.segments.db._add_segment_host_mapping_for_segment--9223363271446231241', 'neutron.plugins.ml2.plugin.Ml2Plugin._handle_segment_change--9223372036848473802'] for segment, precommit_create _notify_loop /usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
2021-07-26 10:52:26.747 19 INFO neutron.db.segments_db [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Added segment 71185329-c337-40d3-b4b6-42797d1d76d7 of type geneve for network 8313f893-f5ca-4dfd-8a3a-b87a305b10ef
2021-07-26 10:52:26.813 19 DEBUG neutron_lib.callbacks.manager [req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks ['neutron.services.qos.qos_plugin.QoSPlugin._validate_create_network_callback--9223372036854650062', 'neutron.services.auto_allocate.db._ensure_external_network_default_value_callback-8765406609216'] for network, precommit_create _noti

[Yahoo-eng-team] [Bug 1938023] [NEW] Cannot delete vlan type network via network generic switch (ngs)

2021-07-26 Thread Bartosz Bezak
Public bug reported:

User cannot delete vlan type network via network generic switch:

how to reproduce:
1. Create network - works fine - NGS is able to create vlans on the switch:
openstack network create test-network

2. Delete network - does not fail via the CLI (it returns too quickly), but the server-side postcommit fails - error log from neutron-server pasted below:
openstack network delete test-network

This is neutron stable/victoria - it seems that something changed in neutron between the two versions below (the same NGS version):

working neutron version:
$ pip list | grep -E "neutron|generic-switch"
networking-generic-switch 4.0.1.dev3
neutron   17.1.2.dev36
neutron-dynamic-routing   17.0.1.dev3
neutron-fwaas 16.0.0
neutron-lib   2.6.1
neutron-vpnaas17.0.1.dev4
python-neutronclient  7.2.1


not working neutron version:
$ pip list | grep -E "neutron|generic-switch"
networking-generic-switch 4.0.1.dev3
neutron   17.2.1.dev4
neutron-dynamic-routing   17.0.1.dev3
neutron-fwaas 16.0.0
neutron-lib   2.6.1
neutron-vpnaas17.0.1.dev4
python-neutronclient  7.2.1


error:
2021-07-26 09:23:05.274 25 DEBUG neutron.db.ovn_revision_numbers_db [req-80f2ecdb-2321-4d36-8f9c-886e9ff03967 d01eab5bbeb54381aee11cc57b44b45b 2eb2193c72574662945d7a0a60212413 - default default] delete_revision(b23a36b9-1906-4494-bb56-b3a7525c39bb) delete_revision /var/lib/kolla/venv/lib/python3.6/site-packages/neutron/db/ovn_revision_numbers_db.py:118
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers [req-80f2ecdb-2321-4d36-8f9c-886e9ff03967 d01eab5bbeb54381aee11cc57b44b45b 2eb2193c72574662945d7a0a60212413 - default default] Mechanism driver 'genericswitch' failed in delete_network_postcommit: KeyError: 'provider:network_type'
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers   File "/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py", line 477, in _call_on_drivers
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers   File "/var/lib/kolla/venv/lib/python3.6/site-packages/networking_generic_switch/generic_switch_mech.py", line 164, in delete_network_postcommit
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers     provider_type = network['provider:network_type']
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers KeyError: 'provider:network_type'
2021-07-26 09:23:05.293 25 ERROR neutron.plugins.ml2.managers
2021-07-26 09:23:05.294 25 ERROR neutron.plugins.ml2.plugin [req-80f2ecdb-2321-4d36-8f9c-886e9ff03967 d01eab5bbeb54381aee11cc57b44b45b 2eb2193c72574662945d7a0a60212413 - default default] mechanism_manager.delete_network_postcommit failed: neutron.plugins.ml2.common.exceptions.MechanismDriverError

** Affects: neutron
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1938023

[Yahoo-eng-team] [Bug 1937292] Re: All overcloud VM's powered off on hypervisor when nova_libvirt is restarted

2021-07-26 Thread Lee Yarwood
Marking this as incomplete for Nova and adding TripleO.

The n-cpu logs just show the compute service reacting to the instances
being stopped *already* on the host:

2021-07-22 16:31:30.930 3 WARNING nova.compute.manager [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4

1 == RUNNING
4 == SHUTDOWN

https://github.com/openstack/nova/blob/b241663b8929b638a1795c5cc2f859b103a1d468/nova/objects/fields.py#L1026-L1045



** Changed in: nova
   Status: New => Incomplete

** Also affects: tripleo
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1937292


[Yahoo-eng-team] [Bug 1938021] [NEW] oslo.messaging._drivers.impl_fake.send failure during nova functional tests

2021-07-26 Thread Lee Yarwood
Public bug reported:

https://a8ba7f0ac14669316775-62d3a5548ea094caef4a9963ba6c55d1.ssl.cf1.rackcdn.com/798145/4/gate/nova-tox-functional-centos8-py36/1ee0272/testr_results.html

2021-07-25 02:45:22,896 ERROR [nova.api.openstack.wsgi] Unexpected exception in API method
Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_fake.py", line 207, in _send
    reply, failure = reply_q.get(timeout=timeout)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/eventlet/queue.py", line 322, in get
    return waiter.wait()
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/eventlet/queue.py", line 141, in wait
    return get_hub().switch()
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/nova/nova/api/openstack/wsgi.py", line 658, in wrapped
    return f(*args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/api/openstack/compute/servers.py", line 1070, in delete
    self._delete(req.environ['nova.context'], req, id)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/api/openstack/compute/servers.py", line 883, in _delete
    self.compute_api.delete(context, instance)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 226, in inner
    return function(self, context, instance, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 153, in inner
    return f(self, context, instance, *args, **kw)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 2541, in delete
    self._delete_instance(context, instance)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 2533, in _delete_instance
    task_state=task_states.DELETING)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 2311, in _delete
    self._confirm_resize_on_deleting(context, instance)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/compute/api.py", line 2405, in _confirm_resize_on_deleting
    context, instance, migration, do_cast=False)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/conductor/api.py", line 182, in confirm_snapshot_based_resize
    ctxt, instance, migration, do_cast=do_cast)
  File "/home/zuul/src/opendev.org/openstack/nova/nova/conductor/rpcapi.py", line 468, in confirm_snapshot_based_resize
    return cctxt.call(ctxt, 'confirm_snapshot_based_resize', **kw)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 179, in call
    transport_options=self.transport_options)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
    transport_options=transport_options)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_fake.py", line 223, in send
    transport_options)
  File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py36/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_fake.py", line 214, in _send
    'No reply on topic %s' % target.topic)
oslo_messaging.exceptions.MessagingTimeout: No reply on topic conductor
2021-07-25 02:45:22,898 INFO [nova.api.openstack.wsgi] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.


** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.messaging
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1938021