[Yahoo-eng-team] [Bug 1952395] [NEW] Tempest jobs in the ovn-octavia-provider are broken

2021-11-25 Thread Slawek Kaplonski
Public bug reported:

Failure example:

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c9d/819377/2/check/ovn-octavia-provider-tempest-release/c9db1e6/job-output.txt

2021-11-26 07:20:41.865039 | controller | + ./stack.sh:exit_trap:507   :   local r=1
2021-11-26 07:20:41.868331 | controller | ++ ./stack.sh:exit_trap:508   :   jobs -p
2021-11-26 07:20:41.872000 | controller | + ./stack.sh:exit_trap:508   :   jobs=84673
2021-11-26 07:20:41.875064 | controller | + ./stack.sh:exit_trap:511   :   [[ -n 84673 ]]
2021-11-26 07:20:41.877646 | controller | + ./stack.sh:exit_trap:511   :   [[ -n /opt/stack/logs/devstacklog.txt.2021-11-26-070811 ]]
2021-11-26 07:20:41.880814 | controller | + ./stack.sh:exit_trap:511   :   [[ True == \T\r\u\e ]]
2021-11-26 07:20:41.883918 | controller | + ./stack.sh:exit_trap:512   :   echo 'exit_trap: cleaning up child processes'
2021-11-26 07:20:41.883992 | controller | exit_trap: cleaning up child processes
2021-11-26 07:20:41.887286 | controller | + ./stack.sh:exit_trap:513   :   kill 84673
2021-11-26 07:20:41.890505 | controller | + ./stack.sh:exit_trap:517   :   '[' -f /tmp/tmp.kOZU5nmHMT ']'
2021-11-26 07:20:41.893624 | controller | + ./stack.sh:exit_trap:518   :   rm /tmp/tmp.kOZU5nmHMT
2021-11-26 07:20:41.897844 | controller | + ./stack.sh:exit_trap:522   :   kill_spinner
2021-11-26 07:20:41.901167 | controller | + ./stack.sh:kill_spinner:417   :   '[' '!' -z '' ']'
2021-11-26 07:20:41.904165 | controller | + ./stack.sh:exit_trap:524   :   [[ 1 -ne 0 ]]
2021-11-26 07:20:41.906863 | controller | + ./stack.sh:exit_trap:525   :   echo 'Error on exit'
2021-11-26 07:20:41.906923 | controller | Error on exit
2021-11-26 07:20:41.909908 | controller | + ./stack.sh:exit_trap:527   :   type -p generate-subunit
2021-11-26 07:20:41.912989 | controller | + ./stack.sh:exit_trap:528   :   generate-subunit 1637910489 752 fail
2021-11-26 07:20:42.221225 | controller | + ./stack.sh:exit_trap:530   :   [[ -z /opt/stack/logs ]]
2021-11-26 07:20:42.224703 | controller | + ./stack.sh:exit_trap:533   :   /usr/bin/python3.8 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
2021-11-26 07:20:42.690784 | controller | + ./stack.sh:exit_trap:542   :   exit 1
2021-11-26 07:20:42.690829 | controller | *** FINISHED ***
2021-11-26 07:20:53.478505 | controller | ERROR
2021-11-26 07:20:53.478789 | controller | {


In the ovn-northd logs there is an error like:

Nov 26 07:19:11.721742 ubuntu-focal-inmotion-iad3-0027510773 bash[111405]:  * Creating empty database /opt/stack/data/ovn/ovnsb_db.db
Nov 26 07:19:11.722667 ubuntu-focal-inmotion-iad3-0027510773 bash[111446]: chown: changing ownership of '/opt/stack/data/ovn': Operation not permitted
Nov 26 07:19:11.723666 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.pid': Operation not permitted
Nov 26 07:19:11.723666 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.110711.ctl': Operation not permitted
Nov 26 07:19:11.723666 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.mgmt': Operation not permitted
Nov 26 07:19:11.723927 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovsdb-server.110207.ctl': Operation not permitted
Nov 26 07:19:11.723927 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/db.sock': Operation not permitted
Nov 26 07:19:11.723927 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.snoop': Operation not permitted
Nov 26 07:19:11.723927 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovsdb-server.pid': Operation not permitted
Nov 26 07:19:11.723927 ubuntu-focal-inmotion-iad3-0027510773 bash[111447]: chown: changing ownership of '/usr/local/var/run/openvswitch': Operation not permitted
Nov 26 07:19:11.724747 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: chown: changing ownership of '/opt/stack/logs/devstacklog.txt': Operation not permitted
Nov 26 07:19:11.724747 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: chown: changing ownership of '/opt/stack/logs/ovsdb-server.log': Operation not permitted
Nov 26 07:19:11.724747 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: chown: changing ownership of '/opt/stack/logs/dstat-csv.log': Operation not permitted
Nov 26 07:19:11.724747 ubuntu-focal-inmotion-iad3-002751

[Yahoo-eng-team] [Bug 1952393] [NEW] [master] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd did not start"

2021-11-25 Thread yatin
Public bug reported:

Job failing consistently with the below error since https://review.opendev.org/c/openstack/devstack/+/806858:
2021-11-26 05:58:40.377912 | controller | + functions-common:test_with_retry:2339:   timeout 60 sh -c 'while ! test -e /usr/local/var/run/openvswitch/ovn-northd.pid; do sleep 1; done'
2021-11-26 05:59:40.383253 | controller | + functions-common:test_with_retry:2340:   die 2340 'ovn-northd did not start'
2021-11-26 05:59:40.386420 | controller | + functions-common:die:253   :   local exitcode=0

Nov 26 05:58:40.329669 ubuntu-focal-inmotion-iad3-0027510462 bash[107881]:  * Creating empty database /opt/stack/data/ovn/ovnsb_db.db
Nov 26 05:58:40.330503 ubuntu-focal-inmotion-iad3-0027510462 bash[107922]: chown: changing ownership of '/opt/stack/data/ovn': Operation not permitted
Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.pid': Operation not permitted
Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovsdb-server.106684.ctl': Operation not permitted
Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.mgmt': Operation not permitted
Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.107192.ctl': Operation not permitted


Example logs:
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/job-output.txt
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/controller/logs/screen-ovn-northd.txt

Job Builds: https://zuul.openstack.org/builds?job_name=neutron-tempest-plugin-scenario-ovn


Other OVN jobs which use OVN_BUILD_FROM_SOURCE=True will also be impacted, so the cases affected by https://review.opendev.org/c/openstack/devstack/+/806858 need to be fixed.
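For context, the failing step above is devstack's test_with_retry helper, which polls for the ovn-northd pid file until a timeout. A minimal Python sketch of the same wait pattern (paths and timings are hypothetical; a timer stands in for the daemon writing its pid file):

```python
import os
import tempfile
import threading
import time

def wait_for_pidfile(path, timeout=10.0, interval=0.1):
    """Poll for a pid file until it appears or the deadline passes,
    mirroring devstack's `timeout 60 sh -c 'while ! test -e ...'` loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# Simulate a daemon that writes its pid file half a second after launch.
pid_file = os.path.join(tempfile.mkdtemp(), "ovn-northd.pid")
threading.Timer(0.5, lambda: open(pid_file, "w").close()).start()

print(wait_for_pidfile(pid_file))  # True; a broken startup would time out to False
```

In the failing jobs the pid file never appears because the chown calls during OVN startup fail, so this loop runs its full 60 seconds and devstack dies.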

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure ovn

** Tags added: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952393

Title:
  [master] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd
  did not start"

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952393/+subscriptions



[Yahoo-eng-team] [Bug 1952357] [NEW] Functional tests job in the ovn-octavia-provider is broken

2021-11-25 Thread Slawek Kaplonski
Public bug reported:

Probably caused by https://review.opendev.org/c/openstack/neutron/+/814009.
Failure example: 
https://zuul.opendev.org/t/openstack/build/642360f0bd8b46699316e0063d9becd0

+ lib/databases/postgresql:configure_database_postgresql:92 :   sudo -u root sudo -u postgres -i psql -c 'CREATE ROLE root WITH SUPERUSER LOGIN PASSWORD '\''openstack_citest'\'''
CREATE ROLE
++ /home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176 :   mktemp -d
+ /home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176 :   tmp_dir=/tmp/tmp.5EA0JIeLQG
+ /home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:177 :   trap 'rm -rf /tmp/tmp.5EA0JIeLQG' EXIT
+ /home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:179 :   cat
+ /home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:185 :   /usr/bin/mysql -u root -popenstack_citest
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 'root'@'localhost'

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: functional-tests gate-failure ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952357

Title:
  Functional tests job in the ovn-octavia-provider is broken

Status in neutron:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952357/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952249] [NEW] Created VM name overrides predefined port dns-name

2021-11-25 Thread Maor Blaustein
Public bug reported:

Hi, I think I've encountered a bug in the internal DNS feature.


Generic Description:

The hostname of a port created with the '--dns-name' option is overridden by the name of the server (VM) that is created using the predefined port.

* This is the default behavior, and I haven't seen any option to change
it in the server creation phase.

* It seems to be acknowledged in the last bullets of the internal DNS doc [1],
but IMO it does not make sense to work that way: either it makes this option
useless, or it is not the intended behavior and is a bug.
Maybe I'm missing some point there.

[1] https://docs.openstack.org/neutron/latest/admin/config-dns-int.html


Setup:
==
- OpenStack 16.2.
- OVN.
- Deployed with TripleO (Director).
- 3 controllers, 2 computes.
- Host images RHEL 8.4, guest images RHEL 8.4/cirrOS 0.3.4, all x86_64.  


Steps to reproduce:
===
1) Create resources: network, subnet, etc.

2) Create a port with the '--dns-name' option:
openstack port create ... --dns-name test-hostname-1234 dns_port \
-c dns_name -c dns_assignment -c id

3) Create a VM using the same predefined port (the output shows the changed hostname):
openstack server create ... --nic port-id=${port_id} vm1 \
-c OS-EXT-SRV-ATTR:hostname

4) Check the details of the predefined port with the changed hostname:
openstack port show dns_port \
-c dns_name -c dns_assignment -c id


Result:
===
The predefined hostname from the port creation phase is replaced with the name given to the VM in the VM creation phase.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952249

Title:
  Created VM name overrides predefined port dns-name

Status in neutron:
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952249/+subscriptions




[Yahoo-eng-team] [Bug 1950657] Re: Nova-compute wouldn't retry image download when it gets "Corrupt image download" error

2021-11-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/818503
Committed: 
https://opendev.org/openstack/nova/commit/ce493273b9404530dfa8ecfe3eaa3d6c81a20e39
Submitter: "Zuul (22348)"
Branch: master

commit ce493273b9404530dfa8ecfe3eaa3d6c81a20e39
Author: sdmitriev1 
Date:   Thu Nov 18 22:05:05 2021 -0500

Retry image download if it's corrupted

Adding IOError in list of catching exceptions in order to
fix behavior when nova-compute wouldn't retry image download
when got "Corrupt image download" error from glanceclient
and had num_retries config option set.

Closes-Bug: #1950657
Change-Id: Iae4fd0579f71d3ba6793dbdb037275352d7e57b0


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1950657

Title:
  Nova-compute wouldn't retry image download when it gets "Corrupt image
  download" error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova-compute wouldn't retry an image download when it gets a "Corrupt
  image download" error from glanceclient.

  There is a configuration option num_retries (3 by default) in the
  glance section of the nova config file, so nova-compute is supposed to
  retry the image download if it fails, but the retry doesn't happen for
  the following exception:

  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89]   File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 375, in download
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89] for chunk in image_chunks:
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89]   File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 519, in __iter__
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89] for chunk in self.iterable:
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89]   File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 469, in serious_integrity_iter
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89] (computed, hash_value))
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89] IOError: [Errno 32] Corrupt image download. Hash was 12e58a8b858a560ba89a035c24c3453bb19a294b1cc59088ff3d9f414053c7cdd84b323510dc8c30eb560a813cd670caa6ef9f56e12ae1213f12680aea039f53 expected a37eacb7894f4e76c7511b6f5862b246776e3a2ccfdd195894170866650a63b67353c2a53c1898e4b079e280d43f09f27ced6a057d16cc93018b71ac13c26bd7
  2021-11-02 10:42:34.192 6 ERROR nova.compute.manager [instance: ec0f0736-a5cd-48dc-b2f9-851629604a89]

  
  The retry doesn't happen because the IOError exception is not in the
  retry_excs list:
  https://github.com/openstack/nova/blob/master/nova/image/glance.py#L179

  retry_excs = (glanceclient.exc.ServiceUnavailable,
                glanceclient.exc.InvalidEndpoint,
                glanceclient.exc.CommunicationError)

  so the try-except block doesn't catch it and the download retry never
  happens.
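  The retry mechanism described above can be sketched as follows (a
  simplified, hypothetical stand-in for the nova/glanceclient code, not
  the actual implementation): a download is re-attempted only when the
  raised exception type is listed in retry_excs, which is why adding
  IOError to that tuple makes the "Corrupt image download" case retryable.

```python
# Hypothetical sketch of the retry logic; names mirror the report, not nova itself.
retry_excs = (ConnectionError, IOError)  # with the fix, IOError is included

def download_with_retries(fetch, num_retries=3):
    """Call fetch(), retrying up to num_retries times on exceptions in retry_excs."""
    for attempt in range(num_retries + 1):
        try:
            return fetch()
        except retry_excs:
            if attempt == num_retries:
                raise  # retries exhausted, re-raise the last error

attempts = []
def flaky_fetch():
    attempts.append(1)
    if len(attempts) < 3:  # the first two attempts fail with a "corrupt download"
        raise IOError("Corrupt image download")
    return b"image-bytes"

print(download_with_retries(flaky_fetch))  # b'image-bytes' on the third attempt
```

  Before the fix, an IOError would escape on the first attempt because the
  except clause never matched it.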

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1950657/+subscriptions




[Yahoo-eng-team] [Bug 1928953] Re: Volume uuid of deleted volume is visible in volume list while restoring volume backup in horizon.

2021-11-25 Thread Vishal Manchanda
** Project changed: cinder => horizon

** Changed in: horizon
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1928953

Title:
  Volume uuid of deleted volume is visible in volume list while
  restoring volume backup in horizon.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Issue:
  When I delete a volume and then try to restore a backup of that same
  volume, the selected value in the drop-down shows the UUID of the
  deleted volume, which leads to a problem.

  Analysis:
  When I checked the code, I found that the volume UUID value is picked
  from the backup, but the volume is not present in the cloud anymore.

  The default value is set by this block of code:
  
https://github.com/openstack/horizon/blob/2d2f944e2fe127433f2973ef77ba86ec997cf434/horizon/forms/fields.py#L303

  the template used here for rendering:
  
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/fields/_themable_select.html#L5

  Code block responsible for choices of available volume:
  
https://github.com/openstack/horizon/blob/stable/train/openstack_dashboard/dashboards/project/backups/forms.py#L107

  
  Steps to reproduce:
  1. create a volume, let's say 'vol'
  2. create a backup 'backup' of volume 'vol'
  3. delete the volume 'vol'
  4. now restore the backup 'backup'

  A list of all available volumes is shown, but the UUID of the deleted
  volume 'vol' is also present in the list.

  Actual:
  The UUID of the deleted volume is visible in the list

  Expected:
  The UUID of the deleted volume should not be present

  Proposed Solution: If initial_value is not present in choices then
  take the first value of choices.
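
  The proposed fallback can be sketched like this (a hypothetical helper,
  not Horizon's actual field code): keep the saved initial value only if
  it is still among the available choices, otherwise fall back to the
  first choice.

```python
# Hypothetical sketch of the proposed fix: if the saved initial value is no
# longer among the available choices, fall back to the first choice.
def select_initial(initial_value, choices):
    values = [value for value, _label in choices]
    if initial_value in values:
        return initial_value
    return values[0] if values else None

choices = [("uuid-a", "volume-a"), ("uuid-b", "volume-b")]
print(select_initial("uuid-of-deleted-vol", choices))  # uuid-a (fallback)
print(select_initial("uuid-b", choices))               # uuid-b (kept)
```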

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1928953/+subscriptions




[Yahoo-eng-team] [Bug 1928953] [NEW] Volume uuid of deleted volume is visible in volume list while restoring volume backup in horizon.

2021-11-25 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Issue:
When I delete a volume and then try to restore a backup of that same
volume, the selected value in the drop-down shows the UUID of the
deleted volume, which leads to a problem.

Analysis:
When I checked the code, I found that the volume UUID value is picked
from the backup, but the volume is not present in the cloud anymore.

The default value is set by this block of code:
https://github.com/openstack/horizon/blob/2d2f944e2fe127433f2973ef77ba86ec997cf434/horizon/forms/fields.py#L303

the template used here for rendering:
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/fields/_themable_select.html#L5

Code block responsible for choices of available volume:
https://github.com/openstack/horizon/blob/stable/train/openstack_dashboard/dashboards/project/backups/forms.py#L107


Steps to reproduce:
1. create a volume, let's say 'vol'
2. create a backup 'backup' of volume 'vol'
3. delete the volume 'vol'
4. now restore the backup 'backup'

A list of all available volumes is shown, but the UUID of the deleted
volume 'vol' is also present in the list.

Actual:
The UUID of the deleted volume is visible in the list

Expected:
The UUID of the deleted volume should not be present

Proposed Solution: If initial_value is not present in choices then take
the first value of choices.

** Affects: horizon
 Importance: High
 Status: New


** Tags: backup-service horizon
-- 
Volume uuid of deleted volume is visible in volume list while restoring volume 
backup in horizon.
https://bugs.launchpad.net/bugs/1928953
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).



[Yahoo-eng-team] [Bug 1952023] Re: Neutron functional tests don't properly clean up ovn-northd

2021-11-25 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/819049
Committed: 
https://opendev.org/openstack/neutron/commit/74aa86a976c5bcf42b2779e5c557d7f7f4fdac9b
Submitter: "Zuul (22348)"
Branch: master

commit 74aa86a976c5bcf42b2779e5c557d7f7f4fdac9b
Author: Terry Wilson 
Date:   Tue Nov 23 21:21:27 2021 -0600

Properly clean up ovn-northd in functional tests

ovn-northd.ctl is a socket, so os.path.isfile() returns false and
ovn-northd is not properly killed. Use os.path.exists() instead.

Closes-Bug: #1952023
Change-Id: I00fba2dc4395c0a8cd4631d4c2c71d9c3dc429e9
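
The commit's rationale is easy to demonstrate: a Unix control socket such
as ovn-northd.ctl exists on the filesystem but is not a regular file, so
os.path.isfile() returns False while os.path.exists() returns True. A
small standalone sketch (path is hypothetical; requires a platform with
AF_UNIX support, e.g. Linux):

```python
import os
import socket
import tempfile

# Bind a Unix domain socket, like the ovn-northd.ctl control socket.
path = os.path.join(tempfile.mkdtemp(), "ovn-northd.ctl")
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(path)

print(os.path.isfile(path))  # False: a socket is not a regular file
print(os.path.exists(path))  # True: the path does exist

sock.close()
os.unlink(path)
```

This is why the cleanup code's isfile() check silently skipped the socket
and left ovn-northd running.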


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952023

Title:
  Neutron functional tests don't properly clean up ovn-northd

Status in neutron:
  Fix Released

Bug description:
  ovn-northd does not get stopped when running dsvm-functional tests.

  [vagrant@fedora neutron]$ pgrep ovn-northd
  [vagrant@fedora neutron]$ tox -e dsvm-functional test_agent_change
  dsvm-functional develop-inst-noop: /opt/stack/neutron

  ...

  dsvm-functional: commands succeeded
  congratulations :)

  [vagrant@fedora neutron]$ pgrep ovn-northd
  525468

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952023/+subscriptions

