[Yahoo-eng-team] [Bug 1978938] [NEW] [9-stream][master][yoga] ovs/ovn fips jobs fail randomly with DNS issue

2022-06-16 Thread yatin
Public bug reported:

These FIPS jobs running on centos-9-stream fail randomly at different points 
due to a DNS issue.
In these jobs, after configuring FIPS, the node gets rebooted. The unbound service 
may take some time to become ready after the reboot, and until then any DNS 
resolution fails.
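
A possible mitigation would be to have the job wait until name resolution
recovers after the reboot before it continues; a minimal sketch in shell (the
probed hostname, the retry budget and the restart fallback are assumptions, not
something the jobs do today):

  # Sketch: wait up to ~2 minutes for DNS to come back after the FIPS reboot
  for i in $(seq 1 60); do
      getent hosts trunk.rdoproject.org >/dev/null 2>&1 && break
      sleep 2
  done
  # the local resolver on these nodes is unbound; nudge it if it is still down
  sudo systemctl is-active --quiet unbound || sudo systemctl restart unbound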

Failures look like the following:-
2022-06-16 02:36:58.829027 | controller | + ./stack.sh:_install_rdo:314 
 :   sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo 
http://trunk.rdoproject.org/centos9-master/delorean-deps.repo
2022-06-16 02:36:58.940595 | controller |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
2022-06-16 02:36:58.940689 | controller |                                  Dload  Upload   Total   Spent    Left  Speed
2022-06-16 02:36:58.941657 | controller |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (6) Could not resolve host: trunk.rdoproject.org
2022-06-16 02:36:58.946989 | controller | + ./stack.sh:_install_rdo:316 
 :   sudo dnf -y update
2022-06-16 02:36:59.649717 | controller | CentOS Stream 9 - BaseOS  
  0.0  B/s |   0  B 00:00
2022-06-16 02:36:59.650078 | controller | Errors during downloading metadata 
for repository 'baseos':
2022-06-16 02:36:59.650117 | controller |   - Curl error (6): Couldn't resolve 
host name for 
https://mirror-int.ord.rax.opendev.org/centos-stream/9-stream/BaseOS/x86_64/os/repodata/repomd.xml
 [Could not resolve host: mirror-int.ord.rax.opendev.org]

2022-06-16 02:40:30.411317 | controller | + 
functions-common:sudo_with_proxies:2384  :   sudo http_proxy= https_proxy= 
no_proxy= dnf install -y rabbitmq-server
2022-06-16 02:40:31.225502 | controller | Last metadata expiration check: 
0:03:30 ago on Thu 16 Jun 2022 02:37:01 AM UTC.
2022-06-16 02:40:31.278762 | controller | No match for argument: rabbitmq-server
2022-06-16 02:40:31.286118 | controller | Error: Unable to find a match: 
rabbitmq-server

Job builds:-
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-tempest-ovs-release-fips&project=openstack/neutron
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-fips&project=openstack/neutron

** Affects: neutron
 Importance: High
 Assignee: yatin (yatinkarel)
 Status: In Progress

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978938

Title:
  [9-stream][master][yoga] ovs/ovn fips jobs fail randomly with DNS
  issue

Status in neutron:
  In Progress

Bug description:
  These FIPS jobs running on centos-9-stream fail randomly at different points 
due to a DNS issue.
  In these jobs, after configuring FIPS, the node gets rebooted. The unbound 
service may take some time to become ready after the reboot, and until then any 
DNS resolution fails.

  Failures look like the following:-
  2022-06-16 02:36:58.829027 | controller | + ./stack.sh:_install_rdo:314   
   :   sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo 
http://trunk.rdoproject.org/centos9-master/delorean-deps.repo
  2022-06-16 02:36:58.940595 | controller |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
  2022-06-16 02:36:58.940689 | controller |                                  Dload  Upload   Total   Spent    Left  Speed
  2022-06-16 02:36:58.941657 | controller |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  curl: (6) Could not resolve host: trunk.rdoproject.org
  2022-06-16 02:36:58.946989 | controller | + ./stack.sh:_install_rdo:316   
   :   sudo dnf -y update
  2022-06-16 02:36:59.649717 | controller | CentOS Stream 9 - BaseOS
0.0  B/s |   0  B 00:00
  2022-06-16 02:36:59.650078 | controller | Errors during downloading metadata 
for repository 'baseos':
  2022-06-16 02:36:59.650117 | controller |   - Curl error (6): Couldn't 
resolve host name for 
https://mirror-int.ord.rax.opendev.org/centos-stream/9-stream/BaseOS/x86_64/os/repodata/repomd.xml
 [Could not resolve host: mirror-int.ord.rax.opendev.org]

  2022-06-16 02:40:30.411317 | controller | + 
functions-common:sudo_with_proxies:2384  :   sudo http_proxy= https_proxy= 
no_proxy= dnf install -y rabbitmq-server
  2022-06-16 02:40:31.225502 | controller | Last metadata expiration check: 
0:03:30 ago on Thu 16 Jun 2022 02:37:01 AM UTC.
  2022-06-16 02:40:31.278762 | controller | No match for argument: 
rabbitmq-server
  2022-06-16 02:40:31.286118 | controller | Error: Unable to find a match: 
rabbitmq-server

  Job builds:-
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-tempest-ovs-release-fips&project=openstack/neutron
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-fips&project=openstack/neutron

[Yahoo-eng-team] [Bug 1978869] Re: bindep: SUSE & dpkg-based platforms require some fixes

2022-06-16 Thread Erno Kuvaja
** Also affects: glance/zed
   Importance: Undecided
 Assignee: Cyril Roelandt (cyril-roelandt)
   Status: In Progress

** Also affects: glance/xena
   Importance: Undecided
   Status: New

** Also affects: glance/yoga
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1978869

Title:
  bindep: SUSE & dpkg-based platforms require some fixes

Status in Glance:
  In Progress
Status in Glance xena series:
  New
Status in Glance yoga series:
  New
Status in Glance zed series:
  In Progress

Bug description:
  tl;dr:

  - On OpenSUSE Tumbleweed, pg_config is provided by postgresql-server-devel
  - On OpenSUSE Tumbleweed, qemu-img is provided by qemu-tools (not qemu-img)
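
  In practice that means installing the SUSE package names above instead of the
  ones bindep currently suggests; a minimal sketch following the reporter's
  zypper session further below (package names come from this report):

    sudo zypper install postgresql-server-devel qemu-tools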

  Details:

  On Tumbleweed, I ran the following commands:

  $ tox -ebindep
  ...
  Missing packages:
  gcc libffi48-devel libmysqlclient-devel mariadb postgresql 
postgresql-devel postgresql-server python3-devel qemu-img
  ...

  $ sudo zypper install gcc libffi48-devel libmysqlclient-devel mariadb
  postgresql postgresql-devel postgresql-server python3-devel qemu-img

  $ tox -epy3
  ...
  Collecting psycopg2>=2.8.4

Using cached psycopg2-2.9.3.tar.gz (380 kB) 

Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error' 
  
error: subprocess-exited-with-error 


× python 
setup.py egg_info did not run successfully.
│ exit code: 1 
╰─> [25 lines of output]
running egg_info
creating /tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/PKG-INFO
writing dependency_links to 
/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/dependency_links.txt
writing top-level names to 
/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/top_level.txt
writing manifest file 
'/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/SOURCES.txt'

/home/vagrant/glance/.tox/py3/lib/python3.8/site-packages/setuptools/config/setupcfg.py:458: DeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
 
Error: pg_config executable not found.
  ...

  
  $ sudo zypper install postgresql-server-devel

  # Now pg_config is available

  
  $ which qemu-img
  which: no qemu-img in 
(/home/vagrant/glance/.tox/py3/bin:/home/vagrant/bin:/usr/local/bin:/usr/bin:/bin)

  # Installing qemu-tools made qemu-img available.

  
  I still could not run "tox -epy3" successfully because of issues with stestr 
on this OS, but it's pretty clear that bindep.txt should be fixed for OpenSUSE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1978869/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978958] [NEW] Attempting to rescue volume-based instance doesn't return error but leaves VM in error state

2022-06-16 Thread Jonathan Heathcote
Public bug reported:

Description
===

When attempting to rescue a volume-based instance using an image without
the hw_rescue_device and hw_rescue_bus properties set (i.e. one which
doesn't support the stable rescue mode), I would expect the API call to
fail and leave the instance in its existing state. Instead, Nova appears to go
ahead with the rescue, which fails (since non-stable rescue of volume-backed
instances isn't supported), leaving the instance unrecoverable and broken.
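
For contrast, the stable rescue path is keyed off exactly those two image
properties; a hedged example of setting them on a rescue image (the property
names come from this report, the disk/virtio values and the image name are
illustrative assumptions):

 $ openstack image set \
     --property hw_rescue_device=disk \
     --property hw_rescue_bus=virtio \
     <rescue-image>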


Steps to reproduce
==

1. Create an instance booting from a volume (in my case the instance is
booting from a volume in Ceph)

2. Attempt to trigger a rescue using an image *without* the
hw_rescue_device and hw_rescue_bus properties set using:

 $ openstack --os-compute-api-version 2.87 server rescue --image <image> <server>


Expected result
===

I would have expected step 2 to fail with an error along the lines of
"Cannot rescue a volume-backed instance", leaving the instance
unchanged.

My crude skimming of the Nova source suggests that this check is attempted
here:
https://github.com/openstack/nova/blob/f766db261634c8f95f874ba132159f148de9e8bf/nova/compute/api.py#L4514-L4517.


Actual result
=

The rescue command returns normally and the instance enters the rescue
state very briefly before ending up in the Error state with the message:

Instance 05043c69-3533-4396-a610-98e564fa3ead cannot be rescued:
Driver Error: internal error: qemu unexpectedly closed the monitor:
2022-06-16T11:53:51.583795Z qemu-system-x86_64: -blockdev
{"driver":"rbd","pool":"ephemeral-vms","image":"05043c69-3533-43

Looking at the nova-compute logs on the hypervisor shows the following:


2022-06-16 11:53:40.765 32936 INFO nova.virt.libvirt.driver 
[req-bd8be6a4-2195-498e-9209-caed5c525524 
ce20f804a02fc250ec44394149d69a682aed245fd50194cd917fd238c2dc52c1 
b259e9c9e29949f287d63e3d05ae9a8a - default default] [instance: 
05043c69-3533-4396-a610-98e564fa3ead] Attempting rescue
2022-06-16 11:53:40.768 32936 INFO nova.virt.libvirt.driver 
[req-bd8be6a4-2195-498e-9209-caed5c525524 
ce20f804a02fc250ec44394149d69a682aed245fd50194cd917fd238c2dc52c1 
b259e9c9e29949f287d63e3d05ae9a8a - default default] [instance: 
05043c69-3533-4396-a610-98e564fa3ead] Creating image
2022-06-16 11:53:40.943 32936 WARNING nova.compute.manager 
[req-a39b605e-e2db-45ce-8f7f-c92a192d9fbe 023555cbd3a2484f89984da17ae9bbfb 
0c6d3af6910e429894c203af88fc96c4 - default default] [instance: 
05043c69-3533-4396-a610-98e564fa3ead] Received unexpected event 
network-vif-unplugged-464bd8c6-2b41-4648-abe8-d900c88dd996 for instance with 
vm_state active a>
2022-06-16 11:53:50.665 32936 INFO nova.virt.libvirt.driver [-] [instance: 
05043c69-3533-4396-a610-98e564fa3ead] Instance destroyed successfully.
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest 
[req-bd8be6a4-2195-498e-9209-caed5c525524 
ce20f804a02fc250ec44394149d69a682aed245fd50194cd917fd238c2dc52c1 
b259e9c9e29949f287d63e3d05ae9a8a - default default] Error launching a defined 
domain with XML: 
  instance-00015604
  05043c69-3533-4396-a610-98e564fa3ead
  ...snip... [remainder of the domain XML, including the disk/device elements, was reduced to whitespace in the archive]
: libvirt.libvirtError: internal error: qemu unexpectedly closed the 
monitor: 2022-06-16T11:53:51.583795Z qemu-system-x86_64: -blockdev 
{"driver":"rbd","pool":"ephemeral-vms","image":"05043c69-3533-4396-a610-98e564fa3ead_disk","server":[{"host":"10.81.240.10","port":"6789"},{"host":"10.81.240.11","port":"6789"},{"host":"10.81.240.12","port":"6789"}],"user":"cinder","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}:
 error reading header from 05043c69-3533-4396-a610-98e564fa3ead_disk: No such 
file or directory
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest Traceback (most 
recent call last):
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest   File 
"/openstack/venvs/nova-24.0.0/lib/python3.8/site-packages/nova/virt/libvirt/guest.py",
 line 159, in launch
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest return 
self._domain.createWithFlags(flags)
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest   File 
"/openstack/venvs/nova-24.0.0/lib/python3.8/site-packages/eventlet/tpool.py", 
line 193, in doit
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest result = 
proxy_call(self._autowrap, f, *args, **kwargs)
2022-06-16 11:53:51.807 32936 ERROR nova.virt.libvirt.guest   File 
"/openstack/venvs/nova-24.0.0/lib/python3.8/site-packages/eventlet/tpool.py", 
line 151, in proxy_call
 

[Yahoo-eng-team] [Bug 1978971] [NEW] Graceful shutdown not working for wsgi and uwsgi

2022-06-16 Thread Abhishek Kekane
Public bug reported:

Earlier, when we added the config reloading functionality, we made provision
for workers that were performing existing tasks to complete them before
terminating. The sequence was roughly as below:

On receipt of a SIGHUP signal the master process will:

* reload the configuration
* send a SIGHUP to the original workers
* start (a potentially different number of) new workers with the new
  configuration
* its listening socket will *not* be closed

On receipt of a SIGHUP signal each original worker process will:

* close the listening socket so as not to accept new requests
* complete any in-flight requests
* complete async requests (V1 create with copy-from option and V2 task api)
* exit

Recently, while working on one of these functionalities, I found that the
workers are not waiting to complete in-flight requests, due to some
errors.
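
For reference, the reload sequence above is driven by signalling the master
process; a minimal sketch of triggering it (the pid-file path is an assumption
and varies by deployment):

 $ sudo kill -HUP "$(cat /var/run/glance/glance-api.pid)"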


Following are some important logs;
[1] Logs for upload/import request which gets terminated on reload/sighup
[2] Reproducer

[1] https://paste.opendev.org/show/bTjORjtLtbhUe93xQFDu/
[2] https://paste.opendev.org/show/bFqCD4U4P0Q8MyIrsO4J/

** Affects: glance
 Importance: High
 Status: New

** Changed in: glance
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1978971

Title:
  Graceful shutdown not working for wsgi and uwsgi

Status in Glance:
  New

Bug description:
  Earlier, when we added the config reloading functionality, we made provision
  for workers that were performing existing tasks to complete them before
  terminating. The sequence was roughly as below:

  On receipt of a SIGHUP signal the master process will:

  * reload the configuration
  * send a SIGHUP to the original workers
  * start (a potentially different number of) new workers with the new
configuration
  * its listening socket will *not* be closed

  On receipt of a SIGHUP signal each original worker process will:

  * close the listening socket so as not to accept new requests
  * complete any in-flight requests
  * complete async requests (V1 create with copy-from option and V2 task api)
  * exit

  Recently, while working on one of these functionalities, I found that the
  workers are not waiting to complete in-flight requests, due to some
  errors.

  
  Following are some important logs;
  [1] Logs for upload/import request which gets terminated on reload/sighup
  [2] Reproducer

  [1] https://paste.opendev.org/show/bTjORjtLtbhUe93xQFDu/
  [2] https://paste.opendev.org/show/bFqCD4U4P0Q8MyIrsO4J/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1978971/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978983] [NEW] evacuate is not possible if the instance has task_state

2022-06-16 Thread Balazs Gibizer
Public bug reported:

Description
===
A compute host dies, but before anything notices it, a VM that was running on 
that host is requested to be stopped by the user. The VM task_state is set to 
powering-off and the shutdown RPC is sent to the dead compute. A bit later the 
monitoring system detects that the compute is dead, fences it, sets the compute 
to forced_down in nova and triggers the evacuation of the VM. However, the 
evacuation is rejected by nova:

Cannot 'evacuate' instance 81451eb2-4600-4036-a6f1-b99139f0d277 while it
is in task_state powering-off (HTTP 409) (Request-ID:
req-363ca0a3-0d68-42f6-95d2-122bd2a53463)


Steps to reproduce
==
0) deploy a multi node devstack
1) create a VM
   $openstack --os-compute-api-version 2.80 server create --image 
cirros-0.5.2-x86_64-disk --flavor c1 --nic net-id=public  --use-config-drive 
vm1 --wait
2) stop the nova-compute service of the host the VM is scheduled to:
   $sudo systemctl stop devstack@n-cpu
3) stop the VM
   $openstack server stop vm1
4) fence the host and force the host down in nova (see the sketch after these steps)
5) try to evacuate the VM
   $openstack server evacuate vm1
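
For step 4, forcing the compute service down in nova can be done through the
compute API; a hedged sketch (the host name and the exact microversion used
here are illustrative):
   $openstack --os-compute-api-version 2.11 compute service set --down <dead-host> nova-compute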

See also [1]

Expected result
===
The VM is evacuated successfully 

Actual result
=
Cannot 'evacuate' instance 81451eb2-4600-4036-a6f1-b99139f0d277 while it is in 
task_state powering-off (HTTP 409) (Request-ID: 
req-363ca0a3-0d68-42f6-95d2-122bd2a53463)

Environment
===
devstack on recent master

Workaround
==
The admin can reset the state of the VM with 
$nova reset-state --active vm1 
then retry the evacuation.

[1] https://paste.opendev.org/show/bQphEfOf8eLBnM6XmleQ/
[2] https://paste.opendev.org/show/bVI7D8H5g9Oqjjo4rKfk/

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: evacuate

** Tags added: evacuate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1978983

Title:
  evacuate is not possible if the instance has task_state

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Description
  ===
  A compute host dies, but before anything notices it, a VM that was running on 
that host is requested to be stopped by the user. The VM task_state is set to 
powering-off and the shutdown RPC is sent to the dead compute. A bit later the 
monitoring system detects that the compute is dead, fences it, sets the compute 
to forced_down in nova and triggers the evacuation of the VM. However, the 
evacuation is rejected by nova:

  Cannot 'evacuate' instance 81451eb2-4600-4036-a6f1-b99139f0d277 while
  it is in task_state powering-off (HTTP 409) (Request-ID:
  req-363ca0a3-0d68-42f6-95d2-122bd2a53463)


  Steps to reproduce
  ==
  0) deploy a multi node devstack
  1) create a VM
 $openstack --os-compute-api-version 2.80 server create --image 
cirros-0.5.2-x86_64-disk --flavor c1 --nic net-id=public  --use-config-drive 
vm1 --wait
  2) stop the nova-compute service of the host the VM is scheduled to:
 $sudo systemctl stop devstack@n-cpu
  3) stop the VM
 $openstack server stop vm1
  4) fence the host and force the host down in nova
  5) try to evacuate the VM
  $openstack server evacuate vm1

  See also [1]

  Expected result
  ===
  The VM is evacuated successfully 

  Actual result
  =
  Cannot 'evacuate' instance 81451eb2-4600-4036-a6f1-b99139f0d277 while it is 
in task_state powering-off (HTTP 409) (Request-ID: 
req-363ca0a3-0d68-42f6-95d2-122bd2a53463)

  Environment
  ===
  devstack on recent master

  Workaround
  ==
  The admin can reset the state of the VM with 
  $nova reset-state --active vm1 
  then retry the evacuation.

  [1] https://paste.opendev.org/show/bQphEfOf8eLBnM6XmleQ/
  [2] https://paste.opendev.org/show/bVI7D8H5g9Oqjjo4rKfk/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1978983/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978869] Re: bindep: SUSE & dpkg-based platforms require some fixes

2022-06-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/844596
Committed: 
https://opendev.org/openstack/glance/commit/53f322f1d8c0a5c1e8c494f88f1f05c47b29729a
Submitter: "Zuul (22348)"
Branch:master

commit 53f322f1d8c0a5c1e8c494f88f1f05c47b29729a
Author: Cyril Roelandt 
Date:   Fri Jun 3 15:49:20 2022 +0200

Bindep fixes for SUSE-like systems

- qemu-img is provided by qemu-tools
- pg_config is provided by postgresql-server-devel

Closes-Bug: #1978869
Change-Id: Ia0e5f52f3841b3306a8776762d18a56c6df1e2f5


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1978869

Title:
  bindep: SUSE & dpkg-based platforms require some fixes

Status in Glance:
  Fix Released
Status in Glance xena series:
  New
Status in Glance yoga series:
  New
Status in Glance zed series:
  Fix Released

Bug description:
  tl;dr:

  - On OpenSUSE Tumbleweed, pg_config is provided by postgresql-server-devel
  - On OpenSUSE Tumbleweed, qemu-img is provided by qemu-tools (not qemu-img)

  Details:

  On Tumbleweed, I ran the following commands:

  $ tox -ebindep
  ...
  Missing packages:
  gcc libffi48-devel libmysqlclient-devel mariadb postgresql 
postgresql-devel postgresql-server python3-devel qemu-img
  ...

  $ sudo zypper install gcc libffi48-devel libmysqlclient-devel mariadb
  postgresql postgresql-devel postgresql-server python3-devel qemu-img

  $ tox -epy3
  ...
  Collecting psycopg2>=2.8.4

Using cached psycopg2-2.9.3.tar.gz (380 kB) 

Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error' 
  
error: subprocess-exited-with-error 


× python 
setup.py egg_info did not run successfully.
│ exit code: 1 
╰─> [25 lines of output]
running egg_info
creating /tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/PKG-INFO
writing dependency_links to 
/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/dependency_links.txt
writing top-level names to 
/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/top_level.txt
writing manifest file 
'/tmp/pip-pip-egg-info-rk_ofq01/psycopg2.egg-info/SOURCES.txt'

/home/vagrant/glance/.tox/py3/lib/python3.8/site-packages/setuptools/config/setupcfg.py:458: DeprecationWarning: The license_file parameter is deprecated, use license_files instead.
  warnings.warn(msg, warning_class)
 
Error: pg_config executable not found.
  ...

  
  $ sudo zypper install postgresql-server-devel

  # Now pg_config is available

  
  $ which qemu-img
  which: no qemu-img in 
(/home/vagrant/glance/.tox/py3/bin:/home/vagrant/bin:/usr/local/bin:/usr/bin:/bin)

  # Installing qemu-tools made qemu-img available.

  
  I still could not run "tox -epy3" successfully because of issues with stestr 
on this OS, but it's pretty clear that bindep.txt should be fixed for OpenSUSE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1978869/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1979031] [NEW] [master] fullstack/functional jobs broken with pyroute2-0.6.12

2022-06-16 Thread yatin
Public bug reported:

After the pyroute2-0.6.12 update[1], multiple tests in fullstack/functional
jobs are failing[2][3].

Noting some failures here:-
functional:-
AssertionError: CIDR 2001:db8::/64 not found in the list of routes
AssertionError: Route not found: {'table': 'main', 'cidr': '192.168.0.0/24', 
'source_prefix': None, 'scope': 'global', 'device': 'test_device', 'via': None, 
'metric': 0, 'proto': 'static'}
testtools.matchers._impl.MismatchError: 'via' not in None
external_device.route.get_gateway().get('via'))
AttributeError: 'NoneType' object has no attribute 'get'
testtools.matchers._impl.MismatchError: !=:
reference = [{'cidr': '0.0.0.0/0',
  'device': 'rfp-403146e1-f',
  'table': 16,
  'via': '169.254.88.135'},
 {'cidr': '::/0',
  'device': 'rfp-403146e1-f',
  'table': 'main',
  'via': 'fe80::9005:f3ff:fe70:40b9'}]
actual= []

fullstack
neutron.tests.common.machine_fixtures.FakeMachineException: Address 
10.0.0.87/24 or gateway 10.0.0.1 not configured properly on port port3f2663

Route-related calls look broken with the new version.
Example:-
functional:- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_736/797120/19/check/neutron-functional-with-uwsgi/7366408/testr_results.html
fullstack:- 
https://983763dc8d641eb0d8f2-98d26e4bc9646b70fe4861772f4678c8.ssl.cf2.rackcdn.com/845363/4/gate/neutron-fullstack-with-uwsgi/1cc6a9d/testr_results.html
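
To narrow this down locally, pinning pyroute2 on either side of the update
should reproduce the difference; a hedged sketch (only 0.6.12 comes from this
report, the "previous known-good" lower bound is an assumption):
  pip install 'pyroute2==0.6.12'   # failing version from [1]
  pip install 'pyroute2<0.6.12'    # previous, known-good release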


[1] https://review.opendev.org/c/openstack/requirements/+/845871
[2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi&project=openstack%2Fneutron&branch=master&skip=0
[3] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi&project=openstack%2Fneutron&branch=master&skip=0

** Affects: neutron
 Importance: Critical
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New


** Tags: fullstack functional-tests gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: fullstack functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1979031

Title:
  [master] fullstack/functional jobs broken with pyroute2-0.6.12

Status in neutron:
  New

Bug description:
  After the pyroute2-0.6.12 update[1], multiple tests in
  fullstack/functional jobs are failing[2][3].

  Noting some failures here:-
  functional:-
  AssertionError: CIDR 2001:db8::/64 not found in the list of routes
  AssertionError: Route not found: {'table': 'main', 'cidr': '192.168.0.0/24', 
'source_prefix': None, 'scope': 'global', 'device': 'test_device', 'via': None, 
'metric': 0, 'proto': 'static'}
  testtools.matchers._impl.MismatchError: 'via' not in None
  external_device.route.get_gateway().get('via'))
  AttributeError: 'NoneType' object has no attribute 'get'
  testtools.matchers._impl.MismatchError: !=:
  reference = [{'cidr': '0.0.0.0/0',
'device': 'rfp-403146e1-f',
'table': 16,
'via': '169.254.88.135'},
   {'cidr': '::/0',
'device': 'rfp-403146e1-f',
'table': 'main',
'via': 'fe80::9005:f3ff:fe70:40b9'}]
  actual= []

  fullstack
  neutron.tests.common.machine_fixtures.FakeMachineException: Address 
10.0.0.87/24 or gateway 10.0.0.1 not configured properly on port port3f2663

  Route-related calls look broken with the new version.
  Example:-
  functional:- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_736/797120/19/check/neutron-functional-with-uwsgi/7366408/testr_results.html
  fullstack:- 
https://983763dc8d641eb0d8f2-98d26e4bc9646b70fe4861772f4678c8.ssl.cf2.rackcdn.com/845363/4/gate/neutron-fullstack-with-uwsgi/1cc6a9d/testr_results.html

  
  [1] https://review.opendev.org/c/openstack/requirements/+/845871
  [2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi&project=openstack%2Fneutron&branch=master&skip=0
  [3] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi&project=openstack%2Fneutron&branch=master&skip=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1979031/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp