[Yahoo-eng-team] [Bug 1930929] Re: Magic Search problems with text search

2021-07-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/794899
Committed: 
https://opendev.org/openstack/horizon/commit/ca429efbd8e9c6b0ed90777b54181a6a12cfa634
Submitter: "Zuul (22348)"
Branch: master

commit ca429efbd8e9c6b0ed90777b54181a6a12cfa634
Author: Tatiana Ovchinnikova 
Date:   Fri Jun 4 15:42:55 2021 -0500

Magic Search filter facet removal fix

When there is a text search and a filter facet applied together,
deleting the filter facet makes the remaining text search ignored.
This patch fixes it by re-running the text search when a text facet remains.

Closes-Bug: #1930929

Change-Id: I343dc4220d572669a1b36ed9cf2d10d4dacf7166
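
A minimal, language-agnostic sketch of the behaviour the patch restores (written in Python here for brevity; the actual Magic Search widget is AngularJS, and the names below are illustrative, not Horizon's):

def on_facet_removed(remaining_facets, run_text_search, run_faceted_search):
    # If a plain text facet is still present after removing a filter facet,
    # re-run the text search instead of silently dropping it.
    text_facets = [f for f in remaining_facets if f.get("type") == "text"]
    if text_facets:
        run_text_search(text_facets[0]["value"])
    else:
        run_faceted_search(remaining_facets)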


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930929

Title:
  Magic Search problems with text search

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When there is a text search and a filter facet applied together,
  deleting the filter facet makes the remaining text search ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930929/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1899649] Re: Volume marked as available after a failure to build

2021-07-22 Thread Lee Yarwood
** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova/victoria
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/victoria
   Importance: Undecided => Medium

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1899649

Title:
  Volume marked as available after a failure to build

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  Fix Released

Bug description:
  Description
  ===========
  When skipping the scheduler by specifying a host in the provided
  availability_zone parameter of a request ('$az:$host'), any failure to build an
  instance with volumes attached on the target host results in the removal of
  all volume attachments within Cinder. This in turn marks the volume as
  available, allowing future requests to potentially use it.

  This is due to the following exception handling code being used when
  no retry info is provided when skipping the scheduler:

  
https://github.com/openstack/nova/blob/261de76104ca67bed3ea6cdbcaaab0e44030f1e2/nova/compute/manager.py#L2225-L2241

  This is in contrast to what happens after a failure to schedule an
  instance with volumes attached, where the volume attachments remain and
  the volume is left in a reserved state.
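
  An illustrative sketch (not nova's actual manager code; names are assumptions) of the expected handling, mirroring the scheduling-failure path described above:

  def handle_build_failure(instance, volume_api, can_reschedule):
      if not can_reschedule:
          # Expected: leave the Cinder attachments alone so the volume stays
          # 'reserved' instead of being freed back to 'available'.
          return "error"
      # Reschedule path: attachments are torn down and recreated on the next host.
      for bdm in instance.block_device_mappings:
          volume_api.attachment_delete(bdm.attachment_id)
      return "reschedule"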

  Steps to reproduce
  ==================
  * Attempt to spawn an instance on a specific host using the
    availability_zone parameter.
  * Ensure this request fails during _build_and_run_instance raising
    exception.RescheduledException

  Expected result
  ===============
  Any associated volumes still have attachments to the instance and are left
  in a reserved state.

  Actual result
  =============
  All volume attachments are removed and any associated volumes are left in
  an available state.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 1e10461c71cb78226824988b8c903448ba7a8a76

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 N/A

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==============

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1899649/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-22 Thread Takashi Kajinami
** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: cinder

** Also affects: tacker
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  In Progress
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tacker:
  In Progress
Status in taskflow:
  Fix Released
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>
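
  The fix is mechanical: import the ABCs from collections.abc instead. A hedged way to surface any remaining offenders in a test run is to escalate DeprecationWarning to an error (illustrative snippet, not any project's actual test setup):

  import warnings

  # Turn the deprecation into a hard failure so lingering 'collections.Iterable'
  # style lookups fail loudly instead of only printing a warning.
  warnings.simplefilter("error", DeprecationWarning)

  import collections

  try:
      collections.Iterable  # deprecated alias; the correct form is collections.abc.Iterable
  except (DeprecationWarning, AttributeError) as exc:  # AttributeError on Python >= 3.10
      print("caught:", exc)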

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937261] [NEW] linuxbridge agent broken due to msgpack upgrade 0.6.2 for Ussuri on Bionic

2021-07-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

After a successful upgrade of the control-plane from Train -> Ussuri on
Ubuntu Bionic, we upgraded a first compute / network node and
immediately ran into issues with Neutron:

We noticed that Neutron is extremely slow in setting up and wiring the
network ports, so slow that it would never finish and would throw all sorts of
errors (RabbitMQ connection timeouts, full sync required, ...)


We were now able to reproduce the error on our Ussuri DEV cloud as well:
 

1) First we used strace - -p $PID_OF_NEUTRON_LINUXBRIDGE_AGENT and noticed 
that the data exchange on the unix socket between the rootwrap-daemon and the 
main process is really really slow.
One could actually read line by line the read calls to the fd of the socket.

2) We then (after adding lots of log lines and other intensive manual
debugging) used py-spy (https://github.com/benfred/py-spy) via "py-spy
top --pid $PID" on the running neutron-linuxbridge-agent process and
noticed all the CPU time (process was at 100% most of the time) was
spent in msgpack/fallback.py

3) Since the issue was not observed in TRAIN we compared the msgpack
version used and noticed that TRAIN was using version 0.5.6 while Ussuri
upgraded this dependency to 0.6.2.


4) We then installed version 0.5.6 of msgpack (ignoring the actual dependencies)

--- cut ---
apt policy python3-msgpack
python3-msgpack:
  Installed: 0.6.2-1~cloud0
  Candidate: 0.6.2-1~cloud0
  Version table:
 *** 0.6.2-1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 0.5.6-1 500
500 http://de.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
--- cut ---


Et voilà: the neutron-linuxbridge-agent worked just like before (building
one port every few seconds) and all network ports eventually converged to
ACTIVE.


I could not yet spot which commit of msgpack changes
(https://github.com/msgpack/msgpack-python/compare/0.5.6...v0.6.2) might
have caused this issue, but I am really certain that this is a major
issue for Ussuri on Ubuntu Bionic.


There are "similar" issues with 
 * https://bugs.launchpad.net/oslo.privsep/+bug/1844822
 * https://bugs.launchpad.net/oslo.privsep/+bug/1896734

both related to msgpack or the size of messages exchanged.
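
A quick, hedged way to confirm whether msgpack is running its pure-Python fallback (which would match the time spent in msgpack/fallback.py seen with py-spy) is to check which module provides the Packer/Unpacker classes; this is a diagnostic sketch, not part of the bug report:

import msgpack

# If these report 'msgpack.fallback', the C extension is not in use and all
# (un)packing runs in pure Python, which is dramatically slower.
print(msgpack.version)
print(msgpack.Packer.__module__)
print(msgpack.Unpacker.__module__)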

** Affects: oslo.privsep
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
linuxbridge agent broken due to msgpack upgrade 0.6.2 for Ussuri on Bionic
https://bugs.launchpad.net/bugs/1937261
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937261] [NEW] msgpack upgrade to 0.6.2 breaks linuxbridge agent

2021-07-22 Thread Christian Rohmann
Public bug reported:

After a successful upgrade of the control-plane from Train -> Ussuri on
Ubuntu Bionic, we upgraded a first compute / network node and
immediately ran into issues with Neutron:

We noticed that Neutron is extremely slow in setting up and wiring the
network ports, so slow that it would never finish and would throw all sorts of
errors (RabbitMQ connection timeouts, full sync required, ...)

We were now able to reproduce the error on our Ussuri DEV cloud as well:

1) First we used strace - -p $PID_OF_NEUTRON_LINUXBRIDGE_AGENT and noticed 
that the data exchange on the unix socket between the rootwrap-daemon and the 
main process is really really slow.
One could actually read line by line the read calls to the fd of the socket.

2) We then (after adding lots of log lines and other intensive manual
debugging) used py-spy (https://github.com/benfred/py-spy) via "py-spy
top --pid $PID" on the running neutron-linuxbridge-agent process and
noticed all the CPU time (process was at 100% most of the time) was
spent in msgpack/fallback.py

3) Since the issue was not observed in TRAIN we compared the msgpack
version used and noticed that TRAIN was using version 0.5.6 while Ussuri
upgraded this dependency to 0.6.2.

4) We then downgraded to version 0.5.6 of msgpack (ignoring the actual
dependencies)

--- cut ---
apt policy python3-msgpack
python3-msgpack:
  Installed: 0.6.2-1~cloud0
  Candidate: 0.6.2-1~cloud0
  Version table:
 *** 0.6.2-1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 0.5.6-1 500
500 http://de.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
--- cut ---


vs.

--- cut ---
apt policy python3-msgpack
python3-msgpack:
  Installed: 0.5.6-1
  Candidate: 0.6.2-1~cloud0
  Version table:
 0.6.2-1~cloud0 500
500 http://ubuntu-cloud.archive.canonical.com/ubuntu 
bionic-updates/ussuri/main amd64 Packages
 *** 0.5.6-1 500
500 http://de.archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
--- cut ---


Et voilà: the neutron-linuxbridge-agent worked just like before (building
one port every few seconds) and all network ports eventually converged to
ACTIVE.

I could not yet spot which commit of msgpack changes
(https://github.com/msgpack/msgpack-python/compare/0.5.6...v0.6.2) might
have caused this issue, but I am really certain that this is a major
issue for Ussuri on Ubuntu Bionic.

There are "similar" issues with
 * https://bugs.launchpad.net/oslo.privsep/+bug/1844822
 * https://bugs.launchpad.net/oslo.privsep/+bug/1896734

both related to msgpack or the size of messages exchanged.
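
To put a number on the difference between the two installed versions, a rough micro-benchmark along these lines could be run under 0.5.6 and 0.6.2 (the payload shape is a guess, not the actual privsep/rootwrap message format):

import timeit

import msgpack

payload = {"cmd": ["ip", "link", "show"], "data": b"x" * 1_000_000}
packed = msgpack.packb(payload, use_bin_type=True)

elapsed = timeit.timeit(lambda: msgpack.unpackb(packed, raw=False), number=100)
print(msgpack.version, "-", round(elapsed, 3), "s for 100 unpacks")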

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: oslo.privsep
 Importance: Undecided
 Status: New

** Affects: python-oslo.privsep (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: ubuntu
   Importance: Undecided
   Status: New

** Package changed: ubuntu => neutron

** Also affects: python-oslo.privsep (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- linuxbridge agent broken due to msgpack upgrade 0.6.2 for Ussuri on Bionic
+ linuxbridge agent broken due to msgpack upgrade to 0.6.2 for Ussuri on Bionic

** Summary changed:

- linuxbridge agent broken due to msgpack upgrade to 0.6.2 for Ussuri on Bionic
+ msgpack upgrade to 0.6.2 for Ussuri on Bionic breaks linuxbridge agent

** Summary changed:

- msgpack upgrade to 0.6.2 for Ussuri on Bionic breaks linuxbridge agent
+ msgpack upgrade to 0.6.2 breaks linuxbridge agent

** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Description changed:

  After a successful upgrade of the control-plane from Train -> Ussuri on
  Ubuntu Bionic, we upgraded a first compute / network node and
  immediately ran into issues with Neutron:
  
  We noticed that Neutron is extremely slow in setting up and wiring the
  network ports, so slow it would never finish and throw all sorts of
  errors (RabbitMQ connection timeouts, full sync required, ...)
  
- 
  We were now able to reproduce the error on our Ussuri DEV cloud as well:
-  
  
  1) First we used strace - -p $PID_OF_NEUTRON_LINUXBRIDGE_AGENT and 
noticed that the data exchange on the unix socket between the rootwrap-daemon 
and the main process is really really slow.
  One could actually read line by line the read calls to the fd of the socket.
  
  2) We then (after adding lots of log lines and other intensive manual
  debugging) used py-spy (https://github.com/benfred/py-spy) via "py-spy
  top --pid $PID" on the running neutron-linuxbridge-agent process and
  noticed all the CPU time (process was at 100% most of the time) was
  spent in msgpack/fallback.py
  
  3) Since the issue was not observed in TRAIN we compared the msgpack
  version used and notic

[Yahoo-eng-team] [Bug 1937292] [NEW] All overcloud VM's powered off on hypervisor when nova_libvirt is restarted

2021-07-22 Thread Brendan Shephard
Public bug reported:

Description:

Using TripleO. Noted that all VMs on a hypervisor are powered off
during the overcloud deployment. (I only have one hypervisor, sorry, so I
can't tell you whether it would happen to more than one.)

Seems to happen when the nova_libvirt container is restarted.

Environment:
TripleO - Master
# podman exec -it nova_libvirt rpm -qa | grep nova
python3-nova-23.1.0-0.20210625160814.1f6c351.el8.noarch
openstack-nova-compute-23.1.0-0.20210625160814.1f6c351.el8.noarch
openstack-nova-common-23.1.0-0.20210625160814.1f6c351.el8.noarch
openstack-nova-migration-23.1.0-0.20210625160814.1f6c351.el8.noarch
python3-novaclient-17.5.0-0.20210601131008.f431295.el8.noarch

Reproducer:
At least for me:
1. Start a VM
2. Restart tripleo_nova_libvirt.service:
systemctl restart tripleo_nova_libvirt.service
3. All VMs are stopped

Relevant logs:
2021-07-22 16:31:05.532 3 DEBUG nova.compute.manager [req-19a38d0b-e019-472b-95c4-03c796040767 d2ab1d5792604ba094af82d7447e88cf c4740b2aba4147adb7f101a2782003c3 - default default] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] No waiting events found dispatching network-vif-plugged-d9b29fef-cd87-41db-ba79-8b8c65b74efb pop_instance_event /usr/lib/python3.6/site-packages/nova/compute/manager.py:319
2021-07-22 16:31:05.532 3 WARNING nova.compute.manager [req-19a38d0b-e019-472b-95c4-03c796040767 d2ab1d5792604ba094af82d7447e88cf c4740b2aba4147adb7f101a2782003c3 - default default] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Received unexpected event network-vif-plugged-d9b29fef-cd87-41db-ba79-8b8c65b74efb for instance with vm_state active and task_state None.
2021-07-22 16:31:30.583 3 DEBUG nova.compute.manager [req-7be814ae-0e3d-4631-8a4c-348ead46c213 - - - - -] Triggering sync for uuid b28cc3ae-6442-40cf-9d66-9d4938a567c7 _sync_power_states /usr/lib/python3.6/site-packages/nova/compute/manager.py:9695
2021-07-22 16:31:30.589 3 DEBUG oslo_concurrency.lockutils [-] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-07-22 16:31:30.746 3 INFO nova.compute.manager [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2021-07-22 16:31:30.930 3 WARNING nova.compute.manager [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2021-07-22 16:31:30.931 3 DEBUG nova.compute.api [-] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Going to try to stop instance force_stop /usr/lib/python3.6/site-packages/nova/compute/api.py:2584
2021-07-22 16:31:31.135 3 DEBUG oslo_concurrency.lockutils [-] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" released by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.547s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:371
2021-07-22 16:31:31.161 3 DEBUG oslo_concurrency.lockutils [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] Lock "b28cc3ae-6442-40cf-9d66-9d4938a567c7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:359
2021-07-22 16:31:31.162 3 DEBUG nova.compute.manager [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Checking state _get_power_state /usr/lib/python3.6/site-packages/nova/compute/manager.py:1561
2021-07-22 16:31:31.165 3 DEBUG nova.compute.manager [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 4, current VM power_state: 4 do_stop_instance /usr/lib/python3.6/site-packages/nova/compute/manager.py:3095
2021-07-22 16:31:31.166 3 INFO nova.compute.manager [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] [instance: b28cc3ae-6442-40cf-9d66-9d4938a567c7] Instance is already powered off in the hypervisor when stop is called.
2021-07-22 16:31:31.168 3 DEBUG nova.objects.instance [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] Lazy-loading 'flavor' on Instance uuid b28cc3ae-6442-40cf-9d66-9d4938a567c7 obj_load_attr /usr/lib/python3.6/site-packages/nova/objects/instance.py:1103
2021-07-22 16:31:31.236 3 DEBUG nova.objects.instance [req-a87509b3-9674-49df-ad1f-9f8967871e10 - - - - -] Lazy-loading 'metadata'
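
The decision path those log lines describe boils down to the following (an illustrative sketch only, not nova's actual _sync_power_states code; constants simplified from nova.compute.power_state):

RUNNING, SHUTDOWN = 1, 4

def sync_power_state(db_power_state, vm_power_state, stop_instance):
    # While libvirt is restarting, the driver reports the guest as SHUTDOWN even
    # though the DB still says RUNNING, so nova concludes the instance "shutdown
    # by itself" and force-stops it - which powers off every VM on the host.
    if db_power_state == RUNNING and vm_power_state == SHUTDOWN:
        stop_instance(force=True)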

[Yahoo-eng-team] [Bug 1877189] Re: nova.task_log database records are never deleted

2021-07-22 Thread melanie witt
** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/victoria
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1877189

Title:
  nova.task_log database records are never deleted

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New

Bug description:
  nova.task_log database records are related to the server usage audit
  log API [1] which is dependent on the [DEFAULT]/instance_usage_audit
  config option being set to True (it defaults to False). Enablement of
  the config option causes nova-compute to emit notifications which are
  consumed by OpenStack Telemetry. When the config option is enabled,
  records will be periodically created in the nova.task_log table. And
  there is no cleanup of the records in nova.

  I started an ML thread about the server usage audit log API a while back
  [2] and the conclusion was that deprecating or removing the API isn't
  practical as there are systems in the wild still using it, so the best
  action to take for now is to create a nova-manage command for cleaning
  up old nova.task_log records.

  I brought up the proposal in a nova meeting [3] and in the nova
  channel [4] and the consensus was to treat it as a Wishlist bug so
  that it can be backported upstream.

  So this is a bug for the work to add a new 'nova-manage db
  purge_task_log' command that offers a --before option.
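
  A hedged sketch of the kind of purge such a command might run (SQLAlchemy 1.4-style; the model and column names here are assumptions for illustration, not nova's actual schema definitions):

  from datetime import datetime

  from sqlalchemy import Column, DateTime, Integer, String, create_engine, delete
  from sqlalchemy.orm import Session, declarative_base

  Base = declarative_base()

  class TaskLog(Base):
      __tablename__ = "task_log"
      id = Column(Integer, primary_key=True)
      task_name = Column(String(255))
      period_ending = Column(DateTime)

  def purge_task_log(engine, before):
      # Delete task_log rows whose audit period ended before the given timestamp.
      with Session(engine) as session:
          result = session.execute(delete(TaskLog).where(TaskLog.period_ending < before))
          session.commit()
          return result.rowcount

  engine = create_engine("sqlite:///:memory:")
  Base.metadata.create_all(engine)
  print(purge_task_log(engine, datetime(2020, 1, 1)))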

  [1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-09-06.log.html#t2019-09-06T14:19:50
  [2] 
http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009245.html
  [3] 
http://eavesdrop.openstack.org/meetings/nova/2019/nova.2019-10-17-14.00.log.html#l-306
  [4] 
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-10-17.log.html#t2019-10-17T14:54:39

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1877189/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936667] Re: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3

2021-07-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/manila/+/801079
Committed: 
https://opendev.org/openstack/manila/commit/c03f96ffaeb64e8cd374b6ed15d24cc21fdec034
Submitter: "Zuul (22348)"
Branch: master

commit c03f96ffaeb64e8cd374b6ed15d24cc21fdec034
Author: Takashi Kajinami 
Date:   Sat Jul 17 00:16:26 2021 +0900

Replace deprecated import of ABCs from collections

ABCs in collections should be imported from collections.abc and direct
import from collections is deprecated since Python 3.3.

Closes-Bug: #1936667
Change-Id: Ie82a523b4f8ab3d44d0fc9d70cb1ca6c059cc48f
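
A minimal before/after sketch of the pattern the patch applies (illustrative code, not taken from the manila change itself):

# Before (deprecated since Python 3.3, removed in Python 3.10):
#     from collections import Iterable
# After:
from collections.abc import Iterable

def flatten(items):
    # Tiny usage example: recursively flatten nested iterables, leaving strings intact.
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item

print(list(flatten([1, [2, [3, 4]], 5])))  # [1, 2, 3, 4, 5]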


** Changed in: manila
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1936667

Title:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in Mistral:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tacker:
  In Progress
Status in taskflow:
  Fix Released
Status in tempest:
  In Progress
Status in zaqar:
  In Progress

Bug description:
  Using or importing the ABCs from 'collections' instead of from
  'collections.abc' is deprecated since Python 3.3.

  For example:

  >>> import collections
  >>> collections.Iterable
  <stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
  <class 'collections.abc.Iterable'>

  >>> from collections import abc
  >>> abc.Iterable
  <class 'collections.abc.Iterable'>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1936667/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1937319] [NEW] cloud-init can attempt to download an iso from kernel arg url

2021-07-22 Thread Dan Bungert
Public bug reported:

As listed at https://discourse.ubuntu.com/t/netbooting-the-live-server-
installer/14510/135

When attempting a netboot, we can end up in a situation where cloud-init
downloads a full iso, only to decide that the first few bytes aren't the
expected header and to ignore it.

I suggest an enhancement where a small amount of data is downloaded
first, checked for the required #cloud-config header, and then used to
decide whether to continue downloading the file.
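
A hedged sketch of that probe (illustrative only, not cloud-init's actual url_helper code; a Range request keeps the initial transfer small, and streaming limits the download even if the server ignores the header):

import requests

def looks_like_cloud_config(url, probe_bytes=256):
    # Fetch only the first bytes and check for the '#cloud-config' marker before
    # committing to a full download.
    resp = requests.get(url, headers={"Range": "bytes=0-%d" % (probe_bytes - 1)},
                        stream=True, timeout=10)
    resp.raise_for_status()
    head = next(resp.iter_content(chunk_size=probe_bytes), b"")
    resp.close()
    return head.lstrip().startswith(b"#cloud-config")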

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1937319

Title:
  cloud-init can attempt to download an iso from kernel arg url

Status in cloud-init:
  New

Bug description:
  As listed at https://discourse.ubuntu.com/t/netbooting-the-live-
  server-installer/14510/135

  When attempting a netboot, we can end up in a situation where cloud-
  init downloads a full iso, only to decide that the first few bytes
  aren't the expected header and to ignore it.

  I suggest an enhancement where a small amount of data is downloaded
  first, checked for the required #cloud-config header, and then used to
  decide whether to continue downloading the file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1937319/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1934666] Re: IPv6 not set on VMs when using DVR HA + SLAAC/SLAAC (or DHCPv6 stateless)

2021-07-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/801503
Committed: 
https://opendev.org/openstack/neutron/commit/45a33dcb0a5e58c3f4f9e51fdfae01fae6dd6709
Submitter: "Zuul (22348)"
Branch: master

commit 45a33dcb0a5e58c3f4f9e51fdfae01fae6dd6709
Author: Bence Romsics 
Date:   Tue Jul 20 17:11:34 2021 +0200

doc: Do not use dvr_snat on computes

Change-Id: I1194d9995c16c1e178026986d9465aa09c1529c8
Closes-Bug: #1934666


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934666

Title:
  IPv6 not set on VMs when using DVR HA + SLAAC/SLAAC (or DHCPv6
  stateless)

Status in neutron:
  Fix Released

Bug description:
  Hi, I think I've encountered a bug in DVR HA + IPv6 SLAAC, but I'd
  greatly appreciate it if someone could confirm it on their infra (in
  case I've made a mistake in my Devstack configs). I also have a
  suggestion regarding Tempest blacklist/whitelist, as tests that could
  catch this issue are not running on the affected L3 config.

  My setup:
  - multinode Devstack Stein (OVS) - contr + 2 computes
  - DVR HA router
  - private subnet SLAAC/SLAAC (note: DHCPv6 stateless is also affected since 
it also uses radvd), network type vxlan
  - 2 instances on separate nodes

  I'm using
  tempest.scenario.test_network_v6.TestGettingAddress.test_slaac_from_os
  to test this.

  There seems to be an issue with setting IPv6 address on the instance
  when using a SLAAC/SLAAC subnet (and by extension, DHCPv6 stateless
  subnet) + DVR HA router. The IPv6 is set only on the instance that's
  placed on the same node as the Master router. Instances placed on
  nodes with Backup router have no IPv6 set on their interfaces.

  This issue happens only to DVR HA routers. Legacy, legacy HA and DVR
  no-HA work correctly.

  radvd is running only on the node with the Master router (as it should), in the
  qrouter namespace. RAs reach the local VM via the qr interface -> tap interface
  (I think?). RAs also manage to reach other nodes via br-tun, but I think I see
  them dropped by a flow in br-int on the destination nodes. They don't reach the
  tap interfaces.
  So the traffic maybe goes like this: (?)
  qr interface in the qrouter ns -> br-int -> br-tun -> RAs reach the other node ->
  br-tun -> br-int (and get dropped there, I think in table 1)

  The path these RAs take in br-tun (on dest node) seem to be slightly
  different depending on the router type (legacy HA vs DVR HA). I have
  no idea if this is important, but I'm dropping this here just in case:

  Legacy HA:
  RAs enter table 9 (path from table 0 to 9 is identical in both cases). They 
are resubmitted to table 10, then go through a learn flow that also outputs 
traffic to br-int:

  cookie=0xe3d0b08836a75945, duration=88239.681s, table=10, n_packets=8649, n_bytes=528332, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xe3d0b08836a75945,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:OXM_OF_IN_PORT[]),output:"patch-int"

  DVR HA:
  RAs enter table 9. They go through a flow that outputs traffic to br-int:

  cookie=0xe3d0b08836a75945, duration=88218.262s, table=9,
  n_packets=600, n_bytes=56004, priority=1,dl_src=fa:16:3f:68:d6:f0
  actions=output:"patch-int"

  One note - radvd complains that the forwarding flag in the qrouter ns is set to 0
  with DVR HA (no complaints in the logs for the other router types). I've checked
  the forwarding and accept_ra flags, and set some to 2 as per the Pike docs (I
  couldn't find anything more recent), but it didn't help.
  Another note - the code seems to be setting these flags in the namespace where
  the HA interface resides. This means they're set in the snat namespace when
  using DVR HA, but radvd is still running in the qrouter ns. Is this intended?
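
  For reference, a hedged helper to inspect those flags inside a router namespace (the namespace name below is a placeholder; this is an illustration, not neutron tooling):

  import subprocess

  def ipv6_flags(namespace, interface="all"):
      # Read the forwarding and accept_ra sysctls radvd cares about inside the
      # given network namespace.
      keys = ["net.ipv6.conf.%s.forwarding" % interface,
              "net.ipv6.conf.%s.accept_ra" % interface]
      values = {}
      for key in keys:
          out = subprocess.run(["ip", "netns", "exec", namespace, "sysctl", "-n", key],
                               capture_output=True, text=True, check=True)
          values[key] = out.stdout.strip()
      return values

  print(ipv6_flags("qrouter-<router-uuid>"))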

  As for my suggestion:
  I've noticed that the Zuul job responsible for multinode-dvr-ha-full is not 
running the tests that could catch this issue, namely:
  tempest.scenario.test_network_v6.TestGettingAddress
  These tests are run on the ipv6-only job, but since it's set to use legacy 
routers by default, everything works fine there.
  I propose to add these tests to the multinode-dvr-ha-full job to test IPv6 + 
SLAAC with DVR HA routers. (I have no idea where these test lists are generated 
for Zuul jobs, so I don't know in which repo the patch should be made)
  Since Tempest uses tags, it'd be done for Master only.

  I've tested this on Stein, but I've quickly stacked Master multinode
  (using OVS) to check and I think it's happening there as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934666/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp