The status in neutron was not updated somehow. The fix
https://review.opendev.org/c/openstack/neutron/+/745330/ landed during
Wallaby development.
** Changed in: neutron
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
** Changed in: neutron
Status: Opinion => Confirmed
--
https://bugs.launchpad.net/bugs/1933802
Title:
missing global_request_id in neutron_lib context from_dict method
*** This bug is a duplicate of bug 1794718 ***
https://bugs.launchpad.net/bugs/1794718
I confirmed this is a duplicate of bug 1794718 as mentioned above. Note
that the fix is included in stable/train. While stable/stein is in the
Extended-Maintenance phase, it may be good to backport it.
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]
** Changed in: nova
Status: Incomplete => Expired
--
Why is this a bug? I looked through the entire Neutron code.
global_request_id is only used once here:
https://github.com/openstack/neutron/blob/a33588b08639bd4bb78e632eb1fb2600a96aeb44/neutron/notifiers/nova.py#L83.
And it is generated. So how is the absence of global_request_id in the
Neutron context
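The comment above points at the one place the id is consumed, and notes that a value is generated there. The fallback behaviour it describes can be paraphrased like this (a hypothetical sketch, not the actual neutron code; the function name is made up for illustration):

```python
import uuid

def effective_global_request_id(ctx_global_request_id):
    # Hypothetical paraphrase of the fallback the comment describes:
    # reuse the context's global_request_id when present,
    # otherwise mint a fresh request id.
    return ctx_global_request_id or 'req-' + str(uuid.uuid4())
```

Under that reading, a missing global_request_id in the deserialized context is papered over at the call site, which is why the commenter questions whether its absence is actually a bug.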
I talked with my team, and MAAS should not be responsible for, or even
aware of, other hosts needing to be added to known_hosts. In fact, this
would likely be a security violation, since it would grant nodes login
privileges to other nodes that were never meant to have access to each other.
Currently nova doe
Reviewed: https://review.opendev.org/c/openstack/neutron/+/798058
Committed:
https://opendev.org/openstack/neutron/commit/b189b0f32284ca5209a9b116953053448ca4a49b
Submitter: "Zuul (22348)"
Branch: master
commit b189b0f32284ca5209a9b116953053448ca4a49b
Author: Rodolfo Alonso Hernandez
Date:
Thanks Jorge. Let's patch rocky as well for upgrade purposes.
** Changed in: cloud-archive/rocky
Status: Won't Fix => In Progress
--
Reviewed: https://review.opendev.org/c/openstack/glance/+/797721
Committed:
https://opendev.org/openstack/glance/commit/7c1cd438a0a9fe5cababc9ff0164ce7844c98abf
Submitter: "Zuul (22348)"
Branch: master
commit 7c1cd438a0a9fe5cababc9ff0164ce7844c98abf
Author: Dan Smith
Date: Wed Jun 23 10:08
** Changed in: cloud-init
Status: New => Opinion
--
https://bugs.launchpad.net/bugs/1931735
Title:
node failed to deploy because an ephemeral network device was not
This bug was fixed in the package nova - 2:17.0.13-0ubuntu2~cloud0
---
nova (2:17.0.13-0ubuntu2~cloud0) xenial-queens; urgency=medium
.
* New update for the Ubuntu Cloud Archive.
.
nova (2:17.0.13-0ubuntu2) bionic; urgency=medium
.
* d/control: Update VCS paths for move to l
This bug was fixed in the package nova - 2:18.3.0-0ubuntu1~cloud2
---
nova (2:18.3.0-0ubuntu1~cloud2) bionic-rocky; urgency=medium
.
* d/control: Update VCS paths for move to lp:~ubuntu-openstack-dev.
* d/p/1892361-update-pci-stat-pools.patch: Cherry pick upstream fix
for
This bug was fixed in the package nova - 2:19.3.2-0ubuntu1~cloud1
---
nova (2:19.3.2-0ubuntu1~cloud1) bionic-stein; urgency=medium
.
[ Hemanth Nakkina ]
* d/p/0001-Update-pci-stat-pools-based-on-PCI-device-changes.patch: Update
pci
stats pools based on PCI device changes
Public bug reported:
After updating Neutron from Queens to Ussuri, we started getting the
error "OSError: Premature eof waiting for privileged process" for all
neutron agents and for different operations (create network,
enable/disable DHCP, etc.). All errors are raised inside the
oslo.privsep library
** No longer affects: nova
--
https://bugs.launchpad.net/bugs/1921381
Title:
iSCSI: Flushing issues when multipath config has changed
Status in OpenStack Co
Moving to valid as we are now seeing this in our own upstream CI:
https://zuul.opendev.org/t/openstack/build/68e59744ef7444a5ae108118983c9353/log/job-output.txt
2021-06-27 04:37:51.638267 | controller | Collecting libvirt-python===7.4.0
2021-06-27 04:37:51.647585 | controller | Downloading
ht
Given that the three point releases mentioned are now included in Ubuntu
/ the Cloud Archive, I'm marking this fix-released for all affected
targets.
Focal-updates: 2:16.3.2
Groovy-updates: 2:17.1.1
Hirsute: 2:18.0.0
** Changed in: neutron (Ubuntu Focal)
Status: Triaged => Fix Released
Public bug reported:
code:
@classmethod
def from_dict(cls, values):
    return cls(user_id=values.get('user_id', values.get('user')),
               tenant_id=values.get('tenant_id', values.get('project_id')),
               is_admin=values.get('is_admin'),
               roles=values.get('roles')
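The snippet above is cut off mid-call, but the shape of the reported problem (and of the obvious fix) is that from_dict rebuilds the context without forwarding global_request_id. A minimal sketch using a simplified stand-in class (this is an assumption for illustration, not the actual neutron_lib Context, which carries many more fields):

```python
# Simplified stand-in for neutron_lib's Context, used only to
# illustrate the from_dict round-trip described in the bug report.
class Context:
    def __init__(self, user_id=None, tenant_id=None, is_admin=None,
                 roles=None, global_request_id=None):
        self.user_id = user_id
        self.tenant_id = tenant_id
        self.is_admin = is_admin
        self.roles = roles or []
        # Without forwarding this field in from_dict, a context rebuilt
        # from a dict silently drops the caller's global request id.
        self.global_request_id = global_request_id

    @classmethod
    def from_dict(cls, values):
        return cls(
            user_id=values.get('user_id', values.get('user')),
            tenant_id=values.get('tenant_id', values.get('project_id')),
            is_admin=values.get('is_admin'),
            roles=values.get('roles'),
            # The missing piece the bug describes: restore
            # global_request_id instead of discarding it.
            global_request_id=values.get('global_request_id'),
        )

ctx = Context.from_dict({'user_id': 'u1', 'project_id': 'p1',
                         'global_request_id': 'req-abc'})
```

With the extra keyword in place, serializing a context to a dict and rebuilding it no longer loses the cross-service request id.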