Public bug reported:
This bug describes a race condition during deletion of a bare metal
(Ironic) instance when automated cleaning is enabled in Ironic (which is
the default).
# Steps to reproduce
As a race condition, this one is not easy to reproduce, although it has
been seen in the wild on se
** Changed in: nova
Status: In Progress => Invalid
** Changed in: nova
Status: Invalid => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733861
Added neutron to affected projects.
** Also affects: neutron
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1922923
Title:
OVS port issue
St
Public bug reported:
# Steps to reproduce
As a non admin user, navigate to Identity -> Users. Then click on the
username of your user to go to the detail page.
# Expected results
Only the allowed Overview tab is visible.
# Actual results
The view shows three tabs: Overview, Role assignments,
Public bug reported:
Description
===
Due to the differences in nova, cinder and barbican policies described in bug
1895848, a user cannot migrate an instance with an encrypted volume (using
barbican) that belongs to a user in a different project. Furthermore, if a cold
migration or resi
** Changed in: kolla-ansible/ussuri
Status: Fix Committed => Fix Released
** Changed in: kolla-ansible/train
Status: Fix Committed => Fix Released
** Changed in: kolla-ansible/victoria
Status: Fix Released => Triaged
** Changed in: kolla-ansible/ussuri
Status: Fix Committed => Triaged
** Changed in: kolla-ansible/train
Status: Fix Committed => Triaged
Public bug reported:
# Description
Migration and evacuation fail with encrypted volumes when the user is
in a different project from the instance creator, even if they are admin.
This is a common use case, since operators typically need to migrate
instances around. It also occurs with masakari du
*** This bug is a duplicate of bug 1744670 ***
https://bugs.launchpad.net/bugs/1744670
** This bug has been marked a duplicate of bug 1744670
In Pike SSL deployment Horizon can't retrieve volumes/snapshots and service
data via cinderclient
** Changed in: kolla-ansible/victoria
Status: Triaged => Confirmed
** Changed in: kolla-ansible/victoria
Status: Confirmed => Invalid
** Changed in: kolla-ansible
Milestone: 10.0.0 => None
** Changed in: kolla-ansible/ussuri
Milestone: 10.0.0 => None
** Also affects: kolla-ansible/victoria
Importance: Medium
Assignee: Radosław Piliszek (yoctozepto)
Status: In Progress
** Changed in: kolla-ansible/ussuri
** Also affects: kolla/rocky
Importance: Undecided
Status: New
** Changed in: kolla/rocky
Status: New => Triaged
** Changed in: kolla
Status: Confirmed => Invalid
** Changed in: kolla/rocky
Importance: Undecided => High
** Changed in: kolla-ansible/train
Status: New => Triaged
** Changed in: kolla-ansible/stein
Status: New => Triaged
** Changed in: kolla-ansible
Status: New => Triaged
** Changed in: kolla-ansible
Status: Triaged => Invalid
This affects Stein deploy jobs and Train upgrade jobs (due to Stein
images).
** Changed in: kolla-ansible/stein
Importance: Undecided => High
** Changed in: kolla-ansible/train
Importance: Undecided => High
** Also affects: horizon
Importance: Undecided
Status: New
Public bug reported:
Steps to reproduce
--
Create multiple instances concurrently using a flavor with a PCI
passthrough request (--property
"pci_passthrough:alias"="<alias_name>:<count>"), and a scheduler hint with
some anti-affinity constraint.
Expected result
---
The instances are cr
Given the current size of the patch, we'll drop this from 9.0.0.
** Changed in: kolla-ansible/train
Milestone: 9.0.0 => None
** Changed in: kolla-ansible/train
Status: In Progress => Won't Fix
execution of the
resource tracker update, which no doubt has some unintended
consequences.
Environment
===
Seen on Rocky 18.2.0, and master (in functional testing).
** Affects: nova
Importance: Undecided
Assignee: Mark Goddard (mgoddard)
Status: In Progress
** Changed in
unless the
node is orphaned
** Affects: nova
Importance: Undecided
Assignee: Mark Goddard (mgoddard)
Status: In Progress
** Changed in: nova
Assignee: (unassigned) => Mark Goddard (mgoddard)
** Changed in: nova
Status: New => In Progress
** Also affects: kolla-ansible/ussuri
Importance: Medium
Assignee: Radosław Piliszek (yoctozepto)
Status: In Progress
** Changed in: kolla-ansible/ussuri
Milestone: 9.0.0 => 10.0.0
lit('.')
Tested on CentOS 7.7, cloud-init 18.5.
** Affects: cloud-init
Importance: Undecided
Assignee: Mark Goddard (mgoddard)
Status: New
** Changed in: cloud-init
Assignee: (unassigned) => Mark Goddard (mgoddard)
Thanks for fixing this.
** Changed in: kolla-ansible
Status: Triaged => Invalid
--
https://bugs.launchpad.net/bugs/1843104
Title:
KeyError: 'default_dns_nameservers'
It looks like this was caused by https://review.opendev.org/#/c/655208/,
which changed the handling of defaults for config in horizon. We
override the OPENSTACK_NEUTRON_NETWORK variable in local_settings.py,
but do not include the default for 'default_dns_nameservers'. A simple
workaround for
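The override-vs-defaults interaction described above can be sketched generically (hypothetical keys and helper; not horizon's actual settings machinery): replacing a settings dict wholesale drops keys the consuming code expects, while merging the override onto the defaults preserves them.

```python
# Illustrative sketch only (hypothetical keys, not horizon's real code).
# The application ships defaults for every key it will later look up.
DEFAULTS = {
    'enable_router': True,
    'default_dns_nameservers': [],
}

# An override such as one might place in local_settings.py, which omits
# 'default_dns_nameservers'.
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
}

def get_setting(settings, key):
    # Code that assumes every default key is present fails on a bare override.
    return settings[key]

try:
    get_setting(OPENSTACK_NEUTRON_NETWORK, 'default_dns_nameservers')
except KeyError:
    print('KeyError without merged defaults')

# Merging the override onto the defaults keeps the missing keys intact.
merged = {**DEFAULTS, **OPENSTACK_NEUTRON_NETWORK}
print(merged['default_dns_nameservers'])  # []
```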
Public bug reported:
When a network is deleted, sometimes the delete_network_postcommit
method of my ML2 mechanism driver receives a network object in the
context that has the provider attributes set to None.
I am using Rocky (13.0.4), on CentOS 7.5 + RDO, and kolla-ansible. I
have three controll
Public bug reported:
Neutron bootstrap is currently failing on Ubuntu bionic (kolla-ansible-
ubuntu-source jobs) with the following error:
INFO [alembic.runtime.migration] Running upgrade 63fd95af7dcd -> c613d0b82681
Traceback (most recent call last):
File
"/var/lib/kolla/venv/lib/python3.6/
** Changed in: kolla-ansible/stein
Status: Fix Committed => Fix Released
** Changed in: kolla-ansible/stein
Status: Fix Released => In Progress
** Changed in: kolla-ansible/rocky
Status: Fix Committed => In Progress
** Changed in: kolla-ansible
Status: Fix Committed =
** Changed in: kolla/stein
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1832860
Title:
Failed instances stuck in BUILD sta
Public bug reported:
Description
===
When performing an upgrade, services cap the RPC version they use when
communicating with nova-compute to that of the compute service with the
lowest version. Once all computes are running the new version, we can
restart the services to remove this cap.
When
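The capping behaviour described above can be sketched schematically (illustrative names only; nova's real logic lives in its service-version machinery, not in this function):

```python
# Schematic sketch of RPC version capping during a rolling upgrade.
# Services pin the RPC version they send at the minimum version reported
# by any nova-compute, and only raise it after every compute is upgraded
# and the services are restarted.
compute_versions = {
    'compute-0': 41,  # upgraded
    'compute-1': 41,  # upgraded
    'compute-2': 40,  # still on the old release
}

def rpc_version_cap(versions):
    """Cap at the lowest version any compute service understands."""
    return min(versions.values())

print(rpc_version_cap(compute_versions))  # 40

# Once the last compute is upgraded, a restart recomputes the cap.
compute_versions['compute-2'] = 41
print(rpc_version_cap(compute_versions))  # 41
```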
Public bug reported:
When performing an upgrade, the upgrade check is supposed to be run
after the DB schema syncs and data migration. This should be something
that is checked by the upgrade check command.
Steps to reproduce
==
Tested in Queens -> Rocky upgrade.
Prior to an upgr
** Changed in: kolla-ansible
Milestone: None => 8.0.0
** Project changed: kolla-ansible => kolla
** Changed in: kolla
Milestone: 8.0.0 => None
** No longer affects: kolla-ansible/rocky
** Changed in: kolla
Importance: Undecided => High
** Also affects: kolla/rocky
Importance: Und
Some things to note:
I'm pretty confident that the DB sync had been run using the rocky nova-
api container prior to the upgrade.
The 'missing' trusted_certs column did exist in the instance_extra table
in the nova DB prior to performing the workaround DB sync.
No restart of services was necessa
** No longer affects: kolla
--
https://bugs.launchpad.net/bugs/1763608
Title:
Netplan ignores Interfaces without IP Addresses
Status in netplan:
Fix Committed
Status in ne
** Changed in: kolla-ansible
Status: New => In Progress
** Changed in: kolla-ansible
Importance: Undecided => High
** Changed in: kolla-ansible
Assignee: (unassigned) => Mark Goddard (mgoddard)
** Also affects: kolla-ansible/stein
Importance: High
Assignee: Mar
** Changed in: kolla-ansible/ocata
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1682060
Title:
empty nova service and hype
** Also affects: neutron
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1823818
Title:
Memory leak in some neutron agents
Status in kolla:
** Project changed: kolla => kolla-ansible
--
https://bugs.launchpad.net/bugs/1821696
Title:
Failed to start instances with encrypted volumes
Status in koll
I don't think this is a kolla bug. Marking invalid for kolla.
** Changed in: kolla
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1763608
Title:
Netplan ignores Interfaces without IP Addresses
nova
Importance: Undecided
Assignee: Mark Goddard (mgoddard)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1794773
Title:
Unnecessary
Environment
===
Nova stable/queens @ 01b756f960ed19ab801994d08d749dd94d729a22
** Affects: nova
Importance: Undecided
Assignee: Mark Goddard (mgoddard)
Status: New
** Changed in: nova
Assignee: (unassigned) => Mark Goddard (mgoddard)
Public bug reported:
Per the discussion in [1], the ironic nodes added to the node cache in
the ironic virt driver may be missing the required field resource_class,
as this field is not in _NODE_FIELDS. In practice, this is typically not
an issue (possibly never), as the normal code path uses a de
Public bug reported:
Description
===
Sometimes when a baremetal instance is terminated, some VIFs are not
detached from the node. This can lead to the node becoming unusable,
with subsequent attempts to provision it failing during VIF attachment
due to there being insufficient free ironic po
Public bug reported:
Description
===
A baremetal (ironic) instance can become stuck in the BUILD state if the
ironic node to which the instance has been assigned is either deleted or
torn down manually while the instance is being built.
Steps to reproduce
==
* Create a n
Public bug reported:
When creating an image with the swift backend, the swift object URL
(including password) is logged at debug level in the registry log. The
locations field is currently censored, but location_data is not.
Example:
# glance image-create --name test --disk-format raw --containe
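A generic way to avoid leaking such credentials is to scrub any embedded password from location URLs before logging them. A minimal sketch using the standard library (not glance's actual code; the URL is an invented example):

```python
# Generic illustration of scrubbing credentials from a backend-store URL
# before it is logged. A swift location URL can embed the password, so any
# debug logging of location data should mask it first.
from urllib.parse import urlsplit, urlunsplit

def scrub_url(url):
    """Replace any password embedded in the URL's netloc with '***'."""
    parts = urlsplit(url)
    if parts.password is None:
        return url
    netloc = parts.netloc.replace(':%s@' % parts.password, ':***@')
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

url = 'swift+https://account:secret@example.com/v1/container/object'
print(scrub_url(url))
# swift+https://account:***@example.com/v1/container/object
```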
new ports to instance info cache
---
Request:
- Refresh instance network cache with new interfaces (get_instance_nw_info)
- Unconditionally add duplicate interfaces to cache.
** Affects: nova
Importance: Undecided
Assignee: Mark Goddard (mgoddard
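The failure mode noted above can be sketched in a few lines (hypothetical helper names, not nova's code): unconditionally appending refreshed interfaces duplicates ports already in the cache, whereas de-duplicating by port ID does not.

```python
# Minimal sketch of the duplicate-interface bug (hypothetical names).
def refresh_cache_buggy(cache, new_ports):
    # Appends without checking whether the port is already cached.
    cache.extend(new_ports)
    return cache

def refresh_cache_fixed(cache, new_ports):
    # Only add ports whose IDs are not already present.
    known = {p['id'] for p in cache}
    cache.extend(p for p in new_ports if p['id'] not in known)
    return cache

cache = [{'id': 'port-a'}]
refresh = [{'id': 'port-a'}, {'id': 'port-b'}]

print(len(refresh_cache_buggy(list(cache), refresh)))  # 3: 'port-a' twice
print(len(refresh_cache_fixed(list(cache), refresh)))  # 2
```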
Public bug reported:
Concurrently attaching multiple network interfaces to a single instance
can often result in corruption of the instance's information cache in
Nova. The result is that some network interfaces may be missing from
'nova list', and silently fail to detach when 'nova interface-deta
Public bug reported:
With syslog enabled on Juno (enable_syslog=True in glance-api.conf),
glance spins on startup, consuming all available CPU cycles.
With some carefully placed calls to exit(), the line in glance causing
the problem was determined to be
https://github.com/openstack/glance/blob/s
Public bug reported:
See https://bugs.launchpad.net/ironic/+bug/1405131 for an equivalent bug
in ironic that describes the issue.
The bug itself resides in nova, but requires cooperation from ironic in
order to fix it.
I have attached a patch that we are using internally to resolve the
issue. It
Public bug reported:
Seen on RDO Juno, running on CentOS 7.
Steps to reproduce:
- Set admin_workers=1 and public_workers=1 in /etc/keystone/keystone.conf
- Start the keystone service: `systemctl start openstack-keystone`
- Start a 'persistent' TCP connection to keystone: `telnet localhost 5000 &
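The starvation these steps produce can be reproduced self-contained with plain sockets (an analogy, not keystone itself): a lone worker that serves each connection to completion never reads the second client's request while the first connection stays open.

```python
# Illustration of single-worker starvation by a held-open connection.
import socket
import threading

def single_worker_server(sock):
    # One worker: serve each connection to completion before accepting
    # the next -- analogous to keystone with public_workers=1.
    while True:
        conn, _ = sock.accept()
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)  # a simple echo stands in for a response
        conn.close()

sock = socket.socket()
sock.bind(('127.0.0.1', 0))
sock.listen(5)
port = sock.getsockname()[1]
threading.Thread(target=single_worker_server, args=(sock,),
                 daemon=True).start()

# A 'persistent' idle connection (the telnet in the steps above) grabs
# the only worker.
holder = socket.create_connection(('127.0.0.1', port))

# A second client's request sits in the backlog and is never read.
victim = socket.create_connection(('127.0.0.1', port))
victim.sendall(b'ping')
victim.settimeout(0.5)
try:
    victim.recv(1024)
    starved = False
except socket.timeout:
    starved = True
print('starved:', starved)  # True while the first connection is open

holder.close()  # closing the persistent connection frees the worker
victim.settimeout(5)
reply = victim.recv(1024)
print(reply)  # b'ping' -- served only after the holder disconnects
```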