Public bug reported:
In the volume snapshot table, the 'Project' column returns
'os-vol-tenant-attr:tenant_id' (from the volume response) [1],
but it should return 'os-extended-snapshot-attributes:project_id' (from the
snapshot response).
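A minimal sketch of the intended lookup. The response dicts below are hypothetical, trimmed payloads; only the two attribute keys are taken from the report itself:

```python
# Hypothetical, trimmed API response payloads; only the attribute keys
# named in this report come from the actual volume/snapshot APIs.
volume = {"os-vol-tenant-attr:tenant_id": "8d33tenant"}
snapshot = {"os-extended-snapshot-attributes:project_id": "4f1aproject"}

# The Project column should be populated from the snapshot response,
# not from the parent volume's tenant attribute.
project = snapshot["os-extended-snapshot-attributes:project_id"]
```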
Public bug reported:
Description
===
The cloud has 2 nova-compute services, both with compute_driver set to
ironic. One nova-compute service was then changed to libvirt.
After adding more bm-nodes to the cloud, some bm-nodes did not show up in
the hypervisor list.
** Affects: nova
Reviewed: https://review.openstack.org/551302
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=b626c0dc7b113365002e743e6de2aeb40121fc81
Submitter: Zuul
Branch: master
commit b626c0dc7b113365002e743e6de2aeb40121fc81
Author: Matthew Booth
Date: Fri Mar 9 14:41:49 2018
Public bug reported:
This isn't a request for a new feature per se, but rather a placeholder
for the neutron drivers team to take a look at [1].
Specifically, I'm hoping for drivers team agreement that the
modules/functionality being rehomed in [1] make sense; no actual (deep) code
review of
Public bug reported:
A common request we see from corporate environments when providing
Active Directory/LDAP integration with keystone is the ability for role
assignments to apply to users who are members of a sub-group of the
role-assigned group.
For instance, if you have the following groups
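The requested behavior amounts to a transitive expansion of group membership. Everything in the sketch below (group names, the membership maps) is hypothetical, purely to illustrate the semantics:

```python
# Hypothetical nested-group structure: each value lists the direct
# sub-groups of a group.
SUBGROUPS = {
    "cloud-admins": ["emea-admins", "apac-admins"],
    "emea-admins": [],
    "apac-admins": [],
}

# Direct user membership per group, also hypothetical.
MEMBERS = {
    "cloud-admins": set(),
    "emea-admins": {"alice"},
    "apac-admins": {"bob"},
}

def effective_members(group, subgroups, members):
    """Users who belong to `group` directly or via any nested sub-group."""
    seen, stack, users = set(), [group], set()
    while stack:
        g = stack.pop()
        if g in seen:
            continue
        seen.add(g)
        users |= members.get(g, set())
        stack.extend(subgroups.get(g, []))
    return users
```

Under this model, a role assigned to cloud-admins would apply to both alice and bob, even though neither is a direct member.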
Public bug reported:
When deleting a baremetal server, we see:
2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance:
dcb4f055-cda4-4d61-ba8f-976645c4e92a] Traceback (most recent call last):
2019-02-13 12:58:16.856 7 ERROR nova.compute.manager [instance:
Reviewed: https://review.openstack.org/636635
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=194c8c4a5fee14799b816e726316409055706cb8
Submitter: Zuul
Branch: master
commit 194c8c4a5fee14799b816e726316409055706cb8
Author: Alexandra Settle
Date: Wed Feb 13 14:11:26 2019
Public bug reported:
The configuration variable "rpc_response_max_timeout" is not defined in
fullstack tests.
Error log: http://logs.openstack.org/52/636652/1/check/neutron-fullstack/91b459a/logs/dsvm-fullstack-logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_OVS,VLANs
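A sketch of what defining the missing option in the fullstack test configuration might look like; the value shown is an assumption for illustration, not the production default:

```ini
[DEFAULT]
# Assumed value; rpc_response_max_timeout is the maximum time in seconds
# to wait for an RPC response before timing out.
rpc_response_max_timeout = 600
```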
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => Gary Kotton (garyk)
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status:
** This bug is no longer a duplicate of bug 1683972
Overlapping iSCSI volume detach/attach can leave behind broken SCSI devices
and multipath maps.
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
Public bug reported:
When using K8s on top of OpenStack, there is a race condition when a
persistent volume is deleted at the same time as an instance using
that volume is deleted. This results in the instance going into an error
state.
Public bug reported:
Description
===
I have a baremetal environment using Nova to schedule on top of four Ironic
baremetal nodes. I successfully deployed an instance to one of those nodes.
Then I ran 'nova hypervisor-servers' against each of the nodes, and the
instance showed up
Public bug reported:
When booting a baremetal server with Nova, we see Ironic report a
successful power on:
ironic-conductor.log:2019-02-13 10:52:15.901 7 INFO
ironic.conductor.utils [req-774350ce-9392-4096-b66c-20ad3d794e4e
7a9b1ac45e084e7cbeb9cb740ffe8d08 41ea8af8d00e46438c7be3b182bbb53f -
** Changed in: openstack-ansible
Status: Confirmed => Invalid
https://bugs.launchpad.net/bugs/1643991
Title:
504 Gateway Timeout when creating a port
Status in
Public bug reported:
While trying to improve the performance of EC2 credential validation, we
have just realized that the credentials are always fetched from the
underlying database.
If there is a flood of credential validation requests, this translates
into an increased load on the database server
** Changed in: openstack-ansible
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/1749574
Title:
[tracking] removal and
Marking invalid for OSA. If this is still an issue, please submit it
against the Neutron project.
** Changed in: openstack-ansible
Status: New => Invalid
Public bug reported:
Oslo.config uses re.search() to check config values against the allowed
regex. This checks if the regex matches anywhere in the string, rather
than checking if the entire string matches the regex.
Nova has three config options that appear as if the entire string should match
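The difference is easy to demonstrate. The pattern below is hypothetical; only the re.search()-versus-full-match behavior is the point:

```python
import re

# Suppose an option is meant to accept only 'auto' or 'manual'.
pattern = "auto|manual"

# re.search() accepts any string *containing* a match...
assert re.search(pattern, "not-auto-at-all") is not None

# ...whereas re.fullmatch() (or an anchored pattern) rejects it and
# only accepts the intended values.
assert re.fullmatch(pattern, "not-auto-at-all") is None
assert re.fullmatch(pattern, "auto") is not None
```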
Public bug reported:
While upgrading to Rocky, we ended up with a broken openvswitch
infrastructure and moved back to the old openvswitch.
We ended up with new machines working while old machines didn't, and it
took a while to realize that we had qvo* interfaces that not only weren't
plugged but also
Public bug reported:
In ip_lib.get_devices_info(), if the device retrieved is one of the
interfaces of a veth pair and the other one is created in another
namespace, the information about the second interface won't be available
in the list of interfaces of the first interface's namespace. Because of
Public bug reported:
In the Rocky release it is still possible to delete a port that is
attached to a VM as its primary network interface. Nova doesn't even seem
to notice when this happens. Shouldn't there be some kind of precaution on
Neutron's side?
Reproduce:
$ openstack port create ...
$
Public bug reported:
nova-manage db sync --version is failing with the stack
trace below:
[stack@hostname ~]$ nova-manage db sync --version 392
.
.
.
ERROR: Could not access cell0.
Has the nova_api database been created?
Has the nova_cell0 database been created?
Has "nova-manage api_db sync"
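The error message points at the standard cell0 bootstrap sequence. A sketch of the usual ordering (consult the deployment's own documentation before running; the transcript below assumes the api and cell0 databases already exist):

```console
$ nova-manage api_db sync
$ nova-manage cell_v2 map_cell0
$ nova-manage db sync
```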