Furthermore, we won't get here if the volume doesn't have any attachments:
https://github.com/openstack/cinder/blob/9bc9a528ef46522fd727a2a047da435c658a15c5/cinder/volume/api.py#L2100
Looks like that for loop should have an else clause to set override back
to False. This is a cinder bug so
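For illustration only - hypothetical names, not the actual cinder code - the
for/else shape being suggested is roughly:

    for attachment in volume.volume_attachment:
        if needs_override(attachment):  # whatever condition the loop checks
            override = True
            break
    else:
        # The loop found no attachments (or none matched), so reset the
        # flag here instead of leaving whatever value it had before.
        override = False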
OK so nova and cinder are both at the queens release, which means when
you attach the volume to the server, the compute API should create a
volume attachment record on the given volume. If the volume is in
'error' status I'd expect that to fail like how the old volume reserve
action would fail for
It looks like placement isn't running; you got a 503 response from the
placement API, not a 404. The compute node record in the 'nova' (cell1)
database is auto-generated when the nova-compute service starts up. That
compute node record uuid is used to create the resource provider in
placement.
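A couple of quick sanity checks, assuming the osc-placement plugin is
installed and a 'placement' endpoint is registered in the service catalog:

    openstack endpoint list --service placement
    openstack resource provider list

If placement isn't running or the endpoint is missing, the second command
will fail rather than list the compute node providers.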
This isn't really a valid workflow. You need to first detach the volume
from the server via the compute API:
https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-remove-volume
And then you can delete the volume. The volume attachments API in
Cinder is
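Roughly, the supported order of operations from the CLI is:

    # Detach the volume from the server via the compute API first.
    openstack server remove volume <server-uuid> <volume-uuid>
    # Then delete it via the volume API.
    openstack volume delete <volume-uuid>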
Looks like this was fixed by a series of partial fixes.
** Changed in: nova
Status: Confirmed => Fix Released
** Changed in: nova
Assignee: (unassigned) => Julian Sy (syjulian)
Looks like the os-volumes_boot route still exists:
https://github.com/openstack/nova/blob/f81865f56bd5aeffddfff99fdaa089160ce88048/nova/api/openstack/compute/routes.py#L751
I figured we'd want a functional test for something that's in our route
map, but we don't advertise the API at all so
Fixed with https://review.openstack.org/#/c/266233/.
** Changed in: nova
Status: Confirmed => Fix Released
** Changed in: nova
Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)
No recent hits in logstash so I'm going to mark this invalid now. We can
re-open if it shows back up.
** Changed in: nova
Status: Confirmed => Invalid
I haven't seen this since cdent's fixes:
https://review.openstack.org/#/q/topic:bug/1705753+(status:open+OR+status:merged)
So marking it fixed.
** Changed in: nova
Status: In Progress => Fix Released
** Changed in: nova
Assignee: (unassigned) => Chris Dent (cdent)
Not seeing this in logstash so this might be fixed inadvertently or the
signature has changed and we need to track this fresh.
** Changed in: nova
Status: Confirmed => Invalid
Given we're about to cut RC1 for rocky, ocata/pike are old and cells v1
is deprecated, so I don't think anyone cares about fixing this for CI.
** Changed in: nova
Status: Confirmed => Won't Fix
The docs are correct. We list the compute services first to make sure
the service has started and created an entry in the database. Then we
discover_hosts, because you can't discover the host until it's
registered itself (specifically the compute_nodes record) in the
database.
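In other words, the order is roughly:

    # 1. Confirm the nova-compute service has started and registered itself.
    openstack compute service list --service nova-compute
    # 2. Only then map the new host(s) into a cell.
    nova-manage cell_v2 discover_hosts --verbose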
** Changed in: nova
This is really a duplicate bug and the original fix is here:
https://review.openstack.org/#/c/401009/
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Assignee: (unassigned) => s10 (vlad-esten)
** Changed in: nova/queens
Assignee: (unassigned) => s10 (vlad-esten)
**
*** This bug is a duplicate of bug 1746483 ***
https://bugs.launchpad.net/bugs/1746483
** This bug has been marked a duplicate of bug 1746483
Not able to boot from Volume / Volume snapshot when using isolated_images
As noted this isn't a bug, so marking it invalid. Would require a spec
to change this behavior, but I doubt we'd do it, since nova shouldn't
have access to the guest once it's created. You'd have to rebuild the
server if you wanted to change the hostname in the guest.
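For reference, a rebuild from the CLI looks roughly like:

    openstack server rebuild --image <image-uuid> <server-uuid>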
** Changed in: nova
Is this still an issue? If so, please explain what the issue is in more
detail - what is the user impact?
Conductor is now passing the migration record it creates down to the
compute, so I don't think RT creates it anymore and this shouldn't be an
issue. See:
Looks like this is fixed for nova as of
https://review.openstack.org/#/c/539164/
** Tags added: api
** Changed in: nova
Status: In Progress => Fix Released
*** This bug is a duplicate of bug 1777505 ***
https://bugs.launchpad.net/bugs/1777505
This isn't a nova bug, it's a devstack bug, and already fixed.
** Also affects: devstack
Importance: Undecided
Status: New
** Changed in: nova
Status: Confirmed => Invalid
** This bug
Consider it ignored! :)
** Changed in: nova
Status: New => Invalid
https://bugs.launchpad.net/bugs/1785511
Title:
cpu_quota does not throttle cpu
Changing the database like this isn't really supported. You'd likely
need to delete the old cell and map the hosts to a new cell using the
nova-manage cell_v2 * commands like create_cell, map_instances and
discover_hosts.
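A rough outline of that flow (the connection URLs and UUIDs are
deployment-specific):

    nova-manage cell_v2 create_cell --name <new-cell> \
        --database_connection <cell-db-connection-url> \
        --transport-url <message-queue-url>
    nova-manage cell_v2 map_instances --cell_uuid <new-cell-uuid>
    nova-manage cell_v2 discover_hosts --cell_uuid <new-cell-uuid>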
** Tags added: cells
** Changed in: nova
Status: New => Invalid
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/pike
Public bug reported:
I'm not sure if this is due to the tempest-full rename to tempest-full-py3,
but it seems like this wasn't an issue before. We now run tempest-full-py3
even on test-only changes, like this change:
https://review.openstack.org/#/c/588935/
My guess is we had this
This was fixed in Pike with change:
https://review.openstack.org/#/c/469037/6/nova/scheduler/utils.py
If we needed to backport something to Ocata, it would have to be an
Ocata-only tactical fix.
** Changed in: nova
Status: Confirmed => Fix Released
Public bug reported:
The samples in https://developer.openstack.org/api-ref/compute/#server-groups-os-server-groups
are using microversion 2.64 but don't explicitly say that. The samples
were changed in Rocky with change:
This has been broken since newton:
https://github.com/openstack/nova/commit/74ab427d4796d8a386f84a15cc49188c2a60f8f1
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Public bug reported:
When we resize a server, we update the RequestSpec.flavor because for
later move operations of the server, the RequestSpec is what gets passed
to the scheduler, so naturally we need the RequestSpec.flavor to match
the instance.flavor after the instance is resized. That
we could optimize this by simply caching the host=az
mapping.
https://github.com/openstack/nova/blob/4c37ff72e5446c835a48d569dd5a1416fcd36c71/nova/conductor/manager.py#L1263
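A minimal sketch of that optimization, with hypothetical helper names
rather than the actual conductor code:

    _host_az_cache = {}

    def get_host_az(host, lookup_az_for_host):
        # Cache the host -> AZ mapping so walking the list of alternate
        # hosts doesn't repeat the same availability zone query.
        if host not in _host_az_cache:
            _host_az_cache[host] = lookup_az_for_host(host)
        return _host_az_cache[host]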
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Affects: nova/p
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/ocata
Status: New => Triaged
** Changed in:
Public bug reported:
I found this in the starlingx diff for nova:
https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a#diff-afb9c0c0ca5276c7eacd987bbf51d8e6R447
For volume-backed instances, the instance image_meta comes from the
volume_image_metadata
See https://review.openstack.org/#/c/565601/5 for more context - that
was changed because it failed the ceph job, because apparently with rbd
volumes you can't delete the volume snapshots until the original volume
is deleted, which you normally can't do in the cinder API if there
are
** Also affects: devstack
Importance: Undecided
Status: New
** No longer affects: nova
** Changed in: devstack
Status: New => Fix Released
** Changed in: devstack
Importance: Undecided => Medium
** Changed in: devstack
Assignee: (unassigned) => Pawel Koniszewski
Public bug reported:
The pause:
https://developer.openstack.org/api-ref/compute/#pause-server-pause-action
And suspend:
https://developer.openstack.org/api-ref/compute/#suspend-server-suspend-action
APIs just say they pause and suspend the server, which does not really
tell the user
** Changed in: nova
Importance: Medium => Low
** Changed in: nova
Status: Confirmed => Opinion
https://bugs.launchpad.net/bugs/1420662
Bug 1488111 is what I was thinking about, but Lee clarified the issue
for me. The scenario is like:
1. spawn on host1 fails, reschedule to host2
2. prep_block_devices fails on host2 because of the volume attachment issue
mentioned
** Also affects: nova/queens
Importance: Undecided
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/pike
Public bug reported:
- [x] This is a doc addition request.
The "os_shutdown_timeout" image property, used by nova, is not
documented. It's in the metadefs though:
https://github.com/openstack/glance/blob/48ee8ef4793ed40397613193f09872f474c11abe/etc/metadefs
/compute-guest-shutdown.json#L13
"By
Marked as incomplete since I'm not sure what you're saying is the bug.
Please clarify. I'll fix the API reference docs in the meantime.
** Changed in: nova
Status: Opinion => Incomplete
** Changed in: nova
Importance: Wishlist => Undecided
Is your bug really about saying that admins shouldn't have to pass
is_public=None *by default* and is_public=None should just be the
default behavior for admins if the is_public query parameter isn't
provided? If so, that's not a bug, and would require a microversion
since it's a behavior change
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/pike
Importance: Undecided => Medium
** Changed in: nova/queens
Public bug reported:
It was noted in this review:
https://review.openstack.org/#/c/587636/4/nova/compute/resource_tracker.py@141
that the ResourceTracker.compute_nodes and ResourceTracker.stats (and
old_resources) entries only grow and are never cleaned up as we
rebalance nodes or nodes are
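The kind of cleanup being described would look roughly like this
(hypothetical names, not the actual resource tracker code):

    def prune_stale_nodes(tracked_dicts, current_node_uuids):
        # Drop entries for compute nodes this host no longer manages,
        # e.g. ironic nodes rebalanced to another compute service.
        for entries in tracked_dicts:  # compute_nodes, stats, old_resources
            for node_uuid in list(entries):
                if node_uuid not in current_node_uuids:
                    del entries[node_uuid]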
Public bug reported:
A single nova-compute service host can manage multiple ironic nodes,
which creates multiple ComputeNode records per compute service host, and
ironic instances are 1:1 with each compute node.
Before change https://review.openstack.org/#/c/398473/ in Ocata, the
ComputeManager
Bug 1784666 has the proper triage information on the problem, it's an
import ordering issue.
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Triaged
**
*** This bug is a duplicate of bug 1773102 ***
https://bugs.launchpad.net/bugs/1773102
Someone said the same in this change:
https://review.openstack.org/#/c/582332/
"""
Hello melanie, I meet the same problem about request-id in log.
I think this bug is similar to cinder bug
This is invalid since Pike when we dropped using quota usages and
reservations tables:
https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/cells-count-resources-to-check-quota-in-api.html
** Changed in: nova
Status: Confirmed => Invalid
*** This bug is a duplicate of bug 1535918 ***
https://bugs.launchpad.net/bugs/1535918
I believe this has been fixed by sending the event to both the source
and destination host of the evacuate based on the migration record, see:
https://review.openstack.org/#/c/371048/
** This bug has been
-properties.rst
URL: https://docs.openstack.org/glance/latest/admin/useful-image-properties.html
** Affects: glance
Importance: Undecided
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Tags: documentation
** Changed in: glance
Status: New => Triaged
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/queens
Assignee:
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Fix Released
** Changed in:
There is a patch here: https://review.openstack.org/#/c/586965/
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova
Status: New => Invalid
** Changed in: nova/pike
Status: New => In Progress
** Changed in: nova/pike
Importance: Undecided
Public bug reported:
The imageRef parameter description does not describe what happens during
rebuild if a new image is provided, nor is there any asynchronous post-
condition section for what the user should expect:
https://developer.openstack.org/api-ref/compute/#rebuild-server-rebuild-action
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => In Progress
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/pike
This isn't an issue after all because we move the allocations on the
source node from the instance to the migration *before* we do the copy:
https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/conductor/tasks/live_migrate.py#L82
Looks like this was regressed in Queens:
https://review.openstack.org/#/c/507638/29/nova/compute/manager.py@a6289
And I even pointed it out on the review but we didn't think about the
forced live migration case:
https://review.openstack.org/#/c/507638/25/nova/compute/manager.py@6252
** Also
Public bug reported:
***This is purely based on code inspection right now.***
With a forced host live migration, we bypass the scheduler and copy the
instance's resource allocations from the source node to the dest node:
Public bug reported:
https://review.openstack.org/#/c/560459/ in Rocky changed the libvirt
driver such that if the compute node provider is in a shared storage
provider aggregate relationship (in the same aggregate with a resource
provider that has DISK_GB inventory and the
To summarize, during post-copy on the source host, nova activates the
port binding on the dest host. During post live migration on the source
host, nova refreshes the ports in the instance info cache and then calls
unplug_vifs in the virt driver, but the vif type now shows up as
'unbound' - and
Adding cinder since cinder controls the volume metadata and nova
shouldn't be changing anything during live migration with respect to the
attached_mode.
** Also affects: cinder
Importance: Undecided
Status: New
** Tags added: live-migration volumes
Public bug reported:
Placement API microversion 1.26, which was added here:
https://review.openstack.org/#/c/564838/
is not documented in the API reference:
https://developer.openstack.org/api-ref/placement/
Looks like the description on the "reserved" parameter should be updated
here:
This isn't a nova bug, it's a bug in the zvm sdk, so it should be routed
through whatever issue tracker they have.
** Changed in: nova
Status: Confirmed => Invalid
** Also affects: python-zvm-sdk
Importance: Undecided
Status: New
** Changed in: python-zvm-sdk
Status: New
** Changed in: nova
Status: Fix Released => In Progress
** Tags added: rocky-rc-potential
https://bugs.launchpad.net/bugs/1781710
Title:
va-compute[30504]:
ERROR oslo_messaging.rpc.server MigrationPreCheckError: Migration pre-check
error: Migration is not supported for LVM backed instances
Jul 19 14:40:50.492097 ubuntu-xenial-ovh-gra1-829690 nova-compute[30504]:
ERROR oslo_messaging.rpc.server
** Affects: nova
Importance: Medium
Public bug reported:
It looks like at some point the 'driver-notes.*' entries in the feature
support matrix docs stopped working:
https://github.com/openstack/nova/blob/master/doc/source/user/support-matrix.ini#L142
I don't see it in queens or pike:
Public bug reported:
While testing the (partial) fix for bug 1469179:
https://review.openstack.org/#/c/580720/
Someone reported that nova hypervisor-statistics still reported
local_gb_used, even though 'openstack resource provider allocation show'
for the instance didn't report DISK_GB usage.
Public bug reported:
The ComputeNode.local_gb_used value is set in the
ResourceTracker._update_usage() method:
https://github.com/openstack/nova/blob/eb4f65a7951e921b1cd8d05713e144e72f2f254f/nova/compute/resource_tracker.py#L934
Based on:
1. root_gb in the flavor
2. any disk overhead from the
Reading the docs on the upgrade_levels config options, only
[upgrade_levels]/compute supports 'auto', so that's why it's blowing up
since you're setting it for everything.
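So a working config only uses 'auto' for compute, e.g.:

    [upgrade_levels]
    compute = auto

The other options in that group need to be left unset or pinned to an
explicit version/release name.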
** Changed in: nova
Status: New => Invalid
Looks like the correct analysis of the bug. We don't test the MC server
group driver or enable_new_services config option very well, or together
(obviously).
** Changed in: nova
Importance: Undecided => Medium
** Changed in: nova
Status: Confirmed => Triaged
** Tags added: memcache
Public bug reported:
We added the zVM driver in Rocky with limited capabilities:
https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky
So it needs to be documented in the feature support matrix docs:
https://docs.openstack.org/nova/latest/user/support-matrix.html
I believe in Rocky
Public bug reported:
Started seeing this recently which looks like a regression:
http://logs.openstack.org/44/56/14/check/neutron-tempest-multinode-full/dba40b9/job-output.txt.gz#_2018-07-13_19_53_15_275866
2018-07-13 19:53:15.275866 | primary | {1}
Public bug reported:
http://logs.openstack.org/44/56/14/gate/nova-tox-functional/75cad04/job-output.txt.gz#_2018-07-13_16_27_07_833394
Things are failing during scheduling:
2018-07-13 16:27:07.846188 | ubuntu-xenial | 2018-07-13 16:27:02,302 INFO
[nova.scheduler.host_manager] Host
igh
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova
Assignee: Surya Seetharaman (tssurya) => Matt Riedemann (mriedem)
.
** Affects: nova
Importance: High
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Affects: nova/pike
Importance: Undecided
Status: New
** Affects: nova/queens
Importance: Undecided
Status: New
** Tags: api cells
** Also affects: nova/pike
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/queens
Assignee:
I've confirmed with a devstack setup that this was fixed indirectly in
pike with change https://review.openstack.org/#/c/446053/.
** Changed in: nova
Status: In Progress => Fix Released
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => Dan Smith (danms)
Public bug reported:
This is semi-related to bug 1497253 but I found it while triaging that
bug to see if it was still an issue since Pike (I don't think it is).
If you run devstack with default superconductor mode configuration, and
configure nova-cpu.conf with:
[cinder]
cross_az_attach=False
OK looking at the stacktrace I see it's not the '_destroy_build_request'
call that's blowing up on reschedule, it's the up-call to get the
availability zone for the next chosen host from the list of alternates:
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Affects: nova/queens
Importance: Medium
Status: Confirmed
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/queen
ged in: nova/pike
Status: New => In Progress
** Changed in: nova/queens
Status: New => Fix Released
** Changed in: nova/ocata
Importance: Undecided => Medium
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/pike
Assignee: (unassigned) =&
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => Eric M Gonzalez (egrh3)
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undeci
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => In Progress
** Changed in: nova/pike
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/pike
Importance: Undecided => Medium
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/pike
Status: Fix Released => In Progress
https://bugs.launchpad.net/bugs/1724621
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova
Assignee: Balazs Gibizer (balazs-gibizer) => Matt Riedemann (mriedem)
** Changed in: nova
Status: Confirmed => Triaged
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Statu
'supported_perf_events': [],
'target_connect_addr': None},
'nova_object.name': 'LibvirtLiveMigrateData',
'nova_object.namespace': 'nova',
'nova_object.version': '1.8'}
** Affects: nova
Importance: Critical
Assignee: Matt Riedemann (mried
** No longer affects: nova/newton
https://bugs.launchpad.net/bugs/1678056
Title:
RequestSpec records are never deleted when destroying an instance
*** This bug is a duplicate of bug 1678056 ***
https://bugs.launchpad.net/bugs/1678056
This is already fixed, they are deleted during archive:
https://github.com/openstack/nova/commit/32fd58813f8247641a6b574b5f01528b29d48b76
** This bug has been marked a duplicate of bug 1678056
Public bug reported:
This was unintentionally regressed here:
https://review.openstack.org/#/c/548934/5/placement-api-ref/source/parameters.yaml@354
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Tags: api-ref placement
We can't backport the fix to pike because the cross-cell listing
framework is not in pike and would be a big backport.
https://review.openstack.org/#/q/topic:instance-list+(status:open+OR+status:merged)
** Changed in: nova/pike
Status: Confirmed => Won't Fix
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => Sylvain Bauza (sylvain-bauza)
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/queens
Importance: Undecided => Medium
Public bug reported:
- [x] This is a doc addition request.
The powervm driver also supports config drive since queens:
https://review.openstack.org/#/c/409404/
---
Release: 18.0.0.0b3.dev225 on 2018-06-28 13:03
SHA: de7055bfa937a0b3d26e5a02e9fc38650a0bfdb1
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Low
** Changed in: nova/queens
Assignee: (unassigned) => Matt Riedemann (mriedem)
Assignee: (unassigned) => Matt Riedemann (mriedem)
https://bugs.launchpad.net/bugs/1691602
Title:
live migration generates several network-changed events which lock up
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/queens
Importance: Undecided => Medium
Public bug reported:
Instance group member records used to be in the cell databases but were
moved to the API database in Ocata. Previously, when deleting an
instance in the cell, we'd also delete its instance group membership
record in the same cell database. Now that instance group membership
** Changed in: nova
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1777478
Public bug reported:
Due to this deprecation in oslotest 3.5.0:
https://github.com/openstack/oslotest/commit/cae8c8d51a94b891ce5b311a91d01b4264b296d2#diff-72a674ad74b628edbd0f73729c353b85R24
We get this warning quite a bit:
nova/test.py:323: DeprecationWarning: Using class 'MoxStubout'
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/pike
Public bug reported:
- [x] This is a doc addition request.
According to [1], 'geneve' should be listed as a tunneled network type.
[1]
https://review.openstack.org/#/c/564445/10/nova/network/neutronv2/api.py
---
Release: 11.0.6.dev14 on 2018-06-25 22:39
SHA:
Public bug reported:
https://github.com/openstack/nova/blob/b992b90d73ab745b41924db9c2173f6cecb9d85e/nova/cmd/manage.py#L859
That should be using the "version2" parameter since the "version"
parameter is deprecated.
** Affects: nova
Importance: Medium
Status: Triaged
** Tags: db
will also synchronize the optionally
configured placement database, so we might as well mention that.
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Tags: docs nova-manage