** Also affects: nova/rocky
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1804271
Title:
nova-api is broken in postgresql
--
Release: 18.1.0.dev876 on 2018-10-30 10:13:24
SHA: f6996903d2ef0fdb40135b506c83ed6517b28e19
Source:
https://git.openstack.org/cgit/openstack/nova/tree/api-ref/source/index.rst
URL: https://developer.openstack.org/api-ref/compute/
** Affects: nova
Importance: Medium
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/rocky
Assignee: (unassigned) => Lee Yarwood (lyarwood)
--
Yeah I recreated:
http://paste.openstack.org/show/737236/
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/queens
There is definitely some kind of infinite loop here:
Dec 13 14:43:21.412172 ubuntu-xenial-rax-ord-0001176665
devstack@g-api.service[6990]:
/opt/stack/new/glance/glance/quota/__init__.py:168: ResourceWarning: unclosed
Dec 13 14:43:21.412563 ubuntu-xenial-rax-ord-0001176665
** Changed in: nova
Status: New => Confirmed
** Also affects: devstack
Importance: Undecided
Status: New
** Changed in: devstack
Status: New => In Progress
** Changed in: devstack
Assignee: (unassigned) => Dr. Jens Harbott (j-harbott)
** Changed in: devstack
Public bug reported:
Seen here:
http://logs.openstack.org/43/619143/12/check/nova-
lvm/786180c/logs/screen-n-cpu.txt.gz?level=TRACE#_Dec_12_12_35_39_607002
Dec 12 12:35:39.607002 ubuntu-xenial-rax-iad-0001148680 nova-compute[29772]:
ERROR nova.compute.manager [None
It should also be noted that you only hit this if you configure nova
with [upgrade_levels]compute=auto.
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance:
Public bug reported:
Seen here:
http://logs.openstack.org/46/623246/3/check/nova-
multiattach/efa830b/logs/screen-n-cpu.txt.gz?#_Dec_12_00_01_13_179474
Dec 12 00:01:13.179474 ubuntu-xenial-inap-mtl01-0001136812 nova-
compute[29399]: WARNING nova.virt.libvirt.driver [None req-f43f3c35
signee: (unassigned) => Matt Riedemann (mriedem)
** No longer affects: nova
** Summary changed:
- swap multiattach volume intermittently fails when servers are on different
hosts
+ test_volume_swap_with_multiattach intermittently fails during cleanup
--
** No longer affects: cinder
--
https://bugs.launchpad.net/bugs/1807723
Title:
swap multiattach volume intermittently fails when servers are on
different
thub.com/openstack/oslo.context/blob/0daf01065d1d51694e06aaecb3dcf4dcc78710fe/oslo_context/context.py#L318
The fix might be here: https://review.openstack.org/#/c/564349/
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Tags: api logging polic
Looking at the swap volume flow in nova again, I think
https://github.com/openstack/nova/blob/ae3064b7a820ea02f7fc8a1aa4a41f35a06534f1/nova/compute/manager.py#L5798-L5806
is likely intentional since for volume1/server1 there is only a single
BDM record. It starts out with the old volume_id and
Public bug reported:
This is found from debugging the
tempest.api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap
failures in the (multinode) tempest-slow job here:
https://review.openstack.org/#/c/606981/
The failure is ultimately during teardown of the test class, it fails to
delete
Clark Boylan has a fix here: https://review.openstack.org/#/c/623597/
** Also affects: devstack
Importance: Undecided
Status: New
** No longer affects: glance
** Changed in: devstack
Assignee: (unassigned) => Clark Boylan (cboylan)
** Changed in: devstack
Status: New =>
Public bug reported:
http://logs.openstack.org/96/623596/1/check/grenade-py3/77e4c1b/job-
output.txt.gz#_2018-12-08_02_05_43_696072
2018-12-08 02:05:43.696072 | primary | {0}
tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image
[0.402494s] ... FAILED
2018-12-08
@Jeff, yeah it's gross, and it has taken way too long to deal with
(granted, I don't think anyone noticed/appreciated this regression until
~queens, about a year after it happened).
There has been discussion about how to make the aggregate filters with
the allocation_ratio metadata *work* again,
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova
Assignee: Balazs Gibizer (balazs-gibizer) => Matt Riedemann (mriedem)
** Changed in: nova/queens
Status:
Public bug reported:
There are two snapshot test failures in this job run under class
ImagesOneServerTestJSON:
http://logs.openstack.org/47/623247/2/check/nova-cells-v1/18338f0/job-
output.txt.gz
2018-12-06 23:40:45.318619 | primary | {1}
So, should this be marked invalid then?
** Changed in: nova
Status: In Progress => Invalid
--
https://bugs.launchpad.net/bugs/1607400
Title:
UEFI
Public bug reported:
- [x] This is a doc addition request.
I was looking at this nova config option:
https://docs.openstack.org/nova/rocky/configuration/config.html#DEFAULT.non_inheritable_image_properties
And noticed that those img_signature* properties are not documented in
the useful image
stack.org/#/c/615641/
The NOTE in the API is no longer true:
https://github.com/openstack/nova/blob/c9dca64fa64005e5bea327f06a7a3f4821ab72b1/nova/compute/api.py#L256
So the API likely just needs to add its own lazy-load behavior for that
client.
** Affects: nova
Importance: Medium
Assi
Mitaka is very old at this point. Is this still a problem on newer
releases, like Pike or Rocky?
** Changed in: nova
Status: New => Invalid
--
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
** Changed in: nova/pike
Status: New => In Progress
** Changed in: nova/queens
Isn't this covered by blueprint
https://blueprints.launchpad.net/nova/+spec/per-aggregate-scheduling-
weight ? If so, this is a feature request already covered by the
blueprint...
** Changed in: nova
Importance: Undecided => Wishlist
** Changed in: nova
Status: New => Invalid
** Tags
This is working as designed. The underlying issues that require CERN to
set max_placement_results to such a low number (10 when there are ~200
hosts per cell, and ~14K hosts total in the deployment) are what we need
to focus on, like bug 1805984 for example.
** Changed in: nova
Status: New
Hmm, this is an interesting point. Setting the protected=true flag on
the image seems like a good solution, except I don't see any kind of
force delete option for images. Would a user be able to change the
protected value from true to false if they really knew what they were
doing and wanted to
Public bug reported:
This CI job failed devstack setup because nova-api took longer than 60
seconds to start (it took 64 seconds):
http://logs.openstack.org/01/619701/5/gate/tempest-
slow/2bb461b/controller/logs/screen-n-api.txt.gz
Looking at what could be taking time in there, it was noticed
Public bug reported:
In the post-test hook in the nova-live-migration job where we test
evacuate, we're doing the following:
1. create an image-backed and volume-backed server on the subnode
2. stop libvirtd on the local node
3. run evacuate to see it fail because nova-compute is disabled on the
** Changed in: nova
Importance: Undecided => Medium
** Tags added: api cells
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status:
Public bug reported:
Seen here:
http://logs.openstack.org/22/606122/7/check/nova-tox-
functional/1f3126b/testr_results.html.gz
There are different failures, but the tests were introduced with this
change:
https://review.openstack.org/#/c/591733/
For example, in
Public bug reported:
Seen here:
http://logs.openstack.org/22/606122/7/check/openstack-tox-
py27/d70a4d5/testr_results.html.gz
ft1.7:
nova.tests.unit.virt.libvirt.test_driver.LibvirtSnapshotTests.test_raw_with_rbd_clone_failure_does_cold_snapshot_StringException:
pythonlogging:'': {{{
It should be relatively easy to write a functional regression test
similar to
https://review.openstack.org/#/c/545123/5/nova/tests/functional/wsgi/test_servers.py
but for this scenario.
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => Medium
**
Public bug reported:
- [x] This doc is inaccurate in this way:
The pci_passthrough 'manage flavors' link here is wrong:
https://docs.openstack.org/nova/latest/admin/pci-
passthrough.html#configure-a-flavor-controller
It should probably link to this doc:
Public bug reported:
- [x] I have a fix to the document that I can paste below including
example: input and output.
The link to the neutron admin guide here is broken:
https://docs.openstack.org/nova/latest/admin/pci-passthrough.html
#configure-nova-scheduler-controller
Looks like bad
Thanks for this very nicely detailed bug report, it sounds like you're
OK with your solution and this should ultimately be resolved with the
hpet blueprint in stein which you're already aware of. Given that, we'll
likely close this as part of that blueprint since I'm not sure what kind
of
This is definitely not a nova issue. From reading some other forums, it
is a change in mysql 8. It would probably be better to ask in
ask.openstack.org (forum), or check other solution sites via Google. Or
read the mysql docs:
https://dev.mysql.com/doc/refman/8.0/en/caching-sha2-pluggable-
ova/pike
Importance: Undecided => Medium
** Changed in: nova
Importance: Medium => Low
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/pike
Importance: Medium
It looks like the nova-consoleauth service is not running since the RPC
call is timing out - can you confirm?
** Tags added: console spice
** Changed in: nova
Status: New => Invalid
--
Public bug reported:
- [x] This doc is inaccurate in this way:
This bug is specifically about the weights section here:
https://docs.openstack.org/nova/rocky/admin/configuration/schedulers.html#weights
There are a few issues:
1. Mentioning cells in here is not qualified as only for cells v1,
*** This bug is a duplicate of bug 1718455 ***
https://bugs.launchpad.net/bugs/1718455
This was fixed a while ago; I need to find the duplicate bug.
** This bug has been marked a duplicate of bug 1718455
[pike] Nova host disable and Live Migrate all instances fail.
--
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
**
Public bug reported:
As a result of this change:
https://review.openstack.org/#/c/591658/
The nova-api logs now traceback InstanceNotFound errors when polling a
server to be deleted, which is an expected situation and we shouldn't be
logging errors in the API logs for that:
** Changed in: nova
Status: In Progress => Won't Fix
--
https://bugs.launchpad.net/bugs/1627597
Title:
Nova instance backup with rotation 0 creates
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/rocky
Assignee: (unassigned) => s10 (vlad-esten)
--
Looks like some kind of misconfiguration. You're trying to create a
flavor and nova-api is trying to call keystone to validate the token
using the keystone auth token middleware, and that is failing because
something is misconfigured. Check the install docs and config docs:
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => iain MacDonnell (imacdonn)
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Fix Released
** Changed in: nova/rocky
Importance: Undecided => Medium
*
Public bug reported:
Seen here:
http://logs.openstack.org/40/617040/1/check/nova-
next/0a82b26/logs/screen-q-agt.txt.gz?level=TRACE
Nov 10 03:51:03.120446 ubuntu-xenial-ovh-bhs1-461070
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command [-]
Error executing command:
There is an online data migration:
https://review.openstack.org/#/c/377138/62/nova/objects/resource_provider.py@917
But it's only when listing/showing resource providers. The allocation
candidates code must be getting the providers and relying on the
root_provider_id using sqla model objects
Looks like the failure is here?
http://logs.openstack.org/26/615126/5/check/tempest-
full/69d913a/controller/logs/screen-n-super-
cond.txt.gz#_Nov_06_19_48_37_408999
The test is failing to unshelve the server because unshelve fails during
scheduling b/c of that placement failure. I saw this the
Hmm, nova-api is failing to RPC cast to the nova-conductor service. What
is the value of the [conductor]/topic configuration option in nova.conf?
It should be 'conductor'. Do you have it set to 'nova'?
https://docs.openstack.org/nova/queens/configuration/config.html#conductor.topic
Otherwise
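For reference, the option being asked about would sit in nova.conf like this (a minimal illustrative fragment; 'conductor' is the expected value per the comment above):

```ini
[conductor]
# Expected value; an unexpected override like 'nova' would break the
# nova-api -> nova-conductor RPC cast described above.
topic = conductor
```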
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova
Importance: Low => Medium
--
Maybe the check should try 8192, then 4096, then 512, and only consider
it not supported if all three fail.
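The suggested fallback could be sketched roughly like this; `probe` and `is_supported` are illustrative names standing in for the real capability check, not nova's actual API:

```python
def is_supported(probe, sizes=(8192, 4096, 512)):
    """Try each candidate size in turn; treat the feature as
    unsupported only if every size fails."""
    return any(probe(size) for size in sizes)
```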
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => High
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects:
Public bug reported:
This is probably due to not having a metavar defined for the --before
option, but the help output is weird:
stack@stein:~/nova$ nova-manage db purge -h
/usr/local/lib/python2.7/dist-packages/psycopg2/__init__.py:144: UserWarning:
The psycopg2 wheel package will be renamed
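The metavar point can be demonstrated with a minimal argparse sketch (the parser name and `<date>` placeholder are illustrative, not nova-manage's actual code): without an explicit metavar, argparse derives the placeholder from the option name, which is what makes the help output look weird.

```python
import argparse

# With an explicit metavar the usage line reads "--before <date>"
# instead of an auto-derived placeholder.
parser = argparse.ArgumentParser(prog='db-purge-sketch')
parser.add_argument('--before', metavar='<date>',
                    help='Purge deleted rows created before this date')
help_text = parser.format_help()
```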
Public bug reported:
The API reference sample for the GET /os-migrations API with the 2.59
microversion:
https://developer.openstack.org/api-ref/compute/?expanded=list-
migrations-detail#list-migrations
Shows an incorrect sort order. The returned migrations are sorted by
[created_at, id] in
Well, is the nova-consoleauth service running? Looks like it's timing
out trying to communicate with the nova-consoleauth service, which you
might have not installed/configured due to some confusion over the
deprecation of that service in the Rocky release, see:
** Tags removed: fault
** Tags added: fault-injection
** Changed in: nova
Status: Triaged => Opinion
--
https://bugs.launchpad.net/bugs/1800508
Title:
This is a CLI issue. You're using python-novaclient 11.0.0 and the
corresponding remove_fixed_ip code in novaclient was removed in 10.0.0:
https://github.com/openstack/python-
novaclient/commit/01fb16533bf562f39fe822bc12b9cc34b8580359#diff-
23708944688abb26fc151d28d327c721
So you need
Public bug reported:
The API sample for the GET /servers/{server_id} response is clearly not
correct:
https://github.com/openstack/nova/blob/f13debf2f0e5377b9d0b0bbd9422c6a79d2cc611/doc/api_samples/servers/v2.66
/server-get-resp.json
Since a GET on a single server does not return a list. So
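For contrast, the response body for a GET on a single server wraps one object rather than a list; the expected shape is roughly (fields elided, illustrative):

```json
{
    "server": {
        "id": "...",
        "status": "ACTIVE"
    }
}
```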
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Importance: Undecided => Low
** Changed in: nova/rocky
Importance: Undecided => Low
** Changed in: nova/rocky
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Assignee: (unassigned) => Lee Yarwood (lyarwood)
** Changed in: nova/rocky
Importance: Undecided => Low
--
Public bug reported:
- [x] This is a doc addition request.
The flavors doc mentions traits but does not mention anything about
granular resource request groupings as described in the spec:
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved
Public bug reported:
- [x] This is a doc addition request.
The nova flavor user guide doesn't mention the ability to specify custom
resource classes on a flavor, or how to override standard resource
classes, introduced in this spec in Pike:
Public bug reported:
- [x] This is a doc addition request.
The document should be updated for the reshape provider tree changes
made in Stein:
https://specs.openstack.org/openstack/nova-specs/specs/stein/approved
/reshape-provider-tree.html#changes-to-update-provider-tree
Specifically the
FWIW I don't think
https://github.com/openstack/nova/commit/2b52cde565d542c03f004b48ee9c1a6a25f5b7cd
really changed how
https://github.com/openstack/nova/commit/f02b3800051234ecc14f3117d5987b1a8ef75877
could have broken anything. _update_vif_xml is called from the source
host using migrate data
Public bug reported:
http://logs.openstack.org/48/613348/1/check/nova-tox-functional/16b5d01
/job-output.txt.gz#_2018-10-29_10_14_46_484703
2018-10-29 10:14:46.550516 | ubuntu-xenial | 2018-10-29 10:14:45,457 INFO
[nova.api.openstack.requestlog] 127.0.0.1 "POST
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Triaged
** Changed in: nova/queens
Status: New => Triaged
** Changed in: nova/pike
Importance: Undecided => Medium
** Changed in: nova/rocky
Status: New =>
** Tags added: api
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
**
Check the [neutron] section of the nova configuration file used for your
nova-conductor service, because that is what is failing. Is that
configured? And if so, what are the values? Are those consistent with
what is in the service catalog for the networking service (neutron)?
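The options in question live in the [neutron] section of the nova.conf used by nova-conductor; a typical shape looks like this (all values here are illustrative placeholders for your deployment):

```ini
[neutron]
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
username = neutron
password = <secret>
region_name = RegionOne
```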
** Changed in: nova
This doesn't seem to be a nova problem, but a problem with your
database:
2018-10-11 04:47:49.827 12628 ERROR nova.api.metadata.handler
[req-e8a61425-56dc-4dd7-bbca-05ae913f24c0 - - - - -] Failed to get metadata for
instance id: b9f8fe03-a78b-43f3-bc1f-68ceaff3f978
2018-10-11 04:47:49.827 12628
I believe the bug is right here:
https://github.com/openstack/nova/blob/835faf3f40af6b0e07c585690982a997d6a2ac47/nova/compute/provider_tree.py#L128
That is just comparing the keys in the dict, not the values, so:
>>> old
{'a': 1, 'b': 2}
>>> new
{'a': 1, 'b': 2}
>>> new['b'] = 3
>>> old
{'a':
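A minimal sketch of that pitfall: a key-only comparison reports the dicts as equal even though a value changed.

```python
# Two independent dicts with the same contents.
old = {'a': 1, 'b': 2}
new = dict(old)
new['b'] = 3

keys_equal = set(old) == set(new)   # True: the keys match either way
values_equal = old == new           # False: the value of 'b' differs
```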
I guess this shouldn't be a problem in queens because the resource
tracker didn't use that provider tree to report back the inventory
changes:
https://github.com/openstack/nova/blob/stable/queens/nova/compute/resource_tracker.py#L883
That was done in Rocky:
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Tags: compute config
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
Public bug reported:
This is based on code inspection, but I was wondering what would happen
if a user tried to resize a baremetal instance, which isn't supported by
the ironic virt driver.
It is possible to resize a stopped instance. If a user tried to resize a
stopped baremetal instance, I
Public bug reported:
The API reference for the createImage server action:
https://developer.openstack.org/api-ref/compute/?expanded=create-server-
back-up-createbackup-action-detail,create-image-createimage-action-
detail#create-image-createimage-action
Does not mention anything about
** Tags added: compute live-migration volumes
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
** Tags added: api db metadata performance
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Triaged
** Changed in: nova/rocky
Status: New => Triaged
** Changed in: nova
Assignee: Stephen Finucane (stephenfinucane) => Lee Yarwood (lyarwood)
** Changed
This isn't really supported. You should be configuring
[upgrade_levels]/compute to pike:
https://docs.openstack.org/nova/latest/configuration/config.html#upgrade_levels.compute
Until you get everything upgraded to Queens, at which point you can
remove the RPC version pin.
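That pin would look like this in nova.conf until the upgrade completes:

```ini
[upgrade_levels]
# Pin compute RPC to the oldest release still in the deployment;
# remove once everything is on Queens.
compute = pike
```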
** Changed in: nova
Yeah looking at this page in queens you see the nova-consoleauth service
being installed and started:
https://docs.openstack.org/nova/queens/install/controller-install-
ubuntu.html
But not in Rocky:
https://docs.openstack.org/nova/rocky/install/controller-install-
ubuntu.html
That is because
** Changed in: nova/rocky
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1789654
Title:
placement allocation_ratio
One suggestion in IRC today was that we could add a "nova-status upgrade
check" which iterates the cell DBs looking to see if there are any non-
deleted/disabled nova-consoleauth services table records and if so,
check to see if there are no console_auth_tokens entries in that DB and
if
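The two facts that suggested upgrade check would gather per cell could be sketched roughly as follows; the data shapes and names are illustrative, and what the check would conclude from them is cut off in the comment above:

```python
def consoleauth_facts(cell):
    """Return (has live nova-consoleauth services, has console_auth_tokens)
    for one cell DB, per the suggested check."""
    live = [s for s in cell['services']
            if s['binary'] == 'nova-consoleauth'
            and not s['deleted'] and not s['disabled']]
    return bool(live), bool(cell['console_auth_tokens'])
```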
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => High
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => High
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/rocky
Status: New => Triaged
** Tags added: cells
--
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/queens
0 and qemu 2.11).
Looking at this change:
https://review.openstack.org/#/c/270891/4/nova/virt/libvirt/driver.py
The one issue there is we're using a stale guest object. We should try
to get a new instance of the guest object from the host and if the guest
is gone we should get an Instance
Have you tested this, or are you just guessing that the libvirt driver
in nova isn't doing the right thing? Because multiattach disks are
always set to cache mode "none":
https://github.com/openstack/nova/blob/20bc0136d0665bafdcd379f19389a0a5ea7bf310/nova/virt/libvirt/driver.py#L423-L426
#
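In the generated libvirt domain XML that forced cache mode surfaces roughly as this fragment (illustrative; other attributes elided):

```xml
<disk type='block' device='disk'>
  <!-- cache='none' is forced for multiattach volumes -->
  <driver name='qemu' type='raw' cache='none'/>
</disk>
```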
Importance: High
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Affects: nova/queens
Importance: High
Status: Confirmed
** Affects: nova/rocky
Importance: High
Status: Confirmed
** Tags: live-migration
** Also affects: nova/queens
Importance
or
downloading (HTTP 400) (Request-ID: req-c7739c6e-b110-4b0e-a0ee-
62f3e530205e)
We shouldn't log an error for that since it doesn't require operator
intervention.
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Affects: nova/queens
ium
** Changed in: nova/pike
Importance: Undecided => Medium
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/rocky
Importance: Undecided => Medium
** Tags added: libvirt
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
Public bug reported:
Seen here:
http://logs.openstack.org/31/606031/4/check/nova-live-
migration/9d106bb/logs/screen-n-cpu.txt.gz#_Oct_10_15_27_01_355106
Oct 10 15:27:01.355106 ubuntu-xenial-ovh-gra1-0002837152 nova-compute[15353]:
ERROR nova.virt.libvirt.driver [None
> Triaged
** Changed in: nova/pike
Importance: Undecided => High
** Changed in: nova/rocky
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/queens
Importance: Undecided => High
** Changed in: nova/rocky
Status: New => Triaged
** Changed in:
/blob/f63fd14975cda83d24121b010cbedfc3a7e5ff1f/nova/compute/resource_tracker.py#L1466
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Affects: nova/rocky
Importance: Medium
Status: Triaged
** Tags: compute resour volumes
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Medium
--
2536892
nova-conductor[24583]: ERROR nova.conductor.manager 'to message ID %s' %
msg_id)
Oct 02 23:29:15.161412 ubuntu-xenial-limestone-regionone-0002536892
nova-conductor[24583]: ERROR nova.conductor.manager MessagingTimeout: Timed out
waiting for a reply to message ID 832f7daa5e764687921a
Well, did you run "nova-manage api_db sync" when you set up the system?
** Changed in: nova
Status: New => Invalid
--
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova
Status: New => Won't Fix
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Assignee: (unassigned) => Aditya Vaja (wolverine-av)
** Changed in:
I believe that code was all reverted:
https://review.openstack.org/#/q/Ibf2b5eeafd962e93ae4ab6290015d58c33024132
Marking this as invalid.
** Changed in: nova
Status: New => Invalid
--
Sounds like neutron auth configuration needs to be investigated.
** Changed in: nova
Status: New => Incomplete
** Changed in: nova
Status: Incomplete => Invalid
--
This has been around since Juno: https://review.openstack.org/#/c/98828/
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: