** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova
Status: New => Won't Fix
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Assignee: (unassigned) => Aditya Vaja (wolverine-av)
** Changed in: nova/queens
I believe that code was all reverted:
https://review.openstack.org/#/q/Ibf2b5eeafd962e93ae4ab6290015d58c33024132
Marking this as invalid.
** Changed in: nova
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
Sounds like neutron auth configuration needs to be investigated.
** Changed in: nova
Status: New => Incomplete
** Changed in: nova
Status: Incomplete => Invalid
This has been around since Juno: https://review.openstack.org/#/c/98828/
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Chan
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Changed in: nova/ocata
Status: New => Confirmed
** Changed in: nova/ocata
Importance: Undecided => High
*** This bug is a duplicate of bug 1708127 ***
https://bugs.launchpad.net/bugs/1708127
** This bug has been marked a duplicate of bug 1708127
openstack server list command crashes
** No longer affects: nova/ocata
https://bugs.launchpad.net/bugs/1643444
Title:
TenantUsagesTestJSON.test_list_usage_all_tenants 500 from Db layer
Status
** Changed in: nova
Status: Confirmed => Fix Released
** No longer affects: nova/ocata
https://bugs.launchpad.net/bugs/1644248
Title:
Nova incorre
I've closed this bug since the immediate issue is fixed. In order to
remove the workaround during move operations from change
I34b1d99a9d0d2aca80f094a79ec1656abaf762dc we'd have to add an online
data migration, but that could be done later separately from this bug.
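For context, nova's online data migrations follow a batched pattern; a minimal sketch with illustrative names (not the real migration code):

```python
# Minimal sketch of nova's online-data-migration batch pattern.
# Names and data shapes are illustrative, not the real nova code:
# each call processes at most max_count rows and returns a
# (found, done) pair so `nova-manage db online_data_migrations`
# knows when nothing is left to migrate.
def migrate_records(rows, max_count):
    todo = [r for r in rows if r.get('new_field') is None]
    batch = todo[:max_count]
    for row in batch:
        row['new_field'] = 'backfilled'  # compute the real value here
    return len(batch), len(batch)
```

The tool keeps calling the migration until it returns (0, 0), so a later migration for this workaround would slot into the same loop.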
** Changed in: nova
Statu
** Changed in: nova/ocata
Status: In Progress => Fix Released
** Changed in: nova/ocata
Assignee: (unassigned) => György Szombathelyi (gyurco)
** No longer affects: nova/ocata
https://bugs.launchpad.net/bugs/1404867
Title:
Volume remains in-use status, if instance booted from volume is
deleted
** No longer affects: nova/ocata
https://bugs.launchpad.net/bugs/1408527
Title:
Delete instance without block_device_mapping record in database after
sc
Medium
** Changed in: nova/ocata
Assignee: (unassigned) => Lee Yarwood (lyarwood)
** Changed in: nova/pike
Assignee: (unassigned) => Lee Yarwood (lyarwood)
** Changed in: nova/queens
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Changed in: nova/queens
Assign
read_deleted='yes'
on the context.
This is similar to bug 1745977 which was fixed with change:
https://review.openstack.org/#/q/Ide6cc5bb1fce2c9aea9fa3efdf940e8308cd9ed0
But that only handled loading of generic attributes, in that case
system_metadata.
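For context, nova's RequestContext carries a read_deleted flag ('no' | 'yes' | 'only') that controls whether DB queries see soft-deleted rows. A toy illustration of the semantics, not the real implementation:

```python
# Toy illustration of RequestContext.read_deleted semantics
# ('no' | 'yes' | 'only'); rows carry a truthy `deleted` column
# when soft-deleted, as in nova's DB layer.
def filter_by_read_deleted(rows, read_deleted='no'):
    if read_deleted == 'yes':
        return list(rows)                          # live and deleted
    if read_deleted == 'only':
        return [r for r in rows if r['deleted']]   # deleted only
    return [r for r in rows if not r['deleted']]   # live only (default)
```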
** Affects: nova
Importance: High
Assi
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Changed in: nova/ocata
Status: New => Confirmed
** Changed in: nova/ocata
Importance: Undecided => Medium
Public bug reported:
Test times out waiting for a server being rebuilt to go to ACTIVE
status:
http://logs.openstack.org/06/604906/5/check/nova-multiattach/bf3ea47
/job-output.txt.gz#_2018-09-27_15_34_39_538772
Details: (ServerActionsTestJSON:test_rebuild_server) Server 94b4ab8e-
47d6-43be-870c-
(3:18:02 PM) slaweq: mriedem: it's not a race
(3:18:30 PM) slaweq: mriedem: I saw it already before, it's some issue with
neutron that it process request very slow, see:
http://logs.openstack.org/70/605270/1/gate/tempest-full-py3/f18bf28/controller/logs/screen-q-svc.txt.gz?level=INFO#_Sep_27_01_1
** Also affects: neutron
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1794870
Title:
NetworkNotFound failures on network test teardown
Stat
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
** No longer affects: nova
https://bugs.launchpad.net/bugs/1794558
Title:
Tempest test AttachInterfacesUnderV243Test results in
FixedIpNotFoundForSpecif
This is the port delete request:
http://logs.openstack.org/98/604898/2/check/nova-
next/df58e8a/logs/screen-q-svc.txt.gz#_Sep_26_00_20_10_283226
Sep 26 00:20:10.283226 ubuntu-xenial-ovh-bhs1-0002284194 neutron-
server[24409]: DEBUG neutron.plugins.ml2.plugin [None req-8e9ab2d9-25b2
-452e-8fe1-f66
Public bug reported:
This new Tempest change was recently merged:
https://review.openstack.org/#/c/587734/
And results in a traceback in the n-cpu logs:
http://logs.openstack.org/98/604898/2/check/nova-
next/df58e8a/logs/screen-n-cpu.txt.gz?level=TRACE#_Sep_26_00_20_14_150429
Sep 26 00:20:14.1
Assignee: (unassigned) => Matt Riedemann (mriedem)
https://bugs.launchpad.net/bugs/1793747
Title:
Fails to boot instance using Blazar flavor if compute host na
(3:05:42 PM) mriedem: that populate_uuids migration was added in queens, and
looking at a grenade run from queens, we should see bdms getting migrated to
have a uuid, and i'm not seeing any results in the table for a grenade run on
queens
(3:05:42 PM) mriedem:
http://logs.openstack.org/48/60444
** Changed in: nova
Status: In Progress => Invalid
https://bugs.launchpad.net/bugs/1788564
Title:
Increase set_target_cell performance by refactor
** Changed in: nova
Status: In Progress => Fix Released
** Changed in: nova/queens
Status: In Progress => Fix Released
Since there is no top-level consumers resource in the placement API I'm
not sure how much internal machinery we want to document in the
reference. This came from a comment I had in a nova change that
mentioned it - and I'm mostly concerned about nova making assumptions
about how the internally undo
https://review.openstack.org/#/q/Iee4b9bbf412adfdc6fdc62ea3429fb960d6ac2a2
was just released in pike and queens as well.
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
This doesn't seem like a nova problem - there is a problem with the DB
connection configuration (maybe not scaled properly for your
environment?).
2018-08-31 04:04:11.328 58009 ERROR nova.api.openstack.wsgi
OperationalError: (pymysql.err.OperationalError) (1040, u'Too many
connections') (Backgroun
I don't understand - the point of the --verbose option is to print the
created cell uuid, which can be used later if needed. The docs look
correct to me.
** Changed in: nova
Status: New => Invalid
This isn't a bug. Detaching the root volume/device from a volume-backed
server is not supported at this time. That feature is being proposed for
the Stein release though:
https://review.openstack.org/#/c/600628/
** Changed in: nova
Status: New => Invalid
Installing nova-consoleauth was removed from the docs in Rocky as part
of this change:
https://review.openstack.org/#/c/565367/
Need to talk to melwitt about this.
** Tags added: consoleauth doc
** Changed in: nova
Status: New => Confirmed
** Changed in: nova
Importance: Undecided =
Looks like the [neutron] section of nova.conf used by the nova-api
service is not configured properly.
** Changed in: nova
Status: New => Invalid
This tells me that your configuration for neutron is not correct or not
being used by the nova-api service:
2018-09-12 18:05:02.596 2520 ERROR nova.api.openstack.wsgi File
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1566, in
create_pci_requests_for_sriov_ports
2018-09-
This is a feature request and I'm not really sure we should be building
more functionality into python-novaclient if our long-term direction for
the CLI is to deprecate the nova CLI and use OSC.
** Changed in: nova
Importance: Undecided => Wishlist
** Changed in: nova
Status: New => Opinion
This was the nova fix: https://review.openstack.org/#/c/597421/
** Changed in: nova
Status: New => Fix Released
https://bugs.launchpad.net/bugs/17933
OK I think I see, _get_marker_for_migrate_instances returns the marker
because there is still a request_specs table entry with the marker
instance_uuid (because we didn't used to clean up request specs on db
archive/purge - but now we do). So when listing instances we passed a
marker to an instance
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/pike
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Low
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
Public bug reported:
On nova-compute startup with the ironic driver, the _sync_power_states
periodic can fail and trace with a VirtDriverNotReady error if ironic-
api is not yet running. This is normal, and we shouldn't trace for it.
http://logs.openstack.org/27/602127/2/check/ironic-tempest-dsvm
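A sketch of the intended behavior (illustrative, not the actual nova manager code): catch VirtDriverNotReady in the periodic and skip the run quietly rather than logging a traceback.

```python
# Hedged sketch: treat VirtDriverNotReady during the _sync_power_states
# periodic as an expected startup condition and retry next run instead
# of tracing. Function and log shapes are illustrative.
class VirtDriverNotReady(Exception):
    """Raised while the virt driver (e.g. ironic-api) is still starting."""

def sync_power_states(get_states, log):
    try:
        return get_states()
    except VirtDriverNotReady:
        log.append('virt driver not ready; retrying next periodic run')
        return None
```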
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Importance: Undecided => Medium
** Changed in: nova
Public bug reported:
Seen in Ironic CI jobs since Rocky when version discovery was added to
the ironic virt driver in nova-compute:
http://logs.openstack.org/27/602127/2/check/ironic-tempest-dsvm-ipa-
wholedisk-bios-agent_ipmitool-
tinyipa/4238d0f/controller/logs/screen-n-cpu.txt.gz?level=TRACE#_
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Impo
This hasn't shown up in a long time so marking it invalid now.
** Changed in: nova
Status: Confirmed => Invalid
https://bugs.launchpad.net/bugs/17099
The related issue is that the scheduler was not filtering out deleted
compute node records when pulling them from the cell DB:
https://github.com/openstack/nova/blob/d87852ae6a1987b6faa3cb5851f9758b47ef4636/nova/objects/compute_node.py#L443
Because ^ that query doesn't filter out deleted records.
Are you sure you're stopping the nova-compute service before deleting
the actual service record via the API?
https://developer.openstack.org/api-ref/compute/#delete-compute-service
Otherwise the ResourceTracker in the compute process will recreate the
compute node.
The Service.destroy is called
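The missing filter described above can be sketched like this (illustrative, not the actual ComputeNodeList query): exclude soft-deleted rows, which nova marks with a non-zero `deleted` column.

```python
# Hedged sketch of the fix: skip soft-deleted compute node records
# when pulling them from the cell DB. Row shape is illustrative.
def get_active_compute_nodes(rows):
    return [cn for cn in rows if not cn.get('deleted')]
```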
We are getting some deprecation warnings for old test fixture usage:
2018-09-19 14:21:14.238558 | ubuntu-xenial |
b"/home/zuul/src/git.openstack.org/openstack/nova/.tox/py35/lib/python3.5/site-packages/oslo_db/sqlalchemy/test_base.py:175:
DeprecationWarning: Using class 'MySQLOpportunisticFi
Public bug reported:
Seen in nova and cinder unit test runs in the gate:
http://logs.openstack.org/03/602403/1/gate/openstack-tox-
py35/b4c9214/testr_results.html.gz
http://logs.openstack.org/94/603194/1/gate/openstack-tox-
py35/e37d161/testr_results.html.gz
Years ago we updated the timeout sca
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Low
** Changed in: nova/queens
Assignee: (unassigned) => Elod Illes (elod-illes)
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/rocky
Assignee: (unassigned) => Matt Riedemann (mriedem)
devstack-gate (for legacy CI jobs) defaults to qemu:
http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree
/devstack-vm-gate-wrap.sh#n365
And the devstack-base playbook uses qemu as well:
http://git.openstack.org/cgit/openstack-
dev/devstack/tree/.zuul.yaml#n117
These are the neutron
Public bug reported:
This came up in the upgrades SIG room at the Stein PTG:
https://etherpad.openstack.org/p/upgrade-sig-ptg-stein
The request is that failing to archive/purge old deleted records from
the database can cause upgrades to have major downtime because of the
old records (heat and ke
Public bug reported:
Seen here:
http://logs.openstack.org/60/600260/1/gate/nova-cells-v1/0735337/job-
output.txt.gz#_2018-09-08_22_39_02_29
2018-09-08 22:39:02.29 | primary | Captured traceback:
2018-09-08 22:39:02.292315 | primary | ~~~
2018-09-08 22:39:02.292466 | prima
I'm going to mark this as invalid. If you want some help debugging your
code, you can post it to gerrit and get some help there to identify what
is wrong in your change, but it looks like you're trying to send the
SchedulerReportClient over RPC and that object is not serializable.
** Changed in: n
Yeah this is working as designed. Cold migration doesn't change the
flavor, so if you modify the flavor extra specs that change isn't
reflected in the instance when it moves. It's generally better to create
a new flavor with the new extra specs rather than modify old ones that
existing instances ar
/nova/compute/manager.py#L3131
Which is wasteful since we can just pass the already-retrieved
image_meta from the first method to the second.
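The cleanup amounts to fetching once and threading the result through; a sketch with illustrative names (not the real compute manager methods):

```python
# Hedged sketch of the rebuild cleanup: look up the image metadata a
# single time and pass it to the second step, rather than having each
# step call back to the image service.
def rebuild_instance(image_api, image_id):
    image_meta = image_api(image_id)   # single lookup
    return spawn(image_meta)           # reused here, no second lookup

def spawn(image_meta):
    return image_meta['id']
```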
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Tags: performance rebuild
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Medium
I'm going to close this since it's not a bug, it's just required for
this blueprint:
https://blueprints.launchpad.net/nova/+spec/extend-in-use-rbd-volumes
We can track the changes with the blueprint.
** Changed in: nova
Status: In Progress => Invalid
** Tags added: placement
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Chan
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Assignee: (unassigned) => Elod Illes (elod-illes)
** Changed in: nova/rocky
Importance: Undecided => Low
** Changed in: nova/rocky
Status: New => In Progress
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Assignee: (unassigned) => sahid (sahid-ferdjaoui)
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Importance: Undecided => Medium
Looks like this was the regression:
https://review.openstack.org/#/c/541435/
Because before that the placement_context_manager was configured in the
sqlalchemy DB API code. Now it's only configured in a few select places,
one of which is not the online_data_migrations code.
This is also noticeab
Public bug reported:
This was reported in IRC:
nova-manage cell_v2 list_cells
An error has occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.5/site-packages/nova/cmd/manage.py", line 2303, in
main
ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib64/python3.5/site-package
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/rocky
Importance: Undecided => Medium
** Also affects: nova/pike
Importance: Undecided
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
The easy fix is to null out the instance.availability_zone here:
https://github.com/openstack/nova/blob/bb14337c30df0c17bc1dadc00d5a5500ae2dc4b7/nova/compute/manager.py#L574
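A toy illustration of that easy fix (not the real manager code): clear the cached availability_zone alongside host and node when the instance is shelved offloaded, so it gets recomputed on unshelve.

```python
# Hedged sketch of the suggested fix at the shelve-offload step:
# null out availability_zone along with host/node. Dict stands in
# for the Instance object.
def shelve_offload(instance):
    instance['host'] = None
    instance['node'] = None
    instance['availability_zone'] = None  # the proposed addition
    return instance
```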
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
** Also affects: nova/pike
Importa
Public bug reported:
When a server is shelved (and offloaded from the compute host), the
instance.host and instance.node values are cleared because it's no
longer on any host:
https://github.com/openstack/nova/blob/bb14337c30df0c17bc1dadc00d5a5500ae2dc4b7/nova/compute/manager.py#L5007
However, t
Assignee: (unassigned) => Matt Riedemann (mriedem)
https://bugs.launchpad.net/bugs/1785318
Title:
evacuate rebuild claim will not use any image_meta so
Public bug reported:
This is a long-standing known issue from at least Pike when the nova
FilterScheduler started using placement to create allocations during
server create and move (e.g. resize) operations.
In Pike, resize to the same host resulted in allocations against the
compute node provide
Public bug reported:
As a result of this recent change in stein:
https://review.openstack.org/#/c/584598/21/nova/compute/resource_tracker.py@1281
We now get this error in the n-cpu logs on a fresh startup after the
compute node record is created in the database but before the resource
provider i
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Low
** Changed in: nova/rocky
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Medium
Logs for the failed debian rocky run:
http://logs.openstack.org/75/597175/1/check/puppet-openstack-
integration-4-scenario001-tempest-debian-stable-luminous/fd38fcf/logs/
This was also reported by the xenserver CI:
http://lists.openstack.org/pipermail/openstack-
dev/2018-August/133896.html
My g
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Triaged
** Changed in: nova/rocky
Importance: Undecided => Medium
Public bug reported:
This is a follow up to bug 1739325 which fixed the scenario that the
flavor.disabled field was missing from the embedded instance flavor. The
same case occurs for the is_public field, so we should default that to
True if it's not set in the embedded instance.flavor.
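A sketch of the fix described above (illustrative, not the real nova object code): when loading the embedded flavor, default fields that old records may lack, `disabled` to False (the bug 1739325 fix) and now `is_public` to True.

```python
# Hedged sketch: backfill defaults for fields missing from flavors
# embedded in old instance records, without clobbering stored values.
def load_embedded_flavor(primitive):
    flavor = dict(primitive)
    flavor.setdefault('disabled', False)
    flavor.setdefault('is_public', True)  # the follow-up fix
    return flavor
```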
** Affect
Public bug reported:
Seen here:
http://logs.openstack.org/71/594571/2/gate/nova-tox-functional-
py35/fd2d9ac/testr_results.html.gz
2018-08-24 16:36:47,192 ERROR [nova.compute.manager] Instance failed to spawn
Traceback (most recent call last):
File
"/home/zuul/src/git.openstack.org/openstack/
Public bug reported:
This change:
https://github.com/openstack/nova/commit/459ca56de2366aea53efc9ad3295fdf4ddcd452c
Added code to the setup_instance_group flow to get the instance group
fresh so we had the latest hosts for members of the group.
Then change:
https://github.com/openstack/nova/co
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova
Status: New => Confirmed
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Importance: Undecided => Medium
Public bug reported:
Seen here:
http://logs.openstack.org/98/591898/3/check/tempest-slow/c480e82/job-
output.txt.gz#_2018-08-21_23_20_11_337095
2018-08-21 23:20:11.337095 | controller | {0}
tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_c
https://docs.openstack.org/nova/pike/configuration/config.html#filter_scheduler.baremetal_enabled_filters
** Changed in: nova
Status: New => Won't Fix
** Tags added: ironic scheduler
** Also affects: nova/rocky
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1788176
Title:
placement functional tests can vari
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Triaged
** Summary changed:
- nova-manage db online_data_migrations hangs with deleted instances
+ nova-manage db online_data_migrations hangs with instances with no host set
** Summary changed:
- nova-manage db online_data_migrations hangs with instances with no host set
+ nova-manage db online_data_mi
I believe during review we intentionally did not add support for server
tags when creating a server with cells v1 because cells v1 is
deprecated:
https://review.openstack.org/#/c/459593/26/nova/conductor/manager.py@532
This isn't the first new feature in the API that we've omitted for cells
v1 be
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => Vladyslav Drok (vdrok)
** Changed in: nova/rocky
Status: New => Triaged
** Changed in: nova/rocky
Importance: Undecided => High
Public bug reported:
This is based on some performance and scale testing done by Huawei,
reported in this dev ML thread:
http://lists.openstack.org/pipermail/openstack-
dev/2018-August/133363.html
In that scenario, they have 10 cells with 1 instances in each cell.
They then run through a few
This isn't really a bug. We can just remove it. We don't require bugs
for removing deprecated code.
** Changed in: nova
Status: In Progress => Invalid
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Importance: Undecided => Medium
** Changed in: nova/rocky
Status: New => Confirmed
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Confirmed
** Changed in: nova/qu
Assignee: (unassigned) => Matt Riedemann (mriedem)
https://bugs.launchpad.net/bugs/1752824
Title:
VMware: error while get serial console log
Status in OpenS
*** This bug is a duplicate of bug 1752824 ***
https://bugs.launchpad.net/bugs/1752824
Already fixed (in rocky). I'll propose a backport to stable/queens.
** Tags added: privsep vmware
** This bug has been marked a duplicate of bug 1752824
VMware: error while get serial console log
** Also affects: nova/rocky
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1746863
Title:
scheduler affinity doesn't work wit
** Changed in: nova/rocky
Status: Fix Released => In Progress
** Tags added: rocky-rc-potential
https://bugs.launchpad.net/bugs/1786318
Title:
Vol
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova
Assignee: Chris Dent (cdent) => Matt Riedemann (mriedem)
** Also affects: nova/rocky
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Also affects: nova/rocky
Importance: Medium
Assignee: Lee Yarwood (lyarwood)
Status: Fix Released
https://bugs.launchpad.net/bugs/1786318