Public bug reported:
When returning port details, trunk_details.sub_ports should contain:
* segmentation_id
* segmentation_type
* port_id
* mac_address
This is the case when GETting a single port, but when listing ports
mac_address is missing.
In the following:
* Parent port:
Public bug reported:
Two functions used in error cleanup in _do_build_and_run_instance,
_cleanup_allocated_networks and _set_instance_obj_error_state, call an
unguarded instance.save(). The problem with this is that the instance
object may have been in an unclean state before the build exception
This is not fixed. We've just had a report where we appear to be hitting
the race reported in review here:
https://review.opendev.org/#/c/571410/7/nova/virt/libvirt/driver.py
** Changed in: nova
Status: Fix Released => In Progress
** Changed in: nova/stein
Status: Fix Committed =>
Public bug reported:
A customer is hitting an issue with symptoms identical to bug 1045152
(from 2012). Specifically, we are frequently seeing the compute host
being marked down. From log correlation, we can see that when this
occurs the relevant compute is always in the middle of executing
Yep. The actual error thrown was "Unable to detach from guest transient
domain.", which is now "Unable to detach the device from the live
config." in master. That RetryDecorator makes this function a whole lot
harder to read, but with your explanation it seems that the detach was
actually timing
Public bug reported:
1020162 ERROR root [req-46fbc6c8-de2c-4afb-9f24-9d75947c9a3c
9ccddbb72e2d42b6ab1a31ad48ea21fb 86bea4eb057b412a98402a1b7e1d9222 - - -]
Original exception being dropped: ['Traceback (most recent call
last):\n', ' File "/usr/lib/python2.7/site-
Public bug reported:
A customer reported that they were getting DB corruption if they called
shelve twice in quick succession on the same instance. This should be
prevented by the guard in nova.API.shelve, which does:
instance.task_state = task_states.SHELVING
Public bug reported:
The DatabaseAtVersion fixture starts the global TransactionContext, but
doesn't set the guard to configure() used by the Database fixture.
Consequently, if Database runs after DatabaseAtVersion in the same
worker, the subsequent fixture will fail. An example ordering which
Public bug reported:
db_version() attempts to initialise versioning if the db is not
versioned. However, it doesn't consider concurrency, so we can get
errors if multiple watchers try to get the db version before the db is
initialised. We are seeing this in practice during tripleo deployments
in
: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803961
Title:
Nova doesn't call
Public bug reported:
If the compute api service does a 'local delete', it only emits legacy
notifications when the operation starts and ends. If the delete goes to
a compute host, the compute host emits both legacy and versioned
notifications. This is both inconsistent, and a gap in versioned
Public bug reported:
_rollback_live_migration doesn't restore connection_info.
** Affects: nova
Importance: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
Public bug reported:
ComputeManager.pre_live_migration fails to clean up volume attachments
if the call to driver.pre_live_migration() fails. There's a try block in
there to clean up attachments, but its scope isn't large enough. The
result is a volume in a perpetual attaching state.
** Affects:
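A sketch of the wider cleanup scope the report suggests. The function and parameter names here are hypothetical illustrations, not Nova's real signatures:

```python
def pre_live_migration(connect_volumes, plug_vifs, attachments,
                       delete_attachment):
    # Hypothetical sketch, not Nova's actual code: the cleanup block must
    # cover every step that can fail after the volume attachments exist;
    # too narrow a scope leaves volumes stuck in the 'attaching' state.
    try:
        connect_volumes()
        plug_vifs()  # a failure here must also trigger attachment cleanup
    except Exception:
        for attachment_id in attachments:
            delete_attachment(attachment_id)
        raise
```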
The Nova fix should be to not call plug_vifs at all during ironic driver
initialization. It probably isn't necessary for 'non-local' hypervisors
in general, so presumably also Power, Hyper-V, and VMware.
** Also affects: nova
Importance: Undecided
Status: New
Public bug reported:
We currently permit the following:
* Create multiattach volumes a and b
* Create servers 1 and 2
* Attach volume a to servers 1 and 2
* swap_volume(server 1, volume a, volume b)
In fact, we have a tempest test which tests exactly this sequence:
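A guard that would reject the sequence above might look like the following minimal sketch. check_swap_allowed and its arguments are hypothetical illustrations, not Nova's actual API:

```python
def check_swap_allowed(attached_server_ids, server_id):
    # Hypothetical guard: swap_volume rewrites the disk for a single
    # server, so swapping a multiattach volume that is still attached
    # elsewhere would corrupt the other servers' view of the data.
    others = [s for s in attached_server_ids if s != server_id]
    if others:
        raise ValueError(
            "volume still attached to servers %s; refusing swap" % others)
```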
Public bug reported:
Originally reported in RH bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1584315
Reproduced on OSP12 (Pike).
After resizing an instance but before confirm, update_available_resource
will fail on the source compute due to bug 1774249. If nova compute is
restarted at
Public bug reported:
Originally reported in RH Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1584315
Tested on OSP12 (Pike), but appears to be still present on master.
Should only occur if nova compute is configured to use local file
instance storage.
Create instance A on compute X
Public bug reported:
CAVEAT: The following is only from code inspection. I have not
reproduced the issue.
During instance delete, we call:
driver.cleanup():
    foreach volume:
        _disconnect_volume():
            if _should_disconnect_target():
                disconnect_volume()
There is no
, there is at least one cinder
driver (delliscsi) which doesn't. This results in a failure to
disconnect on the source host post migration.
** Affects: nova
Importance: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
Public bug reported:
Change I8a705114d47384fcd00955d4a4f204072fed57c2 (written by me... sigh)
addressed a bug which prevented live migration to a target host with
overcommitted disk when made with microversion <2.25. It achieved this,
but the fix is still not correct. We now do:
if
Public bug reported:
When live migrating a BFV instance with a config disk, the API currently
requires block migration to be specified due to the local storage
requirement. This doesn't make sense on a number of levels.
Before calling migrateToURI3() in this case, the libvirt driver filters
out
After some brief discussion in #openstack-nova I've moved this to
oslo.log. The issue here appears to be that we spawn multiple separate
conductor processes writing to the same nova-conductor.log file. We
don't want to stop doing this, as it would break people.
It seems that by default python
Public bug reported:
I'm looking at conductor logs generated by a customer running RH OSP 10
(Newton). The logs appear to be corrupt in a manner I'd expect to see if
2 processes were writing to the same log file simultaneously. For
example:
===
2017-09-14 15:54:39.689 120626 ERROR
Public bug reported:
This is from code inspection only.
ComputeManager.resize_instance does:
with self._error_out_instance_on_exception(context, instance,
                                           quotas=quotas):
    ...stuff...
    self.compute_rpcapi.finish_resize(context, instance,
Public bug reported:
ML post describing the issue here:
http://lists.openstack.org/pipermail/openstack-
dev/2017-April/115989.html
User was resizing an instance whose glance image had been deleted. An
ssh failure occurred in finish_migration, which runs on the destination,
attempting to copy
Public bug reported:
GlanceImageServiceV2.download() ensures its downloaded file is closed
before releasing for use by an external qemu process, but it doesn't do
an fdatasync(). This means that the downloaded file may be temporarily
in the host kernel's cache rather than on disk, which means
Public bug reported:
Note: this is exclusively from code inspection.
delete_instance_metadata and update_instance_metadata in ComputeManager
are both guarded by:
@check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.PAUSED,
                                vm_states.SUSPENDED,
Importance: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1662483
Title:
detach_volume races
will not call _create_ephemeral if the target
already exists. Because the Lvm backend must create the disk first, this
is never called.
** Affects: nova
Importance: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
Public bug reported:
This bug is purely from code inspection; I haven't replicated it on a
running system.
Change I46b5658efafe558dd6b28c9910fb8fde830adec0 added a resize check
that the backing file exists before checking its size. Unfortunately we
forgot that Rbd overrides get_disk_size(path),
Public bug reported:
snapshot_volume_backed() in compute.API does not set a task_state during
execution. However, in essence it does:
if vm_state == ACTIVE:
    quiesce()
snapshot()
if vm_state == ACTIVE:
    unquiesce()
There is no exclusion here, though, which means a user could do:
quiesce()
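The race can be illustrated with a deterministic simulation (all names here are hypothetical): if a concurrent request stops the instance between the two vm_state checks, the guest is quiesced but never unquiesced:

```python
class FakeInstance:
    def __init__(self):
        self.vm_state = "active"
        self.quiesced = False

def snapshot_volume_backed(instance, during_snapshot=None):
    # Simplified model of the sequence above; during_snapshot stands in
    # for a concurrent API call landing between the two checks.
    if instance.vm_state == "active":
        instance.quiesced = True      # quiesce()
    if during_snapshot:
        during_snapshot(instance)
    # ... snapshot() ...
    if instance.vm_state == "active":
        instance.quiesced = False     # unquiesce()
```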
Public bug reported:
CAVEAT: This is from code inspection only.
Change I931421ea moved the following snippet of code:
if CONF.libvirt.virt_type == 'uml':
    libvirt_utils.chown(image('disk').path, 'root')
from the bottom of _create_image to the top. The problem is, the new
Public bug reported:
Nova resize does not resize ephemeral disks. I have tested this with the
default qcow2 backend, but I expect it to be true for all backends.
I have created 2 flavors:
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 1 |
| disk
Public bug reported:
The libvirt driver creates flat disks as sparse by default. However, it
always returns over_committed_disk_size=0 for flat disks in
_get_instance_disk_info(). This incorrect data ends up being reported to
the scheduler in the libvirt driver's get_available_resource() via
This was intended to be a low hanging fruit bug, but it doesn't meet the
criteria. Closing, as it has no other purpose.
** Changed in: nova
Status: Incomplete => Invalid
Public bug reported:
The nova.virt.driver 'interface' defines a get_instance_disk_info
method, which is called by compute manager to get disk info during live
migration to get the source hypervisor's internal representation of disk
info and pass it directly to the target hypervisor over rpc. To
: Matthew Booth (mbooth-9)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1581382
Title:
nova migration-list --status returns no results
Status
Public bug reported:
In finish_migration(), after resize the driver does:
if info['type'] == 'raw' and CONF.use_cow_images:
    self._disk_raw_to_qcow2(info['path'])
This ensures that if use_cow_images is set to True, all raw disks will
be converted to qcow2. This
Public bug reported:
The libvirt driver caches the output of mkfs and mkswap in the image
cache. One consequence of this is that all ephemeral disks of a
particular size and format on a single compute will have the same UUID.
The same applies to swap disks. These identifiers are intended to be
Public bug reported:
The libvirt driver uses common backing files for ephemeral and swap
disks. These are generated on the local compute host by running mkfs or
mkswap as appropriate. The output of these files for a particular size
and format is stored in the image cache on the compute host which
Importance: Undecided
Assignee: Matthew Booth (mbooth-9)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1543181
Title:
Raw and qcow2 disks
** Changed in: nova
Status: Fix Released => New
--
https://bugs.launchpad.net/bugs/1392527
Title:
[OSSA 2015-017] Deleting instance while resize
Public bug reported:
When doing a live migration of an instance using ceph for shared
storage, if the migration fails then the instance directory will not be
cleaned up on the destination host. The next attempt to do the live
migration will fail with DestinationDiskExists, but will cleanup the
** Also affects: oslo.vmware
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1416000
Title:
VMware: write error lost while
Public bug reported:
There is a race in instance_create between fetching security groups
(returned by _security_group_get_by_names) and adding them to the
instance. We have no guarantee that they have not been deleted in the
meantime.
The result is currently that the
Public bug reported:
_search_ds in the fake driver does:
path = file.lstrip(dname).split('/')
The intention is to remove a prefix of dname from the beginning of file,
but this actually removes all instances of all characters in dname from
the left of file.
** Affects: nova
Importance:
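The problem is that str.lstrip treats its argument as a set of characters, not as a prefix string. A minimal demonstration with a safe alternative (strip_prefix and the paths are illustrative):

```python
def strip_prefix(path, prefix):
    # Remove an exact leading prefix. str.lstrip instead strips any run
    # of *characters* drawn from its argument, in whatever order they
    # happen to occur.
    return path[len(prefix):] if path.startswith(prefix) else path

# lstrip eats into the next path component whenever that component's
# first characters also appear in dname ('fake-ds/' is illustrative):
assert "fake-ds/files/test.log".lstrip("fake-ds/") == "iles/test.log"
assert strip_prefix("fake-ds/files/test.log", "fake-ds/") == "files/test.log"
```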
Public bug reported:
Change I1046576c448704841ae8e1800b8390e947b0d457 uses
ExtensionManager.RegisterExtension, which requires the additional
permission Extension.Register on the vSphere server. Unfortunately we
missed the DocImpact in review. This needs to be added to the relevant
docs.
The
Public bug reported:
Change I70fd7d3ee06040d6ce49d93a4becd9cbfdd71f78 removed passwords from
VNC hosts. This change is fine because we proxy the VNC connection and
do access control at the proxy, but it assumes that ESX hosts are not
externally routable.
In a non-OpenStack VMware deployment,
Public bug reported:
Full logs here: http://logs.openstack.org/02/124402/3/check/gate-nova-
python26/1d3512b/
Seen:
2014-09-26 15:20:46.795 | ExpectedMethodCallsError: Verify: Expected
methods never called:
2014-09-26 15:20:46.796 | 0.
Public bug reported:
Tempest failure: http://logs.openstack.org/57/122757/1/check/check-
tempest-dsvm-neutron-full/a08fb08/
2014-09-19 18:48:47.388 | 2014-09-19 18:15:35,926 6578 INFO
[tempest.common.rest_client] Request
Public bug reported:
Change I8f6a857b88659ee30b4aa1a25ac52d7e01156a68 added typed consoles,
and updated drivers to use them. However, when it touched the VMware
driver, it modified get_vnc_console in VMwareVMOps, but not in
VMwareVCVMOps, which is the one which is actually used.
Incidentally,
Public bug reported:
The VMware fake session keeps an internal list of created files and
directories. Directories can be created explicitly, e.g. by
MakeDirectory(createParentDirectories=True), but the fake session will
not recognise these.
** Affects: nova
Importance: Undecided
Public bug reported:
When booting a VMware instance from an image, guestId is taken from the
vmware_ostype property in glance. If this value is invalid, spawn() will
fail with the error message:
VMwareDriverException: A specified parameter was not correct.
As there are many parameters to
Public bug reported:
Details in http://logs.openstack.org/46/104146/15/check/check-tempest-
dsvm-full/d235389/, specifically in n-cond logs:
2014-08-12 14:58:57.099 ERROR nova.quota
[req-7efe48be-f5b4-4343-898a-5b4b32694530 AggregatesAdminTestJSON-719157131
AggregatesAdminTestJSON-1908648657]
Public bug reported:
In general[1] it is incorrect to use the value of a config variable at
import time, because although the config variable may have been
registered, its value will not have been loaded. The result will always
be the default value, regardless of the contents of the relevant
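A minimal sketch of the failure mode, using a stand-in Conf class rather than oslo.config itself:

```python
class Conf:
    # Stand-in for a config object whose real values are loaded some
    # time after the module is imported (as with oslo.config).
    my_flag = "default"

# Wrong: evaluated once at import, before config files have been parsed,
# so it permanently captures the default.
FLAG_AT_IMPORT = Conf.my_flag

def flag_at_call_time():
    # Right: read the value when it is actually needed.
    return Conf.my_flag
```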
Public bug reported:
If you suspend a rescued instance, resume returns it to the ACTIVE state
rather than the RESCUED state.
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
N.B. This is based purely on code inspection. A reasonable resolution
would be to point out that I've misunderstood something and it's
actually fine. I'm filing this bug because it's potentially a subtle
data corruptor, and I'd like more eyes on it.
The snapshot code in
Public bug reported:
Extending a disk during spawn races, which can result in failure. It is
possible to hit this bug by launching a large number of instances of an
image which isn't already cached, simultaneously. Some of them will race
to extend the cached image, ultimately resulting in an
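One common fix for this kind of race is to serialize extends of the same cached image behind a per-path lock. The sketch below is illustrative only, not Nova's actual image-cache code:

```python
import threading

_locks = {}
_locks_guard = threading.Lock()

def extend_cached_image(path, size, do_extend):
    # Illustrative fix: take a per-image lock so racing instance builds
    # cannot both resize (and corrupt) the same cached backing file.
    with _locks_guard:
        lock = _locks.setdefault(path, threading.Lock())
    with lock:
        do_extend(path, size)
```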
Public bug reported:
Spurious gate failure: http://logs.openstack.org/65/99065/4/check/gate-
nova-docs/af27af8/console.html
Logs are full of:
2014-06-23 09:55:32.057 |
/home/jenkins/workspace/gate-nova-docs/doc/source/devref/api.rst:39: WARNING:
autodoc: failed to import module
Public bug reported:
If fixed IP allocation fails, for example because nova's network
interfaces got renamed after a reboot, nova will loop continuously
trying, and failing, to create a new instance. For every attempted spawn
the instance will end up with an additional fixed IP allocated to it.
Public bug reported:
The VMware driver doesn't pass volume authentication information to the
hba when attaching an iscsi volume. Consequently, adding an iscsi volume
which requires authentication will always fail.
** Affects: nova
Importance: Undecided
Status: New
** Tags: vmware
Public bug reported:
Take, for example, resize_instance(). In manager.py, we assert that the
instance is in RESIZE_PREP state with:
instance.save(expected_task_state=task_states.RESIZE_PREP)
This should mean that the first resize will succeed, and any subsequent
will fail. However, the
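The save(expected_task_state=...) guard is effectively a compare-and-swap on the instance's DB row. A minimal model (FakeInstance is hypothetical; the real implementation performs the check atomically in the database) showing why only the first resize attempt should succeed:

```python
class UnexpectedTaskStateError(Exception):
    pass

class FakeInstance:
    # Minimal model of the save(expected_task_state=...) guard.
    def __init__(self, task_state):
        self.task_state = task_state
        self._db_task_state = task_state

    def save(self, expected_task_state=None):
        # Compare-and-swap: only succeed if the stored state still
        # matches what the caller expects, then store the new state.
        if (expected_task_state is not None
                and self._db_task_state != expected_task_state):
            raise UnexpectedTaskStateError(self._db_task_state)
        self._db_task_state = self.task_state
```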
Public bug reported:
libvirt/driver.py passes partition=None to disk.inject_data() for any
instance with kernel_id set. partition=None means that inject_data will
attempt to mount the whole image, i.e. assuming there is no partition
table. While this may be true for EC2, it is not safe to assume
Public bug reported:
A bug in VMwareAPISession.__del__() prevents the session being logged
out when the session object is garbage collected.
** Affects: nova
Importance: Medium
Status: New
** Tags: havana-backport-potential vmware
Public bug reported:
The behaviour of spawn() in the vmwareapi driver wrt images and block
device mappings is currently as follows:
* If there are any block device mappings, images are ignored
* If there are any block device mappings, the last becomes the root device
  and all others are ignored