Public bug reported:
For example:
2015-06-10 09:35:54.675 | Captured traceback:
2015-06-10 09:35:54.675 | ~~~
2015-06-10 09:35:54.675 | Traceback (most recent call last):
File "tempest/api/compute/admin/test_live_migration.py", line 116, in te
Public bug reported:
This code:
https://github.com/openstack/nova/blob/master/nova/api/ec2/__init__.py#L270-L288
uses requests directly to talk to keystone, which means that the ssl
option configuration is nonstandard. We should use the keystoneclient
directly for consistency.
** Affects: nova
** Changed in: nova
Status: Confirmed => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403431
Title:
nova list --all-tenants should display tenant name/
Public bug reported:
Default devstack install leaves
force_config_drive = always
in /etc/nova/nova.conf (slightly contradicting the docs:
http://docs.openstack.org/user-guide/content/enable_config_drive.html
which expects ' = true')
An instance booted on this system does not have a confi
Confirmed:
ubuntu@cont:~/devstack$ nova list --all-tenants
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+---
Public bug reported:
ComputeCapabilitiesFilter code is convoluted. There are at least 3
different ways it can fail, and 2 of them don't provide any output at
all. The one which does logs at debug (should be info), and does not
actually provide enough info to diagnose the problem.
The code aroun
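What the report asks for - every failure path logging enough context to diagnose the mismatch - might look like the following sketch (function name, key names, and log wording are hypothetical, not the actual ComputeCapabilitiesFilter code):

```python
import logging

logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)

def host_passes(host_caps, extra_specs):
    """Hypothetical capabilities check: every failure path reports
    which key failed and what values were compared, at INFO level."""
    for key, required in extra_specs.items():
        actual = host_caps.get(key)
        if actual is None:
            LOG.info("host lacks capability %r (required %r)", key, required)
            return False
        if actual != required:
            LOG.info("capability %r mismatch: host has %r, required %r",
                     key, actual, required)
            return False
    return True
```

Each early return names the failing key and both values, so none of the failure modes is silent.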
Thanks for the reminder, Joe - this is not really a bug, and the patch
was unnecessary, as Andrea Rosa points out.
** Changed in: nova
Status: Incomplete => Invalid
--
Public bug reported:
I am able to set a flavor-key but not unset it. devstack
sha1=fdf1cffbd5d2a7b47d5bdadbc0755fcb2ff6d52f
ubuntu@d8:~/devstack$ nova help flavor-key
usage: nova flavor-key <flavor> <action> <key=value> [<key=value> ...]
Set or unset extra_spec for a flavor.
Positional arguments:
<flavor>  Name or ID of flavor
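The set/unset semantics the help text describes can be sketched against a plain mapping (function names hypothetical); the report is that the unset path fails in practice even though the intended behavior is this simple:

```python
def set_extra_spec(extra_specs, key, value):
    """'set' action: add or overwrite the key."""
    extra_specs[key] = value

def unset_extra_spec(extra_specs, key):
    """'unset' action: remove the key if present; removing a
    missing key is a no-op rather than an error."""
    extra_specs.pop(key, None)
```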
** Changed in: neutron
Status: In Progress => Invalid
--
https://bugs.launchpad.net/bugs/1324934
Title:
Neutron port leak when connection is dropped during port create
** Description changed:
If an instance is deleted after it has a neutron port allocated but
before it has reached ACTIVE state, sometimes the port is not deleted
but the instance is. These orphan ports count toward the user's quota
so if it happens enough times the user will be unable to
Public bug reported:
If an instance is deleted after it has a neutron port allocated but
before it has reached ACTIVE state, sometimes the port is not deleted
but the instance is. These orphan ports count toward the user's quota
so if it happens enough times the user will be unable to boot instan
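A sketch of the fix direction - sweep the instance's ports on delete regardless of whether it reached ACTIVE - using a stand-in client (all names hypothetical, not nova's or neutron's actual code):

```python
class FakeNeutron:
    """Stand-in for a neutron client, keyed by device_id."""
    def __init__(self):
        self.ports = {}  # port_id -> device_id

    def create_port(self, device_id):
        port_id = "port-%d" % (len(self.ports) + 1)
        self.ports[port_id] = device_id
        return port_id

    def list_ports(self, device_id):
        return [p for p, d in self.ports.items() if d == device_id]

    def delete_port(self, port_id):
        self.ports.pop(port_id, None)

def delete_instance(neutron, instance_id):
    """On delete, always sweep ports tagged with the instance's
    device_id, even if the instance never reached ACTIVE, so no
    orphan ports are left counting against the user's quota."""
    for port_id in neutron.list_ports(instance_id):
        neutron.delete_port(port_id)
```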
** Changed in: nova
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1274317
Title:
heal_instance_info_cache_interval config is
Public bug reported:
Currently when trying to issue a hard reboot to an instance, the logic
in nova/compute/api.py says:
    if (reboot_type == 'HARD' and
            instance['task_state'] == task_states.REBOOTING_HARD):
        raise exception.InstanceInvalidState
This means there's no user-facing way
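The quoted check reduces to a minimal, runnable sketch (exception class and task-state constant stubbed locally, not nova's actual definitions) showing why a second HARD reboot is rejected while one is already in flight:

```python
class InstanceInvalidState(Exception):
    pass

REBOOTING_HARD = 'rebooting_hard'

def check_reboot(reboot_type, task_state):
    """Mirror of the quoted check: a HARD reboot is refused while
    a hard reboot is already in progress, with no override path."""
    if reboot_type == 'HARD' and task_state == REBOOTING_HARD:
        raise InstanceInvalidState()
```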
Public bug reported:
The command:
nova boot --flavor $FLAV --key_name $KEY --image $IMG --meta foo=bar
meta1
should inject a file into `/meta.js` with content `{"foo":"bar"}`.
Currently in devstack this doesn't work.
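For reference, the expected file content is just the compact JSON encoding of the --meta pairs; a one-line sketch (helper name hypothetical):

```python
import json

def meta_js_content(meta):
    # Compact separators match the expected {"foo":"bar"} form.
    return json.dumps(meta, separators=(',', ':'))
```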
From nova compute logs:
2014-02-24 13:09:57 57751 DEBUG nova.virt.dis
Public bug reported:
Every time an item is fetched from the memory cache, the whole cache is
scanned for expired items:
https://github.com/openstack/nova/blob/master/nova/openstack/common/memorycache.py#L63-L67
This is not the right place to expire items - a large cache can become
slow. There s
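One direction the report points at - expiring only the requested key on access, instead of scanning the whole cache on every fetch - can be sketched like this (class name and API hypothetical, not nova's memorycache):

```python
import time

class LazyExpiryCache:
    """Sketch: check only the requested key's deadline on get,
    so a fetch is O(1) regardless of cache size."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.time() + ttl if ttl is not None else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.time() >= expires_at:
            del self._store[key]  # expire this key only
            return None
        return value
```

Stale entries for keys that are never read again still linger; a production version would pair this with a periodic or size-triggered sweep.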
Public bug reported:
Currently there is inconsistency in what fields are present in some
instance-lifecycle notifications. `compute.instance.update` includes
fields `audit_period_beginning` and `audit_period_end` but
`compute.instance.{create,delete}.{start,end}` event types do not, and
we are fo
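One way to make the payloads consistent is a shared helper that attaches the same audit-period fields to every lifecycle event; a sketch (helper name and the month-long period boundary are assumptions, only the two field names come from the report):

```python
from datetime import datetime

def add_audit_period(payload, now=None):
    """Hypothetical helper: attach the audit-period fields that
    compute.instance.update carries to any lifecycle payload, so
    create/delete start/end events expose the same keys."""
    now = now or datetime.utcnow()
    # Assumption: audit periods begin at the start of the month.
    begin = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    payload['audit_period_beginning'] = str(begin)
    payload['audit_period_end'] = str(now)
    return payload
```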