** Changed in: horizon
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438133
Title:
django-admin.py collectstatic failing to find
I would rather mark it as invalid now that the design changed.
I really would like to keep fix committed to match with Gerrit changes
** Changed in: nova
Status: Fix Committed => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is
Public bug reported:
This causes the spawn to fail
** Affects: nova
Importance: High
Assignee: Gary Kotton (garyk)
Status: In Progress
** Changed in: nova
Importance: Undecided => High
--
You received this bug notification because you are a member of Yahoo!
Engineering
Public bug reported:
An error was occurring in a devstack setup with nova version:
commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
Merge "instance termination with update_dns_entries set fails"
A volume-type encrypted with CryptsetupEncryptor was being used. A
volume was created using this
Public bug reported:
Overview:
When making a list images request and passing in a datetime for the created_at
property, no results are returned.
Steps to reproduce:
1) Create an image, note the image's created_at property
2) Perform a list images request passing in the created_at property
Public bug reported:
When running the following tempest tests an error occurs in n-cpu:
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
test_encrypted_cinder_volumes_luks
This occurred when using devstack with nova at:
HEAD is
Public bug reported:
Hello,
I'm testing the latest kilo packages and it looks like there are some missing
packages.
The following packages have unmet dependencies:
 python-django-horizon : Depends: python-xstatic-term.js but it is not installable
                         Depends:
Public bug reported:
I've set up a lab where live migration can occur in block mode
It seems that if I leave the default config, block live-migration fails;
I can see that the port is left in BUILD state after the failure, but
the VM is still running on the source host.
** Affects: neutron
Thanks for filing this one, Anthony. This is basically user error and
invalid.
If you have multipath-tools installed and running on a system, you
really need to enable it in nova.conf and cinder.conf (if this is a
cinder controller as well).
invalid bug.
** Changed in: nova
Status:
Public bug reported:
We need to show the admin password field if it is enabled for the hypervisor type.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L558
Public bug reported:
My environment:
1. nova 2014.1
2. novaclient 2.17.0
I checked the source:
nova.compute.manager.py:
def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
                     injected_files, new_pass, orig_sys_metadata,
                     bdms,
Decision made in #openstack-neutron to not do a shim for small changes
like permitting mac address update. See
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-04-02.log at 2015-04-02T20:39:10.
** Changed in: neutron
Status: In Progress => Invalid
--
Public bug reported:
When setting a volume, allow setting the volume device name if
supported.
** Affects: horizon
Importance: Undecided
Assignee: Travis Tripp (travis-tripp)
Status: New
** Changed in: horizon
Assignee: (unassigned) => Travis Tripp (travis-tripp)
--
You
@Soumit
I tried on my local env by setting 'max_local_block_devices' to 0 in
nova.conf and got the below error in Tempest:
ERROR (BadRequest): Block Device Mapping is Invalid: You specified more local devices than the limit allows (HTTP 400) (Request-ID: req-3ef100c7-b5c5-4a2d-a5da-8344726336e2)
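The limit check that produces this error can be sketched as follows. This is a hypothetical simplification, not nova's actual validation code; the function and exception names are assumptions for illustration:

```python
# Sketch of the local-device limit check nova applies when validating a
# server create request (names are illustrative, not nova's real API).

class BadRequest(Exception):
    pass

def check_local_device_limit(block_device_mappings, max_local_block_devices):
    """Reject a request that asks for more local disks than allowed.

    A negative limit means "unlimited"; a limit of 0 forbids any local
    disk, which is why an image-backed boot fails with HTTP 400 as in
    the Tempest output above.
    """
    if max_local_block_devices < 0:
        return  # unlimited
    local = [bdm for bdm in block_device_mappings
             if bdm.get("destination_type") == "local"]
    if len(local) > max_local_block_devices:
        raise BadRequest(
            "Block Device Mapping is Invalid: You specified more local "
            "devices than the limit allows")
```

With `max_local_block_devices = 0`, any boot that needs a local root disk trips the check, matching the behaviour reported above.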
Public bug reported:
This has been seen occasionally in the past week:
Public bug reported:
I create a service as follows; it succeeds:
curl -H "X-Auth_Token: fc1629a543c64be18937ba8a1296468b" -H "Content-type: application/json" -d '{"service": {"description": "test_service", "name": "name_service", "type": "test_servce"}}' http://localhost:35357/v3/services | python -mjson.tool
Public bug reported:
Delete domain by domain_id as follows:
curl -H X-Auth_Token:$ADMIN -X DELETE
http://localhost:35357/v3/domains/b3ee886f22d544a7bfc0426edac0543f
then get a list of role assignments :
curl -H X-Auth_Token:55fc47665d344104b801751784292ccb
Public bug reported:
We don't pass migrate_data to the live migration recover method, so if live
migration fails, the instance's image will be deleted in the cleanup action if
the instance is using a shared block device like ceph rbd.
** Affects: nova
Importance: Undecided
Assignee: Jiajun Liu
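The failure mode described above can be sketched in isolation. The function and field names below are hypothetical, not nova's actual API; the point is that the rollback path can only spare a shared disk if migrate_data reaches it:

```python
# Minimal sketch (assumed names, not nova's real code): the live
# migration rollback must know whether the disk is on shared block
# storage, and it can only know that if migrate_data is passed through.

def should_destroy_disks(migrate_data=None):
    """Return True if the local instance disk should be destroyed
    after a failed live migration.

    When migrate_data is dropped (None), the code cannot tell that the
    instance lives on shared block storage (e.g. ceph rbd), so it
    wrongly deletes the image both hosts share.
    """
    is_shared = bool(migrate_data and
                     migrate_data.get("is_shared_block_storage"))
    return not is_shared
```

With migrate_data forwarded, the shared-storage case is detected and the disks survive; with it dropped, the default path destroys them, which is the bug reported here.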
Neutron doesn't support this feature
** Changed in: nova
Status: Incomplete => Invalid
** Also affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute
** Changed in: anvil
Assignee: (unassigned) => Alexander Schmidt (alexs-h)
** Changed in: anvil
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
** Changed in: nova
Status: Incomplete => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255511
Title:
Timeout in
Public bug reported:
I have ansible playbook which utilize nova_compute module
http://docs.ansible.com/nova_compute_module.html
and after that we use wait_for module:
http://docs.ansible.com/wait_for_module.html
which waits until port 22 is ready. However next task tries to ssh on that VM,
Public bug reported:
When glance registry is deployed in trusted-auth mode, it doesn't
authenticate[0] but populates the context based on the identity headers
sent[1]. When the context is populated it is elevated to admin context,
required for scrubber[2], based on the roles sent in identity
Public bug reported:
This is appearing in some logs upstream:
http://logs.openstack.org/73/170073/1/experimental/check-tempest-dsvm-neutron-full-non-isolated/ac882e3/logs/kern_log.txt.gz#_Apr__2_13_03_06
And it has also been reported by andreaf in IRC as having been observed
downstream.
Public bug reported:
EC2 API fails to create a snapshot of a volume backed instance
It's reproduced with current (~Kilo-3) devstack.
Steps to reproduce:
$ nova boot inst --block-device id=cirros,source=image,dest=volume,bootindex=0,size=1 --flavor m1.nano
$ euca-create-image ec2-instance-id -n
Public bug reported:
Fail to create a snapshot of an instance booted from a volume backed
snapshot.
It's reproduced with current (~Kilo-3) devstack.
Steps to reproduce:
$ nova boot inst --block-device id=cirros,source=image,dest=volume,bootindex=0,size=1 --flavor m1.nano
$ nova image-create
Public bug reported:
Provide support for adding multiple IPv6 subnets to an internal router
port.
This was part of the multiple-ipv6-prefixes blueprint
(https://blueprints.launchpad.net/neutron/+spec/multiple-ipv6-prefixes),
but did not make it into Kilo and is instead being re-targeted for
OT: I am not sure why the gerrit patch I posted didn't appear here
automatically, in spite of linking this bug in the patch!
** Also affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is
Public bug reported:
Running devstack K, trying to get notifications from nova.api,
especially instance updates.
Enabled in config:
[DEFAULT]
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_topics=notifications,monitor
notify_on_state_change=vm_and_task_state
This is a documentation issue and not a nova bug
** No longer affects: nova
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381061
Title:
VMware: ESX hosts must not be
** Project changed: glance => python-glanceclient
** Changed in: python-glanceclient
Milestone: kilo-rc1 => None
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1323660
Title:
Glance
I hit the same issue, and it was due to the way in which cinder was
configured. My understanding is that this is the case here, as this is
the CI. Since updating the config, it is up and running.
** Changed in: nova
Status: Incomplete => Invalid
--
You received this bug notification because you
Thanks to dims, I realized that my devstack was too old.
The change Idecf7966968369d2f372abffcab85fbf9aa097c7 in devstack
fixes this bug:
https://github.com/openstack-dev/devstack/commit/d2287cfb9f4dfac71f14f3374514f5b8c2b0c70b
** Changed in: nova
Status: Incomplete => Invalid
--
** Also affects: cinder
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401799
Title:
Attach Volume to instance running on
Public bug reported:
This review https://review.openstack.org/#/c/168335/ highlighted a need
to change the limit summary template under certain conditions which is
difficult with the current template.
The template needs to be re-factored to allow easier customisation and
reflect the changes
Public bug reported:
Creating a stack with heat that creates a network, and then referencing that
network ID from a consumer (in this case lb-healthmonitor), would result in a
404 for that ID.
This happens only at the first attempt to do so. Deleting the heat stack and
recreating it using the same
Public bug reported:
The cert_parser is attempting to validate intermediates against the
users private key. It only need to validate that the intermediates are
readable and the users certificate/key match.
** Affects: neutron
Importance: Undecided
Status: New
** Tags: lbaas
Public bug reported:
When regions are configured (NOT via local_settings.py but by adding multiple
endpoints to keystone), regardless of which horizon is used for logging in, you
will be logged into the first region in the drop-down list. The drop-down list
itself is unsorted (I guess it is just python
** Changed in: networking-odl
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376169
Title:
ODL MD can't reconnect to ODL after it restarts
Public bug reported:
Overview:
When a user attempts to update an image by adding 'id' as an image property, a
200 response is returned.
Steps to reproduce:
1) Create an image
2) Update the image via PATCH /images/id passing '[{"path": "/id", "value":
"----", "op": "add"}]'
Public bug reported:
In commit 97d63d8745cd9b3b391ce96b94b4da263b3a053d, logging was changed
to use oslo.log. However, the ironic driver previously interacted with
the stdlib logging module to set the log level dynamically.
oslo.log does not provide the methods that were being used
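The workaround pattern for this kind of breakage can be shown with the stdlib alone. This is an illustrative sketch, not the ironic driver's actual fix: when a logging adapter does not expose the stdlib level-setting methods, the wrapped logger usually still does (the `.logger` attribute below is stdlib `LoggerAdapter` behaviour; whether oslo.log exposes the same attribute is an assumption here):

```python
import logging

# A plain stdlib logger supports dynamic level changes, which is what
# the ironic driver was doing before the switch to oslo.log.
base = logging.getLogger("demo.ironic")
base.setLevel(logging.DEBUG)

# An adapter wraps the stdlib logger; even if the adapter itself lacked
# setLevel (as the oslo.log wrapper did at the time), the underlying
# logger remains reachable and can be adjusted directly.
adapter = logging.LoggerAdapter(base, {})
adapter.logger.setLevel(logging.WARNING)
```

After the last line, the wrapped logger's effective level is WARNING, so dynamic level changes still work even when the wrapper does not forward them.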
Public bug reported:
When attempting to attach an encrypted iSCSI volume to an instance with
iscsi_use_multipath set to True in nova.conf an error occurs in n-cpu.
The devstack system being used had the following nova version:
commit ab25f5f34b6ee37e495aa338aeb90b914f622b9d
Merge instance
Public bug reported:
When new VMs are spawned after deleting previous VMs, the new VMs obtain
completely new IPs and the old ones are not recycled for reuse. I looked
into the mysql database to see where IPs may be stored and
accessed by openstack to determine what the next in line should
Public bug reported:
When CONF.scheduler_use_baremetal_filters is set, and IronicHostManager
is in use, the default scheduler filters should be as defined by
CONF.baremetal_scheduler_default_filters. This is done in
IronicHostManager's __init__ method.
However, __init__ calls the superclass'
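The ordering pitfall described above can be sketched in isolation. The class, option, and filter names below are illustrative, not nova's actual code; the point is that state the base `__init__` derives from the default filters is computed before the subclass swaps in the baremetal defaults:

```python
# Base __init__ captures the generic defaults and builds derived state
# from them; the subclass only overrides the defaults *after*
# super().__init__() has already run, so the derived state is wrong.

GENERIC_FILTERS = ["RamFilter", "ComputeFilter"]
BAREMETAL_FILTERS = ["ExactRamFilter", "ExactDiskFilter"]

class HostManager:
    def __init__(self):
        self.default_filters = list(GENERIC_FILTERS)
        # Imagine the base class builds derived state here, before any
        # subclass has had a chance to override default_filters.
        self.enabled_filters = list(self.default_filters)

class IronicHostManager(HostManager):
    def __init__(self):
        super().__init__()
        # Too late: enabled_filters was already built from the generic
        # list in the base __init__ above.
        self.default_filters = list(BAREMETAL_FILTERS)
```

Instantiating `IronicHostManager` leaves `enabled_filters` holding the generic list rather than the baremetal one, which mirrors the bug: the baremetal defaults never take effect for anything the base `__init__` computes.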