[Yahoo-eng-team] [Bug 1392656] [NEW] ImportError: No module named cmd.all
Public bug reported:

nova version: nova-2014.2
os version:

[root@ops1 data]# uname -a
Linux ops1 2.6.32-504.1.3.el6.x86_64 #1 SMP Tue Nov 11 17:57:25 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

There were no errors during the installation procedure, but running this command fails:

[root@ops1 data]# nova-all
Traceback (most recent call last):
  File "/usr/bin/nova-all", line 6, in <module>
    from nova.cmd.all import main
ImportError: No module named cmd.all
[root@ops1 data]#

Yet the file exists under the expected path:

[root@ops1 data]# cd /usr/lib/python2.7/site-packages/nova/cmd/
[root@ops1 cmd]# ll | grep all
-rw-r--r-- 1 root root 3416 Oct 16 07:52 all.py
-rw-r--r-- 1 root root 2509 Nov 14 08:03 all.pyc
[root@ops1 cmd]#

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392656

Title: ImportError: No module named cmd.all
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392656/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to: yahoo-eng-team@lists.launchpad.net
Unsubscribe: https://launchpad.net/~yahoo-eng-team
More help: https://help.launchpad.net/ListHelp
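When an import fails even though the module file exists on disk, a common cause is a second copy of the package earlier on sys.path (for example a stale egg install) shadowing the one in site-packages. A hedged diagnostic sketch, not taken from the bug report — the helper below is hypothetical, and on the affected host you would call it with 'nova':

```python
# Hypothetical diagnostic helper: report which file backs an imported
# package, so a shadowing copy earlier on sys.path can be spotted.
import importlib

def locate_package(name):
    """Return the file backing the imported package, or None if unknown."""
    mod = importlib.import_module(name)
    return getattr(mod, '__file__', None)

# Demonstrated with a stdlib module; on the affected host, compare
# locate_package('nova') against /usr/lib/python2.7/site-packages/nova/.
print(locate_package('json'))
```

If the printed path is not the site-packages copy that contains cmd/all.py, the interpreter is importing a different installation.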
[Yahoo-eng-team] [Bug 1392663] [NEW] Un-used function check_attach() in module nova.volume.cinder
Public bug reported:

Version: stable/juno

The function check_attach() in module nova.volume.cinder is effectively unused; it is only called from unit tests.

In fact, if this function were actually used, it would be impossible to attach a volume to a VM instance when the volume was created in a different availability_zone than the instance. In reality, for a single-node deployment, if a new availability_zone is created in Nova (thus replacing Nova's default availability_zone), the Nova compute service runs in the newly created availability_zone while the Cinder services still run in the default availability_zone. It is nevertheless quite possible to attach a volume (created in the default availability_zone) to a VM instance (created in the new availability_zone).

** Affects: nova
   Importance: Undecided
   Assignee: Trung Trinh (trung-t-trinh)
   Status: New

** Tags: nova-manage

** Changed in: nova
   Assignee: (unassigned) => Trung Trinh (trung-t-trinh)

--
https://bugs.launchpad.net/bugs/1392663
Title: Un-used function check_attach() in module nova.volume.cinder
Status in OpenStack Compute (Nova): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392663/+subscriptions
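The availability-zone check implied by check_attach() can be sketched as a tiny predicate. This is a simplified illustration, not the actual nova.volume.cinder code; the cross_az_attach name mirrors Nova's CONF.cinder.cross_az_attach option, but the function itself is hypothetical.

```python
# Simplified, hypothetical sketch of the AZ comparison that a strict
# check_attach() would imply. Real Nova behavior is governed by
# CONF.cinder.cross_az_attach; this only illustrates the report's point.
def can_attach(volume_az, instance_az, cross_az_attach=True):
    # With strict AZ checking, a volume in a different AZ than the
    # instance is rejected -- yet the report observes that such
    # attachments succeed in single-node deployments.
    if cross_az_attach:
        return True
    return volume_az == instance_az

# Default behavior: a default-AZ volume attaches to a new-AZ instance.
print(can_attach('nova', 'my-new-az'))                          # True
# Strict checking would reject the same combination.
print(can_attach('nova', 'my-new-az', cross_az_attach=False))   # False
```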
[Yahoo-eng-team] [Bug 1392685] [NEW] With OS-Federation users can get the wrong mapping
Public bug reported:

When multiple SAML IdPs are configured with OS-Federation, following the configuration proposed in the documentation (http://docs.openstack.org/security-guide/content/identity.html), the mappings are not strictly associated with the IdP.

As an example, consider a system with two IdPs, named IdP-A and IdP-B, with the mappings MAP-A and MAP-B respectively. If a user from IdP-A accesses the URL /v3/OS-FEDERATION/identity_providers/IdP-B/protocols/saml2/auth, they get the mapping for the users of IdP-B; the only condition is that the IdPs should return similar attributes, but this is quite common for universities.

The problem is that there are no constraints between the mapping URL and the corresponding IdP, so users can get mapped differently according to the URL they access. A quick solution is to modify the configuration so the URL can be accessed only by one IdP. A better solution would require the inclusion of an id to verify the IdP used for the authentication.

** Affects: keystone
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1392685
Title: With OS-Federation users can get the wrong mapping
Status in OpenStack Identity (Keystone): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1392685/+subscriptions
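The mismatch described above can be illustrated in a few lines. All names here are hypothetical stand-ins, not Keystone code: the mapping is selected from the identity_provider segment of the URL, and nothing ties it back to the IdP that actually authenticated the user.

```python
# Illustrative sketch (hypothetical names): the mapping comes from the
# URL, not from the IdP that issued the assertion.
mappings = {'IdP-A': 'MAP-A', 'IdP-B': 'MAP-B'}

def mapping_for_request(url_idp, authenticated_idp):
    # The authenticated IdP is never consulted -- this is the bug.
    return mappings[url_idp]

# A user authenticated by IdP-A who calls IdP-B's endpoint gets MAP-B:
print(mapping_for_request('IdP-B', authenticated_idp='IdP-A'))  # MAP-B
```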
[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog
Horizon does not perform any translation of the endpoint from the service catalog returned by Keystone. There is no work required in Horizon; once Keystone is fixed it should work in Horizon.

** Changed in: horizon
   Status: New => Won't Fix

--
https://bugs.launchpad.net/bugs/1356053
Title: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog
Status in devstack - openstack dev environments: New
Status in OpenStack Dashboard (Horizon): Won't Fix
Status in OpenStack Data Processing (Sahara, ex. Savanna): New

Bug description:
When using the keystone static catalog file to register endpoints (http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog), an endpoint registered (correctly) as catalog.region.data_processing gets read as data-processing by keystone. Thus, when Sahara looks for an endpoint, it is unable to find one for data_processing. This causes a problem with the command-line interface and the dashboard.

Keystone seems to be converting underscores to dashes here:
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47

Modifying this line to not perform the replacement seems to work fine for me, but may have unintended consequences.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions
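The substitution described in the report can be reproduced in isolation. The function below is a stand-in for the key parsing done in keystone/catalog/backends/templated.py, not the real code; only the underscore-to-dash replacement matters here.

```python
# Minimal stand-in for the templated-catalog key parsing: the backend
# historically replaced '_' with '-' in each dotted component.
def parse_catalog_key(key):
    return [part.replace('_', '-') for part in key.split('.')]

region, service_type = parse_catalog_key('RegionOne.data_processing')
print(service_type)  # 'data-processing'
```

The service type comes out as 'data-processing', so Sahara's lookup for 'data_processing' finds nothing.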
[Yahoo-eng-team] [Bug 1392718] [NEW] Sticky region selection in Login page
Public bug reported:

Remember the last Region (keystone endpoint) selected on the Login page. If the deployment has multiple Regions and the user is using the same Region most of the time, it would be better UX to default to the last Region the user selected.

** Affects: horizon
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1392718
Title: Sticky region selection in Login page
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392718/+subscriptions
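The requested behavior amounts to a small default-selection rule. A hypothetical helper, not Horizon code — the remembered value would in practice come from a cookie or similar client-side storage:

```python
# Hypothetical helper sketching "sticky" region selection: prefer the
# remembered region if it is still a valid choice, else the default.
def initial_region(remembered, available_regions, default):
    return remembered if remembered in available_regions else default

regions = ['RegionOne', 'RegionTwo']
print(initial_region('RegionTwo', regions, 'RegionOne'))  # RegionTwo
print(initial_region(None, regions, 'RegionOne'))         # RegionOne
```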
[Yahoo-eng-team] [Bug 1392721] [NEW] Documentation error in base tables
Public bug reported:

In /horizon/tables/base.py there is a mistake in the documentation here:
https://github.com/openstack/horizon/blob/master/horizon/tables/base.py#L197

"An iterable of CSS classes which will be added when the column's text is displayed as a link. This is left for backward compatibility. Deprecated in favor of the link_attributes attribute. Example: ``classes=('link-foo', 'link-bar')``. Defaults to ``None``"

The name of the attribute is link_classes, not classes as the example suggests. That could be confusing for developers. For example, if you modify the openstack-dashboard/dashboards/project/images/images/tables.py file and add this:

name = tables.Column(get_image_name_version,
                     link="horizon:project:images:images:detail",
                     verbose_name=_("Image Name"),
                     classes=("css-foo", "css-bar"))

you expect something like:

<a href="whatever" class="css-foo css-bar">my name</a>

but it doesn't work.

** Affects: horizon
   Importance: Undecided
   Assignee: Marcos Lobo (marcos-fermin-lobo)
   Status: In Progress

--
https://bugs.launchpad.net/bugs/1392721
Title: Documentation error in base tables
Status in OpenStack Dashboard (Horizon): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392721/+subscriptions
[Yahoo-eng-team] [Bug 1381413] Re: Switch Region dropdown doesn't work
** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
   Status: New => In Progress

** Changed in: horizon
   Importance: Undecided => High

** Changed in: django-openstack-auth
   Importance: Undecided => High

** Changed in: django-openstack-auth
   Assignee: (unassigned) => Vlad Okhrimenko (vokhrimenko)

--
https://bugs.launchpad.net/bugs/1381413
Title: Switch Region dropdown doesn't work
Status in Django OpenStack Auth: In Progress
Status in OpenStack Dashboard (Horizon): In Progress

Bug description:
If Horizon was set up to work with multiple regions (by editing AVAILABLE_REGIONS in settings.py), a region selector drop-down appears in the top right corner, but it doesn't work now. Suppose I log in to Region1; if I then try to switch to Region2, it redirects me to the login view of django-openstack-auth:
https://github.com/openstack/horizon/blob/2014.2.rc1/horizon/templates/horizon/common/_region_selector.html#L11
There I am immediately redirected to settings.LOGIN_REDIRECT_URL because I am already authenticated at Region1, so I cannot view Region2 resources if I switch to it via the top-right dropdown. Selecting a region at the login page works, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1381413/+subscriptions
[Yahoo-eng-team] [Bug 1392735] [NEW] Project Limits don't refresh while selecting Flavor
Public bug reported:

To recreate:
1. Project -> Compute -> Instances -> Launch Instance
2. Change the flavor using the up/down arrows
3. Observe how the project limits do not update until the user tabs out of the field

** Affects: horizon
   Importance: Undecided
   Assignee: Lin Hua Cheng (lin-hua-cheng)
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

--
https://bugs.launchpad.net/bugs/1392735
Title: Project Limits don't refresh while selecting Flavor
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392735/+subscriptions
[Yahoo-eng-team] [Bug 1392565] Re: Install and configure in Openstack(Glance) Installation Guide for Ubuntu 14.04 - juno
** Also affects: glance
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1392565
Title: Install and configure in OpenStack (Glance) Installation Guide for Ubuntu 14.04 - juno
Status in OpenStack Image Registry and Delivery Service (Glance): New
Status in OpenStack Manuals: Incomplete

Bug description:
The following configuration item:

default_store = file

should be in the [DEFAULT] section, instead of the [glance_store] section.
---
Built: 2014-11-12T16:08:49+00:00
git SHA: 89c0d5ea2ded32ed1ae95637e7b690cee891b6a5
URL: http://docs.openstack.org/juno/install-guide/install/apt/content/glance-install.html
source File: file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/install-guide/section_glance-install.xml
xml:id: glance-install

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1392565/+subscriptions
[Yahoo-eng-team] [Bug 1387808] Re: Documentation for installation lists [glance_store] section for wrong config file
This issue affects the openstack-manuals project, not the glance project. The issue was already reported in bug #1387738 and fixed in patch #132393.

** Project changed: glance => openstack-manuals

** Changed in: openstack-manuals
   Status: New => Fix Released

--
https://bugs.launchpad.net/bugs/1387808
Title: Documentation for installation lists [glance_store] section for wrong config file
Status in OpenStack Manuals: Fix Released

Bug description:
In the installation guide at http://docs.openstack.org/juno/install-guide/install/apt/content/glance-install.html, section 3.c asks for [glance_store] parameters to be added to /etc/glance/glance-registry.conf. These really belong in glance-api.conf, however, i.e. that should become section 2.c.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1387808/+subscriptions
[Yahoo-eng-team] [Bug 1392773] [NEW] Live migration of volume backed instances broken after upgrade to Juno
Public bug reported:

I'm running nova in a virtualenv with a checkout of stable/juno:

root@compute1:/opt/openstack/src/nova# git branch
  stable/icehouse
* stable/juno
root@compute1:/opt/openstack/src/nova# git rev-list stable/juno | head -n 1
54330ce33ee31bbd84162f0af3a6c74003d57329

Since upgrading from icehouse, our iscsi-backed instances are no longer able to live migrate, throwing exceptions like:

Traceback (most recent call last):
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
    incoming.message))
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
    return self._do_dispatch(endpoint, method, ctxt, args)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
    result = getattr(endpoint, method)(ctxt, **new_args)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
    payload)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
    return f(self, context, *args, **kw)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 326, in decorated_function
    kwargs['instance'], e, sys.exc_info())
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 314, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/compute/manager.py", line 4882, in check_can_live_migrate_source
    dest_check_data)
  File "/opt/openstack/venv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5040, in check_can_live_migrate_source
    raise exception.InvalidSharedStorage(reason=reason, path=source)
InvalidSharedStorage: compute2 is not on shared storage: Live migration can not be used without shared storage.

Looking back through the code, given dest_check_data like this:

{u'disk_over_commit': False, u'disk_available_mb': None, u'image_type': u'default', u'filename': u'tmpyrUUg1', u'block_migration': False, 'is_volume_backed': True}

In Icehouse the code to validate the request skipped this [0]:

elif not shared and (not is_volume_backed or has_local_disks):

In Juno, it matches this [1]:

if (dest_check_data.get('is_volume_backed') and
        not bool(jsonutils.loads(
            self.get_instance_disk_info(instance['name'])))):

In Juno at least, get_instance_disk_info returns something like this:

[{u'disk_size': 10737418240, u'type': u'raw', u'virt_disk_size': 10737418240, u'path': u'/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-10f2302c-26b6-44e0-a3ea-7033d1091470-lun-1', u'backing_file': u'', u'over_committed_disk_size': 0}]

I wonder if that was previously an empty return value in Icehouse. I'm unable to test right now, but if it returned the same, then I'm not sure how it ever worked before.

This is a lab environment; the volume storage is an LVM+iSCSI cinder service. nova.conf and cinder.conf here [2].

[0]: https://github.com/openstack/nova/blob/stable/icehouse/nova/virt/libvirt/driver.py#L4299
[1]: https://github.com/openstack/nova/blob/stable/juno/nova/virt/libvirt/driver.py#L5073
[2]: https://gist.github.com/DazWorrall/b1b1e906a6dc2338f6c1

** Affects: nova
   Importance: Undecided
   Status: New

--
https://bugs.launchpad.net/bugs/1392773
Title: Live migration of volume backed instances broken after upgrade to Juno
Status in OpenStack Compute (Nova): New
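The failure mode quoted in the report can be shown standalone. This is a self-contained illustration, not Nova code: get_instance_disk_info is faked here with a return value modelled on the report's output, to show why the Juno "booted from volume with no local disks" test evaluates False once the attached volume's device path appears in the disk list.

```python
# Standalone illustration of the Juno guard described above.
import json

def get_instance_disk_info():
    # Faked return value, modelled on the report: in Juno the attached
    # iSCSI volume shows up in the disk info, so the list is non-empty.
    return json.dumps([{'type': 'raw',
                        'path': '/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-...'}])

dest_check_data = {'is_volume_backed': True}
booted_from_volume = bool(dest_check_data.get('is_volume_backed')
                          and not json.loads(get_instance_disk_info()))
print(booted_from_volume)  # False
```

Because the condition is False, the code falls through to the shared-storage check and raises InvalidSharedStorage, matching the traceback above.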
[Yahoo-eng-team] [Bug 1392798] [NEW] Deleted instances show power state as 'Running'
Public bug reported:

After deleting an instance and executing a `nova list --deleted` command as an administrator, the Power State of the deleted instance is still displayed as 'Running' and set to 1 in the database as well.

Steps to reproduce:

Boot an instance:

dboik@dboik-VirtualBox:~$ nova boot foo --flavor m1.tiny --image cirros-032-x86_64-uec

Wait for the instance to finish building:

dboik@dboik-VirtualBox:~$ nova list
+--------------------------------------+------+--------+------------+-------------+------------------+
| ID                                   | Name | Status | Task State | Power State | Networks         |
+--------------------------------------+------+--------+------------+-------------+------------------+
| 2fed0daa-b083-43cf-9285-7364ce4852ce | foo  | ACTIVE | -          | Running     | private=10.0.0.2 |
+--------------------------------------+------+--------+------------+-------------+------------------+

Delete the instance:

dboik@dboik-VirtualBox:~$ nova delete foo
Request to delete server foo has been accepted.

As an OpenStack administrator, list the deleted instances:

dboik@dboik-VirtualBox:~$ nova list --deleted
+--------------------------------------+------+---------+------------+-------------+------------------+
| ID                                   | Name | Status  | Task State | Power State | Networks         |
+--------------------------------------+------+---------+------------+-------------+------------------+
| 2fed0daa-b083-43cf-9285-7364ce4852ce | foo  | DELETED | -          | Running     | private=10.0.0.2 |
+--------------------------------------+------+---------+------------+-------------+------------------+

** Affects: nova
   Importance: Undecided
   Assignee: Andrew Boik (drewboik)
   Status: In Progress

** Changed in: nova
   Assignee: (unassigned) => Andrew Boik (drewboik)

--
https://bugs.launchpad.net/bugs/1392798
Title: Deleted instances show power state as 'Running'
Status in OpenStack Compute (Nova): In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1392798/+subscriptions
[Yahoo-eng-team] [Bug 1392264] Re: Keystonemiddleware crashes when memcache encryption is enabled with Swift
This does not affect Keystone, but keystonemiddleware. Changed the target.

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

--
https://bugs.launchpad.net/bugs/1392264
Title: Keystonemiddleware crashes when memcache encryption is enabled with Swift
Status in OpenStack Identity (Keystone): Invalid
Status in OpenStack Identity (Keystone) Middleware: New

Bug description:
We've come across the following issue when deploying standalone Swift servers using TripleO, where we've enabled auth token memcache with encryption. We get this error from the Swift proxy:

Nov 11 15:17:49 overcloud-swiftstorage1-ohdtremvbiw3 proxy-server: Error: An error occurred:
Traceback (most recent call last):
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/catch_errors.py", line 41, in handle_request
    resp = self._app_call(env)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/wsgi.py", line 582, in _app_call
    resp = self.app(env, self._start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/gatekeeper.py", line 90, in __call__
    return self.app(env, gatekeeper_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/healthcheck.py", line 57, in __call__
    return self.app(env, start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/proxy_logging.py", line 289, in __call__
    iterable = self.app(env, my_start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/memcache.py", line 85, in __call__
    return self.app(env, start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/crossdomain.py", line 82, in __call__
    return self.app(env, start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/tempurl.py", line 295, in __call__
    return self.app(env, start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/swift/common/middleware/formpost.py", line 231, in __call__
    return self.app(env, start_response)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 710, in __call__
    token_info = self._validate_token(user_token, env)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 891, in _validate_token
    self._token_cache.store_invalid(token_id)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 1714, in store_invalid
    self._cache_store(token_id, self._INVALID_INDICATOR)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 1822, in _cache_store
    data_to_store = memcache_crypt.protect_data(keys, serialized_data)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/_memcache_crypt.py", line 166, in protect_data
    data = encrypt_data(keys['ENCRYPTION'], data)
  File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/keystonemiddleware/_memcache_crypt.py", line 80, in wrapper
    raise CryptoUnavailableError()
CryptoUnavailableError (txn: tx9bf0c765e603404e8a776-0054622899)

Looking at the _memcache_crypt.py code, the problem is that pycrypto isn't installed in the Swift venv; pycrypto isn't listed in the keystonemiddleware requirements.txt file.

Since memcache encryption in Keystone middleware relies on pycrypto, and to avoid this issue where the Swift proxy errors out, we believe that pycrypto should be added to keystonemiddleware's requirements.txt file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1392264/+subscriptions
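The failure mode is the classic optional-dependency guard. A simplified model of what the traceback shows, not the real keystonemiddleware code (the real guard lives in _memcache_crypt.py): the encryption helpers raise CryptoUnavailableError when the optional pycrypto import failed at module load time.

```python
# Simplified model (not keystonemiddleware itself) of the optional
# pycrypto dependency guard that produced the traceback above.
class CryptoUnavailableError(Exception):
    pass

try:
    from Crypto.Cipher import AES  # provided by pycrypto
except ImportError:
    AES = None

def protect_data(key, data):
    if AES is None:
        # The path the Swift proxy hit: encryption was requested in the
        # config, but the optional dependency was never installed.
        raise CryptoUnavailableError()
    return data  # real code would encrypt here

# Simulate the broken deployment regardless of the local environment:
AES = None
try:
    protect_data(b'key', b'token-data')
    outcome = 'stored'
except CryptoUnavailableError:
    outcome = 'CryptoUnavailableError'
print(outcome)  # CryptoUnavailableError
```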
[Yahoo-eng-team] [Bug 1392804] [NEW] Collectstatic steps not documented anywhere
Public bug reported:

We don't seem to be documenting anywhere the collectstatic step required to collect the static files and avoid some SCSS parsing errors. We should do this (probably in [1]?) and perhaps also include information on enabling offline compression. If I understand correctly, the steps are as follows:

1. ./run_tests.sh collectstatic

And then optionally, to enable offline compression:

2. Make sure COMPRESS_ENABLED and COMPRESS_OFFLINE are set to True in the local_settings.py file
3. Run ./run_tests.sh compress

[1] https://github.com/openstack/horizon/blob/master/doc/source/topics/install.rst

** Affects: horizon
   Importance: Low
   Status: New

** Tags: documentation low-hanging-fruit

--
https://bugs.launchpad.net/bugs/1392804
Title: Collectstatic steps not documented anywhere
Status in OpenStack Dashboard (Horizon): New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1392804/+subscriptions
[Yahoo-eng-team] [Bug 1371298] Re: libvirt: AMI-based Linux instances /dev/console unusable
** Also affects: tempest Importance: Undecided Status: New ** No longer affects: tempest ** Also affects: tempest Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1371298 Title: libvirt: AMI-based Linux instances /dev/console unusable Status in OpenStack Compute (Nova): In Progress Status in Tempest: New Bug description: In Linux, the last console= option listed in /proc/cmdline becomes /dev/console, which is used for things like rescue mode, single-user mode, etc. In the case of AMI-based Linux images, libvirt defines the serial console (tied to the console.log) last, which means a crashed instance ends up being unrecoverable. Steps to Reproduce: 1. Upload the AMI/AKI/ARI images attached to this bug into Glance and tie them together (if how to do this is not common knowledge, I can follow-on with exact steps) 2. Boot an instance against the image. It has been altered so that it will crash on startup, believing there is filesystem corruption Expected Behaviour: A "Press enter for maintenance (or type Control-D to continue):" prompt on the interactive console (Spice/VNC/etc.) Actual Behaviour: The aforementioned prompt appears in the libvirt console.log, and the instance is effectively bricked. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1371298/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
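The bug hinges on the rule stated in the description: the last console= option on the kernel command line becomes /dev/console. A short sketch of that rule (the sample cmdline below is made up for illustration):

```python
# Sketch: in Linux, the last console= option listed in /proc/cmdline wins
# and becomes /dev/console. The sample cmdline string is illustrative only.

def effective_console(cmdline):
    consoles = [tok.split("=", 1)[1] for tok in cmdline.split()
                if tok.startswith("console=")]
    return consoles[-1] if consoles else None

# libvirt placing the serial console last means it captures /dev/console:
print(effective_console("ro root=/dev/vda console=tty0 console=ttyS0"))  # ttyS0
```

Reordering the console= options so the interactive console comes last would direct the maintenance prompt to Spice/VNC instead of the console.log.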
[Yahoo-eng-team] [Bug 1392834] [NEW] power state column not translated on admin instances
Public bug reported: Power state has been made translatable on the project instances screen, but this was omitted on the admin instances screen. See related bug: https://bugs.launchpad.net/bugs/1224329 ** Affects: horizon Importance: Undecided Assignee: Doug Fish (drfish) Status: In Progress ** Tags: i18n juno-backport-potential ** Changed in: horizon Assignee: (unassigned) => Doug Fish (drfish) ** Tags added: juno-backport-potential ** Tags added: i18n -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392834 Title: power state column not translated on admin instances Status in OpenStack Dashboard (Horizon): In Progress Bug description: Power state has been made translatable on the project instances screen, but this was omitted on the admin instances screen. See related bug: https://bugs.launchpad.net/bugs/1224329 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392834/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1392848] [NEW] in instances column Task None is not translated
Public bug reported: In both project-compute-instances and admin-system-instances the value None for the Task column is not translated. ** Affects: horizon Importance: Undecided Assignee: Doug Fish (drfish) Status: New ** Changed in: horizon Assignee: (unassigned) => Doug Fish (drfish) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392848 Title: in instances column Task None is not translated Status in OpenStack Dashboard (Horizon): New Bug description: In both project-compute-instances and admin-system-instances the value None for the Task column is not translated. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392848/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
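The shape of a fix is to map the missing task value to a string that is marked for translation instead of leaking the raw None. Horizon uses Django's translation machinery; the sketch below substitutes stdlib gettext with a null catalog so it is self-contained, and the helper name is illustrative, not Horizon's actual code:

```python
import gettext

# Sketch only: Horizon would use django.utils.translation; stdlib gettext
# with a NullTranslations catalog stands in so the example runs anywhere.
_ = gettext.NullTranslations().gettext

def task_display(task):
    # Map a missing task to a marked-for-translation string instead of
    # rendering the raw Python None in the table cell.
    return _("None") if task is None else task

print(task_display(None))        # the translatable "None"
print(task_display("spawning"))  # passed through unchanged
```

With a real catalog loaded, `_("None")` would render in the user's locale.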
[Yahoo-eng-team] [Bug 1392851] [NEW] Launch Cluster - required dropdown with no choices should have better instruction
Public bug reported: You need to have a Cluster Template and a Base Image defined before you can create a cluster. However, a new user may not know this and will just see that there are no selections or any explanation for what to do. When they click 'Create' they will just get the generic error message 'This field is required.' We need to make it clear that they need to take some actions beforehand. We could have a '+' button beside the Base Image to link to the register-an-image action, just like what Key Pair has? http://docs.openstack.org/developer/sahara/horizon/dashboard.user.guide.html#launching-a-cluster Related to: https://bugs.launchpad.net/horizon/+bug/1391343 ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392851 Title: Launch Cluster - required dropdown with no choices should have better instruction Status in OpenStack Dashboard (Horizon): New Bug description: You need to have a Cluster Template and a Base Image defined before you can create a cluster. However, a new user may not know this and will just see that there are no selections or any explanation for what to do. When they click 'Create' they will just get the generic error message 'This field is required.' We need to make it clear that they need to take some actions beforehand. We could have a '+' button beside the Base Image to link to the register-an-image action, just like what Key Pair has? http://docs.openstack.org/developer/sahara/horizon/dashboard.user.guide.html#launching-a-cluster Related to: https://bugs.launchpad.net/horizon/+bug/1391343 To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392851/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1392855] [NEW] Include message of the day option
Public bug reported: As an operator there are many cases where I would like to notify my users about something through a message of the day. Things such as: * We just upgraded to a new version of X, and this is what changed * Planned outages * etc. This idea came up in the ops summit in Paris, where one operator said he doesn't upgrade Horizon because people don't like surprise changes. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392855 Title: Include message of the day option Status in OpenStack Dashboard (Horizon): New Bug description: As an operator there are many cases where I would like to notify my users about something through a message of the day. Things such as: * We just upgraded to a new version of X, and this is what changed * Planned outages * etc. This idea came up in the ops summit in Paris, where one operator said he doesn't upgrade Horizon because people don't like surprise changes. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392855/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1392656] Re: ImportError: No module named cmd.all
This looks like a packaging issue, not an upstream nova one. ** Changed in: nova Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1392656 Title: ImportError: No module named cmd.all Status in OpenStack Compute (Nova): Invalid Bug description: nova version: nova-2014.2 os version: [root@ops1 data]# uname -a Linux ops1 2.6.32-504.1.3.el6.x86_64 #1 SMP Tue Nov 11 17:57:25 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux There is no error during the installation procedure, but when I run this command the error comes out:
[root@ops1 data]# nova-all
Traceback (most recent call last):
  File "/usr/bin/nova-all", line 6, in <module>
    from nova.cmd.all import main
ImportError: No module named cmd.all
[root@ops1 data]#
And the file exists under the path:
[root@ops1 data]# cd /usr/lib/python2.7/site-packages/nova/cmd/
[root@ops1 cmd]# ll|grep all
-rw-r--r-- 1 root root 3416 Oct 16 07:52 all.py
-rw-r--r-- 1 root root 2509 Nov 14 08:03 all.pyc
[root@ops1 cmd]#
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1392656/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
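When a module exists on disk but still fails to import, the usual suspects for a packaging problem like this are a second copy of the package earlier on sys.path shadowing the real one, or a missing cmd/__init__.py in the installed package. A stdlib diagnostic sketch that lists every place a package name could resolve from (shown with "json" so it runs anywhere; on an affected host you would pass "nova"):

```python
import importlib.util
import os
import sys

# Diagnostic sketch: list every directory on sys.path that could satisfy an
# import of `pkg`, to spot a stale or conflicting copy shadowing the real one.

def candidates(pkg):
    hits = []
    for entry in sys.path:
        for suffix in (os.path.join(pkg, "__init__.py"), pkg + ".py"):
            path = os.path.join(entry or ".", suffix)
            if os.path.exists(path):
                hits.append(path)
    return hits

# Which copy the interpreter actually picks:
spec = importlib.util.find_spec("json")
print("resolved to:", spec.origin)
print("all candidates:", candidates("json"))
```

If more than one candidate shows up, or the resolved origin is not the package under /usr/lib/python2.7/site-packages, the distro package layout is the problem rather than nova itself.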
[Yahoo-eng-team] [Bug 1392718] Re: Sticky region selection in Login page
** Also affects: django-openstack-auth Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392718 Title: Sticky region selection in Login page Status in Django OpenStack Auth: New Status in OpenStack Dashboard (Horizon): New Bug description: Remember the last Region (keystone endpoint) selected in the Login page. If the deployment has multiple Regions and user is using the same Region most of the time, it would be better for UX to just default to the last Region selected for the user. To manage notifications about this bug go to: https://bugs.launchpad.net/django-openstack-auth/+bug/1392718/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1392909] [NEW] data processing delete buttons are missing icons
Public bug reported: missing the 'x' icon ** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New ** Changed in: horizon Assignee: (unassigned) => Cindy Lu (clu-m) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1392909 Title: data processing delete buttons are missing icons Status in OpenStack Dashboard (Horizon): New Bug description: missing the 'x' icon To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1392909/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1392921] [NEW] host ssh key has been changed after full installation reboot
Public bug reported: We've had a planned outage for the whole OS installation, and after booting back (plus a few reboots of hosts and instances during that process) many (maybe all) instances changed their ssh keys. OS: havana@ubuntu cloud-init: cloud-init 0.7.2-3~bpo70+1 cloud-initramfs-growroot 0.18.debian5~bpo70+1 cloud-init.log in attachment. ** Affects: cloud-init Importance: Undecided Status: New ** Attachment added: cloud-init.log https://bugs.launchpad.net/bugs/1392921/+attachment/4260843/+files/cloud-init.log ** Description changed: We've had a planned outage for the whole OS installation, and after booting back (plus a few reboots of hosts and instances during that process) many (maybe all) instances changed their ssh keys. OS: havana@ubuntu - cloud-init: - ii cloud-init 0.7.2-3~bpo70+1 all initialization system for infrastructure cloud instances - ii cloud-initramfs-growroot 0.18.debian5~bpo70+1 all automatically resize the root partition on first boot - + cloud-init: + cloud-init 0.7.2-3~bpo70+1 + cloud-initramfs-growroot 0.18.debian5~bpo70+1 cloud-init.log in attachment. -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1392921 Title: host ssh key has been changed after full installation reboot Status in Init scripts for use on cloud images: New Bug description: We've had a planned outage for the whole OS installation, and after booting back (plus a few reboots of hosts and instances during that process) many (maybe all) instances changed their ssh keys. OS: havana@ubuntu cloud-init: cloud-init 0.7.2-3~bpo70+1 cloud-initramfs-growroot 0.18.debian5~bpo70+1 cloud-init.log in attachment.
To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1392921/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
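For context: cloud-init's ssh module deletes and regenerates host keys on what it considers a first boot, governed by the ssh_deletekeys setting. Whether that is what happened here depends on the attached log, but host-key churn across reboots can be pinned down with a config fragment like the following sketch (the file path is illustrative; the setting name is cloud-init's):

```yaml
# /etc/cloud/cloud.cfg.d/99-keep-host-keys.cfg  (illustrative path)
# Prevent cloud-init from deleting and regenerating SSH host keys when it
# decides a boot counts as a "first" boot.
ssh_deletekeys: false
```

If the log shows the ssh module running on each of those reboots, this setting (or fixing instance-id detection so reboots are not treated as first boots) is the place to look.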
[Yahoo-eng-team] [Bug 1392923] [NEW] Orphan floating IPs created via rapid delete/assign/remove operations
Public bug reported: It is possible to create 'orphan' floating IPs (at least in devstack testing) through a sequence of: delete vip assign vip remove vip API calls + timestamps from a test run: 26101:[2014-11-14 17:18:57,446] mahmachine/INFO/stdout: 0x7f8833395250: delete floating ip id: 14 26145:[2014-11-14 17:18:58,237] mahmachine/INFO/stdout: 0x7f88333e8810: assign floating ip: 172.24.4.14 || d4545f39-6a5c-40e3-99f4-f72c22d56fc7 27333:[2014-11-14 17:19:25,144] mahmachine/INFO/stdout: 0x7f88333e8810: remove floating ip: 172.24.4.14 || d4545f39-6a5c-40e3-99f4-f72c22d56fc7 This results in floating IP addresses that are still listed as attached to an instance, yet are not owned (and are not removable) by the instance's owner. In the database, the fixed_ip_id is not null (the server id), yet the project id is null; the 'host' column may or may not be populated, but the cause and effect appear to be the same regardless of this. select id, address, fixed_ip_id, project_id, host from floating_ips where project_id IS NULL and fixed_ip_id IS NOT NULL;
+----+-------------+-------------+------------+------------+
| id | address     | fixed_ip_id | project_id | host       |
+----+-------------+-------------+------------+------------+
|  2 | 172.24.4.2  |           4 | NULL       | NULL       |
|  7 | 172.24.4.7  |           4 | NULL       | mahmachine |
| 11 | 172.24.4.11 |           4 | NULL       | mahmachine |
|  6 | 172.24.4.6  |           7 | NULL       | mahmachine |
| 15 | 172.24.4.15 |           7 | NULL       | mahmachine |
|  3 | 172.24.4.3  |           8 | NULL       | mahmachine |
| 14 | 172.24.4.14 |          10 | NULL       | NULL       |
+----+-------------+-------------+------------+------------+
** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1392923 Title: Orphan floating IPs created via rapid delete/assign/remove operations Status in OpenStack Compute (Nova): New Bug description: It is possible to create 'orphan' floating IPs (at least in devstack testing) through a sequence of: delete vip assign vip remove vip API calls + timestamps from a test run: 26101:[2014-11-14 17:18:57,446] mahmachine/INFO/stdout: 0x7f8833395250: delete floating ip id: 14 26145:[2014-11-14 17:18:58,237] mahmachine/INFO/stdout: 0x7f88333e8810: assign floating ip: 172.24.4.14 || d4545f39-6a5c-40e3-99f4-f72c22d56fc7 27333:[2014-11-14 17:19:25,144] mahmachine/INFO/stdout: 0x7f88333e8810: remove floating ip: 172.24.4.14 || d4545f39-6a5c-40e3-99f4-f72c22d56fc7 This results in floating IP addresses that are still listed as attached to an instance, yet are not owned (and are not removable) by the instance's owner. In the database, the fixed_ip_id is not null (the server id), yet the project id is null; the 'host' column may or may not be populated, but the cause and effect appear to be the same regardless of this. select id, address, fixed_ip_id, project_id, host from floating_ips where project_id IS NULL and fixed_ip_id IS NOT NULL;
+----+-------------+-------------+------------+------------+
| id | address     | fixed_ip_id | project_id | host       |
+----+-------------+-------------+------------+------------+
|  2 | 172.24.4.2  |           4 | NULL       | NULL       |
|  7 | 172.24.4.7  |           4 | NULL       | mahmachine |
| 11 | 172.24.4.11 |           4 | NULL       | mahmachine |
|  6 | 172.24.4.6  |           7 | NULL       | mahmachine |
| 15 | 172.24.4.15 |           7 | NULL       | mahmachine |
|  3 | 172.24.4.3  |           8 | NULL       | mahmachine |
| 14 | 172.24.4.14 |          10 | NULL       | NULL       |
+----+-------------+-------------+------------+------------+
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1392923/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
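The orphan-detection query from the report can be exercised against a throwaway schema. The sketch below uses SQLite as a stand-in for Nova's MySQL database, with the floating_ips columns simplified to the five the report selects:

```python
import sqlite3

# Sketch: run the report's orphan query against a throwaway SQLite table.
# Schema is simplified; the real floating_ips table has more columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE floating_ips (id INT, address TEXT,"
            " fixed_ip_id INT, project_id TEXT, host TEXT)")
con.executemany("INSERT INTO floating_ips VALUES (?,?,?,?,?)", [
    (1, "172.24.4.1", None, "demo", None),   # healthy: owned, unassociated
    (2, "172.24.4.2", 4, None, None),        # orphan: attached, no owner
    (14, "172.24.4.14", 10, None, None),     # orphan matching the log above
])
orphans = con.execute(
    "SELECT id, address FROM floating_ips"
    " WHERE project_id IS NULL AND fixed_ip_id IS NOT NULL"
    " ORDER BY id").fetchall()
print(orphans)  # [(2, '172.24.4.2'), (14, '172.24.4.14')]
```

A periodic sweep with this predicate would at least make the orphans visible; the underlying race between delete and assign still needs fixing in the API layer.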
[Yahoo-eng-team] [Bug 1349545] Re: UnexpectedMethodCallError in test_stats_for_line_chart of ceilometer at gate
[Expired for OpenStack Dashboard (Horizon) because there has been no activity for 60 days.] ** Changed in: horizon Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1349545 Title: UnexpectedMethodCallError in test_stats_for_line_chart of ceilometer at gate Status in OpenStack Dashboard (Horizon): Expired Bug description: openstack_dashboard.dashboards.admin.metering.tests:MeteringViewTests.test_stats_for_line_chart failed at the zuul gate. The stack trace looks like:
Traceback (most recent call last):
2014-07-28 09:48:34.545 | File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
2014-07-28 09:48:34.545 | self.run()
2014-07-28 09:48:34.545 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/ceilometer.py", line 359, in run
2014-07-28 09:48:34.545 | stats_attr=self.stats_attr, additional_query=self.additional_query)
2014-07-28 09:48:34.546 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/ceilometer.py", line 577, in update_with_statistics
2014-07-28 09:48:34.546 | query=query, period=period)
2014-07-28 09:48:34.546 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/api/ceilometer.py", line 308, in statistic_list
2014-07-28 09:48:34.547 | statistics.list(meter_name=meter_name, q=query, period=period)
2014-07-28 09:48:34.547 | File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1002, in __call__
2014-07-28 09:48:34.547 | expected_method = self._VerifyMethodCall()
2014-07-28 09:48:34.548 | File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1049, in _VerifyMethodCall
2014-07-28 09:48:34.548 | expected = self._PopNextMethod()
2014-07-28 09:48:34.548 | File "/home/jenkins/workspace/gate-horizon-python27/.tox/py27/local/lib/python2.7/site-packages/mox.py", line 1035, in _PopNextMethod
2014-07-28 09:48:34.549 | raise UnexpectedMethodCallError(self, None)
2014-07-28 09:48:34.549 | UnexpectedMethodCallError: Unexpected method call list(meter_name=u'memory', period=1512, q=[{'field': 'project_id', 'value': '2', 'op': 'eq'}, {'field': 'timestamp', 'value': datetime.datetime(2014, 7, 21, 4, 48, 34, 523843), 'op': 'ge'}, {'field': 'timestamp', 'value': datetime.datetime(2014, 7, 28, 4, 48, 34, 523843), 'op': 'le'}]) - None
2014-07-28 09:48:34.549 |
2014-07-28 09:49:56.265 | ...
2014-07-28 09:49:56.267 | ======================================================================
2014-07-28 09:49:56.268 | FAIL: test_stats_for_line_chart (openstack_dashboard.dashboards.admin.metering.tests.MeteringViewTests)
2014-07-28 09:49:56.269 | ----------------------------------------------------------------------
2014-07-28 09:49:56.270 | Traceback (most recent call last):
2014-07-28 09:49:56.270 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/test/helpers.py", line 82, in instance_stub_out
2014-07-28 09:49:56.271 | return fn(self, *args, **kwargs)
2014-07-28 09:49:56.271 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/metering/tests.py", line 93, in test_stats_for_line_chart
2014-07-28 09:49:56.271 | expected_names)
2014-07-28 09:49:56.272 | File "/home/jenkins/workspace/gate-horizon-python27/openstack_dashboard/dashboards/admin/metering/tests.py", line 52, in _verify_series
2014-07-28 09:49:56.272 | self.assertEqual(len(data['series']), len(expected_names))
2014-07-28 09:49:56.272 | AssertionError: 2 != 3
2014-07-28 09:49:56.273 | '2 != 3' = '%s != %s' % (safe_repr(2), safe_repr(3))
2014-07-28 09:49:56.273 | '2 != 3' = self._formatMessage('2 != 3', '2 != 3')
2014-07-28 09:49:56.273 | raise self.failureException('2 != 3')
2014-07-28 09:49:56.273 | For more information check the Jenkins verify error in review https://review.openstack.org/#/c/109365/
To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349545/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
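The unexpected-call failure above is characteristic of query parameters built from datetime.now() at call time: values recorded into the mox expectation moments earlier can never match. The usual remedy is to inject the clock so the values are reproducible. A sketch with illustrative function names (not Horizon's actual API):

```python
import datetime

# Sketch: build the timestamp-bounded query from an injected "now" instead
# of calling datetime.now() inside the function, so a test can pass a fixed
# clock and record expectations that actually match. Names are illustrative.

def build_time_query(project_id, now):
    week = datetime.timedelta(days=7)
    return [
        {"field": "project_id", "op": "eq", "value": project_id},
        {"field": "timestamp", "op": "ge", "value": now - week},
        {"field": "timestamp", "op": "le", "value": now},
    ]

fixed = datetime.datetime(2014, 7, 28, 4, 48, 34)
# With an injected clock, two builds of the query are identical, so a
# recorded expectation cannot drift from the actual call:
assert build_time_query("2", fixed) == build_time_query("2", fixed)
```

The same applies to the mox expectation itself: record it with the fixed timestamp rather than a fresh now().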
[Yahoo-eng-team] [Bug 1246525] Re: Horizon displays floating IPs to allocate from unreachable external networks of other tenants.
[Expired for OpenStack Dashboard (Horizon) because there has been no activity for 60 days.] ** Changed in: horizon Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1246525 Title: Horizon displays floating IPs to allocate from unreachable external networks of other tenants. Status in OpenStack Dashboard (Horizon): Expired Status in OpenStack Neutron (virtual network service): Incomplete Bug description: Description of problem: === Horizon displays floating IPs to allocate from unreachable external networks of other tenants. Those pools are not reachable and cannot be used by a non-related tenant. Version-Release number of selected component (if applicable): = Grizzly, python-django-horizon-2013.1.4-1.el6ost.noarch How reproducible: = Always. Steps to Reproduce: === 1. Have two tenants (admin tenant, demo tenant) 2. Network for admin tenant: - Create network named: internal with the subnet 192.168.1.0/24 - Create network named: external with the subnet 10.10.10.0/24, check External Network in Admin tab for this network. - Create Router named: Router1, Set gateway network: external 3. Network for demo tenant: - Create network named: internal2 with the subnet 192.168.2.0/24 - Create network named: external2 with the subnet 11.11.11.0/24, check External Network in Admin tab for this network. - Create Router named: Router2, Set gateway network: external2 4. Launch an instance in the admin tenant, attach the 'internal' network. 5. Associate Floating IP to that instance. 6. Click + and select the pool of the other tenant: external2. 7. Click Associate Actual results: === The IP address (11.11.11.x) suggested belongs to the other tenant pool: external2, which shouldn't be accessible.
Association fails with the following error: Error: External network d1e2a98f-0ee6-4192-bdd4-eb759456f059 is not reachable from subnet 7e58ab9f-bac4-4544-af64-896c247542bd. Therefore, cannot associate Port 60550899-a94a-44e2-a231-fe344f1d1838 with a Floating IP. Error: Unable to associate IP address 11.11.11.3. Expected results: = Only IPs allocated to the current tenant should be listed. Additional Info: I've yet to test if this reproduces in Havana. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1246525/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
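The expected behaviour amounts to offering only pools on external networks that the tenant's router is actually gatewayed to. A sketch of that filtering over plain dicts (the real fix would query Neutron; field names like "gateway_net" are simplified stand-ins, not the Neutron API):

```python
# Sketch of the expected pool filtering, using plain dicts. The data model
# is simplified: "gateway_net" stands in for the router's external gateway.

def reachable_pools(networks, tenant_router):
    """Only external networks the tenant's router is gatewayed to."""
    return [n["name"] for n in networks
            if n["external"] and n["id"] == tenant_router["gateway_net"]]

networks = [
    {"id": "ext1", "name": "external",  "external": True},
    {"id": "ext2", "name": "external2", "external": True},
]
router1 = {"name": "Router1", "gateway_net": "ext1"}
print(reachable_pools(networks, router1))  # ['external']
```

With this filter, the reproduction's step of picking external2 from the admin tenant would simply not be offered, instead of failing at association time.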
[Yahoo-eng-team] [Bug 1392947] [NEW] instance's directory is not removed at destination host during rollback of pre_live_migration
Public bug reported: In method pre_live_migration, we create the instance's directory at the destination host during block live migration. If an exception happens before the domain is created successfully, such as a failure connecting a volume to the destination host, then we can't destroy the instance in method rollback_live_migration_at_destination. In this case we don't remove the instance's directory, and this will lead to a DestinationDiskExists error when live migrating the same instance to the same destination. ** Affects: nova Importance: Undecided Assignee: ChangBo Guo(gcb) (glongwave) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1392947 Title: instance's directory is not removed at destination host during rollback of pre_live_migration Status in OpenStack Compute (Nova): In Progress Bug description: In method pre_live_migration, we create the instance's directory at the destination host during block live migration. If an exception happens before the domain is created successfully, such as a failure connecting a volume to the destination host, then we can't destroy the instance in method rollback_live_migration_at_destination. In this case we don't remove the instance's directory, and this will lead to a DestinationDiskExists error when live migrating the same instance to the same destination. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1392947/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
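The cleanup shape the report asks for is: remove the instance directory created by pre_live_migration even when destroying the (never-created) domain fails. A stdlib sketch of that pattern, with stand-in functions rather than Nova's actual code:

```python
import os
import shutil
import tempfile

# Sketch of the rollback shape the report asks for: always remove the
# instance directory at the destination, even if destroy() fails because
# the domain was never defined. Stand-in code, not Nova's implementation.

def rollback_at_destination(instance_dir, destroy):
    try:
        destroy()  # may raise if the domain never existed
    finally:
        # Without this cleanup, a retried migration to the same destination
        # hits a DestinationDiskExists-style error.
        shutil.rmtree(instance_dir, ignore_errors=True)

def failing_destroy():
    raise RuntimeError("domain was never defined")

d = tempfile.mkdtemp(prefix="instance-")
try:
    rollback_at_destination(d, failing_destroy)
except RuntimeError:
    pass  # destroy failed, but the directory is gone regardless
print(os.path.exists(d))  # False
```

The try/finally guarantees the directory removal runs on both the success and failure paths of destroy().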