[Yahoo-eng-team] [Bug 1352186] [NEW] Uncaught exception resize api
Public bug reported:

[req-b445d808-9bd4-4573-824b-d5b00bbb09fd ServersAdminTestJSON630630154-user ServersAdminTestJSON630630154-tenant] Caught error: Quota exceeded for ram: Requested 51137, but already used 128 of 51200 ram
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack Traceback (most recent call last):
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/__init__.py", line 119, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return req.get_response(self.application)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return resp(environ, start_response)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py", line 663, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return self.app(env, start_response)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 938, in __call__
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     content_type, body, accept)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 997, in _process_stack
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     action_result = self.dispatch(meth, request, action_args)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/wsgi.py", line 1078, in dispatch
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return method(req=request, **action_args)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1248, in _action_resize
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return self._resize(req, id, flavor_ref, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 1113, in _resize
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     self.compute_api.resize(context, instance, flavor_id, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 198, in wrapped
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return func(self, context, target, *args, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 188, in inner
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return function(self, context, instance, *args, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 215, in _wrapped
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return fn(self, context, instance, *args, **kwargs)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 169, in inner
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     return f(self, context, instance, *args, **kw)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack   File "/opt/stack/new/nova/nova/compute/api.py", line 2333, in resize
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack     resource=resource)
2014-08-04 05:27:04.305 12275 TRACE nova.api.openstack TooManyInstances: Quota exceeded for ram: Requested 51137, but already used 128 of 51200 ram
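The trace shows TooManyInstances escaping the resize action and surfacing as a "Caught error" in the WSGI layer. A minimal sketch of the missing handling: catch the quota exception at the API boundary and map it to a client error (403) rather than letting it propagate as a server error. The class and function here are illustrative stand-ins, not nova's actual code.

```python
# Sketch: map an uncaught quota exception to an HTTP status at the API
# layer. TooManyInstances is a stand-in for nova.exception.TooManyInstances.

class TooManyInstances(Exception):
    """Stand-in for nova.exception.TooManyInstances."""


def resize_status(perform_resize):
    """Return the HTTP status code a resize request should produce."""
    try:
        perform_resize()
        return 202  # Accepted: resize started
    except TooManyInstances:
        return 403  # Forbidden: over quota is a client error, not a 500
```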
[Yahoo-eng-team] [Bug 1352193] [NEW] The nova API service can’t handle image metadata properly when metadata key contains uppercase letter
Public bug reported:

OS: centos 6.5 64bit
openstack release: icehouse

Steps to reproduce:
1. Call the image metadata API of nova using the following command:
curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata": {"Key1": "Value1"}}' | python -mjson.tool
2. Execute the above command again:
curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata": {"Key1": "Value1"}}' | python -mjson.tool

Expected result:
In step 1, the json response should be: {"metadata": {"Key1": "Value1"}}
In step 2, the json response should be: {"metadata": {"Key1": "Value1"}}

Observed result:
In step 1, the json response is: {"metadata": {"key1": "Value1"}}
In step 2, the json response is: {"metadata": {"key1": "Value1,Value1"}}

Besides, we can observe that each image metadata key in the image_properties table of the glance DB is converted to lowercase even if the key the user inputted contains uppercase letters.

** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352193 Title: The nova API service can’t handle image metadata properly when metadata key contains uppercase letter Status in OpenStack Compute (Nova): New Bug description:

OS: centos 6.5 64bit
openstack release: icehouse

Steps to reproduce:
1. Call the image metadata API of nova using the following command:
curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata": {"Key1": "Value1"}}' | python -mjson.tool
2. Execute the above command again:
curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata": {"Key1": "Value1"}}' | python -mjson.tool

Expected result:
In step 1, the json response should be: {"metadata": {"Key1": "Value1"}}
In step 2, the json response should be: {"metadata": {"Key1": "Value1"}}

Observed result:
In step 1, the json response is: {"metadata": {"key1": "Value1"}}
In step 2, the json response is: {"metadata": {"key1": "Value1,Value1"}}

Besides, we can observe that each image metadata key in the image_properties table of the glance DB is converted to lowercase even if the key the user inputted contains uppercase letters.

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352193/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
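A small sketch of the failure mode the glance DB observation suggests: keys are lowercased on write, so a caller's mixed-case key no longer matches what was stored. `store_metadata` is a hypothetical helper for illustration, not nova's or glance's actual code.

```python
# Sketch of the reported behavior: metadata keys lowercased on storage
# break round-trips for callers who used mixed-case keys.

def store_metadata(db, metadata):
    # Keys are normalized to lowercase on write -- the observed behavior
    # in glance's image_properties table.
    for key, value in metadata.items():
        db[key.lower()] = value
    return db


db = store_metadata({}, {"Key1": "Value1"})
# The caller's original key "Key1" is gone; only "key1" exists,
# so a second write against "Key1" cannot be matched case-sensitively.
```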
[Yahoo-eng-team] [Bug 1327775] Re: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_create_delete_image timed out
This bug is not related to swift; probably to glance or tempest itself. (But I have no permission to set the Swift status to Invalid.)

** Also affects: glance Importance: Undecided Status: New ** Changed in: swift Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1327775 Title: tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestXML.test_create_delete_image timed out Status in OpenStack Image Registry and Delivery Service (Glance): New Status in OpenStack Object Storage (Swift): Invalid Bug description:

http://logs.openstack.org/44/98044/1/gate/gate-tempest-dsvm-neutron/525dcba/

2014-06-08 00:45:51.509 | Captured traceback-1:
2014-06-08 00:45:51.509 | ~
2014-06-08 00:45:51.509 | Traceback (most recent call last):
2014-06-08 00:45:51.509 |   File "tempest/api/compute/images/test_images_oneserver.py", line 31, in tearDown
2014-06-08 00:45:51.510 |     self.server_check_teardown()
2014-06-08 00:45:51.510 |   File "tempest/api/compute/base.py", line 161, in server_check_teardown
2014-06-08 00:45:51.510 |     'ACTIVE')
2014-06-08 00:45:51.510 |   File "tempest/services/compute/xml/servers_client.py", line 388, in wait_for_server_status
2014-06-08 00:45:51.510 |     raise_on_error=raise_on_error)
2014-06-08 00:45:51.510 |   File "tempest/common/waiters.py", line 93, in wait_for_server_status
2014-06-08 00:45:51.510 |     raise exceptions.TimeoutException(message)
2014-06-08 00:45:51.510 | TimeoutException: Request timed out
2014-06-08 00:45:51.510 | Details: (ImagesOneServerTestXML:tearDown) Server 72897dd4-cd42-4e0b-af15-3eec5b677d0b failed to reach ACTIVE status and task state None within the required time (196 s). Current status: ACTIVE. Current task state: image_snapshot.
2014-06-08 00:45:51.510 |
2014-06-08 00:45:51.510 |
2014-06-08 00:45:51.511 | Captured traceback:
2014-06-08 00:45:51.511 | ~~~
2014-06-08 00:45:51.511 | Traceback (most recent call last):
2014-06-08 00:45:51.511 |   File "tempest/api/compute/images/test_images_oneserver.py", line 77, in test_create_delete_image
2014-06-08 00:45:51.511 |     self.client.wait_for_image_status(image_id, 'ACTIVE')
2014-06-08 00:45:51.511 |   File "tempest/services/compute/xml/images_client.py", line 140, in wait_for_image_status
2014-06-08 00:45:51.511 |     waiters.wait_for_image_status(self, image_id, status)
2014-06-08 00:45:51.511 |   File "tempest/common/waiters.py", line 129, in wait_for_image_status
2014-06-08 00:45:51.511 |     raise exceptions.TimeoutException(message)
2014-06-08 00:45:51.511 | TimeoutException: Request timed out
2014-06-08 00:45:51.511 | Details: (ImagesOneServerTestXML:test_create_delete_image) Image fbe2b95d-7126-444d-be5a-e4104ec7d799 failed to reach ACTIVE status within the required time (196 s). Current status: SAVING.

logstash query: http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIEFORCBtZXNzYWdlOlwiRGV0YWlsczogKEltYWdlc09uZVNlcnZlclRlc3RYTUw6dGVzdF9jcmVhdGVfZGVsZXRlX2ltYWdlKSBJbWFnZVwiIEFORCBtZXNzYWdlOlwiIGZhaWxlZCB0byByZWFjaCBBQ1RJVkUgc3RhdHVzIHdpdGhpbiB0aGUgcmVxdWlyZWQgdGltZSAoMTk2IHMpLiBDdXJyZW50IHN0YXR1czogU0FWSU5HLlwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAyMjMzOTQwMjA1fQ==

To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1327775/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1224972] Re: When creating a volume from an image - nova leaves the volume name empty
** Also affects: cinder Importance: Undecided Status: New ** Changed in: cinder Assignee: (unassigned) => sandhya (sandhya-ganapathy) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1224972 Title: When creating a volume from an image - nova leaves the volume name empty Status in Cinder: New Status in OpenStack Compute (Nova): Triaged Bug description: When a block device with source=image, dest=volume is passed to nova instance boot, nova will instruct Cinder to create the volume; however, it will not set any name. It would be helpful to set a descriptive name so that the user knows where the volume came from. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1224972/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
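The report above asks for a descriptive volume name derived from the boot request. A sketch of one possible scheme; the function and naming pattern are purely illustrative, not the fix nova or cinder adopted.

```python
# Sketch: derive a descriptive Cinder volume name from the boot request
# instead of leaving the name empty. The naming pattern is illustrative.

def boot_volume_name(instance_name, image_id):
    """Build a volume name that records where the volume came from."""
    return "%s-boot-from-image-%s" % (instance_name, image_id)
```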
[Yahoo-eng-team] [Bug 1352256] [NEW] Uploading a new object fails with Ceph as object storage backend using RadosGW
Public bug reported:

While uploading a new Object using Horizon, with Ceph as the object storage backend, it fails with the error message "Error: Unable to upload object".

Ceph Release: Firefly

Error in horizon_error.log:

[Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] INFO:urllib3.connectionpool:Starting new HTTP connection (1): firefly-master.ashish.com
[Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding connection: firefly-master.ashish.com
[Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
[Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] RESP STATUS: 411 Length Required
[Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', '238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
[Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] RESP BODY: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
[Wed Jul 23 09:04:46.843907 2014] [:error] [pid 30045:tid 140685813683968] <html><head>
[Wed Jul 23 09:04:46.843930 2014] [:error] [pid 30045:tid 140685813683968] <title>411 Length Required</title>
[Wed Jul 23 09:04:46.843937 2014] [:error] [pid 30045:tid 140685813683968] </head><body>
[Wed Jul 23 09:04:46.843944 2014] [:error] [pid 30045:tid 140685813683968] <h1>Length Required</h1>
[Wed Jul 23 09:04:46.843951 2014] [:error] [pid 30045:tid 140685813683968] <p>A request of the requested method PUT requires a valid Content-length.<br />
[Wed Jul 23 09:04:46.843957 2014] [:error] [pid 30045:tid 140685813683968] </p>
[Wed Jul 23 09:04:46.843963 2014] [:error] [pid 30045:tid 140685813683968] </body></html>
[Wed Jul 23 09:04:46.843969 2014] [:error] [pid 30045:tid 140685813683968]
[Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required [first 60 chars of response] <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
[Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] <html><he
[Wed Jul 23 09:04:46.844567 2014] [:error] [pid 30045:tid 140685813683968] Traceback (most recent call last):
[Wed Jul 23 09:04:46.844573 2014] [:error] [pid 30045:tid 140685813683968]   File "/opt/stack/python-swiftclient/swiftclient/client.py", line 1208, in _retry
[Wed Jul 23 09:04:46.844582 2014] [:error] [pid 30045:tid 140685813683968]     rv = func(self.url, self.token, *args, **kwargs)
[Wed Jul 23 09:04:46.844588 2014] [:error] [pid 30045:tid 140685813683968]   File "/opt/stack/python-swiftclient/swiftclient/client.py", line 981, in put_object
[Wed Jul 23 09:04:46.844594 2014] [:error] [pid 30045:tid 140685813683968]     http_response_content=body)
[Wed Jul 23 09:04:46.844601 2014] [:error] [pid 30045:tid 140685813683968] ClientException: Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required [first 60 chars of response] <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
[Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] <html><he
[Wed Jul 23 09:04:46.844879 2014] [:error] [pid 30045:tid 140685813683968] Recoverable error: Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length Required [first 60 chars of response] <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
[Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968] <html><he
[Wed Jul 23 09:04:46.854643 2014] [:error] [pid 30045:tid 140685813683968] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 10.0.1.60
[Wed Jul 23 09:04:46.855247 2014] [:error] [pid 30045:tid 140685813683968] DEBUG:urllib3.connectionpool:Setting read timeout to None
[Wed Jul 23 09:04:46.888503 2014] [:error] [pid 30045:tid 140685813683968] DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1358
[Wed Jul 23 09:04:46.892722 2014] [:error] [pid 30045:tid 140685813683968] INFO:urllib3.connectionpool:Starting new HTTP connection (1): 10.0.1.60
[Wed Jul 23 09:04:46.894144 2014] [:error] [pid 30045:tid 140685813683968] DEBUG:urllib3.connectionpool:Setting read timeout to None
[Wed Jul 23 09:04:46.910724 2014] [:error] [pid 30045:tid 140685813683968] DEBUG:urllib3.connectionpool:"GET /v2.0/tenants HTTP/1.1" 200 231

** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352256 Title: Uploading a new object fails with Ceph as object storage backend using RadosGW Status in OpenStack
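The "411 Length Required" above means the Apache front end for RadosGW rejected a PUT that did not carry a Content-Length header (a client that cannot determine the body length up front may fall back to chunked transfer, which this server refuses). A sketch of building the request headers with an explicit length; the helper name and token value are illustrative.

```python
# Sketch: supply an explicit Content-Length for an object PUT so the
# server never sees a length-less (chunked) request and answers 411.

def object_put_headers(body, auth_token):
    """Headers for an object PUT with a fully-buffered body."""
    return {
        "X-Auth-Token": auth_token,
        # Explicit length avoids Transfer-Encoding: chunked, which the
        # Apache/RadosGW front end rejects with "411 Length Required".
        "Content-Length": str(len(body)),
    }
```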
[Yahoo-eng-team] [Bug 909096] Re: LinuxOVSInterfaceDriver never deletes the OVS ports it creates
The fix was released in 2012.1: https://github.com/openstack/nova/commit/1265104b873d4cd791cecc62134ef874b4656003 ** Changed in: nova Status: Confirmed => Fix Released -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/909096 Title: LinuxOVSInterfaceDriver never deletes the OVS ports it creates Status in OpenStack Compute (Nova): Fix Released Bug description: Dan noticed this while looking at the code: we never actually delete the OVS ports that we create. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/909096/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352269] [NEW] Line chart y-axis label should be hideable
Public bug reported: We should allow showing the axis without labels. This design will be used in small charts on overview pages. The x-axis label cannot be hidden in Rickshaw, which is why this applies only to the y-axis. ** Affects: horizon Importance: Undecided Assignee: Ladislav Smola (lsmola) Status: In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352269 Title: Line chart y-axis label should be hideable Status in OpenStack Dashboard (Horizon): In Progress Bug description: We should allow showing the axis without labels. This design will be used in small charts on overview pages. The x-axis label cannot be hidden in Rickshaw, which is why this applies only to the y-axis. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352269/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1214341] Re: Not all db.sqla.session methods are wrapped by wrap_db_error
** Also affects: cinder Importance: Undecided Status: New ** Changed in: cinder Assignee: (unassigned) => Ivan Kolodyazhny (e0ne) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1214341 Title: Not all db.sqla.session methods are wrapped by wrap_db_error Status in Cinder: New Status in OpenStack Bare Metal Provisioning Service (Ironic): Fix Released Status in OpenStack Identity (Keystone): In Progress Status in Oslo - a Library of Common OpenStack Code: Fix Released Bug description: first(), all(), begin(), commit() and other public methods can raise a number of exceptions that should be wrapped in any case. To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1214341/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
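A standalone sketch of the pattern this report asks for: wrapping session methods so that whatever a method like first(), all(), begin() or commit() raises surfaces as one wrapped exception type. DBError and wrap_db_error here are simplified stand-ins for the oslo originals, for illustration only.

```python
# Sketch: a decorator that wraps any exception raised by a DB session
# method in a single wrapped type, the behavior wrap_db_error provides.

class DBError(Exception):
    """Stand-in for the wrapped DB exception type."""


def wrap_db_error(fn):
    def wrapped(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except DBError:
            raise                # already wrapped, pass through
        except Exception as exc:
            raise DBError(exc)   # wrap everything else
    return wrapped
```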
[Yahoo-eng-team] [Bug 1352314] [NEW] Meaningless replacing of slashes with dashes in Keystone tokens
Public bug reported: It looks like Keystone uses a somewhat strange variant of Base64 that excludes slashes - https://github.com/openstack/keystone/commit/bcc0f6d6fc1f674bc4b340d041b28bc1cfddf66a http://tools.ietf.org/html/rfc4648 shows that slash is a valid Base64 character. So currently, for some unknown reason, Keystone replaces slashes with dashes when returning tokens and does the opposite when reading them. I understand that fixing this would break backwards compatibility, but it makes sense at least to document this strange behaviour so that developers accessing Keystone through something other than Keystone's original bindings (e.g. from other languages) are not caught by surprise. ** Affects: keystone Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1352314 Title: Meaningless replacing of slashes with dashes in Keystone tokens Status in OpenStack Identity (Keystone): New Bug description: It looks like Keystone uses a somewhat strange variant of Base64 that excludes slashes - https://github.com/openstack/keystone/commit/bcc0f6d6fc1f674bc4b340d041b28bc1cfddf66a http://tools.ietf.org/html/rfc4648 shows that slash is a valid Base64 character. So currently, for some unknown reason, Keystone replaces slashes with dashes when returning tokens and does the opposite when reading them. I understand that fixing this would break backwards compatibility, but it makes sense at least to document this strange behaviour so that developers accessing Keystone through something other than Keystone's original bindings (e.g. from other languages) are not caught by surprise.
To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1352314/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
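For developers hitting Keystone from other bindings, the behavior described above can be sketched as a pair of helpers: swap '/' for '-' on encode, swap back on decode. Note this is not standard URL-safe Base64 (RFC 4648 base64url maps '+' to '-' and '/' to '_'); the function names here are illustrative.

```python
import base64

# Sketch of the described Keystone convention: slashes in the Base64
# token are replaced with dashes on output, and restored on input.

def keystone_encode(raw_bytes):
    """Base64-encode, then swap '/' for '-' as Keystone reportedly does."""
    return base64.b64encode(raw_bytes).decode("ascii").replace("/", "-")


def keystone_decode(token):
    """Swap '-' back to '/' before standard Base64 decoding."""
    return base64.b64decode(token.replace("-", "/"))
```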
[Yahoo-eng-team] [Bug 1337089] Re: Pip temporarily disappeared?
https://review.openstack.org/#/c/108406/

** Also affects: keystone Importance: Undecided Status: New ** Changed in: keystone Status: New => Triaged ** Changed in: keystone Importance: Undecided => Medium ** Changed in: keystone Status: Triaged => Fix Committed ** Changed in: keystone Assignee: (unassigned) => David Stanek (dstanek) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1337089 Title: Pip temporarily disappeared? Status in OpenStack Identity (Keystone): Fix Committed Status in OpenStack Core Infrastructure: Incomplete Bug description:

Seeing an interesting bug; it may just be transient, but recording it in case others see it.

/usr/local/jenkins/slave_scripts/run-unittests.sh: line 34: tox: command not found
/usr/local/jenkins/slave_scripts/run-unittests.sh: line 39: .tox/py26/bin/pip: No such file or directory

Logs @ https://jenkins02.openstack.org/job/gate-taskflow-python26/1316/console

Perhaps just a slave going wonky.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1337089/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352341] [NEW] reboot_instance testcase missing from tempest
Public bug reported: reboot_instance test case missing from tempest [test_reboot_instance(), similar to test_start_instance() and test_stop_instance()] ** Affects: tempest Importance: Undecided Status: New ** Tags: tempest ** Project changed: nova => tempest -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352341 Title: reboot_instance testcase missing from tempest Status in Tempest: New Bug description: reboot_instance test case missing from tempest [test_reboot_instance(), similar to test_start_instance() and test_stop_instance()] To manage notifications about this bug go to: https://bugs.launchpad.net/tempest/+bug/1352341/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1348244] Re: debug log messages need to be unicode
Marking as invalid in Cinder.

** Changed in: cinder Milestone: juno-3 => None ** Changed in: cinder Status: In Progress => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1348244 Title: debug log messages need to be unicode Status in Cinder: Invalid Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: In Progress Bug description:

Debug logs should be: LOG.debug("message") should be LOG.debug(u"message")

Before the translation of debug log messages was removed, the translation was returning unicode. Now that they are no longer translated, they need to be explicitly marked as unicode. This was confirmed by discussion with dhellmann. See 2014-07-23T13:48:23 in this log: http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-07-23.log

The problem was discovered when an exception was used as replacement text in a debug log message:

LOG.debug("Failed to mount image %(ex)s", {'ex': e})

In particular, it was discovered as part of enabling lazy translation, where the exception message is replaced with an object that does not support str(). Note that this would also fail without lazy enabled, if a translation for the exception message was provided that was unicode.

Example trace:

Traceback (most recent call last):
  File "nova/tests/virt/disk/test_api.py", line 78, in test_can_resize_need_fs_type_specified
    self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
  File "nova/virt/disk/api.py", line 208, in is_image_partitionless
    fs.setup()
  File "nova/virt/disk/vfs/localfs.py", line 80, in setup
    LOG.debug("Failed to mount image %(ex)s", {'ex': e})
  File "/usr/lib/python2.7/logging/__init__.py", line 1412, in debug
    self.logger.debug(msg, *args, **kwargs)
  File "/usr/lib/python2.7/logging/__init__.py", line 1128, in debug
    self._log(DEBUG, msg, args, **kwargs)
  File "/usr/lib/python2.7/logging/__init__.py", line 1258, in _log
    self.handle(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 1268, in handle
    self.callHandlers(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 1308, in callHandlers
    hdlr.handle(record)
  File "nova/test.py", line 212, in handle
    self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
  File "/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py", line 167, in __str__
    raise UnicodeError(msg)
UnicodeError: Message objects do not support str() because they may contain non-ascii characters. Please use unicode() or translate() instead.

==
FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails
tags: worker-3

To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1348244/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
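The fix the report describes can be sketched as follows: give the format string a u"" prefix so interpolating a replacement value that contains non-ascii text yields unicode instead of raising during formatting. `format_debug` is an illustrative helper, not nova's actual code.

```python
# Sketch: an explicitly-unicode debug template, as the report requires
# now that translation (which returned unicode) no longer wraps it.

def format_debug(ex):
    # u"" prefix: interpolation works even when ex holds non-ascii text.
    return u"Failed to mount image %(ex)s" % {"ex": ex}
```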
[Yahoo-eng-team] [Bug 1349792] Re: nova-client returns Unauthorized (HTTP 401) when provider is pki
** Changed in: keystone Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1349792 Title: nova-client returns Unauthorized (HTTP 401) when provider is pki Status in OpenStack Identity (Keystone): Invalid Bug description:

keystone service status is all right, but when using a nova command it returns an HTTP 401 error; when I change the keystone.conf provider from pki to uuid, nova commands work OK. The log is below:

[root@localhost keystone(keystone_admin)]# nova --debug list

REQ: curl -i 'http://127.0.0.1:5000/v2.0/tokens' -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "keystone"}}}'

INFO (connectionpool:203) Starting new HTTP connection (1): 127.0.0.1
DEBUG (connectionpool:295) "POST /v2.0/tokens HTTP/1.1" 200 9724
RESP: [200] {'date': 'Sun, 06 Jan 2002 01:49:30 GMT', 'content-type': 'application/json', 'content-length': '9724', 'vary': 'X-Auth-Token'}
RESP BODY: {"access": {"token": {"issued_at": "2002-01-06T01:49:30.187415", "expires": "2002-01-07T01:49:30Z", "id":
MIIRCAYJKoZIhvcNAQcCoIIQ+TCCEPUCAQExCTAHBgUrDgMCGjCCD14GCSqGSIb3DQEHAaCCD08Egg9LeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAwMi0wMS0wNlQwMTo0OTozMC4xODc0MTUiLCAiZXhwaXJlcyI6ICIyMDAyLTAxLTA3VDAxOjQ5OjMwWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogImFkbWluIHRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjExYTU4M2Q3ZGRmMjRjOTRhMGI1Y2NjYmFjYTM3ZTZjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8xMWE1ODNkN2RkZjI0Yzk0YTBiNWNjY2JhY2EzN2U2YyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NC92Mi8xMWE1ODNkN2RkZjI0Yzk0YTBiNWNjY2JhY2EzN2U2YyIsICJpZCI6ICIxYzVkNGVkMTM2NWU0YTgyYjhlMmYyN2MwZTFhNWNiMSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc0L3YyLzExYTU4M2Q3ZGRmMjRjOTRhMGI1Y2NjYmFjYTM3ZTZjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm NvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6OTY5Ni8iLCAiaWQiOiAiNmUwYzVjNGNmOWQxNDQwNjhhNGU2NDlkZTdkODJjODIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjIvMTFhNTgzZDdkZGYyNGM5NGEwYjVjY2NiYWNhMzdlNmMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjg3NzYvdjIvMTFhNTgzZDdkZGYyNGM5NGEwYjVjY2NiYWNhMzdlNmMiLCAiaWQiOiAiYmQ4NTc3OThiZWYzNDA1NDkwY2RlNmMyMDUxNDcwNWMiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92Mi8xMWE1ODNkN2RkZjI0Yzk0YTBiNWNjY2JhY2EzN2U2YyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcl92MiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODA4MCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJ 
MIjogImh0dHA6Ly8xMjcuMC4wLjE6ODA4MCIsICJpZCI6ICJhZjI1ZDNhYmY0MzQ0NmI2OTgyNzUzZTUxNWQxYmY1YSIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4MDgwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUiOiAic3dpZnRfczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTI3LjAuMC4xOjkyOTIiLCAiaWQiOiAiMDkwY2JiNTM0MTIyNDZmOWJmZWEyZjUwMGRjOTExZDIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3NyIsICJpZCI6ICIxYWFmNTJkYzc3NDM0YTRlYjUxNjk2MzNlZWRhZmYzNyIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzEyNy4wLjAuMTo4Nzc3In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogIm1ldGVyaW5nIiwgIm5hbWUiOiAiY2VpbG9tZXRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMjcuMC4wLjE6ODc3Ni92MS8xMWE1
[Yahoo-eng-team] [Bug 1309430] Re: openstack role add RHEL error
This looks like it's an issue between devstack and openstackclient. This was filed months ago though, is it still an issue? ** Project changed: keystone => devstack ** Changed in: devstack Status: New => Incomplete ** Also affects: python-openstackclient Importance: Undecided Status: New ** Changed in: python-openstackclient Status: New => Incomplete -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1309430 Title: openstack role add RHEL error Status in devstack - openstack dev environments: Incomplete Status in OpenStack Command Line Client: Incomplete Status in “keystone” package in Ubuntu: Invalid Bug description: I set up devstack under Ubuntu without a proxy, and stack.sh works. Under RHEL, after fixing some odd errors, it finally got stuck at the error message below while setting up the Keystone user accounts. Not sure why, of the ADMIN_USER and ADMIN_ROLE parameters, one has quotes and one doesn't. OS : Red Hat Enterprise Linux Server release 6.4 (Santiago) Kernel : 2.6.32-279.11.1.el6.x86_64 user/devstack/lib/keystone

    function create_keystone_accounts {
        # admin
        ADMIN_TENANT=$(openstack project create \
            admin \
            | grep " id " | get_field 2)
        ADMIN_USER=$(openstack user create \
            admin \
            --project $ADMIN_TENANT \
            --email ad...@example.com \
            --password $ADMIN_PASSWORD \
            | grep " id " | get_field 2)
        ADMIN_ROLE=$(openstack role create \
            admin \
            | grep " id " | get_field 2)
        openstack role add \
            $ADMIN_ROLE \
            --project $ADMIN_TENANT \
            --user $ADMIN_USER
    }

Error Message :

    ERROR: cliff.app Not supported proxy scheme None
    + ADMIN_ROLE=
    + openstack role add --project --user
    usage: openstack role add [-h] [-f {shell,table,value}] [-c COLUMN] [--max-width <integer>] [--variable VARIABLE] [--prefix PREFIX] --project <project> --user <user> <role>
    openstack role add: error: argument --project: expected one argument
    + exit_trap
    + local r=2
    ++ jobs -p
    + jobs=
    + [[ -n '' ]]
    + exit 2

To manage notifications about this
bug go to: https://bugs.launchpad.net/devstack/+bug/1309430/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1348244] Re: debug log messages need to be unicode
Rushi, putting this back to confirmed and targeted. This is still an issue but we are fixing it from a different approach. I want to leave this open to track syncing the fix from Oslo. ** Changed in: cinder Milestone: None => juno-3 ** Changed in: cinder Status: Invalid => Confirmed -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1348244 Title: debug log messages need to be unicode Status in Cinder: Confirmed Status in OpenStack Compute (Nova): In Progress Status in Oslo - a Library of Common OpenStack Code: In Progress Bug description: Debug logs should be unicode: LOG.debug("message") should be LOG.debug(u"message"). Before the translation of debug log messages was removed, the translation was returning unicode. Now that they are no longer translated they need to be explicitly marked as unicode. This was confirmed by discussion with dhellman. See 2014-07-23T13:48:23 in this log: http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-07-23.log The problem was discovered when an exception was used as replacement text in a debug log message: LOG.debug("Failed to mount image %(ex)s"), {'ex': e}) In particular it was discovered as part of enabling lazy translation, where the exception message is replaced with an object that does not support str(). Note that this would also fail without lazy enabled, if a translation for the exception message was provided that was unicode.
Example trace: Traceback (most recent call last): File nova/tests/virt/disk/test_api.py, line 78, in test_can_resize_need_fs_type_specified self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True)) File nova/virt/disk/api.py, line 208, in is_image_partitionless fs.setup() File nova/virt/disk/vfs/localfs.py, line 80, in setup LOG.debug(Failed to mount image %(ex)s), {'ex': e}) File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug self.logger.debug(msg, *args, **kwargs) File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug self._log(DEBUG, msg, args, **kwargs) File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log self.handle(record) File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle self.callHandlers(record) File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers hdlr.handle(record) File nova/test.py, line 212, in handle self.format(record) File /usr/lib/python2.7/logging/__init__.py, line 723, in format return fmt.format(record) File /usr/lib/python2.7/logging/__init__.py, line 464, in format record.message = record.getMessage() File /usr/lib/python2.7/logging/__init__.py, line 328, in getMessage msg = msg % self.args File /opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py, line 167, in __str__ raise UnicodeError(msg) UnicodeError: Message objects do not support str() because they may contain non-ascii characters. Please use unicode() or translate() instead. == FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails tags: worker-3 To manage notifications about this bug go to: https://bugs.launchpad.net/cinder/+bug/1348244/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
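The failure mode in the trace can be mimicked with a toy class whose str() raises; this is a minimal sketch, and the Message class here is a hypothetical stand-in for oslo.i18n's, not its actual code:

```python
class Message(object):
    """Toy stand-in for an oslo.i18n Message: refuses plain str() coercion."""
    def __init__(self, msgid):
        self.msgid = msgid

    def __str__(self):
        raise UnicodeError("Message objects do not support str(); "
                           "use unicode() or translate() instead.")

    def translate(self):
        return self.msgid


msg = Message("Failed to mount image %(ex)s")

# %-interpolating the object into a byte/native string calls str() on it,
# which blows up exactly as in the traceback above:
try:
    log_line = "DEBUG: %s" % msg
except UnicodeError:
    log_line = None

assert log_line is None
# Translating first (or, on Python 2, using a u"" format string) is safe:
assert "mount image" in msg.translate()
```

This is why the fix is to mark the format strings as unicode at the call site rather than rely on translation machinery to do the coercion.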
[Yahoo-eng-team] [Bug 1352405] [NEW] Storage on hypervisors page incorrect for shared storage
Public bug reported: The storage total and storage used shown on the Hypervisors page does not take account of the shared storage case. We have shared storage for /var/lib/nova/instances (currently using Gluster) and Horizon computes a simple addition of the usage across the compute nodes. The total and used figures are incorrect. ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352405 Title: Storage on hypervisors page incorrect for shared storage Status in OpenStack Dashboard (Horizon): New Bug description: The storage total and storage used shown on the Hypervisors page does not take account of the shared storage case. We have shared storage for /var/lib/nova/instances (currently using Gluster) and Horizon computes a simple addition of the usage across the compute nodes. The total and used figures are incorrect. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352405/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
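The double counting described above can be sketched in a few lines; the per-node stats, the `fs_id` key, and the numbers are all illustrative, not Horizon's actual data model:

```python
# Two compute nodes backed by the same Gluster volume for
# /var/lib/nova/instances (hypothetical data):
nodes = [
    {"host": "compute1", "fs_id": "gluster-vol1", "total_gb": 1000, "used_gb": 200},
    {"host": "compute2", "fs_id": "gluster-vol1", "total_gb": 1000, "used_gb": 200},
]

# Naive summation (what the Hypervisors page does) counts the shared
# backend once per node:
naive_total = sum(n["total_gb"] for n in nodes)

# Deduplicating by a shared-filesystem identifier counts it once:
unique = {n["fs_id"]: n for n in nodes}
dedup_total = sum(n["total_gb"] for n in unique.values())
dedup_used = sum(n["used_gb"] for n in unique.values())

assert naive_total == 2000                       # the misleading figure
assert (dedup_total, dedup_used) == (1000, 200)  # the real capacity
```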
[Yahoo-eng-team] [Bug 1352404] [NEW] Need to disable IPv6 in forms
Public bug reported: In some OpenStack settings, you could have a network with no IPv6 capabilities. This becomes misleading to users when they see IPv6 fields in the forms. Also, in most cases the network API will raise an exception. We need a way to disable the IPv6 fields. ** Affects: horizon Importance: Undecided Assignee: George Peristerakis (george-peristerakis) Status: New ** Changed in: horizon Assignee: (unassigned) => George Peristerakis (george-peristerakis) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352404 Title: Need to disable IPv6 in forms Status in OpenStack Dashboard (Horizon): New Bug description: In some OpenStack settings, you could have a network with no IPv6 capabilities. This becomes misleading to users when they see IPv6 fields in the forms. Also, in most cases the network API will raise an exception. We need a way to disable the IPv6 fields. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352404/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1347862] Re: keystone will not auth users if there is a bad endpoint
** Also affects: keystone/icehouse Importance: Undecided Status: New ** Changed in: keystone/icehouse Status: New => In Progress ** Changed in: keystone/icehouse Importance: Undecided => Medium ** Changed in: keystone Importance: Undecided => Medium ** Changed in: keystone/icehouse Assignee: (unassigned) => David Stanek (dstanek) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1347862 Title: keystone will not auth users if there is a bad endpoint Status in OpenStack Identity (Keystone): Confirmed Status in Keystone icehouse series: In Progress Bug description: I deployed a bad endpoint today and Keystone's failure case was that it was refusing to authenticate users. This was a rather severe failure for a bad swift admin URL. An error level log is fine, but I'd prefer not to impact the rest of my users. 2014-07-23 16:33:39.435 6722 ERROR keystone.catalog.core [-] Malformed endpoint http://foo.com:80/v1/KEY_%{tenant_id)s - incomplete format (are you missing a type notifier ?) The {tenant_ids) is incorrect obviously. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1347862/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
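The malformed substitution in that log can be checked directly against Python's %-formatting; the URLs and tenant id below are illustrative:

```python
# A well-formed endpoint template uses "%(tenant_id)s"; the deployed one
# had "%{tenant_id)s", which %-formatting rejects outright.
good = "http://foo.com:80/v1/KEY_%(tenant_id)s"
bad = "http://foo.com:80/v1/KEY_%{tenant_id)s"
values = {"tenant_id": "abc123"}

assert good % values == "http://foo.com:80/v1/KEY_abc123"

try:
    bad % values
    formatted = True
except ValueError:  # "unsupported format character '{'"
    formatted = False
assert not formatted
```

Validating catalog templates at load time with exactly this kind of trial substitution would let Keystone log the error without refusing to authenticate users.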
[Yahoo-eng-team] [Bug 1351339] Re: Thousands of [:error] in keystone debug logs makes it hard to debug
There's not a single ERROR level log there from Keystone - this looks to be coming from apache's log config in devstack? ** Project changed: keystone => devstack -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1351339 Title: Thousands of [:error] in keystone debug logs makes it hard to debug Status in devstack - openstack dev environments: New Bug description: Look here: http://logs.openstack.org/31/31/2/gate/gate-tempest-dsvm-full/58b9b4d/logs/screen-key.txt.gz Search for "error" and you get ~48K hits. This makes it pretty hard to debug actual errors in the keystone logs. I'm guessing this is keystone reading from the apache httpd stderr stream. To manage notifications about this bug go to: https://bugs.launchpad.net/devstack/+bug/1351339/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352428] [NEW] HyperV Shutting Down state is not mapped
Public bug reported: The method which gets VM related information can fail if the VM is in an intermediary state such as "Shutting down". The reason is that some of the Hyper-V specific VM states are not defined as possible states. This will result in a KeyError, as shown below: http://paste.openstack.org/show/90015/ ** Affects: nova Importance: Undecided Status: New ** Tags: hyper-v -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352428 Title: HyperV Shutting Down state is not mapped Status in OpenStack Compute (Nova): New Bug description: The method which gets VM related information can fail if the VM is in an intermediary state such as "Shutting down". The reason is that some of the Hyper-V specific VM states are not defined as possible states. This will result in a KeyError, as shown below: http://paste.openstack.org/show/90015/ To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352428/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
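The crash pattern is a direct dict lookup on a state code that was never mapped; the state names and codes in this sketch are illustrative, not Hyper-V's actual enumeration:

```python
# Toy power-state map missing an intermediary state (hypothetical codes):
POWER_STATE_MAP = {
    2: "running",
    3: "shutoff",
}

SHUTTING_DOWN = 4  # an intermediary state absent from the map

# Direct indexing raises KeyError, as in the pasted trace:
try:
    POWER_STATE_MAP[SHUTTING_DOWN]
    crashed = False
except KeyError:
    crashed = True
assert crashed

# A defensive lookup with a fallback keeps the info call alive until the
# state settles (or until the missing state is added to the map):
assert POWER_STATE_MAP.get(SHUTTING_DOWN, "nostate") == "nostate"
```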
[Yahoo-eng-team] [Bug 1351438] Re: Code repetition in modal body and footer
The more I think about this, the more I realize that this is more a blueprint than a bug... ** Changed in: horizon Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1351438 Title: Code repetition in modal body and footer Status in OpenStack Dashboard (Horizon): Invalid Bug description: The modal code below is very common and we see it repeated in many templates.

    {% block modal-body %}
      <div class="left">
        <fieldset>
          {% include "horizon/common/_form_fields.html" %}
        </fieldset>
      </div>
      <div class="right">
        <h3>{% trans "Description:" %}</h3>
        <p>{% trans "Choose the rule you want to remove." %}</p>
      </div>
    {% endblock %}

    {% block modal-footer %}
      <input class="btn btn-primary pull-right" type="submit" value="{% trans "Some value" %}" />
      <a href="{% url 'url' %}" class="btn btn-default secondary cancel close">{% trans "Cancel" %}</a>
    {% endblock %}

We need to provide a mechanism to replace these values in the base modal_form.html in python instead. That way, there is less template clutter. This would keep the code cleaner and would likely work better with serialization for angular in the future. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1351438/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352459] [NEW] Wrong data filled for email while Create User
Public bug reported: 1. Log in to DevStack as the admin user. 2. Save the user password in the browser. 3. Go to Identity -> Users -> Create User. 4. Observe that the email field is filled with the username, and the password data is auto-filled. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: "User_Data.PNG" https://bugs.launchpad.net/bugs/1352459/+attachment/4169694/+files/User_Data.PNG -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352459 Title: Wrong data filled for email while Create User Status in OpenStack Dashboard (Horizon): New Bug description: 1. Log in to DevStack as the admin user. 2. Save the user password in the browser. 3. Go to Identity -> Users -> Create User. 4. Observe that the email field is filled with the username, and the password data is auto-filled. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352459/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352102] Re: users are unable to create ports on provider networks
** Also affects: nova (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352102 Title: users are unable to create ports on provider networks Status in OpenStack Compute (Nova): New Status in “nova” package in Ubuntu: New Bug description: After commit da66d50010d5b1ba1d7fc9c3d59d81b6c01bb0b0, my users are unable to boot VMs attached to provider networks. This is a serious regression for me, as we mostly use provider networks. The bug which originated the commit: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1284718 To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352102/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352502] [NEW] form validation error text is unaligned
Public bug reported: There is too much spacing before the error message. See image. ** Affects: horizon Importance: Undecided Status: New ** Tags: bootstrap ** Attachment added: Untitled.png https://bugs.launchpad.net/bugs/1352502/+attachment/4169741/+files/Untitled.png ** Tags added: bootstrap -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352502 Title: form validation error text is unaligned Status in OpenStack Dashboard (Horizon): New Bug description: There is too much spacing before the error message. See image. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352502/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352509] [NEW] RBD store: need to use READ_CHUNKSIZE in the ImageIterator
Public bug reported: See https://bugs.launchpad.net/glance/+bug/1336168 Change 4e0c563b8f3a5ced8f65fcca83d341a97729a5d4 was incomplete and missed a variable in the RBD store. ** Affects: glance Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance. https://bugs.launchpad.net/bugs/1352509 Title: RBD store: need to use READ_CHUNKSIZE in the ImageIterator Status in OpenStack Image Registry and Delivery Service (Glance): New Bug description: See https://bugs.launchpad.net/glance/+bug/1336168 Change 4e0c563b8f3a5ced8f65fcca83d341a97729a5d4 was incomplete and missed a variable in the RBD store. To manage notifications about this bug go to: https://bugs.launchpad.net/glance/+bug/1352509/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
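A chunked image iterator of the kind the fix restores has this general shape; the 64 KiB chunk size is an assumption for illustration, not Glance's actual READ_CHUNKSIZE value, and the function is a generic sketch rather than the RBD store's code:

```python
import io

CHUNK = 64 * 1024  # illustrative chunk size, not the real READ_CHUNKSIZE

def image_iterator(fileobj, chunk_size=CHUNK):
    """Yield the image in fixed-size chunks instead of one large read."""
    while True:
        data = fileobj.read(chunk_size)
        if not data:
            break
        yield data

payload = b"x" * (3 * CHUNK + 10)
chunks = list(image_iterator(io.BytesIO(payload)))
assert len(chunks) == 4                        # three full chunks + remainder
assert sum(len(c) for c in chunks) == len(payload)
```

The bug is that one code path in the RBD store kept using a stale chunk-size variable after change 4e0c563b renamed it, so reads there were not sized consistently with the rest of the store.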
[Yahoo-eng-team] [Bug 1352510] [NEW] Delete and re-add of same node to compute_nodes table is broken
Public bug reported: When a compute node is deleted (or marked deleted) in the DB and another compute node is re-added with the same name, things break. This is because the resource tracker caches the compute node object/dict and uses the 'id' to update the record. When this happens, rt.update_available_resources will raise a ComputeHostNotFound. This ends up short-circuiting the full run of the update_available_resource() periodic task. This mostly applies when using a virt driver where a nova-compute manages more than 1 hypervisor. ** Affects: nova Importance: Medium Assignee: Chris Behrens (cbehrens) Status: In Progress ** Changed in: nova Status: New => In Progress ** Changed in: nova Assignee: (unassigned) => Chris Behrens (cbehrens) ** Changed in: nova Importance: Undecided => Medium -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352510 Title: Delete and re-add of same node to compute_nodes table is broken Status in OpenStack Compute (Nova): In Progress Bug description: When a compute node is deleted (or marked deleted) in the DB and another compute node is re-added with the same name, things break. This is because the resource tracker caches the compute node object/dict and uses the 'id' to update the record. When this happens, rt.update_available_resources will raise a ComputeHostNotFound. This ends up short-circuiting the full run of the update_available_resource() periodic task. This mostly applies when using a virt driver where a nova-compute manages more than 1 hypervisor. To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352510/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
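The stale-cache failure can be modeled with a toy tracker; all names here are hypothetical stand-ins, not Nova's actual classes:

```python
# Stand-in compute_nodes table, keyed by id:
table = {}
counter = {"next": 1}

def add_node(name):
    nid = counter["next"]
    counter["next"] += 1
    table[nid] = {"id": nid, "name": name, "deleted": False}
    return table[nid]

class Tracker:
    """Toy resource tracker: caches the node row, including its 'id'."""
    def __init__(self, node):
        self.compute_node = dict(node)  # this cached 'id' can go stale

    def update_available_resource(self):
        row = table.get(self.compute_node["id"])
        if row is None or row["deleted"]:
            raise LookupError("ComputeHostNotFound")  # stand-in exception
        row["name"] = self.compute_node["name"]

tracker = Tracker(add_node("node-1"))
table[1]["deleted"] = True   # node deleted (or marked deleted)...
add_node("node-1")           # ...then re-added with the same name, new id 2

try:
    tracker.update_available_resource()
    ok = True
except LookupError:
    ok = False
assert not ok  # the periodic task short-circuits here
```

Refreshing the cache by looking the node up by (host, name) instead of the cached id is the kind of change that avoids this.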
[Yahoo-eng-team] [Bug 1352516] [NEW] cinder-volumes has insufficient free space
Public bug reported: Project > Volume > Create Volume fails and only shows a generic 'Error' in the table. The user has to go to /var/log/cinder/volume.log to see why it failed. It would be nice to propagate the error message (or something) like the one in the volume.log: 'Volume group cinder-volumes has insufficient free space (1279 extents): 1280 required.\n' Related: I believe Instances has a nice way of letting the user know about the failure... it will show a new section in the Instances table: Fault Message No valid host was found. Code 500 Details File /opt/stack/nova/nova/scheduler/filter_scheduler.py, line 107, in schedule_run_instance raise exception.NoValidHost(reason=) Created Aug. 4, 2014, 6:50 p.m. --- volume.log trace: 2014-05-09 05:15:24.365 32325 TRACE oslo.messaging.rpc.dispatcher Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n volume-e78a37c4-69b5-4b02-91ea-8f8131da5062 cinder-volumes -L 5g 2014-05-09 05:15:24.365 32325 TRACE oslo.messaging.rpc.dispatcher Exit code: 5 2014-05-09 05:15:24.365 32325 TRACE oslo.messaging.rpc.dispatcher Stdout: '' 2014-05-09 05:15:24.365 32325 TRACE oslo.messaging.rpc.dispatcher Stderr: ' Volume group cinder-volumes has insufficient free space (1279 extents): 1280 required.\n' 2014-05-09 05:15:24.365 32325 TRACE oslo.messaging.rpc.dispatcher 2014-05-09 05:15:24.367 32325 ERROR oslo.messaging._drivers.common [req-fb3ce0ad-17a5-4e8d-bf54-f7ef58f3721c 12854cf133d442a3808b2c5f45d95cd2 31e76abd2fe64fd1af8f94e0bb06487c - - -] Returning exception Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -n volume-e78a37c4-69b5-4b02-91ea-8f8131da5062 cinder-volumes -L 5g Exit code: 5 Stdout: '' Stderr: ' Volume group cinder-volumes has insufficient free space (1279 extents): 1280 required.\n' to caller 2014-05-09 05:15:24.368 32325 ERROR oslo.messaging._drivers.common [req-fb3ce0ad-17a5-4e8d-bf54-f7ef58f3721c 12854cf133d442a3808b2c5f45d95cd2 31e76abd2fe64fd1af8f94e0bb06487c - - -] ['Traceback (most recent call last):\n', ' File /usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 134, in _dispatch_and_reply\nincoming.message))\n', ' File /usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 177, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', ' File /usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, **new_args)\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/manager.py, line 363, in create_volume\n_run_flow()\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/manager.py, line 356, in _run_flow\nflow_engine.run()\n', ' File /usr/lib/python2.6/site-packages/taskflow/utils/lock_utils.py, line 53, in wrapper\nreturn f(*args, **kwargs)\n', ' File /usr/lib/python2.6/site-packages/taskflow/engines/action_engine/engine.py, line 111, in run\nself._run()\n', ' File /usr/lib/python2.6/site-packages/taskflow/engines/action_engine/engine.py, line 121, in _run\nself._revert(misc.Failure())\n', ' File /usr/lib/python2.6/site-packages/taskflow/engines/action_engine/engine.py, line 78, in _revert\nmisc.Failure.reraise_if_any(failures.values())\n', ' File /usr/lib/python2.6/site-packages/taskflow/utils/misc.py, line 558, in reraise_if_any\nfailures[0].reraise()\n', ' File /usr/lib/python2.6/site-packages/taskflow/utils/misc.py, line 565, in reraise\nsix.reraise(*self._exc_info)\n', ' File 
/usr/lib/python2.6/site-packages/taskflow/engines/action_engine/executor.py, line 36, in _execute_task\nresult = task.execute(**arguments)\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/flows/manager/create_volume.py, line 594, in execu te\n**volume_spec)\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/flows/manager/create_volume.py, line 564, in _create_raw_volume\nreturn self.driver.create_volume(volume_ref)\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py, line 196, in create_volume\nmirror_count)\n', ' File /usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py, line 185, in _create_volume\nvg_ref.create_volume(name, size, lvm_type, mirror_count)\n', ' File /usr/lib/python2.6/site-packages/cinder/brick/local_dev/lvm.py, line 474, in create_volume\nrun_as_root=True)\n', ' File /usr/lib/python2.6/site-packages/cinder/utils.py, line 136, in execute\n return processutils.execute(*cmd, **kwargs)\n', ' File /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py, line 173, in execute\ncmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/r ootwrap.conf lvcreate -n volume-e78a37c4-69b5-4b02-91ea-8f8131da5062 cinder-volumes -L 5g\nExit code: 5\nStdout: \'\'\nStderr: \' Volume group
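The arithmetic behind the lvcreate error is simple to check; the 4 MiB extent size is LVM's default and an assumption here (a VG created with a different physical extent size would need different numbers):

```python
# LVM's default physical extent size is 4 MiB, so the 5 GiB volume that
# lvcreate -L 5g requests needs 5 * 1024 / 4 = 1280 extents, one more
# than the 1279 the volume group had free.
EXTENT_MIB = 4                           # assumed default PE size
required_extents = (5 * 1024) // EXTENT_MIB
free_extents = 1279                      # from the error message

assert required_extents == 1280
assert free_extents < required_extents   # hence "insufficient free space"
```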
[Yahoo-eng-team] [Bug 1339107] Re: Kyestone: Auth token not in the request header
Actually, I totally overlooked that the request was in the logs, to POST /v2.0/tokens. There should not be an X-Auth-Token in a request to POST /v2.0/tokens anyway, so that's completely normal. The rest of the logs in the problem description are also completely normal, so far as I can tell. Without further details, all I can say regarding "the admin user is not authorized for some commands" is that there must be some sort of other misconfiguration in the deployment - likely something that should have been handled by packstack? Finally, I don't see how https://www.redhat.com/archives/rdo-list/2014-June/msg00067.html is related? ** Changed in: keystone Status: Incomplete => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1339107 Title: Kyestone: Auth token not in the request header Status in OpenStack Identity (Keystone): Invalid Bug description: Hi, I am using CentOS 6.4, deployed OpenStack Icehouse with Packstack. After the deployment, the admin user is not authorized for some commands, e.g. nova list, neutron net-list, etc. Similar to the bug described in https://bugs.launchpad.net/keystone/+bug/1289935, however the solution patch does not apply. Some output: 2014-07-08 16:52:11.063 1649 INFO eventlet.wsgi.server [-] 10.0.230.14 - - [08/Jul/2014 16:52:11] "POST /v2.0/tokens HTTP/1.1" 200 7520 0.201348 2014-07-08 16:52:11.079 1649 DEBUG keystone.middleware.core [-] Auth token not in the request header. Will not build auth context.
process_request /usr/lib/python2.6/site-packages/keystone/middleware/core.py:271 2014-07-08 16:52:11.081 1649 DEBUG keystone.common.wsgi [-] arg_dict: {} __call__ /usr/lib/python2.6/site-packages/keystone/common/wsgi.py:181 2014-07-08 16:52:11.086 1649 DEBUG keystone.notifications [-] CADF Event: {'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': {'typeURI': 'service/security/account/user', 'host': {'agent': 'python-neutronclient', 'address': '10.0.230.14'}, 'id': 'openstack:ca12b898-95bb-4705-8455-6122aae81752', 'name': u'77aabd14a2e1453489dec37d7b174e58'}, 'target': {'typeURI': 'service/security/account/user', 'id': 'openstack:c9028777-2e4b-4c8a-bf07-4175e1c1f5e9'}, 'observer': {'typeURI': 'service/security', 'id': 'openstack:669df929-fca7-4f71-99cf-0e2af4e981fa'}, 'eventType': 'activity', 'eventTime': '2014-07-08T14:52:11.086573+', 'action': 'authenticate', 'outcome': 'pending', 'id': 'openstack:0d35b838-3cc9-46ed-bdf6-e384583d0982'} _send_audit_notification /usr/lib/python2.6/site-packages/keystone/notifications.py:289 Identical to the issue mentioned here: https://www.redhat.com/archives/rdo-list/2014-June/msg00067.html To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1339107/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1211582] Re: Filter user list by partial attributes
I'm going to ignore the mentions of firstname and lastname, since the patch above ignores them as well. Up until now, a user's email address has been considered metadata on the user that Keystone itself makes no guarantees or assumptions about. If email is to be a first class attribute, I'd like to have a solid use case on why Keystone needs to care about the attribute in addition to name (which can also be an email address, for all keystone cares), and then we need to talk about validation, backwards compatibility and expectations across backends. If someone would like to pursue this, let's pick up this discussion in a spec. If you create one, please link it here for posterity. ** Changed in: keystone Status: In Progress => Opinion -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1211582 Title: Filter user list by partial attributes Status in OpenStack Identity (Keystone): Opinion Bug description: Listing all Users is not practical for large organizations. Even pagination won't be sufficient for non-trivial user lists. Keystone needs to support query parameters to winnow a list of users by firstname, lastname, username, and email address. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1211582/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
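The requested behavior amounts to case-insensitive substring filtering on whichever attributes the query supplies; this sketch is hypothetical (the function and sample data are illustrative, not Keystone's API):

```python
def filter_users(users, **partials):
    """Keep users whose attributes contain every supplied partial value
    (case-insensitive substring match)."""
    def matches(user):
        return all(value.lower() in user.get(attr, "").lower()
                   for attr, value in partials.items())
    return [u for u in users if matches(u)]

users = [
    {"name": "alice", "email": "alice@example.com"},
    {"name": "albert", "email": "al@dept.example.org"},
    {"name": "bob", "email": "bob@example.com"},
]

assert [u["name"] for u in filter_users(users, name="al")] == ["alice", "albert"]
assert [u["name"] for u in filter_users(users, email="example.com")] == ["alice", "bob"]
assert [u["name"] for u in filter_users(users, name="al", email=".org")] == ["albert"]
```

The open questions the comment raises (validation, backend-dependent behavior) are exactly where a sketch like this stops being enough: LDAP and SQL backends would implement the matching very differently.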
[Yahoo-eng-team] [Bug 1352548] [NEW] test_volume_upload times out with internal server errors from glance
Public bug reported: Saw this on stable/icehouse, the test reports a timeout failure: http://logs.openstack.org/03/111803/1/check/check-tempest-dsvm-postgres- full/a17f798/console.html#_2014-08-04_20_20_55_504 2014-08-04 20:20:55.504 | tempest.api.volume.test_volumes_actions.VolumesV2ActionsTestXML.test_volume_upload[gate,image] 2014-08-04 20:20:55.504 | -- 2014-08-04 20:20:55.504 | 2014-08-04 20:20:55.504 | Captured traceback: 2014-08-04 20:20:55.504 | ~~~ 2014-08-04 20:20:55.504 | Traceback (most recent call last): 2014-08-04 20:20:55.504 | File tempest/test.py, line 128, in wrapper 2014-08-04 20:20:55.504 | return f(self, *func_args, **func_kwargs) 2014-08-04 20:20:55.504 | File tempest/api/volume/test_volumes_actions.py, line 106, in test_volume_upload 2014-08-04 20:20:55.505 | self.image_client.wait_for_image_status(image_id, 'active') 2014-08-04 20:20:55.505 | File tempest/services/image/v1/json/image_client.py, line 304, in wait_for_image_status 2014-08-04 20:20:55.505 | raise exceptions.TimeoutException(message) 2014-08-04 20:20:55.505 | TimeoutException: Request timed out 2014-08-04 20:20:55.505 | Details: (VolumesV2ActionsTestXML:test_volume_upload) Time Limit Exceeded! (196s)while waiting for active, but we got saving. There are HTTP 500 errors in the cinder volume logs: Looks like the image id in the failure is a1159899-0f66-47e4-92fb- dd13f92db283 . 
You can see the copy_volume_to_image failure here: http://logs.openstack.org/03/111803/1/check/check-tempest-dsvm-postgres-full/a17f798/logs/screen-c-vol.txt.gz?level=TRACE#_2014-08-04_20_05_11_540 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, in _dispatch_and_reply 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 176, in _dispatch 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 122, in _do_dispatch 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/volume/manager.py, line 719, in copy_volume_to_image 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher payload['message'] = unicode(error) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/openstack/common/excutils.py, line 68, in __exit__ 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher six.reraise(self.type_, self.value, self.tb) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/volume/manager.py, line 713, in copy_volume_to_image 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher image_meta) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File 
/opt/stack/new/cinder/cinder/volume/drivers/lvm.py, line 276, in copy_volume_to_image 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher self.local_path(volume)) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/image/image_utils.py, line 242, in upload_volume 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher image_service.update(context, image_id, {}, image_file) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/image/glance.py, line 311, in update 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher _reraise_translated_image_exception(image_id) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/image/glance.py, line 309, in update 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher **image_meta) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/cinder/cinder/image/glance.py, line 158, in call 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher return getattr(client.images, method)(*args, **kwargs) 2014-08-04 20:05:11.540 14177 TRACE oslo.messaging.rpc.dispatcher File /opt/stack/new/python-glanceclient/glanceclient/v1/images.py, line 329, in update 2014-08-04 20:05:11.540 14177 TRACE
[Yahoo-eng-team] [Bug 1350792] Re: In case of HTTP 40x error on HEAD method, the Content-Length will be set incorrectly.
Comparing just the method difference with curl, I'm not able to reproduce this. Further, this behavior matches our understanding of HEAD. The non-zero Content-Length basically indicates to the client how large the response body would be in a normal GET request. $ curl http://localhost:35357/v3/groups/invalid_group_id/users/invalid_user_id --header "x-auth-token: ADMIN" {"error": {"message": "Could not find user: invalid_user_id", "code": 404, "title": "Not Found"}} $ curl --head http://localhost:35357/v3/groups/invalid_group_id/users/invalid_user_id --header "x-auth-token: ADMIN" HTTP/1.1 404 Not Found Vary: X-Auth-Token Content-Type: application/json Content-Length: 97 Date: Mon, 04 Aug 2014 21:36:14 GMT ** Changed in: keystone Status: New => Invalid -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Keystone. https://bugs.launchpad.net/bugs/1350792 Title: In case of HTTP 40x error on HEAD method, the Content-Length will be set incorrectly. Status in OpenStack Identity (Keystone): Invalid Bug description: [description] In case of HTTP 40x error on HEAD method, the Content-Length will be set incorrectly. The response body is none, so the Content-Length should be zero. But some value was set as the Content-Length. This problem occurred in the following API. - HEAD /v3/groups/{group_id}/users/{user_id} - HEAD /v3/domains/{domain_id}/users/{user_id}/roles/{role_id} - HEAD /v3/domains/{domain_id}/groups/{group_id}/roles/{role_id} - HEAD /v3/OS-INHERIT/domains/{domain_id}/users/{user_id}/roles/{role_id}/inherited_to_projects - HEAD /v3/OS-INHERIT/domains/{domain_id}/groups/{group_id}/roles/{role_id}/inherited_to_projects At this moment, - curl waits for a response body although there is no need to wait. - the keystone server will not send a response body even though some value was set as the Content-Length. If using Apache + mod_wsgi for the keystone deployment, the following message will be shown, but curl will not wait for a response body. 
curl: (18) transfer closed with 162 bytes remaining to read [steps to reproduce] Run the APIs listed in the description section with curl to trigger an HTTP 40x error. At that time, the condition is as follows. - Specify an unauthorized token - Specify an invalid domain_id / user_id / role_id [condition] - Ubuntu 14.04 LTS server - using devstack [about HEAD] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.htm The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1350792/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
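The triager's point can be sketched with a toy server (an illustration of the RFC 2616 section 9.4 behavior, not Keystone's actual WSGI stack): a HEAD response carries the Content-Length the GET body would have, while sending no message body.

```python
import http.server
import json
import threading
import urllib.error
import urllib.request

# JSON error body a GET would return (shape mimics the curl output above).
BODY = json.dumps({"error": {"message": "Could not find user", "code": 404,
                             "title": "Not Found"}}).encode()

class Handler(http.server.BaseHTTPRequestHandler):
    def _send_headers(self):
        self.send_response(404)
        self.send_header("Content-Type", "application/json")
        # Advertise the size the GET body would have -- even for HEAD.
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()

    def do_GET(self):
        self._send_headers()
        self.wfile.write(BODY)

    def do_HEAD(self):
        self._send_headers()  # identical headers, no message body

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def content_length(method):
    req = urllib.request.Request("http://127.0.0.1:%d/x" % port, method=method)
    try:
        resp = urllib.request.urlopen(req)
    except urllib.error.HTTPError as err:  # the 404 arrives as HTTPError
        resp = err
    return int(resp.headers["Content-Length"])

# HEAD's Content-Length matches GET's, even though HEAD returns no body.
print(content_length("GET") == content_length("HEAD"))  # True
```

This is why a non-zero Content-Length on a 404 HEAD is valid: the header describes the hypothetical GET body, and the client (like curl's http.client) must not wait for a body after a HEAD.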
[Yahoo-eng-team] [Bug 1352583] [NEW] DeleteBackup incomplete action_past message
Public bug reported: https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/backups/tables.py#L49 class DeleteBackup(tables.DeleteAction): data_type_singular = _("Volume Backup") data_type_plural = _("Volume Backups") action_past = _("Scheduled deletion of") <== * policy_rules = (("volume", "backup:delete"),) *seems to be missing %(data_types)s like in other places ** Affects: horizon Importance: Undecided Assignee: Cindy Lu (clu-m) Status: New ** Changed in: horizon Assignee: (unassigned) => Cindy Lu (clu-m) ** Description changed: https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/backups/tables.py#L49 class DeleteBackup(tables.DeleteAction): - data_type_singular = _("Volume Backup") - data_type_plural = _("Volume Backups") - action_past = _("Scheduled deletion of") <== seems to be missing %(data_types)s like in other places - policy_rules = (("volume", "backup:delete"),) + data_type_singular = _("Volume Backup") + data_type_plural = _("Volume Backups") + action_past = _("Scheduled deletion of") <== * + policy_rules = (("volume", "backup:delete"),) + + *seems to be missing %(data_types)s like in other places -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). 
https://bugs.launchpad.net/bugs/1352583 Title: DeleteBackup incomplete action_past message Status in OpenStack Dashboard (Horizon): New Bug description: https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/backups/tables.py#L49 class DeleteBackup(tables.DeleteAction): data_type_singular = _("Volume Backup") data_type_plural = _("Volume Backups") action_past = _("Scheduled deletion of") <== * policy_rules = (("volume", "backup:delete"),) *seems to be missing %(data_types)s like in other places To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352583/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
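A minimal standalone sketch of the fix the report suggests (not the actual Horizon patch; `tables.DeleteAction` and Django's translation machinery are stubbed out here so the interpolation can be shown directly):

```python
# `_` stands in for django.utils.translation.ugettext_lazy in this sketch.
_ = lambda s: s

class DeleteBackup:  # stands in for tables.DeleteAction
    data_type_singular = _("Volume Backup")
    data_type_plural = _("Volume Backups")
    # The suggested fix: include %(data_types)s so the past-tense
    # message names what was actually deleted.
    action_past = _("Scheduled deletion of %(data_types)s")
    policy_rules = (("volume", "backup:delete"),)

# How Horizon-style tables interpolate the past-tense message:
print(DeleteBackup.action_past % {"data_types": DeleteBackup.data_type_plural})
# Scheduled deletion of Volume Backups
```

Without the `%(data_types)s` placeholder the user-facing message ends mid-sentence ("Scheduled deletion of"), which is the bug being reported.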
[Yahoo-eng-team] [Bug 1352590] [NEW] [Sahara] A lot of things don't work in new horizon design
Public bug reported: Found by brief surfing: 1. Plugin version selection shows all versions of all plugins 2. large gap on cluster details page 3. no move up/down buttons in scaling ** Affects: horizon Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352590 Title: [Sahara] A lot of things don't work in new horizon design Status in OpenStack Dashboard (Horizon): New Bug description: Found by brief surfing: 1. Plugin version selection shows all versions of all plugins 2. large gap on cluster details page 3. no move up/down buttons in scaling To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352590/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352595] [NEW] nova boot fails when using rbd backend
Public bug reported: Trace ends with: TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] File /opt/stack/nova/nova/virt/libvirt/rbd.py, line 238, in exists TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] except rbd.ImageNotFound: TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] AttributeError: 'module' object has no attribute 'ImageNotFound' It looks like the above module tries to do an "import rbd" and ends up importing itself again instead of the global library module. A quick fix would be renaming the file to rbd2.py and changing the references in driver.py and imagebackend.py, but maybe there is a better solution? ** Affects: nova Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352595 Title: nova boot fails when using rbd backend Status in OpenStack Compute (Nova): New Bug description: Trace ends with: TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] File /opt/stack/nova/nova/virt/libvirt/rbd.py, line 238, in exists TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] except rbd.ImageNotFound: TRACE nova.compute.manager [instance: c1edd5bf-ba48-4374-880f-1f5fa2f41cd3] AttributeError: 'module' object has no attribute 'ImageNotFound' It looks like the above module tries to do an "import rbd" and ends up importing itself again instead of the global library module. A quick fix would be renaming the file to rbd2.py and changing the references in driver.py and imagebackend.py, but maybe there is a better solution? 
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1352595/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
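The self-import can be reproduced in miniature. The module names below are made up for illustration; a `sys.path` entry stands in for Python 2's implicit relative import rules, but the observable effect is the same: the local file wins the import, so the real library's attributes (like `ImageNotFound`) are missing.

```python
import os
import sys
import tempfile

# Create a local module whose name collides with a "library" module.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mylib.py"), "w") as f:
    f.write("WHO = 'local shadow, not the real mylib'\n")

# Analogous to nova/virt/libvirt/ preceding site-packages on the path.
sys.path.insert(0, tmp)
import mylib  # resolves to the shadowing file, not the real library

print(mylib.WHO)
print(hasattr(mylib, "ImageNotFound"))  # False: the library's names are gone
```

On Python 2 the conventional alternative to the rename suggested above is `from __future__ import absolute_import` in the shadowing module, which forces `import rbd` to resolve to the top-level library instead of the sibling file.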
[Yahoo-eng-team] [Bug 1342919] Re: instances rescheduled after building network info do not update the MAC
I'm proposing a new virt driver method for nova that allows the virt driver to say whether reschedules should deallocate networks. Once the nova side is confirmed, we'll add the method to ironic's virt driver. ** Also affects: ironic Importance: Undecided Status: New ** Changed in: nova Assignee: Robert Collins (lifeless) => Chris Behrens (cbehrens) ** Changed in: ironic Assignee: (unassigned) => Chris Behrens (cbehrens) ** Changed in: ironic Status: New => In Progress -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1342919 Title: instances rescheduled after building network info do not update the MAC Status in OpenStack Bare Metal Provisioning Service (Ironic): In Progress Status in OpenStack Compute (Nova): In Progress Bug description: This is weird - Ironic has used the MAC from a different node (which quite naturally leads to failures to boot!)

nova list | grep spawn
| 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | ci-overcloud-NovaCompute3-zmkjp5aa6vgf | BUILD | spawning | NOSTATE | ctlplane=10.10.16.137 |

nova show 6c364f0f-d4a0-44eb-ae37-e012bbdd368c | grep hyperv
| OS-EXT-SRV-ATTR:hypervisor_hostname | b07295ee-1c09-484c-9447-10b9efee340c |

neutron port-list | grep 137
| 272f2413-0309-4e8b-9a6d-9cb6fdbe978d | | 78:e7:d1:23:90:0d | {"subnet_id": "a6ddb35e-305e-40f1-9450-7befc8e1af47", "ip_address": "10.10.16.137"} |

ironic node-show b07295ee-1c09-484c-9447-10b9efee340c | grep wait
| provision_state | wait call-back |

ironic port-list | grep 78:e7:d1:23:90:0d  # from neutron
| 33ab97c0-3de9-458a-afb7-8252a981b37a | 78:e7:d1:23:90:0d |

ironic port-show 33ab97c0-3de9-458a-afb7-8252a981
| Property   | Value |
| node_uuid  | 69dc8c40-dd79-4ed6-83a9-374dcb18c39b |  # Ruh-roh, wrong node!
| uuid       | 33ab97c0-3de9-458a-afb7-8252a981b37a |
| extra      | {u'vif_port_id': u'aad5ee6b-52a3-4f8b-8029-7b8f40e7b54e'} |
| created_at | 2014-07-08T23:09:16+00:00 |
| updated_at | 2014-07-16T01:23:23+00:00 |
| address    | 78:e7:d1:23:90:0d |

ironic port-list | grep 78:e7:d1:23:9b:1d  # This is the MAC my hardware list says the node should have
| caba5b36-f518-43f2-84ed-0bc516cc89df | 78:e7:d1:23:9b:1d |

ironic port-show caba5b36-f518-43f2-84ed-0bc516cc
| Property   | Value |
| node_uuid  | b07295ee-1c09-484c-9447-10b9efee340c |  # and tada, right node
| uuid       | caba5b36-f518-43f2-84ed-0bc516cc89df |
| extra      | {u'vif_port_id': u'272f2413-0309-4e8b-9a6d-9cb6fdbe978d'} |
| created_at | 2014-07-08T23:08:26+00:00 |
| updated_at | 2014-07-16T19:07:56+00:00 |
| address    | 78:e7:d1:23:9b:1d |

To manage notifications about this bug go to: https://bugs.launchpad.net/ironic/+bug/1342919/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352619] [NEW] Cisco plugin module becomes a neutron dependency
Public bug reported: In the former neutron db migration code, we didn't have to install the cisco plugin module if the user didn't need it. But now, because the latest merged py file https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/5446f2a45467_set_server_default.py imports the cisco_constants (line 34), the user has to install it. Otherwise, it will raise a missing cisco module error. However, this is a neutron plugin; it should not be a neutron dependency. The error can be reproduced by running neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current. Error information. [root@rhelhw ~]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL. Traceback (most recent call last): File /usr/bin/neutron-db-manage, line 10, in <module> sys.exit(main()) File /usr/lib/python2.6/site-packages/neutron/db/migration/cli.py, line 171, in main CONF.command.func(config, CONF.command.name) File /usr/lib/python2.6/site-packages/neutron/db/migration/cli.py, line 63, in do_alembic_command getattr(alembic_command, cmd)(config, *args, **kwargs) File /usr/lib/python2.6/site-packages/alembic/command.py, line 233, in current script.run_env() File /usr/lib/python2.6/site-packages/alembic/script.py, line 203, in run_env util.load_python_file(self.dir, 'env.py') File /usr/lib/python2.6/site-packages/alembic/util.py, line 212, in load_python_file module = load_module_py(module_id, path) File /usr/lib/python2.6/site-packages/alembic/compat.py, line 58, in load_module_py mod = imp.load_source(module_id, path, fp) File /usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py, line 106, in <module> run_migrations_online() File /usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py, 
line 90, in run_migrations_online options=build_options()) File "<string>", line 7, in run_migrations File /usr/lib/python2.6/site-packages/alembic/environment.py, line 688, in run_migrations self.get_context().run_migrations(**kw) File /usr/lib/python2.6/site-packages/alembic/migration.py, line 242, in run_migrations self): File /usr/lib/python2.6/site-packages/alembic/command.py, line 214, in display_version rev = script.get_revision(rev) File /usr/lib/python2.6/site-packages/alembic/script.py, line 102, in get_revision return self._revision_map[id_] File /usr/lib/python2.6/site-packages/alembic/util.py, line 268, in __get__ obj.__dict__[self.__name__] = result = self.fget(obj) File /usr/lib/python2.6/site-packages/alembic/script.py, line 213, in _revision_map script = Script._from_filename(self, self.versions, file_) File /usr/lib/python2.6/site-packages/alembic/script.py, line 496, in _from_filename module = util.load_python_file(dir_, filename) File /usr/lib/python2.6/site-packages/alembic/util.py, line 212, in load_python_file module = load_module_py(module_id, path) File /usr/lib/python2.6/site-packages/alembic/compat.py, line 58, in load_module_py mod = imp.load_source(module_id, path, fp) File /usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/versions/5446f2a45467_set_server_default.py, line 34, in <module> from neutron.plugins.cisco.common import cisco_constants ImportError: No module named cisco.common ** Affects: neutron Importance: Undecided Status: New -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1352619 Title: Cisco plugin module becomes a neutron dependency Status in OpenStack Neutron (virtual network service): New Bug description: In the former neutron db migration code, we didn't have to install the cisco plugin module if the user didn't need it. 
But now, because the latest merged py file https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/5446f2a45467_set_server_default.py imports the cisco_constants (line 34), the user has to install it. Otherwise, it will raise a missing cisco module error. However, this is a neutron plugin; it should not be a neutron dependency. The error can be reproduced by running neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current. Error information. [root@rhelhw ~]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current INFO [alembic.migration] Context impl MySQLImpl. INFO [alembic.migration] Will assume non-transactional DDL. Traceback (most recent call last): File /usr/bin/neutron-db-manage, line 10, in
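One conventional decoupling for this class of problem is to make the migration self-contained: inline the literal the migration needs instead of importing plugin code at module scope. The constant name and value below are assumptions for illustration, not the real cisco_constants contents.

```python
# Instead of:
#     from neutron.plugins.cisco.common import cisco_constants
# freeze the needed value in the migration itself, so importing the
# migration no longer requires the cisco plugin package to be installed.
NETWORK_TYPE_VLAN = 'vlan'  # hypothetical value, copied when the migration was written

def upgrade(set_default):
    # `set_default` stands in for the alembic op the real migration runs.
    return set_default('network_type', NETWORK_TYPE_VLAN)

print(upgrade(lambda col, val: "%s DEFAULT '%s'" % (col, val)))
# network_type DEFAULT 'vlan'
```

Migrations are frozen history anyway, so copying the value at the time the migration is written is safe: a later change to the plugin constant must not retroactively change an already-applied migration.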
[Yahoo-eng-team] [Bug 1352624] [NEW] [Bootstrap] Modal's Cancel button is smaller
Public bug reported: See attached image. ** Affects: horizon Importance: Undecided Status: New ** Attachment added: Untitled.png https://bugs.launchpad.net/bugs/1352624/+attachment/4169954/+files/Untitled.png -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon). https://bugs.launchpad.net/bugs/1352624 Title: [Bootstrap] Modal's Cancel button is smaller Status in OpenStack Dashboard (Horizon): New Bug description: See attached image. To manage notifications about this bug go to: https://bugs.launchpad.net/horizon/+bug/1352624/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352635] [NEW] Allow Cisco ML2 driver to use the upstream ncclient
Public bug reported: Currently, the Cisco ML2 driver relies on a specially patched and maintained custom version of the ncclient 3rd party library for communication with various switches. Changes have been submitted to the upstream ncclient now so that there is no need to maintain a separate version of the ncclient anymore. To take advantage of the new ncclient version, a small change needs to be made to the Cisco ML2 driver, so that it can detect whether the old (custom) ncclient is installed, or whether the new upstream ncclient is used. Installation and maintenance will be simplified by not requiring a custom version of the ncclient. ** Affects: neutron Importance: Undecided Assignee: Juergen Brendel (jbrendel) Status: New ** Changed in: neutron Assignee: (unassigned) => Juergen Brendel (jbrendel) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1352635 Title: Allow Cisco ML2 driver to use the upstream ncclient Status in OpenStack Neutron (virtual network service): New Bug description: Currently, the Cisco ML2 driver relies on a specially patched and maintained custom version of the ncclient 3rd party library for communication with various switches. Changes have been submitted to the upstream ncclient now so that there is no need to maintain a separate version of the ncclient anymore. To take advantage of the new ncclient version, a small change needs to be made to the Cisco ML2 driver, so that it can detect whether the old (custom) ncclient is installed, or whether the new upstream ncclient is used. Installation and maintenance will be simplified by not requiring a custom version of the ncclient. 
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1352635/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
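The detection the report describes is usually done as a capability probe rather than a version check. The probed attribute below is an assumption for illustration; the real driver would test whatever API actually differs between the forked and upstream ncclient.

```python
import types

def detect_ncclient_flavor(mod):
    # Probe for an attribute rather than pinning an import path or
    # parsing a version string; `manager` is an illustrative marker.
    return "upstream" if hasattr(mod, "manager") else "custom-fork"

old_fork = types.ModuleType("ncclient")   # hypothetical patched fork
upstream = types.ModuleType("ncclient")
upstream.manager = object()               # marker the probe looks for

print(detect_ncclient_flavor(old_fork))   # custom-fork
print(detect_ncclient_flavor(upstream))   # upstream
```

Probing capabilities keeps the driver working with both libraries during the transition period, which is exactly the compatibility window the bug asks for.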
[Yahoo-eng-team] [Bug 1352659] [NEW] race in server show api
Public bug reported: Because of the instance object lazy loading, it's possible to get into situations where the API code is halfway through assembling data to return to the client when the instance disappears underneath it. We really need to ensure everything we will need is retrieved up front so we have a consistent snapshot view of the instance. [req-5ca39eb3-c1d2-433b-8dac-1bf5f338ce1f ServersAdminNegativeV3Test-1453501114 ServersAdminNegativeV3Test-364813115] Unexpected exception in API method 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions Traceback (most recent call last): 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/api/openstack/extensions.py, line 473, in wrapped 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions return f(*args, **kwargs) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/api/openstack/compute/plugins/v3/servers.py, line 410, in show 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions return self._view_builder.show(req, instance) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/api/openstack/compute/views/servers.py, line 268, in show 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions _inst_fault = self._get_fault(request, instance) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/api/openstack/compute/views/servers.py, line 214, in _get_fault 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions fault = instance.fault 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/objects/base.py, line 67, in getter 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions self.obj_load_attr(name) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/objects/instance.py, line 520, in obj_load_attr 
2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions expected_attrs=[attrname]) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/objects/base.py, line 153, in wrapper 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions result = fn(cls, context, *args, **kwargs) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/objects/instance.py, line 310, in get_by_uuid 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions use_slave=use_slave) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/db/api.py, line 676, in instance_get_by_uuid 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions columns_to_join, use_slave=use_slave) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 167, in wrapper 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 1715, in instance_get_by_uuid 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions columns_to_join=columns_to_join, use_slave=use_slave) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 1727, in _instance_get_by_uuid 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions raise exception.InstanceNotFound(instance_id=uuid) 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions InstanceNotFound: Instance fcff276a-d410-4760-9b98-4014024b1353 could not be found. 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-nova-v3-full-master/a278802/logs/screen-n-api.txt ** Affects: nova Importance: Medium Assignee: Christopher Yeoh (cyeoh-0) Status: New ** Tags: api -- You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1352659 Title: race in server show api Status in OpenStack Compute (Nova): New Bug description: Because of the instance object lazy loading, it's possible to get into situations where the API code is halfway through assembling data to return to the client when the instance disappears underneath it. We really need to ensure everything we will need is retrieved up front so we have a consistent snapshot view of the instance. [req-5ca39eb3-c1d2-433b-8dac-1bf5f338ce1f ServersAdminNegativeV3Test-1453501114 ServersAdminNegativeV3Test-364813115] Unexpected exception in API method 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions Traceback (most recent call last): 2014-08-04 06:37:25.738 21228 TRACE nova.api.openstack.extensions
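The race can be modeled without nova at all. Below, a toy in-memory "DB" and a lazily loading object stand in for nova's Instance (none of this is nova's real code): lazy loading re-queries the DB per attribute, so a concurrent delete surfaces mid-view, while fetching everything up front snapshots the row in one read.

```python
class NotFound(Exception):
    pass

DB = {"uuid-1": {"name": "vm1", "fault": None}}

class LazyInstance:
    def __init__(self, uuid):
        self.uuid = uuid
        self._data = {"name": DB[uuid]["name"]}  # partial initial load

    def __getattr__(self, attr):
        # Lazy load: every missing attribute costs another DB round trip.
        row = DB.get(self.uuid)
        if row is None:
            raise NotFound(self.uuid)
        return row[attr]

inst = LazyInstance("uuid-1")
del DB["uuid-1"]          # instance deleted while the API builds its view
try:
    inst.fault            # the second fetch races with the delete
except NotFound:
    print("lazy access raced with delete")

# Eager variant: snapshot every needed attribute in the first read
# (the moral equivalent of passing expected_attrs up front).
DB["uuid-2"] = {"name": "vm2", "fault": None}
snapshot = dict(DB["uuid-2"])
del DB["uuid-2"]
print(snapshot["fault"] is None)  # True: the view stays consistent
```

Fetching with the needed attributes up front, as the report recommends, means a later delete can no longer make the API blow up halfway through rendering the response.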
[Yahoo-eng-team] [Bug 1212947] Re: Multiple Floating IP Pools is not fully functional
[Expired for neutron because there has been no activity for 60 days.] ** Changed in: neutron Status: Incomplete => Expired -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron. https://bugs.launchpad.net/bugs/1212947 Title: Multiple Floating IP Pools is not fully functional Status in OpenStack Neutron (virtual network service): Expired Bug description: I've tested 'Multiple Floating IP Pools' and referred to the doc at http://docs.openstack.org/grizzly/openstack-network/admin/content/adv_cfg_l3_agent_multi_extnet.html But there are some issues: 1. Besides creating external networks and a router before invoking multiple l3-agent services, the following steps are also needed, otherwise the router will not work normally - set gateway for router (quantum router-gateway-set) - add subnets of internal network to router (quantum router-interface-add) 2. After multiple l3-agent services are invoked, interfaces can't be removed from or added to the router. There is no error when you do these steps either via commands or the dashboard, but on the network node no updates happen in the router namespace. 3. Can't disassociate Floating IPs; same phenomenon as above, no errors in the commands or dashboard, but the Floating IP still exists in the router namespace. BTW 'Multiple Floating IP Pools' is really important, because it is hard to get a large (B class or half B) and contiguous public IP address range. In general it will be several C class ranges. In this case many network nodes with an L3-agent have to be created if 'Multiple Floating IP Pools' can't run on the same network node. To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1212947/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1352668] [NEW] After archiving the db, an instance with a deleted flavor fails
Public bug reported: reproduce as below:

os@os2:~$ nova show vm1
| Property | Value |
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | os3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | os3 |
| OS-EXT-SRV-ATTR:instance_name | instance-0045 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-08-05T03:47:09.00 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-08-05T03:47:01Z |
| flavor | test1 (333) |
| hostId | c8e8cab21e9e22dbc3779fd171e77f44940ba1c81161dc114ba4ad85 |
| id | c2e84eda-4bc6-4ef7-a5ee-f6590fb1f6f7 |
| image | cirros-0.3.2-x86_64-uec (da82a342-aeac-407a-bf9d-cf28bf68dc6b) |
| key_name | - |
| metadata | {} |
| name | vm1 |
| net1 network | 12.0.0.55 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | fdbb1e8f23eb40c89f3a677e2621b95c |
| updated | 2014-08-05T03:47:09Z |
| user_id | 158d3c971e244f479593c86ff751bf8f |

os@os2:~$ nova delete ^C
os@os2:~$ nova flavor-delete 333
| ID  | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
| 333 | test1 | 512 | 1 | 1 | 10 | 1 | 1.0 | True |

2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last): 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 134, in _dispatch_and_reply 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher incoming.message)) 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 177, in _dispatch 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, 
method, ctxt, args) 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher File /usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 123, in _do_dispatch 2014-08-05 12:16:09.558 TRACE oslo.messaging.rpc.dispatcher result =
[Yahoo-eng-team] [Bug 1352676] [NEW] typo in Step class docstring from horizon.workflows.base
Public bug reported:

In horizon/workflows/base.py `.. attribute:: action` should be
`.. attribute:: action_class`

** Affects: horizon
   Importance: Undecided
   Status: New

** Tags: low-hanging-fruit

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352676

Title:
  typo in Step class docstring from horizon.workflows.base

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In horizon/workflows/base.py `.. attribute:: action` should be
  `.. attribute:: action_class`

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352676/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
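The fix is a one-word change in the reStructuredText directive so the docstring names the attribute that actually exists. A minimal sketch of the corrected shape (the class body here is abbreviated for illustration, not the real horizon code):

```python
class Step(object):
    """A wrapper around an action which defines its context in a workflow.

    .. attribute:: action_class

        The action class wrapped by this step. The docstring previously
        used ``.. attribute:: action``, which does not match the real
        ``action_class`` attribute on the class.
    """

    # Abbreviated: the real Step carries many more attributes.
    action_class = None
```

Sphinx renders `.. attribute::` directives as attribute documentation, so the name in the directive has to match the attribute readers will look up.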
[Yahoo-eng-team] [Bug 1352619] Re: Cisco plugin module become the neutron dependence
This is a packaging bug. Distributions should not be breaking apart the neutron code. See this similar Red Hat bug: https://bugzilla.redhat.com/show_bug.cgi?id=1120332 They resolved it by putting all of the code into a python-neutron package.

** Bug watch added: Red Hat Bugzilla #1120332
   https://bugzilla.redhat.com/show_bug.cgi?id=1120332

** Changed in: neutron
   Status: In Progress => Opinion

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352619

Title:
  Cisco plugin module become the neutron dependence

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In the former neutron db migration code, we did not have to install
  the cisco plugin module unless the user needed it. But now, because
  the recently merged file
  https://github.com/openstack/neutron/blob/master/neutron/db/migration/alembic_migrations/versions/5446f2a45467_set_server_default.py
  imports cisco_constants (line 34), the user has to install it;
  otherwise a missing cisco module error is raised. However, this is a
  neutron plugin, so it should not be a neutron dependency.

  The error can be reproduced by running:
  neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current

  Error information:
  [root@rhelhw ~]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current
  INFO [alembic.migration] Context impl MySQLImpl.
  INFO [alembic.migration] Will assume non-transactional DDL.
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 171, in main
      CONF.command.func(config, CONF.command.name)
    File "/usr/lib/python2.6/site-packages/neutron/db/migration/cli.py", line 63, in do_alembic_command
      getattr(alembic_command, cmd)(config, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/alembic/command.py", line 233, in current
      script.run_env()
    File "/usr/lib/python2.6/site-packages/alembic/script.py", line 203, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/usr/lib/python2.6/site-packages/alembic/util.py", line 212, in load_python_file
      module = load_module_py(module_id, path)
    File "/usr/lib/python2.6/site-packages/alembic/compat.py", line 58, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py", line 106, in <module>
      run_migrations_online()
    File "/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/env.py", line 90, in run_migrations_online
      options=build_options())
    File "<string>", line 7, in run_migrations
    File "/usr/lib/python2.6/site-packages/alembic/environment.py", line 688, in run_migrations
      self.get_context().run_migrations(**kw)
    File "/usr/lib/python2.6/site-packages/alembic/migration.py", line 242, in run_migrations
      self):
    File "/usr/lib/python2.6/site-packages/alembic/command.py", line 214, in display_version
      rev = script.get_revision(rev)
    File "/usr/lib/python2.6/site-packages/alembic/script.py", line 102, in get_revision
      return self._revision_map[id_]
    File "/usr/lib/python2.6/site-packages/alembic/util.py", line 268, in __get__
      obj.__dict__[self.__name__] = result = self.fget(obj)
    File "/usr/lib/python2.6/site-packages/alembic/script.py", line 213, in _revision_map
      script = Script._from_filename(self, self.versions, file_)
    File "/usr/lib/python2.6/site-packages/alembic/script.py", line 496, in _from_filename
      module = util.load_python_file(dir_, filename)
    File "/usr/lib/python2.6/site-packages/alembic/util.py", line 212, in load_python_file
      module = load_module_py(module_id, path)
    File "/usr/lib/python2.6/site-packages/alembic/compat.py", line 58, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/usr/lib/python2.6/site-packages/neutron/db/migration/alembic_migrations/versions/5446f2a45467_set_server_default.py", line 34, in <module>
      from neutron.plugins.cisco.common import cisco_constants
  ImportError: No module named cisco.common

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352619/+subscriptions
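One common way to break a dependency like this is for the migration to carry a local copy of the value it needs instead of importing the plugin package, falling back when the plugin is absent. A hedged sketch of that pattern (the constant name `NETWORK_TYPE` and its fallback value are illustrative, not the actual fix merged into neutron):

```python
# A migration that needs one value from a plugin can avoid making the
# whole plugin package a hard dependency by keeping a local fallback.
# Illustrative sketch; NETWORK_TYPE and "network" are assumed names.
try:
    from neutron.plugins.cisco.common import cisco_constants
    NETWORK_TYPE = cisco_constants.NETWORK_TYPE
except ImportError:
    # Plugin not installed: use an inlined copy of the value so that
    # neutron-db-manage still runs (migrations must not import plugins
    # that are optional at deploy time).
    NETWORK_TYPE = "network"
```

The try/except keeps `neutron-db-manage` working on hosts without the Cisco plugin, which is exactly the failure mode in the traceback above.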
[Yahoo-eng-team] [Bug 1352698] [NEW] neutron context doesn't include auth token
Public bug reported:

Discussed in the thread starting at
http://lists.openstack.org/pipermail/openstack-dev/2014-July/040644.html

The neutron context is not populated with the auth token, unlike nova and glance. Since there are several (potential) users for it, such as the servicevm project and routervm implementations (cisco csr1kv, vyatta vrouter), it makes sense for the neutron context to include the auth token.

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352698

Title:
  neutron context doesn't include auth token

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Discussed in the thread starting at
  http://lists.openstack.org/pipermail/openstack-dev/2014-July/040644.html

  The neutron context is not populated with the auth token, unlike nova
  and glance. Since there are several (potential) users for it, such as
  the servicevm project and routervm implementations (cisco csr1kv,
  vyatta vrouter), it makes sense for the neutron context to include
  the auth token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352698/+subscriptions
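What "include the auth token" means in practice: the request context object that neutron builds from the keystone middleware headers should keep the token so downstream consumers (e.g. a service VM driver calling other APIs on the user's behalf) can reuse it. A minimal sketch modeled on how nova's context carries the token (field and class names here are illustrative, not the actual neutron patch):

```python
class RequestContext(object):
    """Security context for a single API request.

    Sketch of a context that keeps the keystone auth token, the way
    nova and glance already do; neutron's context dropped it, which is
    the gap this bug describes.
    """

    def __init__(self, user_id=None, tenant_id=None, auth_token=None,
                 is_admin=False):
        self.user_id = user_id
        self.tenant_id = tenant_id
        # The token from X-Auth-Token; keeping it here lets later code
        # act on behalf of the user against other services.
        self.auth_token = auth_token
        self.is_admin = is_admin

    def to_dict(self):
        # Serialized form (e.g. for RPC); the token travels with it.
        return {
            "user_id": self.user_id,
            "tenant_id": self.tenant_id,
            "auth_token": self.auth_token,
            "is_admin": self.is_admin,
        }
```

A consumer such as a routervm driver would then read `context.auth_token` instead of re-authenticating, which is the use case the thread above motivates.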