[Yahoo-eng-team] [Bug 1704928] [NEW] updated_at field is set on the instance only after it is scheduled
Public bug reported:

When we added the updated_at field to the instance versioned
notifications it became visible that nova updates the updated_at field
of the instance only after the instance is scheduled. However, before
that step the instance already goes through two state transitions, and
therefore two instance.update notifications are emitted with the
updated_at field being None. This looks contradictory.

Steps to reproduce
==================
* (Apply https://review.openstack.org/#/c/475276 if it is not merged yet)
* Boot an instance
* Observe the updated_at field of the instance.update notifications

Expected result
===============
Every instance.update notification has the updated_at field set

Actual result
=============
The first two instance.update notifications are emitted with the
updated_at field being None

Environment
===========
* devstack or nova functional test environment with the test case:
  nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704928

Title:
  updated_at field is set on the instance only after it is scheduled

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704928/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
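The expectation above can be captured as a small standalone check. The
payload layout follows nova's versioned-notification envelope
('nova_object.data'); the function name and structure here are
illustrative, not nova's actual test code:

```python
def updates_missing_updated_at(notifications):
    """Return the instance.update notifications whose payload still
    carries updated_at=None (the first two during a boot, per the bug).

    Each notification is assumed to look like nova's versioned
    notification envelope:
    {'event_type': 'instance.update',
     'payload': {'nova_object.data': {'updated_at': ...}}}
    """
    bad = []
    for notification in notifications:
        if notification.get('event_type') != 'instance.update':
            continue
        data = notification['payload']['nova_object.data']
        if data.get('updated_at') is None:
            bad.append(notification)
    return bad
```

With the bug present, running this over the notifications collected
during a boot would return the first two instance.update notifications.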
[Yahoo-eng-team] [Bug 1461266] Re: Failed logon does not state where user is from (REMOTE_IP)
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461266

Title:
  Failed logon does not state where user is from (REMOTE_IP)

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When a user logs on to horizon the status of their logon is logged to
  the apache error.log file. However, this log data does not provide
  anything useful for the configuration of monitoring or security
  controls, because it does not provide the REMOTE_IP. Since some
  configurations use haproxy and some don't, the logging will need to be
  able to determine if the user is accessing via a proxy or not. There
  are several issues with this, as pointed out in this article:
  http://esd.io/blog/flask-apps-heroku-real-ip-spoofing.html. I would
  recommend using a function similar to what is in that post; however,
  to get things working I have used the following change to get the log
  to display the end-user IP address:

  /usr/lib/python2.7/dist-packages/openstack_auth/forms.py

  27a28,34
  > def get_client_ip(request):
  >     x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
  >     if x_forwarded_for:
  >         ip = x_forwarded_for
  >     else:
  >         ip = request.META.get('REMOTE_ADDR')
  >     return ip
  94,95c101,102
  <     msg = 'Login successful for user "%(username)s".' % \
  <         {'username': username}
  ---
  >     msg = '%(remote_ip)s - Login successful for user "%(username)s".' % \
  >         {'username': username, 'remote_ip': get_client_ip(self.request)}
  98,99c105,106
  <     msg = 'Login failed for user "%(username)s".' % \
  <         {'username': username}
  ---
  >     msg = '%(remote_ip)s - Login failed for user "%(username)s".' % \
  >         {'username': username, 'remote_ip': get_client_ip(self.request)}

  It's definitely not the best answer, in fact it may not even be fully
  functional :), but something is needed to be able to monitor invalid
  attempts; unless something in django can be used to have some logic
  (beyond locking accounts) where it is able to send a user to a
  sinkhole or something based on # of exceptions per session. But
  that's beyond the scope of this request :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461266/+subscriptions
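As the linked article points out, X-Forwarded-For can carry a
comma-separated chain of addresses and the raw header is
client-controlled, so the value is informational rather than
authoritative. A slightly more robust variant of the helper above (same
name, behaviour assumptions mine) takes the left-most entry:

```python
def get_client_ip(request):
    """Best-effort client IP for log messages.

    X-Forwarded-For may contain "client, proxy1, proxy2"; the left-most
    entry is the original client. The header can be spoofed by the
    client, so treat the result as informational only.
    """
    x_forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR')
    if x_forwarded_for:
        return x_forwarded_for.split(',')[0].strip()
    return request.META.get('REMOTE_ADDR')
```

This keeps the same interface as the patch above, so the logging lines
do not need to change.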
[Yahoo-eng-team] [Bug 1503236] Re: Security Keypair creation is not logged in Horizon Log
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1503236

Title:
  Security Keypair creation is not logged in Horizon Log

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When creating a new keypair through the GUI, the actions are not
  logged in /var/log/horizon/horizon.log. This occurs when creating a
  new keypair or importing an existing keypair.

  When deleting a keypair the action is logged with the following
  message:

  2015-10-06 10:40:14,185 14311 INFO horizon.tables.actions Deleted Key Pair: "test1"

  Is there a specific reason for the difference in behavior?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1503236/+subscriptions
[Yahoo-eng-team] [Bug 1704914] [NEW] Extend Quota API to report usage statistics
Public bug reported:

https://review.openstack.org/383673

Dear bug triager. This bug was created since a commit was marked with
DOCIMPACT. Your project "openstack/neutron" is set up so that we
directly report the documentation bugs against it. If this needs
changing, the docimpact-group option needs to be added for the project.
You can ask the OpenStack infra team (#openstack-infra on freenode) for
help if you need to.

commit a8109af65f275ec1b2e725695bf3bb9976f22ae3
Author: Sergey Belous
Date:   Fri Oct 7 14:29:07 2016 +0300

    Extend Quota API to report usage statistics

    Extend existing quota api to report a quota set. The quota set will
    contain a set of resources and its corresponding reservation,
    limits and in_use count for each tenant.

    DocImpact: Documentation describing the new API as well as the new
    information that it exposes.
    APIImpact
    Co-Authored-By: Prince Boateng
    Change-Id: Ief2a6a4d2d7085e2a9dcd901123bc4fe6ac7ca22
    Related-bug: #1599488

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: doc neutron

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1704914

Title:
  Extend Quota API to report usage statistics

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1704914/+subscriptions
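For context, the extended API reports per-resource usage alongside the
limit. The exact endpoint and key names are defined by the review
above; the shape below is an illustrative sketch based on the commit
message (reservation, limit, in_use per resource), not the
authoritative API reference:

```python
# Illustrative per-tenant quota set with usage, per the commit message.
# Resource and key names are assumptions for the sketch.
quota_details = {
    'quota': {
        'network': {'limit': 10, 'in_use': 3, 'reserved': 0},
        'subnet': {'limit': 10, 'in_use': 3, 'reserved': 0},
        'port': {'limit': 50, 'in_use': 7, 'reserved': 1},
    }
}

def remaining(quota_details, resource):
    """How many more of `resource` the tenant could still create,
    given the reported limit, in-use and reserved counts."""
    entry = quota_details['quota'][resource]
    return entry['limit'] - entry['in_use'] - entry['reserved']
```

A consumer of the new API could use such a calculation to warn before a
quota is exhausted instead of waiting for a create call to fail.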
[Yahoo-eng-team] [Bug 1704908] [NEW] Launching instance from image causes image to be deallocated
Public bug reported:

Steps to reproduce:
1. Go to Project > Compute > Images
2. Click "Launch" on one of the images to launch an instance.
3. Go to "Source" tab. The image selected is not allocated and does
   not appear in the available list.

I realize that the image was initially allocated but was then
de-allocated when changeBootSource(selectedSource) was called
repeatedly:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/source/source.controller.js#L440

** Affects: horizon
   Importance: Undecided
   Assignee: Chiew Yee Xin (yeexinc)
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Chiew Yee Xin (yeexinc)

** Description changed:

  Steps to reproduce:
- 1. Go to Project > Computer > Images
+ 1. Go to Project > Compute > Images
  2. Click "Launch" on one of the images to launch an instance.
  3. Go to "Source" tab. The image selected is not allocated and does
  not appear in the available list.

  I realize that the image was initially allocated but was then
  de-allocated when changeBootSource(selectedSource) was called
  repeatedly:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/source/source.controller.js#L440

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1704908

Title:
  Launching instance from image causes image to be deallocated

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1704908/+subscriptions
[Yahoo-eng-team] [Bug 1704392] Re: TestInstanceNotificationSample.test_volume_swap_server fails with "testtools.matchers._impl.MismatchError: 7 != 6"
Reviewed:  https://review.openstack.org/484288
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=cf7f61c180c294f533e228697885a0ffb7e429bc
Submitter: Jenkins
Branch:    master

commit cf7f61c180c294f533e228697885a0ffb7e429bc
Author: Balazs Gibizer
Date:   Mon Jul 17 10:32:00 2017 +0200

    fix test_volume_swap_server instability

    The test_volume_swap_server notification sample test was unstable
    because sometimes the volume_swap.end notification was missing. The
    test was waiting for the new volume id to appear on the REST API
    before asserting the received notifications, but the compute
    manager updates the BDM earlier and then emits volume_swap.end.
    This patch modifies the test to explicitly wait for the
    volume_swap.end notification. This is expected to remove the test
    instability.

    Change-Id: Id6eefa7c85c4f63562344b552f027f1d513a90e1
    Closes-Bug: #1704392

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704392

Title:
  TestInstanceNotificationSample.test_volume_swap_server fails with
  "testtools.matchers._impl.MismatchError: 7 != 6"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/17/483917/1/check/gate-nova-tox-functional-ubuntu-xenial/f92a89d/console.html#_2017-07-14_13_14_22_687228

  2017-07-14 13:14:22.687228 | nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_volume_swap_server
  2017-07-14 13:14:22.687304 | Captured traceback:
  2017-07-14 13:14:22.687322 | ~~~
  2017-07-14 13:14:22.687345 |     Traceback (most recent call last):
  2017-07-14 13:14:22.687388 |       File "nova/tests/functional/notification_sample_tests/test_instance.py", line 837, in test_volume_swap_server
  2017-07-14 13:14:22.687419 |         self.assertEqual(7, len(fake_notifier.VERSIONED_NOTIFICATIONS))
  2017-07-14 13:14:22.687479 |       File "/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2017-07-14 13:14:22.687507 |         self.assertThat(observed, matcher, message)
  2017-07-14 13:14:22.687565 |       File "/home/jenkins/workspace/gate-nova-tox-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2017-07-14 13:14:22.687586 |         raise mismatch_error
  2017-07-14 13:14:22.687611 |     testtools.matchers._impl.MismatchError: 7 != 6

  This could be due to the recent change in the CinderFixture here:
  https://review.openstack.org/#/c/448779/

  We need to dump the notifications in the error message to get context
  when this fails, to help debug it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704392/+subscriptions
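The fix's approach, waiting for a specific notification to arrive
rather than asserting on a fixed count immediately, can be sketched
generically. Nova's real test helper differs; names here are
illustrative:

```python
import time

def wait_for_notification(get_notifications, event_type,
                          timeout=10.0, interval=0.1):
    """Poll the collected notifications until one with the given
    event_type shows up, instead of asserting on len() right away.

    `get_notifications` is a zero-argument callable returning the
    current list of notification dicts (each carrying 'event_type').
    """
    deadline = time.time() + timeout
    while True:
        matches = [n for n in get_notifications()
                   if n.get('event_type') == event_type]
        if matches:
            return matches[0]
        if time.time() >= deadline:
            raise AssertionError(
                'notification %s not received within %ss'
                % (event_type, timeout))
        time.sleep(interval)
```

In a nova functional test this would be called with something like
`lambda: fake_notifier.VERSIONED_NOTIFICATIONS` and the
volume_swap.end event type, after which the count assertion is stable.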
[Yahoo-eng-team] [Bug 1704881] [NEW] subnetpools not cleaned up after subnetpool tempest tests
Public bug reported:

Openstack Version: master

Neutron change [1] introduced tempest tests for default subnetpools.
These tests create subnetpools but do not clean them up after the test
is complete. This results in subsequent tests failing because a default
subnetpool already exists (see trace below). These subnetpools need to
be deleted for the subsequent tests to succeed. [2] provides an example
tempest run hitting this bug.

[1] https://github.com/openstack/neutron/commit/316e2f4
[2] http://184.172.12.213/39/484439/2/check/nova-out-of-tree-pvm/edcdcfa/powervm_os_ci.html

TRACE:
Traceback (most recent call last):
  File "tempest/test.py", line 122, in wrapper
    return func(*func_args, **func_kwargs)
  File "/opt/stack/neutron/neutron/tests/tempest/api/test_subnetpools.py", line 386, in test_convert_default_subnetpool_to_non_default
    is_default=True)
  File "/opt/stack/neutron/neutron/tests/tempest/api/test_subnetpools.py", line 49, in _create_subnetpool
    return cls.create_subnetpool(name=name, is_admin=is_admin, **kwargs)
  File "/opt/stack/neutron/neutron/tests/tempest/api/base.py", line 416, in create_subnetpool
    body = cls.admin_client.create_subnetpool(name, **kwargs)
  File "/opt/stack/neutron/neutron/tests/tempest/services/network/json/network_client.py", line 184, in create_subnetpool
    resp, body = self.post(uri, body)
  File "tempest/lib/common/rest_client.py", line 270, in post
    return self.request('POST', url, extra_headers, headers, body, chunked)
  File "tempest/lib/common/rest_client.py", line 659, in request
    self._error_checker(resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 770, in _error_checker
    raise exceptions.BadRequest(resp_body, resp=resp)
tempest.lib.exceptions.BadRequest: Bad request
Details: {u'message': u'Invalid input for operation: A default subnetpool for this IP family has already been set. Only one default may exist per IP family.', u'type': u'InvalidInput', u'detail': u''}

** Affects: neutron
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1704881

Title:
  subnetpools not cleaned up after subnetpool tempest tests

Status in neutron:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1704881/+subscriptions
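One way to address this, sketched here rather than taken from the
actual neutron patch, is to register a cleanup at creation time so the
pool is deleted even when a test fails. `client` and the `cleanups`
list stand in for tempest's real network client and its resource
cleanup machinery:

```python
def create_subnetpool_with_cleanup(client, cleanups, name, **kwargs):
    """Create a subnetpool and schedule its deletion.

    `client` is assumed to expose create_subnetpool/delete_subnetpool;
    `cleanups` is a list of zero-argument callables executed (in
    reverse order) during teardown.
    """
    body = client.create_subnetpool(name, **kwargs)
    pool_id = body['subnetpool']['id']
    # capture the id now so teardown works even if the test body fails
    cleanups.append(lambda: client.delete_subnetpool(pool_id))
    return body
```

With this in place a leftover default subnetpool cannot leak into the
next test, which is what triggers the "Only one default may exist per
IP family" error above.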
[Yahoo-eng-team] [Bug 1704788] Re: Hardcoded choices for nova scheduler driver
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704788

Title:
  Hardcoded choices for nova scheduler driver

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  Hi everyone,

  There's a driver option in nova.conf which is parsed with the
  configuration from here:
  https://github.com/openstack/nova/blob/stable/ocata/nova/conf/scheduler.py#L58

  The hardcoded list of possible choices for that option, specified on
  lines #60 and #61, blocks nova scheduler from allowing any custom
  scheduler driver. Is this intentional, and is there another
  workaround for plugging in a scheduler driver? Or is it just a
  mistake?

  Thanks for your attention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704788/+subscriptions
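The mechanism at issue: when an oslo.config StrOpt declares `choices=`,
any value outside that list is rejected when the option is read, so an
out-of-tree driver entry point cannot be configured. A minimal stand-in
for that validation (not nova's or oslo.config's actual code; the
choice values in the example are illustrative):

```python
def parse_str_opt(value, choices=None):
    """Mimic the effect of a StrOpt with choices=: any value outside
    the list raises, which is what blocks a custom scheduler driver."""
    if choices is not None and value not in choices:
        raise ValueError(
            'Valid values are %s, but found %r' % (list(choices), value))
    return value
```

Removing the `choices=` constraint (or validating against installed
entry points instead of a static list) would let operators plug in
custom drivers.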
[Yahoo-eng-team] [Bug 1703938] Re: AttributeError: 'PortContext' object has no attribute 'session' in l3_hamode_db
Reviewed:  https://review.openstack.org/483085
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=5c331ecbd25be464184e345cea8f38bb9fc712fc
Submitter: Jenkins
Branch:    master

commit 5c331ecbd25be464184e345cea8f38bb9fc712fc
Author: Ihar Hrachyshka
Date:   Wed Jul 12 12:59:36 2017 -0700

    Fixed AttributeError in l2pop.delete_port_postcommit

    The error sneaked in with Ib6e59ab3405857d3ed4d82df1a80800089c3f06e
    where is_ha_router_port expects a NeutronContext object but we
    still pass PortContext instead.

    Change-Id: I593af5d050de00ddea7d758007d9856c4b97695f
    Closes-Bug: #1703938

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703938

Title:
  AttributeError: 'PortContext' object has no attribute 'session' in
  l3_hamode_db

Status in neutron:
  Fix Released

Bug description:
  Jul 11 20:08:35.720679 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers [None req-13c07cf3-201f-4b86-9e92-8f51bd141c6c admin admin] Mechanism driver 'l2population' failed in delete_port_postcommit: AttributeError: 'PortContext' object has no attribute 'session'
  Jul 11 20:08:35.720775 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers Traceback (most recent call last):
  Jul 11 20:08:35.720895 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/managers.py", line 426, in _call_on_drivers
  Jul 11 20:08:35.720971 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     getattr(driver.obj, method_name)(context)
  Jul 11 20:08:35.721056 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/plugins/ml2/drivers/l2pop/mech_driver.py", line 79, in delete_port_postcommit
  Jul 11 20:08:35.721134 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     context, port['device_owner'], port['device_id']):
  Jul 11 20:08:35.721206 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/db/l3_hamode_db.py", line 726, in is_ha_router_port
  Jul 11 20:08:35.721283 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     context, router_id=router_id, ha=True)
  Jul 11 20:08:35.721369 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/objects/base.py", line 712, in objects_exist
  Jul 11 20:08:35.721447 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     context, cls.db_model, **cls.modify_fields_to_db(kwargs))
  Jul 11 20:08:35.721526 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/objects/db/api.py", line 32, in get_object
  Jul 11 20:08:35.721610 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     return _get_filter_query(context, model, **kwargs).first()
  Jul 11 20:08:35.721725 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers   File "/opt/stack/new/neutron/neutron/objects/db/api.py", line 25, in _get_filter_query
  Jul 11 20:08:35.721802 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers     with context.session.begin(subtransactions=True):
  Jul 11 20:08:35.721877 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers AttributeError: 'PortContext' object has no attribute 'session'
  Jul 11 20:08:35.721952 ubuntu-xenial-3-node-osic-cloud1-s3500-9770546 neutron-server[27121]: ERROR neutron.plugins.ml2.managers

  Example:
  http://logs.openstack.org/73/304873/44/check/gate-tempest-dsvm-neutron-dvr-ha-multinode-full-ubuntu-xenial-nv/586400d/logs/screen-q-svc.txt.gz?level=TRACE#_Jul_11_20_08_35_720679

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703938/+subscriptions
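The shape of the bug, in miniature: ml2's PortContext wraps the
request's NeutronContext (which owns the DB session) but does not
itself expose `.session`, so helpers that open a DB transaction must
be handed the inner context. The classes below are illustrative
stand-ins, not neutron's real implementations:

```python
class NeutronContext:
    """Stands in for the request context that carries the DB session."""
    def __init__(self):
        self.session = 'db-session'

class PortContext:
    """Stands in for ml2's driver context: it wraps the request
    context but has no .session attribute of its own."""
    def __init__(self, plugin_context):
        self._plugin_context = plugin_context

def objects_exist(context):
    # helper that needs a real DB session, like is_ha_router_port does
    return context.session is not None
```

Calling `objects_exist(port_context)` reproduces the AttributeError;
the fix referenced above amounts to passing the wrapped plugin context
instead of the PortContext itself.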
[Yahoo-eng-team] [Bug 1704875] Re: Nova option api_class erroneously overridden in __init__.py
** Tags added: barbican

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704875

Title:
  Nova option api_class erroneously overridden in __init__.py

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  When adding api_class = in the nova.conf file for Newton, the
  nova/keymgr/__init__.py file has code that re-sets the value of
  api_class to the default. This makes it impossible to use barbican
  with nova without hacking in some workaround.

  release: Newton

  To reproduce:
  1. set api_class in nova.conf
  2. restart nova with debug
  3. see that api_class is still the default

  Workaround: commenting out lines 29 and 30 in __init__.py fixes the
  issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704875/+subscriptions
[Yahoo-eng-team] [Bug 1704875] [NEW] Nova option api_class erroneously overridden in __init__.py
Public bug reported:

When adding api_class = in the nova.conf file for Newton, the
nova/keymgr/__init__.py file has code that re-sets the value of
api_class to the default. This makes it impossible to use barbican
with nova without hacking in some workaround.

release: Newton

To reproduce:
1. set api_class in nova.conf
2. restart nova with debug
3. see that api_class is still the default

Workaround: commenting out lines 29 and 30 in __init__.py fixes the
issue.

** Affects: nova
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704875

Title:
  Nova option api_class erroneously overridden in __init__.py

Status in OpenStack Compute (nova):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704875/+subscriptions
[Yahoo-eng-team] [Bug 1704872] [NEW] sysconfig use subnet prefix and set DEFROUTE key
Public bug reported:

Sysconfig network files should set DEFROUTE and IPV6_DEFROUTE if the
key is configured. Additionally, ipv6 routes should use the subnet
'prefix' value instead of netmask.

** Affects: cloud-init
   Importance: Undecided
   Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1704872

Title:
  sysconfig use subnet prefix and set DEFROUTE key

Status in cloud-init:
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1704872/+subscriptions
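For illustration, a rendered ifcfg file of the kind the report
describes might look like the following. The exact keys cloud-init
emits depend on the network configuration, so treat this as a
hand-written sketch, not actual cloud-init output:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24                 ; subnet 'prefix' rather than a NETMASK key
DEFROUTE=yes              ; set when this subnet carries the default route
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFROUTE=yes         ; IPv6 counterpart requested by the report
```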
[Yahoo-eng-team] [Bug 1447227] Re: Connecting two or more distributed routers to a subnet doesn't work properly
It's actually not clear to me what the failure mode is here. If you
attach two routers to a subnet and then have routes set up to point to
one router for some subnets and the rest to another, are you saying
that traffic fails to be forwarded to the correct subnet due to the
flows?

** Changed in: neutron
   Status: Opinion => Incomplete

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447227

Title:
  Connecting two or more distributed routers to a subnet doesn't work
  properly

Status in neutron:
  Incomplete

Bug description:
  DVR code currently assumes that only one router may be attached to a
  subnet, but this is not the case. OVS flows for example will not work
  correctly for E/W traffic, as incoming traffic is always assumed to
  be coming from one of the two routers. The simple solution is to
  block the attachment of a distributed router to a subnet already
  attached to another distributed router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447227/+subscriptions
[Yahoo-eng-team] [Bug 1704833] [NEW] Some actions are missing on admin/instances panel
Public bug reported:

Admins are able to do all of the actions shown on project/instances,
but there are only a handful of actions available on the
admin/instances panel. All of the actions on the project/instances
panel other than launch instance should be available on the
admin/instances panel.

** Affects: horizon
   Importance: Undecided
   Assignee: Ying Zuo (yingzuo)
   Status: New

** Changed in: horizon
   Assignee: (unassigned) => Ying Zuo (yingzuo)

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1704833

Title:
  Some actions are missing on admin/instances panel

Status in OpenStack Dashboard (Horizon):
  New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1704833/+subscriptions
[Yahoo-eng-team] [Bug 1576765] Re: Potential DOS: Keystone Extra Fields
** Changed in: keystone Status: Triaged => Won't Fix

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1576765

Title: Potential DOS: Keystone Extra Fields

Status in OpenStack Identity (keystone): Won't Fix
Status in OpenStack Security Advisory: Won't Fix

Bug description: A user that has rights to update a resource in Keystone (project, user, domain, etc) can inject near-unlimited amounts of extra data, limited only by the maximum request size. The extra fields cannot be deleted (ever) in the current design (the value of the field can be set to ~1 byte minimum). An update excluding the field leaves the field data intact as is. This means that a bad actor can update a keystone resource and do one of the following to DOS the Keystone cluster, database replication, database traffic, etc:

1) Create endless numbers of fields with very little data, causing longer and longer json serialization/deserialization times due to the volume of elements.

2) Create endless numbers of fields with large data sets, increasing the delta of what is stored in the RDBMS and putting extra load on the replication/etc processes for the shared data. This could potentially be used as a vector to run the DB server out of ram/cache/buffers/disk. This also causes the issue itemized above (1).

3) With HMT, it is possible to duplicate (as a domain/user) the above listed items with more and more resources.

Memcache/caching will offset some of these issues until the memcache server can no longer store the data from the keystone resource due to exceeding the slab size (1MB), which could cause excessive load on the memcached/caching servers.
With caching enabled, it is possible to run the keystone processes out of memory/DOS due to the request_local cache used to ensure that resources are fetched from the backend only once per HTTP request (using a msgpack of the data stored in memory).

--- PROPOSED FIX ---
* Issue a security bug fix that by default disables the ability to store data in the extra fields for *ALL* keystone resources.
* Migrate any/all fields that keystone supports to first-class attributes (columns) in the SQL backend[s].
* 2-cycle deprecation before removal of the support for "extra" field storage (toggled via config value); in the P cycle extra fields will no longer be supported, and all non-standard data will need to be migrated to an external metadata storage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1576765/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
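The update-merge behaviour the report describes can be sketched in a few stdlib-only lines. This is an illustration of the reported semantics, not actual keystone code; the helper name and the first-class attribute list are invented for the example:

```python
import json

# Sketch of the "extra" field behaviour: unrecognized attributes on an update
# are merged into the stored blob, and an update that omits a field never
# removes it, so the serialized blob can only grow.
FIRST_CLASS = {"name", "enabled", "description"}

def apply_update(stored_extra, update_body):
    """Merge unrecognized update keys into the resource's 'extra' blob."""
    extra = dict(stored_extra)
    for key, value in update_body.items():
        if key not in FIRST_CLASS:
            extra[key] = value  # stored forever; there is no way to delete it
    return extra

extra = {}
for i in range(3):  # each request can add arbitrary new fields
    extra = apply_update(extra, {"junk_%d" % i: "x" * 32})
extra = apply_update(extra, {"name": "alice"})  # later updates leave junk intact
assert len(extra) == 3
print("serialized extra blob is now", len(json.dumps(extra)), "bytes")
```

Repeating the loop with larger values models vector (2) above: the blob, and hence every replicated write, keeps growing.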
[Yahoo-eng-team] [Bug 1649616] Re: Keystone Token Flush job does not complete in HA deployed environment
This bug was fixed in the package keystone - 2:10.0.1-0ubuntu2

---
keystone (2:10.0.1-0ubuntu2) yakkety; urgency=high

  * d/p/0001-Make-flushing-tokens-more-robust.patch: Commit token flushes between batches in order to lower resource consumption and make flushing more robust for replication (LP: #1649616).

 -- Jorge Niedbalski  Wed, 07 Jun 2017 13:07:50 +0100

** Changed in: keystone (Ubuntu Yakkety) Status: Fix Committed => Fix Released
** Changed in: keystone (Ubuntu Xenial) Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title: Keystone Token Flush job does not complete in HA deployed environment

Status in Ubuntu Cloud Archive: In Progress
Status in Ubuntu Cloud Archive mitaka series: Fix Committed
Status in Ubuntu Cloud Archive newton series: Fix Committed
Status in Ubuntu Cloud Archive ocata series: Fix Committed
Status in OpenStack Identity (keystone): Fix Released
Status in OpenStack Identity (keystone) newton series: In Progress
Status in OpenStack Identity (keystone) ocata series: In Progress
Status in puppet-keystone: Triaged
Status in tripleo: In Progress
Status in keystone package in Ubuntu: In Progress
Status in keystone source package in Xenial: Fix Released
Status in keystone source package in Yakkety: Fix Released
Status in keystone source package in Zesty: Fix Released

Bug description:

[Impact]
* The Keystone token flush job can get into a state where it will never complete because the transaction size exceeds the MySQL Galera transaction size limit, wsrep_max_ws_size (1073741824).

[Test Case]
1. Authenticate many times
2. Observe that the keystone token flush job runs (it can take a very long time depending on disk; >20 hours in my environment)
3. Observe errors in mysql.log indicating a transaction that is too large

Actual results: Expired tokens are not actually flushed from the database, with no errors in keystone.log; errors appear only in mysql.log.
Expected results: Expired tokens are removed from the database.

[Additional info]
You can likely demonstrate this with fewer than 1 million tokens, since the >1-million-token table is larger than 13 GiB while the max transaction size is 1 GiB; my token-benchmarking Browbeat job creates more than needed. Once the token flush job cannot complete, the token table never decreases in size and the cloud eventually runs out of disk space. The flush job also consumes disk I/O while it runs. This was demonstrated on slow disks (single 7.2K SATA disk). On faster disks you have more capacity to generate tokens, so you can exceed the transaction size even faster.

Log evidence:
[root@overcloud-controller-0 log]# grep " Total expired" /var/log/keystone/keystone.log
2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1082434
2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1084241
2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1086504
2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1087823
2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1089202
[root@overcloud-controller-0 log]# tail mysqld.log
161208 1:33:41 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161208 1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161209 9:31:26 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161209 9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161211 1:35:39 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161211 1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161212 1:08:16 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161212 1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161213 1:22:18 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161213 1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2

The disk utilization issue graph is attached. The entire job in that graph runs from the first spike in disk util (~5:18 UTC) and culminates in about 90 minutes of pegging the disk (between 1:09 UTC and 2:43 UTC).

[Regression Potential]
* Not identified

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1649616/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
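The batching approach the patch describes ("commit token flushes between batches") can be sketched with sqlite3 standing in for Galera/MySQL. The schema and function below are illustrative, not the actual keystone code:

```python
import sqlite3

# Delete expired tokens in small batches and commit after each batch, so no
# single transaction can approach wsrep_max_ws_size.
def flush_expired_tokens(conn, now, batch_size=100):
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM token WHERE id IN "
            "(SELECT id FROM token WHERE expires < ? LIMIT ?)",
            (now, batch_size))
        conn.commit()  # keep each replicated transaction small
        total += cur.rowcount
        if cur.rowcount < batch_size:
            return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE token (id INTEGER PRIMARY KEY, expires INTEGER)")
conn.executemany("INSERT INTO token (expires) VALUES (?)",
                 [(i % 2,) for i in range(1000)])  # 500 expired, 500 valid
removed = flush_expired_tokens(conn, now=1)
print("Total expired tokens removed:", removed)
```

The trade-off is that the flush is no longer atomic, which is acceptable here: a crash mid-flush just leaves some expired rows for the next run.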
[Yahoo-eng-team] [Bug 1593641] Re: Strange errors when using dump/dumps from jsonutils
Adding nova since the oslo.serialization change breaks nova unit and functional tests, so I need to fix those up first.

** Changed in: oslo.serialization Status: New => Fix Released
** Also affects: nova Importance: Undecided Status: New
** Changed in: nova Status: New => In Progress
** Changed in: nova Assignee: (unassigned) => Matt Riedemann (mriedem)

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1593641

Title: Strange errors when using dump/dumps from jsonutils

Status in OpenStack Compute (nova): In Progress
Status in oslo.serialization: Fix Released

Bug description: The dump/dumps methods throw strange exceptions (ValueError: Circular reference detected) when trying to serialize arbitrary (non-serializable) objects. For example, jsonutils.dumps(object()) yields the following traceback:

Traceback (most recent call last):
  File "/home/gdavoian/oslo.serialization/oslo_serialization/jsonutils.py", line 259, in <module>
    print dumps(object())
  File "/home/gdavoian/oslo.serialization/oslo_serialization/jsonutils.py", line 184, in dumps
    return json.dumps(obj, default=default, **kwargs)
  File "/usr/lib/python2.7/json/__init__.py", line 250, in dumps
    sort_keys=sort_keys, **kw).encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
ValueError: Circular reference detected

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1593641/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
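The confusing error is reproducible with the stdlib alone: when a default= hook hands the encoder back an object it still cannot serialize, json's cycle detection sees the same object twice and reports a bogus circular reference. A minimal sketch (bad_default and good_default are invented names):

```python
import json

def bad_default(obj):
    # Returning the object unchanged makes the encoder revisit it, so its
    # marker-based cycle check fires: "Circular reference detected".
    return obj

def good_default(obj):
    # Degrade to a string representation instead of handing the object back.
    return repr(obj)

try:
    json.dumps(object(), default=bad_default)
except ValueError as exc:
    print(exc)  # Circular reference detected

print(json.dumps(object(), default=good_default))
```

This is essentially what happened inside jsonutils: its fallback returned something the encoder still could not handle, so the real "not serializable" problem surfaced as a misleading ValueError.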
[Yahoo-eng-team] [Bug 1649616] Re: Keystone Token Flush job does not complete in HA deployed environment
This bug was fixed in the package keystone - 2:11.0.2-0ubuntu1

---
keystone (2:11.0.2-0ubuntu1) zesty; urgency=medium

  [ Jorge Niedbalski ]
  * d/p/0001-Make-flushing-tokens-more-robust.patch: Commit token flushes between batches in order to lower resource consumption and make flushing more robust for replication (LP: #1649616).

  [ James Page ]
  * New upstream stable release for OpenStack Ocata (LP: #1696139).

 -- James Page  Wed, 07 Jun 2017 16:01:45 +0100

** Changed in: keystone (Ubuntu Zesty) Status: Fix Committed => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title: Keystone Token Flush job does not complete in HA deployed environment

Status in Ubuntu Cloud Archive: In Progress
Status in Ubuntu Cloud Archive mitaka series: Fix Committed
Status in Ubuntu Cloud Archive newton series: Fix Committed
Status in Ubuntu Cloud Archive ocata series: Fix Committed
Status in OpenStack Identity (keystone): Fix Released
Status in OpenStack Identity (keystone) newton series: In Progress
Status in OpenStack Identity (keystone) ocata series: In Progress
Status in puppet-keystone: Triaged
Status in tripleo: In Progress
Status in keystone package in Ubuntu: In Progress
Status in keystone source package in Xenial: Fix Committed
Status in keystone source package in Yakkety: Fix Committed
Status in keystone source package in Zesty: Fix Released

Bug description:

[Impact]
* The Keystone token flush job can get into a state where it will never complete because the transaction size exceeds the MySQL Galera transaction size limit, wsrep_max_ws_size (1073741824).

[Test Case]
1. Authenticate many times
2. Observe that the keystone token flush job runs (it can take a very long time depending on disk; >20 hours in my environment)
3. Observe errors in mysql.log indicating a transaction that is too large

Actual results: Expired tokens are not actually flushed from the database, with no errors in keystone.log; errors appear only in mysql.log.
Expected results: Expired tokens are removed from the database.

[Additional info]
You can likely demonstrate this with fewer than 1 million tokens, since the >1-million-token table is larger than 13 GiB while the max transaction size is 1 GiB; my token-benchmarking Browbeat job creates more than needed. Once the token flush job cannot complete, the token table never decreases in size and the cloud eventually runs out of disk space. The flush job also consumes disk I/O while it runs. This was demonstrated on slow disks (single 7.2K SATA disk). On faster disks you have more capacity to generate tokens, so you can exceed the transaction size even faster.

Log evidence:
[root@overcloud-controller-0 log]# grep " Total expired" /var/log/keystone/keystone.log
2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1082434
2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1084241
2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1086504
2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1087823
2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql [-] Total expired tokens removed: 1089202
[root@overcloud-controller-0 log]# tail mysqld.log
161208 1:33:41 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161208 1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161209 9:31:26 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161209 9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161211 1:35:39 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161211 1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161212 1:08:16 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161212 1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161213 1:22:18 [Warning] WSREP: transaction size limit (1073741824) exceeded: 1073774592
161213 1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2

The disk utilization issue graph is attached. The entire job in that graph runs from the first spike in disk util (~5:18 UTC) and culminates in about 90 minutes of pegging the disk (between 1:09 UTC and 2:43 UTC).

[Regression Potential]
* Not identified

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1649616/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to :
[Yahoo-eng-team] [Bug 1704205] Re: GET /v3/role_assignments?effective_names API fails with unexpected 500 error
Yeah, those instructions were followed, but the problem here was that some users didn't have a value set in the property that was used for name. More specifically, the customer used a field that holds the email address as the name, and some users didn't have an email address. But even beyond that, we couldn't tell them to use a different LDAP attribute because there was no single attribute that consistently had a value for all users, even cn. You could argue that LDAP was misconfigured, but good luck getting that fixed in a large enterprise environment (which this was). You could argue that keystone was misconfigured, but in this case there was not a better LDAP attribute to use for name. So I'd like to see keystone handle this better somehow. Could keystone report a name of "" or some other placeholder when the attribute that is supposed to hold the name is not found on a given resource?

** Changed in: keystone Status: Invalid => New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1704205

Title: GET /v3/role_assignments?effective_names API fails with unexpected 500 error

Status in OpenStack Identity (keystone): New

Bug description: In an environment with an LDAP server as the identity backend, a group may have a role assignment while some users in the group don't have the "name" attribute configured in LDAP. Fetching effective role assignments with include_names then fails with the stack trace below.
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi Traceback (most recent call last):
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 228, in __call__
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     result = method(req, **params)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/assignment/controllers.py", line 999, in list_role_assignments_wrapper
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     return self.list_role_assignments(request)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 235, in wrapper
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     return f(self, request, filters, **kwargs)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/assignment/controllers.py", line 956, in list_role_assignments
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     return self._list_role_assignments(request, filters)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/assignment/controllers.py", line 945, in _list_role_assignments
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     include_names=include_names)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 123, in wrapped
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     __ret_val = __f(*args, **kwargs)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/assignment/core.py", line 948, in list_role_assignments
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     return self._get_names_from_role_assignments(role_assignments)
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/site-packages/keystone/assignment/core.py", line 974, in _get_names_from_role_assignments
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi     new_assign['user_name'] = _user['name']
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi KeyError: 'name'
2017-07-13 05:06:10.835 10460 ERROR keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1704205/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
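A defensive variant of the failing line (new_assign['user_name'] = _user['name']) would fall back to a neutral value instead of raising. This is a hypothetical sketch of that idea, not the actual keystone fix:

```python
def assignment_with_names(assignment, user):
    """Attach a user_name to a role assignment without risking KeyError."""
    named = dict(assignment)
    # Users from a sparsely populated LDAP backend may lack 'name' entirely;
    # dict.get degrades to an empty string instead of a 500 error.
    named['user_name'] = user.get('name', '')
    return named

ok = assignment_with_names({'user_id': 'u1'}, {'name': 'alice'})
broken = assignment_with_names({'user_id': 'u2'}, {'id': 'u2'})  # no name
assert ok['user_name'] == 'alice'
assert broken['user_name'] == ''
```

Whether the fallback should be an empty string, the user id, or a sentinel is exactly the open question raised in the comment above.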
[Yahoo-eng-team] [Bug 1704798] [NEW] GET /os-quota-sets/{tenant_id} API is failing with SSL exception
Public bug reported: In the flow of the GET /os-quota-sets/{tenant_id} API, when the project_id/tenant_id is verified by communicating with keystone through a secure (https) connection at https://github.com/openstack/nova/blob/master/nova/api/openstack/identity.py#L32, it fails with a certificate validation error, as below.

2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity Traceback (most recent call last):
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/nova/api/openstack/identity.py", line 42, in verify_project_id
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     raise_exc=False)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, in get
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     return self.request(url, 'GET', **kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     return wrapped(*args, **kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 616, in request
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     resp = send(**kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 678, in _send_request
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     raise exceptions.SSLError(msg)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity SSLError: SSL exception connecting to https://xxx.xxx.xxx.xxx:5000/v3/projects/0fe761dc32934fc88c390d244acb6971: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity
** Affects: nova Importance: Undecided Status: New

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704798

Title: GET /os-quota-sets/{tenant_id} API is failing with SSL exception

Status in OpenStack Compute (nova): New

Bug description: In the flow of the GET /os-quota-sets/{tenant_id} API, when the project_id/tenant_id is verified by communicating with keystone through a secure (https) connection at https://github.com/openstack/nova/blob/master/nova/api/openstack/identity.py#L32, it fails with a certificate validation error, as below.

2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity Traceback (most recent call last):
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/nova/api/openstack/identity.py", line 42, in verify_project_id
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     raise_exc=False)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 758, in get
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     return self.request(url, 'GET', **kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     return wrapped(*args, **kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 616, in request
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     resp = send(**kwargs)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 678, in _send_request
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity     raise exceptions.SSLError(msg)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity SSLError: SSL exception connecting to https://xxx.xxx.xxx.xxx:5000/v3/projects/0fe761dc32934fc88c390d244acb6971: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
2017-07-06 01:13:28.134 21365 ERROR nova.api.openstack.identity

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704798/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
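If the deployment's internal CA simply isn't trusted by the client side, one workaround is pointing keystoneauth at the right CA bundle. cafile and insecure are standard keystoneauth session options, but the option group name and path below are assumptions that vary by nova release and deployment:

```ini
# Hypothetical nova.conf fragment: trust the CA that signed keystone's
# endpoint certificate.
[keystone]
cafile = /etc/ssl/certs/internal-ca.pem
# insecure = true   # disables verification entirely; testing only
```

The certificate presented by the keystone endpoint must also match the hostname or IP used in the URL, or verification fails even with the right CA.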
[Yahoo-eng-team] [Bug 1704423] Re: _test_unshelve_server intermittently fails in functional versioned notification tests
Reviewed: https://review.openstack.org/483986
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=da57d17e6c7d5d7e84af3c46a836ee587581bf8d
Submitter: Jenkins
Branch: master

commit da57d17e6c7d5d7e84af3c46a836ee587581bf8d
Author: Balazs Gibizer
Date: Fri Jul 14 18:18:33 2017 +0200

    fix unshelve notification test instability

    The unshelve notification sample test shelve-offloads an instance, waits for its state to change to SHELVED_OFFLOADED, then unshelves the instance and matches the generated unshelve notification against the stored sample. This test intermittently fails because the host parameter of the instance sometimes doesn't match. The reason is that the compute manager during shelve offloading first sets the state of the instance and only later clears the host of the instance, so the test can start unshelving the instance before the host is cleared by the shelve offload code. The test is updated to wait not just for the state change but also for the change of the host attribute.

    Change-Id: I459332de407187724fd2962effb7f3a34751f505
    Closes-Bug: #1704423

** Changed in: nova Status: In Progress => Fix Released

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704423

Title: _test_unshelve_server intermittently fails in functional versioned notification tests

Status in OpenStack Compute (nova): Fix Released

Bug description:
http://logs.openstack.org/39/483939/1/check/gate-nova-tox-functional-ubuntu-xenial/95a614f/console.html#_2017-07-14_14_16_48_905908

None != u'compute': path: root.payload.nova_object.data.host

Looks like we're racing between the time that the vm_state is set to SHELVED_OFFLOADED and the host is set to None:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4446-L4457

So the test should be waiting for the notification after the state change AND after instance.host is None.
To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1704423/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
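The wait-for-both-attributes idea behind the fix can be sketched with a small polling helper. Here get_server is a stand-in for the functional test's API helper, and the whole snippet is illustrative rather than the actual test code:

```python
import time

# Poll until *all* expected attributes hold, not just the vm_state.
def wait_for_server(get_server, timeout=10.0, interval=0.01, **expected):
    deadline = time.time() + timeout
    while time.time() < deadline:
        server = get_server()
        if all(server.get(k) == v for k, v in expected.items()):
            return server
        time.sleep(interval)
    raise AssertionError('server never reached %s' % (expected,))

# Simulated shelve offload: vm_state flips first, host is cleared later.
responses = [
    {'vm_state': 'SHELVED_OFFLOADED', 'host': 'compute'},  # the racy window
    {'vm_state': 'SHELVED_OFFLOADED', 'host': None},       # fully offloaded
]
def get_server():
    return responses.pop(0) if len(responses) > 1 else responses[0]

server = wait_for_server(get_server, vm_state='SHELVED_OFFLOADED', host=None)
assert server['host'] is None  # only now is it safe to unshelve
```

A test that waits only on vm_state would happily proceed during the racy first response, which is exactly the intermittent failure described.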
[Yahoo-eng-team] [Bug 1704793] [NEW] tests_simple.py has side effects, allows test_create_users to pass when it should not.
*** This bug is a duplicate of bug 1703697 *** https://bugs.launchpad.net/bugs/1703697 Public bug reported: test_simple_run.py has some side effects, and makes test_create_users.py (and possibly others) succeed when they should not. This was found when looking at bug 1704024. It seems that python 3.6 ends up iterating through tests in a different order than 3.5. $ python3 -m nose tests/unittests/test_runs/test_simple_run.py tests/unittests/test_distros/test_create_users.py .. -- Ran 10 tests in 0.065s OK $ python3 -m nose tests/unittests/test_distros/test_create_users.pyF == FAIL: test_basic (tests.unittests.test_distros.test_create_users.TestCreateUser) -- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/mock/mock.py", line 1305, in patched return func(*args, **keywargs) File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_distros/test_create_users.py", line 63, in test_basic mock.call(['passwd', '-l', user])]) AssertionError: [call(['systemd-detect-virt', '--quiet', '--conta[116 chars]r'])] != [call(['useradd', 'foouser', '-m'], logstring=['u[57 chars]r'])] >> begin captured logging << cloudinit.util: DEBUG: Reading from /etc/os-release (quiet=True) cloudinit.util: DEBUG: Read 407 bytes from /etc/os-release cloudinit.util: DEBUG: Reading from /proc/1/cmdline (quiet=False) cloudinit.util: DEBUG: Read 47 bytes from /proc/1/cmdline cloudinit.util: DEBUG: Reading from /etc/system-image/channel.ini (quiet=True) cloudinit.util: DEBUG: Read 0 bytes from /etc/system-image/channel.ini cloudinit.distros: DEBUG: Adding user foouser - >> end captured logging << - -- Ran 9 tests in 0.023s FAILED (failures=1) Related bugs: * bug 1704024: tox fails under python 3.6 ** Affects: cloud-init Importance: High Status: Confirmed ** Changed in: cloud-init Status: New => Confirmed ** Changed in: cloud-init Importance: Undecided => High ** This bug has been marked a duplicate of bug 1703697 tox fails under python 3.6 -- You received this bug 
notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init. https://bugs.launchpad.net/bugs/1704793 Title: tests_simple.py has side effects, allows test_create_users to pass when it should not. Status in cloud-init: Confirmed Bug description: test_simple_run.py has some side effects, and makes test_create_users.py (and possibly others) succeed when they should not. This was found when looking at bug 1704024. It seems that python 3.6 ends up iterating through tests in a different order than 3.5. $ python3 -m nose tests/unittests/test_runs/test_simple_run.py tests/unittests/test_distros/test_create_users.py .. -- Ran 10 tests in 0.065s OK $ python3 -m nose tests/unittests/test_distros/test_create_users.pyF == FAIL: test_basic (tests.unittests.test_distros.test_create_users.TestCreateUser) -- Traceback (most recent call last): File "/usr/lib/python3/dist-packages/mock/mock.py", line 1305, in patched return func(*args, **keywargs) File "/home/smoser-public/src/cloud-init/cloud-init/tests/unittests/test_distros/test_create_users.py", line 63, in test_basic mock.call(['passwd', '-l', user])]) AssertionError: [call(['systemd-detect-virt', '--quiet', '--conta[116 chars]r'])] != [call(['useradd', 'foouser', '-m'], logstring=['u[57 chars]r'])] >> begin captured logging << cloudinit.util: DEBUG: Reading from /etc/os-release (quiet=True) cloudinit.util: DEBUG: Read 407 bytes from /etc/os-release cloudinit.util: DEBUG: Reading from /proc/1/cmdline (quiet=False) cloudinit.util: DEBUG: Read 47 bytes from /proc/1/cmdline cloudinit.util: DEBUG: Reading from /etc/system-image/channel.ini (quiet=True) cloudinit.util: DEBUG: Read 0 bytes from /etc/system-image/channel.ini cloudinit.distros: DEBUG: Adding user foouser - >> end captured logging << - -- Ran 9 tests in 0.023s FAILED (failures=1) Related bugs: * bug 1704024: tox fails under python 3.6 To manage notifications about this bug go to: 
https://bugs.launchpad.net/cloud-init/+bug/1704793/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to :
[Yahoo-eng-team] [Bug 1704788] [NEW] Hardcoded choices for nova scheduler driver
Public bug reported: Hi everyone, There's a driver option in nova.conf which is parsed with configuration from here:
https://github.com/openstack/nova/blob/stable/ocata/nova/conf/scheduler.py#L58
The hardcoded list of choices for that option, specified on lines 60 and 61, blocks the nova scheduler from allowing any custom scheduler driver. Is this intentional, and is there another workaround for plugging in a scheduler driver? Or is it just a mistake? Thanks for your attention.

** Affects: nova Importance: Undecided Status: New
** Tags: config scheduler

-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1704788

Title: Hardcoded choices for nova scheduler driver

Status in OpenStack Compute (nova): New

Bug description: Hi everyone, There's a driver option in nova.conf which is parsed with configuration from here:
https://github.com/openstack/nova/blob/stable/ocata/nova/conf/scheduler.py#L58
The hardcoded list of choices for that option, specified on lines 60 and 61, blocks the nova scheduler from allowing any custom scheduler driver. Is this intentional, and is there another workaround for plugging in a scheduler driver? Or is it just a mistake? Thanks for your attention.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1704788/+subscriptions

-- Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
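The effect of a hardcoded choices list is easy to demonstrate with stdlib argparse standing in for oslo.config; the driver names mirror the in-tree ones, but the snippet is only an analogy, not nova code:

```python
import argparse

# Any value outside the whitelist is rejected at parse time, so an
# out-of-tree driver can never be configured.
parser = argparse.ArgumentParser()
parser.add_argument('--driver', default='filter_scheduler',
                    choices=['filter_scheduler', 'caching_scheduler',
                             'chance_scheduler', 'fake_scheduler'])

assert parser.parse_args(['--driver', 'chance_scheduler']).driver == 'chance_scheduler'
try:
    parser.parse_args(['--driver', 'my_custom_scheduler'])
except SystemExit:
    print('custom driver rejected, mirroring the reported behaviour')
```

Dropping the choices list (or loading drivers via an entry point, as later nova releases did) is what would let deployers plug in their own class.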
[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls
** Also affects: watcher Importance: Undecided Status: New ** Changed in: watcher Assignee: (unassigned) => LiChunlin (lichl) -- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova). https://bugs.launchpad.net/bugs/1596829 Title: String interpolation should be delayed at logging calls Status in congress: Fix Released Status in ec2-api: Confirmed Status in Glance: In Progress Status in glance_store: In Progress Status in Glare: New Status in OpenStack Dashboard (Horizon): Fix Released Status in Ironic: Fix Released Status in OpenStack Identity (keystone): In Progress Status in masakari: Fix Released Status in networking-vsphere: Fix Released Status in OpenStack Compute (nova): Fix Released Status in os-brick: Fix Released Status in os-vif: Fix Released Status in Glance Client: Fix Released Status in python-manilaclient: Fix Released Status in python-openstackclient: Fix Released Status in python-troveclient: Fix Released Status in senlin: Invalid Status in watcher: In Progress Status in Zun: In Progress Bug description: String interpolation should be delayed to be handled by the logging code, rather than being done at the point of the logging call. Wrong: LOG.debug('Example: %s' % 'bad') Right: LOG.debug('Example: %s', 'good') See the following guideline. * http://docs.openstack.org/developer/oslo.i18n/guidelines.html #adding-variables-to-log-messages The rule for it should be added to hacking checks. To manage notifications about this bug go to: https://bugs.launchpad.net/congress/+bug/1596829/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
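Why the delayed form matters: with eager % interpolation the argument is rendered even when the record is dropped, while the logging framework only interpolates arguments for records it actually emits. A stdlib demonstration (the Expensive class is invented to count renders):

```python
import logging

LOG = logging.getLogger('interp-demo')
LOG.setLevel(logging.WARNING)  # DEBUG records will be dropped

class Expensive:
    renders = 0
    def __str__(self):
        Expensive.renders += 1  # count how often we actually stringify
        return 'rendered'

# Wrong: '%' runs before logging can decide to drop the record.
LOG.debug('Example: %s' % Expensive())
assert Expensive.renders == 1  # rendered even though DEBUG is off

# Right: logging defers interpolation, so a dropped record costs nothing.
LOG.debug('Example: %s', Expensive())
assert Expensive.renders == 1  # the second render never happened
```

The deferred form also lets log aggregators group records by the unformatted message template, which is a second reason the guideline exists.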
[Yahoo-eng-team] [Bug 1669746] Re: sample config for host is unclear
Reviewed: https://review.openstack.org/441210
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=80b5ad046da7e633b88bbec687745e03691841cd
Submitter: Jenkins
Branch: master

commit 80b5ad046da7e633b88bbec687745e03691841cd
Author: Stephen Finucane
Date: Fri Jun 30 14:58:06 2017 +0100

    conf: fix netconf, my_ip and host are unclear

    The default values set for the my_ip and host config opts in the
    netconf file report the details of the infra worker, making it
    unclear what the real default value should be. Also, the help text
    for the host opt did not mention its relevance to the cinder and
    neutron settings.

    Change-Id: I69e3953fa46766ea2818bd01c4de949fd43938b0
    Closes-Bug: 1669746
    Implements: blueprint centralize-config-options-pike

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669746

Title:
  sample config for host is unclear

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When we default the host and my_ip settings using oslo_config, we end up reporting the details of the infra worker that generated the docs. We should make use of the sample default to add some text that looks better in the docs.

  In addition, the description of host doesn't mention that it is used as the oslo.messaging queue name for the nova-compute worker. It is also used for the neutron bind host, so it should match the host config of the neutron agent. It is also used for the cinder host attachment information.

  For context, here is the current rendering of the conf:

  #
  # Hostname, FQDN or IP address of this host. Must be valid within AMQP key.
  #
  # Possible values:
  #
  # * String with hostname, FQDN or IP address. Default is hostname of this host.
  # (string value)
  #host = ubuntu-xenial-osic-cloud1-disk-7584065

  Note there are other options needing a sample default to be set:

  #
  # The IP address which the host is using to connect to the management network.
  #
  # Possible values:
  #
  # * String with valid IP address. Default is IPv4 address of this host.
  #
  # Related options:
  #
  # * metadata_host
  # * my_block_storage_ip
  # * routing_source_ip
  # * vpn_ip
  # (string value)
  #my_ip = 10.29.14.104

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669746/+subscriptions
[Yahoo-eng-team] [Bug 1703303] Re: compute.pp fails during installation on rhel 7.4
** Project changed: nova => packstack

** Changed in: packstack
   Status: Invalid => New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1703303

Title:
  compute.pp fails during installation on rhel 7.4

Status in Packstack:
  New

Bug description:
  I was trying to install packstack on rhel 7.4, but I am getting the following errors:

  Error: /Stage[main]/Packstack::Nova::Compute::Libvirt/File_line[libvirt-guests]: Could not evaluate: No such file or directory - /etc/sysconfig/libvirt-guests
  Error: /Stage[main]/Packstack::Nova::Compute::Libvirt/Exec[virsh-net-destroy-default]: Could not evaluate: Could not find command '/usr/bin/virsh'

  and compute.pp fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/packstack/+bug/1703303/+subscriptions