[Yahoo-eng-team] [Bug 1803941] [NEW] UnicodeDecodeError occurs when non-ascii char in the instance name/description
Public bug reported:

Description
===========
When a non-ascii character is used in the instance description, spinning up a new instance fails due to a UnicodeDecodeError in the nova-scheduler. We are using the Pike version of nova on Ubuntu (16.1.4). The Python environment is py27.

Steps to reproduce
==================
1) Add some non-ascii characters to the instance description:
   nova update --description "testing 测试" 2eda7ea7-94f1-4ad9-8ea5-f4f007bb4e4d
2) Restart the nova-scheduler process.
3) Spin up a new instance.

Scheduling of the new instance then fails. The workaround is to remove the non-ascii characters and restart the nova-scheduler.

Log stack is:

2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server [req-69d3744f-8f4b-4aa4-b194-d2ec6e002bcb 8ca775f6ebab4cc6b16405b7c2fefd05 704fdb05a2f645ac8dbf4fb1222bf267 - default default] Exception during message handling: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 4326: ordinal not in range(128)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 160, in _process_incoming
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 232, in inner
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 150, in select_destinations
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     alloc_reqs_by_rp_uuid, provider_summaries)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 89, in select_destinations
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     alloc_reqs_by_rp_uuid, provider_summaries)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 158, in _schedule
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     provider_summaries)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 352, in _get_all_host_states
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     spec_obj)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 675, in get_host_states_by_uuids
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     return self._get_host_states(context, compute_nodes, services)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 720, in _get_host_states
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     self._get_instance_info(context, compute))
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 185, in update
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     return _locked_update(self, compute, service, aggregates, inst_dict)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line 182, in _locked_update
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     LOG.debug("Update host state with instances: %s", inst_dict)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1440, in debug
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server     self.logger.debug(msg, *args, **kwargs)
2018-11-19 16:44:45.548 12113 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/logging/__init__.py", line 1155, in
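The failure mode in the traceback above is Python 2's implicit ascii coercion during log formatting: the description reaches the scheduler as UTF-8 bytes, and interpolating those bytes into a log message makes py2 try the ascii codec. A minimal sketch (Python 3 syntax, decoding explicitly rather than relying on py2's implicit coercion):

```python
# 0xe6 is the first byte of a UTF-8 encoded CJK character, and the
# ascii codec rejects any byte >= 0x80.
desc = "testing 测试"
raw = desc.encode("utf-8")      # how the description is stored/transported

try:
    raw.decode("ascii")         # what py2's implicit str->unicode coercion attempts
except UnicodeDecodeError as exc:
    print(exc)                  # ... ordinal not in range(128)

# Decoding with the correct codec round-trips cleanly, which is why the
# fix is to treat the payload as UTF-8 instead of the default codec.
assert raw.decode("utf-8") == desc
```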
[Yahoo-eng-team] [Bug 1803925] [NEW] There is no interface for operators to migrate *all* the existing compute resource providers to be ready for nested providers
Public bug reported:

When the nested resource provider feature was added in Rocky, a root_provider_uuid column, which should hold a non-None value, was added to the resource provider DB. For existing resource providers created before Queens, we have an online data migration: https://review.openstack.org/#/c/377138/62/nova/objects/resource_provider.py@917 but it only runs when resource providers are listed/shown.

We should have an explicit migration script, something like "placement-manage db online_data_migrations", to make sure all the resource providers are ready for the nested provider feature, that is, every root_provider_uuid column has a non-None value.

This bug can be closed when the following tasks are done:
- Provide something like "placement-manage db online_data_migrations" so that in Stein we are sure every root_provider_uuid column has a non-None value.
- Clean up placement/objects/resource_provider.py, removing the many TODOs like "Change this to an inner join when we are sure all root_provider_id values are NOT NULL".

NOTE: This report was created after fixing/closing https://bugs.launchpad.net/nova/+bug/1799892 in a temporary way, without the explicit DB migration script.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: placement
** Tags added: placement

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803925

Title: There is no interface for operators to migrate *all* the existing compute resource providers to be ready for nested providers

Status in OpenStack Compute (nova): New

To manage notifications about this bug go to: https://bugs.launchpad.net/nova/+bug/1803925/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help : https://help.launchpad.net/ListHelp
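The requested migration boils down to batched UPDATEs that make every pre-Queens provider its own root. A toy sketch with sqlite3 (table and column names follow the report; the batching style mirrors nova's online_data_migrations pattern, but this is not placement's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE resource_providers "
    "(id INTEGER PRIMARY KEY, uuid TEXT, root_provider_id INTEGER)")
conn.executemany(
    "INSERT INTO resource_providers (id, uuid, root_provider_id) VALUES (?, ?, ?)",
    [(1, "rp-1", None), (2, "rp-2", None), (3, "rp-3", 3)],  # rp-3 already migrated
)

def migrate_root_provider_ids(conn, max_count):
    """Migrate up to max_count unmigrated rows; return how many were touched."""
    rows = conn.execute(
        "SELECT id FROM resource_providers WHERE root_provider_id IS NULL LIMIT ?",
        (max_count,),
    ).fetchall()
    for (rp_id,) in rows:
        # A provider created before nesting existed is its own root.
        conn.execute(
            "UPDATE resource_providers SET root_provider_id = ? WHERE id = ?",
            (rp_id, rp_id))
    return len(rows)

# Repeat until a batch touches nothing, as a "db online_data_migrations"
# command would, so the run is resumable and bounded per invocation.
while migrate_root_provider_ids(conn, 50):
    pass
```

With all rows migrated, the TODO'd outer joins in resource_provider.py could safely become inner joins.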
[Yahoo-eng-team] [Bug 1803717] Re: Instance snapshot fails with rbd backend
Reviewed: https://review.openstack.org/618534
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=fd540e2135c26d8c297695a3fa73d993655f0ad8
Submitter: Zuul
Branch: master

commit fd540e2135c26d8c297695a3fa73d993655f0ad8
Author: Jens Harbott
Date: Fri Nov 16 14:50:41 2018 +

    Fix regression in glance client call

    In [0] the way parameters are passed to the glance client was
    changed. Sadly one required argument was dropped during this, we
    need to insert it again in order to fix e.g. rbd backend usage.

    [0] https://review.openstack.org/614351

    Change-Id: I5a4cfb3c9b8125eca4f6c9561d3023537e606a93
    Closes-Bug: 1803717

** Changed in: nova
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803717

Title: Instance snapshot fails with rbd backend

Status in OpenStack Compute (nova): Fix Released

Bug description:
http://logs.openstack.org/85/617985/1/check/devstack-plugin-ceph-tempest/58fe872/controller/logs/screen-n-cpu.txt.gz#_Nov_16_07_59_55_423217

Nov 16 08:07:14.891163 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: DEBUG nova.virt.libvirt.storage.rbd_utils [None req-3005471d-96d3-4fdd-a042-0b9e6025ccf4 tempest-ServerActionsTestJSON-406716108 tempest-ServerActionsTestJSON-406716108] creating snapshot(snap) on rbd image(0ef68017-c94d-43b4-8bb9-78f4d77cf928) {{(pid=3629) create_snap /opt/stack/nova/nova/virt/libvirt/storage/rbd_utils.py:383}}
Nov 16 08:07:16.213304 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: DEBUG oslo_service.periodic_task [None req-898d2dca-37a7-403f-b578-5ca2ae90e329 None None] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens {{(pid=3629) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:219}}
Nov 16 08:07:16.322727 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver [None req-3005471d-96d3-4fdd-a042-0b9e6025ccf4 tempest-ServerActionsTestJSON-406716108 tempest-ServerActionsTestJSON-406716108] Failed to snapshot image: TypeError: add_location() takes exactly 4 arguments (3 given)
Nov 16 08:07:16.322893 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 16 08:07:16.323039 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1908, in snapshot
Nov 16 08:07:16.323192 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     purge_props=False)
Nov 16 08:07:16.323326 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/api.py", line 142, in update
Nov 16 08:07:16.323460 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     purge_props=purge_props)
Nov 16 08:07:16.323604 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", line 588, in update
Nov 16 08:07:16.323801 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Nov 16 08:07:16.324000 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", line 908, in _reraise_translated_image_exception
Nov 16 08:07:16.324179 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     six.reraise(type(new_exc), new_exc, exc_trace)
Nov 16 08:07:16.324362 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", line 586, in update
Nov 16 08:07:16.324511 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Nov 16 08:07:16.324655 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", line 600, in _update_v2
Nov 16 08:07:16.324802 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     image = self._add_location(context, image_id, location)
Nov 16 08:07:16.324948 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File "/opt/stack/nova/nova/image/glance.py", line 485, in _add_location
Nov 16 08:07:16.325110 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver     context, 2, 'add_location', args=(image_id, location))
Nov 16 08:07:16.325263 ubuntu-xenial-rax-iad-536097 nova-compute[3629]: ERROR nova.virt.libvirt.driver   File
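The TypeError above is the classic symptom of a dynamic dispatch wrapper: a required positional argument was dropped from the args tuple, and the mistake only surfaces at call time. A toy reproduction (class and signature are illustrative, not nova's or glance's real API):

```python
# A fake client whose add_location requires three arguments, like the
# "takes exactly 4 arguments (3 given)" message implies (self included).
class FakeGlanceClient:
    def add_location(self, image_id, url, metadata):
        return {"id": image_id, "url": url, "metadata": metadata}

def call(client, method, args=()):
    # Dynamic dispatch: argument-count errors are invisible until runtime.
    return getattr(client, method)(*args)

client = FakeGlanceClient()

try:
    # The regression: the required metadata argument was dropped.
    call(client, "add_location", args=("img-1", "rbd://pool/img-1/snap"))
except TypeError as exc:
    print(exc)

# The fix re-inserts the dropped argument into the args tuple.
image = call(client, "add_location", args=("img-1", "rbd://pool/img-1/snap", {}))
```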
[Yahoo-eng-team] [Bug 1803919] [NEW] [L2] dataplane down during ovs-agent restart
Public bug reported:

ENV:
neutron: stable/queens
tenant network type: vlan
provider network type: vlan
kernel: 3.10.0-862.3.2.el7.x86_64

Problem description:
This is an extreme case for the neutron ovs-agent during restart.
(1) Condition 1: the tenant network and the provider network share the physical NIC, i.e. they send traffic to the same physical NIC, so the bridge mapping is: br-provider:bond1. No other mappings.
(2) Condition 2: the neutron-servers are all down, or the message queue is down.
Then, restart the L2 ovs-agent, and the dataplane will go down.

This issue was seen during a large deployment upgrade: when neutron-server and ovs-agent were restarted synchronously, some ovs-agents got message timeouts, and the VM traffic went down.

Code digging:
stable/queens and the master branch have basically the same procedure for this issue. The ovs-agent init procedure has a call to `setup_physical_bridges`, which installs two drop flows: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1225-L1226 After these two drop flows are installed, the VMs' traffic goes down. If the MQ or neutron-server is not up, the VMs will be unreachable. Even once the MQ and neutron-server are all up again, the ovs-agent requires a manual restart to recover the traffic.

** Affects: neutron
   Importance: Undecided
   Assignee: LIU Yulong (dragon889)
   Status: New

** Description changed:
- https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1221-L1222
+ https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1225-L1226

** Changed in: neutron
   Assignee: (unassigned) => LIU Yulong (dragon889)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1803919

Title: [L2] dataplane down during ovs-agent restart

Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1803919/+subscriptions
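The ordering problem described above can be sketched in a few lines (illustrative toy, not the agent's real code): `setup_physical_bridges` installs the drop flows before the agent has synced with neutron-server, so a dead server or MQ leaves traffic blackholed until a later restart succeeds.

```python
# Toy model of the restart sequence described in the report.
class PhysBridge:
    def __init__(self):
        self.flows = ["NORMAL"]        # before restart: traffic flowing

    def setup(self):
        self.flows = ["drop", "drop"]  # the two drop flows from setup_physical_bridges

    def restore(self):
        self.flows = ["NORMAL"]

def ovs_agent_restart(bridge, server_reachable):
    bridge.setup()                     # dataplane goes down here
    if not server_reachable:
        return False                   # RPC timeout: drop flows remain installed
    bridge.restore()                   # only a successful sync restores traffic
    return True

br = PhysBridge()
ovs_agent_restart(br, server_reachable=False)
print(br.flows)                        # drop flows stay until a second, successful restart
ovs_agent_restart(br, server_reachable=True)
print(br.flows)
```

The sketch makes the shape of a fix visible too: either delay the drop flows until after a successful sync, or restore the previous flows when the sync fails.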
[Yahoo-eng-team] [Bug 1801778] Re: Keystone circular reference on OPTIONS
Marking this as invalid in TripleO; there is an underlying issue in keystone causing the "recursive" error. The OPTIONS fix solved the issue directly.

** Changed in: tripleo
   Status: Triaged => Invalid

** Summary changed:
- Keystone circular reference on OPTIONS
+ Keystone 500 on OPTIONS

** Changed in: keystone
   Milestone: None => stein-1

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1801778

Title: Keystone 500 on OPTIONS

Status in OpenStack Identity (keystone): Fix Released
Status in tripleo: Invalid

Bug description:
When trying to authenticate against https://192.168.24.2/keystone/v3/auth/tokens with CORS (the OPTIONS request), I get a 500 error. Inside the keystone container, the logs have this:

2018-11-05 19:01:33.396 230 DEBUG keystone.common.rbac_enforcer.enforcer [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] RBAC: Policy Enforcement Cred Data `identity:validate_token creds(service_project_id=None, service_user_id=None, service_user_domain_id=None, service_project_domain_id=None, trustor_id=None, user_domain_id=default, domain_id=None, trust_id=None, project_domain_id=default, service_roles=[], group_ids=[], user_id=f2ff68e4483344268c959e3dcf6b8b45, roles=[u'member', u'reader', u'admin'], system_scope=None, trustee_id=None, domain_name=None, is_admin_project=True, token=*** (audit_id=RyLKr6cyRLC2p6oV5-52Cg, audit_chain_id=[u'RyLKr6cyRLC2p6oV5-52Cg']) at 0x7f371cf0dc50>, project_id=53568db657e445a49d40a25c4a7fdd42)` enforce_call /usr/lib/python2.7/site-packages/keystone/common/rbac_enforcer/enforcer.py:418
2018-11-05 19:01:33.396 230 DEBUG keystone.common.rbac_enforcer.enforcer [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] RBAC: Policy Enforcement Target Data `identity:validate_token => target(target.token.user.domain.id=default, target.token.user_id=5f351e642aa54a1abc20726ffe9bcc04)` enforce_call /usr/lib/python2.7/site-packages/keystone/common/rbac_enforcer/enforcer.py:426
2018-11-05 19:01:33.415 230 DEBUG keystone.common.rbac_enforcer.enforcer [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] RBAC: Authorization granted enforce_call /usr/lib/python2.7/site-packages/keystone/common/rbac_enforcer/enforcer.py:432
2018-11-05 19:01:33.425 230 ERROR keystone.assignment.core [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] Circular reference found role inference rules - 5be439ef59e949b28f7e38599a828374.
2018-11-05 19:01:33.433 230 ERROR keystone.assignment.core [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] Circular reference found role inference rules - 5be439ef59e949b28f7e38599a828374.
2018-11-05 19:01:33.447 230 ERROR keystone.assignment.core [req-def53dc3-9ac5-4470-9d21-7f737534dc90 f2ff68e4483344268c959e3dcf6b8b45 53568db657e445a49d40a25c4a7fdd42 - default default] Circular reference found role inference rules - 5be439ef59e949b28f7e38599a828374.

This is blocking the tripleo-ui because I can't log in. It's a brand new install using the reproducer in RDO cloud. The deployment finished successfully.

To manage notifications about this bug go to: https://bugs.launchpad.net/keystone/+bug/1801778/+subscriptions
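The "Circular reference found role inference rules" error is a graph problem: implied roles form a directed graph, and a role reachable from its own implications is a cycle. A hedged sketch of detecting such a cycle (toy code, not keystone's implementation):

```python
def find_cycle(rules):
    """Return a cyclic role path, or None.

    rules: dict mapping a prior role to the roles it implies.
    """
    def visit(role, path):
        if role in path:
            # Role implies itself transitively: return the closed loop.
            return path[path.index(role):] + [role]
        for implied in rules.get(role, ()):
            cycle = visit(implied, path + [role])
            if cycle:
                return cycle
        return None

    for role in rules:
        cycle = visit(role, [])
        if cycle:
            return cycle
    return None

# admin -> member -> reader -> admin is circular:
print(find_cycle({"admin": ["member"], "member": ["reader"], "reader": ["admin"]}))
```

Deleting one implied-role rule from the reported inference chain breaks the loop, which is the usual operator-side remedy for this error.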
[Yahoo-eng-team] [Bug 1803882] [NEW] Keystone – error message is not correct/clear in case when no “rule” is associated to user
Public bug reported:

Keystone – the error message is not correct/clear in the case when no "rule" is associated with the user.

Scenario:
1) Source as the admin user:
   . overcloudrc
2) Create a new project:
   openstack project create --description 'my new project' new-project --domain default
3) Create a user for the previously created project:
   openstack user create --project new-project --password PASSWORD new-user
4) Copy the overcloudrc content to a userrc file:
   cp overcloudrc userrc
5) Change the relevant values for new-user:
   export OS_USERNAME=new-user
   export OS_PASSWORD=PASSWORD
   export OS_PROJECT_NAME=new-project
6) Save the modified file and source it:
   source userrc
7) Execute some openstack command, for example:
   openstack network list

Actual result:
On the CLI, the error shown to the user is:
The request you have made requires authentication. (HTTP 401) (Request-ID: req-373d8b48-15b7-4036-83d1-c82453584f15)

In the keystone log /var/log/containers/keystone/keystone.log (5739, 5739):
2018-11-18 15:09:15.902 35 WARNING keystone.common.wsgi [req-373d8b48-15b7-4036-83d1-c82453584f15 - - - - -] Authorization failed. The request you have made requires authentication. from 192.168.100.27: Unauthorized: The request you have made requires authentication.

Expected result:
The real reason, that no rule is associated with 'new-user' (or something like that), should be logged and shown to the user. The actual message is not relevant and not clear.

Keystone logs attached.

** Affects: neutron
   Importance: Undecided
   Status: New

** Tags: logging usability

** Attachment added: "keystone.zip"
   https://bugs.launchpad.net/bugs/1803882/+attachment/5213938/+files/keystone.zip

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1803882

Title: Keystone – error message is not correct/clear in case when no "rule" is associated to user

Status in neutron: New

To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1803882/+subscriptions
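The distinction the report asks for can be shown in a toy sketch (not keystone's code): "bad credentials" and "authenticated but no role assignment on the project" are different failures and deserve different messages, instead of the same generic 401.

```python
def check_access(users, assignments, username, password, project):
    """Toy authorization check; all names and structures are illustrative."""
    user = users.get(username)
    if user is None or user["password"] != password:
        # Genuine authentication failure: the generic 401 is appropriate.
        raise PermissionError("invalid credentials")
    if (username, project) not in assignments:
        # Today keystone surfaces this as the same generic 401; the report
        # wants a message like the one below logged instead.
        raise PermissionError(
            "user %r has no role assigned on project %r" % (username, project))
    return True

users = {"new-user": {"password": "PASSWORD"}}
try:
    # Authenticates fine, but no role assignment exists for the project.
    check_access(users, set(), "new-user", "PASSWORD", "new-project")
except PermissionError as exc:
    print(exc)
```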
[Yahoo-eng-team] [Bug 1803861] [NEW] Unexpected API Error.
Public bug reported:

Description
===========
A ClientException: Unexpected API Error is returned when I perform the command below:

$ openstack console url show provider-instance

Steps to reproduce
==================
* I followed the Rocky install guide for Ubuntu, then performed the above command after the instance launched, to access the virtual console. (https://docs.openstack.org/install-guide/launch-instance-provider.html)

For further investigation, I ran the openstack client command with the debug option; output below.
---
REQ: curl -g -i -X POST http://CONTROLLER:8774/v2.1/servers/0892b6b0-15b9-4ee9-bd72-6ea124e36721/action -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA1}ce0fa2f294403fb0799374c146609be3328d2006" -d '{"os-getVNCConsole": {"type": "novnc"}}'
http://CONTROLLER:8774 "POST /v2.1/servers/0892b6b0-15b9-4ee9-bd72-6ea124e36721/action HTTP/1.1" 500 216
RESP: [500] Connection: keep-alive Content-Length: 216 Content-Type: application/json; charset=UTF-8 Date: Sun, 18 Nov 2018 10:56:58 GMT Openstack-Api-Version: compute 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-0485f3c1-155b-4000-9474-6197a7039577 X-Openstack-Nova-Api-Version: 2.1 X-Openstack-Request-Id: req-0485f3c1-155b-4000-9474-6197a7039577
RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
POST call to compute for http://CONTROLLER:8774/v2.1/servers/0892b6b0-15b9-4ee9-bd72-6ea124e36721/action used request id req-0485f3c1-155b-4000-9474-6197a7039577
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0485f3c1-155b-4000-9474-6197a7039577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/osc_lib/command/command.py", line 41, in run
    return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 116, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/console.py", line 130, in take_action
    data = server.get_console_url(parsed_args.url_type)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 147, in get_console_url
    return self.manager.get_console_url(self, console_type)
  File "/usr/lib/python2.7/dist-packages/novaclient/api_versions.py", line 393, in substitution
    return methods[-1].func(obj, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 931, in get_console_url
    return self._action(action, server, {'type': console_type})
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1918, in _action
    info=info, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 1929, in _action_return_resp_and_body
    return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 334, in post
    return self.request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 83, in request
    raise exceptions.from_response(resp, body, url, method)
ClientException: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0485f3c1-155b-4000-9474-6197a7039577)
clean_up ShowConsoleURL: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-0485f3c1-155b-4000-9474-6197a7039577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/osc_lib/shell.py", line 135, in run
    ret_val = super(OpenStackShell, self).run(argv)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 279, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/dist-packages/osc_lib/shell.py", line 175, in run_subcommand
    ret_value = super(OpenStackShell, self).run_subcommand(argv)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/osc_lib/command/command.py", line 41, in run
    return super(Command, self).run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 116, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/console.py", line 130, in take_action
    data = server.get_console_url(parsed_args.url_type)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 147, in get_console_url
    return
[Yahoo-eng-team] [Bug 1803858] [NEW] Error when creating an instance: No schema supplied
Public bug reported:

(Translated from Chinese.) I deployed openstack-pike following the official documentation; the last step is to launch an instance. Official documentation: https://docs.openstack.org/install-guide/launch-instance-selfservice.html

The preceding steps all worked correctly, for example:

. demo-openrc

[root@controller ~]# openstack flavor list
+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano | 64  | 1    | 0         | 1     | True      |
+----+---------+-----+------+-----------+-------+-----------+

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| f213ed9c-ef55-416e-a615-7a6c811fbdd9 | cirros | active |
+--------------------------------------+--------+--------+

[root@controller ~]# openstack network list
+--------------------------------------+------------------+--------------------------------------+
| ID                                   | Name             | Subnets                              |
+--------------------------------------+------------------+--------------------------------------+
| a01249d3-38fe-4fea-bd27-9d0c08364bc9 | provider         | afa5c984-b2db-4a05-b9bf-60c47ec78b2e |
| bac14094-3b96-4a1f-b2de-1eb74bcb0ba1 | selfservice-demo | 5991c21e-6a01-48b5-9c3e-08a5b2b63f5c |
+--------------------------------------+------------------+--------------------------------------+

[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| 1e207c89-07f2-452e-a96d-fa7eea71e865 | default | Default security group | 44b99ab754fd4c95b6d8e32dd826a39f |
+--------------------------------------+---------+------------------------+----------------------------------+

[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 1f:ce:ee:c9:d4:20:ec:af:a8:49:b8:a9:b9:dc:7d:c5 |
+-------+-------------------------------------------------+

All of the above matches the official guide. But when I execute:

[root@controller ~]# openstack server create --flavor m1.nano --image cirros --nic net-id=bac14094-3b96-4a1f-b2de-1eb74bcb0ba1 --security-group default --key-name mykey selfservice-instance

I get this error (localized message, translated): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-942fd1e8-3cf5-4f54-9b25-5dfaa7fa915d)

The nova-api log shows:

2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions     return wrapped(*args, **kwargs)
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 703, in request
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions     resp = send(**kwargs)
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 781, in _send_request
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions     raise exceptions.UnknownConnectionError(msg, e)
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions UnknownConnectionError: Unexpected exception for http//controller:9292/v2/images/f213ed9c-ef55-416e-a615-7a6c811fbdd9: Invalid URL 'http//controller:9292/v2/images/f213ed9c-ef55-416e-a615-7a6c811fbdd9': No schema supplied. Perhaps you meant http://http//controller:9292/v2/images/f213ed9c-ef55-416e-a615-7a6c811fbdd9?
2018-11-18 17:42:43.373 2041 ERROR nova.api.openstack.extensions
2018-11-18 17:42:43.374 2041 INFO nova.api.openstack.wsgi [req-0b26b9a0-1269-4eb8-a93a-36e9b491d5a3 - - - - -] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

What is this problem and how can I solve it? Thanks.

** Affects: nova
   Importance: Undecided
   Status: New

** Tags: instance

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803858

Title: Error when creating an instance: No schema supplied

Status in OpenStack Compute (nova): New
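The log pinpoints the cause: the image service endpoint nova uses was configured as "http//controller:9292" (the colon after "http" is missing), so the URL has no scheme and requests rejects it with "No schema supplied". A quick check of how such a string parses:

```python
from urllib.parse import urlsplit

bad = "http//controller:9292"    # as seen in the log: ':' after 'http' is missing
good = "http://controller:9292"

# Without the ':' there is no scheme and the whole string parses as a
# path, which is exactly what requests' "No schema supplied" error means.
print(urlsplit(bad).scheme)      # empty string
print(urlsplit(good).scheme)     # 'http'
```

Under that reading, correcting the configured endpoint to include "http://" (wherever it was set, e.g. a glance endpoint value in nova's configuration, assuming that is where the typo lives) should resolve the error.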