[Yahoo-eng-team] [Bug 1594230] [NEW] nova hypervisor cannot show ironic node
Public bug reported:

[[Environment]]
Have enabled n-api, n-cond, n-cpu, n-crt, n-obj, n-sch.

[[Since]]
This issue has been observed since 2016-06-17 07:31:07 by a 3rd party CI.

[[The issue]]
Other nova commands (such as flavor-show or list) work fine. However, showing the details of an Ironic node hypervisor fails.

$ nova hypervisor-list
+----+--------------------------------------+-------+---------+
| ID | Hypervisor hostname                  | State | Status  |
+----+--------------------------------------+-------+---------+
| 1  | aad1dade-c627-42b0-b2bf-dd7d9925f1bb | up    | enabled |
+----+--------------------------------------+-------+---------+

$ ironic node-list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| aad1dade-c627-42b0-b2bf-dd7d9925f1bb | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+

$ nova hypervisor-show aad1dade-c627-42b0-b2bf-dd7d9925f1bb
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-2eab78b2-ac04-454f-b74b-7e7311eeb6d4)

$ nova hypervisor-show 1
ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-898eb920-0547-4178-a44b-ddf9c55fbaec)

[[n-api.log]]
2016-06-20 02:16:20.958 12142 DEBUG nova.api.openstack.wsgi [req-898eb920-0547-4178-a44b-ddf9c55fbaec admin admin] Calling method '>' _process_stack /opt/stack/new/nova/nova/api/openstack/wsgi.py:702
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions [req-898eb920-0547-4178-a44b-ddf9c55fbaec admin admin] Unexpected exception in API method
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions Traceback (most recent call last):
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/extensions.py", line 453, in wrapped
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     return f(*args, **kwargs)
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/hypervisors.py", line 119, in show
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     hyp, service, True, req))
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/api/openstack/compute/hypervisors.py", line 69, in _view_hypervisor
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info)
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 235, in loads
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     return _default_decoder.decode(s)
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions     raise ValueError("No JSON object could be decoded")
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions ValueError: No JSON object could be decoded
2016-06-20 02:16:20.974 12142 ERROR nova.api.openstack.extensions
2016-06-20 02:16:20.975 12142 INFO nova.api.openstack.wsgi [req-898eb920-0547-4178-a44b-ddf9c55fbaec admin admin] HTTP exception thrown: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
2016-06-20 02:16:20.976 12142 DEBUG nova.api.openstack.wsgi [req-898eb920-0547-4178-a44b-ddf9c55fbaec admin admin] Returning 500 to user: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. __call__ /opt/stack/new/nova/nova/api/openstack/wsgi.py:1114
2016-06-20 02:16:20.979 12142 INFO nova.osapi_compute.wsgi.server [req-898eb920-0547-4178-a44b-ddf9c55fbaec admin admin] 127.0.0.1 "GET /v2.1/os-hyper
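The traceback shows _view_hypervisor feeding hypervisor.cpu_info straight into jsonutils.loads, which fails because the ironic driver reports cpu_info as an empty string rather than JSON. A minimal defensive-parsing sketch, assuming only that jsonutils.loads behaves like the stdlib json.loads here; the helper name safe_cpu_info is an illustration, not nova's actual code:

```python
import json


def safe_cpu_info(raw):
    """Parse a driver-reported cpu_info blob, tolerating the empty
    value that the ironic driver returns for bare-metal nodes."""
    if not raw:
        # Empty string (or None) would make json.loads raise
        # "No JSON object could be decoded"; report no CPU info.
        return {}
    return json.loads(raw)


print(safe_cpu_info(""))                    # empty ironic cpu_info -> {}
print(safe_cpu_info('{"arch": "x86_64"}'))  # normal libvirt-style value
```

With a guard like this, hypervisor-show on an ironic node would render an empty cpu_info instead of a 500 error.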
[Yahoo-eng-team] [Bug 1439472] Re: OVS doesn't restart properly when Exception occurred
** Changed in: neutron/kilo
       Status: New => Fix Released

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439472

Title: OVS doesn't restart properly when Exception occurred

Status in neutron: Fix Released
Status in neutron kilo series: Fix Released

Bug description:

We hope this fix can land in kilo. If the timing does not allow that, we hope it can be merged into stable/kilo.

---
[The problem]
If an exception (such as DBConnectionError) occurs while OVS is restarting, the agent reports that everything is OK, but the flows of created networks in br-tun are NOT recovered. They stay broken until the user restarts OVS manually a second time.

---
[action and log]
[q-agent.log]
[[[create network and subnet and add it to DHCP agent]]]
[[[I turned off MySQL]]]
[[[But nothing happened]]]
[[[Then I restarted OVS]]]
[[[Here it goes...]]]
...
2015-04-01 22:06:48.237 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, "Can't connect to MySQL server on '127.0.0.1' (111)") None None
...
2015-04-01 22:06:56.060 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Agent tunnel out of sync with plugin!
2015-04-01 22:06:56.061 DEBUG oslo_messaging._drivers.amqpdriver [req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 705639bc86ae44f4b4cc28715ce981e8 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
2015-04-01 22:06:56.062 DEBUG oslo_messaging._drivers.amqp [req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 41f997f166c04cff986ff08eb298b3eb. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
2015-04-01 22:06:56.085 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, "Can't connect to MySQL server on '127.0.0.1' (111)") None None
...
2015-04-01 22:06:56.111 DEBUG oslo_messaging._drivers.amqpdriver [req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 6f012243c4844978a7b8181bedcafcc9 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
2015-04-01 22:06:56.112 DEBUG oslo_messaging._drivers.amqp [req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 04ed1ccb78bb4ab495b9ebf40c2338f5. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
2015-04-01 22:06:56.138 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Error while processing VIF ports
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1522, in rpc_loop
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     ovs_restarted)
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1260, in process_network_ports
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     port_info.get('updated', set()))
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 360, in setup_port_filters
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.prepare_devices_filter(new_devices)
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 219, in decorated_function
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     *args, **kwargs)
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 229, in prepare_devices_filter
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.context, list(device_ids))
2015-04-01 22:06:56.138 3698 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 116, in security_group_info_for_devices
2015-04
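The failure mode above reduces to flag handling in the agent's rpc_loop: the OVS restart must not be treated as handled until a full resync completes without an exception, otherwise the br-tun flows are never reprovisioned. A minimal sketch of that pattern with hypothetical names (rpc_loop_iteration, flaky_sync are illustrations, not neutron's actual functions):

```python
def rpc_loop_iteration(sync_needed, do_sync):
    """One agent loop iteration; returns the new 'sync needed' flag.

    The flag must survive a failed sync so the next iteration
    retries, instead of the agent reporting success while flows
    are still missing."""
    if not sync_needed:
        return False
    try:
        do_sync()
    except Exception:
        # e.g. DBConnectionError while MySQL is down: keep the flag
        # set so the resync is retried on the next iteration.
        return True
    return False


attempts = {"n": 0}

def flaky_sync():
    """Fails twice (DB unreachable), then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("DB connection lost")


flag = True
while flag:
    flag = rpc_loop_iteration(flag, flaky_sync)

print(attempts["n"])  # 3: two failed syncs were retried, the third succeeded
```

The bug reported here is the opposite behavior: the flag was effectively cleared even on the exception path, so the agent stopped retrying.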
[Yahoo-eng-team] [Bug 1526599] Re: back-port of oslo import to stable kilo is not enough
Hello, Itxaka Serrano (itxakaserrano). Thank you for the comment.

After another try, I found that my devstack was on the master branch. After I checked out the stable/kilo branch of devstack, everything works. The pip list is attached here.

** Changed in: horizon
       Status: New => Invalid

** Attachment added: "pip-list-after-stack.txt"
   https://bugs.launchpad.net/horizon/+bug/1526599/+attachment/4535519/+files/pip-list-after-stack.txt

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526599

Title: back-port of oslo import to stable kilo is not enough

Status in OpenStack Dashboard (Horizon): Invalid

Bug description:

The back-port of the oslo import changes to stable/kilo is not enough, and causes the DevStack installation to fail.

[Error message example]
2015-12-16 02:41:08.120 | + cd /opt/stack/horizon
2015-12-16 02:41:08.120 | + ./run_tests.sh -N --compilemessages
2015-12-16 02:41:08.604 | WARNING:root:No local_settings file found.
2015-12-16 02:41:09.019 | Traceback (most recent call last):
2015-12-16 02:41:09.019 |   File "/opt/stack/horizon/manage.py", line 23, in
2015-12-16 02:41:09.019 |     execute_from_command_line(sys.argv)
2015-12-16 02:41:09.019 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
2015-12-16 02:41:09.020 |     utility.execute()
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 303, in execute
2015-12-16 02:41:09.020 |     settings.INSTALLED_APPS
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 48, in __getattr__
2015-12-16 02:41:09.020 |     self._setup(name)
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 44, in _setup
2015-12-16 02:41:09.020 |     self._wrapped = Settings(settings_module)
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 92, in __init__
2015-12-16 02:41:09.021 |     mod = importlib.import_module(self.SETTINGS_MODULE)
2015-12-16 02:41:09.021 |   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
2015-12-16 02:41:09.021 |     __import__(name)
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/settings.py", line 313, in
2015-12-16 02:41:09.021 |     from openstack_dashboard import policy_backend
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/policy_backend.py", line 23, in
2015-12-16 02:41:09.021 |     from openstack_dashboard.openstack.common import policy
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/openstack/common/policy.py", line 83, in
2015-12-16 02:41:09.021 |     from oslo.config import cfg
2015-12-16 02:41:09.022 | ImportError: No module named config

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1526599/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1526599] [NEW] back-port of oslo import to stable kilo is not enough
Public bug reported:

The back-port of the oslo import changes to stable/kilo is not enough, and causes the DevStack installation to fail.

[Error message example]
2015-12-16 02:41:08.120 | + cd /opt/stack/horizon
2015-12-16 02:41:08.120 | + ./run_tests.sh -N --compilemessages
2015-12-16 02:41:08.604 | WARNING:root:No local_settings file found.
2015-12-16 02:41:09.019 | Traceback (most recent call last):
2015-12-16 02:41:09.019 |   File "/opt/stack/horizon/manage.py", line 23, in
2015-12-16 02:41:09.019 |     execute_from_command_line(sys.argv)
2015-12-16 02:41:09.019 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
2015-12-16 02:41:09.020 |     utility.execute()
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 303, in execute
2015-12-16 02:41:09.020 |     settings.INSTALLED_APPS
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 48, in __getattr__
2015-12-16 02:41:09.020 |     self._setup(name)
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 44, in _setup
2015-12-16 02:41:09.020 |     self._wrapped = Settings(settings_module)
2015-12-16 02:41:09.020 |   File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 92, in __init__
2015-12-16 02:41:09.021 |     mod = importlib.import_module(self.SETTINGS_MODULE)
2015-12-16 02:41:09.021 |   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
2015-12-16 02:41:09.021 |     __import__(name)
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/settings.py", line 313, in
2015-12-16 02:41:09.021 |     from openstack_dashboard import policy_backend
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/policy_backend.py", line 23, in
2015-12-16 02:41:09.021 |     from openstack_dashboard.openstack.common import policy
2015-12-16 02:41:09.021 |   File "/opt/stack/horizon/openstack_dashboard/openstack/common/policy.py", line 83, in
2015-12-16 02:41:09.021 |     from oslo.config import cfg
2015-12-16 02:41:09.022 | ImportError: No module named config

** Affects: horizon
   Importance: Undecided
       Status: New

** Tags: kilo-backport-potential

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526599

Title: back-port of oslo import to stable kilo is not enough

Status in OpenStack Dashboard (Horizon): New

Bug description: The back-port of the oslo import changes to stable/kilo is not enough, and causes the DevStack installation to fail.

To manage notifications about this bug go to: h
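The ImportError comes from the back-ported openstack/common/policy.py still importing from the legacy "oslo.config" namespace package while the installed library only exposes the newer "oslo_config" module name. Code that has to tolerate both layouts can try imports in order; a minimal sketch, where the helper name import_first is an assumption and the demonstration uses stdlib modules so it runs anywhere:

```python
import importlib


def import_first(*names):
    """Import and return the first module name that resolves, e.g.
    import_first("oslo_config.cfg", "oslo.config.cfg") to accept
    both the new and the legacy oslo namespace layouts."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))


# Demonstration with stdlib modules only:
mod = import_first("definitely_not_a_module", "json")
print(mod.__name__)  # json
```

In this bug the actual fix was to check out the matching stable/kilo branch so the imported name and the installed package agree.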
[Yahoo-eng-team] [Bug 1427015] Re: too many subnet-create cause q-dhcp failure
** Also affects: tempest
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427015

Title: too many subnet-create cause q-dhcp failure

Status in neutron: In Progress
Status in tempest: New

Bug description:

The maximum number of fixed IPs on the DHCP port is only validated when the port is created. When the port update request is sent during subnet create or update, the validation is not performed. As a result, the total number of DHCP fixed IPs can exceed max_fixed_ips_per_port. When that happens, the DHCP agent logs errors repeatedly and cannot restart itself. Also, the user is not told that no fixed IP was created for the new subnet, even though the subnet's "enable_dhcp" shows "True".

[reproduce]
1. neutron net-create testnet
2. neutron dhcp-agent-network-add testnet
3. neutron subnet-create testnet CIDR1 --name testsub1
4. neutron subnet-create testnet CIDR2 --name testsub2
5. neutron subnet-create testnet CIDR3 --name testsub3
6. neutron subnet-create testnet CIDR4 --name testsub4
7. neutron subnet-create testnet CIDR5 --name testsub5
>>> since the default value of max_fixed_ips_per_port is 5, it is OK up to here.
8-1. neutron subnet-create testnet CIDR6 --name testsub6
>>> the error occurs repeatedly in q-dhcp.log.

The following case is confirmed to cause the same error:
9-1. neutron subnet-create testnet CIDR6 --name testsub6 --enable_dhcp False
9-2. neutron subnet-update testsub6 --enable_dhcp True

[trace log]
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 112, in call_driver
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 132, in restart
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     self.enable()
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 205, in enable
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(self.network)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 919, in setup
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     port = self.setup_dhcp_port(network)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 863, in setup_dhcp_port
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     'fixed_ips': port_fixed_ips}})
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 441, in update_dhcp_port
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     port_id=port_id, port=port, host=self.host)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 156, in call
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     retry=self.retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     timeout=timeout, retry=retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 349, in send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     retry=retry)
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 340, in _send
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent     raise result
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent RemoteError: Remote error: InvalidInput Invalid input for operation: Exceeded maximim amount of fixed ips per port.
2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent [u'Traceback (most recent call last):\n', u' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply\nexecutor_callback))\n', u' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch\nexecutor_callback)\n', u' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u' File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 312, in update_dhcp_port\nr
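The report boils down to a missing pre-check: each DHCP-enabled subnet consumes one fixed IP on the network's DHCP port, so the subnet create/update should be rejected up front once max_fixed_ips_per_port would be exceeded, instead of letting the DHCP agent fail later. A minimal sketch of such a guard; the function name and the use of ValueError are illustrative, not neutron's actual API:

```python
MAX_FIXED_IPS_PER_PORT = 5  # neutron's default for max_fixed_ips_per_port


def check_dhcp_port_capacity(dhcp_enabled_subnets,
                             limit=MAX_FIXED_IPS_PER_PORT):
    """Raise before the subnet operation proceeds if enabling DHCP on
    one more subnet would overflow the DHCP port's fixed-IP limit."""
    if dhcp_enabled_subnets + 1 > limit:
        raise ValueError(
            "Adding a DHCP-enabled subnet would exceed "
            "max_fixed_ips_per_port (%d)" % limit)


check_dhcp_port_capacity(4)      # subnet #5 on the network: still OK
try:
    check_dhcp_port_capacity(5)  # subnet #6: rejected synchronously
except ValueError as exc:
    print(exc)
```

Rejecting at the API layer also fixes the second complaint in the report: the user gets an immediate error rather than a subnet that claims "enable_dhcp: True" while no fixed IP was actually created.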
[Yahoo-eng-team] [Bug 1439472] [NEW] OVS doesn't restart properly when Exception occurred
682 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Running command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=tag', 'list', 'Port', 'qvo36c9baf7-a4'] execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98
2015-04-01 22:07:00.686 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--columns=tag', 'list', 'Port', u'qvo36c9baf7-a4'] Exit code: 0 Stdin: Stdout: {"data":[[1]],"headings":["tag"]} Stderr: execute /opt/stack/neutron/neutron/agent/linux/utils.py:132
2015-04-01 22:07:00.687 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Setting status for 36c9baf7-a45d-4207-98b5-8155523c7a1b to UP treat_devices_added_or_updated /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1175
2015-04-01 22:07:00.687 DEBUG oslo_messaging._drivers.amqpdriver [req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 6e87e064d5b34ef7b293dbcfdfa60340 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
2015-04-01 22:07:00.687 DEBUG oslo_messaging._drivers.amqp [req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is b6d420f5532d4be9ba30311656b3090e. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
2015-04-01 22:07:00.764 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Configuration for device 36c9baf7-a45d-4207-98b5-8155523c7a1b completed.
2015-04-01 22:07:00.765 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] process_network_ports - iteration:188 - treat_devices_added_or_updated completed. Skipped 0 devices of 4 devices currently available. Time elapsed: 0.541 process_network_ports /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1281
2015-04-01 22:07:00.765 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] process_network_ports - iteration:188 - treat_devices_removed completed in 0.000 process_network_ports /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1299
2015-04-01 22:07:00.765 DEBUG neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Agent rpc_loop - iteration:188 - ports processed. Elapsed:0.716 rpc_loop /opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:1526
2015-04-01 22:07:00.765 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Running command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 'list-ports', 'br-ex'] execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98
2015-04-01 22:07:00.770 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 'list-ports', u'br-ex'] Exit code: 0 Stdin: Stdout: qg-66217dea-97 Stderr: execute /opt/stack/neutron/neutron/agent/linux/utils.py:132
2015-04-01 22:07:00.771 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Running command (rootwrap daemon): ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--if-exists', '--columns=name,external_ids,ofport', 'list', 'Interface', 'qg-66217dea-97'] execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:98
2015-04-01 22:07:00.777 DEBUG neutron.agent.linux.utils [req-352cd26d-7278-483e-a873-7558d0f37acd None None] Command: ['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', '--if-exists', '--columns=name,external_ids,ofport', 'list', 'Interface', u'qg-66217dea-97'] Exit code: 0 Stdin: Stdout: {"data":[["qg-66217dea-97",["map",[["attached-mac","fa:16:3e:6e:00:93"],["iface-id","66217dea-974a-4f2b-bba7-7d4a85cd52b7"],["iface-status","active"]]],1]],"headings":["name","external_ids","ofport"]} Stderr: execute /opt/stack/neutron/neutron/agent/linux/utils.py:132
[Yahoo-eng-team] [Bug 1432460] Re: neutron unit test fails with unexpected keyword "retry_on_request"
sudo pip install oslo.db --upgrade

After the above upgrade, oslo_db/api.py is updated to match upstream, and the bug no longer occurs.

** Changed in: neutron
       Status: New => Invalid

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432460

Title: neutron unit test fails with unexpected keyword "retry_on_request"

Status in OpenStack Neutron (virtual network service): Invalid

Bug description:

As of now (2015-03-16), all neutron unit tests fail with the following error.

===
Failed to import test module: neutron.tests.unit.test_extension_extended_attribute
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 445, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 384, in _get_module_from_name
    __import__(name)
  File "/home/stack/neutron/neutron/tests/unit/test_extension_extended_attribute.py", line 29, in
    from neutron.plugins.ml2 import plugin as ml2_plugin
  File "/home/stack/neutron/neutron/plugins/ml2/plugin.py", line 89, in
    extradhcpopt_db.ExtraDhcpOptMixin):
  File "/home/stack/neutron/neutron/plugins/ml2/plugin.py", line 591, in Ml2Plugin
    retry_on_request=True)
TypeError: __init__() got an unexpected keyword argument 'retry_on_request'
==

This is due to the following fix: https://review.openstack.org/#/c/149261/ which adds "retry_on_request" in neutron/plugins/ml2/plugin.py at line 602.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432460/+subscriptions
[Yahoo-eng-team] [Bug 1432460] [NEW] neutron unit test fails with unexpected keyword "retry_on_request"
Public bug reported:

As of now (2015-03-16), all neutron unit tests fail with the following error.

===
Failed to import test module: neutron.tests.unit.test_extension_extended_attribute
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 445, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/local/lib/python2.7/dist-packages/unittest2/loader.py", line 384, in _get_module_from_name
    __import__(name)
  File "/home/stack/neutron/neutron/tests/unit/test_extension_extended_attribute.py", line 29, in
    from neutron.plugins.ml2 import plugin as ml2_plugin
  File "/home/stack/neutron/neutron/plugins/ml2/plugin.py", line 89, in
    extradhcpopt_db.ExtraDhcpOptMixin):
  File "/home/stack/neutron/neutron/plugins/ml2/plugin.py", line 591, in Ml2Plugin
    retry_on_request=True)
TypeError: __init__() got an unexpected keyword argument 'retry_on_request'
==

This is due to the following fix: https://review.openstack.org/#/c/149261/ which adds "retry_on_request" in neutron/plugins/ml2/plugin.py at line 602.

** Affects: neutron
   Importance: Undecided
     Assignee: watanabe.isao (watanabe.isao)
       Status: New

** Changed in: neutron
     Assignee: (unassigned) => watanabe.isao (watanabe.isao)

--
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432460

Title: neutron unit test fails with unexpected keyword "retry_on_request"

Status in OpenStack Neutron (virtual network service): New

Bug description: As of now (2015-03-16), all neutron unit tests fail with the error shown above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432460/+subscriptions
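The TypeError is version skew: neutron's Ml2Plugin passes retry_on_request=True to an oslo.db wrapper whose installed version predates that keyword, and the fix here was simply to upgrade oslo.db. Code that must tolerate both library versions can instead feature-detect the keyword before passing it; a minimal sketch with hypothetical class names (OldRetryDecorator stands in for the old wrapper and is not oslo.db's real API):

```python
import inspect


def construct_compat(cls, **kwargs):
    """Instantiate cls, silently dropping keyword arguments that this
    version of cls.__init__ does not accept."""
    accepted = inspect.signature(cls.__init__).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return cls(**filtered)


class OldRetryDecorator:
    """Stand-in for an old oslo.db wrapper lacking retry_on_request."""
    def __init__(self, max_retries=10):
        self.max_retries = max_retries


# 'retry_on_request' is unknown to the old class and is dropped
# instead of raising TypeError:
dec = construct_compat(OldRetryDecorator, max_retries=20,
                       retry_on_request=True)
print(dec.max_retries)  # 20
```

A pinned requirements entry (oslo.db>=the release that understands the keyword) is the cleaner production fix; the sketch only shows why the keyword mismatch raises where it does.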
[Yahoo-eng-team] [Bug 1427015] [NEW] too many subnet-create cause q-dhcp failure
er port.\n']. 2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 2015-02-28 00:31:45.553 DEBUG oslo_concurrency.lockutils [req-41e2c225-2f9f-4e82-a18e-c79faf13cc49 admin admin] Lock "dhcp-agent" released by "subnet_update_end" :: held 0.358s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:442 2015-02-28 00:31:45.732 3011 DEBUG neutron.agent.dhcp.agent [-] resync (b682f8e6-5250-4c8c-bb83-93427cfd6185): [RemoteError(u'Remote error: InvalidInput Invalid input for operation: Exceeded maximim amount of fixed ips per port.\n[u\'Traceback (most recent call last):\\n\', u\' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply\\nexecutor_callback))\\n\', u\' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch\\nexecutor_callback)\\n\', u\' File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch\\nresult = func(ctxt, **new_args)\\n\', u\' File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 312, in update_dhcp_port\\nreturn self._port_action(plugin, context, port, \\\'update_port\\\')\\n\', u\' File "/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 75, in _port_action\\n return plugin.update_port(context, port[\\\'id\\\'], port)\\n\', u\' File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 1014, in update_port\\nport)\\n\', u\' File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1389, in update_port\\noriginal[\\\'mac_address\\\'], port[\\\'device_owner\\\'])\\n\', u\' File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 466, in _update_ips_for_port\\nraise n_exc.InvalidInput(error_message=msg)\\n\', u\'InvalidInput: Invalid input for operation: Exceeded maximim amount of fixed ips per port.\\n\'].',)] _periodic_resync_helper /opt/stack/neutron/neutron/agent/dhcp/agent.py:185 2015-02-28 00:31:45.733 3011 DEBUG 
oslo_concurrency.lockutils [-] Lock "dhcp-agent" acquired by "sync_state" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:430
2015-02-28 00:31:45.733 3011 INFO neutron.agent.dhcp.agent [-] Synchronizing state
2015-02-28 00:31:45.734 3011 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 81477b08ea9f4328bafe8f90ef2d3f33 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
2015-02-28 00:31:45.735 3011 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 0f306089fd55406bba1b5e7af7c489ce. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
2015-02-28 00:31:45.861 3011 DEBUG neutron.agent.dhcp.agent [-] Calling driver for network: b682f8e6-5250-4c8c-bb83-93427cfd6185 action: enable call_driver /opt/stack/neutron/neutron/agent/dhcp/agent.py:103
2015-02-28 00:31:45.862 3011 DEBUG neutron.agent.linux.utils [-] Unable to access /opt/stack/data/neutron/dhcp/b682f8e6-5250-4c8c-bb83-93427cfd6185/pid get_value_from_file /opt/stack/neutron/neutron/agent/linux/utils.py:171
2015-02-28 00:31:45.862 3011 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is 2f0b3ab027e74f819d7969701ee4a414 _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:310
2015-02-28 00:31:45.862 3011 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID is 8008ae78501a4269990536eb149cc6b7. _add_unique_id /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:226
2015-02-28 00:31:45.891 3011 ERROR neutron.agent.dhcp.agent [-] Unable to enable dhcp for b682f8e6-5250-4c8c-bb83-93427cfd6185.
** Affects: neutron Importance: Undecided Assignee: watanabe.isao (watanabe.isao) Status: New
** Changed in: neutron Assignee: (unassigned) => watanabe.isao (watanabe.isao)
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427015
Title: too many subnet-create cause q-dhcp failure
Status in OpenStack Neutron (virtual network service): New
Bug description:
[reproduce]
1. neutron net-create testnet
2. neutron dhcp-agent-network-add testnet
3. neutron subnet-create testnet CIDR1 --name testsub1
4. neutron subnet-create testnet CIDR2 --name testsub2
5. neutron subnet-create testnet CIDR3 --name testsub3
6. neutron subnet-create testnet CIDR4 --name testsub4
7. neutron subnet-create testnet CIDR5 --name testsub5
>>> since the default value of max_fixed_ips_per_port is 5, it is OK until here.
8. neutron subnet-create testnet CIDR6 --name tests
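The failure chain above can be sketched in a few lines: the network's DHCP port carries one fixed IP per DHCP-enabled subnet, so the sixth subnet-create pushes the DHCP port past the max_fixed_ips_per_port limit and the port update is rejected, which in turn makes the DHCP agent's resync fail forever. The following is an illustrative Python sketch of that limit check, not Neutron's actual code; all names are hypothetical.

```python
MAX_FIXED_IPS_PER_PORT = 5  # default of the max_fixed_ips_per_port option


class InvalidInput(Exception):
    """Stands in for neutron's InvalidInput exception (illustrative)."""


def validate_fixed_ips(fixed_ips):
    # The DHCP port gets one fixed IP per DHCP-enabled subnet on the
    # network, so the 6th subnet pushes the port past the limit and the
    # agent's update_dhcp_port RPC call is rejected.
    if len(fixed_ips) > MAX_FIXED_IPS_PER_PORT:
        raise InvalidInput("Exceeded maximum amount of fixed ips per port.")
    return fixed_ips
```

With 5 subnets the check passes; with 6 it raises, matching the RemoteError seen in the agent log.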
[Yahoo-eng-team] [Bug 1388698] Re: dhcp_agents_per_network does not work appropriately.
This bug is fixed by [1] https://review.openstack.org/#/c/131150/
** Changed in: neutron Status: In Progress => Invalid
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388698
Title: dhcp_agents_per_network does not work appropriately.
Status in OpenStack Neutron (virtual network service): Invalid
Bug description:
Hi, I want to ask about the dhcp_agents_per_network option in neutron.conf. Its description in neutron.conf reads:
# Number of DHCP agents scheduled to host a network. This enables redundant
# DHCP agents for configured networks.
# dhcp_agents_per_network = 1
dhcp_agents_per_network = 1
I hit a situation where a network is hosted by multiple dhcp-agents even if dhcp_agents_per_network = 1, so I think dhcp_agents_per_network does not work appropriately. The procedure is as follows.
Conditions:
A) multiple network nodes.
B) dhcp-agents are alive on each network node.
C) each network is hosted by one dhcp-agent.
ex: network node1: dhcp-agent1 hosts network1 and network2. network node2: dhcp-agent2 hosts no network.
Procedure:
1) stop dhcp-agent1 and dhcp-agent2.
2) start dhcp-agent2.
Result:
network node1: dhcp-agent1 hosts network1 and network2.
network node2: dhcp-agent2 hosts network1 and network2.
A dnsmasq hosting network1 boots on both network node1 and node2; likewise, a dnsmasq hosting network2 boots on both nodes.
Does the dhcp_agents_per_network option mean "active_dhcp_agents_per_network" or "dhcp_agents_per_network"?
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1388698/+subscriptions
-- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
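For reference, a minimal sketch of what "dhcp_agents_per_network = 1" ought to mean: the scheduler should count the agents that already host a network before binding new ones, so restarting dhcp-agent2 would not add a second binding. This is an illustrative sketch under that reading, not Neutron's scheduler code; all names are hypothetical.

```python
def schedule_network(hosting_agents, candidate_agents, agents_per_network=1):
    """Return the agents to additionally bind to a network.

    hosting_agents: agents that already host the network.
    candidate_agents: alive agents eligible to host it.
    The total number of bindings never exceeds agents_per_network.
    """
    slots = agents_per_network - len(hosting_agents)
    if slots <= 0:
        return []  # already at (or over) the limit: bind nothing new
    fresh = [a for a in candidate_agents if a not in hosting_agents]
    return fresh[:slots]
```

Under this logic, when dhcp-agent1 already hosts network1 and the limit is 1, a restarted dhcp-agent2 gets no new binding; the behavior reported in the bug suggests the existing bindings were not being counted.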
[Yahoo-eng-team] [Bug 1417379] [NEW] KeyError returned when subnet-update enable_dhcp to False
Public bug reported:
[reproduce]
1. net-create(test)
2. subnet-create(subnet1) * enable_dhcp: True
3. subnet-create(subnet2) * enable_dhcp: True
4. subnet-update(subnet2) * enable_dhcp: True -> False
[Trace log]
At the same time as step 4 of [reproduce], the following log starts to be output to /var/log/neutron/dhcp-agent.log:
ERROR neutron.agent.dhcp_agent [req-3ca55527-3620-4698-bb40-95a5fe2e2f73 admin 7a787c3b6a6e4ac9b5d8fe48028197bf] Unable to restart dhcp for 9673de7e-bd5d-4eba-9191-3e98de2043dd.
TRACE neutron.agent.dhcp_agent Traceback (most recent call last):
TRACE neutron.agent.dhcp_agent File "/opt/stack/neutron/neutron/agent/dhcp_agent.py", line 129, in call_driver
TRACE neutron.agent.dhcp_agent getattr(driver, action)(**action_kwargs)
TRACE neutron.agent.dhcp_agent File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 159, in restart
TRACE neutron.agent.dhcp_agent self.enable()
TRACE neutron.agent.dhcp_agent File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 207, in enable
TRACE neutron.agent.dhcp_agent interface_name = self.device_manager.setup(self.network)
TRACE neutron.agent.dhcp_agent File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 951, in setup
TRACE neutron.agent.dhcp_agent port = self.setup_dhcp_port(network)
TRACE neutron.agent.dhcp_agent File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 941, in setup_dhcp_port
TRACE neutron.agent.dhcp_agent for fixed_ip in dhcp_port.fixed_ips]
TRACE neutron.agent.dhcp_agent KeyError: u'779c00d0-64c9-416a-8048-94530c716a83'
TRACE neutron.agent.dhcp_agent
・All agents look fine :-)
・Restart changes nothing. :-(
** Affects: neutron Importance: Undecided Assignee: watanabe.isao (watanabe.isao) Status: New
** Changed in: neutron Assignee: (unassigned) => watanabe.isao (watanabe.isao)
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417379
Title: KeyError returned when subnet-update enable_dhcp to False
Status in OpenStack Neutron (virtual network service): New
To manage notifications about this bug go to: https://bugs.launchpad.net/neutron/+bug/1417379/+subscriptions
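One plausible shape of a fix (illustrative only, not the actual patch): when rebuilding the DHCP port's fixed IPs in setup_dhcp_port, the port still holds a fixed IP on the subnet whose DHCP was just disabled, and indexing the network's subnet dict by that stale subnet id is what raises the KeyError above. A defensive version filters those entries out first. All names here are hypothetical.

```python
def dhcp_port_fixed_ips(fixed_ips, dhcp_subnets):
    """Keep only fixed IPs on subnets that still have DHCP enabled.

    fixed_ips: list of {'subnet_id': ..., 'ip_address': ...} dicts from
               the existing DHCP port.
    dhcp_subnets: dict of subnet_id -> subnet for DHCP-enabled subnets.
    """
    return [
        {'subnet_id': ip['subnet_id'], 'ip_address': ip['ip_address']}
        for ip in fixed_ips
        # Skipping stale subnet ids avoids the KeyError seen in the trace.
        if ip['subnet_id'] in dhcp_subnets
    ]
```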
[Yahoo-eng-team] [Bug 1416308] [NEW] remove_router_interface need to improve its validate to avoid 500 DBError
Public bug reported:
A 500 DBError should not be returned for a user operation.
[User operation]
curl -i -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" http://192.168.122.201:9696/v2.0/routers/edff6799-7f1b-4d9e-bc8e-115cd22afd82/remove_router_interface -d '{"id":"edff6799-7f1b-4d9e-remove_router_interface"}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Content-Length: 150
X-Openstack-Request-Id: req-31c11f44-73c4-47bf-bbc3-a3b0eb41148d
Date: Fri, 30 Jan 2015 06:30:26 GMT
{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}
When neither subnet_id nor port_id is defined in the body of the above REST API request, a 400 error should be returned. However, the validation for this is insufficient and causes a 500 DB error.
[TraceLog]
2015-01-28 21:37:16.956 2589 ERROR neutron.api.v2.resource [req-c65b69cd-9a1b-4083-b85f-ea98030f5022 None] add_router_interface failed
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 87, in resource
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource result = method(request=request, **args)
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 194, in _handle_action
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource return getattr(self._plugin, name)(*arg_list, **kwargs)
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource File "/usr/lib/python2.6/site-packages/neutron/db/l3_db.py", line 367, in add_router_interface
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource 'tenant_id': subnet['tenant_id'],
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource UnboundLocalError: local variable 'subnet' referenced before assignment
2015-01-28 21:37:16.956 2589 TRACE neutron.api.v2.resource
[About fix]
A validation like the one in add_router_interface should be good enough. Note that add_router_interface correctly returns 400:
curl -i -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" http://192.168.122.201:9696/v2.0/routers/139d6962-0919-444a-8ee4-8e47e35f054b/add_router_interface -d '{"router":{"name":"test_router"}}'
HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=UTF-8
Content-Length: 134
X-Openstack-Request-Id: req-bca4b0ca-9e3a-4171-b783-ccc9a8c1cb80
Date: Thu, 29 Jan 2015 02:20:39 GMT
{"NeutronError": {"message": "Bad router request: Either subnet_id or port_id must be specified", "type": "BadRequest", "detail": ""}}
** Affects: neutron Importance: Undecided Assignee: watanabe.isao (watanabe.isao) Status: New
** Changed in: neutron Assignee: (unassigned) => watanabe.isao (watanabe.isao)
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416308
Title: remove_router_interface need to improve its validate to avoid 500 DBError
Status in OpenStack Neutron (virtual network service): New
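The missing validation can be sketched as follows (hypothetical names, not the actual Neutron patch): reject the request body with a 400 before any database access when neither subnet_id nor port_id is present, mirroring the check that add_router_interface already performs.

```python
class BadRequest(Exception):
    """Maps to HTTP 400 Bad Request (illustrative)."""


def validate_interface_info(interface_info):
    """Validate a remove_router_interface (or add_router_interface) body.

    Raises BadRequest before any DB access if the body names neither a
    subnet nor a port, instead of letting a later lookup blow up with a
    500 (e.g. the UnboundLocalError in the trace above).
    """
    subnet_id = interface_info.get('subnet_id')
    port_id = interface_info.get('port_id')
    if not subnet_id and not port_id:
        raise BadRequest("Either subnet_id or port_id must be specified")
    return subnet_id, port_id
```

The body from the failing curl example, {"id": "..."}, would be rejected with a 400 under this check.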
[Yahoo-eng-team] [Bug 1408230] [NEW] name validate check is necessary for neutron-core
Public bug reported:
So far, the validation check of the name when creating a network, subnet, or port is not functional. This is because the validate attribute of name in the RESOURCE_ATTRIBUTE_MAP of neutron/api/v2/attributes.py is set to "None". When a user inputs a name longer than 255 characters on purpose, an internal DB error is returned.
==CLI result: network==
$ neutron net-create 1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678
Request Failed: internal server error while processing your request.
===
==CLI result: subnet==
$ neutron subnet-create hogehoge --name 1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678 192.168.1.0/24
Request Failed: internal server error while processing your request.
===
==CLI result: port==
stack@neutron-ctrl:~/devstack$ neutron port-create hogehoge --name 1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678
Request Failed: internal server error while processing your request.
==Trace log: network==
2015-01-08 02:11:05.152 2469 TRACE neutron.api.v2.resource DBError: (DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO networks (tenant_id, id, name, status, admin_state_up, shared) VALUES (%s, %s, %s, %s, %s, %s)' ('ea398e22f5b74d8aa7ed19a41269690e', 'f9cd32ac-6fb6-4a2f-9fd9-d8c48df5ade0', '1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678', 'ACTIVE', 1, 0)
==Trace log: subnet==
2015-01-08 01:54:56.821 2469 TRACE neutron.api.v2.resource DBError: (DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO subnets (tenant_id, id, name, network_id, ip_version, cidr, gateway_ip, enable_dhcp, shared, ipv6_ra_mode, ipv6_address_mode) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)' ('ea398e22f5b74d8aa7ed19a41269690e', '3dde1013-8f9c-41f6-9d87-0a47bac77ab1', '1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678', '5d61a7be-f0e0-4391-930a-62c100a4dcad', 4, '192.168.1.0/24', '192.168.1.1', 1, 0, None, None)
==Trace log: port==
2015-01-08 02:00:11.032 2469 TRACE neutron.api.v2.resource DBError: (DataError) (1406, "Data too long for column 'name' at row 1") 'INSERT INTO ports (tenant_id, id, name, network_id, mac_address, admin_state_up, status, device_id, device_owner) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)' ('ea398e22f5b74d8aa7ed19a41269690e', '983d6be6-987b-4eb7-94e7-a85518600b3c',
'1234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678123456781234567812345678', '5d61a7be-f0e0-4391-930a-62c100a4dcad', 'fa:16:3e:1d:f6:cb', 1, 'DOWN', '', '')
===
It is better for neutron to return something like a 400 Bad Request error instead of an internal DB error. The name length should be validated and limited to 255 characters.
** Affects: neutron Importance: Undecided Assignee: watanabe.isao (watanabe.isao) Status: In Progress
** Tags: neutron-core
** Changed in: neutron Assignee: (unassigned) => watanabe.isao (watanabe.isao)
** Changed in: neutron Status: New => In Progress
-- You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408230
Title: name validate check is necessary for neutron-core
Status in OpenStack Neutron (virtual network service):
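The proposed fix can be sketched as follows (illustrative, with hypothetical names): enforce the 255-character limit of the 'name' columns at the API layer, so the user gets a 400 Bad Request rather than a DB DataError.

```python
NAME_MAX_LEN = 255  # matches the VARCHAR(255) 'name' columns in the schema


class BadRequest(Exception):
    """Maps to HTTP 400 Bad Request (illustrative)."""


def validate_name(name):
    """Reject names longer than the DB column before the INSERT happens."""
    if len(name) > NAME_MAX_LEN:
        raise BadRequest(
            "'%s...' exceeds maximum length of %d" % (name[:16], NAME_MAX_LEN))
    return name
```

A 256-character name like the one in the CLI examples would then fail fast with a 400 instead of triggering MySQL error 1406.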