I have a stable/mitaka Octavia that had been running fine until today. Whenever I create a load balancer, the amphora VM is created with its management NIC, but the VIP plug step appears to fail. I can ping the amphora's management NIC from the controller (where the Octavia processes run), but the REST API call into the amphora to plug the VIP seems to time out:
Ping works:

[localadmin@dmz-eth2-ucs1]logs> ping 192.168.0.7
PING 192.168.0.7 (192.168.0.7) 56(84) bytes of data.
64 bytes from 192.168.0.7: icmp_seq=1 ttl=64 time=1.11 ms
64 bytes from 192.168.0.7: icmp_seq=2 ttl=64 time=0.461 ms
^C

o-cw.log:

2016-12-09 11:03:54.468 31408 DEBUG octavia.controller.worker.tasks.network_tasks [-] Retrieving network details for amphora ae80ae54-395f-4fad-b0de-39f17dd9b19e execute /opt/stack/octavia/octavia/controller/worker/tasks/network_tasks.py:380
2016-12-09 11:03:55.441 31408 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' (76823522-b504-4d6a-8ba7-c56015cb39a9) transitioned into state 'SUCCESS' from state 'RUNNING' with result '{u'ae80ae54-395f-4fad-b0de-39f17dd9b19e': <octavia.network.data_models.AmphoraNetworkConfig object at 0x7faebb7e99d0>}' _task_receiver /usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-12-09 11:03:55.444 31408 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraPostVIPPlug' (3b798537-3f20-46a3-abe2-a2c24c569cd9) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/local/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-12-09 11:03:55.446 31408 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url plug/vip/100.100.100.9 request /opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:218
2016-12-09 11:03:55.446 31408 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://192.168.0.7:9443/0.5/plug/vip/100.100.100.9 request /opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py:221
2016-12-09 11:03:55.452 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:03:56.458 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:03:57.462 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:03:58.466 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:03:59.470 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:04:00.474 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
2016-12-09 11:04:02.487 31408 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.
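Since ICMP reaching the amphora doesn't prove that TCP 9443 is reachable (a security group rule or a failed TLS handshake could still be the problem), I was going to test the agent port directly from the controller, roughly like this. The cert path below is just my guess at a devstack-style layout, not necessarily what a given deployment uses:

# Plain TCP connect to the agent port (rules out a security group block)
nc -zv 192.168.0.7 9443

# Attempt the TLS handshake with the controller's client certificate;
# /etc/octavia/certs/client.pem is an assumed devstack-style path
# (a combined cert+key PEM, so it is passed as both -cert and -key)
openssl s_client -connect 192.168.0.7:9443 \
    -cert /etc/octavia/certs/client.pem \
    -key /etc/octavia/certs/client.pem

# Same check via curl against the agent's versioned API; the 0.5 prefix
# matches the URL in the o-cw.log above, and -k skips server verification
curl -k --cert /etc/octavia/certs/client.pem https://192.168.0.7:9443/0.5/info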
……transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2016-12-09 11:29:10.509 31408 WARNING octavia.controller.worker.controller_worker [-] Flow 'post-amphora-association-octavia-post-loadbalancer-amp_association-subflow' (f7b0d080-830a-4d6a-bb85-919b6461252f) transitioned into state 'REVERTED' from state 'RUNNING'
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher [-] Exception during message handling: contacting the amphora timed out
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, in _dispatch_and_reply
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     incoming.message))
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 185, in _dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 127, in _do_dispatch
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/controller/queue/endpoint.py", line 45, in create_load_balancer
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     self.worker.create_load_balancer(load_balancer_id)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/controller/worker/controller_worker.py", line 322, in create_load_balancer
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     post_lb_amp_assoc.run()
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 230, in run
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     for _state in self.run_iter(timeout=timeout):
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 308, in run_iter
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     failure.Failure.reraise_if_any(fails)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     failures[0].reraise()
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/taskflow/types/failure.py", line 343, in reraise
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     six.reraise(*self._exc_info)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 82, in _execute_task
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     result = task.execute(**arguments)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", line 229, in execute
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     loadbalancer,
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     amphorae_network_config)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 131, in post_vip_plug
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     net_info)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 325, in plug_vip
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     json=net_info)
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher   File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 246, in request
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher     raise driver_except.TimeOutException()
2016-12-09 11:29:10.509 31408 ERROR oslo_messaging.rpc.dispatcher TimeOutException: contacting the amphora timed out

Can somebody advise what went wrong, or suggest a way to debug this? I ssh'd into the VM and can see that the amphora agent is running and listening on the API port 9443, but I don't know where its log is (the checks I had in mind inside the VM are sketched in the P.S. below).

Thanks,
Wanjing
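P.S. For completeness, this is roughly what I was planning to run inside the amphora. The interface name and the log path are assumptions on my part; I haven't confirmed where the Mitaka amphora image actually writes the agent log:

# Confirm the agent is actually bound to 9443
sudo ss -tlnp | grep 9443

# Watch whether the controller's connection attempts arrive at all;
# eth0 here is assumed to be the management NIC
sudo tcpdump -ni eth0 tcp port 9443

# Guess at the agent log location (this path is an assumption)
sudo tail -f /var/log/amphora-agent.log

If tcpdump shows no inbound SYNs while o-cw is retrying, that would point at a security group or routing problem on the management network rather than at the agent itself.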