Hi, I also hit the loopingcall error while running Magnum 4.1.1 (Ocata). It is tracked by this bug: https://bugs.launchpad.net/magnum/+bug/1666790. I cherry-picked the fix to Ocata locally, but it still needs to be backported upstream as well.
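As an aside, the "Exception in string format operation ... KeyError: u'client'" lines in your traceback are secondary noise from the exception machinery itself: the AuthorizationFailure message template is %-interpolated with kwargs that don't contain every placeholder. A minimal standalone sketch (not Magnum's actual exception class) of that failure mode:

```python
# Sketch of why "%(client)s connection failed. %(message)s" % {'code': 500}
# logs "Exception in string format operation" followed by KeyError: 'client'.
template = "%(client)s connection failed. %(message)s"
kwargs = {"code": 500}

try:
    message = template % kwargs
except KeyError as err:
    # The 'client' placeholder is missing from kwargs, so %-formatting
    # fails and the raw, uninterpolated template ends up in the log --
    # exactly what the "AuthorizationFailure: %(client)s connection
    # failed. %(message)s" line shows.
    missing = err.args[0]
    message = template

print(missing)   # client
print(message)   # %(client)s connection failed. %(message)s
```

That is why the log shows the literal `%(client)s` placeholders instead of a formatted message; the underlying authorization failure is the real problem.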
I think the Heat stack create timeout is unrelated to that issue, though. Try the following to debug it:
- Check the cluster's Heat stack and its component resources.
- If the instances were created, SSH to the master and slave nodes and check that the systemd services are up and that cloud-init succeeded.

Regards,
Mark

On 12 May 2017 at 05:57, KiYoun Sung <kys...@devstack.co.kr> wrote:
> Hello, Magnum team.
>
> I installed Magnum on OpenStack Ocata (deployed by Fuel 11.0), following this guide:
> https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/install.html
>
> Below is my installation information:
>
> root@controller:~# dpkg -l | grep magnum
> magnum-api          4.1.0-0ubuntu1~cloud0  all  OpenStack containers as a service
> magnum-common       4.1.0-0ubuntu1~cloud0  all  OpenStack containers as a service - API server
> magnum-conductor    4.1.0-0ubuntu1~cloud0  all  OpenStack containers as a service - conductor
> python-magnum       4.1.0-0ubuntu1~cloud0  all  OpenStack containers as a service - Python library
> python-magnumclient 2.5.0-0ubuntu1~cloud0  all  client library for Magnum API - Python 2.x
>
> After installation, I created a cluster template for Kubernetes:
>
> magnum cluster-template-create --name k8s-cluster-template \
>   --image fedora-atomic-latest \
>   --keypair testkey \
>   --external-network admin_floating_net \
>   --dns-nameserver 8.8.8.8 \
>   --flavor m1.small \
>   --docker-volume-size 5 \
>   --network-driver flannel \
>   --coe kubernetes
>
> Then I created a cluster, but the "magnum cluster-create" command failed:
>
> magnum cluster-create --name k8s-cluster \
>   --cluster-template k8s-cluster-template \
>   --node-count 1 \
>   --timeout 10
>
> After 10 minutes (the "--timeout 10" option), creation failed and the cluster status is "CREATE_FAILED".
>
> I ran "openstack server list", and there is only a kube-master instance:
> root@controller:~# openstack server list
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
> | ID                                   | Name                                                  | Status | Networks                       | Image Name           |
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
> | bf9c5097-74fd-4457-a8a2-4feae76d4111 | k8-i27fw72w5t-0-i6lg6mzpzrl6-kube-master-ekjrg2v6ztss | ACTIVE | private=10.0.0.9, 172.16.1.135 | fedora-atomic-latest |
> +--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
>
> I think the kube-master instance was created successfully:
> I can connect to that instance, and the Docker containers are running normally.
>
> Why did the command fail?
>
> Here are my /var/log/magnum/magnum-conductor.log and /var/log/nova-all.log.
> magnum-conductor.log has an ERROR:
>
> ===============================================
> 2017-05-12 04:05:00.684 756 ERROR magnum.common.keystone [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Keystone API connection failed: no password, trust_id or token found.
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Exception in string format operation, kwargs: {'code': 500}
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception Traceback (most recent call last):
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception   File "/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 92, in __init__
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception     self.message = self.message % kwargs
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception KeyError: u'client'
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Fixed interval looping call 'magnum.service.periodic.ClusterUpdateJob.update_status' failed
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall Traceback (most recent call last):
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137, in _run_loop
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     result = func(*self.args, **self.kw)
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/service/periodic.py", line 71, in update_status
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     cdriver.update_cluster_status(self.ctx, self.cluster)
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/drivers/heat/driver.py", line 80, in update_cluster_status
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     poller.poll_and_check()
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/drivers/heat/driver.py", line 169, in poll_and_check
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     stack = self.openstack_client.heat().stacks.get(
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 59, in wrapped
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     return func(*args, **kw)
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/common/clients.py", line 94, in heat
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     region_name=region_name)
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/common/clients.py", line 45, in url_for
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     return self.keystone().session.get_endpoint(**kwargs)
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/common/keystone.py", line 59, in session
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     auth = self._get_auth()
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/magnum/common/keystone.py", line 98, in _get_auth
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall     raise exception.AuthorizationFailure()
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall AuthorizationFailure: %(client)s connection failed. %(message)s
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
> 2017-05-12 04:10:11.634 756 ERROR magnum.drivers.heat.driver [req-7bf01756-8b81-4387-8512-e29cd2b71aab - - - - -] Cluster error, stack status: CREATE_FAILED, stack_id: c1bf3ab0-c869-458d-a16d-6f4ab8284046, reason: Timed out
> ===============================================
>
> And heat-all.log has the same logging repeatedly:
> ===============================================
> 2017-05-12T04:09:58.514153+00:00 controller heat-engine: 2017-05-12 04:09:58.512 8813 DEBUG heat.engine.scheduler [req-90ba259d-abef-4d45-a87e-0c08cbd5a4ae - - - - -] Task create from HeatWaitCondition "master_wait_condition" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t-0-i6lg6mzpzrl6" [57626786-4858-4715-9c01-97ad6156d6f7] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:58.532625+00:00 controller heat-engine: 2017-05-12 04:09:58.531 8813 DEBUG heat.engine.scheduler [req-90ba259d-abef-4d45-a87e-0c08cbd5a4ae - - - - -] Task create from HeatWaitCondition "master_wait_condition" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t-0-i6lg6mzpzrl6" [57626786-4858-4715-9c01-97ad6156d6f7] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> 2017-05-12T04:09:58.740335+00:00 controller heat-engine: 2017-05-12 04:09:58.738 8879 DEBUG heat.engine.scheduler [req-e0532cce-e310-4222-80d9-159c9aee6afa - - - - -] Task create from TemplateResource "0" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t" [093664fa-7914-41a3-924e-7e5706320301] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:58.746408+00:00 controller heat-engine: 2017-05-12 04:09:58.745 8879 DEBUG heat.engine.scheduler [req-e0532cce-e310-4222-80d9-159c9aee6afa - - - - -] Task create from TemplateResource "0" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t" [093664fa-7914-41a3-924e-7e5706320301] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> 2017-05-12T04:09:58.980256+00:00 controller heat-engine: 2017-05-12 04:09:58.979 8847 DEBUG heat.engine.scheduler [req-fade682c-de27-4d9d-8756-67f6ebd5ffc4 - - - - -] Task create from ResourceGroup "kube_masters" Stack "k8s-cluster-nyj64j3yjhjs" [c1bf3ab0-c869-458d-a16d-6f4ab8284046] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:58.986982+00:00 controller heat-engine: 2017-05-12 04:09:58.985 8847 DEBUG heat.engine.scheduler [req-fade682c-de27-4d9d-8756-67f6ebd5ffc4 - - - - -] Task create from ResourceGroup "kube_masters" Stack "k8s-cluster-nyj64j3yjhjs" [c1bf3ab0-c869-458d-a16d-6f4ab8284046] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> 2017-05-12T04:09:59.533940+00:00 controller heat-engine: 2017-05-12 04:09:59.532 8813 DEBUG heat.engine.scheduler [req-90ba259d-abef-4d45-a87e-0c08cbd5a4ae - - - - -] Task create from HeatWaitCondition "master_wait_condition" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t-0-i6lg6mzpzrl6" [57626786-4858-4715-9c01-97ad6156d6f7] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:59.554131+00:00 controller heat-engine: 2017-05-12 04:09:59.553 8813 DEBUG heat.engine.scheduler [req-90ba259d-abef-4d45-a87e-0c08cbd5a4ae - - - - -] Task create from HeatWaitCondition "master_wait_condition" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t-0-i6lg6mzpzrl6" [57626786-4858-4715-9c01-97ad6156d6f7] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> 2017-05-12T04:09:59.747987+00:00 controller heat-engine: 2017-05-12 04:09:59.746 8879 DEBUG heat.engine.scheduler [req-e0532cce-e310-4222-80d9-159c9aee6afa - - - - -] Task create from TemplateResource "0" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t" [093664fa-7914-41a3-924e-7e5706320301] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:59.764422+00:00 controller heat-engine: 2017-05-12 04:09:59.755 8879 DEBUG heat.engine.scheduler [req-e0532cce-e310-4222-80d9-159c9aee6afa - - - - -] Task create from TemplateResource "0" Stack "k8s-cluster-nyj64j3yjhjs-kube_masters-k6i27fw72w5t" [093664fa-7914-41a3-924e-7e5706320301] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> 2017-05-12T04:09:59.989512+00:00 controller heat-engine: 2017-05-12 04:09:59.987 8847 DEBUG heat.engine.scheduler [req-fade682c-de27-4d9d-8756-67f6ebd5ffc4 - - - - -] Task create from ResourceGroup "kube_masters" Stack "k8s-cluster-nyj64j3yjhjs" [c1bf3ab0-c869-458d-a16d-6f4ab8284046] running step /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:215
> 2017-05-12T04:09:59.997531+00:00 controller heat-engine: 2017-05-12 04:09:59.996 8847 DEBUG heat.engine.scheduler [req-fade682c-de27-4d9d-8756-67f6ebd5ffc4 - - - - -] Task create from ResourceGroup "kube_masters" Stack "k8s-cluster-nyj64j3yjhjs" [c1bf3ab0-c869-458d-a16d-6f4ab8284046] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/scheduler.py:156
> ...
> ===============================================
>
> Thank you.
> Best Regards.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
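For reference, the debugging steps I suggested can be run roughly like this. The stack name and floating IP below are taken from your log; the exact systemd unit names depend on the image and driver version, and `openstack stack failures list` requires a reasonably recent python-heatclient, so treat this as a sketch:

```shell
# 1. Find which Heat resource failed (stack name from the posted log)
openstack stack list
openstack stack resource list --nested-depth 2 k8s-cluster-nyj64j3yjhjs
openstack stack failures list k8s-cluster-nyj64j3yjhjs

# 2. SSH to the master node via its floating IP from "openstack server list"
#    (fedora is the default user on the fedora-atomic image)
ssh fedora@172.16.1.135

# 3. On the node: verify cloud-init finished and the Kubernetes
#    services came up (unit names vary by driver/image version)
sudo journalctl -u cloud-init --no-pager | tail -n 50
sudo systemctl --failed
sudo systemctl status docker etcd kube-apiserver
```

The "Timed out" CREATE_FAILED reason on the master_wait_condition usually means cloud-init on the master never signalled Heat back, so the journalctl output on the node is the place to look first.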