Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
Can you check the kubelet log on your minions? It seems the container failed
to start; there might be something wrong with your minion nodes. Thanks.
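For reference, a minimal sketch of that check (the unit names are assumptions based on the Fedora Atomic minion image the quickstart uses; the helper falls back to a hint when systemd itself is unavailable, e.g. if you run it off-node):

```shell
#!/bin/sh
# Report whether a systemd unit is active on this minion; fall back to a
# hint when systemctl is unavailable. Unit names below are assumed from
# the Fedora Atomic image used by the Magnum quickstart.
check_unit() {
  if command -v systemctl >/dev/null 2>&1; then
    if systemctl is-active "$1" >/dev/null 2>&1; then
      echo "$1: active"
    else
      echo "$1: not active"
    fi
  else
    echo "$1: systemctl not available; check by hand"
  fi
}

for unit in kubelet kube-proxy docker; do
  check_unit "$unit"
done

# The kubelet journal is usually where a failed container start shows up:
# journalctl -u kubelet --no-pager | tail -n 50
```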

2015-02-22 15:08 GMT+08:00 Hongbin Lu hongbin...@gmail.com:

 Hi all,

 I tried to go through the new redis example in the quickstart guide [1],
 but was blocked when connecting to the redis slave container:

 $ docker exec -i -t $REDIS_ID redis-cli
 Could not connect to Redis at 127.0.0.1:6379: Connection refused

 Here is the container log:

 $ docker logs $REDIS_ID
 Error: Server closed the connection
 Failed to find master.

 It looks like the redis master disappeared at some point. I checked the
 status about every minute. Below is the output.

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Pending
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Pending

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 redis-master                           kubernetes/redis:v1   10.0.0.4/   name=redis,redis-sentinel=true,role=master              Running
                                        kubernetes/redis:v1
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Failed
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running

 $ kubectl get pod
 NAME                                   IMAGE(S)              HOST        LABELS                                                  STATUS
 512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Running
 233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/   name=redis                                              Running
 3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/   name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending

 Is anyone able to reproduce the problem above? If so, I will file a bug.

 Thanks,
 Hongbin

 [1]
 https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Thanks Jay,

I checked the kubelet log. There are a lot of "Watch closed" errors like the
ones below. Here is the full log: http://fpaste.org/188964/46261561/

Status:Failure, Message:unexpected end of JSON input, Reason:
Status:Failure, Message:501: All the given peers are not reachable

Please note that my environment was set up by following the quickstart guide.
All the kube components appear to be running (checked with the systemctl
status command), and all nodes can ping each other. Any further suggestions?
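For anyone digging through a saved dump of that log, a quick sketch of pulling out the relevant lines (the pattern list is guessed from the two messages quoted above, not an exhaustive kubelet error taxonomy):

```python
import re

# Patterns guessed from the failure lines quoted in this thread.
FAILURE_RE = re.compile(r"Status:Failure|Watch closed", re.IGNORECASE)

def failure_lines(log_text):
    """Return the log lines that look like watch/API failures."""
    return [line for line in log_text.splitlines() if FAILURE_RE.search(line)]

# Illustrative sample modeled on the quoted log, not a real capture.
sample = """\
I0222 10:00:01 kubelet.go:123 syncing pod redis-master
Status:Failure, Message:unexpected end of JSON input, Reason:
Status:Failure, Message:501: All the given peers are not reachable
W0222 10:00:05 reflector.go:84 watch of *api.Pod ended: Watch closed
"""

for line in failure_lines(sample):
    print(line)
```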

Thanks,
Hongbin



Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Hi Jay,

I tried the native k8s commands (in a fresh bay):

kubectl create -s http://192.168.1.249:8080 -f ./redis-master.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-service.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-controller.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-controller.yaml

It still didn't work (same symptom as before). I cannot spot any difference
between the original yaml file and the parsed yaml file. Any other ideas?
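One way to make that comparison mechanical is to fetch the spec the API server actually stored and diff it against the local manifest. A hypothetical sketch (master URL and file names are the ones from this thread; the flags match current kubectl and may differ on the 2015 client, and it degrades to a hint when run off the master):

```shell
#!/bin/sh
# Diff a local manifest against the pod spec stored by the API server.
# Master URL is the one used earlier in this thread.
MASTER=http://192.168.1.249:8080

compare_manifest() {
  # $1 = local manifest file, $2 = pod name.
  # Degrade to a hint when kubectl or the manifest is not available here.
  if command -v kubectl >/dev/null 2>&1 && [ -f "$1" ]; then
    kubectl get -s "$MASTER" pod "$2" -o yaml > parsed.yaml
    diff "$1" parsed.yaml
  else
    echo "skipped: kubectl or $1 not available on this host"
  fi
}

compare_manifest redis-master.yaml redis-master
```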

Thanks,
Hongbin


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Jay Lau
I suspect that something goes wrong after the pod/service definitions are
parsed. Can you try the native k8s commands first, then debug the k8s API
part to check for differences between the original json file and the parsed
json file? Thanks!

kubectl create -f xxx.json


