Re: [kubernetes-users] livenessProbe failed won't set pod status Ready=False

2018-02-04 Thread Colstuwjx
> > Sorry, my mistake, it seems that the ready=True is due to
> > `initialDelaySeconds` has been set to `30s`, and within the 30 seconds,
> > the nginx POD would be `Ready`.
>
> Are you really sure that is the case?
>
> Can you send a yaml and kubectl commands/output to reproduce?

[kubernetes-users] Re: livenessProbe failed won't set pod status Ready=False

2018-02-02 Thread Colstuwjx
Sorry, my mistake, it seems that the ready=True is because `initialDelaySeconds` has been set to `30s`, and within those 30 seconds the nginx POD would be `Ready`. BTW, `initialDelaySeconds` is meant to say "I'm not ready in this period, and that's OK, I need some time to warm up", but the POD status …
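A minimal sketch of the kind of manifest being discussed (the pod name and probe path are illustrative, not taken from the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probe-demo        # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30   # the probe is not run until 30s after container start
      periodSeconds: 5
```

During the `initialDelaySeconds` window the kubelet simply does not run the probe yet, which is what the thread is debating: what the Ready condition reports in that window.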

[kubernetes-users] livenessProbe failed won't set pod status Ready=False

2018-02-02 Thread Colstuwjx
Hi team, I have set up an nginx pod, and was confused about the health-check behavior: 1. when the readinessProbe fails, the nginx pod is set Ready=False, but the POD is not killed; 2. when the livenessProbe fails, the nginx pod is killed, restartCount is incremented, and the Ready is always …
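The contrast described above can be sketched in one manifest (names are illustrative); the comments state the standard behavior of each probe type:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-health-demo        # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    ports:
    - containerPort: 80
    # Failure: container marked not ready (Ready=False), removed from
    # Service endpoints, but NOT restarted.
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
      failureThreshold: 3
    # Failure: container is killed and restarted per restartPolicy,
    # incrementing restartCount.
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
      failureThreshold: 3
```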

Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx
> kubectl logs ... --previous ?

I have tried that, but it shows `Error from server (BadRequest): previous terminated container "main" in pod "demo-1050-5fb5698d4f-8qtsw" not found`. BTW, I found `maximum-dead-containers-per-container` and `maximum-dead-containers` to configure the policy, …
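A sketch of the commands involved (the pod name is copied from the error above; the kubelet flags are the garbage-collection flags mentioned in the message, since deprecated but accepted in releases of that era):

```shell
# Fetch logs of the previous (terminated) container instance; this fails
# with the BadRequest above once the kubelet has garbage-collected the
# dead container.
kubectl logs demo-1050-5fb5698d4f-8qtsw -c main --previous

# Kubelet GC flags controlling how long dead containers are retained:
#   --maximum-dead-containers-per-container=2   keep up to 2 dead instances per container
#   --maximum-dead-containers=240               keep at most 240 dead containers total
```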

Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx
> > But, what if we want to trigger the detailed exit reason for the
> > exited containers? Is there any parameter to configure that?
>
> Have you checked the terminationGracePeriod? I think it will do just that.

I'm afraid not, I need to check the exited container, such as some …

[kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx
Hi team, As I know, kubernetes will kill the POD when the liveness probe fails more than the `failureThreshold` limit, and the unhealthy containers will be deleted by kubelet. But, what if we want to retrieve the detailed exit reason for the exited containers? Is there any parameter to configure …
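One way to get the exit details without keeping the dead container around is to read the status the API server records (the pod name "demo" is illustrative):

```shell
# Why did the last instance of the first container exit? (e.g. "Error",
# "OOMKilled", "Completed")
kubectl get pod demo \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Full status block, including exitCode, startedAt and finishedAt:
kubectl describe pod demo
```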

Re: [kubernetes-users] How to configure decentralized dns resolution like the way in docker `--dns`

2018-01-22 Thread Colstuwjx
Thanks!

On Monday, January 22, 2018 at 10:57:20 PM UTC+8, John Belamaric wrote:
> This is in alpha stage in 1.9:
>
> https://github.com/kubernetes/features/issues/504
>
> https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/pod-resolv-conf.md
>
> The plan …
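The pod-resolv-conf proposal linked above became the per-pod `dnsConfig` field; a minimal sketch of what the docker `--dns`-style setup looks like with it (names and addresses are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo            # hypothetical name
spec:
  dnsPolicy: "None"         # ignore cluster DNS entirely
  dnsConfig:
    nameservers:
    - 1.2.3.4               # analogous to `docker run --dns 1.2.3.4`
    searches:
    - example.internal      # illustrative search domain
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
```

In 1.9 this was alpha behind the `CustomPodDNS` feature gate, per the proposal.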