Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx


> kubectl logs ... --previous ? 
>

I have tried that, but it shows `Error from server (BadRequest): previous 
terminated container "main" in pod "demo-1050-5fb5698d4f-8qtsw" not found` 

BTW, I found `--maximum-dead-containers-per-container` and 
`--maximum-dead-containers` for configuring that policy, but these two flags 
have been deprecated; the newer parameters are `--eviction-hard` and 
`--eviction-soft`, which configure the eviction policy instead (rough sketch 
below).

Reference document: 
https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
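
For anyone else who hits this, a minimal sketch of the two styles of kubelet 
configuration (the values are just placeholders, not recommendations):

    # legacy container-GC flags (deprecated, but still honoured on 1.9)
    kubelet --maximum-dead-containers-per-container=2 \
            --maximum-dead-containers=240 \
            --minimum-container-ttl-duration=5m

    # newer eviction-based flags
    kubelet --eviction-hard=memory.available<100Mi,nodefs.available<10% \
            --eviction-soft=memory.available<300Mi \
            --eviction-soft-grace-period=memory.available=1m30s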

Thanks.



Re: [kubernetes-users] Can you route external traffic to a pod without using a Google Cloud Loadbalancer or routing directly to the Node?

2018-01-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
On Jan 31, 2018 12:03 PM,  wrote:

> Hi guys,
>
> I was wondering if there is another way to route external traffic to a Pod.
> So I know that you can use a Kubernetes Service of type "LoadBalancer",
> which on GKE will automatically create a Google Cloud Loadbalancer for you
> (as described here
> https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer).
> However, having a Google Cloud Loadbalancer is complete overkill for my
> small use case and also relatively expensive.
>
> Furthermore, I've seen solutions online where people use externalIPs on the
> Service and then use the external IP of the Node itself to access the Pod
> (see for example
> https://serverfault.com/questions/801189/expose-port-80-and-443-on-google-container-engine-without-load-balancer).
> However, since your container can be assigned to any Node, this solution is
> not really suitable, as with each new deployment you have to look up the IP
> of the current Node.


You have identified the major issues.  Add to that the fact that the set of
IPs assigned to all your VMs can change as nodes come and go.


> Isn't there a way to just reserve an external IP via Google Cloud and then
> "attach" a Kubernetes Service to it?


That is what a Service type=LoadBalancer is doing.  Pretty much literally,
though the details are more involved.
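
A minimal sketch of that pattern on GKE, in case it is useful (the address
name, region, and IP are placeholders; as far as I know the reserved address
has to be a regional static IP in the cluster's region):

    gcloud compute addresses create my-static-ip --region europe-west1
    gcloud compute addresses describe my-static-ip --region europe-west1

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer
      loadBalancerIP: 203.0.113.20   # the reserved address from above
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080

The Service still provisions the forwarding rule, but it binds to the
reserved address instead of an ephemeral one.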




[kubernetes-users] Can you route external traffic to a pod without using a Google Cloud Loadbalancer or routing directly to the Node?

2018-01-31 Thread simontheleg
Hi guys,

I was wondering if there is another way to route external traffic to a Pod. So 
I know that you can use a Kubernetes Service of type "LoadBalancer", which on 
GKE will automatically create a Google Cloud Loadbalancer for you (as described 
here 
https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer).
However, having a Google Cloud Loadbalancer is complete overkill for my small 
use case and also relatively expensive. 

Furthermore, I've seen solutions online where people use externalIPs on the 
Service and then use the external IP of the Node itself to access the Pod (see 
for example 
https://serverfault.com/questions/801189/expose-port-80-and-443-on-google-container-engine-without-load-balancer).
However, since your container can be assigned to any Node, this solution is not 
really suitable, as with each new deployment you have to look up the IP of the 
current Node. 
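
Roughly, that workaround looks like the sketch below (the name, selector, 
ports, and IP are placeholders; the externalIPs entry is the Node's IP that has 
to be looked up and kept up to date by hand, which is exactly the problem):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
      externalIPs:
      - 203.0.113.10   # the Node's external IP, looked up by hand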


Isn't there a way to just reserve an external IP via Google Cloud and then 
"attach" a Kubernetes Service to it? 



Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread 'Tim Hockin' via Kubernetes user discussion and Q
kubectl logs ... --previous ?
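
(i.e. something along the lines of `kubectl logs <pod-name> -c <container-name> 
--previous`, with the names filled in for the failing pod.)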

On Wed, Jan 31, 2018 at 6:38 AM, Colstuwjx  wrote:
>>
>>
>>>
>>> But what if we want to find out the detailed exit reason for the exited
>>> containers? Is there any parameter to configure that?
>>
>>
>> Have you checked the terminationGracePeriod? I think it will do just that.
>
>
> I'm afraid not. I need to check the exited container, for example a container
> with a wrong configuration, and determine the root cause.
> After the `terminationGracePeriod`, the unhealthy container would be deleted,
> and we can't do things like `docker inspect <container-id>` to find that out.


Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Rodrigo Campos
On Wed, Jan 31, 2018 at 06:38:36AM -0800, Colstuwjx wrote:
> >
> >> But what if we want to find out the detailed exit reason for the exited 
> >> containers? Is there any parameter to configure that?
> >
> > Have you checked the terminationGracePeriod? I think it will do just that.
> 
> I'm afraid not. I need to check the exited container, for example a container 
> with a wrong configuration, and determine the root cause.
> After the `terminationGracePeriod`, the unhealthy container would be deleted, 
> and we can't do things like `docker inspect <container-id>` to find that out.

Ohh, sorry, my bad. I didn't understand that.

And sorry again, not sure how to do that. I've never looked into that myself :-/



Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx

>
>
>  
>
>> But what if we want to find out the detailed exit reason for the exited 
>> containers? Is there any parameter to configure that?
>>
>
> Have you checked the terminationGracePeriod? I think it will do just that.
>

I'm afraid not. I need to check the exited container, for example a container 
with a wrong configuration, and determine the root cause.
After the `terminationGracePeriod`, the unhealthy container would be deleted, 
and we can't do things like `docker inspect <container-id>` to find that out.
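
One thing that might still help, as far as I know: the last terminated state 
(exit code, reason, finish time) is kept in the pod's status in the API even 
after the dead container itself has been cleaned up, so something like the 
following should show it (pod name is a placeholder):

    kubectl get pod <pod-name> \
      -o jsonpath='{.status.containerStatuses[*].lastState.terminated}'

`kubectl describe pod <pod-name>` shows the same information under "Last State".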



[kubernetes-users] Re: Give me normal container name plz!

2018-01-31 Thread Eugene
Oops, pardon. This is solved by grouping and filtering.



Re: [kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Rodrigo Campos
On Wednesday, January 31, 2018, Colstuwjx  wrote:

> Hi team,
>
> As far as I know, Kubernetes will kill the Pod when the readiness probe fails
> more than the `failureThreshold` limit, and the unhealthy containers will be
> deleted by the kubelet.
>

I think only the liveness probe will do that.
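
Roughly, the difference looks like this (the probe itself is just an example):

    livenessProbe:            # failing this restarts/kills the container
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3
    readinessProbe:           # failing this only removes the pod from Service endpoints
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3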



> But what if we want to find out the detailed exit reason for the exited
> containers? Is there any parameter to configure that?
>

Have you checked the terminationGracePeriod? I think it will do just that.
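
By that I mean the pod-level field, roughly:

    spec:
      terminationGracePeriodSeconds: 60   # grace period between SIGTERM and SIGKILL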



[kubernetes-users] Give me normal container name plz!

2018-01-31 Thread Eugene



Hello! I want to configure a CPU utilization chart for my containers in GKE. As 
a result, I see the picture shown in the screenshot. The name of each container 
is formed from the namespace, the default pool, the container name, and so on. 
How can I keep only the container name?



[kubernetes-users] destroyed pod containers trigger

2018-01-31 Thread Colstuwjx
Hi team,

As far as I know, Kubernetes will kill the Pod when the readiness probe fails 
more than the `failureThreshold` limit, and the unhealthy containers will be 
deleted by the kubelet.
But what if we want to find out the detailed exit reason for the exited 
containers? Is there any parameter to configure that?

Thanks.



[kubernetes-users] kubeadm init fails on vmware vm

2018-01-31 Thread Kim Nielsen
Hi,

I'm trying to install Kubernetes on a VMware VM, which in theory should be 
pretty easy. I have done several test installs using QEMU and VirtualBox, and 
both of those work perfectly using this guide:

https://blog.alexellis.io/kubernetes-in-10-minutes/ (the only difference is 
that I'm installing Kubernetes 1.9), and this approach seems to be supported by 
several other documents and videos. Anyhow, I have decided to move my test into 
our VMware (6.5) environment, but I have been trying for weeks now and I cannot 
figure out what is going wrong.

I'm using

# . /etc/os-release ; echo $VERSION
16.04.3 LTS (Xenial Xerus)

# kubeadm version
kubeadm version: {Major:"1", Minor:"9", GitVersion:"v1.9.2", 
GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", 
BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", 
Platform:"linux/amd64"}

and the command:

# kubeadm init --pod-network-cidr=10.244.0.0/16 
--apiserver-advertise-address=10.78.0.7 --kubernetes-version stable-1.9

fails with the message:

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some 
way (required cgroups disabled)
- There is no internet connection, so the kubelet cannot pull the following 
control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.9.2
- gcr.io/google_containers/kube-controller-manager-amd64:v1.9.2
- gcr.io/google_containers/kube-scheduler-amd64:v1.9.2

If you are on a systemd-powered system, you can try to troubleshoot the 
error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster


The VM has internet access and the kubelet is running:

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2018-01-31 10:44:58 CET; 32min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 3039 (kubelet)
    Tasks: 14
   Memory: 42.4M
      CPU: 27.121s
   CGroup: /system.slice/kubelet.service
           └─3039 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/n

Jan 31 11:17:51 ramsley kubelet[3039]: E0131 11:17:51.131101    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.78.0.7:6443/api/v1/services?limit=500=0: dial tcp 10.78.0.7:644
Jan 31 11:17:51 ramsley kubelet[3039]: E0131 11:17:51.131447    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.78.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dramsley=500
Jan 31 11:17:51 ramsley kubelet[3039]: E0131 11:17:51.308097    3039 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "ramsley" not found
Jan 31 11:17:52 ramsley kubelet[3039]: E0131 11:17:52.131562    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.78.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dramsley=500
Jan 31 11:17:52 ramsley kubelet[3039]: E0131 11:17:52.133082    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.78.0.7:6443/api/v1/services?limit=500=0: dial tcp 10.78.0.7:644
Jan 31 11:17:52 ramsley kubelet[3039]: E0131 11:17:52.134104    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.78.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dramsley=500
Jan 31 11:17:53 ramsley kubelet[3039]: E0131 11:17:53.132590    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://10.78.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dramsley=500
Jan 31 11:17:53 ramsley kubelet[3039]: E0131 11:17:53.133736    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.78.0.7:6443/api/v1/services?limit=500=0: dial tcp 10.78.0.7:644
Jan 31 11:17:53 ramsley kubelet[3039]: E0131 11:17:53.134837    3039 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.78.0.7:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dramsley=500
Jan 31 11:17:53 ramsley kubelet[3039]: I0131 11:17:53.794945    3039 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
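
A few generic checks that might help narrow down why the API server never comes 
up on 10.78.0.7:6443 (the container id below is a placeholder):

    # is Docker's cgroup driver the same one the kubelet is configured with?
    docker info | grep -i cgroup
    grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

    # are the static control-plane containers being created at all?
    docker ps -a | grep -E 'kube-apiserver|etcd'

    # if the apiserver container is crash-looping, its own logs usually say why
    docker logs <apiserver-container-id>

    # is anything answering on the advertised address?
    curl -k https://10.78.0.7:6443/healthz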


the logs say:

Jan 31 11:15:09 ramsley kubelet[3039]: E0131 11:15:09.032102    3039 
reflector.go:205]