Re: [kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
And know that we're looking at ways to optimize the scale-down resourcing to be more appropriate for 1-node, 1-core "clusters". On Fri, Nov 17, 2017 at 9:42 PM, 'Robert Bailey' via Kubernetes user discussion and Q&A wrote: > You can inspect the pods running in the

Re: [kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Robert Bailey' via Kubernetes user discussion and Q&A
You can inspect the pods running in the kube-system namespace by running kubectl get pods --namespace=kube-system. Some of those pods can be disabled via the GKE API (e.g. turn off the dashboard, disable logging and/or monitoring if you don't need them). On Fri, Nov 17, 2017 at 2:40 AM, 'Vitalii
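A minimal sketch of that workflow, assuming a GKE cluster named my-cluster and the gcloud flags available around that release (the addon key and flag names are assumptions worth verifying with gcloud container clusters update --help):

  # List the system pods that consume capacity on every node
  kubectl get pods --namespace=kube-system

  # Disable the dashboard addon (assumes the KubernetesDashboard addon key)
  gcloud container clusters update my-cluster --update-addons=KubernetesDashboard=DISABLED

  # Turn off cluster logging and monitoring if you don't need them
  gcloud container clusters update my-cluster --logging-service=none
  gcloud container clusters update my-cluster --monitoring-service=none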

[kubernetes-users] Kubelet exits without any indication of error condition (believe it may be failing in dependency checking for cgroup support)

2017-11-17 Thread pferrell
The kubelet binary is exiting (status code 1) when run on a custom Linux distribution (Yocto project). The last log line before the exit relates to the cgroup root, but no real error is logged. Is there a pre-flight script, similar to docker's check-config, to identify any missing kernel
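One hedged way to approximate such a pre-flight check is to run docker's own check-config.sh against the running kernel; this sketch assumes the Yocto kernel exposes its config at /proc/config.gz (it may instead be under /boot, or not exposed at all):

  # Fetch docker's kernel-config checker from the moby repo
  curl -fsSL https://raw.githubusercontent.com/moby/moby/master/contrib/check-config.sh -o check-config.sh
  chmod +x check-config.sh

  # Check the running kernel for missing cgroup/namespace options
  ./check-config.sh /proc/config.gz

  # Confirm the cgroup hierarchy is actually mounted
  mount | grep cgroup

  # Re-run kubelet with higher verbosity to surface the failing check
  kubelet --v=4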

[kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Vitalii Tamazian' via Kubernetes user discussion and Q&A
Hi! I have a small Java/Alpine Linux microservice that previously ran fine on an n1-standard-1 node (1 vCPU, 3.75 GB memory) on GCP. But after a node pool upgrade to 1.6.11 my service became "unschedulable", and I was only able to fix it by adding a second node. So my cluster now runs on 2 vCPUs,
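For diagnosing this kind of "unschedulable" state, a minimal sketch (MY_POD and MY_NODE are placeholder names) is to compare the node's allocatable resources against what the system pods already request:

  # Look for FailedScheduling events explaining why the pod is pending
  kubectl describe pod MY_POD

  # Compare the node's Allocatable values with its "Allocated resources" section
  kubectl describe node MY_NODE

  # See which kube-system pods are reserving CPU/memory on the node
  kubectl get pods --namespace=kube-system -o wide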