And know that we're looking at ways to optimize resource usage so it
scales down more appropriately for 1-node, 1-core "clusters".
On Fri, Nov 17, 2017 at 9:42 PM, 'Robert Bailey' via Kubernetes user
discussion and Q&A wrote:
You can inspect the pods running in the kube-system namespace by running:
kubectl get pods --namespace=kube-system
Some of those pods can be disabled via the GKE API (e.g. turn off
dashboard, disable logging and/or monitoring if you don't need them).
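For example, something along these lines should work (the cluster name is a
placeholder, and you should check `gcloud container clusters update --help`
for the exact addon and service flags your gcloud version supports):

```shell
# Hypothetical cluster name; substitute your own.
CLUSTER=my-cluster

# Turn off the dashboard addon.
gcloud container clusters update "$CLUSTER" \
    --update-addons KubernetesDashboard=DISABLED

# Disable cluster logging and/or monitoring if you don't need them.
gcloud container clusters update "$CLUSTER" --logging-service none
gcloud container clusters update "$CLUSTER" --monitoring-service none
```

After the update, the corresponding pods should drain out of kube-system,
freeing their CPU and memory requests on the node.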
On Fri, Nov 17, 2017 at 2:40 AM, 'Vitalii' wrote:
The kubelet binary is exiting (status code 1) when run on a custom Linux
distribution (Yocto Project).
The last log line before kubelet exits relates to the cgroup root, but no
real error is logged. Is there a pre-flight script, similar to Docker's
check-config.sh, to identify any missing kernel options?
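I'm not aware of an official kubelet equivalent, but you can sketch one
yourself in the spirit of Docker's contrib/check-config.sh. The flag list
below is an assumption, not an official kubelet requirement list; on a real
node, point CONFIG at /proc/config.gz (zcat it first) or
/boot/config-$(uname -r):

```shell
# check_flag: report whether one kernel config option is enabled.
check_flag() {
    # Treat a flag as present when built in (=y) or built as a module (=m).
    if grep -q "^$1=[ym]" "$CONFIG"; then
        echo "ok:      $1"
    else
        echo "missing: $1"
    fi
}

# Demo against a minimal sample config standing in for the real file.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
CONFIG_CGROUPS=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y
EOF

for f in CONFIG_CGROUPS CONFIG_CGROUP_CPUACCT CONFIG_CGROUP_DEVICE CONFIG_MEMCG; do
    check_flag "$f"
done
rm -f "$CONFIG"
```

Any "missing" line is a candidate for the Yocto kernel config; rebuild with
the option enabled and retry kubelet.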
Hi!
I have a small Java/Alpine Linux microservice that previously ran fine
on an n1-standard-1 (1 vCPU, 3.75 GB memory) node on GCP.
But after a node-pool upgrade to 1.6.11 my service became "unschedulable",
and I was only able to fix it by adding a second node. So my cluster now
runs on 2 vCPUs,