Re: [kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
And know that we're looking at ways to optimize the scale-down
resourcing to be more appropriate for 1-node, 1-core "clusters".

On Fri, Nov 17, 2017 at 9:42 PM, 'Robert Bailey' via Kubernetes user
discussion and Q&A wrote:
> You can inspect the pods running in the kube-system namespace by running
>
> kubectl get pods --namespace=kube-system
>
>
> Some of those pods can be disabled via the GKE API (e.g. turn off dashboard,
> disable logging and/or monitoring if you don't need them).
>
> On Fri, Nov 17, 2017 at 2:40 AM, 'Vitalii Tamazian' via Kubernetes user
> discussion and Q&A wrote:
>>
>> Hi!
>> I have a small Java/Alpine Linux microservice that previously ran
>> fine on an n1-standard-1 node (1 vCPU, 3.75 GB memory) on GCP.
>> But after a node pool upgrade to 1.6.11 my service became "unschedulable",
>> and I was only able to fix it by adding a second node. So my cluster now
>> runs on 2 vCPUs / 7.50 GB, which IMO is quite overkill for a service that
>> actually uses at most 300 MB of memory. The average CPU usage is very low.
>> There is still a single pod in the cluster.
>>
>> Is there any way to check what consumes the rest of the resources? Is
>> there a way to make it schedulable on 1 node again?
>>
>> Thanks,
>> Vitalii
>>



Re: [kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Robert Bailey' via Kubernetes user discussion and Q&A
You can inspect the pods running in the kube-system namespace by running

kubectl get pods --namespace=kube-system
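
To see what is actually reserving the node's CPU and memory, the node
description is the quickest view (a sketch, assuming a standard kubectl
setup):

kubectl describe nodes

The "Allocated resources" section lists the CPU and memory requests of every
pod on the node, including the kube-system pods, which is usually what keeps
everything from fitting on a single small node.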


Some of those pods can be disabled via the GKE API (e.g. turn off
dashboard, disable logging and/or monitoring if you don't need them).
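
As a rough sketch (assuming the gcloud CLI; "my-cluster" is a placeholder
for your cluster name), the relevant commands look like:

# Disable the Kubernetes dashboard addon
gcloud container clusters update my-cluster \
    --update-addons=KubernetesDashboard=DISABLED

# Turn off cluster logging and monitoring
gcloud container clusters update my-cluster --logging-service=none
gcloud container clusters update my-cluster --monitoring-service=none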

On Fri, Nov 17, 2017 at 2:40 AM, 'Vitalii Tamazian' via Kubernetes user
discussion and Q&A wrote:

> Hi!
> I have a small Java/Alpine Linux microservice that previously ran
> fine on an n1-standard-1 node (1 vCPU, 3.75 GB memory) on GCP.
> But after a node pool upgrade to 1.6.11 my service became "unschedulable",
> and I was only able to fix it by adding a second node. So my cluster now
> runs on 2 vCPUs / 7.50 GB, which IMO is quite overkill for a service that
> actually uses at most 300 MB of memory. The average CPU usage is very low.
> There is still a single pod in the cluster.
>
> Is there any way to check what consumes the rest of the resources? Is
> there a way to make it schedulable on 1 node again?
>
> Thanks,
> Vitalii
>



[kubernetes-users] Kubelet exits without any indication of error condition (believe it may be failing in dependency checking for cgroup support)

2017-11-17 Thread pferrell
The kubelet binary is exiting (status code 1) when run on a custom Linux
distribution (built with the Yocto Project).


The last log line before the kubelet exits relates to the cgroup root, but no
real error is logged. Is there a pre-flight script, similar to Docker's
check-config.sh, that identifies whether any kernel or program dependencies
are missing? Is more verbose logging available (I already ran with --v=10)?

Thanks,
Phil


Last couple of logs from kubelet (full log at end of message):
I1117 19:21:43.273883    4081 manager.go:222] Version: {KernelVersion:4.4.87-yocto-standard ContainerOsVersion:SnapL 0.1.0 (Apple) DockerVersion:17.03.2-ce DockerAPIVersion:1.27 CadvisorVersion: CadvisorRevision:}
W1117 19:21:43.274636    4081 server.go:232] No api server defined - no events will be sent to API server.
I1117 19:21:43.274646    4081 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
error: failed to run Kubelet: exit status 1




root@snapl-x86-64:~# uname -a
Linux snapl-x86-64 4.4.87-yocto-standard #2 SMP Wed Nov 15 15:53:35 PST 2017 
x86_64 x86_64 x86_64 GNU/Linux


root@snapl-x86-64:~# /usr/share/docker/check-config.sh 
info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled
- CONFIG_BRIDGE: enabled
- CONFIG_BRIDGE_NETFILTER: enabled
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_SWAP: missing
- CONFIG_MEMCG_SWAP_ENABLED: missing
- CONFIG_LEGACY_VSYSCALL_EMULATE: enabled
- CONFIG_MEMCG_KMEM: missing
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: missing
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: missing
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
- CONFIG_VXLAN: enabled
  Optional (for encrypted networks):
  - CONFIG_CRYPTO: enabled
  - CONFIG_CRYPTO_AEAD: enabled
  - CONFIG_CRYPTO_GCM: enabled
  - CONFIG_CRYPTO_SEQIV: enabled
  - CONFIG_CRYPTO_GHASH: enabled
  - CONFIG_XFRM: enabled
  - CONFIG_XFRM_USER: enabled
  - CONFIG_XFRM_ALGO: enabled
  - CONFIG_INET_ESP: missing
  - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
  - "ipvlan":
- CONFIG_IPVLAN: enabled
  - "macvlan":
- CONFIG_MACVLAN: enabled
- CONFIG_DUMMY: enabled
- Storage Drivers:
  - "aufs":
- CONFIG_AUFS_FS: missing
  - "btrfs":
- CONFIG_BTRFS_FS: enabled
- CONFIG_BTRFS_FS_POSIX_ACL: missing
  - "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
  - "overlay":
- CONFIG_OVERLAY_FS: enabled
  - "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 100


root@snapl-x86-64:~# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.03.2-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: 
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 3addd840653146c90a254301d6c3a663c7fd6429 (expected: 4ab9917febca54791c5f071a9d1f404867857fcc)
runc version: 9d6821d1b53908e249487741eccd567249ca1d99-dirty (expected: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe)
init version: 0effd37 (expected: 949e6fa)
Kernel Version: 4.4.87-yocto-standard
Operating System: SnapL 0.1.0 (Apple)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.954GiB
Name: snapl-x86-64
ID: H6QT:KSHE:SSXG:QNNM:4JYK:AAQ6:W7QR:FF4R:SVH2:BHFM:CWBL:3HIS
Docker Root 

[kubernetes-users] simple k8s GCP cluster requires 2 nodes after upgrade to 1.6.11

2017-11-17 Thread 'Vitalii Tamazian' via Kubernetes user discussion and Q&A
Hi!
I have a small Java/Alpine Linux microservice that previously ran
fine on an n1-standard-1 node (1 vCPU, 3.75 GB memory) on GCP.
But after a node pool upgrade to 1.6.11 my service became "unschedulable",
and I was only able to fix it by adding a second node. So my cluster now
runs on 2 vCPUs / 7.50 GB, which IMO is quite overkill for a service that
actually uses at most 300 MB of memory. The average CPU usage is very low.
There is still a single pod in the cluster.

Is there any way to check what consumes the rest of the resources? Is there 
a way to make it schedulable on 1 node again?

Thanks,
Vitalii
