[kubernetes-users] Re: spark on kubernetes

2018-09-14 Thread 'Yinan Li' via Kubernetes user discussion and Q
Spark on Kubernetes doesn't yet support mounting ConfigMaps. I'm not very familiar with how HBase is configured. Is it using the Hadoop configuration system? If so, you can use Spark configuration properties with the prefix "spark.hadoop.*" to set Hadoop config options. Spark automatically removes
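For HBase clients that read their settings through Hadoop's configuration system, the "spark.hadoop.*" passthrough can be sketched as below. This is illustrative only: the image name, jar path, and ZooKeeper quorum/port values are placeholders, not taken from the thread.

```shell
# Pass HBase client settings via Spark's Hadoop-config passthrough
# instead of baking them into the driver/executor images.
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.hadoop.hbase.zookeeper.quorum=zk-0.zk,zk-1.zk,zk-2.zk \
  --conf spark.hadoop.hbase.zookeeper.property.clientPort=2181 \
  local:///opt/spark/jars/my-hbase-job.jar
```

Properties prefixed with `spark.hadoop.` are stripped of the prefix and injected into the Hadoop `Configuration` object that Spark hands to Hadoop-aware clients.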

Re: [kubernetes-users] Re: Can kube-system pods start on "worker" nodes?

2018-09-14 Thread 'Robert Bailey' via Kubernetes user discussion and Q
On Fri, Sep 14, 2018 at 12:59 PM Yakov Sobolev wrote: > Thanks Robert. How can I see what process launches my kube-system pods? I was asking what deployment tool you used to create the cluster. Deployment tools are opinionated about how the control plane is managed and configured. > Our

Re: [kubernetes-users] Kubernetes memory leak on Master node.

2018-09-14 Thread Rodrigo Campos
It seems newer versions fixed a leak. I don't know which component is leaking for you, though. But, for example, this is one that has been fixed in a newer version: https://github.com/kubernetes/kubernetes/pull/65339 See the changelog here for all 1.10.x minors:

[kubernetes-users] Re: Can kube-system pods start on "worker" nodes?

2018-09-14 Thread Yakov Sobolev
Thanks Robert. How can I see what process launches my kube-system pods? Our cluster is running on VMs. kubelet starts all components. On Friday, September 14, 2018 at 2:28:10 PM UTC-4, Yakov Sobolev wrote: > With the master set to unschedulable, do you know what happens if one of > the pods

Re: [kubernetes-users] Can kube-system pods start on "worker" nodes?

2018-09-14 Thread 'Robert Bailey' via Kubernetes user discussion and Q
How are the master components running? Some deployments use static pods on the master, which means that they can never be relocated to a different machine. If you are running them as pods in the cluster (i.e. via self-hosting) then your deployment tool has likely put in the correct tolerations so
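One way to check whether the control-plane components are static pods, which also answers the "what process launches them" question upthread. The manifest path assumes a kubeadm-style setup, which the thread does not confirm; adjust for your deployment tool.

```shell
# Static pod manifests are read directly by the kubelet from disk:
ls /etc/kubernetes/manifests

# Static pods appear in the API as "mirror pods" owned by the Node object,
# so their ownerReference kind is "Node" rather than a controller:
kubectl get pods -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[0].kind}{"\n"}{end}'
```

Pods listed with a `Node` owner are launched by the kubelet itself; pods owned by a `ReplicaSet` or `DaemonSet` are managed by the cluster's controllers and can be rescheduled.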

[kubernetes-users] Can kube-system pods start on "worker" nodes?

2018-09-14 Thread Yakov Sobolev
With the master set to unschedulable, do you know what happens if one of the pods already running on the master goes down? Have you seen those system-type pods come back up? It is OK to run these pods on nodes and not just the master; reference

[kubernetes-users] Re: Kubernetes memory leak on Master node.

2018-09-14 Thread Yakov Sobolev
Our master nodes only run the following pods: calico-node, kube-apiserver, kube-controller-manager, kube-dns, kube-proxy, kube-scheduler. On Friday, September 14, 2018 at 1:42:22 PM UTC-4, Yakov Sobolev wrote: > We are running Kubernetes 1.10.2 and we noticed a memory leak on the master > node. It is

[kubernetes-users] Re: how to taint back "Master Isolation"

2018-09-14 Thread Gabriel Sousa
found it: kubectl taint nodes k8smaster03 node-role.kubernetes.io/master="":NoSchedule On Friday, 14 September 2018 18:02:03 UTC+1, Gabriel Sousa wrote: > hello > how do I revert the command "kubectl taint nodes --all > node-role.kubernetes.io/master-" > tried: > kubectl taint nodes node1
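A quick way to confirm the taint was restored, using the node name from the command above (swap in your own master's name):

```shell
# Re-apply the master NoSchedule taint, then verify it is present.
kubectl taint nodes k8smaster03 node-role.kubernetes.io/master="":NoSchedule
kubectl describe node k8smaster03 | grep -A1 Taints
```

The `describe` output should list `node-role.kubernetes.io/master:NoSchedule` under Taints once the revert has taken effect.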

[kubernetes-users] Kubernetes memory leak on Master node.

2018-09-14 Thread Yakov Sobolev
We are running Kubernetes 1.10.2 and we noticed a memory leak on the master node. Is it a known issue? What is the remedy? We are running several clusters on VMs and have confirmed the memory leak on all of them. Only out-of-the-box components are running on the master nodes. -- You received this message

[kubernetes-users] how to taint back "Master Isolation"

2018-09-14 Thread Gabriel Sousa
hello, how do I revert the command "kubectl taint nodes --all node-role.kubernetes.io/master-"? Tried: kubectl taint nodes node1 key=node-role.kubernetes.io/master:NoSchedule but it won't work. Kubernetes version 1.11.3

Re: [kubernetes-users] Set service-node-port-range in Google Kubernetes Engine

2018-09-14 Thread 'Tim Hockin' via Kubernetes user discussion and Q
We do not expose that as a parameter today. We can discuss the options here, but there's no short answer. Can you talk about what you're doing that needs so many node ports? On Fri, Sep 14, 2018 at 8:27 AM Phạm Huy Hoàng wrote: > For our use-case, we need to access a lot of services via

[kubernetes-users] Set service-node-port-range in Google Kubernetes Engine

2018-09-14 Thread Phạm Huy Hoàng
For our use-case, we need to access a lot of services via NodePort. By default, the NodePort range is 30000-32767. With *kubeadm*, I can set the port range via the *--service-node-port-range* flag. We are using a Google Kubernetes Engine (GKE) cluster. How can I set the port range for a GKE
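For reference, on a self-managed control plane the range is a kube-apiserver flag (which is what kubeadm passes through); as noted in the reply above, GKE manages these flags and does not expose the setting today. The manifest path and range value below are illustrative:

```shell
# On a kubeadm-style cluster, the apiserver runs as a static pod;
# check whether the flag is already set in its manifest:
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml

# To widen the range, add or edit this line in the manifest's command list
# (the kubelet restarts the apiserver automatically when the file changes):
#   - --service-node-port-range=30000-40000
```

This only applies where you control the control-plane manifests; on a managed offering like GKE there is no equivalent knob.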

[kubernetes-users] spark on kubernetes

2018-09-14 Thread 'R Rao' via Kubernetes user discussion and Q
hi guys, trying to figure out how to run a Spark job that talks to my HBase. I do not want to bake/hardcode the HBase config into the driver or executor images. I want the configuration to be available via a ConfigMap. Can anybody please help, I am new to this. Thanks

[kubernetes-users] Re: Multi line log events for kubernetes pods

2018-09-14 Thread 'Matt Brown' via Kubernetes user discussion and Q
I believe the fluentd configuration used by the Stackdriver Logging addon is here: https://github.com/kubernetes/kubernetes/blob/c04fe8c27c9053a37face46abfebc45b9ac23dd7/cluster/addons/fluentd-gcp/fluentd-gcp-configmap.yaml#L108-L118, which I think uses this plugin
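For joining multi-line events (e.g. stack traces) before they reach the output, one common approach is the `fluent-plugin-concat` filter. This is a hedged sketch using that separate plugin, not necessarily the one used by the linked addon config, and the start-of-record regexp is a placeholder you would adapt to your log format:

```
# Concatenate container log lines into one event: a new event starts
# whenever a line begins with a timestamp; continuation lines are appended.
<filter kubernetes.**>
  @type concat
  key log
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
</filter>
```

The `key` parameter names the record field to join (container runtimes typically put the raw line in `log`), and lines not matching `multiline_start_regexp` are merged into the preceding event.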