Spark on Kubernetes doesn't yet support mounting ConfigMaps. I'm not very
familiar with how HBase is configured. Is it using the Hadoop configuration
system? If so, you can use Spark configuration properties with the prefix
"spark.hadoop.*" to set Hadoop config options. Spark automatically removes
the "spark.hadoop." prefix and adds the remaining key/value pairs to the
Hadoop Configuration it creates.
On Fri, Sep 14, 2018 at 12:59 PM Yakov Sobolev wrote:
> Thanks Robert.
>
> How can I see what process launches my kube-system pods?
>
I was asking what deployment tool you used to create the cluster. These tools
are opinionated about how the control plane is managed and configured.
> Our cluster is running on VMs.
It seems newer versions fixed a leak. I don't know which component is
leaking for you, though.
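One way to narrow that down (assuming Heapster or metrics-server is
installed, so kubectl top works):

kubectl top pods -n kube-system

or, directly on the master, something like:

ps -o pid,rss,comm -C kube-apiserver,kube-controller-manager,kube-scheduler

to watch the resident set size of the individual components over time.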
But, for example, here is one that has been fixed in a newer version:
https://github.com/kubernetes/kubernetes/pull/65339
See the changelog for all of the 1.10.x patch releases:
Thanks Robert.
How can I see what process launches my kube-system pods?
Our cluster is running on VMs.
kubelet starts all of the components.
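If they are static pods, the kubelet launches them from manifest files on the
master's disk (the directory is the kubelet's --pod-manifest-path; the path
below is the common default for kubeadm-style setups):

ls /etc/kubernetes/manifests

Static pods also appear in the API as mirror pods carrying the
kubernetes.io/config.mirror annotation, which is another way to tell them
apart from self-hosted ones.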
On Friday, September 14, 2018 at 2:28:10 PM UTC-4, Yakov Sobolev wrote:
> With the master set to unschedulable, do you know what happens if one of
> the pods already running on the master goes down?
How are the master components running? Some deployments use static pods on
the master, which means that they can never be relocated to a different
machine. If you are running them as pods in the cluster (i.e. via
self-hosting) then your deployment tool has likely put in the correct
tolerations so that they can be scheduled back onto the master.
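For reference, the toleration such self-hosted control-plane pods typically
carry looks like this (a sketch in pod-spec YAML):

tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule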
With the master set to unschedulable, do you know what happens if one of
the pods already running on the master goes down?
Have you seen those system-type pods come back up?
It is OK to run these pods on nodes and not just the master; reference
Our master nodes only run the following pods:
calico-node
kube-apiserver
kube-controller-manager
kube-dns
kube-proxy
kube-scheduler
On Friday, September 14, 2018 at 1:42:22 PM UTC-4, Yakov Sobolev wrote:
> We are running Kubernetes 1.10.2 and we noticed a memory leak on the master
> node. Is it a known issue? What is the remedy?
Found it:
kubectl taint nodes k8smaster03
node-role.kubernetes.io/master="":NoSchedule
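For completeness, the two directions of that command (the node name is a
placeholder):

# add the taint, keeping normal workloads off the node:
kubectl taint nodes <node-name> node-role.kubernetes.io/master="":NoSchedule
# remove it again -- note the trailing dash, with no value or effect:
kubectl taint nodes <node-name> node-role.kubernetes.io/master-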
On Friday, 14 September 2018 18:02:03 UTC+1, Gabriel Sousa wrote:
>
> hello
>
> how do I revert the command "kubectl taint nodes --all
> node-role.kubernetes.io/master-"?
>
> tried:
> kubectl taint nodes node1 key=node-role.kubernetes.io/master:NoSchedule
We are running Kubernetes 1.10.2 and we noticed a memory leak on the master
node. Is it a known issue? What is the remedy?
We are running several clusters on VMs and confirmed the memory leak on all of
them. Only out-of-the-box components are running on the master nodes.
hello
how do I revert the command "kubectl taint nodes --all
node-role.kubernetes.io/master-"?
tried:
kubectl taint nodes node1 key=node-role.kubernetes.io/master:NoSchedule
but it won't work,
kubernetes version 1.11.3
We do not expose that as a parameter today. We can discuss the
options here, but there's no short answer. Can you talk about what
you're doing that needs so many node ports?
On Fri, Sep 14, 2018 at 8:27 AM Phạm Huy Hoàng wrote:
>
> For our use-case, we need to access a lot of services via NodePort.
For our use-case, we need to access a lot of services via NodePort. By
default, the NodePort range is 30000-32767. With *kubeadm*, I can set the
port range via the *--service-node-port-range* flag.
We are using a Google Kubernetes Engine (GKE) cluster. How can I set the port
range for a GKE cluster?
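For context, on a self-managed control plane this is a kube-apiserver flag,
e.g.:

kube-apiserver --service-node-port-range=30000-40000 ...

but on GKE the apiserver flags are managed for you, and (as noted in the
reply above) this one is not exposed today.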
hi guys,
trying to figure out how to run a spark job that talks to my hbase.
I do not want to bake/hardcode the hbase config into the driver or executor
images. I want the configuration to be available via a configmap.
Can anybody please help, I am new to this.
Thanks
I believe the fluentd configuration used by the Stackdriver Logging addon is
here:
https://github.com/kubernetes/kubernetes/blob/c04fe8c27c9053a37face46abfebc45b9ac23dd7/cluster/addons/fluentd-gcp/fluentd-gcp-configmap.yaml#L108-L118,
which I think uses this plugin