Re: [kubernetes-users] Pod name parameter while creating deployment in kubernetes

2017-08-02 Thread
Use labels. That's (part of) what they are for :) On Wed, Aug 2, 2017 at 11:55 PM, Eswari wrote: > Whenever I try to go to pod, need to give the complete pod name everytime. > > So, I am searching for the command to save the time > > On Thursday, August 3, 2017 at 12:17:13 PM UTC+5:30, Tim Hocki
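To make "use labels" concrete for this thread: `kubectl run` labels the pods it creates with `run=<name>`, so you can address them by label instead of typing the full generated pod name. A minimal sketch, using the `testdeploy` name from the original question below:

```shell
# List only the pods belonging to the testdeploy deployment
kubectl get pods -l run=testdeploy

# Exec into the first matching pod without typing its generated name
POD=$(kubectl get pods -l run=testdeploy -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- /bin/bash
```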

Re: [kubernetes-users] Pod name parameter while creating deployment in kubernetes

2017-08-02 Thread Eswari
Whenever I try to go to a pod, I need to give the complete pod name every time. So, I am searching for a command to save time. On Thursday, August 3, 2017 at 12:17:13 PM UTC+5:30, Tim Hockin wrote: > > A deployment creates a replicaset which creates your pod. You might > have N pods running (r

Re: [kubernetes-users] GCP Internal Load Balancer through VPN

2017-08-02 Thread
No external bug, and I can't comment on dates. Sorry. On Wed, Aug 2, 2017 at 3:04 PM, Paul Mazzuca wrote: > Awesome. Looking forward to it. Is this a feature that can be tracked? or do > you have a rough estimate as to when it will be pushed to production? > > On Wed, Aug 2, 2017 at 2:35 PM, 'Ti

Re: [kubernetes-users] Pod name parameter while creating deployment in kubernetes

2017-08-02 Thread
A deployment creates a replicaset which creates your pod. You might have N pods running (replicas) and you might have N replicasets (during an update, for example). The name is insignificant. What problem are you really having? On Wed, Aug 2, 2017 at 11:11 PM, Eswari wrote: > > Hi, > > When I

[kubernetes-users] Pod name parameter while creating deployment in kubernetes

2017-08-02 Thread Eswari
Hi, When I try to create deployment in kubectl commandline, it is giving some extended name to pod. Ex: *kubectl run testdeploy --image=imagename * output: deployment::: testdeploy pod testdeploy-3202566627-46j24 I don't like the pod name like this. Can I give pod name parame
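No direct answer to the naming question appears in this archive, but for completeness: pods created through a Deployment always get a generated suffix (replicaset hash plus a random pod ID). If a fixed pod name is a hard requirement, the usual alternative is a bare Pod, which keeps exactly the name you give it, at the cost of losing the Deployment's rescheduling and rolling updates. A sketch using the placeholder image name from the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testdeploy     # used verbatim; no generated suffix
spec:
  containers:
  - name: app
    image: imagename   # placeholder from the original question
```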

[kubernetes-users] Re: Need to run multiple commands

2017-08-02 Thread Eswari
Thanks Andy.. It's working now On Wednesday, August 2, 2017 at 6:47:08 PM UTC+5:30, Andy Goldstein wrote: > > (kubernetes-dev to bcc) > > You can do it like this: > > kubectl run --attach testnew --image=imagename > --requests=cpu=200m --command -- /bin/bash -c "service nginx start && > whi

Re: [kubernetes-users] How to move a POD from node1 to node2?

2017-08-02 Thread
You can't move a pod. What you want is a Deployment or ReplicaSet. On Wed, Aug 2, 2017 at 10:11 PM, wrote: > > here is my yml file: > -- > kind: Pod > apiVersion: v1 > metadata: > name: pod-name > spec: > containers: > - name: con-name > image: ubuntu > resta

[kubernetes-users] How to move a POD from node1 to node2?

2017-08-02 Thread shaybery
here is my yml file: -- kind: Pod apiVersion: v1 metadata: name: pod-name spec: containers: - name: con-name image: ubuntu restartPolicy: "Never" nodeSelector: kubernetes.io/hostname: node1 -- kubectl create -f my.yml But now I wa
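Following Tim's suggestion in the reply above, the Pod spec from this question can be wrapped in a Deployment; changing the nodeSelector and re-applying then lets the controller recreate the pod on the other node. A sketch (apps/v1beta1 was the Deployment API version in this era; note that Deployments do not allow restartPolicy "Never", so it is dropped):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pod-name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pod-name
    spec:
      containers:
      - name: con-name
        image: ubuntu
      nodeSelector:
        kubernetes.io/hostname: node2   # change node1 -> node2, then kubectl apply -f my.yml
```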

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread Rodrigo Campos
Thanks a lot!! On Wednesday, August 2, 2017, 'David Oppenheimer' via Kubernetes user discussion and Q&A wrote: > > > On Wed, Aug 2, 2017 at 4:05 PM, Rodrigo Campos > wrote: > >> On Wednesday, August 2, 2017, 'David Oppenheimer' via Kubernetes user >> discussion and Q&A >> wrote:

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread
On Wed, Aug 2, 2017 at 4:05 PM, Rodrigo Campos wrote: > On Wednesday, August 2, 2017, 'David Oppenheimer' via Kubernetes user > discussion and Q&A wrote: > >> >> >> On Wed, Aug 2, 2017 at 11:44 AM, Rodrigo Campos >> wrote: >> >>> The burstable pod must be reserving something also, and that rese

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread Rodrigo Campos
On Wednesday, August 2, 2017, 'David Oppenheimer' via Kubernetes user discussion and Q&A wrote: > > > On Wed, Aug 2, 2017 at 11:44 AM, Rodrigo Campos > wrote: > >> The burstable pod must be reserving something also, and that reservation >> (not the limit when there is idle capacity) is making it

Re: [kubernetes-users] Re: Kubernetes Operational View v0.3.0 released: read-only system dashboard for multiple K8s clusters

2017-08-02 Thread Shrinand Javadekar
Awesome.. On Wed, Aug 2, 2017 at 1:07 PM, Henning Jacobs wrote: > I only tried it out with Kubernetes 1.6+, but give it a shot :-) > > 2017-08-02 22:04 GMT+02:00 : > >> On Monday, January 16, 2017 at 12:41:22 PM UTC-6, Henning Jacobs wrote: >> > Kubernetes Operational View gives you a read-only

Re: [kubernetes-users] GCP Internal Load Balancer through VPN

2017-08-02 Thread Paul Mazzuca
Awesome. Looking forward to it. Is this a feature that can be tracked? or do you have a rough estimate as to when it will be pushed to production? On Wed, Aug 2, 2017 at 2:35 PM, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: > Yes. Hang tight :) > > On Aug 2, 2017 1:51 PM, "Paul Ma

[kubernetes-users] replica H-A

2017-08-02 Thread Snd LP
replica with one external IP. Does this accomplish so that any external http/80 request to 192.168.0.155 gets shared among two containers that have the IPs 172.16.0.10 and 172.16.0.15? Thanks. kind: Service apiVersion: v1 metadata: name: nginx spec: selector: app: nginx ports: - pro
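For context, a complete version of the truncated Service above might look like the sketch below. A Service load-balances across whatever pods match its selector, so the individual pod IPs from the question never need to be referenced directly. How the external IP is attached (spec.externalIPs as shown here, versus a LoadBalancer) depends on the environment:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx        # traffic is spread across all pods carrying this label
  ports:
  - protocol: TCP
    port: 80          # port exposed on the Service / external IP
    targetPort: 80    # container port
  externalIPs:
  - 192.168.0.155     # the external IP from the question
```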

Re: [kubernetes-users] GCP Internal Load Balancer through VPN

2017-08-02 Thread
Yes. Hang tight :) On Aug 2, 2017 1:51 PM, "Paul Mazzuca" wrote: > Are there any plans to allow access to the IP address of the internal load > balancer for k8s through the Google cloud VPN? I am able to access the IP > from another computer instance, however not from my VPN connected network

[kubernetes-users] GCP Internal Load Balancer through VPN

2017-08-02 Thread Paul Mazzuca
Are there any plans to allow access to the IP address of the internal load balancer for k8s through the Google cloud VPN? I am able to access the IP from another computer instance, however not from my VPN connected network. -- You received this message because you are subscribed to the Google

Re: [kubernetes-users] Re: Kubernetes Operational View v0.3.0 released: read-only system dashboard for multiple K8s clusters

2017-08-02 Thread Henning Jacobs
I only tried it out with Kubernetes 1.6+, but give it a shot :-) 2017-08-02 22:04 GMT+02:00 : > On Monday, January 16, 2017 at 12:41:22 PM UTC-6, Henning Jacobs wrote: > > Kubernetes Operational View gives you a read-only system dashboard for > multiple K8s clusters. > > It's in an early stage, b

[kubernetes-users] Re: Kubernetes Operational View v0.3.0 released: read-only system dashboard for multiple K8s clusters

2017-08-02 Thread pakalas
On Monday, January 16, 2017 at 12:41:22 PM UTC-6, Henning Jacobs wrote: > Kubernetes Operational View gives you a read-only system dashboard for > multiple K8s clusters. > It's in an early stage, but I find it already pretty useful (running on a TV > screen next to my desk). It regularly polls on

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread
On Wed, Aug 2, 2017 at 11:44 AM, Rodrigo Campos wrote: > The burstable pod must be reserving something also, and that reservation > (not the limit when there is idle capacity) is making it impossible to > schedule more pods. > > IIRC, burstable or guaranteed is specially relevant when eviction ne

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread Rodrigo Campos
The burstable pod must be reserving something also, and that reservation (not the limit, when there is idle capacity) is making it impossible to schedule more pods. IIRC, burstable or guaranteed is especially relevant when eviction needs to be done (node running oom, inode exhaustion, etc.). But it

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread bahhoo
Yes, it is about the resources. The guaranteed pods are pending because the resources are used up. What I expect or want to have is that the pods that are not guaranteed get terminated and resources become available when I want to spin up more guaranteed pods.

Re: [kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread Rodrigo Campos
Afaik, guaranteed or burstable is about pod resource usage (CPU, memory, etc.), not about the number of pods. On Wednesday, August 2, 2017, wrote: > Hello, > > when I scale up the guaranteed pods in my cluster the best effort or the > burstable ones are not getting killed. The new guaranteed pods are se

[kubernetes-users] [QoS] Downscaling of burstable or best effort pods when guaranteed pods are scaled up

2017-08-02 Thread bahhoo
Hello, when I scale up the guaranteed pods in my cluster, the best effort or the burstable ones are not getting killed. The new guaranteed pods are set to "pending" until I manually scale down the other pods. Only then do they spin up. Is it possible to automate this process, so that the guaranteed
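For reference, the QoS class discussed in this thread is derived from the relationship between requests and limits in the container spec; a sketch of the two variants (values are illustrative):

```yaml
# Guaranteed: requests == limits for every resource in every container
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi
---
# Burstable: requests set lower than limits (or limits omitted)
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

Note that, as the replies in this thread imply, QoS class at this time only influenced kubelet eviction under node pressure; the scheduler did not preempt lower-QoS pods to make room for pending guaranteed ones.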

Re: [kubernetes-users] Replication controller pod "adoption"

2017-08-02 Thread Rodrigo Campos
On Wednesday, August 2, 2017, wrote: > Hi Rodrigo. Thanks for answering. > > Yes, you're right. I should not talk about the containers but pods.. let > me give another example to clarify. Suppose we have the following resources: > > POD > apiVersion: v1 > kind: Pod > metadata: >

Re: [kubernetes-users] Replication controller pod "adoption"

2017-08-02 Thread rgoncalves
Hi Rodrigo. Thanks for answering. Yes, you're right. I should not talk about the containers but pods.. let me give another example to clarify. Suppose we have the following resources: POD apiVersion: v1 kind: Pod metadata: labels: a: "1" name: sample-pod spec: container

[kubernetes-users] Configuring custom influxdb sink for heapster running in kube-system namespace on GKE

2017-08-02 Thread JITENDRA GANGWAR
I need to configure heapster to send Kubernetes cluster metrics to our custom InfluxDB server. For this I tried to edit the heapster deployment in the kube-system namespace, but after some time the deployment gets reverted to its original state. I am using GKE, master version is 1.5.7 and node version
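No answer appears in this archive, but the behaviour described is consistent with the GKE addon manager reconciling managed resources in kube-system back to their original state. A common workaround (a sketch under that assumption, not a GKE-documented procedure; the image tag and InfluxDB URL are placeholders) is to run a second heapster deployment in your own namespace, pointed at your InfluxDB via heapster's sink flag:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster-custom
  namespace: monitoring      # outside kube-system, so the addon manager ignores it
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: heapster-custom
    spec:
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster:v1.3.0   # placeholder tag
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://my-influxdb.example.com:8086   # placeholder URL
```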

Re: [kubernetes-users] Start the service at the time of deployment

2017-08-02 Thread Rodrigo Campos
On Wednesday, August 2, 2017, Eswari wrote: > Hello All, > > I tried this command to create deployment. > > kubectl run -it mydep --image=myk8/test:v1 bash --requests=cpu=200m > It seems that here you are specifying to run bash > After the deployment got succeeded, my service was in stopped st

Re: [kubernetes-users] Replication controller pod "adoption"

2017-08-02 Thread Rodrigo Campos
On Wednesday, August 2, 2017, wrote: > Hi all. > > According to this tutorial (https://github.com/kubernetes/examples/tree/master/staging/storage/redis), replication controllers will "adopt" > existing pods whose labels match the replication controller selector > labels. The "adoption" is only us

[kubernetes-users] Re: Need to run multiple commands

2017-08-02 Thread Andy Goldstein
(kubernetes-dev to bcc) You can do it like this: kubectl run --attach testnew --image=imagename --requests=cpu=200m --command -- /bin/bash -c "service nginx start && while true; do echo bye; sleep 10;done" I replaced -it with --attach because in this example you aren't passing anything in via s
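The same thing can be expressed in a pod spec, which avoids the quoting subtleties of `kubectl run`; a sketch of the container section, keeping the placeholder image name from the thread:

```yaml
containers:
- name: testnew
  image: imagename          # placeholder from the thread
  command: ["/bin/bash", "-c"]
  args: ["service nginx start && while true; do echo bye; sleep 10; done"]
  resources:
    requests:
      cpu: 200m
```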

[kubernetes-users] Re: is it possible to map tosca taml to kubernetes?

2017-08-02 Thread hoangphuocbk2 . 07
On Monday, May 8, 2017 at 7:35:38 PM UTC+9, PCQ wrote: > Hello, > have you found a solution yet? I am currently investigating this area and will > create a converter. Can you share your solution? Thanks

[kubernetes-users] Replication controller pod "adoption"

2017-08-02 Thread rgoncalves
Hi all. According to this tutorial (https://github.com/kubernetes/examples/tree/master/staging/storage/redis), replication controllers will "adopt" existing pods whose labels match the replication controller selector labels. The "adoption" is only used to count existing PODs. If it's required to
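To make the label-matching concrete: a replication controller adopts any running pod whose labels satisfy its selector and counts it toward `replicas`, instead of creating a new pod. A minimal sketch using the `a: "1"` label that appears in the follow-up message of this thread (the image is a placeholder):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: sample-rc
spec:
  replicas: 1
  selector:
    a: "1"              # matches the existing sample-pod, so no new pod is created
  template:
    metadata:
      labels:
        a: "1"
    spec:
      containers:
      - name: sample
        image: ubuntu   # placeholder
```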

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Vinoth Narasimhan
Hi EJ, The eviction on the kubelet is meant for the node, right? Is the same available for the pods? On Wednesday, August 2, 2017 at 1:16:55 PM UTC+5:30, EJ Campbell wrote: > > Specifically: > Which would give you a chance to snapshot your process. > > On Wednesday, Au

Re: [kubernetes-users] Re: Detaching cinder volumes

2017-08-02 Thread rgoncalves
Thanks Michelle. Done: https://github.com/kubernetes/kubernetes/issues/50004

[kubernetes-users] Need to run multiple commands

2017-08-02 Thread Eswari
Hello All, *kubectl run nginx --image=nginx --command -- ... * *kubectl run -it testnew --image=imagename --command -- "/bin/bash","-c","service nginx start && while true; do echo bye; sleep 10;done" --requests=cpu=200m* I have to run 2 commands at a time: 1. bash 2. service nginx start how ca

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Vinoth Narasimhan
Thanks EJ. We will try it. On Wednesday, August 2, 2017 at 1:15:13 PM UTC+5:30, EJ Campbell wrote: > > Perhaps one of the options here could be used? > Configure Out Of Resource Handling

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Matthias Rampke
No, the OOM killer is a last-resort action by the kernel because it is completely out of usable memory (for this container or globally). At this point, nothing and no one can interfere anymore, because to do so they would need memory that is not available until after the OOM killer. /MR On Wed, A

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread
Perhaps one of the options here could be used? Configure Out Of Resource Handling -EJ On Wednesday, August 2, 2017, 12:41:16 AM PDT, Vinoth Narasimhan wrote: Thanks Matthias for your reply.

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Vinoth Narasimhan
Thanks Matthias for your reply. Can we add a "PreStop" hook on the POD before it goes down, to dump the heap to the emptyDir? Will this hook execute before it crashes? On Wednesday, August 2, 2017 at 1:02:55 PM UTC+5:30, Matthias Rampke wrote: > > Raise your Kubernetes memory limit, or lowe
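As Matthias points out elsewhere in this thread, an OOM kill is a SIGKILL from the kernel and cannot be intercepted, so a preStop hook only fires on graceful pod terminations (deletion, scale-down), not on crashes. For completeness, a sketch of what the hook would look like; the image, dump path, and the assumption that java runs as PID 1 are all illustrative:

```yaml
containers:
- name: app
  image: my-java-app        # placeholder
  lifecycle:
    preStop:
      exec:
        # runs only on graceful termination, NOT on an OOM kill
        command: ["/bin/sh", "-c", "jmap -dump:format=b,file=/dumps/heap.hprof 1 || true"]
  volumeMounts:
  - name: dumps
    mountPath: /dumps
volumes:
- name: dumps
  emptyDir: {}
```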

Re: [kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Matthias Rampke
Raise your Kubernetes memory limit, or lower the JVM heap size. If the container gets OOM-killed there is nothing it can do to still dump something. By lowering the heap size you may be able to provoke an OutOfMemoryException within the JVM before it gets killed; with the right flags set it will do
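The "right flags" referred to here are the standard HotSpot heap-dump options; combined with a volume for the dump path, the container start command could look like this sketch (paths and sizes are illustrative):

```shell
# Keep -Xmx comfortably below the container memory limit, so the JVM
# throws OutOfMemoryError (and writes the dump) before the kernel OOM-kills it.
java -Xmx256m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/dumps \
     -jar app.jar
```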

[kubernetes-users] Is there a way to dump Crash data on the Crashing POD before it dies.

2017-08-02 Thread Vinoth Narasimhan
Hi, We are using Kubernetes in GKE, and we run a Java-based container in k8s. Last week we had an issue with one of our applications that kept restarting. While debugging we found that it was killed by OOM. Is there a way in k8s to dump all the JVM memory, threads, heap and all other standard debugging

[kubernetes-users] Start the service at the time of deployment

2017-08-02 Thread Eswari
Hello All, I tried this command to create a deployment. kubectl run -it mydep --image=myk8/test:v1 bash --requests=cpu=200m After the deployment succeeded, my service was in a stopped state. So, I went to my pod and started the service. Is there any way to start the service at the time of cre