Re: [kubernetes-users] Re: gcePersistentDisks limit per GKE cluster

2016-09-26 Thread 'David Aronchick' via Kubernetes user discussion and Q&A
If they're super low traffic, why not create a few large PDs and carve them up with PVCs?
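
A minimal sketch of one way to read that suggestion: a single pre-created PD exposed as one PersistentVolume, with each site mounting its own subdirectory. The disk name, sizes, and the "site-042" path below are all made up for illustration.

    # Sketch only: one large pre-created GCE PD shared by many low-traffic sites.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: wordpress-pool
    spec:
      capacity:
        storage: 500Gi
      accessModes: ["ReadWriteOnce"]
      gcePersistentDisk:
        pdName: wordpress-pool   # created beforehand with gcloud
        fsType: ext4
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wordpress-pool-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Gi

Each site's pod would then mount its own directory from the shared claim, e.g. a volumeMount with mountPath: /var/www/html and subPath: site-042, backed by a volume pointing at claimName: wordpress-pool-claim. One caveat: a gcePersistentDisk is ReadWriteOnce, so every pod sharing a given claim must schedule onto the same node.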

Re: [kubernetes-users] Re: gcePersistentDisks limit per GKE cluster

2016-09-26 Thread Quinn Comendant
On Mon, 26 Sep 2016 21:48:41 -0700, 'Tim Hockin' via Kubernetes user discussion and Q&A wrote: > The newer limit is the max PDs per machine: we support 16 PDs per machine. Huh! Well, there go my plans. Perhaps you can advise me then: I have 200 very-low-traffic WordPress sites I'd like to migrate to a GKE cluster

Re: [kubernetes-users] Re: gcePersistentDisks limit per GKE cluster

2016-09-26 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
The newer limit is the max PDs per machine: we support 16 PDs per machine. To fix this we need to add a new schedulable resource for PD connections, and that work is still in progress.
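
As a quick sanity check against that limit, counting the disks currently attached to a node should work along these lines (the instance name and zone are hypothetical; assumes the gcloud CLI):

    # Lists attached disk device names (boot disk included), one per line,
    # then counts them.
    gcloud compute instances describe gke-mycluster-default-pool-node-1 \
      --zone us-central1-a \
      --format="value(disks[].deviceName)" | tr ';' '\n' | wc -l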

Re: [kubernetes-users] Add custom nameserver KubeDNSv17

2016-09-26 Thread 'Tim Hockin' via Kubernetes user discussion and Q&A
Dnsmasq does have a bunch of cool flags, but we have not really qualified them yet. Use at your own risk, but they look promising.

Re: [kubernetes-users] Add custom nameserver KubeDNSv17

2016-09-26 Thread Cole Mickens
Is it also an option to specify the upstream servers directly in the dnsmasq command line inside the kube-dns RC/Deployment? As in, editing this: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in#L86 to include a `--server` flag (possibly with
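
For illustration, the dnsmasq container's args in that manifest could be extended roughly like this. The internal zone corp.example.com and the 10.10.0.2 resolver are placeholder values; the other flags are roughly the stock ones from that file.

    # Excerpt (sketch) of the dnsmasq container in the kube-dns manifest:
    - name: dnsmasq
      image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
      args:
      - --cache-size=1000
      - --no-resolv
      - --server=127.0.0.1#10053              # cluster names -> kube-dns
      - --server=/corp.example.com/10.10.0.2  # hypothetical internal zone via VPN

dnsmasq's --server=/domain/ip form forwards only that domain to the given resolver, leaving everything else on the defaults.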

Re: [kubernetes-users] Add custom nameserver KubeDNSv17

2016-09-26 Thread Roberto
Hi, we want pods to inherit the DNS configuration from the host, but when we deploy a new pod it only has the internal DNS information, i.e. nameserver 10.111.x.x (the kube-dns IP) and search project.svc.cluster.local. If we add the --resolv-conf flag, we can add our own nameserver and the kube-dns
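
A sketch of that approach — the file path, the 10.10.0.2 resolver, and corp.example.com are all hypothetical:

    # /etc/resolv.conf.k8s on each node -- the upstream config the kubelet
    # uses when composing pod DNS settings:
    nameserver 10.10.0.2          # internal DNS reachable over the VPN
    search corp.example.com

    # kubelet flag, set wherever node kubelet args are managed:
    # --resolv-conf=/etc/resolv.conf.k8s

How this interacts with each pod's dnsPolicy varies (ClusterFirst pods still get the kube-dns IP injected first), so it's worth verifying the result from inside a test pod, e.g. kubectl exec my-test-pod -- cat /etc/resolv.conf.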

[kubernetes-users] Add custom nameserver KubeDNSv17

2016-09-26 Thread Roberto
Hi, we have an issue in Kubernetes and we would really appreciate it if you can help us. If I'm in the wrong section, please let me know. We have a Kubernetes cluster deployed in GCE. We have created a VPN to our internal network and it's working OK. Now, we want all the pods to be able to use a custom DNS

Re: [kubernetes-users] How to develop lxd runtime for kubernetes

2016-09-26 Thread 'Vishnu Kannan' via Kubernetes user discussion and Q&A
Related Kubernetes issue here.

Re: [kubernetes-users] How to develop lxd runtime for kubernetes

2016-09-26 Thread Jonathan Boulle
The future of integrating new container runtimes is the Container Runtime Interface, so you should start by looking at the proposal and (early version of the) API:
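
To give a flavor of what a runtime integration has to implement, here is a heavily abridged, illustrative sketch in the spirit of the early CRI protobuf. This is not the actual file — see the proposal and the v1alpha1 runtime API in the kubernetes repo for the real definitions.

    // Illustrative sketch only -- not the real api.proto.
    syntax = "proto3";
    package runtime;

    // A runtime shim (e.g. for LXD) would have to serve sandbox and
    // container lifecycle calls:
    service RuntimeService {
      rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
      rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
      rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
    }

    // ...and image management:
    service ImageService {
      rpc PullImage(PullImageRequest) returns (PullImageResponse) {}
    }

    // Placeholder messages so the sketch compiles; the real ones carry
    // pod/container configs, IDs, and statuses.
    message RunPodSandboxRequest  { }
    message RunPodSandboxResponse { string pod_sandbox_id = 1; }
    message CreateContainerRequest  { }
    message CreateContainerResponse { string container_id = 1; }
    message StartContainerRequest  { string container_id = 1; }
    message StartContainerResponse { }
    message PullImageRequest  { string image = 1; }
    message PullImageResponse { }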

[kubernetes-users] How to develop lxd runtime for kubernetes

2016-09-26 Thread Dilip Renkila
Hi all, as LXD is a promising technology, delivering full OS-level containers rather than process containers like Docker, I want to know what specifications Kubernetes requires in order to integrate LXD as a runtime, or where I can find them. Best Regards, Dilip Renkila

[kubernetes-users] Re: Publicly proposing SIG-CLI

2016-09-26 Thread 'Phillip Wittrock' via Kubernetes user discussion and Q&A
I'll start the ball rolling on getting the resources for a SIG set up and some proposed meeting times. On Fri, Sep 23, 2016 at 6:17 AM, Clayton Coleman wrote: > Yes, today API machinery owns API UX. Since the API has to work well for > multiple clients, it needs to take

[kubernetes-users] Network incident; workers fail to return without reboot; pods left in odd state

2016-09-26 Thread Matt Hughes
Running k8s 1.3 on CoreOS. I recently experienced an incident in my cluster where my workers lost communication with the master. Masters were still up and able to communicate with the workers. Masters started draining nodes and marking them as NotReady. I addressed the worker-to-master
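
When untangling an incident like this, the usual first step is to compare what the masters believe with what the workers report, e.g. (node name hypothetical):

    kubectl get nodes                          # which workers are NotReady
    kubectl describe node worker-node-1        # Conditions and recent events
    kubectl get events --all-namespaces        # drain/eviction activity
    kubectl get pods --all-namespaces -o wide  # where pods landed or got stuck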