elle and others here are actively working on a number of
storage expansions, and your input helps prioritize.
thanks
Tim
> On Fri, Mar 31, 2017 at 1:35 AM 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
You can't use PVClaim and Deployment together. You will get a single
claim to a single EBS volume. Unfortunately this is a weird
intersection of subsystems.
PVClaim says "This data has identity, and I will manage its lifetime",
and Deployment says "These pods have no identity, and you should
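For data with identity, the usual fit is a StatefulSet with volumeClaimTemplates, which stamps out one claim per replica instead of sharing a single claim. A minimal sketch (names, image, and sizes here are illustrative; the apiVersion may differ on older clusters):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web        # headless Service governing the set
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /var/www
  volumeClaimTemplates:   # one PVC per replica: data-web-0, data-web-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```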
Wild guess - something is looking at underlying OS or hardware info (e.g.
number of processors) and scaling memory or threads based on that.
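One common pattern for this in the JVM case: set an explicit heap size and a matching memory limit, rather than letting the runtime size itself from the node's RAM. A sketch, with a hypothetical image name and illustrative sizes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
  - name: app
    image: my-java-app          # hypothetical image
    # Pin the heap explicitly so it doesn't scale with host memory.
    command: ["java", "-Xmx256m", "-jar", "/app.jar"]
    resources:
      requests:
        memory: 512Mi
      limits:
        memory: 512Mi           # heap + JVM overhead must fit under this
```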
On Mar 28, 2017 9:39 AM, "bg" wrote:
> I have an image that is basic Java application. I'm trying to minimize the
> amount of memory
> <kubernetes-users@googlegroups.com> wrote:
>
> There may be another way to get at what you're looking to do - why do you
> need the pod IP?
> On Mon, Mar 27, 2017 at 07:35 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
On Sat, Mar 25, 2017 at 6:31 PM, wrote:
> Hi All,
>
> I have currently setup 3 Kubernetes master with KubeDNS running. I have 5
> Minions running with Kubeproxy installed.
>
> When I run nslookup as below inside container, it returns the ClusterIP of
> the service.
This should be part of docs - if you used kubeadm, please file a bug against it.
On Sun, Mar 26, 2017 at 10:03 PM, wrote:
> On Sunday, March 26, 2017 at 7:09:26 PM UTC+8, ede...@unity3d.com wrote:
>> On Saturday, March 25, 2017 at 7:29:25 AM UTC+8, Tim Hockin wrote:
>> >
Can you file this as a github issue and include as much detail as you
can? Also can you run `gcloud compute instances describe` for a few
of your nodes and include the `tags` block ?
On Fri, Mar 24, 2017 at 3:56 AM, wrote:
> Hi, we use kubeadm to deploy k8s(1.5.5) on GCE.
The trick is that every network is unique, so you have to fill in the
blanks on how to get traffic into your cluster from the outside. That
could be through a load-balancer (as in GCE or AWS) or it could be
through node IPs or something else. What you have will sort of limit
what you can do.
This is a network routing/setup/NAT issue outside the bounds of
Kubernetes proper. There's no way Kubernetes can know how your
network looks or how to configure every machine to have a new default
route, etc.
On Wed, Mar 8, 2017 at 10:44 AM, bg wrote:
> I have a 3 node
You need to have a controller that watches Services and configures your own
load-balancer. We can't know what kind of network architecture or
equipment you have.
On Mar 5, 2017 7:16 PM, "Qian Zhang" wrote:
> Hi,
>
> I have set up an on-prem K8s cluster in my own
There isn't a clean way to express what you want today. There are some
ideas about being able to express local storage as volumes, but that work
is a long pipeline for what feels like a simple request.
We already have an idea of "medium" in emptyDir. What if we extended
that? The question
>> shows in kubectl pod describe? What about
>> an example for using volumes (that maybe use a public docker
>> image), to understand how to use them first. Also, you may
s...@gmail.com>
>> wrote:
>>> I used the emptyDir...get the same error
>>>
>>> apiVersion: v1
>>> kind: PersistentVolumeClaim
>>> metadata:
>>>   name: web-pv-claim
>>>   labels:
>>> ...
>>>   requests:
>>>     storage: 10Gi
>>> ---
>>> spec:
>>>   containers:
Not enough information: What cloud environment? What does the PV
claim object look like? What does "doesn't load" mean?
On Tue, Feb 28, 2017 at 5:14 PM, Montassar Dridi
wrote:
> Hello!!
> The Dockerfile for my web application image, that I deployed within
>
What actually happens is that we use iptables to distinguish which LB
a packet came through, and do a local NAT to a pod backend for that
Service.
On Tue, Feb 21, 2017 at 5:46 AM, Rodrigo Campos wrote:
> I'd be surprised if it doesn't do port remapping on GKE.
> Service type
I think it is looking at the context's namespace and the embedded
namespace. If the two don't match, it has a grumble. Not a great UX.
On Tue, Feb 21, 2017 at 3:57 AM, wrote:
> Odd one this ?
>
> Previously the following worked:
>
If you need to read and write AT THE SAME TIME, you need a PV that
supports it, which EBS and GCE (and every block device) do not.
If you want to be able to use the data in serial steps of a pipeline,
it should be fine, since it is only mounted in one mode at a given
point in time.
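The access mode is declared on the claim; for a block device the serial-pipeline case stays within ReadWriteOnce. A sketch with an illustrative name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipeline-data
spec:
  accessModes:
  - ReadWriteOnce     # one node at a time; fine for serial pipeline steps,
                      # not for concurrent readers and writers
  resources:
    requests:
      storage: 10Gi
```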
On Tue, Feb
0.0.0.0 is valid as a bind-to address, meaning "any IP", but as a
connect-to address you probably want 'localhost'
On Mon, Feb 20, 2017 at 2:11 PM, Matthias Rampke wrote:
> I see three containers in this.
>
> Yes, 0.0.0.0: should work if the graphql container binds to
The new field is called 'envFrom'
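A minimal sketch of the field, pulling every key of a ConfigMap in as environment variables (the ConfigMap name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["env"]          # prints all env vars, including the imported keys
    envFrom:
    - configMapRef:
        name: app-config      # hypothetical ConfigMap
```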
On Tue, Feb 7, 2017 at 7:42 PM, wrote:
> On Wednesday, February 8, 2017 at 3:05:02 AM UTC+5:30, Rodrigo Campos wrote:
>> Cool. So is it working now? :)
>>
>> On Tuesday, February 7, 2017, Vinoth Narasimhan wrote:
This would be appropriate as an annotation, to start. PRs are welcome
if you have time, if not can you please file a bug against
https://github.com/kubernetes/ingress ?
On Mon, Feb 6, 2017 at 12:19 AM, Itamar O wrote:
> Hi,
>
> Using GCE Ingress controller [1] on GKE, it
GKE does not currently support NetworkPolicy beta. We're looking at
how to best support it as it moves to GA.
On Thu, Jan 26, 2017 at 9:36 AM, wrote:
> Hello,
>
> What network plugin does GKE use? In my tests, the Namespace has
> `net.beta.kubernetes.io/network-policy` annotation
Concretely the "tweak a sysctl" thing leaves machines that are
"dirty". Once you allow any users to do this, the machines become
less useful for anyone else who doesn't specifically tolerate that
tweak. Almost every sysctl represents a tradeoff. Optimize for
low-latency network? Pay higher CPU
You must spec hostname and subdomain. subdomain must be the name of a
headless service. That way the pod's notion of hostname will match
the DNS name assigned.
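Sketched out, with illustrative names: the pod's subdomain field names a headless Service, and DNS then resolves web-0.default-subdomain.<namespace>.svc.cluster.local to the pod.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-subdomain
spec:
  clusterIP: None             # headless
  selector:
    app: web
---
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  hostname: web-0
  subdomain: default-subdomain   # must match the headless Service name
  containers:
  - name: web
    image: nginx
```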
On Wed, Jan 18, 2017 at 2:24 AM, yazgoo wrote:
> Thanks,
>
> I've added:
>
> apiVersion:
Its hostname doesn't work unless you follow the section "A Records and
hostname based on Pod’s hostname and subdomain fields" in
https://kubernetes.io/docs/admin/dns/
On Wed, Jan 18, 2017 at 12:24 AM, yazgoo wrote:
> Hi,
>
> I've added,
>
> apiVersion: v1
> kind:
We don't do DNS for pods except for StatefulSets because it can change
rapidly, and DNS is just no good at that. You can set up a Service to
select all of the workers, and if you specify "None" as `clusterIP`
you won't get a VIP, just a bunch of A records.
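For example (selector and port are illustrative), a headless Service over the workers looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: workers
spec:
  clusterIP: None       # no VIP; DNS returns A records for each ready pod
  selector:
    app: worker         # assumed label on the worker pods
  ports:
  - port: 80
```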
On Tue, Jan 17, 2017 at 8:49 AM, yazgoo
Unfortunately that is the only real answer today, as far as I know.
We do not have an egress NAT.
On Thu, Jan 12, 2017 at 2:47 PM, wrote:
> Hi, we have to access some resource that uses an IP whitelist (plus
> authentication and SSL) in real time.
>
> So we need that
https://github.com/kubernetes/ingress/issues/112
On Fri, Jan 6, 2017 at 4:59 PM, Tim Hockin wrote:
> On Fri, Jan 6, 2017 at 3:32 PM, 'Mark Betz' via Kubernetes user
> discussion and Q wrote:
>> Ha, ok fair enough ...
>>
>>> The last part of
Things to check - are all of your nodes healthy? Is kube-proxy up and
running on each of them (kubectl get pods -n kube-system) ?
On Fri, Jan 6, 2017 at 4:14 PM, Tanner Bruce
wrote:
> Hi,
>
> I'm running kubernetes, on gcloud and have a service exposed with a load
>
On Fri, Jan 6, 2017 at 3:32 PM, 'Mark Betz' via Kubernetes user
discussion and Q wrote:
> Ha, ok fair enough ...
>
>> The last part of this reads as "I know I'm not
>> supposed to have an instance belong to more than one load balanced
>> instance
>> group, so I
I am not sure I understand
On Fri, Jan 6, 2017 at 11:38 AM, 'Mark Betz' via Kubernetes user
discussion and Q wrote:
> Say I have a cluster with two services: one is an http service that I want
> to expose to the world, and the other is a thrift service that I
Ahh, you want to start with a clone of the data, not an empty volume.
Why not use something like git-sync to pull the data down from some
canonical source?
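A rough sketch of that shape: a git-sync sidecar keeps a shared emptyDir populated from the canonical repo (the image tag, env var names, and repo URL below are illustrative and vary by git-sync version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-data
spec:
  volumes:
  - name: data
    emptyDir: {}
  containers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.1.0   # assumed tag
    env:
    - name: GITSYNC_REPO                # env names differ in older versions
      value: https://github.com/example/content.git   # hypothetical repo
    - name: GITSYNC_ROOT
      value: /data
    volumeMounts:
    - name: data
      mountPath: /data
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
```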
On Thu, Jan 5, 2017 at 11:33 PM, Montassar Dridi
wrote:
> each new pod gets its own persistent volume copy/clone
On Thu, Jan 5, 2017 at 9:52 PM, Montassar Dridi
wrote:
>
>
> On Thursday, January 5, 2017 at 11:17:26 PM UTC-5, Tim Hockin wrote:
>>
>> On Thu, Jan 5, 2017 at 5:24 PM, Montassar Dridi
>> wrote:
>> > Hi Tim,
>> >
>> > I'm trying to do something
On Thu, Jan 5, 2017 at 5:24 PM, Montassar Dridi
wrote:
> Hi Tim,
>
> I'm trying to do something like this example
> https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd
> I have a java web application and MYSQL database running within
Can you explain what you're trying to achieve?
Fundamentally, persistent volumes and replication are at odds with
each other. Replication implies fungibility and "all replicas are
identical". Persistent volumes implies "the data matters and is
potentially different".
Now, I can think of a
you can email me directly - thockin@google
On Wed, Jan 4, 2017 at 6:44 PM, Gil Michlin wrote:
> Hi,
>
> There is a critical bug in the GKE permissions system, where should I open a
> bug on it.
>
> Gil
>
> --
> You received this message because you are subscribed to the
For now, there is no way to signal within a pod. We'd like to get to
shared PID namespace, but there's some work to do still
On Wed, Dec 21, 2016 at 2:09 AM, Paul Ingles wrote:
> Hi all,
>
> We run a lot of infrastructure in AWS, make heavy use of RDS and rely on
> both
rsion: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7",
> GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean",
> BuildDate:"2016-12-10T04:43:42Z", GoVersion:"go1.6.3", Compiler:"gc"
Ingress objects represent Google Cloud Load Balancer instances, which
can handle SSL termination *and* act as load-balancers.
On Tue, Dec 20, 2016 at 2:17 PM, wrote:
> Hi Folks,
>
> A noob question -
>
> What is the recommended way to expose an app via LB with SSL
No promises, but it doesn't seem unreasonable to me...
On Thu, Dec 15, 2016 at 1:30 PM, Adam Daughterson
<adam.daughter...@gmail.com> wrote:
> Will do, thanks for the quick response!
>
> On Thu, Dec 15, 2016 at 2:10 PM, 'Tim Hockin' via Kubernetes user discussion
> and Q
it.
>
> Thanks!
>
> On Wed, Dec 14, 2016 at 11:46 AM, 'Tim Hockin' via Kubernetes user
> discussion and Q <kubernetes-users@googlegroups.com> wrote:
>>
>> No, the probes look specifically for 200s, I think. Is there a reason
>> you can't return 200?
We're working on a proposal to mitigate this short-term.
On Thu, Dec 15, 2016 at 8:11 AM, Giovanni Tirloni wrote:
> I would start by reviewing the eviction policy to ensure thresholds
> aren't too low.
>
> This article has more information about best practices and
>
I would look at the network config, the flags on the master and kubelets,
and the existing namespace usages
On Dec 11, 2016 8:21 PM, "Bruno Bronosky" wrote:
> If you came into a new company with a production kubernetes cluster but
> they knew nothing about it (the
On Dec 11, 2016 12:58 PM, wrote:
Hi Tim,
Thanks for your answer and sorry for not replying before, I didn't realize
I should check the group site for answers.
Your solution for exposing the server on given external IP worked,
specifically setting hostPort in pod's
Ingress currently assumes ports 80 and 443
On Dec 10, 2016 8:54 PM, "Paolo Mainardi" wrote:
> Hello everyone!
> What i want to achieve is something like this:
> https://github.com/lwolf/kubernetes-gitlab/blob/41dfc87e618ca009d8b6588f3d866a
>
I am not sure that is true for GKE - where the whole node config is
blown away on node upgrade.
We are currently considering options for supporting NetworkPolicy on
GKE, but we don't have a finished plan just yet.
On Mon, Nov 28, 2016 at 3:56 PM, Christopher Liljenstolpe
wrote:
What you're asking for isn't really well supported. The problem is
that the source IP for your client is the VM's IP, and if that pod
should ever get moved, that IP will change. Kubernetes Services are
designed to avoid that, but they can't easily handle client IP.
If you really want this
To make PV provisioning work you need a valid StorageClass
(http://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses)
and the storage class admission controller
(http://kubernetes.io/docs/admin/admission-controllers/#defaultstorageclass)
and for the StorageClass object to elect
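As a sketch, a StorageClass elected as the default on GCE might look like this (the annotation key and apiVersion have varied across Kubernetes releases, so check your version's docs):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # marks this class as the one the admission controller applies
    # to claims that don't name a class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```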
Sorry, you said Pod and I read container. :)
On Oct 26, 2016 1:02 PM, "Rodrigo Campos" wrote:
> You can specify one for each container (and make each do all the things
> you need).
>
> Would that work?
>
> On Wednesday, October 26, 2016, Peter Gardfjäll <
>
No, but you can mux it yourself behind the scenes.
On Wed, Oct 26, 2016 at 12:40 AM, Peter Gardfjäll
wrote:
> Is it possible to specify more than one `readinessProbe` for a pod?
>
> best regards, Peter
>
> --
> You received this message because you are subscribed
lates/services/dev/service-media-feature.yaml (git)-[master]
> The Service "media-feature" is invalid.
> spec.clusterIP: Invalid value: "": field is immutable
>
> whereas apply works and updates the Service
>
> thanks for your help
>
>
> On 20 O
I have a note on my desk that simply says "sig-brownfield". The idea
was that it *might* be interesting to have a discussion forum for
people deploying Kubernetes into existing environments, which has high
correlation with on-prem. There are many issues, but as you point
out, they almost all
The docs are weak (absent) on this. I filed a docs bug. The idea is
that when you use the configMap volume type, and later update the API
object, the files on disk get updated atomically.
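A minimal sketch of that projection (ConfigMap name is hypothetical): the files under the mountPath track later edits to the ConfigMap, which env vars would not.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  volumes:
  - name: config
    configMap:
      name: app-config        # hypothetical ConfigMap; edits propagate to disk
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
```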
On Fri, Oct 7, 2016 at 5:14 AM, Vinoth Narasimhan wrote:
> Tim,
>
> Is this somewhat
We support live-update of configmap as file projections. Obviously we
can not support that for env, but we could allow the user to ask for a
notification or something. Not sure what that API would look like..
On Thu, Oct 6, 2016 at 10:06 AM, Vinoth Narasimhan wrote:
> In
What about supporting env-expansion in the image name field? This
would allow things like populating ARCH from downward API, then using
`me/mycontainer-$(ARCH):v3.1.4` as image.
On Thu, Sep 29, 2016 at 9:12 AM, 'Eric Tune' via Kubernetes user
discussion and Q
On Sat, Oct 1, 2016 at 6:05 PM, Mike wrote:
> Thank you, Tim, for the reply. I sure better understand your logic. Given I
> am new to Kubernetes, would you please shed some lights on where the
> technical challenges are for such architecture (shared control plane)?
Off the
On Fri, Sep 30, 2016 at 8:55 PM, Rodrigo Campos <rodr...@sdfg.com.ar> wrote:
> On Fri, Sep 30, 2016 at 10:59:09AM -0700, 'Tim Hockin' via Kubernetes user
> discussion and Q wrote:
>> We need your help charting our course. With one command (running a
>> pod in your c
On Wed, Sep 28, 2016 at 11:54 AM, Mike wrote:
> Hi Tim,
>
> Thank you for the answer. The goal is to share the control plane among ,say,
> 100 smaller clusters (only worker nodes in each cluster) which will save you
> something like 100*3=300 control plane nodes so it seems
This is another variant of multi-tenancy, which is not a first-class
supported thing in Kubernetes yet. You're actually making it vastly
more complicated by describing it as multiple cloud accounts. That
implies different pools of VMs, which implies different clusters. I
think federation is
Correct. You can confirm this empirically: run a service that
produces a different result for each backend, and send traffic through
the service. With enough traffic (yay random!) you will approach
equal distribution.
On Wed, Sep 28, 2016 at 9:09 AM, Matt Hughes wrote:
>
On Tue, Sep 27, 2016 at 10:17 PM, Quinn Comendant <qu...@strangecode.com> wrote:
> On Mon, 26 Sep 2016 21:48:41 -0700, 'Tim Hockin' via Kubernetes user
> discussion and Q wrote:
>> The newer max PDs per machine. We support 16 PDs per machine. To fix
>> this we need
On Tue, Sep 27, 2016 at 8:15 PM, Quinn Comendant wrote:
> On Tue, 27 Sep 2016 09:09:37 -0700, 'Tim Hockin' wrote:
>> You could use 1 PD per site, with subdirs for mysql and content, and
>> mount them using the `subPath` feature. That cuts your needs in half,
>> right off
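The subPath idea sketched out, with hypothetical disk and image names: one PD, mounted twice into different subdirectories.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: site
spec:
  volumes:
  - name: site-data
    gcePersistentDisk:
      pdName: site-pd          # hypothetical disk
      fsType: ext4
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: site-data
      mountPath: /var/lib/mysql
      subPath: mysql           # subdir "mysql" on the one shared disk
  - name: web
    image: nginx
    volumeMounts:
    - name: site-data
      mountPath: /usr/share/nginx/html
      subPath: content         # subdir "content" on the same disk
```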
ic options like --resolv-conf flag, and it creates all the components
> (kube-dns, kube-proxy, heapster) automatically.
>
>
>
> On Mon, Sep 26, 2016 at 8:45 PM, 'Tim Hockin' via Kubernetes user discussion
> and Q <kubernetes-users@googlegroups.com> wrote:
>>
>> D
700, 'Tim Hockin' via Kubernetes user
> discussion and Q wrote:
>> This is not supported in Kubernetes yet
>
> Sorry, what part of Persistent disks is not supported by Kubernetes? All the
> docs and tutorials on cloud.google.com seem to imply their use is available.
>
Dnsmasq does have a bunch of cool flags, but we have not really qualified
them yet. Use at your own risk, but they look promising.
On Sep 26, 2016 5:04 PM, "Cole Mickens" wrote:
> Is it also an option to specify the upstream servers directly in the
> dnsmasq command
This is not supported in Kubernetes yet
On Sat, Sep 24, 2016 at 10:48 PM, Quinn Comendant wrote:
> On Wednesday, February 3, 2016 at 12:57:25 PM UTC-5, Rimas Mocevicius wrote:
>>
>> are there any limits on GKE how many gcePersistentDisks can be attached to
>> the cluster?
Also, command is currently indented under volumeMount, which is not right
On Sep 17, 2016 11:18 AM, "Derek Mahar" wrote:
> On Friday, 16 September 2016 17:19:22 UTC-4, Cole Mickens wrote:
>>
>> Hm, I guess you didn't ask about editting the Pod, just creating it.
>>
We sort of want to move away from pre-populating env vars for services
- it has come up as a name-conflict problem for people, it is rather
noisy, and it doesn't get updated when a Service changes. Env vars
are a really sub-standard API for this.
On Wed, Aug 24, 2016 at 11:42 PM, Mayank
I don't think we want a mechanism for pods to know what service
NodePorts point to them. It would be too noisy (every node) and
that's just not a common pattern. If you need to register nodePorts,
I think you should do it as a controller pod that runs in the cluster,
reads the kube API and syncs
There's a PR which stalled
(https://github.com/kubernetes/kubernetes/pull/23576) to add DNS
support to AWS and there's a PR in flight
(https://github.com/kubernetes/kubernetes/pull/30949) to add GCE
support.
We could use some clearer user requirements, I think.
Second, though, you should not
We don't yet have a way to back persistent volumes with node-local data.
On Fri, Aug 12, 2016 at 8:25 AM, kay ru wrote:
> I.e. use something like:
>
> nodeSelector:
>   kubernetes.io/hostname: k8s-node-1
>
> But for PersistentVolume kind
>
> --
> You received this