31, 2018 at 2:58:52 PM UTC-7, Mike H wrote:
>
> Also, according to the section, "Determining if your Ingress is
> compatible"
> <https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#determining_if_your_ingress_is_compatible>,
>
> the
building a NAT gateway), but I cannot find
> reliable information about how to assign a reserved static IP to a GKE node.
>
> Cheers,
> Mike
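Not an official answer, but one way people have done this outside of GKE's managed tooling is to reserve a regional static address and swap it onto the node VM's access config with gcloud. This is only a sketch: the instance name, zone, region, and address name below are all placeholders, and GKE can recreate nodes at any time, which silently drops the assignment.

```shell
# Sketch only: gke-node-1, us-central1-a, and my-static-ip are placeholders.
# Reserve a regional static external IP.
gcloud compute addresses create my-static-ip --region us-central1

# Replace the node VM's ephemeral external IP with the reserved one.
# "external-nat" is the default access-config name on GCE instances.
gcloud compute instances delete-access-config gke-node-1 \
    --zone us-central1-a --access-config-name "external-nat"
gcloud compute instances add-access-config gke-node-1 \
    --zone us-central1-a --access-config-name "external-nat" \
    --address "$(gcloud compute addresses describe my-static-ip \
        --region us-central1 --format='value(address)')"
```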
>
On Wednesday, May 3, 2017 at 12:13:42 PM UTC-7, Evan Jones wrote:
> Correct, but at least at the moment we aren't using auto-resizing, and I've
> never seen nodes get removed without us manually taking some action (e.g.
> u
to kubernetes-use...@googlegroups.com.
>
> > To post to this group, send email to kubernet...@googlegroups.com.
>
> > Visit this group at https://groups.google.com/group/kubernetes-users.
>
> > For more options, visit https://groups.google.com/d/optout.
>
>
>
*The YouTube page that showed videos of the US Congress hearing session*
*held to follow the money-laundering activities of*
*the Saudi Maan Abdulwahed Al-Sanea,*
*owner of the Saad Hospital, Saad Company, and Saad Schools in the Eastern Province** of Saudi Arabia, * * and chairman
of the board of Awal Bank of Bahrain,*
*with commentary from the CNBC television channel*
*translated in
I just installed kube-lego, marked up my Ingress, and got this
automagically. It's very simple and requires next to no setup. It looks
like kube-lego has been deprecated as of 1.8, though, and the successor
project is https://github.com/jetstack/cert-manager/.
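For anyone following along, the kube-lego pattern was just an annotation plus a tls block on the Ingress. A minimal sketch (hostname, secret name, and service name are all invented; cert-manager's ingress-shim uses its own issuer annotations instead, and the exact names have changed across its releases, so check its docs):

```yaml
# Sketch of the kube-lego-era pattern; example.com, example-tls, and
# my-service are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
```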
On Mon, Jan 29, 2018 at 10:47 AM 'mobicycle' via K
So I'm firmly on the "no stateful in k8s" team in production but I've been
containerizing databases for devs via minikube (and prior docker-compose)
for over a year and it's been fantastic. We run a handful of different
databases. The biggest issue is our laptops being limited to 16 GB of RAM,
s
So initially what you'd have to do is run multiple clusters and federate
them to have multiple masters running (albeit for different clusters), but GKE
just added a multi-master feature to 1.8 as an Alpha feature you can
request. Not sure how far out it is; I'd love to have this so I can
federate less
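The federation workflow being described was, at the time, driven by the kubefed CLI. Roughly (a sketch: fellowship, cluster-b, and the context names are made up, and some flags varied between releases):

```shell
# Sketch of Federation v1 setup; names and contexts are illustrative.
# Stand up the federation control plane in a host cluster.
kubefed init fellowship \
    --host-cluster-context=gke_myproj_us-central1-a_host \
    --dns-zone-name="example.com."

# Join an existing cluster into the federation.
kubefed join cluster-b \
    --host-cluster-context=fellowship \
    --cluster-context=gke_myproj_us-east1-b_cluster-b
```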
nd plugin binaries need to be
manually copied over to the worker nodes. Is that to be expected?
Thanks,
Mike
On Mon, Jun 5, 2017 at 8:21 PM, Brandon Philips
wrote:
> Any reason to not use https://github.com/kubernetes/minikube?
>
> On Wed, May 31, 2017 at 9:02 AM Mike Cico wrote:
failures, or what. Has anyone
seen this before? Are there other things I can look for to try to figure out
what's going on?
Thanks,
Mike
w.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes
where you use liveness checks to make Kubernetes restart your pod if
the configmap changes.
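The trick in that answer is roughly: snapshot the mounted config at startup, then let the liveness probe fail once the live ConfigMap diverges, so kubelet restarts the pod. A minimal sketch, with all names (app-config, /etc/config/settings) invented for illustration:

```yaml
# Sketch: the container copies its config at startup; the liveness probe
# fails (triggering a restart) once the mounted ConfigMap differs.
apiVersion: v1
kind: Pod
metadata:
  name: config-watcher-demo
spec:
  containers:
  - name: app
    image: busybox
    command:
    - sh
    - -c
    - cp /etc/config/settings /tmp/settings.orig && exec sleep infinity
    livenessProbe:
      exec:
        command: ["cmp", "-s", "/etc/config/settings", "/tmp/settings.orig"]
      periodSeconds: 30
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config
```

One caveat: ConfigMap volume updates propagate to the pod with some delay, and subPath mounts never see updates at all.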
Hope that's helpful
Cheers
Mike
--
Mike Bryant | Network Systems Team Leader | Ocado Technology
mike.bry...@ocado.com | 07
would you
please give an example?
>
> Thanks,
Mike
>
>
>
On Wednesday, November 23, 2016 at 1:30:23 PM UTC-8, Daniel Smith wrote:
>
>
>
> On Wed, Nov 23, 2016 at 11:39 AM, Mike wrote:
>
>> Our use case (big data) demands running few short-term
not
require that many resources for smaller clusters. Is this a correct
assumption?
3. Is there any guide on testing the resource
consumption of Kubernetes components?
Thank you in advance for sharing your insights.
Mike
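One quick empirical check (assuming a metrics source such as heapster or metrics-server is installed in the cluster) is to ask the cluster itself:

```shell
# Per-pod usage of the control-plane components (requires a metrics
# pipeline such as heapster or metrics-server).
kubectl top pods -n kube-system

# Per-node CPU/memory usage.
kubectl top nodes
```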
:
>
> On Wed, Sep 28, 2016 at 11:54 AM, Mike wrote:
> > Hi Tim,
> >
> > Thank you for the answer. The goal is to share the control plane among,
> > say, 100 smaller clusters (only worker nodes in each cluster), which will
> > save you something like 10
y. The L7 load balancer has its own node health checkers that may
> take a minute or two to discover that a service is healthy, and if you
> multiply this a few times you get ~10 minutes.
>
That would jibe with what I was seeing.
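Only the in-cluster half of that delay is tunable: a tighter readinessProbe makes the pod report healthy sooner, while the GCE L7 health check still polls on its own schedule. A sketch, with the path and port invented:

```yaml
# Sketch: aggressive readiness settings; /healthz and 8080 are placeholders.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 2
  periodSeconds: 5
  failureThreshold: 2
```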
> On Fri, Sep 30, 2016 at 9:16 AM Mike Cico >
or "unknown" during the time that the services would be
inaccessible.
The short of my question is, why do changes to the deployments take so long
to be discovered and surfaced by their ingress?
Thanks,
Mike
plane for cost saving and simplicity. Again, this
is how Amazon ECS seems to operate.
Don't you need a control plane per worker group/pool to be able to use
federation?
Thanks,
Mike
On Wednesday, September 28, 2016 at 9:50:11 AM UTC-7, Tim Hockin wrote:
>
> This is another varia
> groups?
>
> Ian
>
> On Wed, Sep 28, 2016 at 10:59 AM Mike wrote:
>
>> I am new to Kubernetes and I have a question regarding the possibility of
>> sharing the control plane of a single Kubernetes cluster among a whole
>> bunch of workers that are
>>
I am new to Kubernetes and I have a question regarding the possibility of
sharing the control plane of a single Kubernetes cluster among a whole
bunch of workers that are
- grouped into 100% isolated groups
- each worker group does not need to access any pod in any other group
and ac
ds to happen here?
Thanks,
Mike