On Thursday, August 10, 2017 at 1:03:42 AM UTC-5, Tim Hockin wrote:
> The GKE team has heard the desire for this and is looking at possible
> ways to provide it.
> 
> On Wed, Aug 9, 2017 at 3:56 PM,  <csala...@devsu.com> wrote:
> > On Friday, June 16, 2017 at 11:24:15 AM UTC-5, pa...@qwil.co wrote:
> >> Yes, this is the right approach -- here's a detailed walk-through:
> >>
> >> https://github.com/johnlabarge/gke-nat-example
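> >>
> >> The core of that walk-through is a tag-based default route through a 
> >> small NAT instance. Roughly (a sketch; the instance name, network tag, 
> >> and zone below are placeholders):
> >>
> >>   # NAT instance; --can-ip-forward lets it forward packets it did not
> >>   # originate
> >>   gcloud compute instances create nat-gateway \
> >>       --can-ip-forward --zone us-central1-a --tags nat
> >>
> >>   # Send all egress from instances tagged "no-ip" through it
> >>   gcloud compute routes create no-ip-internet-route \
> >>       --destination-range 0.0.0.0/0 \
> >>       --next-hop-instance nat-gateway \
> >>       --next-hop-instance-zone us-central1-a \
> >>       --tags no-ip --priority 800
> >>
> >> The NAT instance itself also needs net.ipv4.ip_forward=1 and an 
> >> iptables MASQUERADE rule, which the linked guides cover.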
> >>
> >> On Friday, June 16, 2017 at 8:36:13 AM UTC-7, giorgio...@beinnova.it wrote:
> >> > Hello, I have the same problem described here. I have a GKE cluster and I 
> >> > need to connect to an external service. The NAT solution is right for my 
> >> > needs, since my cluster resizes automatically. @Paul Tiplady, have you 
> >> > configured the external NAT? Can you share your experience? I tried 
> >> > following this guide 
> >> > https://cloud.google.com/compute/docs/vpc/special-configurations#natgateway
> >> > but it doesn't seem to work.
> >> >
> >> > Thanks,
> >> > Giorgio
> >> > On Wednesday, May 3, 2017 at 22:08:50 UTC+2, Paul Tiplady 
> >> > wrote:
> >> > > Yes, my reply was more directed to Rodrigo. In my use-case I do resize 
> >> > > clusters often (as part of the node upgrade process), so I want a 
> >> > > solution that's going to handle that case automatically. The NAT 
> >> > > Gateway approach appears to be the best (only?) option that handles 
> >> > > all cases seamlessly at this point.
> >> > >
> >> > >
> >> > > I don't know in which cases a VM could be destroyed, I'd also be 
> >> > > interested in seeing an enumeration of those cases. I'm taking a 
> >> > > conservative stance, as the consequences of dropping traffic through a 
> >> > > changing source IP are quite severe in my case, and because I want to 
> >> > > keep the process for upgrading the cluster as simple as possible.  
> >> > > From 
> >> > > https://cloudplatform.googleblog.com/2015/03/Google-Compute-Engine-uses-Live-Migration-technology-to-service-infrastructure-without-application-downtime.html
> >> > >  it sounds like VM termination should not be caused by planned 
> >> > > maintenance, but I assume it could be caused by unexpected failures in 
> >> > > the datacenter. It doesn't seem reckless to manually set the IPs as 
> >> > > part of the upgrade process as you're suggesting.
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 12:13 PM, Evan Jones <evan....@bluecore.com> 
> >> > > wrote:
> >> > >
> >> > > Correct, but at least at the moment we aren't using auto-resizing, and 
> >> > > I've never seen nodes get removed without us manually taking some 
> >> > > action (e.g. upgrading Kubernetes releases or similar). Are there 
> >> > > automated events that can delete a VM and remove it from the cluster 
> >> > > without us having done something? Certainly I've observed machines 
> >> > > rebooting, but that 
> >> > > also preserves dedicated IPs. I can live with having to take some 
> >> > > manual configuration action periodically, if we are changing something 
> >> > > with our cluster, but I would like to know if there is something I've 
> >> > > overlooked. Thanks!
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 12:20 PM, Paul Tiplady <pa...@qwil.co> wrote:
> >> > >
> >> > > The public IP is not stable in GKE. You can manually assign a static 
> >> > > IP to a GKE node, but then if the node goes away (e.g. your cluster 
> >> > > was resized) the IP will be detached, and you'll have to manually 
> >> > > reassign. I'd guess this is also true of AWS-managed equivalents such 
> >> > > as CoreOS's CloudFormation scripts.
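> >> > >
> >> > > For concreteness, the manual flow is roughly the following (a sketch; 
> >> > > the address name, example IP, node name, and zone are placeholders):
> >> > >
> >> > >   # One-time: promote the node's ephemeral external IP to a static one
> >> > >   gcloud compute addresses create node-egress-ip \
> >> > >       --addresses 203.0.113.10 --region us-central1
> >> > >
> >> > >   # After the node is recreated: swap its fresh ephemeral IP for the
> >> > >   # reserved static address
> >> > >   gcloud compute instances delete-access-config NEW_NODE \
> >> > >       --access-config-name "external-nat" --zone us-central1-a
> >> > >   gcloud compute instances add-access-config NEW_NODE \
> >> > >       --access-config-name "external-nat" --address 203.0.113.10 \
> >> > >       --zone us-central1-a
> >> > >
> >> > > That reattach step is exactly what you'd have to redo by hand after 
> >> > > every resize.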
> >> > >
> >> > >
> >> > > On Wed, May 3, 2017 at 8:52 AM, Evan Jones <evan....@triggermail.io> 
> >> > > wrote:
> >> > >
> >> > > As Rodrigo described, we are using Container Engine. I haven't fully 
> >> > > tested this yet, but my plan is to assign "dedicated IPs" to a set of 
> >> > > nodes, probably in their own Node Pool as part of the cluster. Those 
> >> > > are the IPs used by outbound connections from pods running on those 
> >> > > nodes, if I recall correctly from a previous experiment. Then I 
> >> > > will use Rodrigo's taint suggestion to schedule Pods on those nodes.
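> >> > >
> >> > > The taint-and-toleration piece would look roughly like this (a 
> >> > > sketch; the taint key/value and pool name are illustrative):
> >> > >
> >> > >   # Keep ordinary pods off the dedicated-IP nodes
> >> > >   kubectl taint nodes NODE_NAME dedicated-ip=true:NoSchedule
> >> > >
> >> > > and, in the spec of pods that must egress from the stable IPs, a 
> >> > > matching toleration (plus a nodeSelector on GKE's node-pool label, 
> >> > > assuming the pool is named dedicated-ip-pool):
> >> > >
> >> > >   tolerations:
> >> > >   - key: dedicated-ip
> >> > >     operator: Equal
> >> > >     value: "true"
> >> > >     effect: NoSchedule
> >> > >   nodeSelector:
> >> > >     cloud.google.com/gke-nodepool: dedicated-ip-pool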
> >> > >
> >> > > If for whatever reason we need to remove those nodes from that pool, 
> >> > > or delete and recreate them, we can move the dedicated IP and taints 
> >> > > to new nodes, and the jobs should end up in the right place again.
> >> > >
> >> > >
> >> > > In short: I'm pretty sure this is going to solve our problem.
> >> > >
> >> > >
> >> > > Thanks!
> >
> > The approach of configuring a NAT works, but it has two major drawbacks:
> >
> > 1. It creates a single point of failure (if the VM that runs the NAT fails)
> > 2. It's too complex!
> >
> > In my use case I don't need autoscaling enabled right now, so I think it's 
> > better to just change the VMs' IPs to static addresses. In the future, 
> > though, I know I will need this feature.
> >
> > Does somebody know if there are any plans to provide this feature in 
> > GKE?
> >

Hi, are there any updates on this feature? Is it on the GKE team's roadmap, 
or has it not been planned yet?

