On Fri, Jan 6, 2017 at 3:32 PM, 'Mark Betz' via Kubernetes user
discussion and Q&A <kubernetes-users@googlegroups.com> wrote:
> Ha, ok fair enough ...
>
>> The last part of this reads as "I know I'm not supposed to have an
>> instance belong to more than one load-balanced instance group, so I
>> added my instance to more than one load-balanced instance group."
>
> Yeah, I didn't think it through enough, clearly, but I guess somewhere in
> the back of my head I expected the http load balancer to proxy for just the
> instances in the nodepool the backend pods live in. The k8s http service
> proxies to pods that have affinity for one nodepool/instance group, while
> the thrift k8s service proxies to pods that have affinity for another
> nodepool/instance group. I manually targeted the internal load balancer at
> that instance group, not realizing at the time that the http load balancer
> had targeted the same instances.
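
For reference, a rough sketch of what that manual ILB targeting might look
like with gcloud; the resource names, region/zone, and port are all
hypothetical placeholders, not the actual resources from this setup:

# TCP health check and an internal backend service for the thrift port
gcloud compute health-checks create tcp thrift-hc --port 9090
gcloud compute backend-services create thrift-ilb-bes \
    --load-balancing-scheme internal --protocol tcp \
    --health-checks thrift-hc --region us-central1

# point it at the instance group backing the thrift nodepool
gcloud compute backend-services add-backend thrift-ilb-bes \
    --instance-group thrift-pool-ig --instance-group-zone us-central1-a \
    --region us-central1

# internal forwarding rule that gives the ILB its internal IP:port
gcloud compute forwarding-rules create thrift-ilb-fr \
    --load-balancing-scheme internal --ports 9090 \
    --backend-service thrift-ilb-bes --region us-central1
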
>
>> You should be able to use the same IG that was created for your HTTP
>> load-balancer except - oh no!  It's configured in utilization mode.
>> We know that utilization is meaningless for containers for now, so we
>> should probably switch that to rate mode.  But we don't have a sane
>> value for the rate.  Challenging.
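
If switching the existing backend to rate mode turns out to be workable, it
would presumably be something along these lines; the backend-service and
instance-group names are hypothetical, and the max rate is an arbitrary guess
given the "no sane value" caveat above:

gcloud compute backend-services update-backend http-lb-bes \
    --instance-group k8s-ig-us-central1-a \
    --instance-group-zone us-central1-a \
    --global \
    --balancing-mode RATE \
    --max-rate-per-instance 100   # arbitrary; no obviously right value here
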
>
> But it's kind of a key point for us that the thrift services not be exposed
> to the world... are you saying (all other things being equal) we could have
> a single LB with two front-ends, one accepting traffic from outside and
> another accepting only internal traffic? I'm having a hard time wrapping my
> head around that, but I'm not really a networking guy.

In my mental model it would be 2 LBs sharing the same BackendService->IG,
but that also broke down, because the BackendService used by an HTTP load
balancer is not compatible with the BackendService required by an ILB.
I'm trying to find a way to make this possible.
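
Concretely, the two BackendService flavors are created as different resources
and can't stand in for each other, even when they point at the same instance
group. A sketch with hypothetical names:

# HTTP(S) load balancing wants a global, HTTP-protocol backend service ...
gcloud compute backend-services create http-lb-bes \
    --protocol HTTP --health-checks http-hc --global

# ... while the ILB wants a regional backend service with the internal scheme
gcloud compute backend-services create thrift-ilb-bes \
    --load-balancing-scheme internal --protocol tcp \
    --health-checks thrift-hc --region us-central1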

>> I'll look into this more.  I don't have an immediate answer for you,
>> sadly.
>
>
> Thanks, Tim. Appreciate the help. We're continuing to run tests around the
> intermittent timeout issues we're seeing with our thrift services, and if
> anything further illuminates the lb behavior I'll post it here.

Thanks

> --Mark
>
> On Friday, January 6, 2017 at 6:19:01 PM UTC-5, Tim Hockin wrote:
>>
>> I am not sure I understand
>>
>> On Fri, Jan 6, 2017 at 11:38 AM, 'Mark Betz' via Kubernetes user
>> discussion and Q&A <kubernet...@googlegroups.com> wrote:
>> > Say I have a cluster with two services: one is an http service that I
>> > want to expose to the world, and the other is a thrift service that I
>> > want to call from some other place (over a vpn gateway into the GCP
>> > project). For this use case I decide to go with two load balancers: the
>> > one k8s will create for inbound http traffic, and an internal one I will
>> > create to handle inbound thrift traffic from the vpn. From earlier
>> > experiments I know I'm not supposed to have an instance belong to more
>> > than one load-balanced instance group, so I create a separate
>> > nodepool/instance group just for the thrift service to live in, set the
>> > thrift service to open a hostPort on those instances, and use that
>> > instance group as the backend for my internal load balancer.
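
For illustration, a bare-bones sketch of that layout on the Kubernetes side;
the cluster, pool, image, and port are made-up placeholders, and the manifest
is only meant to show the nodeSelector/hostPort shape, not the real config:

# dedicated nodepool for the thrift service
gcloud container node-pools create thrift-pool \
    --cluster my-cluster --zone us-central1-a --num-nodes 3

# pin the thrift pods to that pool and expose a hostPort on those nodes
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: thrift-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thrift-server
  template:
    metadata:
      labels:
        app: thrift-server
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: thrift-pool  # GKE's per-pool node label
      containers:
      - name: thrift-server
        image: gcr.io/my-project/thrift-server:latest
        ports:
        - containerPort: 9090
          hostPort: 9090  # opens the port on the node so the IG can back the ILB
EOF
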
>>
>> The last part of this reads as "I know I'm not supposed to have an
>> instance belong to more than one load-balanced instance group, so I
>> added my instance to more than one load-balanced instance group."
>>
>> You should be able to use the same IG that was created for your HTTP
>> load-balancer except - oh no!  It's configured in utilization mode.
>> We know that utilization is meaningless for containers for now, so we
>> should probably switch that to rate mode.  But we don't have a sane
>> value for the rate.  Challenging.
>>
>> > The problem is that kubernetes also includes the instances in the thrift
>> > instance group when it creates the load balancer for the inbound http
>> > traffic. So it seems like whatever I do, if I want to have more than one
>> > load balancer, I can't avoid:
>> >
>> > status: {
>> >   code: 400
>> >   message: "Validation failed for instance 'projects/blah/instances/blah':
>> >     instance may belong to at most one load-balanced instance group."
>> > }
>> >
>> > So we actually set this up as described, and connections seem to work;
>> > however, we have seen some timeout anomalies that we're debugging. They
>> > could be completely unrelated, but in the process of digging into them I
>> > came across that status, investigated it, and ended up posting this
>> > message.
>> >
>> > My first question is: what is the practical effect of this
>> > condition/status in the project/cluster?
>> >
>> > Follow-up: is there a way I can enable this general use case without
>> > running into the above constraint?
>>
>> I'll look into this more.  I don't have an immediate answer for you,
>> sadly.
>>
>> > Thanks!
>> >
>> > --Mark
>> >