Thanks Daniel!
On Friday, April 13, 2018 at 2:25:38 PM UTC-4, Daniel Smith wrote:
I haven't checked, but I'd bet that the C++ gRPC library uses HTTP/2, which
seems to explicitly encourage connection reuse, and that is what leads to this
behavior. If you search around you may be able to find some options.
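Concretely, if the C++ library pools subchannels across channels (as current
releases do), even a brand-new channel to the same target with the same
arguments can ride the existing TCP connection. A minimal sketch of one
workaround under that assumption; the argument key is arbitrary, and the
target is the service name and port mentioned later in this thread:

#include <grpcpp/grpcpp.h>

// Channels with identical targets and arguments can share subchannels
// (and therefore TCP connections) from a process-wide pool. Giving each
// channel a unique dummy argument forces a distinct subchannel, i.e. a
// fresh connection that the Service's iptables rules can send to a
// different pod.
std::shared_ptr<grpc::Channel> MakeUnsharedChannel(int request_id) {
  grpc::ChannelArguments args;
  args.SetInt("x.request_sequence", request_id);  // arbitrary unique key
  return grpc::CreateCustomChannel("btm-calculator:3006",
                                   grpc::InsecureChannelCredentials(), args);
}

Note that all RPCs issued on one such channel still share that channel's
single HTTP/2 connection, so this only helps if channels are rotated.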
On Fri, Apr 13, 2018 at 11:18 AM cristian.coch...@gmail.com wrote:
I am using gRPC on both sides, both in C++. The client sends asynchronous
requests. A new channel is created (and destroyed) with every request.
Thank you!
On Friday, April 13, 2018 at 2:09:48 PM UTC-4, Tim Hockin wrote:
What are you using for a client? Is it by chance HTTP and written in Go?
Some client libraries, including Go's http, aggressively reuse
connections.
If you try with something like exec netcat, I bet you see different results.
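For example, a rough plain-sockets stand-in for that netcat test, as a
sketch: it opens a brand-new TCP connection per probe, and it assumes a
plain-HTTP test server behind the Service that replies with its pod name
(as the hostnames example in the Kubernetes docs does); the service name
and port are the ones mentioned elsewhere in this thread:

#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
  for (int i = 0; i < 100; ++i) {
    addrinfo hints{}, *res = nullptr;
    hints.ai_socktype = SOCK_STREAM;  // TCP
    if (getaddrinfo("btm-calculator", "3006", &hints, &res) != 0) return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
      const char req[] = "GET / HTTP/1.0\r\n\r\n";
      write(fd, req, sizeof(req) - 1);
      char buf[512];
      ssize_t n = read(fd, buf, sizeof(buf) - 1);
      if (n > 0) { buf[n] = '\0'; printf("probe %d -> %s\n", i, buf); }
    }
    if (fd >= 0) close(fd);  // tear the connection down every iteration
    freeaddrinfo(res);
  }
  return 0;
}

A spread across pods here, versus a single pod via the connection-reusing
gRPC client, would confirm the reuse explanation.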
BTW, one might argue that if you depend on RR, you will eventually be
I am running them against the service's cluster IP address (through its name,
i.e. "btm-calculator", which resolves to the cluster IP) on port 3006.
On Friday, April 13, 2018 at 1:19:32 PM UTC-4, Rodrigo Campos wrote:
And how are you running the requests? Against which IP and which port?
On Fri, Apr 13, 2018 at 10:17:04AM -0700, cristian.coch...@gmail.com wrote:
OK, I changed my pods to respond almost immediately so that I can test with a
statistically significant number of requests (10,000), and I am still observing
the same behavior: only one pod receives all 10,000 requests. Can anyone explain
why this happens? I am including the service and deployment definitions.
Thank you Tim.
Is there no way to set it to true RR? If not, I will have to do my own
balancing, unless there is another suggestion.
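One alternative to hand-rolling it, sketched under two assumptions that are
not from this thread: the Service is made headless (clusterIP: None) so DNS
returns one address per pod, and the client uses gRPC's built-in round_robin
load-balancing policy:

#include <grpcpp/grpcpp.h>

// With a headless Service, the dns:/// resolver sees one address per pod;
// the round_robin policy opens a subchannel to each and rotates RPCs
// across them, giving true per-request round-robin on the client side.
std::shared_ptr<grpc::Channel> MakeRoundRobinChannel() {
  grpc::ChannelArguments args;
  args.SetLoadBalancingPolicyName("round_robin");
  return grpc::CreateCustomChannel("dns:///btm-calculator:3006",
                                   grpc::InsecureChannelCredentials(), args);
}

A single long-lived channel then spreads RPCs across the pods instead of
pinning everything to whichever pod the ClusterIP picked first.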
On Friday, April 13, 2018 at 10:59:46 AM UTC-4, Tim Hockin wrote:
> Without a statistically significant load, this is random. What you are
> seeing satisfies
https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/

https://kubernetes.io/docs/concepts/services-networking/service/

Sent from Mail for Windows 10

From: cristian.coch...@gmail.com
Sent: Friday, April 13, 2018 7:11 PM
To: Kubernetes user discussion and Q&A <kubernetes-users@googlegroups.com>
Subject: [kubernetes-users] ClusterIP service not distributing requests
evenly among pods in Google Kubernetes Engine
I have a ClusterIP service in my cluster with 4 pods behind it. I noticed that
requests to the service are not evenly distributed among the pods.