Thanks Daniel!
On Friday, April 13, 2018 at 2:25:38 PM UTC-4, Daniel Smith wrote:
> I haven't checked, but I'd bet that the C++ gRPC library uses HTTP/2, which
> seems to explicitly encourage connection reuse, which leads to this behavior.
> If you search around you may be able to find some options…
I am using gRPC on both sides, both in C++. The client sends asynchronous
requests. A new channel is created (and destroyed) with every request.
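Since kube-proxy picks a backend per *connection*, not per request, connection reuse is the usual suspect here. A minimal sketch of that interaction (a simulation, not the poster's actual code; pod names and counts are made up to mirror this thread's 4-pod, 10k-request setup):

```python
import random
from collections import Counter

PODS = ["pod-a", "pod-b", "pod-c", "pod-d"]

def pick_backend():
    # Stand-in for the iptables DNAT decision kube-proxy makes once,
    # at connect() time, for each new connection.
    return random.choice(PODS)

def send_requests(n_requests, reuse_connection):
    counts = Counter()
    if reuse_connection:
        backend = pick_backend()           # one HTTP/2 connection, one pod
        for _ in range(n_requests):
            counts[backend] += 1
    else:
        for _ in range(n_requests):
            counts[pick_backend()] += 1    # fresh connection per request
    return counts

random.seed(0)
reused = send_requests(10_000, reuse_connection=True)
fresh = send_requests(10_000, reuse_connection=False)
print(len(reused), "pod(s) hit with a reused connection")
print(len(fresh), "pod(s) hit with fresh connections")
```

If all 10k requests land on one pod even with a new channel per request, it is worth verifying (e.g. with `netstat` on the client) that a new TCP connection is really being opened each time, since gRPC channels can share underlying connections.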
Thank you!
On Friday, April 13, 2018 at 2:09:48 PM UTC-4, Tim Hockin wrote:
> What are you using for a client? Is it by chance http and written in Go?
I am running them against the service's cluster IP address (through its name,
i.e. "btm-calculator" which translates to the cluster IP), and port 3006.
On Friday, April 13, 2018 at 1:19:32 PM UTC-4, Rodrigo Campos wrote:
> And how are you running the requests? Against which IP and which port?
>
OK, I changed my pods to respond almost immediately so that I can test with a
statistically significant number of requests (10,000), and I am still observing
the same behavior: only 1 pod receives all 10k requests. Can anyone explain why
this happens? I am including the service and deployment manifests.
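For reference, a minimal sketch of what such a Service and Deployment might look like. This is a hypothetical reconstruction, not the poster's actual manifests: the name "btm-calculator", port 3006, ClusterIP type, and 4 replicas come from this thread, while the labels and image are assumptions.

```yaml
# Hypothetical reconstruction based on details mentioned in this thread.
apiVersion: v1
kind: Service
metadata:
  name: btm-calculator
spec:
  type: ClusterIP            # in-cluster only
  selector:
    app: btm-calculator
  ports:
    - port: 3006
      targetPort: 3006
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: btm-calculator
spec:
  replicas: 4                # the 4 pods behind the service
  selector:
    matchLabels:
      app: btm-calculator
  template:
    metadata:
      labels:
        app: btm-calculator
    spec:
      containers:
        - name: btm-calculator
          image: btm-calculator:latest   # assumed image name
          ports:
            - containerPort: 3006
```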
Coincidentally (or not), while searching for answers about this, yesterday I
watched your "Life of a Packet" presentation at GCP Next '17. :-)
>
> On Friday, April 13, 2018 at 10:59:46 AM UTC-4, Tim Hockin wrote:
> > Without a statistically significant load, this is random. What you are…
Thank you Tim.
Is there no way to set it to true RR? If not, I will have to do my own
balancing, unless there is another suggestion.
On Friday, April 13, 2018 at 10:59:46 AM UTC-4, Tim Hockin wrote:
> Without a statistically significant load, this is random. What you are
> seeing satisfies that…
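The point about statistical significance follows from how kube-proxy's iptables mode selects a backend: a chain of `statistic mode random` rules where, for n backends, rule i (0-based) matches with probability 1/(n-i). That is uniform in expectation but never exactly even. A hedged simulation of that selection scheme (the pod names are made up):

```python
import random
from collections import Counter

def iptables_pick(backends):
    # Mirrors kube-proxy's iptables chain: with n backends, rule i
    # fires with probability 1/(n - i), so each backend is chosen
    # with overall probability 1/n.
    n = len(backends)
    for i, b in enumerate(backends):
        if random.random() < 1.0 / (n - i):
            return b
    return backends[-1]  # last rule always matches (probability 1)

random.seed(42)
pods = ["pod-a", "pod-b", "pod-c", "pod-d"]
counts = Counter(iptables_pick(pods) for _ in range(10_000))
print(dict(counts))  # roughly 2500 each
```

Over 10,000 independent connections the counts cluster tightly around 2,500 per pod; with a handful of requests (or one reused connection) any skew is expected.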
Are you using session affinity?
>
> On Fri, Apr 13, 2018, 10:34 AM Cristian Cocheci wrote:
>
> Thank you Sunil, but the LoadBalancer type is used for exposing the service
> externally, which I don't need. All I need is my service exposed inside the
> cluster.
Thank you Sunil, but the LoadBalancer type is used for exposing the service
externally, which I don't need. All I need is my service exposed inside the
cluster.
On Fri, Apr 13, 2018 at 10:30 AM, Sunil Bhai wrote:
> Hi,
>
> Check this once:
>
> https://kubernetes.io/docs/tasks/access-ap…
I have only 1 node with multiple processors and a lot of memory. I actually
did this on purpose to eliminate the "how are the pods distributed on
nodes" variable.
I tail the application logs of the 4 pods at the same time; that's how I
noticed the uneven distribution. Also, in my response from the…
I have a ClusterIP service in my cluster with 4 pods behind it. I noticed that
requests to the service are not evenly distributed among pods. After further
reading I learned that the kube-proxy pod is responsible for setting up the
iptables rules that forward requests to the pods. After logging…