Without a statistically significant load, this is random. What you are seeing satisfies that definition.
The real reason is that round-robin is a lie. Each node in a cluster will do its own RR from any number of clients.

On Fri, Apr 13, 2018, 10:51 AM <cristian.coch...@gmail.com> wrote:
> On Friday, April 13, 2018 at 10:39:38 AM UTC-4, Tim Hockin wrote:
>> The load is random, but the distribution should be approximately equal for non-trivial loads. E.g. when we run tests for 1000 requests you can see it is close to equal.
>>
>> How unequal is it? Are you using session affinity?
>>
>> On Fri, Apr 13, 2018, 10:34 AM Cristian Cocheci <cristian...@gmail.com> wrote:
>>> Thank you Sunil, but the LoadBalancer type is used for exposing the service externally, which I don't need. All I need is my service exposed inside the cluster.
>>>
>>> On Fri, Apr 13, 2018 at 10:30 AM, Sunil Bhai <placi...@gmail.com> wrote:
>>>> Hi,
>>>> Check these once:
>>>> https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/
>>>> https://kubernetes.io/docs/concepts/services-networking/service/
>>>>
>>>> Sent from Mail for Windows 10
>>>>
>>>> From: cristian...@gmail.com
>>>> Sent: Friday, April 13, 2018 7:11 PM
>>>> To: Kubernetes user discussion and Q&A
>>>> Subject: [kubernetes-users] ClusterIP service not distributing requests evenly among pods in Google Kubernetes Engine
>>>>
>>>> I have a ClusterIP service in my cluster with 4 pods behind it. I noticed that requests to the service are not evenly distributed among the pods. After further reading I learned that the kube-proxy pod is responsible for setting up the iptables rules that forward requests to the pods.
>>>> After logging into the kube-proxy pod and listing the nat table rules, this is what I got:
>>>>
>>>> Chain KUBE-SVC-4F4JXO37LX4IKRUC (1 references)
>>>> target                     prot opt source     destination
>>>> KUBE-SEP-6X4IVU3LDAAZJUPD  all  --  0.0.0.0/0  0.0.0.0/0    /* default/btm-calculator: */ statistic mode random probability 0.25000000000
>>>> KUBE-SEP-TXRPWWIIUWW3MNFH  all  --  0.0.0.0/0  0.0.0.0/0    /* default/btm-calculator: */ statistic mode random probability 0.33332999982
>>>> KUBE-SEP-HW6SF2LJM4S7X5ZN  all  --  0.0.0.0/0  0.0.0.0/0    /* default/btm-calculator: */ statistic mode random probability 0.50000000000
>>>> KUBE-SEP-TTJKD52QZSH2OH4O  all  --  0.0.0.0/0  0.0.0.0/0    /* default/btm-calculator: */
>>>>
>>>> The comments seem to suggest that the load is balanced according to the statistic mode random probability, with an uneven probability distribution. Is this how it's supposed to work? Every piece of documentation that I read about load balancing by a ClusterIP service indicates that it should be round robin. Obviously this is not the case here.
>>>> Is there a way to set a ClusterIP to perform round-robin load balancing?
>>>>
>>>> Thank you,
>>>> Cristian
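[Editor's note: the probabilities in the rules quoted above are less uneven than they look. iptables evaluates the KUBE-SVC rules top to bottom, and each `statistic mode random` rule only sees the traffic that fell through the rules before it, so 0.25, 0.3333, 0.5, and "always" compose into an equal share per endpoint. A minimal sketch of the arithmetic, using only the four probabilities printed by iptables:]

```python
# iptables tries the KUBE-SVC rules top to bottom; a rule with
# "statistic mode random probability p" matches with probability p,
# and a non-matching packet falls through to the next rule.
rule_probs = [0.25, 1 / 3, 0.5, 1.0]  # last rule has no statistic match, i.e. always fires

remaining = 1.0   # fraction of traffic that reaches this rule
effective = []    # overall share each endpoint receives
for p in rule_probs:
    effective.append(remaining * p)
    remaining *= 1 - p

print([round(e, 4) for e in effective])  # → [0.25, 0.25, 0.25, 0.25]
```

So each of the 4 endpoints receives 25% of new connections in expectation; the descending probabilities are just how a uniform choice is encoded as a chain of independent coin flips.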
> I am not using session affinity, and I am not sending a statistically significant number of requests. In my particular use case I only need to send 100 requests or less. I also have the problem that I mentioned above: if I send 20 requests in a loop, they ALL go to the same pod. If I wait a while and send another group of 20 requests, they MIGHT go to a different pod, but they all go to the same pod (even if different from the first one). This is a big issue for me, since my requests are actually heavy calculations, and I was hoping to use this mechanism as a way of parallelizing my computations.

-- 
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.
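[Editor's note: Tim's point about small samples is easy to reproduce without a cluster. If each new connection picks one of 4 backends independently at random, which is what the statistic-mode rules above do, a batch of 20 requests can land quite unevenly, while 1000 requests come out close to 250 per pod. A self-contained simulation sketch; the function name and seeds are illustrative, standard library only:]

```python
import random
from collections import Counter

def distribute(n_requests, n_pods=4, seed=None):
    """Pick a backend uniformly at random for each request, as the
    iptables statistic-mode rules do for each new connection."""
    rng = random.Random(seed)
    counts = Counter(rng.randrange(n_pods) for _ in range(n_requests))
    return [counts[i] for i in range(n_pods)]

print(distribute(20, seed=42))    # small batches are often visibly lopsided
print(distribute(1000, seed=42))  # large batches approach ~250 per pod
```

Note that this models independent connections: the iptables rules select a backend per connection, not per request, so requests reusing one kept-alive connection will all reach the same pod regardless of sample size.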