Re: [kubernetes-users] Network Policy to limit open connections per pod

2018-03-28 Thread Jonathan Tronson
When the downstream service went south, the table rapidly grew from ~25k to 500k entries in less than a minute. I don't think there is a reasonable number to set it to that could prevent the entire node from being affected. TPS was so high that catastrophe could be delayed a bit but not
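
For a sense of how quickly the table fills, the kernel exposes both the live entry count and the ceiling under /proc/sys/net/netfilter/. A minimal watcher sketch in Go, assuming the standard nf_conntrack_count and nf_conntrack_max paths (the 5-second poll interval is arbitrary):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
        "time"
    )

    // readIntFile reads a single integer from a /proc file, such as the
    // conntrack counters the kernel exposes.
    func readIntFile(path string) (int, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }

    func main() {
        const (
            countPath = "/proc/sys/net/netfilter/nf_conntrack_count"
            maxPath   = "/proc/sys/net/netfilter/nf_conntrack_max"
        )
        for {
            count, err := readIntFile(countPath)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            max, err := readIntFile(maxPath)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            // Print utilization so a fill rate like 25k -> 500k is visible.
            fmt.Printf("conntrack: %d / %d (%.1f%%)\n",
                count, max, 100*float64(count)/float64(max))
            time.Sleep(5 * time.Second)
        }
    }

Run as a privileged DaemonSet (or directly on the node) to watch per-node utilization during a load test.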

Re: [kubernetes-users] Network Policy to limit open connections per pod

2018-03-28 Thread 'Tim Hockin' via Kubernetes user discussion and Q
The simple answer is to change the limit. The more robust answer would be to make the limit more dynamic, but that can fail at runtime if, for example, kernel memory is fragmented. Also, I am not sure that tunable can be live-adjusted. :( We have ideas about how to be more frugal with conntrack
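
kube-proxy already sizes this limit at startup from its --conntrack-max-per-core and --conntrack-min settings, so one option is to raise those. Changing it by hand amounts to writing the sysctl. A minimal sketch, with Tim's caveat that it is unclear whether the tunable takes effect live; the value 1048576 is illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    // Writing the /proc file is equivalent to:
    //   sysctl -w net.netfilter.nf_conntrack_max=1048576
    // kube-proxy computes a similar value at startup from
    // --conntrack-max-per-core and --conntrack-min.
    func main() {
        const path = "/proc/sys/net/netfilter/nf_conntrack_max"
        newMax := []byte("1048576\n") // illustrative, not a recommendation
        if err := os.WriteFile(path, newMax, 0644); err != nil {
            // Needs root on the node (e.g. a privileged container).
            fmt.Fprintln(os.Stderr, "write:", err)
            os.Exit(1)
        }
        fmt.Println("nf_conntrack_max set to 1048576")
    }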

Re: [kubernetes-users] Network Policy to limit open connections per pod

2018-03-28 Thread Rodrigo Campos
Just curious, but why not change the conntrack limit? On Wednesday, March 28, 2018, wrote: > Is there anything similar to a network policy that limits x open connections per pod? > During a 100k TPS load test, a subset of pods had errors connecting to a downstream

[kubernetes-users] Network Policy to limit open connections per pod

2018-03-28 Thread jtronson
Is there anything similar to a network policy that limits x open connections per pod? During a 100k TPS load test, a subset of pods had errors connecting to a downstream service, and we maxed out the nf_conntrack table (500k), which affected the rest of the pods on each node that had this issue.
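
NetworkPolicy selects which peers a pod may talk to but has no notion of connection counts, so a per-pod cap has to live elsewhere, for example in the client itself. A minimal sketch of capping outbound connections inside the application, using a buffered channel as a counting semaphore (the cap of 1000 and the address downstream.example:8080 are illustrative, not from this thread):

    package main

    import (
        "fmt"
        "net"
        "sync"
        "time"
    )

    // limitedDialer caps concurrent outbound connections with a buffered
    // channel used as a counting semaphore. Closing a returned conn
    // releases its slot.
    type limitedDialer struct {
        slots  chan struct{}
        dialer net.Dialer
    }

    func newLimitedDialer(max int) *limitedDialer {
        return &limitedDialer{
            slots:  make(chan struct{}, max),
            dialer: net.Dialer{Timeout: 5 * time.Second},
        }
    }

    func (d *limitedDialer) Dial(network, addr string) (net.Conn, error) {
        d.slots <- struct{}{} // blocks once `max` connections are open
        conn, err := d.dialer.Dial(network, addr)
        if err != nil {
            <-d.slots // dial failed: return the slot
            return nil, err
        }
        return &limitedConn{Conn: conn, release: func() { <-d.slots }}, nil
    }

    // limitedConn gives its semaphore slot back exactly once on Close.
    type limitedConn struct {
        net.Conn
        once    sync.Once
        release func()
    }

    func (c *limitedConn) Close() error {
        err := c.Conn.Close()
        c.once.Do(c.release)
        return err
    }

    func main() {
        d := newLimitedDialer(1000) // illustrative per-pod cap
        conn, err := d.Dial("tcp", "downstream.example:8080")
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected via", conn.LocalAddr())
    }

A cap like this bounds how many conntrack entries one misbehaving client can create, which is the failure mode described above; it does not protect against other pods on the same node filling the shared table.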