On Fri, Dec 5, 2014 at 7:20 PM, Daniel Lieberman <dlieber...@bitpusher.com> wrote:
> On Dec 5, 2014, at 5:21 AM, Baptiste <bed...@gmail.com> wrote:
>>
>> On Thu, Dec 4, 2014 at 11:50 PM, Daniel Lieberman
>> <dlieber...@bitpusher.com> wrote:
>>> We have a situation where our app servers sometimes get into a bad state,
>>> and hitting a working server is more important than enforcing persistence.
>>> Generally the number of connections to a bad server grows rapidly, so we've
>>> set a maxconn value on the server line which effectively takes a server out
>>> of the pool when the bad state occurs.
>>>
>>> If we fill up the connection slots, the server is almost certainly bad, so
>>> we'd rather not queue at all. Since maxqueue 0 means unlimited, it looks
>>> like the minimum queue size is 1. Is that right? Is there any way to
>>> enforce a redispatch whenever we're at maxconn, without any connections
>>> getting queued?
>>>
>>> Thanks,
>>> -Daniel
>>
>> Hi Daniel,
>>
>> We can do this :)
>> I just need to know how you do persistence currently.
>> Please send us your simplest frontend and backend configuration.
>>
>> Baptiste
>
> We do cookie-based persistence, but also use balance source to keep backends
> consistent for browsers which don't support cookies (relevant for a
> significant fraction of the mobile users of this app). (In our case,
> switching app servers results in an annoying UI quirk, but doesn't break
> the session.)
>
> Here's one of the relevant fe/be configs (lightly sanitized):
>
> frontend service1
>     bind 1.2.3.4:80
>     bind 1.2.3.4:81 accept-proxy
>     bind-process 1
>     default_backend service1
>
> backend service1
>     bind-process 1
>     balance source
>     hash-type consistent wt6 avalanche
>     option forwardfor
>     option http-server-close
>     option http-pretend-keepalive
>     option httplog
>     option httpchk GET /healthCheck.htm HTTP/1.1\r\nHost:\ example.com
>
>     cookie SERVERID insert indirect
>
>     server app1 app1:8080 cookie app1 maxconn 25 maxqueue 5 weight 100 check
>     server app2 app2:8080 cookie app2 maxconn 25 maxqueue 5 weight 100 check
>     [and many more app servers]
>
> Thanks,
> -Daniel
Hi Daniel,

Here is my proposition: in your frontend, you monitor the cookie and the
number of established connections to the corresponding server, and you
switch to another farm with another algorithm when that server is full.
This farm will choose another server, and a new cookie will be generated,
compatible with the service one.

That said, there may be collisions (the round robin algorithm could
redirect you to the server already chosen by the source IP hash). Second
issue: if the client doesn't send any cookie, then it will bypass the
rules :/

An alternative to the approach below would be to use a use-server rule in
the service1 backend, but it would have the same limitation as above, plus
a snowball effect, since all the traffic from a full server would be forced
to go to a single alternative one.

frontend service1
    bind 1.2.3.4:80
    bind 1.2.3.4:81 accept-proxy
    bind-process 1
    use_backend bk_roundrobin if { req.cook(SERVERID) app1 } { srv_conn(service1/app1) ge 25 }
    use_backend bk_roundrobin if { req.cook(SERVERID) app2 } { srv_conn(service1/app2) ge 25 }
    default_backend service1

backend service1
    bind-process 1
    balance source
    hash-type consistent wt6 avalanche
    option forwardfor
    option http-server-close
    option http-pretend-keepalive
    option httplog
    option httpchk GET /healthCheck.htm HTTP/1.1\r\nHost:\ example.com
    cookie SERVERID insert indirect
    server app1 app1:8080 cookie app1 maxconn 25 maxqueue 5 weight 100 check
    server app2 app2:8080 cookie app2 maxconn 25 maxqueue 5 weight 100 check

backend bk_roundrobin
    bind-process 1
    balance roundrobin
    option forwardfor
    option http-server-close
    option http-pretend-keepalive
    option httplog
    option httpchk GET /healthCheck.htm HTTP/1.1\r\nHost:\ example.com
    cookie SERVERID insert indirect
    server app1 app1:8080 cookie app1 maxconn 25 maxqueue 5 weight 100 check
    server app2 app2:8080 cookie app2 maxconn 25 maxqueue 5 weight 100 check
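For illustration, the use-server alternative mentioned above could look
roughly like the sketch below. This is an untested sketch, not a tested
configuration: the pairing of app1 with app2 as each other's overflow
target is an assumption chosen to make the snowball effect visible, and
the "ge 25" thresholds simply mirror the maxconn 25 on the server lines.

backend service1
    bind-process 1
    balance source
    hash-type consistent wt6 avalanche
    cookie SERVERID insert indirect
    # When app1 is full, every request persisted to app1 is forced onto a
    # single alternative server (app2) -- this concentration of overflow
    # traffic on one server is the snowball effect described above.
    use-server app2 if { req.cook(SERVERID) app1 } { srv_conn(service1/app1) ge 25 }
    use-server app1 if { req.cook(SERVERID) app2 } { srv_conn(service1/app2) ge 25 }
    server app1 app1:8080 cookie app1 maxconn 25 maxqueue 5 weight 100 check
    server app2 app2:8080 cookie app2 maxconn 25 maxqueue 5 weight 100 check

As above, a client that sends no SERVERID cookie bypasses these rules
entirely, which is why the two-farm approach is preferable.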