Hi All, when trying out rate limiting with a listen section using send-proxy/accept-proxy and nbproc, I get some strange behavior. When I hammer a page over SSL, I don't see a 503 page and the rate limiting seems to have no effect. However, if I then fetch an HTTP page from the same client, I get a 503 (so apparently I did trip the rate limiter; it just wasn't kicking in over HTTPS). If I then go back to HTTPS, I do see the 503 page.
I tried binding the frontend and backend to the same single process, but this doesn't seem to help. Is there a way to get reliable rate limiting with SSL running on multiple processes? The config I am using is as follows:

global
    daemon
    nbproc 4
    log 127.0.0.1 local1

defaults
    clitimeout 10s
    srvtimeout 10s
    timeout connect 10s

listen ssl-front
    mode tcp
    option tcplog
    log global
    bind-process 2 3 4
    bind 0.0.0.0:443 ssl crt /etc/haproxy/cert/wild.foo.com.pem
    server http 127.0.0.1:81 send-proxy

frontend http-in
    bind 127.0.0.1:81 accept-proxy
    bind 0.0.0.0:80 name non-ssl
    acl is_ssl dst_port 81
    reqadd X-SSL:\ Enabled if is_ssl
    bind-process 1
    stick-table type ip size 1000k expire 1m store gpc0,conn_rate(10s)
    acl source_is_abuser src_get_gpc0(http-in) gt 0
    acl source_is_serious_abuse src_conn_rate(http-in) gt 200
    tcp-request connection reject if source_is_serious_abuse
    tcp-request connection track-sc1 src if !source_is_abuser
    use_backend be_go-away if source_is_abuser
    option httplog
    option http-server-close
    option forwardfor
    log global
    mode http
    default_backend test_backend

backend test_backend
    bind-process 1
    stick-table type ip size 1000k expire 2m store conn_rate(10s)
    tcp-request content track-sc2 src
    acl conn_rate_abuse sc2_conn_rate gt 10
    acl mark_as_abuser sc1_inc_gpc0 gt 0
    tcp-request content reject if conn_rate_abuse mark_as_abuser
    mode http
    server web1 127.0.0.1:82

backend be_go-away
    mode http
    errorfile 503 /etc/haproxy-shared/errors/503rate.http
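For what it's worth, one thing I considered: with nbproc, each HAProxy process keeps its own private copy of every stick-table (tables are not shared between processes), so tracking done by a process handling HTTPS would not be visible to the others. In the config above, ssl-front is still on processes 2, 3 and 4, so the proxied SSL traffic may land on a different process than the one holding the table. A sketch of what I mean by pinning everything to one process (illustrative only, not a tested fix):

```haproxy
# Sketch: pin the SSL listener to the same single process as the
# frontend/backend doing the tracking, so all connections hit the
# same per-process stick-table copy.
listen ssl-front
    mode tcp
    option tcplog
    log global
    bind-process 1          # was "bind-process 2 3 4"
    bind 0.0.0.0:443 ssl crt /etc/haproxy/cert/wild.foo.com.pem
    server http 127.0.0.1:81 send-proxy
```

Of course this gives up the extra SSL processes, which is what nbproc was for in the first place.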