Hi,

Have you tried increasing the number of processes/threads?
I don't see nbthread or nbproc anywhere in your config.

Check out https://www.haproxy.com/blog/multithreading-in-haproxy/
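
For example, on your 16-thread production box, something like this in the
global section (a minimal sketch; adjust the thread count and CPU range to
the hardware):

    global
        # one thread per hardware thread, pinned to CPUs 0-15
        nbthread 16
        cpu-map auto:1/1-16 0-15

Note that recent 2.x releases already derive nbthread from the CPUs the
process is bound to, so it's worth checking what the running process
actually uses before tuning.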

BR,
Emerson


On Mon, Dec 12, 2022 at 02:49, Iago Alonso <[email protected]>
wrote:

> Hello,
>
> We are running a lot of load tests, and we have hit what we think is
> an artificial limit of some sort, or a parameter we are not taking
> into account (an HAProxy config setting, a kernel parameter…). We are
> wondering if there is a known limit on what a single HAProxy instance
> can process, or if someone has experienced something similar, since we
> are considering moving to bigger servers and don't know whether we
> would see a big difference.
>
> When load testing in production, we observe that we can sustain 200k
> connections and 10k rps with a load1 of about 10. Our maxsslrate and
> maxconnrate limits are maxed out, but we handle the requests fine and
> don't return 5xx. Once we increase the load just a bit, to 11k rps and
> about 205k connections, we start to return 5xx, and we rapidly back
> off, since these are tests against production.
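>
> (A quick way to watch those counters against their configured caps,
> assuming socat is installed, using the stats socket path from the
> config below:
>
>     echo "show info" | socat stdio /run/haproxy-admin.sock \
>       | grep -E '^(ConnRate|ConnRateLimit|SslRate|SslRateLimit)'
>
> ConnRateLimit and SslRateLimit should both report the 2500 set in the
> global section; if the incoming rate exceeds these caps, HAProxy stops
> accepting new connections until the rate drops again, which could
> explain the sharp degradation just above 10k rps.)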
>
> Production server specs:
> CPU: AMD Ryzen 7 3700X 8-Core Processor (16 threads)
> RAM: DDR4 64GB (2666 MT/s)
>
> When running a synthetic load test against staging, with k6 as our
> load generator, we are able to sustain 750k connections at 20k rps.
> The load generator ramps up over 120s to reach the 750k connections,
> as that's what we are trying to benchmark.
>
> Staging server specs:
> CPU: AMD Ryzen 5 3600 6-Core Processor (12 threads)
> RAM: DDR4 64GB (3200 MT/s)
>
> I've made a post about this on Discourse and was advised to also post
> here. That post includes screenshots of some of our Prometheus
> metrics.
>
> https://discourse.haproxy.org/t/theoretical-limits-for-a-haproxy-instance/8168
>
> Custom kernel parameters:
> net.ipv4.ip_local_port_range = 12768 60999
> net.nf_conntrack_max = 5000000
> fs.nr_open = 5000000
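>
> (One way to sanity-check that the running process actually picked up
> the fd headroom, assuming pgrep -n returns the newest haproxy pid in a
> master-worker setup:
>
>     grep 'open files' /proc/$(pgrep -n haproxy)/limits
>
> With maxconn 2000000 and roughly two fds per proxied connection, the
> fs.nr_open of 5000000 leaves some margin.)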
>
> HAProxy config:
> global
>     log /dev/log len 65535 local0 warning
>     chroot /var/lib/haproxy
>     stats socket /run/haproxy-admin.sock mode 660 level admin
>     user haproxy
>     group haproxy
>     daemon
>     maxconn 2000000
>     maxconnrate 2500
>     maxsslrate 2500
>
> defaults
>     log     global
>     option  dontlognull
>     timeout connect 10s
>     timeout client  120s
>     timeout server  120s
>
> frontend stats
>     mode http
>     bind *:8404
>     http-request use-service prometheus-exporter if { path /metrics }
>     stats enable
>     stats uri /stats
>     stats refresh 10s
>
> frontend k8s-api
>     bind *:6443
>     mode tcp
>     option tcplog
>     timeout client 300s
>     default_backend k8s-api
>
> backend k8s-api
>     mode tcp
>     option tcp-check
>     timeout server 300s
>     balance leastconn
>     default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 500 maxqueue 256 weight 100
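>     # note: with three masters this caps the backend at 1500 active
>     # connections (3 x maxconn 500), queueing at most 256 more per server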
>     server master01 x.x.x.x:6443 check
>     server master02 x.x.x.x:6443 check
>     server master03 x.x.x.x:6443 check
>     retries 0
>
> frontend k8s-server
>     bind *:80
>     mode http
>     http-request add-header X-Forwarded-Proto http
>     http-request add-header X-Forwarded-Port 80
>     default_backend k8s-server
>
> backend k8s-server
>     mode http
>     balance leastconn
>     option forwardfor
>     default-server inter 10s downinter 5s rise 2 fall 2 check
>     server worker01a x.x.x.x:31551 maxconn 200000
>     server worker02a x.x.x.x:31551 maxconn 200000
>     server worker03a x.x.x.x:31551 maxconn 200000
>     server worker04a x.x.x.x:31551 maxconn 200000
>     server worker05a x.x.x.x:31551 maxconn 200000
>     server worker06a x.x.x.x:31551 maxconn 200000
>     server worker07a x.x.x.x:31551 maxconn 200000
>     server worker08a x.x.x.x:31551 maxconn 200000
>     server worker09a x.x.x.x:31551 maxconn 200000
>     server worker10a x.x.x.x:31551 maxconn 200000
>     server worker11a x.x.x.x:31551 maxconn 200000
>     server worker12a x.x.x.x:31551 maxconn 200000
>     server worker13a x.x.x.x:31551 maxconn 200000
>     server worker14a x.x.x.x:31551 maxconn 200000
>     server worker15a x.x.x.x:31551 maxconn 200000
>     server worker16a x.x.x.x:31551 maxconn 200000
>     server worker17a x.x.x.x:31551 maxconn 200000
>     server worker18a x.x.x.x:31551 maxconn 200000
>     server worker19a x.x.x.x:31551 maxconn 200000
>     server worker20a x.x.x.x:31551 maxconn 200000
>     server worker01an x.x.x.x:31551 maxconn 200000
>     server worker02an x.x.x.x:31551 maxconn 200000
>     server worker03an x.x.x.x:31551 maxconn 200000
>     retries 0
>
> frontend k8s-server-https
>     bind *:443 ssl crt /etc/haproxy/certs/
>     mode http
>     http-request add-header X-Forwarded-Proto https
>     http-request add-header X-Forwarded-Port 443
>     http-request del-header X-SERVER-SNI
>     http-request set-header X-SERVER-SNI %[ssl_fc_sni] if { ssl_fc_sni -m found }
>     http-request set-var(txn.fc_sni) hdr(X-SERVER-SNI) if { hdr(X-SERVER-SNI) -m found }
>     http-request del-header X-SERVER-SNI
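>     # the four lines above strip any client-supplied X-SERVER-SNI, capture
>     # the TLS SNI into txn.fc_sni via a temporary header, then strip the
>     # header again; the backend re-uses it with "sni var(txn.fc_sni)"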
>     default_backend k8s-server-https
>
> backend k8s-server-https
>     mode http
>     balance leastconn
>     option forwardfor
>     default-server inter 10s downinter 5s rise 2 fall 2 check no-check-ssl
>     server worker01a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker02a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker03a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker04a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker05a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker06a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker07a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker08a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker09a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker10a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker11a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker12a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker13a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker14a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker15a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker16a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker17a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker18a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker19a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker20a x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker01an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker02an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     server worker03an x.x.x.x:31445 ssl ca-file /etc/haproxy/ca/ca.crt sni var(txn.fc_sni) maxconn 200000
>     retries 0
>
> frontend k8s-nfs-monitor
>     bind *:8080
>     mode http
>     monitor-uri /health_nfs_cluster
>     acl k8s_server_down nbsrv(k8s-server) le 2
>     acl nfs_down nbsrv(nfs) lt 1
>     monitor fail if nfs_down || k8s_server_down
>
> backend nfs
>     mode tcp
>     default-server inter 5s downinter 2s rise 1 fall 2
>     server nfs01 x.x.x.x:2049 check
>
> frontend k8s-cluster-monitor
>     bind *:8081
>     mode http
>     monitor-uri /health_cluster
>     acl k8s_server_down nbsrv(k8s-server) le 2
>     monitor fail if k8s_server_down
>
> Thanks in advance.
>
>
