Hi all,

I am trying to implement request rate limiting to protect our servers from
too many requests. We were able to get this working correctly in the
frontend section, but due to the configuration we use in our data center
for different services, we need to apply the same limit in the backend
section.

This is the current configuration I tried in order to cap the entire
backend section at 1000 RPS:

backend HTTP-be
        http-request track-sc2 fe_id
        stick-table type integer size 1m expire 60s store http_req_rate(100ms),gpc0,gpc0_rate(100ms)
        acl mark_seen sc2_inc_gpc0 ge 0
        acl outside_limit sc2_gpc0_rate() gt 100
        http-request deny deny_status 429 if mark_seen outside_limit
        server my-server <ip>:80
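
Since the table already stores http_req_rate(100ms), I also considered
checking that fetch directly instead of the gpc0/gpc0_rate pair. This is
only an untested sketch of the idea (same threshold: more than 100
requests per 100ms window, i.e. above 1000 RPS):

backend HTTP-be
        http-request track-sc2 fe_id
        stick-table type integer size 1m expire 60s store http_req_rate(100ms)
        # more than 100 requests observed in the last 100ms window (~1000 RPS)
        acl outside_limit sc2_http_req_rate gt 100
        http-request deny deny_status 429 if outside_limit
        server my-server <ip>:80

I have not benchmarked this variant yet, so all the numbers below are for
the gpc0 based configuration above.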

But running "wrk -t 1 -c 1 <args>" gives:
         RPS: 19.60 (Total requests: 18669 Good: 100 Errors: 18569)
with the following haproxy stick-table entry:
        0x2765d1c: key=3 use=1 exp=60000 gpc0=7270 gpc0_rate(100)=364 http_req_rate(100)=364

and "wrk -t 20 -c 1000 <args>" gives:
        RPS: 6.62 (Total requests: 1100022 Good: 100 Errors: 1099922)
with the following haproxy stick-table entry:
        0x2765d1c: key=3 use=94 exp=59999 gpc0=228218 gpc0_rate(100)=7229 http_req_rate(100)=7229
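
(For reference, the stick-table entries above are "show table" dumps taken
over the stats socket, with something like:

        echo "show table HTTP-be" | socat stdio /var/run/haproxy.sock

where the socket path is adjusted to the local setup.)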

As seen above, only a total of 100 requests succeeded, not the expected
100 * 10 * 20 = 20K (100 requests per 100ms window, 10 windows per second,
over the 20 second run). gpc0_rate does not *seem* to get reset to zero at
the configured interval (100ms), perhaps because of a mistake in the
configuration above.

Another setting that we tried was:

backend HTTP-be
        # over 1000 new sessions per second on this backend
        acl too_fast be_sess_rate gt 1000
        # hold new connections in content inspection for up to 1s
        tcp-request inspect-delay 1s
        # under the limit: accept immediately
        tcp-request content accept if ! too_fast
        # over the limit: accept only once the inspect-delay expires,
        # i.e. excess connections are delayed rather than rejected
        tcp-request content accept if WAIT_END
        server my-server <ip>:80
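
For reference on the wrk runs below: at an exact 1000 RPS cap, a 15 second
run works out to roughly 1000 * 15 = 15,000 total requests.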

While this behaved much better than the first approach, I still see a lot
of discrepancy depending on the traffic volume:
wrk -t 40 -c 200 -d 15s <url>
RPS: 841.14 (Total requests: 12634 Good: 12634 Errors: 0 Time: 15.02)

wrk -t 40 -c 400 -d 15s <url>
RPS: 1063.78 (Total requests: 15978 Good: 15978 Errors: 0 Time: 15.02)

wrk -t 40 -c 800 -d 15s <url>
RPS: 1123.03 (Total requests: 16868 Good: 16868 Errors: 0 Time: 15.02)

wrk -t 40 -c 1600 -d 15s <url>
RPS: 1382.98 (Total requests: 20883 Good: 20883 Errors: 0 Time: 15.10)

The last run is over the 1000 RPS target by about 38%.

Could someone point out what I am doing wrong, or suggest how to achieve
this? If possible I would prefer the first, stick-table based approach,
since it provides finer time granularity.

Thanks for any help.

Regards,
- Krishna
