Re: Lua: processing expired on timeout when using core.msleep

2022-08-23 Thread Bren
--- Original Message ---
On Tuesday, August 23rd, 2022 at 4:26 AM, Christopher Faulet wrote:

> It could be good to share your config, at least the part calling your lua 
> script. 

I think these are the only relevant bits:

tcp-request inspect-delay 10s
http-request lua.delay_request 15000 3

I'm delaying requests by a random number of ms between 15000 and 3.
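
For context, the action is just a thin wrapper around core.msleep. Roughly (a 
simplified sketch rather than the exact script, treating the two arguments as 
the min/max delay in ms; names are illustrative):

-- delay_request.lua (sketch)
-- The two numbers on the "http-request lua.delay_request" line are passed to
-- the action as string arguments because nb_args is set to 2 below.
function delay_request(txn, min_ms, max_ms)
  local delay = math.random(tonumber(min_ms), tonumber(max_ms))
  core.msleep(delay)
end

core.register_action('delay_request', {'http-req'}, delay_request, 2)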

> But this error can be triggered when the inspect-delay for tcp rules
> evaluation expires.

Perhaps this is what is happening?

Bren



Server state file: port doesn't change after config update

2022-08-22 Thread Bren
Hello,

We've been seeing another minor issue I've been meaning to ask about. We're 
using a server state file:

server-state-file /var/lib/haproxy/server_state

In my systemd config for haproxy I've added a couple lines to save the server 
state on reload/stop:

ExecReload=/usr/local/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS
ExecReload=/opt/bin/save_server_state.sh
ExecReload=/bin/kill -USR2 $MAINPID
ExecStop=/opt/bin/save_server_state.sh

The script simply runs:

echo "show servers state" | socat /var/run/haproxy/admin.sock - > \
  /var/lib/haproxy/server_state

I've noticed that when I change the port on a backend server and reload, 
haproxy does not update the port for that server. I have to shut down haproxy, 
delete the state file, then start it back up for it to update the port 
(changing the port and renaming the server, reloading, then renaming it back 
and reloading again works too).

Other config changes are picked up fine; for example, if I add "disabled" to a 
server line and reload, haproxy updates the status of the server. It seems like 
haproxy isn't looking at the port when deciding whether to update a server's 
config when a state file is in use.

Is this expected behavior?

Bren



Lua: processing expired on timeout when using core.msleep

2022-08-22 Thread Bren
Greetings,

This is a minor issue I've been meaning to ask about. I have a simple Lua 
script that runs core.msleep on certain requests for a random number of ms to 
slow them down. I've noticed this in the logs:

[ALERT]    (3650884) : Lua function 'delay_request': aborting Lua processing on 
expired timeout.

I've always been under the impression that a sleep shouldn't cause any 
timeouts. Both tune.lua.session-timeout and tune.lua.service-timeout say:

If the Lua does a sleep, the sleep is not taken in account.

Am I missing something?

Bren



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-22 Thread Bren
--- Original Message ---
On Monday, August 22nd, 2022 at 7:03 AM, William Lallemand wrote:

> I'm going to issue a 2.6.4 today with the patch.

Just rolled out 2.6.4 and everything appears to be working as expected now. 
Thanks!



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-20 Thread Bren
--- Original Message ---
On Saturday, August 20th, 2022 at 3:43 PM, Vincent Bernat wrote:

> Do you have something here too?

Nope. In fact I just removed that from the build.

> This does not match the file shipped by HAProxy, but this may explain
> why you also run into this bug.

What ships with the source is:

Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid" 
"EXTRAOPTS=-S /run/haproxy-master.sock"

I'm using the config for this:

stats socket /run/haproxy/admin.sock user haproxy group haproxy mode 660 level 
admin

So I probably removed that last part.



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-20 Thread Bren
--- Original Message ---
On Saturday, August 20th, 2022 at 9:50 AM, Willy Tarreau wrote:

> Did you notice if it failed to serve anything and ate CPU from start or
> if it completely started and only then ate CPU ?

It appears to start up normally according to the logs and then it uses 100% 
CPU. I can get to the stats pages and it serves traffic as expected.

I noticed that "systemctl status haproxy" always shows:

Active: activating (start)

Then after a couple of minutes it restarts the process. It does this 
continuously.

Here is the systemd config. It's the unit file shipped with the source, with 
some modifications:

[Unit]
Description=HAProxy Load Balancer
After=network-online.target rsyslog.service
Wants=network-online.target rsyslog.service

[Service]
EnvironmentFile=-/etc/default/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
ExecStartPre=/usr/local/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS
ExecStart=/usr/local/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS
# Checks the config first.
ExecReload=/usr/local/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS
# This saves off the server state before reload which haproxy loads on
# (re)start.
ExecReload=/opt/bin/save_server_state.sh
ExecReload=/bin/kill -USR2 $MAINPID
# This should save server state on stop.
ExecStop=/opt/bin/save_server_state.sh
ExecStopPost=/usr/local/bin/systemd-email-notifier haproxy
KillMode=mixed
Restart=always
SuccessExitStatus=143
Type=notify

[Install]
WantedBy=multi-user.target


We have a staging server that I always deploy to first, so I can test anything 
you want there without affecting production.

Bren



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-20 Thread Bren
--- Original Message ---
On Friday, August 19th, 2022 at 3:16 PM, Ionel GARDAIS wrote:

> Hi Willy,
> 
> I had to rollback to 2.6.2

I also had to roll back. I compile from source and push out the binary with 
Ansible, which hung on the reload step. I observed an haproxy process running 
as root using 100% CPU. It never restarted; I had to kill the processes.

When I started haproxy back up it began using 100% CPU again so I rolled back. 
This is on Debian 11. No "expose-fd listeners" in the config and no unusual log 
entries that I can see.

HAProxy version 2.6.3-76f187b 2022/08/19 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2027.
Known bugs: http://www.haproxy.org/bugs/bugs-2.6.3.html
Running on: Linux 5.10.0-14-amd64 #1 SMP Debian 5.10.113-1 (2022-04-29) x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1 
USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_PROMEX=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS -DDEBUG_DONT_SHARE_POOLS 
-DDEBUG_POOL_INTEGRITY

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 +PCRE2_JIT 
+POLL +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY 
+LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO +OPENSSL +LUA +ACCEPT4 
-CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES 
-WURFL +SYSTEMD -OBSOLETE_LINKER +PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT 
-QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=4).
Built with OpenSSL version : OpenSSL 1.1.1n  15 Mar 2022
Running on OpenSSL version : OpenSSL 1.1.1n  15 Mar 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with the Prometheus exporter as a service
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE2 version : 10.36 2020-12-04
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 10.2.1 20210110

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace



Re: Per-client queue for rate limiting?

2022-02-01 Thread Bren
--- Original Message ---

On Sunday, January 30th, 2022 at 2:18 AM, Orlando Della Casa wrote:

> I’d like to put a rate limit on incoming HTTP requests, but without sending 
> 429 errors.

You could simply delay the request with Lua. We use a stick table to track 
requests and if an IP exceeds the limit, a Lua function gets called that delays 
the request for a random amount of time. You could probably set a var with the 
current req rate and calculate a delay based on that.

Here's a simple example:

-- delay_request.lua
function delay_request (txn)
  local http_req_rate = txn:get_var('txn.http_req_rate') or 0
  -- calculate your delay somehow; for example, scale it with the request
  -- rate (the numbers here are just an illustration)
  local delay_ms = math.min(http_req_rate * 100, 10000)
  core.msleep(delay_ms)
end

core.register_action('delay_request', {'http-req'}, delay_request, 0)

# haproxy.cfg
global
  lua-load /path/to/delay_request.lua

frontend fe
  stick-table type ipv6 size 1m expire 1m store http_req_rate(1m)
  http-request track-sc0 src
  acl limit_exceeded src_http_req_rate() gt 60
  http-request set-var(txn.http_req_rate) src_http_req_rate()
  http-request lua.delay_request if limit_exceeded
  ...

Bren



Re: Stick table counter not working after upgrade to 2.2.11

2021-03-22 Thread Bren
‐‐‐ Original Message ‐‐‐

On Monday, March 22nd, 2021 at 3:06 PM, Sander Klein wrote:

> Hi,
>
> I have upgraded to haproxy 2.2.11 today and it seems like my stick table 
> counter is not working anymore.

I was going to upgrade to 2.2.11 soon, so I tested this quickly and can confirm 
that counters no longer decrement over time. I tested this using the 
haproxy:2.2.11 Docker image and a standard stick table:

frontend fe-test
  http-request track-sc0 src table be-test

backend be-test
  stick-table type ipv6 size 1m expire 24h store http_req_rate(2s)

Bren



Unexpectedly high memory usage when loading large maps

2021-03-16 Thread Bren
Hello,

(I tried subscribing to this list a few times but it appears that subscribing 
isn't working. Any responses will probably have to be CCed to me for now. Thank 
you.)

I've been testing adding an x-geoip-country header as shown in this blog post:

https://www.haproxy.com/blog/bot-protection-with-haproxy/

I generated the country_iso_code.map file which ended up being about 3.4 
million records and 64MB. It looks like this:

x.x.x.x/24 YY

Where x.x.x.x is the IP block and YY is the country code of course.

Then I loaded it into my config like this:

http-request set-header x-geoip-country 
%[src,map_ip(/usr/local/etc/haproxy/country_iso_code.map)]

By the way, the blog post appears to be incorrect here. It says to use map 
rather than map_ip, which doesn't work:

http-request set-header x-geoip-country 
%[src,map(/etc/hapee-1.8/country_iso_code.map)]

Anyway, what shocked me is that haproxy now takes several seconds to load and 
uses about 860MB of memory. Without the "http-request set-header" line haproxy 
reloads nearly instantly and uses < 20MB.

This seems excessive to me, but I can imagine that it's due to haproxy loading 
the map data into memory in a way that provides very fast lookups at the 
expense of RAM. I just wanted to verify that this is the case before pushing it 
into production. I couldn't find any mention anywhere of loading large maps 
into haproxy.

I'm testing this with the haproxy:2.2-alpine Docker image. Here's the 
stripped-down production config I'm using:

# vim: syntax=haproxy
global
log stdout format raw local0
maxconn 2
ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
stats socket :8100
stats timeout 1h

defaults
balance roundrobin
backlog 1
log global
mode http

option contstats
option dontlog-normal
option dontlognull
option log-separate-errors
option httplog

timeout client 30s
timeout client-fin 30s
timeout connect 5s
timeout http-keep-alive 5s
timeout http-request 5s
timeout queue 10s
timeout server 30s
timeout tarpit 5s
timeout tunnel 30s

errorfile 400 /usr/local/etc/haproxy/errors/400.http
errorfile 403 /usr/local/etc/haproxy/errors/403.http
errorfile 408 /usr/local/etc/haproxy/errors/408.http
errorfile 500 /usr/local/etc/haproxy/errors/500.http
errorfile 502 /usr/local/etc/haproxy/errors/502.http
errorfile 503 /usr/local/etc/haproxy/errors/503.http
errorfile 504 /usr/local/etc/haproxy/errors/504.http

frontend fe-main
bind :8101 ssl crt /usr/local/etc/haproxy/server.pem alpn h2,http/1.1

maxconn 1
no option dontlog-normal
option http-buffer-request
option forwardfor
tcp-request inspect-delay 10s

capture request header Host len 64
capture request header X-Forwarded-For len 64
capture request header Referer len 256

http-request set-header x-geoip-country 
%[src,map_ip(/usr/local/etc/haproxy/country_iso_code.map)]

default_backend be-test

backend be-test
server web web:8200 maxconn 128


Any input on this would be much appreciated!

Thanks,
Bren