Re: [PATCH] MINOR : converter: add param converter

2022-12-13 Thread Willy Tarreau
On Wed, Dec 14, 2022 at 12:19:59AM -0700, Thayne McCombs wrote:
> Add a converter that extracts a parameter from a string of delimited
> key/value pairs.

Great, now merged. Thank you!
Willy



[PATCH] MINOR : converter: add param converter

2022-12-13 Thread Thayne McCombs
Add a converter that extracts a parameter from a string of delimited
key/value pairs.

Fixes: #1697
---
 doc/configuration.txt         | 26 ++++++++++++++
 reg-tests/converter/param.vtc | 80 ++++++++++++++++++++++++++++++++++++++++++
 src/sample.c                  | 64 ++++++++++++++++++++++++++++++++++++
 3 files changed, 170 insertions(+)
 create mode 100644 reg-tests/converter/param.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b68..0cc2bdee3b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17702,6 +17702,32 @@ or()
   This prefix is followed by a name. The separator is a '.'. The name may only
   contain characters 'a-z', 'A-Z', '0-9', '.' and '_'.
 
+param(<name>,[<delim>])
+  This extracts the first occurrence of the parameter <name> in the input
+  string where parameters are delimited by <delim>, which defaults to "&",
+  and the name and value of the parameter are separated by a "=". If there
+  is no "=" and value before the end of the parameter segment, it is treated
+  as equivalent to a value of an empty string.
+
+  This can be useful for extracting parameters from a query string, or
+  possibly an x-www-form-urlencoded body. In particular, `query,param(<name>)`
+  can be used as an alternative to `urlp(<name>)`; the difference is that
+  "param" only splits on "&" (or the given <delim>), whereas "urlp" also uses
+  "?" and ";" as delimiters.
+
+  Note that this converter doesn't do anything special with url encoded
+  characters. If you want to decode the value, you can use the url_dec
+  converter on the output. If the name of the parameter in the input might
+  contain encoded characters, you'll probably want to normalize the input
+  before calling "param". This can be done using "http-request normalize-uri",
+  in particular the percent-decode-unreserved and percent-to-uppercase
+  options.
+
+  Example :
+      str(a=b&c=d&e=r),param(a)     # b
+      str(a&b=c),param(a)           # ""
+      str(a=&b&c=a),param(b)        # ""
+      str(a=1;b=2;c=4),param(b,;)   # 2
+      query,param(redirect_uri),url_dec()
+
 port_only
   Converts a string which contains a Host header value into an integer by
   returning its port.
diff --git a/reg-tests/converter/param.vtc b/reg-tests/converter/param.vtc
new file mode 100644
index 00..1633603823
--- /dev/null
+++ b/reg-tests/converter/param.vtc
@@ -0,0 +1,80 @@
+varnishtest "param converter Test"
+
+feature ignore_unknown_macro
+
+server s1 {
+   rxreq
+   txresp -hdr "Connection: close"
+} -repeat 10 -start
+
+haproxy h1 -conf {
+   defaults
+   mode http
+   timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+   timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+   timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+   frontend fe
+   bind "fd@${fe}"
+
+   ### requests
+   http-request set-var(txn.query) query
+   http-response set-header Found %[var(txn.query),param(test)] if { var(txn.query),param(test) -m found }
+
+   default_backend be
+
+   backend be
+   server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+   txreq -url "/foo/?test=1=4"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == "1"
+
+   txreq -url "/?a=1=4=34"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == "34"
+
+   txreq -url "/?test=bar"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == "bar"
+
+   txreq -url "/?a=b=d"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == ""
+
+   txreq -url "/?a=b=t=d"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == "t"
+
+   txreq -url "/?a=b=d"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == ""
+
+   txreq -url "/?test="
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == ""
+
+   txreq -url "/?a=b"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == ""
+
+   txreq -url "/?testing=123"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == ""
+
+   txreq -url "/?testing=123&test=4"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.found == "4"
+} -run
diff --git a/src/sample.c b/src/sample.c
index 62a372b81c..7a612fc033 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -2607,6 +2607,69 @@ static int sample_conv_word(const struct arg *arg_p, struct sample *smp, void *p
 	return 1;
 }
 
+static int sample_conv_param_check(struct arg *arg, struct sample_conv *conv,
+                                   const char *file, int line, char **err)
+{
+	if (arg[1].type == ARGT_STR && arg[1].data.str.data != 1) {
+		memprintf(err, "Delimiter must be exactly 1 character.");
+		return 0;
+	}
+
+	return 1;
+}
+
+static int sample_conv_param(const struct arg *arg_p, struct sample *smp, void *private)
+{
+	char *pos, *end, *pend, *equal;
+	char delim = '&';
+	const char *name = 

Re: Theoretical limits for a HAProxy instance

2022-12-13 Thread Willy Tarreau
Hi,

On Tue, Dec 13, 2022 at 03:33:58PM +0100, Iago Alonso wrote:
> Hi,
> 
> We do hit our defined max ssl/conn rates, but given the spare
> resources, we don't expect to suddenly return 5xx.

What bothers me is that once this limit is reached there's no more
connection accepted by haproxy so you should indeed not see such
errors. What element produces these errors ? Since you've enabled
logging, can you check them in your logs and figure whether they're
sent by the server, by haproxy or none (it might well be the load
generator translating connection timeouts to 5xx for user reporting).

If the errors are produced by haproxy, then their exact value and the
termination flags are very important as they'll indicate where the
problem is.

Another thing you'll observe in your logs are the server's response
time. It's possible that your request rate is getting very close to
the server's limits.

Among other limitations the packet rate and network bandwidth might
represent a limit. For example, let's say you're requesting 10kB
objects. At 10k/s it's just less than a gigabit/s, at 11k/s it doesn't
pass anymore.
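
As a rough back-of-the-envelope check (payload only; HTTP headers and TCP/IP
framing typically add another 10-20% on the wire):

  10 kB x 8 bits x 10,000 req/s ~= 0.80 Gbit/s  -> already close to a 1 Gbit/s link
  10 kB x 8 bits x 11,000 req/s ~= 0.88 Gbit/s  -> over it once overhead is added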

When you look at the haproxy stats page, the cumulated live network
bandwidth is reported at the top, it might give you an indication of
a possible limit.

But in any case, stats on the status codes and termination codes in
logs would be extremely useful. Note that you can do this natively
using halog (I seem to remember it's halog -st and halog -tc, but
please double-check, and in any case, cut|sort|uniq -c always works).
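
For example, something along these lines (log path and field number are only
placeholders to adapt to your setup, and as said above the exact halog flags
are worth double-checking):

  halog -st < /var/log/haproxy.log   # count of requests per HTTP status code
  halog -tc < /var/log/haproxy.log   # count of requests per termination code
  # without halog: with the default HTTP log format plus the syslog prefix,
  # the status code is usually the 11th field, adjust as needed:
  awk '{print $11}' /var/log/haproxy.log | sort | uniq -c | sort -rn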

Regards,
Willy



Re: Theoretical limits for a HAProxy instance

2022-12-13 Thread Emerson Gomes
Hi,

Have you tried increasing the number of processes/threads?
I don't see any nbthread or nbproc in your config.

Check out https://www.haproxy.com/blog/multithreading-in-haproxy/
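
For example, a minimal sketch for the global section (the value is purely
illustrative and should match the cores you want HAProxy to use; note that
nbproc was removed in 2.5, and recent versions already default nbthread to
the number of CPUs the process is bound to):

   global
       # illustration only: set the thread count explicitly
       nbthread 16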

BR.,
Emerson


On Mon, Dec 12, 2022 at 02:49, Iago Alonso wrote:

> Hello,
>
> We are performing a lot of load tests, and we hit what we think is an
> artificial limit of some sort, or a parameter that we are not taking
> into account (HAProxy config setting, kernel parameter…). We are
> wondering if there’s a known limit on what HAProxy is able to process,
> or if someone has experienced something similar, as we are thinking
> about moving to bigger servers, and we don’t know if we will observe a
> big difference.
>
> When trying to perform the load test in production, we observe that we
> can sustain 200k connections, and 10k rps, with a load1 of about 10.
> The maxsslrate and maxsslconn are maxed out, but we handle the
> requests fine, and we don’t return 5xx. Once we increase the load just
> a bit and hit 11k rps and about 205k connections, we start to return
> 5xx and we rapidly decrease the load, as these are tests against
> production.
>
> Production server specs:
> CPU: AMD Ryzen 7 3700X 8-Core Processor (16 threads)
> RAM: DDR4 64GB (2666 MT/s)
>
> When trying to perform a load test with synthetic tests using k6 as
> our load generator against staging, we are able to sustain 750k
> connections, with 20k rps. The load generator has a ramp-up time of
> 120s to achieve the 750k connections, as that’s what we are trying to
> benchmark.
>
> Staging server specs:
> CPU: AMD Ryzen 5 3600 6-Core Processor (12 threads)
> RAM: DDR4 64GB (3200 MT/s)
>
> I've made a post about this on discourse, and I got the suggestion to
> post here. In said post, I've included screenshots of some of our
> Prometheus metrics.
>
> https://discourse.haproxy.org/t/theoretical-limits-for-a-haproxy-instance/8168
>
> Custom kernel parameters:
> net.ipv4.ip_local_port_range = "12768 60999"
> net.nf_conntrack_max = 500
> fs.nr_open = 500
>
> HAProxy config:
> global
> log /dev/log len 65535 local0 warning
> chroot /var/lib/haproxy
> stats socket /run/haproxy-admin.sock mode 660 level admin
> user haproxy
> group haproxy
> daemon
> maxconn 200
> maxconnrate 2500
> maxsslrate 2500
>
> defaults
> log global
> option  dontlognull
> timeout connect 10s
> timeout client  120s
> timeout server  120s
>
> frontend stats
> mode http
> bind *:8404
> http-request use-service prometheus-exporter if { path /metrics }
> stats enable
> stats uri /stats
> stats refresh 10s
>
> frontend k8s-api
> bind *:6443
> mode tcp
> option tcplog
> timeout client 300s
> default_backend k8s-api
>
> backend k8s-api
> mode tcp
> option tcp-check
> timeout server 300s
> balance leastconn
> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s
> maxconn 500 maxqueue 256 weight 100
> server master01 x.x.x.x:6443 check
> server master02 x.x.x.x:6443 check
> server master03 x.x.x.x:6443 check
> retries 0
>
> frontend k8s-server
> bind *:80
> mode http
> http-request add-header X-Forwarded-Proto http
> http-request add-header X-Forwarded-Port 80
> default_backend k8s-server
>
> backend k8s-server
> mode http
> balance leastconn
> option forwardfor
> default-server inter 10s downinter 5s rise 2 fall 2 check
> server worker01a x.x.x.x:31551 maxconn 20
> server worker02a x.x.x.x:31551 maxconn 20
> server worker03a x.x.x.x:31551 maxconn 20
> server worker04a x.x.x.x:31551 maxconn 20
> server worker05a x.x.x.x:31551 maxconn 20
> server worker06a x.x.x.x:31551 maxconn 20
> server worker07a x.x.x.x:31551 maxconn 20
> server worker08a x.x.x.x:31551 maxconn 20
> server worker09a x.x.x.x:31551 maxconn 20
> server worker10a x.x.x.x:31551 maxconn 20
> server worker11a x.x.x.x:31551 maxconn 20
> server worker12a x.x.x.x:31551 maxconn 20
> server worker13a x.x.x.x:31551 maxconn 20
> server worker14a x.x.x.x:31551 maxconn 20
> server worker15a x.x.x.x:31551 maxconn 20
> server worker16a x.x.x.x:31551 maxconn 20
> server worker17a x.x.x.x:31551 maxconn 20
> server worker18a x.x.x.x:31551 maxconn 20
> server worker19a x.x.x.x:31551 maxconn 20
> server worker20a x.x.x.x:31551 maxconn 20
> server worker01an x.x.x.x:31551 maxconn 20
> server worker02an x.x.x.x:31551 maxconn 20
> server worker03an x.x.x.x:31551 maxconn 20
> retries 0
>
> frontend k8s-server-https
> bind *:443 ssl crt /etc/haproxy/certs/
> mode http
> http-request add-header X-Forwarded-Proto https
> http-request add-header X-Forwarded-Port 443
> http-request del-header X-SERVER-SNI
> http-request set-header X-SERVER-SNI 

Re: [PATCH] MINOR : converter: add param converter

2022-12-13 Thread Tim Düsterhus

Thayne,

On 12/9/22 07:22, Thayne McCombs wrote:

Ok. I think this patch addresses all of your feedback. Thanks for
looking at it.


It appears that your mailer mangled the patch. It looks incorrectly 
formatted in my MUA and git fails to apply it. I recommend either using 
'git send-email' or just attaching the patch as a regular attachment. 
Both should be equally simple for Willy to apply.
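
For example, a minimal sketch assuming the patch is the most recent commit on
your branch (the list address is a placeholder, and the sendemail.* SMTP
settings need to be configured in git beforehand):

  git send-email -1 --to=<mailing list address>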


Best regards
Tim Düsterhus



formilux | Business contacts

2022-12-13 Thread Tom Lewand
Hi formilux,

I understand that you are a Certified Partner of  Red Hat, Would you like to 
connect with Key Decision Makers from companies currently using Red Hat 
Software?

The contacts were verified & updated last month for all marketing initiatives.

Do let us know your current focus as requested below and I shall revert back 
with the volume of contacts, samples and a quote for your review;

Target Technology:?, Target Job Titles:_?,  
Target Geography:___?

Look forward to your feedback.

Thanks & Regards,
Tom Lewand
Business Development


Re: Theoretical limits for a HAProxy instance

2022-12-13 Thread Iago Alonso
Hi,

We do hit our defined max ssl/conn rates, but given the spare
resources, we don't expect to suddenly return 5xx.

Here's the output of `haproxy -vv` (I've also added it to the post on
discourse):

HAProxy version 2.6.6-274d1a4 2022/09/22 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2027.
Known bugs: http://www.haproxy.org/bugs/bugs-2.6.6.html
Running on: Linux 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30
UTC 2022 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2
-Wduplicated-cond -Wnull-dereference -fwrapv
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers
-Wno-cast-function-type -Wno-string-plus-int -Wno-atomic-alignment
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_PROMEX=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : +EPOLL -KQUEUE +NETFILTER +PCRE -PCRE_JIT -PCRE2
-PCRE2_JIT +POLL +THREAD +BACKTRACE -STATIC_PCRE -STATIC_PCRE2 +TPROXY
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -ENGINE +GETADDRINFO
+OPENSSL -LUA +ACCEPT4 -CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY +TFO +NS
+DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER
+PRCTL -PROCCTL +THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX
-MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=32).
Built with OpenSSL version : OpenSSL 3.0.7 1 Nov 2022
Running on OpenSSL version : OpenSSL 3.0.7 1 Nov 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with the Prometheus exporter as a service
Built with network namespace support.
Support for malloc_trim() is enabled.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : prometheus-exporter
Available filters :
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace

On Mon, Dec 12, 2022 at 2:50 PM Jarno Huuskonen  wrote:
>
> Hi,
>
> On Mon, 2022-12-12 at 09:47 +0100, Iago Alonso wrote:
> >
>
> Can you share haproxy -vv output ?
>
> > HAProxy config:
> > global
> > log /dev/log len 65535 local0 warning
> > chroot /var/lib/haproxy
> > stats socket /run/haproxy-admin.sock mode 660 level admin
> > user haproxy
> > group haproxy
> > daemon
> > maxconn 200
> > maxconnrate 2500
> > maxsslrate 2500
>
> From your graphs (haproxy_process_current_ssl_rate /
> haproxy_process_current_connection_rate) you might hit
> maxconnrate/maxsslrate
>
> -Jarno
>
> --
> Jarno Huuskonen