Re: [ANNOUNCE] haproxy-3.1-dev4

2024-07-24 Thread Aleksandar Lazic




On 2024-07-24 (Mi.) 18:50, Willy Tarreau wrote:

Hi,

HAProxy 3.1-dev4 was released on 2024/07/24. It added 113 new commits
after version 3.1-dev3.

Some nice goodies came in this version:


[snipp]


   - SPOE: the old applet-based architecture was replaced with the new
 mux-based one which allows idle connections sharing between threads,
 as well as queuing, load balancing, stickiness etc per request instead
 of per-connection and adds a lot of flexibility to the engine. We'd
 appreciate it a lot if SPOE users would take some time to verify that
 it works at least as well for them as before (and hopefully even
 better). Some good ideas may spark. Please check Christopher's
 response to the SPOE thread for more info.


Cool. Thank you for handling this topic, even though I don't use it for now :-)


   - ocsp: some processing was refined to better handle a corner case where
 the issuer chain is not in the same PEM file, though it also slightly
 changes how this is handled on the CLI.


[snipp]

Does this announcement have any impact on HAProxy?

"Intent to End OCSP Service"
https://letsencrypt.org/2024/07/23/replacing-ocsp-with-crls.html
https://news.ycombinator.com/item?id=41046956

I know there is https://docs.haproxy.org/3.0/configuration.html#5.1-crl-file but 
maybe it's worth adding a blog post about that topic and the impact this change 
has on HAProxy.
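
For reference, a minimal sketch (the file paths are placeholders, not from this thread) of how the existing crl-file bind option can be used to check client certificates against a locally maintained CRL instead of relying on OCSP:

```
frontend fe_tls
    # verify client certificates against a CA bundle and a CRL file;
    # all paths below are hypothetical examples
    bind :443 ssl crt /etc/haproxy/certs/site.pem ca-file /etc/haproxy/certs/clients-ca.pem verify required crl-file /etc/haproxy/certs/clients.crl
```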


Regards
Alex




Re: [ANNOUNCE] haproxy-3.1-dev3

2024-07-10 Thread Aleksandar Lazic




On 2024-07-10 (Mi.) 16:39, Willy Tarreau wrote:

Hi,

HAProxy 3.1-dev3 was released on 2024/07/10. It added 35 new commits
after version 3.1-dev2.


[snipp]


And I'm still trying to free some time for the pending reviews (I have not
forgotten you but stuff that depends on multiple persons cannot always
wait).


There is no hurry about the connect patch. In the meantime I have created 
another solution in Rust. :-)


It's not the best code in the world, but it solves my issue.
https://github.com/git001/tls-proxy-tunnel/

So take your time for review.

Regards
Alex



Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-17 Thread Aleksandar Lazic

Hi.

Updated patch.

Changes:

Set the right 'X' for upstream-proxy-header
removed the upstream-proxy.png from patch
git-format against latest master

Any feedback and help is really appreciated.

Best regards
Alex

On 2024-06-13 (Do.) 03:00, Aleksandar Lazic wrote:

Hi.

New Version.

Changes:

I have now added a small diagram and doc for the upstream feature.
upstream-proxy.md
upstream-proxy.png

The successful connection check is now:
ist("HTTP/1.1 200")

Added some "if(!chunk_memcat(..)){}"

Regards
Alex

On 2024-06-13 (Do.) 01:24, Aleksandar Lazic wrote:

Hi.

Thanks for testing and feedback.

On 2024-06-12 (Mi.) 20:35, Dave Cottlehuber wrote:

On Wed, 12 Jun 2024, at 13:04, Aleksandar Lazic wrote:

Hi.

Attached a new version with updated upstream-proxy.cfg.

This patch also adds the feature `upstream-proxy-target` to get rid of the
dependency on srv->hostname.

```
tcp-request content upstream-proxy-target www.test1.com
```

Now I have tested the setup with `0.0.0.0` as server.

```
server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1
```

@Dave: Can you use a name for the upstream-proxy-tunnel instead of IP?


Yes, it does the DNS lookup happily, and I can pass secret via env. nice!


That's great :-)


--- 8< ---
frontend stream_fe
   bind    :::443    v4v6
   mode tcp
   option tcplog
   default_backend stream_be

backend stream_be
   mode tcp
   tcp-request content upstream-proxy-header Host www.httpbin.org
   tcp-request content upstream-proxy-header "$AUTH" "$TOKEN"
   tcp-request content upstream-proxy-header Proxy-Connection Keep-Alive
   tcp-request content upstream-proxy-target www.httpbin.org
   server stream www.httpbin.org:443 upstream-proxy-tunnel "$PROXY":1
--- 8< ---

So this looks good, we send the right headers now thank-you!

Upstream proxy replies "HTTP/1.1 200 OK" which seems legit.

But then haproxy sends RST, instead of the buffered proxy data.

After a bit more tcpdump & code reading, I made a small
modification in conn_recv_upstream_proxy_tunnel_response/2

struct ist upstream_proxy_successful = ist("HTTP/1.1 200 OK");

and then I get actual data back through the proxy - great!

This seems ok according to
https://datatracker.ietf.org/doc/html/rfc9110#name-connect

"Any 2xx (Successful) response indicates that the sender (and all inbound 
proxies) will switch to tunnel mode immediately after the response header 
section ..."


Is it possible to read up to "HTTP/1.1 200" and then ignore everything
up to 0x0d0a? That should cover the RFC and both our examples.


That's a good point. I will change the check for whether the connection was successful.
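
A minimal C sketch of the relaxed check discussed here (not the patch's actual code; the function name and signature are made up for illustration): accept any 2xx status line from the upstream proxy and skip the rest of the response header section.

```c
#include <stddef.h>
#include <string.h>

/* Hedged sketch: succeed on any "HTTP/1.1 2xx" CONNECT response and report
 * where the header section ends, so the tunnelled bytes are not discarded.
 * Returns 1 on success (*hdr_end set), 0 on a non-2xx reply, -1 if more
 * data is needed. */
static int proxy_connect_ok(const char *buf, size_t len, size_t *hdr_end)
{
    static const char prefix[] = "HTTP/1.1 2";
    size_t i;

    if (len < sizeof(prefix) - 1)
        return -1;
    if (memcmp(buf, prefix, sizeof(prefix) - 1) != 0)
        return 0;

    /* skip everything up to the CRLFCRLF that terminates the header section */
    for (i = 0; i + 3 < len; i++) {
        if (memcmp(buf + i, "\r\n\r\n", 4) == 0) {
            *hdr_end = i + 4; /* tunnelled payload starts here */
            return 1;
        }
    }
    return -1; /* header section not complete yet */
}
```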


For me, there are still 2 things I'm not clear on:

- I don't follow what upstream-proxy-target provides yet, or is this just
   plumbing for later when we have requests?


Well, the problem is that there must be an option to tell the upstream proxy what 
the target host is. This could be done via "server->hostname", which in the 
case above is "www.httpbin.org:443".


When there are several target hosts based on SNI and a map is used, the 
server name can't be used here; the solution is the "0.0.0.0" server address 
together with "upstream-proxy-target".


The final idea is something like this.

```
tcp-request content upstream-proxy-header Host %[req.ssl_sni,lower]
tcp-request content upstream-proxy-header "$AUTH" "$TOKEN"
tcp-request content upstream-proxy-header Proxy-Connection Keep-Alive
tcp-request content upstream-proxy-target %[req.ssl_sni,lower]

server stream 0.0.0.0:0 upstream-proxy-tunnel %[req.ssl_sni,lower,map_str(targets.map)]

```

The targets.map should have something like this.
#dest proxy
sni01 proxy01
sni02 proxy02

I hope the background of upstream-proxy-target is now more clear.


- In `server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128`
   from your config, what is 0.0.0.0:0 used for here? This binds to all IPv4
   but on a random free port?


This is required when the destination should be dynamic; it's documented here:
http://docs.haproxy.org/3.0/configuration.html#4.4-do-resolve
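
For context, a hedged sketch (resolver address and names are placeholders, not from this thread) of the documented do-resolve/set-dst pattern that makes a server address of 0.0.0.0:0 useful: the real destination is only resolved and set at runtime.

```
resolvers mydns
    nameserver dns1 10.0.0.1:53

backend be_dynamic
    mode tcp
    tcp-request inspect-delay 5s
    # resolve the SNI name and route the connection to the resolved address
    tcp-request content do-resolve(txn.dstip,mydns,ipv4) req.ssl_sni,lower
    tcp-request content set-dst var(txn.dstip)
    server clear 0.0.0.0:0
```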


A+
Dave


Regards
Alex





Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Aleksandar Lazic

Hi.

Thanks for testing and feedback.

On 2024-06-12 (Mi.) 20:35, Dave Cottlehuber wrote:

On Wed, 12 Jun 2024, at 13:04, Aleksandar Lazic wrote:

Hi.

Attached a new version with updated upstream-proxy.cfg.

This patch also adds the feature `upstream-proxy-target` to get rid of the
dependency on srv->hostname.

```
tcp-request content upstream-proxy-target www.test1.com
```

Now I have tested the setup with `0.0.0.0` as server.

```
server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1
```

@Dave: Can you use a name for the upstream-proxy-tunnel instead of IP?


Yes, it does the DNS lookup happily, and I can pass secret via env. nice!


That's great :-)


--- 8< ---
frontend stream_fe
   bind :::443 v4v6
   mode tcp
   option tcplog
   default_backend stream_be

backend stream_be
   mode tcp
   tcp-request content upstream-proxy-header Host www.httpbin.org
   tcp-request content upstream-proxy-header "$AUTH" "$TOKEN"
   tcp-request content upstream-proxy-header Proxy-Connection Keep-Alive
   tcp-request content upstream-proxy-target www.httpbin.org
   server stream www.httpbin.org:443 upstream-proxy-tunnel "$PROXY":1
--- 8< ---

So this looks good, we send the right headers now thank-you!

Upstream proxy replies "HTTP/1.1 200 OK" which seems legit.

But then haproxy sends RST, instead of the buffered proxy data.

After a bit more tcpdump & code reading, I made a small
modification in conn_recv_upstream_proxy_tunnel_response/2

struct ist upstream_proxy_successful = ist("HTTP/1.1 200 OK");

and then I get actual data back through the proxy - great!

This seems ok according to
https://datatracker.ietf.org/doc/html/rfc9110#name-connect

"Any 2xx (Successful) response indicates that the sender (and all inbound proxies) 
will switch to tunnel mode immediately after the response header section ..."

Is it possible to read up to "HTTP/1.1 200" and then ignore everything
up to 0x0d0a? That should cover the RFC and both our examples.


That's a good point. I will change the check for whether the connection was successful.


For me, there are still 2 things I'm not clear on:

- I don't follow what upstream-proxy-target provides yet, or is this just
   plumbing for later when we have requests?


Well, the problem is that there must be an option to tell the upstream proxy what 
the target host is. This could be done via "server->hostname", which in the 
case above is "www.httpbin.org:443".


When there are several target hosts based on SNI and a map is used, the 
server name can't be used here; the solution is the "0.0.0.0" server address 
together with "upstream-proxy-target".


The final idea is something like this.

```
tcp-request content upstream-proxy-header Host %[req.ssl_sni,lower]
tcp-request content upstream-proxy-header "$AUTH" "$TOKEN"
tcp-request content upstream-proxy-header Proxy-Connection Keep-Alive
tcp-request content upstream-proxy-target %[req.ssl_sni,lower]

server stream 0.0.0.0:0 upstream-proxy-tunnel %[req.ssl_sni,lower,map_str(targets.map)]

```

The targets.map should have something like this.
#dest proxy
sni01 proxy01
sni02 proxy02

I hope the background of upstream-proxy-target is now more clear.


- In `server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128`
   from your config, what is 0.0.0.0:0 used for here? This binds to all IPv4
   but on a random free port?


This is required when the destination should be dynamic; it's documented here:
http://docs.haproxy.org/3.0/configuration.html#4.4-do-resolve


A+
Dave


Regards
Alex



Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Aleksandar Lazic

Hi.

Attached a new version with updated upstream-proxy.cfg.

This patch also adds the feature `upstream-proxy-target` to get rid of the 
dependency on srv->hostname.


```
tcp-request content upstream-proxy-target www.test1.com
```

Now I have tested the setup with `0.0.0.0` as server.

```
server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1

```

@Dave: Can you use a name for the upstream-proxy-tunnel instead of IP?

@ALL: I need some help to implement the fetch and conv features in the patch; could 
anyone help me with that?


Regards

Alex

On 2024-06-12 (Mi.) 12:57, Aleksandar Lazic wrote:

Hi Dave.

On 2024-06-12 (Mi.) 12:45, Aleksandar Lazic wrote:



On 2024-06-12 (Mi.) 12:26, Dave Cottlehuber wrote:

On Tue, 11 Jun 2024, at 22:57, Aleksandar Lazic wrote:

Hi Dave.

Thank you for your test and feedback.

When you put this line into backend, will this be better?

```
tcp-request connection upstream-proxy-header HOST www.httpbun.com
```

Regards
Alex


Hi Alex,

Sorry I forgot to mention that. This is not allowed by the backend:

[ALERT]    (76213) : config : parsing 
[/usr/local/etc/haproxy/haproxy.conf:228] : tcp-request connection is not 
allowed because backend stream_be is not a frontend


So there is likely a simple solution to allow these in either end.


Not yet, afaik.


Looks like the "tcp-request content ..." is the solution.

```
frontend stream_fe
  bind    :::8443    v4v6
  mode tcp
  option tcplog
  default_backend stream_be

backend stream_be

  mode tcp
  log global

  tcp-request content upstream-proxy-header HOST www.httpbun.com
  tcp-request content set-dst-port int(4433)
  #tcp-request content set-hostname %[str(www.test1.com)]
  tcp-request content upstream-proxy-header Host www.test1.com
  tcp-request content upstream-proxy-header Proxy-Authorization "basic 
%[env(MYPASS),base64]"


  #server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000
  server https_Via_Proxy1 www.test1.com:4433 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1


```


Looks like there is still a lot of work to do.
I work on that, stay tuned for updates.

From my point of view the *upstream-proxy* settings are only useful in the 
backend section.



A+
Dave


Regards
Alex

From aad7b0afbbb37c988fc61801f2d4f56a6d1d8240 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Wed, 12 Jun 2024 14:53:07 +0200
Subject: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This commit makes it possible for HAProxy to reach
target server via a upstream http proxy.

This patch is based on the work of @brentcetinich
and refer to gh #1542
---
 doc/configuration.txt  |  42 ++
 examples/upstream-proxy-squid.conf |  60 +
 examples/upstream-proxy.cfg|  91 +
 include/haproxy/action-t.h |   7 +
 include/haproxy/connection-t.h |  29 -
 include/haproxy/connection.h   |   4 +
 include/haproxy/proxy-t.h  |   2 +
 include/haproxy/server-t.h |   8 +-
 include/haproxy/tcpcheck-t.h   |   1 +
 src/backend.c  |   5 +
 src/connection.c   | 197 +
 src/proto_quic.c   |   4 +
 src/proto_tcp.c|   2 +
 src/proxy.c|   4 +
 src/server.c   | 140 
 src/sock.c |   3 +
 src/tcp_rules.c| 112 +++-
 src/tcpcheck.c |   3 +
 src/xprt_handshake.c   |  18 +++
 19 files changed, 669 insertions(+), 63 deletions(-)
 create mode 100644 examples/upstream-proxy-squid.conf
 create mode 100644 examples/upstream-proxy.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 971c54d28..772e7a3cb 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14269,6 +14269,8 @@ tarpit - - - -X   -   -
 track-sc1  X X X -X   X   -
 track-sc2  X X X -X   X   -
 unset-var  X X X XX   X   X
+upstream-proxy-header  - - X --   -   -
+upstream-proxy-target  - - X --   -   -
 use-service- - X -X   -   -
 wait-for-body  - - - -X   X   -
 wait-for-handshake - - - -X   -   -
@@ -15708,6 +15710,37 @@ unset-var()
   Example:
 http-request unset-var(req.my_var)
 
+upstream-proxy-header  
+  Usable in:  TCP RqCon| RqSes| RqCnt| RsCnt|HTTP Req| Res| Aft
+X  |   -  |   -  |   -  |  - |  - |  -
+
+  This rule add headers to the upstream proxy connection.
+
+  Arguments :
+ the Header name which should be added to the upstream p

Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Aleksandar Lazic

Hi Dave.

On 2024-06-12 (Mi.) 12:45, Aleksandar Lazic wrote:



On 2024-06-12 (Mi.) 12:26, Dave Cottlehuber wrote:

On Tue, 11 Jun 2024, at 22:57, Aleksandar Lazic wrote:

Hi Dave.

Thank you for your test and feedback.

When you put this line into backend, will this be better?

```
tcp-request connection upstream-proxy-header HOST www.httpbun.com
```

Regards
Alex


Hi Alex,

Sorry I forgot to mention that. This is not allowed by the backend:

[ALERT]    (76213) : config : parsing 
[/usr/local/etc/haproxy/haproxy.conf:228] : tcp-request connection is not 
allowed because backend stream_be is not a frontend


So there is likely a simple solution to allow these in either end.


Not yet, afaik.


Looks like the "tcp-request content ..." is the solution.

```
frontend stream_fe
  bind :::8443 v4v6
  mode tcp
  option tcplog
  default_backend stream_be

backend stream_be

  mode tcp
  log global

  tcp-request content upstream-proxy-header HOST www.httpbun.com
  tcp-request content set-dst-port int(4433)
  #tcp-request content set-hostname %[str(www.test1.com)]
  tcp-request content upstream-proxy-header Host www.test1.com
  tcp-request content upstream-proxy-header Proxy-Authorization "basic 
%[env(MYPASS),base64]"


  #server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000
  server https_Via_Proxy1 www.test1.com:4433 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1


```


Looks like there is still a lot of work to do.
I work on that, stay tuned for updates.

From my point of view the *upstream-proxy* settings are only useful in the 
backend section.



A+
Dave


Regards
Alex





Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Aleksandar Lazic




On 2024-06-12 (Mi.) 12:26, Dave Cottlehuber wrote:

On Tue, 11 Jun 2024, at 22:57, Aleksandar Lazic wrote:

Hi Dave.

Thank you for your test and feedback.

When you put this line into backend, will this be better?

```
tcp-request connection upstream-proxy-header HOST www.httpbun.com
```

Regards
Alex


Hi Alex,

Sorry I forgot to mention that. This is not allowed by the backend:

[ALERT](76213) : config : parsing [/usr/local/etc/haproxy/haproxy.conf:228] 
: tcp-request connection is not allowed because backend stream_be is not a 
frontend

So there is likely a simple solution to allow these in either end.


Not yet, afaik.

Looks like there is still a lot of work to do.
I work on that, stay tuned for updates.

From my point of view the *upstream-proxy* settings are only useful in the 
backend section.



A+
Dave


Regards
Alex



Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-11 Thread Aleksandar Lazic

Hi Dave.

Thank you for your test and feedback.

When you put this line into backend, will this be better?

```
tcp-request connection upstream-proxy-header HOST www.httpbun.com
```

Regards
Alex

On 2024-06-11 (Di.) 23:52, Dave Cottlehuber wrote:

On Mon, 10 Jun 2024, at 22:09, Aleksandar Lazic wrote:

It is now possible to set headers for the upstream proxy via
"tcp-request connection upstream-proxy-header":

```
tcp-request connection upstream-proxy-header Host www.test1.com
tcp-request connection upstream-proxy-header Proxy-Authorization "basic
base64-value"
```


Thanks Alex!

## sending CONNECT & headers

A simple `listen` server works, but a split frontend/backend one doesn't,
no headers are present in tcpdump/ngrep nor in debug.

I read the header iteration function and I'm not sure what the difference
is, I guess the backend doesn't see the frontend header structure?

### works

listen stream_fe
   bind :::443 v4v6
   mode tcp
   option tcplog
   tcp-request connection upstream-proxy-header HOST www.httpbun.com
   server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000

## headers missing when split frontend/backend

frontend stream_fe
   bind :::443 v4v6
   mode tcp
   option tcplog
   tcp-request connection upstream-proxy-header HOST www.httpbun.com
   default_backend stream_be

backend stream_be
   server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000

In the failing case, `mtrash->orig` shows it as empty when I uncomment
your DPRINTF line. Looking at the startup log, it captures the header from
the config correctly:

 debug 
... config phase ...

Header name :HOST:
Header value :www.httpbun.com:
name  :HOST:
value :www.httpbun.com:

 so far so good...

... proxy phase ...

HTTP TUNNEL SEND start
proxy->id :stream_be:
hostname: www.httpbun.com
trash->data :38:
connect_length :39:
trash->data :40:
trash->orig :CONNECT www.httpbun.com:443 HTTP/1.1

... there should be more in orig here ...



the working single listen version shows iterating over the headers:

list each name  :HOST:
list each value :www.httpbin.org:

Built with:
$ gmake -j32 USE_ZLIB=1 USE_OPENSSL=1 USE_THREAD=1 USE_STATIC_PCRE2=1 
USE_PCRE2_JIT=1 TARGET=freebsd DEFINE='-DFREEBSD_PORTS -DDEBUG_FULL'

Run with:
$ ./haproxy -d -db -V -f /usr/local/etc/haproxy/haproxy.conf

Either way, I didn't get to make a tcp connection through, this might need some
more tcpdump work tomorrow.

A+
Dave





[PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-10 Thread Aleksandar Lazic

Hi.

Let me start a new thread with a cleaner patch for the upstream feature.

What's in the patch?

HAProxy can now use an "upstream-proxy-tunnel".

```
server https_Via_Proxy1 www.test1.com:4433 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1

```

It is now possible to set headers for the upstream proxy via 
"tcp-request connection upstream-proxy-header":


```
tcp-request connection upstream-proxy-header Host www.test1.com
tcp-request connection upstream-proxy-header Proxy-Authorization "basic 
base64-value"

```

What is missing for now is the possibility to use samples as shown below. At 
this point some help would be very much appreciated.


```
tcp-request connection upstream-proxy-header Proxy-Authorization "basic 
%[env(MYPASS),base64]"

```

I would also like to have name resolution for the upstream-proxy-tunnel address, which 
is one of the next tasks for me.


The patch looks quite huge to me (~45k); there are still some debugging lines 
in it and I'm pretty sure a lot of optimizations are possible, but in my 
test setup the connection via a Squid proxy works :-)


Any feedback is very welcome.

Regards
Alex

From 0b903fa0cfef0cefd0a1b819c9bd1b8e786e6aae Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 10 Jun 2024 23:58:18 +0200
Subject: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This commit makes it possible for HAProxy to reach
target server via a upstream http proxy.

This patch is based on the work of @brentcetinich
and refer to gh #1542
---
 doc/configuration.txt  |  26 
 examples/upstream-proxy-squid.conf |  60 +
 examples/upstream-proxy.cfg|  49 +++
 include/haproxy/action-t.h |   4 +
 include/haproxy/connection-t.h |  29 -
 include/haproxy/connection.h   |   4 +
 include/haproxy/proxy-t.h  |   1 +
 include/haproxy/server-t.h |   8 +-
 include/haproxy/tcpcheck-t.h   |   1 +
 src/backend.c  |   5 +
 src/connection.c   | 197 +
 src/proto_quic.c   |   4 +
 src/proto_tcp.c|   2 +
 src/proxy.c|   3 +
 src/server.c   | 140 
 src/sock.c |   3 +
 src/tcp_rules.c|  71 ++-
 src/tcpcheck.c |   3 +
 src/xprt_handshake.c   |  18 +++
 19 files changed, 565 insertions(+), 63 deletions(-)
 create mode 100644 examples/upstream-proxy-squid.conf
 create mode 100644 examples/upstream-proxy.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 971c54d28..0e42cb8e3 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14269,6 +14269,7 @@ tarpit - - - -X   -   -
 track-sc1  X X X -X   X   -
 track-sc2  X X X -X   X   -
 unset-var  X X X XX   X   X
+upstream-proxy-header  X - - --   -   -
 use-service- - X -X   -   -
 wait-for-body  - - - -X   X   -
 wait-for-handshake - - - -X   -   -
@@ -15708,6 +15709,22 @@ unset-var()
   Example:
 http-request unset-var(req.my_var)
 
+upstream-proxy-header  
+  Usable in:  TCP RqCon| RqSes| RqCnt| RsCnt|HTTP Req| Res| Aft
+X  |   -  |   -  |   -  |  - |  - |  -
+
+  This rule add headers to the upstream proxy connection.
+
+  Arguments :
+ the Header name which should be added tot the upstream proxy
+ call.
+the sample expression for the value
+
+  Example:
+tcp-request connection upstream-proxy-header Host www.test1.com
+tcp-request connection upstream-proxy-header Proxy-Authorization "basic %[env(MYPASS),base64]"
+
+  See also : server upstream-proxy-tunnel keyword
 
 use-service 
   Usable in:  TCP RqCon| RqSes| RqCnt| RsCnt|HTTP Req| Res| Aft
@@ -18076,6 +18093,13 @@ tls-tickets
   It may also be used as "default-server" setting to reset any previous
   "default-server" "no-tls-tickets" setting.
 
+upstream-proxy-tunnel :
+  May be used in the following contexts: tcp
+
+  This option enables upstream http proxy tunnel for outgoing connections to
+  the server. Using this option won't force the health check to go via upstream
+  http proxy by default.
+
 verify [none|required]
   May be used in the following contexts: tcp, http, log, peers, ring
 
@@ -21990,6 +22014,8 @@ fc_err_str : string
   | 41 | "SOCKS4 Proxy deny the request"   |
   | 42 | "SOCKS4 Proxy handshake aborted by server"|
   | 43 | "SSL fatal

Re: Now a Working Patchset

2024-06-09 Thread Aleksandar Lazic

Hi.

These patches now use HAProxy's own buffers and add some upstream headers.

The patches are still full of debug code because it's WIP :-)

The question about how to add tcp-request header is still not answered for me.

This "CONNECT" Stuff is for me a clear tcp stuff but i'm quite open for 
discussion if the http-request set-header feature should be used.


Any opinions on that?

Regards

Alex

On 2024-06-07 (Fr.) 00:57, Aleksandar Lazic wrote:

Hi.

I was able to create a working setup with the attached patches. I'm pretty 
sure that the patch will need some adaptations until it's ready to commit to the 
dev branch.


It would be nice to get some feedback.

There are some open questions which I'm not able to resolve by myself because they 
require some design discussion.


* It is possible to set some headers like Host and Proxy-Authorization within 
the CONNECT method [0]. As "CONNECT" is a TCP session it would be nice to 
have an "upstream-proxy-header" option in "tcp-request content|session", but in 
"struct proxy {...}" 
https://github.com/haproxy/haproxy/blob/master/include/haproxy/proxy-t.h#L271 
there isn't any useful member in "struct {...} tcp_req;" 
https://github.com/haproxy/haproxy/blob/master/include/haproxy/proxy-t.h#L299-L304


My suggestion is to add a list which holds the upstream proxy headers; opinions 
on this?


* What's the most efficient way to add these headers in 
"conn_send_upstream_proxy_tunnel_request"?


As you can see in the patch I have tried to add the header to the 
"proxy_connect" string with "snprintf()", but without success, just because I'm 
a little bit out of training with C; any help to fix this is very welcome.


* My test setup is shown in examples/upstream-proxy.cfg.

Best regards
Alex

[0] https://www.rfc-editor.org/rfc/rfc9110#name-connect

On 2024-05-31 (Fr.) 12:08, Aleksandar Lazic wrote:

Hi.

Does anyone have some ideas how to fix the return path?

Regards

Alex

On 2024-05-27 (Mo.) 09:12, Aleksandar Lazic wrote:

Hi.

I have done some progress with the feature :-)

The test setup runs in 4 shells.

# shell1: curl -vk --connect-to www.test1.com:4433:127.0.0.1:8080 -H "Host: 
www.test1.com" https://www.test1.com:4433

# shell2: ./haproxy -d -f examples/upstream-proxy.cfg
# shell3: sudo podman run --rm -it --name squid -e TZ=UTC -p 3128:3128 
--network host ubuntu/squid
# shell4: openssl s_server -trace -www -bugs -debug -cert 
reg-tests/ssl/common.pem


The request reaches the s_server but I'm stuck with the return path in 
"connection.c:conn_recv_upstream_proxy_tunnel_response()".


Does anyone have an idea what's wrong?

Maybe it's too late for 3.0 but it would be nice to have this feature in 3.1 
:-)


Regards

Alex

On 2024-05-24 (Fr.) 00:08, Aleksandar Lazic wrote:

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests 
that feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to 
make some rounds to fix the issues :-)


One question for me is, as I'm not that fit anymore in C and data types: does 
this `0x1` still fit into 32 bits?


```from the Patch

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed 
at runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet 
purged */


+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a 
upstream proxy tunnel with CONNECT method */


```

Another question that came up for me: why are there no "TRACE(...)" entries in 
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on 
that patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy 
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
    |
    \-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the 
http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 option, but sadly 
not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like to have a 
second opinion on that.


Maybe somebody on the list has a working solution for the scenario and 
can share it, maybe only via direct mail.

Re: Now a Working Patchset

2024-06-09 Thread Aleksandar Lazic

Hello Dave.

On 2024-06-07 (Fr.) 16:12, Dave Cottlehuber wrote:

On Thu, 6 Jun 2024, at 22:57, Aleksandar Lazic wrote:

Hi.

I was able to create a working setup with the attached patches. I'm
pretty sure
that the patch will need some adaptations until it's ready to commit to
the dev branch.

It would be nice to get some feedback.


Hi Alex,

This is pretty exciting, thanks! I rebased Brent's old patches last year
for IIRC 2.5, but couldn't figure out how to inject some headers for
TCP mode. Your C is better than mine already.

Patches compiled fine against 3.0.0. Minor nits:


Great, thanks for testing :-)


- examples/upstream-proxy-squid.conf needs the ^M line endings removed.


That's strange as I have created the file on Linux, anyway I have now saved it 
again with Linux line endings.



- a few trailing whitespace and stray tabs in the diff should go
   in upstream-proxy.cfg include/haproxy/server-t.h src/connection.c


Well, I will fix this after the patch has the full features :-)


I couldn't quite understand how to use upstream-proxy.cfg example:

server https_Via_Proxy1 www.test1.com:4433 upstream-proxy-tunnel 127.0.0.1:3128 
upstream-proxy-header-host "www.test1.com:4433" sni str(www.test1.com) 
init-addr 127.0.0.1

but what is the purpose of each of the fields here?

server https_Via_Proxy1
 - name as usual


Yep.


www.test1.com:4433
 - is this the url we are requesting to proxy?
 - not sure why its needed here


This is the destination host and will be used for the "CONNECT " call.


upstream-proxy-tunnel 127.0.0.1:3128
 - ok, this is the upstream proxy we are connecting to


Yep. The plan is that this address could be a name which is resolvable.


upstream-proxy-header-host "www.test1.com:4433"
 - not sure why its needed here


That's documented in the RFC https://www.rfc-editor.org/rfc/rfc7231#page-30

```
CONNECT server.example.com:80 HTTP/1.1
Host: server.example.com:80
```

I just wanted to be as compliant as possible.
This line will be obsolete as soon as the "tcp-request  header ..." feature 
is implemented.



sni str(www.test1.com)
 - I assume I can add this from a fetch
 - i.e. dynamic for each connection?


That's useless for now; it's only needed when the destination server has a different 
name than the destination host at the SNI level.



init-addr 127.0.0.1
 - I assume this is only needed for test


Yep.


We have the requested url 3x here, and I'm not clear why that's required.
Aren't they always the same?


It's mainly for testing.


Is it possible to have that URL set from say SNI sniffer fetch, similar
to 
https://www.haproxy.com/blog/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension
 ?


Well as written above I would like to have a "tcp-request ... header ..." 
feature which is able to handle the HAProxy expressions.

That's on the way :-)


My scenario:

I have a very similar setup, (done outside haproxy), where I SNI sniff the
header, compare it to a dynamic allow list, and then forward traffic through
firewall with CONNECT. To track usage, a custom header is pre-pended on connect.
We're not decrypting the TLS session to preserve privacy of message. Just not
destination.

Here's your setup, with a slight amendment to match what I'm doing:
  

Just for my clarification is the following setup now possible with HAProxy
with all the new shiny features  :-)


$ curl https://httpbun.com/
...
client => frontend
   sniff SNI request, check in ACL: are you allowed to go here?
   big brother is watching.
   |
   \-> backend server dest1 IP:port
     |
     \-> call "CONNECT httpbun.com:443 on upstream proxy
   |
  send additional headers, like
  host: httpbun.com:443
  authentication: bearer abc123
  |
  upstream replies HTTP/1.1 200 OK
  |
  now we switch to tunneling, and fwd the original TLS
  traffic
   \-> TCP FLOW to destination IP


In my case I would have to vary the httpbun.com in both CONNECT and HOST:
headers, per each allowed domain.


That's exactly what I also like to have :-)

I'm on the way to sending updated patches which now implement the upstream 
header feature, but with hard-coded variable names for now.


upstream-proxy-header*


In practice I could create lots of backends, one per SNI header, if it's
not possible to use the inspected SNI name.

A+
Dave
———
O for a muse of fire, that would ascend the brightest heaven of invention!


Regards
Alex



Now a Working Patchset (was: Re: Patch proposal for FEATURE/MAJOR: Add upstream-proxy-tunnel feature)

2024-06-06 Thread Aleksandar Lazic

Hi.

I was able to create a working setup with the attached patches. I'm pretty sure 
that the patch will need some adaptations until it's ready to commit to the dev branch.


It would be nice to get some feedback.

There are some open questions which I'm not able to resolve by myself because they 
require some design discussion.


* It is possible to set some headers like Host and Proxy-Authorization within the 
CONNECT method [0]. As "CONNECT" is a TCP session it would be nice to have 
an "upstream-proxy-header" option in "tcp-request content|session", but in 
"struct proxy {...}" 
https://github.com/haproxy/haproxy/blob/master/include/haproxy/proxy-t.h#L271 
there isn't any useful member in "struct {...} tcp_req;" 
https://github.com/haproxy/haproxy/blob/master/include/haproxy/proxy-t.h#L299-L304


My suggestion is to add a list which holds the upstream proxy headers; opinions on 
this?


* What's the most efficient way to add these headers in 
"conn_send_upstream_proxy_tunnel_request"?


As you can see in the patch I have tried to add the header to the 
"proxy_connect" string with "snprintf()", but without success, just because I'm a 
little bit out of training with C; any help to fix this is very welcome.


* My test setup is shown in examples/upstream-proxy.cfg.

Best regards
Alex

[0] https://www.rfc-editor.org/rfc/rfc9110#name-connect

On 2024-05-31 (Fr.) 12:08, Aleksandar Lazic wrote:

Hi.

Does anyone have some ideas how to fix the return path?

Regards

Alex

On 2024-05-27 (Mo.) 09:12, Aleksandar Lazic wrote:

Hi.

I have done some progress with the feature :-)

The test setup runs in 4 shells.

# shell1: curl -vk --connect-to www.test1.com:4433:127.0.0.1:8080 -H "Host: 
www.test1.com" https://www.test1.com:4433

# shell2: ./haproxy -d -f examples/upstream-proxy.cfg
# shell3: sudo podman run --rm -it --name squid -e TZ=UTC -p 3128:3128 
--network host ubuntu/squid
# shell4: openssl s_server -trace -www -bugs -debug -cert 
reg-tests/ssl/common.pem


The request reaches the s_server but I'm stuck with the return path in 
"connection.c:conn_recv_upstream_proxy_tunnel_response()".


Does anyone have an idea what's wrong?

Maybe it's too late for 3.0 but it would be nice to have this feature in 3.1 :-)

Regards

Alex

On 2024-05-24 (Fr.) 00:08, Aleksandar Lazic wrote:

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests 
that feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to make 
some rounds to fix the issues :-)


One question for me is, as I'm not that fit anymore in C and data types: does 
this `0x1` still fit into 32 bits?


```from the Patch

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at 
runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet 
purged */


+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream 
proxy tunnel with CONNECT method */


```

Another question that came up for me: why are there no "TRACE(...)" entries in 
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on 
that patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy 
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
    |
    \-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the 
http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 option, but sadly 
not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like to have a second 
opinion on that.


Maybe somebody on the list has a working solution for the scenario and can 
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex

From 73cf1e51b3624e6cbad3a8b45e2c2f4557cfa81b Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 7 Jun 2024 00:35:36 +0200
Subject: [PATCH 4/4] This commit makes it possible for HAProxy to reach target
 server via a upstream http proxy.

This patch is based on the work of @brentcetinich
and refer to gh #1542
---
 examples/upstream-proxy.cfg | 30 +-
 include/haproxy/server-t.h  | 10 +
 src/connection.c  

Re: Patch proposal for FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-05-31 Thread Aleksandar Lazic

Hi.

Does anyone have some ideas how to fix the return path?

Regards

Alex

On 2024-05-27 (Mo.) 09:12, Aleksandar Lazic wrote:

Hi.

I have done some progress with the feature :-)

The test setup runs in 4 shells.

# shell1: curl -vk --connect-to www.test1.com:4433:127.0.0.1:8080 -H "Host: 
www.test1.com" https://www.test1.com:4433

# shell2: ./haproxy -d -f examples/upstream-proxy.cfg
# shell3: sudo podman run --rm -it --name squid -e TZ=UTC -p 3128:3128 
--network host ubuntu/squid
# shell4: openssl s_server -trace -www -bugs -debug -cert 
reg-tests/ssl/common.pem


The request reaches the s_server but I'm stuck with the return path in 
"connection.c:conn_recv_upstream_proxy_tunnel_response()".


Does anyone have an idea what's wrong?

Maybe it's too late for 3.0 but it would be nice to have this feature in 3.1 :-)

Regards

Alex

On 2024-05-24 (Fr.) 00:08, Aleksandar Lazic wrote:

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests 
that feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to make 
some rounds to fix the issues :-)


One question for me is, as I'm not that fit anymore in C and data types: does 
this `0x1` still fit into 32 bits?


```from the Patch

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at 
runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet 
purged */


+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream 
proxy tunnel with CONNECT method */


```

Another question that came up for me: why are there no "TRACE(...)" entries in 
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on 
that patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy 
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
    |
    \-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the 
http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 option, but sadly 
not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like to have a second 
opinion on that.


Maybe somebody on the list has a working solution for the scenario and can 
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex





Patch proposal for FEATURE/MAJOR: Add upstream-proxy-tunnel feature (was: Re: Maybe stupid question but can HAProxy now use a upstream proxy)

2024-05-27 Thread Aleksandar Lazic

Hi.

I have done some progress with the feature :-)

The test setup runs in 4 shells.

# shell1: curl -vk --connect-to www.test1.com:4433:127.0.0.1:8080 -H "Host: 
www.test1.com" https://www.test1.com:4433

# shell2: ./haproxy -d -f examples/upstream-proxy.cfg
# shell3: sudo podman run --rm -it --name squid -e TZ=UTC -p 3128:3128 --network 
host ubuntu/squid

# shell4: openssl s_server -trace -www -bugs -debug -cert 
reg-tests/ssl/common.pem

The request reaches the s_server but I'm stuck with the return path in 
"connection.c:conn_recv_upstream_proxy_tunnel_response()".


Does anyone have an idea what's wrong?

Maybe it's too late for 3.0 but it would be nice to have this feature in 3.1 :-)

Regards

Alex

On 2024-05-24 (Fr.) 00:08, Aleksandar Lazic wrote:

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests that 
feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to make 
some rounds to fix the issues :-)


One question for me is, as I'm not that fit anymore in C and data types: does 
this `0x1` still fit into 32 bits?


```from the Patch

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at 
runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet purged 
*/

+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream 
proxy tunnel with CONNECT method */


```

Another question that came up for me: why are there no "TRACE(...)" entries in 
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work on 
that patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy 
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
    |
    \-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like to have a second 
opinion on that.


Maybe somebody on the list has a working solution for the scenario and can 
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex
From 5ac8750390ef91974691c07251f6c32782573c72 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Mon, 27 May 2024 09:05:39 +0200
Subject: [PATCH 2/2] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This enables HAProxy to reach an target server via a upstream
http proxy.

This commit should close gh #1542
---
 doc/configuration.txt  |  9 +++
 examples/upstream-proxy-squid.conf | 60 +++
 examples/upstream-proxy.cfg| 23 +++
 include/haproxy/connection-t.h |  1 +
 include/haproxy/connection.h   |  1 +
 src/connection.c   | 96 +-
 src/xprt_handshake.c   | 15 -
 7 files changed, 186 insertions(+), 19 deletions(-)
 create mode 100644 examples/upstream-proxy-squid.conf
 create mode 100644 examples/upstream-proxy.cfg

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c0667af8f8..59a7460558 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18015,6 +18015,13 @@ tls-tickets
   It may also be used as "default-server" setting to reset any previous
   "default-server" "no-tls-tickets" setting.
 
+upstream-proxy-tunnel :
+  May be used in the following contexts: tcp, http
+
+  This option enables upstream http proxy tunnel for outgoing connections to
+  the server. Using this option won't force the health check to go via upstream
+  http proxy by default.
+
 verify [none|required]
   May be used in the following contexts: tcp, http, log, peers, ring
 
@@ -21926,6 +21933,8 @@ fc_err_str : string
   | 41 | "SOCKS4 Proxy deny the request"   |
   | 42 | "SOCKS4 Proxy handshake aborted by server"|
   | 43 | "SSL fatal error" |
+  | 44 | "Error during reverse connect"|
+  | 45 | "Upstream http proxy write error during handshake"

Re: Maybe stupid question but can HAProxy now use a upstream proxy

2024-05-23 Thread Aleksandar Lazic

Hi.

I have seen https://github.com/haproxy/haproxy/issues/1542 which requests that 
feature.


Now I have tried to "port" the 
https://github.com/brentcetinich/haproxy/commit/bc258bff030677d855a6a84fec881398e8f1e082 
to the current dev branch and attached the patch.


I'm pretty sure that there are some issues in the patch and I'm happy to make 
some rounds to fix the issues :-)


One question for me is, as I'm not that fit anymore in C and data types: does this 
`0x1` still fit into 32 bits?


```from the Patch

+++ b/include/haproxy/server-t.h
@@ -154,6 +154,7 @@ enum srv_initaddr {
 #define SRV_F_NON_PURGEABLE 0x2000   /* this server cannot be removed at 
runtime */

 #define SRV_F_DEFSRV_USE_SSL 0x4000  /* default-server uses SSL */
 #define SRV_F_DELETED 0x8000 /* srv is deleted but not yet purged 
*/

+#define SRV_F_UPSTREAM_PROXY_TUNNEL 0x1  /* this server uses a upstream 
proxy tunnel with CONNECT method */


```

Another question that came up for me: why are there no "TRACE(...)" entries in 
src/connection.c, only DPRINTF?


By the way, a big thanks to brentcetinich for his great initial work 
on that patch.


Regards

Alex

On 2024-05-23 (Do.) 22:32, Aleksandar Lazic wrote:

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy 
with all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
    |
    \-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.


I think the scenario is still not possible, but I would like to have a second 
opinion on that.


Maybe somebody on the list has a working solution for the scenario and can 
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex
From bf4e7c44ed939a2a9e119ca9b13b46efe9d43ab9 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Thu, 23 May 2024 23:52:58 +0200
Subject: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

This commit makes it possible for HAProxy to reach
target server via a upstream http proxy.

This patch is based on the work of @brentcetinich
and refer to gh #1542
---
 include/haproxy/connection-t.h |  14 +++-
 include/haproxy/connection.h   |   3 +
 include/haproxy/server-t.h |   8 +-
 include/haproxy/tcpcheck-t.h   |   1 +
 src/backend.c  |   5 ++
 src/connection.c   |  90 +
 src/proto_quic.c   |   4 +
 src/proto_tcp.c|   2 +
 src/server.c   | 138 -
 src/sock.c |   3 +
 src/tcpcheck.c |   3 +
 src/xprt_handshake.c   |  11 +++
 12 files changed, 225 insertions(+), 57 deletions(-)

diff --git a/include/haproxy/connection-t.h b/include/haproxy/connection-t.h
index 6ee0940be4..660c7bc7ba 100644
--- a/include/haproxy/connection-t.h
+++ b/include/haproxy/connection-t.h
@@ -132,8 +132,12 @@ enum {
 	CO_FL_ACCEPT_PROXY  = 0x0200,  /* receive a valid PROXY protocol header */
 	CO_FL_ACCEPT_CIP= 0x0400,  /* receive a valid NetScaler Client IP header */
 
+	/*  STOLEN unused : 0x0040, 0x0080 */
+	CO_FL_UPSTREAM_PROXY_TUNNEL_SEND	= 0x0040,  /* handshaking with upstream http proxy, going to send the handshake */
+	CO_FL_UPSTREAM_PROXY_TUNNEL_RECV = 0x0080,  /* handshaking with upstream http proxy, going to check if handshake succeed */
+
 	/* below we have all handshake flags grouped into one */
-	CO_FL_HANDSHAKE = CO_FL_SEND_PROXY | CO_FL_ACCEPT_PROXY | CO_FL_ACCEPT_CIP | CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV,
+	CO_FL_HANDSHAKE = CO_FL_SEND_PROXY | CO_FL_ACCEPT_PROXY | CO_FL_ACCEPT_CIP | CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV | CO_FL_UPSTREAM_PROXY_TUNNEL_SEND,
 	CO_FL_WAIT_XPRT = CO_FL_WAIT_L4_CONN | CO_FL_HANDSHAKE | CO_FL_WAIT_L6_CONN,
 
 	CO_FL_SSL_WAIT_HS   = 0x0800,  /* wait for an SSL handshake to complete */
@@ -155,6 +159,10 @@ enum {
 
 	/* below we have all SOCKS handshake flags grouped into one */
 	CO_FL_SOCKS4= CO_FL_SOCKS4_SEND | CO_FL_SOCKS4_RECV,
+
+	/* below we have all upstream http proxy tunnel handshake flags grouped into one */
+	CO_FL_UPSTREAM_PROXY_TUNNEL= CO_FL_UPSTREAM_PROXY_TUNNEL_SEND | CO_FL_UPSTREAM_PROXY_TUNNEL_RECV,
+
 };
 
 /* This function is used to report flags in debugging tools. Please reflect
@@ -241,6 +249,8 @@ enum {
 	CO_ERR_SSL_FATAL,/* SSL fatal error during a SSL_read or SSL_write */
 
 	CO_ER_REVERSE,   /* Error during reverse connect */
+
+	CO_ER_PROXY_CONNECT_SEND, /* Upstream http proxy write error during

Maybe stupid question but can HAProxy now use a upstream proxy

2024-05-23 Thread Aleksandar Lazic

Hi.

I follow the development more or less closely and I must say I don't always 
understand all the changes :-).


Just for my clarification: is the following setup now possible with HAProxy with 
all the new shiny features? :-)


client => frontend
  |
  \-> backend server dest1 IP:port
|
\-> call "CONNECT IP:PORT" on upstream proxy
  |
  \-> TCP FLOW to destination IP


I know there is the http://docs.haproxy.org/2.9/configuration.html#5.2-socks4 
option, but sadly not many enterprise proxy admins offer SOCKS4 nowadays.
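
For comparison, a small sketch (all addresses are placeholders) of the existing socks4 server option mentioned above, including check-via-socks4 so that health checks also go through the SOCKS proxy:

```
backend be_via_socks
    mode tcp
    # 203.0.113.10:443 is the final destination, 10.0.0.5:1080 the SOCKS4 proxy
    server dest1 203.0.113.10:443 socks4 10.0.0.5:1080 check-via-socks4 check
```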


I think the scenario is still not possible, but I would like to have a second 
opinion on that.


Maybe somebody on the list has a working solution for the scenario and can 
share it, maybe only via direct mail. ¯\_(ツ)_/¯


Regards
Alex



Re: FCGI calls return 500 with "IH" Stream State

2024-05-16 Thread Aleksandar Lazic

Hi.

I have added fcgi trace

```
global
  log stdout format raw daemon debug

  pidfile /data/haproxy/run/haproxy.pid
  # maxconn  auto config from hap
  # nbthread auto config from hap

  master-worker

  #tune.comp.maxlevel 5

  expose-experimental-directives
  trace fcgi sink stdout
  trace fcgi verbosity advanced
  trace fcgi event any
  trace fcgi start now

    # turn on stats unix socket
  stats socket /data/haproxy/run/stats mode 660 level admin expose-fd listeners

```

and created with that output a issue.

https://github.com/haproxy/haproxy/issues/2568

Regards

Alex

On 2024-05-16 (Do.) 17:05, Aleksandar Lazic wrote:

Hi.

I have a strange behavior with HAProxy and FCGI PHP App.

When I call an admin URL, HAProxy returns a 500; after a refresh of the same 
page, HAProxy returns a 200.


```
10.128.2.35:39684 [16/May/2024:14:54:26.229] craft-cms fcgi-servers/craftcms1 
0/0/0/-1/1138 500 15416 - - IH-- 2/2/0/0/0 0/0 "GET /craftcms/admin/settings 
HTTP/1.1"


10.131.0.26:46546 [16/May/2024:14:56:01.870] craft-cms fcgi-servers/craftcms1 
0/0/0/1511/1514 200 113460 - -  2/2/0/0/0 0/0 "GET 
/craftcms/admin/settings HTTP/1.1"

```

How can I debug this 'I' flag, which should never happen as the docs say?

https://docs.haproxy.org/2.9/configuration.html#8.5

```
    I : an internal error was identified by the proxy during a self-check.
    This should NEVER happen, and you are encouraged to report any log
    containing this, because this would almost certainly be a bug. It
    would be wise to preventively restart the process after such an
    event too, in case it would be caused by memory corruption.
```

I use the latest haproxy image haproxytech/haproxy-ubuntu:2.9 in OpenShift 
with that config.


```
global
    log stdout format raw daemon debug

    pidfile /data/haproxy/run/haproxy.pid
    # maxconn  auto config from hap
    # nbthread auto config from hap

    master-worker

    tune.comp.maxlevel 5

    # turn on stats unix socket
    stats socket /data/haproxy/run/stats mode 660 level admin expose-fd 
listeners

resolvers kube-dns
  nameserver dns1 dns-default.openshift-dns.svc.cluster.local:53
  accepted_payload_size 4096
  resolve_retries   3
  timeout resolve   1s
  timeout retry 1s
  hold other   30s
  hold refused 30s
  hold nx  30s
  hold timeout 30s
  hold valid   10s
  hold obsolete    30s

defaults
    mode    http
    balance leastconn
    log global
    option  httplog
    option  dontlognull
    option  log-health-checks
    option  forwardfor   except 10.196.106.108/32
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue   30s
    timeout connect 10s
    timeout client  30s
    timeout server  30s
    timeout http-keep-alive 10s
    timeout check   10s
    #maxconn 3000

frontend craft-cms
  bind *:8080

  tcp-request inspect-delay 5s
  tcp-request content accept if HTTP

  # default check url from appgateway
  monitor-uri /health

  # https://www.haproxy.com/blog/load-balancing-php-fpm-with-haproxy-and-fastcgi
  # fix CVE-2019-11043
  http-request deny if { path_sub -i %0a %0d }

  # Mitigate CVE-2023-40225 (Proxy forwards malformed empty Content-Length 
headers)

  http-request deny if { hdr_len(content-length) 0 }

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343), we need to convert the 
hostname into lowercase
  # before matching, or any requests containing uppercase characters will 
never match.

  http-request set-header Host %[req.hdr(Host),lower]

  acl exist-php-ext path_sub -i .php
  acl fpm-status path /fpm-status

  http-request set-path /index.php%[path] if !exist-php-ext !fpm-status !{ 
path_end .php }


  # https://www.haproxy.com/blog/haproxy-and-http-strict-transport-security-hsts
  # max-age is mandatory
  # 16000000 seconds is a bit more than 6 months
  http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"


  default_backend fcgi-servers

listen stats
  bind *:1936

  # Health check monitoring uri.
  monitor-uri /healthz

  # provide prometheus endpoint
  http-request use-service prometheus-exporter if { path /metrics }

  # Add your custom health check monitoring failure condition here.
  # monitor fail if 
  stats enable
  stats uri /

backend fcgi-servers

  option httpchk
  http-check connect proto fcgi
  http-check send meth GET uri /fpm-ping

  use-fcgi-app php-fpm

  # https://www.haproxy.com/blog/circuit-breaking-haproxy
  server-template craftcms 10 
"${CRAFT_SERVICE}.${NAMESPACE}.svc.cluster.local":9000 proto fcgi 

FCGI calls return 500 with "IH" Stream State

2024-05-16 Thread Aleksandar Lazic

Hi.

I have a strange behavior with HAProxy and FCGI PHP App.

When I call an admin URL, HAProxy returns a 500; after a refresh of the same page, 
HAProxy returns a 200.


```
10.128.2.35:39684 [16/May/2024:14:54:26.229] craft-cms fcgi-servers/craftcms1 
0/0/0/-1/1138 500 15416 - - IH-- 2/2/0/0/0 0/0 "GET /craftcms/admin/settings 
HTTP/1.1"


10.131.0.26:46546 [16/May/2024:14:56:01.870] craft-cms fcgi-servers/craftcms1 
0/0/0/1511/1514 200 113460 - -  2/2/0/0/0 0/0 "GET /craftcms/admin/settings 
HTTP/1.1"

```

How can I debug this 'I' flag, which should never happen as the docs say?

https://docs.haproxy.org/2.9/configuration.html#8.5

```
I : an internal error was identified by the proxy during a self-check.
This should NEVER happen, and you are encouraged to report any log
containing this, because this would almost certainly be a bug. It
would be wise to preventively restart the process after such an
event too, in case it would be caused by memory corruption.
```

I use the latest haproxy image haproxytech/haproxy-ubuntu:2.9 in OpenShift with 
that config.


```
global
log stdout format raw daemon debug

pidfile /data/haproxy/run/haproxy.pid
# maxconn  auto config from hap
# nbthread auto config from hap

master-worker

tune.comp.maxlevel 5

# turn on stats unix socket
stats socket /data/haproxy/run/stats mode 660 level admin expose-fd 
listeners

resolvers kube-dns
  nameserver dns1 dns-default.openshift-dns.svc.cluster.local:53
  accepted_payload_size 4096
  resolve_retries   3
  timeout resolve   1s
  timeout retry 1s
  hold other   30s
  hold refused 30s
  hold nx  30s
  hold timeout 30s
  hold valid   10s
  hold obsolete30s

defaults
modehttp
balance leastconn
log global
option  httplog
option  dontlognull
option  log-health-checks
option  forwardfor   except 10.196.106.108/32
option  redispatch
retries 3
timeout http-request10s
timeout queue   30s
timeout connect 10s
timeout client  30s
timeout server  30s
timeout http-keep-alive 10s
timeout check   10s
#maxconn 3000

frontend craft-cms
  bind *:8080

  tcp-request inspect-delay 5s
  tcp-request content accept if HTTP

  # default check url from appgateway
  monitor-uri /health

  # https://www.haproxy.com/blog/load-balancing-php-fpm-with-haproxy-and-fastcgi
  # fix CVE-2019-11043
  http-request deny if { path_sub -i %0a %0d }

  # Mitigate CVE-2023-40225 (Proxy forwards malformed empty Content-Length 
headers)
  http-request deny if { hdr_len(content-length) 0 }

  # Strip off Proxy headers to prevent HTTpoxy (https://httpoxy.org/)
  http-request del-header Proxy

  # DNS labels are case insensitive (RFC 4343), we need to convert the hostname 
into lowercase
  # before matching, or any requests containing uppercase characters will never 
match.

  http-request set-header Host %[req.hdr(Host),lower]

  acl exist-php-ext path_sub -i .php
  acl fpm-status path /fpm-status

  http-request set-path /index.php%[path] if !exist-php-ext !fpm-status !{ 
path_end .php }


  # https://www.haproxy.com/blog/haproxy-and-http-strict-transport-security-hsts
  # max-age is mandatory
  # 16000000 seconds is a bit more than 6 months
  http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"


  default_backend fcgi-servers

listen stats
  bind *:1936

  # Health check monitoring uri.
  monitor-uri /healthz

  # provide prometheus endpoint
  http-request use-service prometheus-exporter if { path /metrics }

  # Add your custom health check monitoring failure condition here.
  # monitor fail if 
  stats enable
  stats uri /

backend fcgi-servers

  option httpchk
  http-check connect proto fcgi
  http-check send meth GET uri /fpm-ping

  use-fcgi-app php-fpm

  # https://www.haproxy.com/blog/circuit-breaking-haproxy
  server-template craftcms 10 
"${CRAFT_SERVICE}.${NAMESPACE}.svc.cluster.local":9000 proto fcgi check 
resolvers kube-dns init-addr none observe layer7  error-limit 5  on-error 
mark-down inter 10s  rise 30  slowstart 40s


fcgi-app php-fpm
log-stderr global
option keep-conn
option mpxs-conns
option max-reqs 10

docroot /app/web
index index.php
path-info ^(/.+\.php)(/.*)?$

```



Re: Question on deleting cookies from an HTTP request

2024-04-26 Thread Aleksandar Lazic

Hi Lokesh.

On 2024-04-27 (Sa.) 01:41, Lokesh Jindal wrote:

Hey folks

I have found that there is no operator "del-cookie" in HAProxy to delete cookies 
from the request. (HAProxy does support the operator "del-header").


Can you explain why such an operator is not supported? Is it due to complexity? 
Due to performance? It will be great if you can share details behind this design 
choice.


Well, I'm pretty sure it's because nobody has added this feature to HAProxy. You 
are welcome to send a patch which adds this feature.


Maybe you could add "delete" into the 
https://docs.haproxy.org/2.9/configuration.html#4.2-cookie function.


Please take a look into 
https://github.com/haproxy/haproxy/blob/master/CONTRIBUTING file if you plan to 
contribute.


We have use cases where we want to delete cookies from the request. Not having 
this support in HAProxy also makes me question if one should be deleting request 
cookies in the reverse proxy layer.


Maybe you can use some of the "*-header" functions to remove the cookie as shown 
in the example in https://docs.haproxy.org/2.9/configuration.html#4.4-replace-header
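
A rough sketch of that approach, assuming a hypothetical cookie named "tracking" (the ACLs and regexes below are illustrative only and not battle-tested):

```
acl has_tracking req.cook(tracking) -m found

# If it is the only cookie, drop the whole Cookie header.
http-request del-header Cookie if has_tracking { req.cook_cnt eq 1 }

# Otherwise cut just that cookie out of the header value.
http-request replace-header Cookie ^tracking=[^;]*;?\ ?(.*) \1 if has_tracking
http-request replace-header Cookie (.*);\ ?tracking=[^;]*(.*) \1\2 if has_tracking
```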



Thanks
Lokesh


Regards
Alex



Update for https://github.com/haproxy/wiki/wiki/SPOE:-Stream-Processing-Offloading-Engine

2024-04-15 Thread Aleksandar Lazic

Hi.

The "https://github.com/criteo/haproxy-spoe-go; is archived since Nov 7, 2023 
and there is a fork from that repo https://github.com/go-spop/spoe

Can we add this info to the wiki page?

There is also a rust implementation 
https://github.com/vkill/haproxy-spoa-example which could be added.


If it's possible, I would add this myself.

Regards
Alex



Re: Dataplane exits at haproxytech/haproxy-ubuntu:2.9 in Containers

2024-04-02 Thread Aleksandar Lazic

Hi.

On 2024-03-18 (Mo.) 12:19, William Lallemand wrote:

On Sun, Mar 17, 2024 at 07:53:17PM +0100, Aleksandar Lazic wrote:

Hi.

Looks like there was a similar question in the forum
https://discourse.haproxy.org/t/trouble-with-starting-the-data-plane-api/9200

Any idea how to fix this?



Honestly no idea, you should try an issue there: 
https://github.com/haproxytech/dataplaneapi/issues


Thank you for the hint.
I have created this issue https://github.com/haproxytech/dataplaneapi/issues/329

Regards
Alex



Re: Dataplane exits at haproxytech/haproxy-ubuntu:2.9 in Containers

2024-03-17 Thread Aleksandar Lazic

Hi.

Looks like there was a similar question in the forum
https://discourse.haproxy.org/t/trouble-with-starting-the-data-plane-api/9200

Any idea how to fix this?

Regards
Alex


On 2024-03-13 (Mi.) 00:11, Aleksandar Lazic wrote:

Hi.

I try to run the dataplane API as a "random" user from inside haproxy.cfg.

Below is the debug output from the container start. Even though I have set 
--log-level=trace for the dataplane, I can't see any reason why the api 
process exits.



```
# Debug output with dataplane api
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:51:49_CET 
/datadisk/container-haproxy $ sudo buildah bud --tag craftcms-hap .

STEP 1/4: FROM haproxytech/haproxy-ubuntu:2.9
STEP 2/4: COPY container-files/ /
STEP 3/4: RUN set -x   && mkdir -p /data/haproxy/etc /data/haproxy/run 
/data/haproxy/maps /data/haproxy/ssl /data/haproxy/general 
/data/haproxy/spoe   && chown -R1001:0 /data   && chmod -R g=u /data   && touch 
/data/haproxy/etc/dataplaneapi.yaml
+ mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps 
/data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe

+ chown -R 1001:0 /data
+ chmod -R g=u /data
+ touch /data/haproxy/etc/dataplaneapi.yaml
STEP 4/4: USER 1001
COMMIT craftcms-hap
Getting image source signatures
Copying blob d101c9453715 skipped: already exists
Copying blob 5c32e8ef5ef0 skipped: already exists
Copying blob 5bbbd68c0c20 skipped: already exists
Copying blob 2f5b49454406 [--] 0.0b / 0.0b
Copying blob 83d27970fa5a [--] 0.0b / 0.0b
Copying blob 5a567c1d5233 done
Copying config 1ac0ae6824 done
Writing manifest to image destination
Storing signatures
--> 1ac0ae6824c
Successfully tagged localhost/craftcms-hap:latest
1ac0ae6824c91a9bc4fa1f19979c0b9dc672981fb82949429006d53252f8de9c
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:21_CET 
/datadisk/container-haproxy $ sudo podman run -it --rm --network host --name 
haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
   [NOTICE]   (1) : New program 'api' (3) forked
   [NOTICE]   (1) : New worker (4) forked
   [NOTICE]   (1) : Loading success.
Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
   [BWLIM] bwlim-in
   [BWLIM] bwlim-out
   [CACHE] cache
   [COMP] compression
   [FCGI] fcgi-app
   [SPOE] spoe
   [TRACE] trace
Using epoll() as the polling mechanism.
time="2024-03-12T22:54:24Z" level=info msg="HAProxy Data Plane API v2.9.1 
4d10854c"
time="2024-03-12T22:54:24Z" level=info msg="Build from: 
https://github.com/haproxytech/dataplaneapi.git"

time="2024-03-12T22:54:24Z" level=info msg="Reload strategy: custom"
time="2024-03-12T22:54:24Z" level=info msg="Build date: 2024-02-26T18:06:06Z"
:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=
:GLOBAL.clicls[:]
:GLOBAL.srvcls[:]
:GLOBAL.closed[:]
0001:GLOBAL.accept(0008)=0039 from [unix:1] ALPN=
0001:GLOBAL.clicls[:]
0001:GLOBAL.srvcls[:]
0001:GLOBAL.closed[:]
[NOTICE]   (1) : haproxy version is 2.9.6-9eafce5
[NOTICE]   (1) : path to executable is /usr/local/sbin/haproxy

[ALERT]    (1) : Current program 'api' (3) exited with code 1 (Exit) #< Why exit

[ALERT]    (1) : exit-on-failure: killing every processes with SIGTERM
[ALERT]    (1) : Current worker (4) exited with code 143 (Terminated)
[WARNING]  (1) : All workers exited. Exiting... (1)
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:24_CET 
/datadisk/container-haproxy $

```

When I start HAProxy without the lines in the "program api" block, HAProxy is 
able to start. When I then connect to the container with another shell and run the 
dataplane inside the container, I can see that the dataplane connects to haproxy and 
stops immediately.


# shell 1
```
sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f 
/data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
[NOTICE]   (1) : New worker (3) forked
[NOTICE]   (1) : Loading success.
Available polling systems :
epol

Re: About the SPOE

2024-03-17 Thread Aleksandar Lazic

Hi.

On 2024-03-15 (Fr.) 15:09, Christopher Faulet wrote:

Hi all,

It was evoked on the ML by Willy and mentioned in few issues on GH. It is now 
official. The SPOE was marked as deprecated for the 3.0. It is not a pleasant 
announcement because it is always an admission of failure to remove a feature. 
Sadly, this filter should be refactored to work properly. It was implemented as 
a functional PoC for the 1.7 and since then, no time was invested to improve it 
and make it truly maintainable over time. Worse, other parts of HAProxy evolved, 
especially the applets part, making maintenance ever more expensive.


We must be realistic on the subject: there was no real adoption of the SPOE and 
this partly explains why no time was invested in it. So we are really sorry for 
users relying on it. But we cannot continue in this direction.


The 3.0 will be an LTS version. It means the SPOE will still be maintained on this 
version and lower ones for 5 years. On the 3.1, it will be marked as 
unmaintained and possibly removed if an alternative solution is implemented.


A few months remain before the 3.0 release to change our mind. Maybe this 
announcement will be an electroshock to give it a new lease of life. Otherwise it is 
time to find an alternative solution based on an existing protocol.


For all 3.0 users, there is now a warning if a SPOE filter is configured. But 
there is also a global option to silence it. To do so, 
"expose-deprecated-directives" must be added in the global section.


Now we are open to discussion on this subject. Let us know your feeling and if 
you have any suggestion, we will be happy to talk about it.


While I fully understand this step, it would be very helpful to have a filter which 
has the possibility to run some tasks outside of HAProxy in an async way.


There was a short discussion, in the past, about the future of filters
https://www.mail-archive.com/haproxy@formilux.org/msg44164.html
maybe there are some Ideas which can be reused.

From my point of view an http-filter (1,2,!3 imho) would be, with all the pros 
and cons, one of the best ways to build a filter, because this protocol is so widely 
used and a lot of knowledge could be reused. One of the biggest benefits is also 
that, even in enterprise environments, this filter could be used, as this 
protocol is able to run across a proxy.


FCGI is also another option as it's already part of the Filter chain :-).
I don't know too much about grpc, but maybe this protocol could also be used as 
a filter ¯\_(ツ)_/¯.


The Lua API with some external daemons could also be used to move the workload out 
of HAProxy.


From my point of view, whatever solution is chosen, the idea behind the SPOE 
should be kept, because it's a good concept for scaling the filters outside of HAProxy.


I see a lot of possibilities here; the main point is always how much work it is 
to maintain the filter chain.



Regards,


Jm2c

Regards
Alex



Dataplane exits at haproxytech/haproxy-ubuntu:2.9 in Containers

2024-03-12 Thread Aleksandar Lazic

Hi.

I try to run the dataplane API as a "random" user from inside haproxy.cfg.
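
For context, the "program api" block mentioned further down looks roughly like this in my haproxy.cfg (a sketch; the binary path and flags are illustrative):

```
program api
    command /usr/local/bin/dataplaneapi -f /data/haproxy/etc/dataplaneapi.yaml --log-level=trace
    no option start-on-reload
```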

Below is the debug output from the container start. Even though I have set 
--log-level=trace for the dataplane, I can't see any reason why the api 
process exits.



```
# Debug output with dataplane api
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:51:49_CET 
/datadisk/container-haproxy $ sudo buildah bud --tag craftcms-hap .

STEP 1/4: FROM haproxytech/haproxy-ubuntu:2.9
STEP 2/4: COPY container-files/ /
STEP 3/4: RUN set -x   && mkdir -p /data/haproxy/etc /data/haproxy/run 
/data/haproxy/maps /data/haproxy/ssl /data/haproxy/general 
/data/haproxy/spoe   && chown -R1001:0 /data   && chmod -R g=u /data   && touch 
/data/haproxy/etc/dataplaneapi.yaml
+ mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps 
/data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe

+ chown -R 1001:0 /data
+ chmod -R g=u /data
+ touch /data/haproxy/etc/dataplaneapi.yaml
STEP 4/4: USER 1001
COMMIT craftcms-hap
Getting image source signatures
Copying blob d101c9453715 skipped: already exists
Copying blob 5c32e8ef5ef0 skipped: already exists
Copying blob 5bbbd68c0c20 skipped: already exists
Copying blob 2f5b49454406 [--] 0.0b / 0.0b
Copying blob 83d27970fa5a [--] 0.0b / 0.0b
Copying blob 5a567c1d5233 done
Copying config 1ac0ae6824 done
Writing manifest to image destination
Storing signatures
--> 1ac0ae6824c
Successfully tagged localhost/craftcms-hap:latest
1ac0ae6824c91a9bc4fa1f19979c0b9dc672981fb82949429006d53252f8de9c
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:21_CET 
/datadisk/container-haproxy $ sudo podman run -it --rm --network host --name 
haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
  [NOTICE]   (1) : New program 'api' (3) forked
  [NOTICE]   (1) : New worker (4) forked
  [NOTICE]   (1) : Loading success.
Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
  [BWLIM] bwlim-in
  [BWLIM] bwlim-out
  [CACHE] cache
  [COMP] compression
  [FCGI] fcgi-app
  [SPOE] spoe
  [TRACE] trace
Using epoll() as the polling mechanism.
time="2024-03-12T22:54:24Z" level=info msg="HAProxy Data Plane API v2.9.1 
4d10854c"
time="2024-03-12T22:54:24Z" level=info msg="Build from: 
https://github.com/haproxytech/dataplaneapi.git"

time="2024-03-12T22:54:24Z" level=info msg="Reload strategy: custom"
time="2024-03-12T22:54:24Z" level=info msg="Build date: 2024-02-26T18:06:06Z"
:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=
:GLOBAL.clicls[:]
:GLOBAL.srvcls[:]
:GLOBAL.closed[:]
0001:GLOBAL.accept(0008)=0039 from [unix:1] ALPN=
0001:GLOBAL.clicls[:]
0001:GLOBAL.srvcls[:]
0001:GLOBAL.closed[:]
[NOTICE]   (1) : haproxy version is 2.9.6-9eafce5
[NOTICE]   (1) : path to executable is /usr/local/sbin/haproxy

[ALERT](1) : Current program 'api' (3) exited with code 1 (Exit) #< Why exit

[ALERT](1) : exit-on-failure: killing every processes with SIGTERM
[ALERT](1) : Current worker (4) exited with code 143 (Terminated)
[WARNING]  (1) : All workers exited. Exiting... (1)
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:24_CET 
/datadisk/container-haproxy $

```

When I start HAProxy without the lines in the "program api" block, HAProxy is 
able to start. When I then connect to the container with another shell and run the 
dataplane inside the container, I can see that the dataplane connects to haproxy and 
stops immediately.


# shell 1
```
sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f 
/data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
[NOTICE]   (1) : New worker (3) forked
[NOTICE]   (1) : Loading success.
Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=

Re: http/3 flow control equivalent

2024-02-22 Thread Aleksandar Lazic

Hi.

On 2024-02-22 (Do.) 02:47, Miles Hampson wrote:

Hi,

I have noticed that transferring large files with http/2 to a backend server 
through HAProxy 2.9 (and earlier) over a network link with a bit of latency can 
be extremely slow unless the HTTP/2 Flow Control window size is increased quite 
a bit (i.e. 4x the default works well in our situation).


We are now trying out http/3 and hit the same issue. I don't think there is any 
connection migration, this is just from a test server. There don't seem to be 
any tune.h3 settings in the config manual, are there any connection settings 
that I might be able to adjust to improve this situation?


I haven't investigated much so far because I don't have any understanding of how 
QUIC stream flow control works yet, so the only thing I have tried is increasing 
the QUIC receive buffer (this is on an Ubuntu 22.04 server)


Please be so kind and tell us which HAProxy version you use:
haproxy -vv

Please be so kind and share the HAProxy config without sensitive data.

Are you able to test the latest dev version?
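
For reference, the HTTP/2 knob you mention is a global setting; a minimal sketch (262144 is only an illustration of "4x the default" of 65536 bytes):

```
global
    tune.h2.initial-window-size 262144
```

For HTTP/3/QUIC I don't see a documented equivalent in the 2.9 configuration manual, which matches your observation that there are no tune.h3 settings.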


Thanks for your time,

Miles


Regards
Alex



Re: Haproxy accross LDAPS

2024-02-16 Thread Aleksandar Lazic

Hi Willy.

>> Does the client have the CA certificates from the LDAPS server?
> No, it doesn't

This could be the issue as the Client must be able to verify the Server CA. Try 
to add the Server CA Chain into the Client and try the connection again.


If there is an option in the client so that it does not need to verify the CA, 
you can also try to activate this option, but only for testing.


As the config looks right, the CA issue could be the reason for the TLS 
connection issue.


```
[snip]

frontend Front_ROR_LDAPS
mode tcp

[snip]
```
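
If the client cannot be given the LDAP server's CA, another option would be to terminate TLS on HAProxy with a certificate the client already trusts and re-encrypt towards the LDAP server. A rough sketch (certificate paths and addresses are purely illustrative):

```
frontend ldaps_fe
    mode tcp
    bind :636 ssl crt /etc/haproxy/certs/ldaps-front.pem
    default_backend ldaps_be

backend ldaps_be
    mode tcp
    server ldap1 10.0.0.10:636 ssl verify required ca-file /etc/haproxy/certs/ldap-ca.pem
```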

Best Regards
Alex

On 2024-02-16 (Fr.) 06:08, TINK-LONG-KI Willy wrote:

Hi Aleksandar,

Thank you so much for your reply and your help; you will find attached the 
config file of the HAProxy and below, in red, the information requested.


Thank you so much for your help.

Kind regards,

Willy

----
*From:* Aleksandar Lazic 
*Sent:* Thursday, 15 February 2024 15:20
*To:* TINK-LONG-KI Willy 
*Cc:* haproxy@formilux.org 
*Subject:* Re: Haproxy accross LDAPS
Hi Willy.

On 2024-02-15 (Do.) 09:07, TINK-LONG-KI Willy wrote:

Hello All,

I am trying to configure a backend on HAProxy (release 2.4.25) with LDAPS in
order to authenticate users via LDAPS.


Any chance to use the latest 2.8 or 2.9?


Below informations about my configuration :

-Port use on the backend : 636
-Mode use on the backend : tcp
-SSL certifcate installed on the LDAPS server.

Do you know if that is possible please ?

When I try to connect to HAPROXY from internet I get this error message :

   ERR_04120_TLS_HANDSHAKE_ERROR The TLS handshake failed, reason: Unspecified:
Improper close state: Status = OK HandshakeStatus = NEED_WRAP
bytesConsumed = 0 bytesProduced = 7 sequenceNumber = 1


This is not a HAProxy error message.

Please can you share a minimal config with no sensitive information.

The TCP mode works quite well with TLS forwarding, but this requires that the
target server, the LDAP server, handles the TLS handshake itself.

You can see this in that picture
https://www.me2digital.com/blog/2019/05/haproxy-sni-routing/

Is the LDAP Server configured for LDAPS?
Yes, the ldap server is configured for LDAPS with an SSL certificate
Does the client have the CA certificates from the LDAPS server?
No, it doesn't
What's your ldap client config?
I use LDAP Apache Directory Studio; the configuration is very simple, I set the 
information below in the configuration:

IP address of HAPROXY, the listen port and credentials


Thank you for your help.

Kind Regards,

Willy


Regards
Alex




Re: Haproxy accross LDAPS

2024-02-15 Thread Aleksandar Lazic

Hi Willy.

On 2024-02-15 (Do.) 09:07, TINK-LONG-KI Willy wrote:

Hello All,

I am trying to configure a backend on HAProxy (release 2.4.25) with LDAPS in 
order to authenticate users via LDAPS.


Any chance to use the latest 2.8 or 2.9?


Below informations about my configuration :

-Port use on the backend : 636
-Mode use on the backend : tcp
-SSL certifcate installed on the LDAPS server.

Do you know if that is possible please ?

When I try to connect to HAPROXY from internet I get this error message :

  ERR_04120_TLS_HANDSHAKE_ERROR The TLS handshake failed, reason: Unspecified: 
Improper close state: Status = OK HandshakeStatus = NEED_WRAP

bytesConsumed = 0 bytesProduced = 7 sequenceNumber = 1


This is not a HAProxy error message.

Please can you share a minimal config with no sensitive information.

The TCP mode works quite well with TLS forwarding, but this requires that the 
target server, the LDAP server, handles the TLS handshake itself.


You can see this in that picture 
https://www.me2digital.com/blog/2019/05/haproxy-sni-routing/
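
For the plain passthrough case, a minimal sketch (names and addresses are purely illustrative) looks like this; HAProxy only forwards the TCP bytes and the LDAP server presents its own certificate:

```
frontend ldaps_fe
    mode tcp
    bind :636
    default_backend ldaps_be

backend ldaps_be
    mode tcp
    server ldap1 10.0.0.10:636 check
```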


Is the LDAP Server configured for LDAPS?
Does the client have the CA certificates from the LDAPS server?
What's your ldap client config?


Thank you for your help.

Kind Regards,

Willy


Regards
Alex



Re: unsubscribe

2024-02-12 Thread Aleksandar Lazic

Hi.

Here you can find the right way to unsubscribe from the list:
https://www.haproxy.org/#tact


Regards

Alex

On 2024-02-12 (Mo.) 23:02, Nicolas Grilly wrote:


*Nicolas Grilly*
Managing Partner
+33 6 03 00 25 34
Recrutez plus rapidement avec VocationCity.com 
Hire faster with VocationCity.com 

Re: [ANNOUNCE] haproxy-2.9-dev10

2023-11-20 Thread Aleksandar Lazic

Hi Tristan.

On 2023-11-20 (Mo.) 15:14, Tristan wrote:

Hi Aleksandar,


On 20 Nov 2023, at 17:18, Aleksandar Lazic  wrote:

on a configuration change, the reload leaves the old processes alive until the
"hard-stop-after" value, and after that the connection is terminated, which does
not look like the connection was taken over by the new process. The use
case was log shipping with HAProxy in mode tcp, as far as I have understood
the author.


Is that new behavior? Because I was under the impression that this is by design


Well, I don't know, as I don't have the setup in use myself; I'm just the messenger
and am asking if somebody else has also seen such a behavior in tcp mode.


If the new process took over an existing L4 connection, it seems like it’d cause
strange behavior in quite a few cases due to configuration changes.


Well, as there are the *_takeover functions for http and fcgi, maybe there is also 
such a function for tcp, but I may have overlooked it.



Either haproxy tries to reuse all old values and essentially needs to fork the 
new
process for takeover (which then is equivalent to the current old process living
for a while), or it applies new values to the existing connection (assuming it’s
even possible in all cases) which is likely to just break it (removed frontend,
backend, or server; or timeouts changes, etc etc).

Seems like it’s just a design choice to me [1] and that HAProxy’s approach is 
sort
of the only sane one…
Ofc that means potentially a lot of old processes, hence hard-stop-after and
max-reloads as tunables.

Now in a k8s environment I can imagine high churn in pods causing a lot of 
server
changes and making this a problem, but the official ingress controllers seems to
generally mitigate it by using the runtime api when it can instead of hard 
reloads,
and only using the latter in limited cases.
Maybe they used the « community » ingress controller (bless their maintainer, it's
not a jab at it) which does rely more on hard reloads

Either way, sounds unlikely to be a fix for it?


I'm also not sure if tcp mode has such a takeover mechanism, but it would 
be nice for the hitless/seamless reload.



Tristan

[1]: Also a bit out of topic but I always found ~infinite duration TCP 
connections
to be a very strange idea… So many things can go wrong (and will go wrong) if 
you
depend on it… at least it’s never going to be as reliable as client side retries
or moving to UDP where possible…


Regards
Alex



Re: [ANNOUNCE] haproxy-2.9-dev10

2023-11-20 Thread Aleksandar Lazic

Hi Willy.

On 2023-11-18 (Sa.) 15:40, Willy Tarreau wrote:

Hi,

HAProxy 2.9-dev10 was released on 2023/11/18. It added 154 new commits
after version 2.9-dev9.


Wow what a release :-)

[snipp]


   BUG/MEDIUM: mux-h2: fail earlier on malloc in takeover()
   BUG/MEDIUM: mux-h1: fail earlier on malloc in takeover()
   BUG/MEDIUM: mux-fcgi: fail earlier on malloc in takeover()


I have just seen these commits and asked myself whether they could have some positive 
effect on the hitless/seamless reload issue mentioned in this comment:


https://github.com/mholt/caddy-l4/issues/132#issuecomment-1672367076
> (I originally used HAProxy, but its promise of hitless reloads is a complete 
lie, whereas caddy-l4 actually does the right thing.)


I have contacted the author of the comment to ask what the problem was, and the answer 
was that on a configuration change the reload leaves the old processes alive until the 
"hard-stop-after" value, and after that the connection is terminated, which does not 
look like the connection was taken over by the new process. The use case was log 
shipping with HAProxy in mode tcp, as far as I have understood the author.
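
For reference, the tunables involved live in the global section and look roughly like this (a sketch; the values are purely illustrative, and "max-reloads" corresponds to the mworker-max-reloads keyword):

```
global
    hard-stop-after     30m   # old processes are killed at most 30 minutes after a reload
    mworker-max-reloads 50    # a worker that survived more than 50 reloads is told to stop
```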


This behavior was seen with HAProxy 2.4 and 2.6.

Has anybody else faced the issue that a long-running connection, in mode tcp, 
was terminated by a reload of haproxy?


Regards
Alex



Re: Understanding haproxy's regex

2023-11-17 Thread Aleksandar Lazic

Hi Christoph.

On 2023-11-17 (Fr.) 10:26, Christoph Kukulies wrote:

I have the following line in my config:

backend website
     http-request replace-header Destination ^([^\ :]*)\ /(.*) \1\ /opencms/\2
     server www.mydomain.org  127.0.0.1:8080


Actually I'm used to writing multiple patterns as \(pattern1\)\(pattern2\). So is 
it a different regex syntax?


The regex of HAProxy is explained here.
http://docs.haproxy.org/2.8/configuration.html#7.1.4

In addition could be this part interesting for you.
Quoting and escaping http://docs.haproxy.org/2.8/configuration.html#2.2

The "http-request replace-header ..." is documented here.
http://docs.haproxy.org/2.8/configuration.html#4.2-http-request%20replace-header

The main documentation for http-request is here 
http://docs.haproxy.org/2.8/configuration.html#4-http-request where you can see 
all supported options for that keyword.
Maybe replace-header is not the right directive for the "reqirep ..." line below, 
as the reqirep keyword handled all parts of the request line and headers, and you have 
to figure out which part you now want to replace: a header, a URL part 
or a path part.
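
Since the reqirep line below only prefixes the request path with /opencms/, a rough modern equivalent (a sketch, not a drop-in replacement for every corner case) would be:

```
backend website
    # prepend /opencms to the path of every request
    http-request set-path /opencms%[path]
    server www.mydomain.org 127.0.0.1:8080
```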



Maybe these answers help you too.
https://stackoverflow.com/questions/75653221/replace-reqirep-directive-in-haproxy-2-2-25/75653335#75653335
https://discourse.haproxy.org/t/the-reqrep-directive-is-not-supported-anymore-since-haproxy-2-1/5147

The other question - since I don't understand the above statement at all - what 
does it exactly do?


With 2.4 the corresponding line was (instead of the replace-header):

  reqirep ^([^\ :]*)\ /(.*) \1\ /opencms/\2


This cannot be the only line in the section. Please be so kind and share the 
whole config without any sensitive data, or the link from which you copied 
the haproxy config.


The documentation for reqirep is here 
http://docs.haproxy.org/2.0/configuration.html#4-reqirep



Let me try to translate the regex:

substitute
pattern1:  begin-of-line followed by not blank and not colon repeatedly,
followed by a blank, followed by
pattern2:  any char repeatedly

by

 /opencms/

But the interspersed space doesn't make sense to me. Spaces in URLs?

>

Thanks in advance,
--
Christoph


Regards
Alex



Re: AW: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Aleksandar Lazic

Hi Sören.

On 2023-11-01 (Mi.) 18:18, Hellwig, Sören wrote:

Hello Alex,

I can compile the version 2.8.3 from source and install the actual release of 
the 2.8 LTS version.


Yes, you can, but this will not solve the issue.
Have you read the full mail of the first answer? There are some suggestions 
in it on how to solve the issue.



Best regards,
Sören Hellwig


Regards
Alex


-Original Message-
From: Aleksandar Lazic 
Sent: Wednesday, 1 November 2023 15:36
To: Hellwig, Sören ; haproxy@formilux.org
Subject: [EXT] Re: AW: Re: Question about syslog forwarding with HAProxy with 
keeping the client IP



On 2023-11-01 (Mi.) 15:17, Hellwig, Sören wrote:

Hello Aleksandar,

thank you for your reply. We are using HAproxy under SLES 15 SP4 and here is 
the version info:

srvkdgrllbp01:/etc/haproxy # haproxy -vv HAProxy version 2.8.0-fdd8154
2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html


Uff that's old. Can you update?
Have you seen the rest of the answer in the previous mail, also?

Regards
Alex


Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC
Tue Aug 8 14:10:43 UTC 2023 (90a74a8) x86_64 Build options :
TARGET  = linux-glibc
CPU = generic
CC  = cc
CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY
+CRYPT_H -DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE
-LIBATOMIC +LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH
-MEMORY_PROFILING +NETFILTER +NS -OBSOLETE_LINKER +OPENSSL
-OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT -PCRE_JIT +POLL +PRCTL
-PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN +SLZ +SSL
-STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY
-WURFL -ZLIB

Default settings :
bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release
SUSE_OPENSSL_RELEASE Running on OpenSSL version : OpenSSL 1.1.1l  24
Aug 2021 SUSE release 150400.7.53.1 OpenSSL library supports TLS
extensions : yes OpenSSL library supports SNI : yes OpenSSL library
supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3 Built with Lua version :
Lua 5.3.6 Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip") Built with
transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND Built with PCRE version : 8.45 2021-06-15 Running on PCRE
version : 8.45 2021-06-15 PCRE library supports JIT : no (USE_PCRE_JIT
not set) Encrypted password support via crypt(3): yes Built with gcc
compiler version 7.5.0

Available polling systems :
epoll : pref=300,  test result OK
 poll : pref=200,  test result OK
   select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
   h2 : mode=HTTP  side=FE|BE  mux=H2flags=HTX|HOL_RISK|NO_UPG
 fcgi : mode=HTTP  side=BE mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
 : mode=HTTP  side=FE|BE  mux=H1flags=HTX
   h1 : mode=HTTP  side=FE|BE  mux=H1flags=HTX|NO_UPG
 : mode=TCP   side=FE|BE  mux=PASS  flags=
 none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
  [BWLIM] bwlim-in
  [BWLIM] bwlim-out
  [CACHE] cache
  [COMP] compression
  [FCGI] fcgi-app
  [SPOE] spoe
  [TRACE] trace

Best regards,
Sören Hellwig

-Original Message-
From: Aleksandar Lazic 
Sent: Monday, 30 October 2023 17:58
To: Hellwig, Sören ; haproxy@formilux.org
Subject: [EXT] Re: Question about syslog forwarding with HAProxy with
keeping the client IP

Hi,

On 2023-10-30 (Mo.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv


The TCP based protocols works fine, but we have some trouble with the
syslog forwarding.

Our configuration file *haproxy.cfg* looks like this:

log-forward syslog

       # accept incomming UDP messages

       dgram-bind 10.1.2.50:514 transparent

 

Re: AW: [EXT] Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-11-01 Thread Aleksandar Lazic




On 2023-11-01 (Mi.) 15:17, Hellwig, Sören wrote:

Hello Aleksandar,

thank you for your reply. We are using HAproxy under SLES 15 SP4 and here is 
the version info:

srvkdgrllbp01:/etc/haproxy # haproxy -vv
HAProxy version 2.8.0-fdd8154 2023/05/31 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2028.
Known bugs: http://www.haproxy.org/bugs/bugs-2.8.0.html


Uff that's old. Can you update?
Have you seen the rest of the answer in the previous mail, also?

Regards
Alex


Running on: Linux 5.14.21-150400.24.81-default #1 SMP PREEMPT_DYNAMIC Tue Aug 8 
14:10:43 UTC 2023 (90a74a8) x86_64
Build options :
   TARGET  = linux-glibc
   CPU = generic
   CC  = cc
   CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
   OPTIONS = USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1
   DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC 
+LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY +LUA +MATH -MEMORY_PROFILING +NETFILTER 
+NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT 
-PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC +RT +SHM_OPEN 
+SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 +SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY 
-WURFL -ZLIB

Default settings :
   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
SUSE_OPENSSL_RELEASE
Running on OpenSSL version : OpenSSL 1.1.1l  24 Aug 2021 SUSE release 
150400.7.53.1
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.6
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE version : 8.45 2021-06-15
Running on PCRE version : 8.45 2021-06-15
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 7.5.0

Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as  cannot be specified using 'proto' keyword)
  h2 : mode=HTTP  side=FE|BE  mux=H2flags=HTX|HOL_RISK|NO_UPG
fcgi : mode=HTTP  side=BE mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
: mode=HTTP  side=FE|BE  mux=H1flags=HTX
  h1 : mode=HTTP  side=FE|BE  mux=H1flags=HTX|NO_UPG
: mode=TCP   side=FE|BE  mux=PASS  flags=
none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG

Available services : none

Available filters :
 [BWLIM] bwlim-in
 [BWLIM] bwlim-out
 [CACHE] cache
 [COMP] compression
 [FCGI] fcgi-app
 [SPOE] spoe
 [TRACE] trace

Best regards,
Sören Hellwig

-Original Message-
From: Aleksandar Lazic 
Sent: Monday, 30 October 2023 17:58
To: Hellwig, Sören ; haproxy@formilux.org
Subject: [EXT] Re: Question about syslog forwarding with HAProxy with keeping 
the client IP

Hi,

On 2023-10-30 (Mo.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv


The TCP based protocols works fine, but we have some trouble with the
syslog forwarding.

Our configuration file *haproxy.cfg* looks like this:

log-forward syslog

      # accept incomming UDP messages

      dgram-bind 10.1.2.50:514 transparent

      # log message into ring buffer

      log ring@logbuffer format rfc5424 local0

ring logbuffer

      description "buffer for syslog"

      format rfc5424

      maxlen 1200

      size 32764

      timeout connect 5s

      timeout server 10s

      # send outgoing messages via TCP

      server logserver1 10.1.2.44:1514 log-proto octet-count check

      #server logserver1 10.1.2.44:1514 log-proto octet-count check
source
0.0.0.0 usesrc clientip

The syslog messages are forwarded to the logserver1 10.1.2.44.
Unfortunately some older Cisco switches did not send the hostname

Re: Question about syslog forwarding with HAProxy with keeping the client IP

2023-10-30 Thread Aleksandar Lazic

Hi,

On 2023-10-30 (Mo.) 15:55, Hellwig, Sören wrote:

Hello Support-Team,

we are using the HAProxy as load balancer for our Graylog servers.


Which version of HAProxy?

haproxy -vv

The TCP based protocols works fine, but we have some trouble with the syslog 
forwarding.


Our configuration file *haproxy.cfg* looks like this:

log-forward syslog

     # accept incomming UDP messages

     dgram-bind 10.1.2.50:514 transparent

     # log message into ring buffer

     log ring@logbuffer format rfc5424 local0

ring logbuffer

     description "buffer for syslog"

     format rfc5424

     maxlen 1200

     size 32764

     timeout connect 5s

     timeout server 10s

     # send outgoing messages via TCP

     server logserver1 10.1.2.44:1514 log-proto octet-count check

     #server logserver1 10.1.2.44:1514 log-proto octet-count check source 
0.0.0.0 usesrc clientip


The syslog messages are forwarded to the logserver1 10.1.2.44. Unfortunately 
some older Cisco switches did not send the hostname or IP address in the syslog 
packet.


Is there any chance to route the client IP though the ringbuffer to the 
logserver1?


As HAProxy does not handle the syslog protocol content, there isn't an option to add this 
info into the syslog message. A possible solution is to use, for these specific 
devices, a syslog receiver like fluentbit or rsyslog, which adds the information 
and forwards the log line to haproxy or to the destination server.


https://docs.fluentbit.io/manual/pipeline/inputs/syslog
https://docs.fluentbit.io/manual/pipeline/filters/record-modifier
https://docs.fluentbit.io/manual/pipeline/outputs

https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_input.html
https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_messagemod.html
https://www.rsyslog.com/doc/v8-stable/configuration/modules/idx_output.html

Just some ideas how to solve the issue.

The command *source* is not allowed in the *ring* section. If I uncomment the 
last line, no data is sent to the logserver1.


Best regards,

Sören Hellwig

Dipl.-Ing. (FH) technische Informatik


Best regards
Alex



Re: How to limit client body/upload size?

2023-10-23 Thread Aleksandar Lazic

Hi.

On 2023-10-17 (Di.) 16:46, Gilles Van Vlasselaer wrote:
Hi, we are currently migrating servers and decided to drop NGINX in 
favour of HAProxy, however we had issues in the past where people would 
bomb us with massive file uploads on some services. Is there an 
equivalent like nginx's 'client_max_body_size' directive?


For headers you can use this:
http://docs.haproxy.org/2.8/configuration.html#tune.http.maxhdr

I would try to use one of these.
http://docs.haproxy.org/2.8/configuration.html#7.3.6-req.body_len
http://docs.haproxy.org/2.8/configuration.html#7.3.6-req.body_size

Here is an example from the doc to check for a specific Content-Length:

http-request deny if METH_POST { req.hdr_cnt(Content-length) eq 0 }

or

http-request deny if METH_POST { req.body_len ge YOUR_MAX_BODY_LEN }
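
A rough sketch combining these (the 10 MB limit is only an illustration; note that the request body has to be buffered for req.body_len/req.body_size to see it):

```
defaults
    option http-buffer-request

frontend www
    bind :80
    # reject uploads larger than ~10 MB with a 413 response
    http-request deny deny_status 413 if { req.body_size gt 10485760 }
```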



Thanks in advance,

Gilles Van Vlasselaer


Regards
Alex



[PATCH] DOC: internal: filters: fix reference to entities.pdf

2023-10-22 Thread Aleksandar Lazic

Hi.

Here the patch to fix the filter.txt file.

Regards
AlexFrom 68bb30b6ad1b0ca5348a95219b09964aafe9ba36 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Sun, 22 Oct 2023 18:36:54 +0200
Subject: [PATCH] DOC: internal: filters: fix reference to entities.pdf

In doc/internals/api/filters.txt there was a reference to
doc/internals/entities.pdf which was deleted in the
past.
---
 doc/internals/api/filters.txt | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/doc/internals/api/filters.txt b/doc/internals/api/filters.txt
index eee74cf63..e48f2ba91 100644
--- a/doc/internals/api/filters.txt
+++ b/doc/internals/api/filters.txt
@@ -47,7 +47,9 @@ SUMMARY
 First of all, to fully understand how filters work and how to create one, it is
 best to know, at least from a distance, what is a proxy (frontend/backend), a
 stream and a channel in HAProxy and how these entities are linked to each other.
-doc/internals/entities.pdf is a good overview.
+In doc/internals/api/layers.txt is a good overview of the different layers in
+HAProxy and in doc/internals/muxes.pdf is described the flow between the
+different muxes.
 
 Then, to support filters, many callbacks has been added to HAProxy at different
 places, mainly around channel analyzers. Their purpose is to allow filters to
-- 
2.34.1



Re: Missing doc entities in doc/internals

2023-10-20 Thread Aleksandar Lazic

Hi Willy.

On 2023-10-20 (Fr.) 23:21, Willy Tarreau wrote:

Hi Alex,

On Fri, Oct 20, 2023 at 11:11:59PM +0200, Aleksandar Lazic wrote:

I can't find any doc about entities in the current git

alex@alex-tuxedoinfinitybooks1517gen7 on 20/10/2023 at 23:06:19
/datadisk/git-repos/haproxy $ find . -iname "*entities"*
alex@alex-tuxedoinfinitybooks1517gen7 on 20/10/2023 at 23:06:27
/datadisk/git-repos/haproxy $

What's my mistake to find this doc?


No mistake, this file was so much outdated that not any single box
on it existed in recent versions so it was about time for it to
be removed. It was a bit heartbreaking, but killing a 15-years old
doc because the architecture evolves is not that bad of a news :-)


I feel your pain, got similar with appsession years ago :-)


I suggest that you have a look at doc/internals/api/layers.txt first,
then doc/internals/muxes.pdf whicih show the lower layers in boxes
and what remains of the stream layer on top as the channel.


Thanks. Will send a patch to fix the doc.


Regards,
willy


Regards
Alex



Missing doc entities in doc/internals

2023-10-20 Thread Aleksandar Lazic

Hi.

As I go through filter.txt, this statement is now written there:
https://github.com/haproxy/haproxy/blob/master/doc/internals/api/filters.txt#L50C15-L50C23


```
First of all, to fully understand how filters work and how to create 
one, it is
best to know, at least from a distance, what is a proxy 
(frontend/backend), a
stream and a channel in HAProxy and how these entities are linked to 
each other.

doc/internals/entities.pdf is a good overview.
```

I can't find any doc about entities in the current git

alex@alex-tuxedoinfinitybooks1517gen7 on 20/10/2023 at 23:06:19 
/datadisk/git-repos/haproxy $ find . -iname "*entities"*
alex@alex-tuxedoinfinitybooks1517gen7 on 20/10/2023 at 23:06:27 
/datadisk/git-repos/haproxy $


What am I doing wrong in trying to find this doc?

Regards
Alex



Re: Some filter discussion for the future

2023-10-20 Thread Aleksandar Lazic

Hi.

FYI: I have created a repo for the rs filter, 
https://github.com/git001/hap-rs-filter. Feel free to 
participate/contribute :-)


Regards
Alex

On 2023-10-19 (Do.) 22:53, Aleksandar Lazic wrote:

Hi Tristan.

On 2023-10-17 (Di.) 10:51, Tristan wrote:

Hi Aleksandar,

That is a welcome follow-up to the tangent we went on in the announce 
thread.


Thanks :-)

As there was the discussion about the future of the SPOE filter, let 
me start a discussion about some possible filter options.


[...]

The question which I have is how difficult it is to add an http filter 
based on httpclient, similar to the SPOE or FCGI filter.


Another option is to add some language specific filter like 
haproxy-rs-api shown in this comment 
https://github.com/khvzak/mlua/issues/320#issuecomment-1762027351 .


I personally find the latter much more appealing. If only because the 
http client is "just" a much more restricted version of it.


httpclient was just something that came into my mind. Maybe it's a better 
approach to have a flt_http.c similar to flt_spoe.c; then we can use the 
full feature set of HAP within the backend section. ¯\_(ツ)_/¯


And since I was the first (in that thread, certainly not everywhere) 
to complain about the current language of choice for extending HAProxy 
(LUA), I have to say again that a target "language" like WASM sounds 
like an ideal selection:

- no need to pick/enforce/encourage a specific input language
- plenty of languages already compile to it, and likely to continue 
trending up since browsers support it


From my point of view WASM also looks very promising as a future 
technology, but also a little bit like hype, so let's see what time brings.


As I'm quite a newbie at WASM, I will mainly create an "echo all params" 
file in shell/perl/go/js or any other language and convert it to WASM :-).


The idea behind adding the http filter is that there are so many http-based 
tools out there, and with that HAProxy could use such tools based on 
http.


That is true, but needing an HTTP API + the loss in efficiency sounds 
a bit painful.

And very painful if the response isn't so easy to parse.
Thinking of cases where XML decoding becomes relevant, for example 
SAML-related ones which are common for auth-related matters still.


That's a valid argument.


Any opinion on that?


Well on my end I certainly want to see this too. That said Willy had a 
few counterpoints of relevance in that other thread that are worth 
addressing here:


 > WASM on the other hand would provide more performance and compile-time
 > checks but I fear that it could also bring new classes of issues such
 > as higher memory usage, higher latencies, and would make it less
 > convenient to deploy updates since these would require to be rebuilt.

I'd say first that there are interpreters (and JITs) so the rebuild is 
not necessary.
However, even if it was, I'm not sure that the buildless use-case has 
that much traction as long as the build doesn't have to happen on the 
LBs directly.
For example I don't remember seeing complaints that SPOEs essentially 
require a build step.


Yep. SPOE is fully integrated into HAP; that's what is so nice about that protocol.

 > Also we don't want to put too much of the application into the load 
balancer.


That's a much more fundamental question however. This is your project, 
not mine, so your call.


But I have to emphasize that one reason I use HAProxy is specifically 
because it's extremely configurable and allows me to offload a lot of 
application-related logic directly at the edge.


Full Ack.

In a more impersonal way, that is also a direction many are interested 
in in general. See things like 
https://blog.cloudflare.com/cloudflare-snippets-alpha/ which are 
essentially ACL-triggered filters in HAProxy terms.


One example case I see up and again is tee-ing a request, for various 
reasons:
- for silent A/B testing between 2 backends (ie tee to 1 control and 1 
test)
- for routing the request that triggers a cached response both to the 
cache and to something interested in it for statistics; so users gets 
fast response and you still ALSO get to count those requests


And of course that has concerns related to memory used for buffering 
the content if there are 2 targets and thus you can't purely stream 
through. But in some places it has applicability I think.


 > But as I said I haven't had a look at the details so I don't know
 > if we can yield like in Lua, implement our own bindings for internal
 > functions, or limit the memory usage per request being processed.

That is much more difficult for me to answer, so to save you some time 
these seem to be the 3 main C-embeddable runtimes at the time of writing:

- https://github.com/bytecodealliance/wasm-micro-runtime
- https://github.com/wasm3/wasm3
- https://github.com/wasmerio/wasmer

I had a look and however didn't see a way to control memory or force 
yielding... so it's not encouraging. But 

Re: Some filter discussion for the future

2023-10-19 Thread Aleksandar Lazic

Hi Tristan.

On 2023-10-17 (Di.) 10:51, Tristan wrote:

Hi Aleksandar,

That is a welcome follow-up to the tangent we went on in the announce 
thread.


Thanks :-)

As there was the discussion about the future of the SPOE filter, let 
me start a discussion about some possible filter options.


[...]

The question which I have is how difficult it is to add an http filter 
based on httpclient, similar to the SPOE or FCGI filter.


Another option is to add some language specific filter like 
haproxy-rs-api shown in this comment 
https://github.com/khvzak/mlua/issues/320#issuecomment-1762027351 .


I personally find the latter much more appealing. If only because the 
http client is "just" a much more restricted version of it.


httpclient was just something that came into my mind. Maybe it's a better 
approach to have a flt_http.c similar to flt_spoe.c; then we can use the 
full feature set of HAP within the backend section. ¯\_(ツ)_/¯


And since I was the first (in that thread, certainly not everywhere) to 
complain about the current language of choice for extending HAProxy 
(LUA), I have to say again that a target "language" like WASM sounds 
like an ideal selection:

- no need to pick/enforce/encourage a specific input language
- plenty of languages already compile to it, and likely to continue 
trending up since browsers support it


From my point of view WASM also looks very promising as a future 
technology, but also a little bit like hype, so let's see what time brings.


As I'm quite a newbie at WASM, I will mainly create an "echo all params" 
file in shell/perl/go/js or any other language and convert it to WASM :-).


The idea behind adding the http filter is that there are so many http-based 
tools out there, and with that HAProxy could use such tools based on http.


That is true, but needing an HTTP API + the loss in efficiency sounds a 
bit painful.

And very painful if the response isn't so easy to parse.
Thinking of cases where XML decoding becomes relevant, for example 
SAML-related ones which are common for auth-related matters still.


That's a valid argument.


Any opinion on that?


Well on my end I certainly want to see this too. That said Willy had a 
few counterpoints of relevance in that other thread that are worth 
addressing here:


 > WASM on the other hand would provide more performance and compile-time
 > checks but I fear that it could also bring new classes of issues such
 > as higher memory usage, higher latencies, and would make it less
 > convenient to deploy updates since these would require to be rebuilt.

I'd say first that there are interpreters (and JITs) so the rebuild is 
not necessary.
However, even if it was, I'm not sure that the buildless use-case has 
that much traction as long as the build doesn't have to happen on the 
LBs directly.
For example I don't remember seeing complaints that SPOEs essentially 
require a build step.


Yep. SPOE is fully integrated into HAP; that's what is so nice about that protocol.

 > Also we don't want to put too much of the application into the load 
balancer.


That's a much more fundamental question however. This is your project, 
not mine, so your call.


But I have to emphasize that one reason I use HAProxy is specifically 
because it's extremely configurable and allows me to offload a lot of 
application-related logic directly at the edge.


Full Ack.

In a more impersonal way, that is also a direction many are interested 
in in general. See things like 
https://blog.cloudflare.com/cloudflare-snippets-alpha/ which are 
essentially ACL-triggered filters in HAProxy terms.


One example case I see up and again is tee-ing a request, for various 
reasons:
- for silent A/B testing between 2 backends (ie tee to 1 control and 1 
test)
- for routing the request that triggers a cached response both to the 
cache and to something interested in it for statistics; so users gets 
fast response and you still ALSO get to count those requests


And of course that has concerns related to memory used for buffering the 
content if there are 2 targets and thus you can't purely stream through. 
But in some places it has applicability I think.


 > But as I said I haven't had a look at the details so I don't know
 > if we can yield like in Lua, implement our own bindings for internal
 > functions, or limit the memory usage per request being processed.

That is much more difficult for me to answer, so to save you some time 
these seem to be the 3 main C-embeddable runtimes at the time of writing:

- https://github.com/bytecodealliance/wasm-micro-runtime
- https://github.com/wasm3/wasm3
- https://github.com/wasmerio/wasmer

I had a look and however didn't see a way to control memory or force 
yielding... so it's not encouraging. But maybe I missed it.


Well, that's another option: to add a C-based wasm runtime into the addons 
directory, similar to promex.



 > During the Lua integration we used to say that it would teach us
 > new use cases that we're not aware of and that could ultimately

Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Aleksandar Lazic



On 2023-10-16 (Mo.) 20:12, Lukas Tribus wrote:

On Mon, 16 Oct 2023 at 19:41, Aleksandar Lazic  wrote:




On 2023-10-16 (Mo.) 19:29, Илья Шипицин wrote:

Does 1.8 support http/2?


No.


Actually haproxy 1.8 supports H2 (without implementing HTX), as per
the documentation and announcements:

https://www.mail-archive.com/haproxy@formilux.org/msg28004.html
http://docs.haproxy.org/1.8/configuration.html#5.1-alpn


It does so by downgrading H2 to HTTP/1.1.


I don't know whether haproxy 1.8 actually is affected by the rapid
reset vulnerability or not. I suppose it's possible.


Well, as far as I have understood the attack properly, the request is in 
HTTP/2 mode and stays in that mode, which isn't the case in 1.8. As you 
already mentioned, in 1.8 the HTTP/2 request was "converted" into HTTP/1, 
and 1.9 is the first version which supports end-to-end HTTP/2.


To be more precise, here is the quote from the above announcement:

```

  - HTTP/2 will not schedule a graceful connection shutdown anymore when
seeing a "Connection: close" header in a response. Instead a new HTTP
action "reject" has been implemented to work like its TCP counter-part.
```

This implies that the connection does not stay open and the attack could 
not work.

But maybe there is a better explanation why 1.8 is not affected.


Lukas


Regards
Alex



Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Aleksandar Lazic

Hi .

On 2023-10-16 (Mo.) 19:55, Ryan O'Hara wrote:
I wondered exactly the same thing, but then saw this on the haproxy.org 
website:


"version 1.8 : multi-threading, HTTP/2, cache, on-the fly server 
addition/removal, seamless reloads, DNS SRV, hardware SSL engines, ..."


I know that haproxy-1.9 added end-to-end HTTP/2, so is that the 
determining factor here? Many thanks.


Oh, you are right. 1.8 was the first one with mux_h2.c in the tree. It was 
the first version with some first steps into the HTTP/2 world. From my point 
of view, the statements from the HAProxy.com blog are quite accurate about 
why 1.8 is not affected by that CVE.



Ryan


Regards
Alex

On Mon, Oct 16, 2023 at 12:41 PM Aleksandar Lazic <al-hapr...@none.at> wrote:




On 2023-10-16 (Mo.) 19:29, Илья Шипицин wrote:
 > Does 1.8 support http/2?

No.

 >     On Mon, Oct 16, 2023, 18:58 Ryan O'Hara <roh...@redhat.com> wrote:
 >
 >     Hi all.
 >
 >     I read the most recent HAProxy Newsletter, specifically the
 >     article "HAProxy is Not Affected by the HTTP/2 Rapid Reset Attack"
 >     by Nick Ramirez [1]. This article states that HAProxy versions 1.9
 >     and later are *not* affected, which is great. This implies that
 >     haproxy-1.8 *is* affected, but it also doesn't come right out and
 >     say that. I understand haproxy-1.8 is EOL, but do we know for
 >     certain that haproxy-1.8 is affected or not? Asking for a reason.
 >
 >     And shout-out to Nick for writing such a great article! Thank
you, Nick!
 >
 >     Ryan
 >
 >     [1]
 >     https://www.haproxy.com/blog/haproxy-is-not-affected-by-the-http-2-rapid-reset-attack-cve-2023-44487
 >





Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Aleksandar Lazic




On 2023-10-16 (Mo.) 19:29, Илья Шипицин wrote:

Does 1.8 support http/2?


No.

On Mon, Oct 16, 2023, 18:58 Ryan O'Hara wrote:


Hi all.

I read the most recent HAProxy Newsletter, specifically the
article "HAProxy is Not Affected by the HTTP/2 Rapid Reset Attack"
by Nick Ramirez [1]. This article states that HAProxy versions 1.9
and later are *not* affected, which is great. This implies that
haproxy-1.8 *is* affected, but it also doesn't come right out and
say that. I understand haproxy-1.8 is EOL, but do we know for
certain that haproxy-1.8 is affected or not? Asking for a reason.

And shout-out to Nick for writing such a great article! Thank you, Nick!

Ryan

[1]

https://www.haproxy.com/blog/haproxy-is-not-affected-by-the-http-2-rapid-reset-attack-cve-2023-44487
 






Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Aleksandar Lazic

Hi Ryan.

On 2023-10-16 (Mo.) 17:49, Ryan O'Hara wrote:

Hi all.

I read the most recent HAProxy Newsletter, specifically the article 
"HAProxy is Not Affected by the HTTP/2 Rapid Reset Attack" by Nick 
Ramirez [1]. This article states that HAProxy versions 1.9 and later 
are *not* affected, which is great. This implies that haproxy-1.8 *is* 
affected, but it also doesn't come right out and say that. I understand 
haproxy-1.8 is EOL, but do we know for certain that haproxy-1.8 is 
affected or not? Asking for a reason.


Well, HTX, which was the transition to HTTP/2, was implemented in 1.9, 
which is the reason why 1.8 is not affected.


https://www.haproxy.com/blog/haproxy-1-9-has-arrived


And shout-out to Nick for writing such a great article! Thank you, Nick!

Ryan


Regards
Alex

[1] 
https://www.haproxy.com/blog/haproxy-is-not-affected-by-the-http-2-rapid-reset-attack-cve-2023-44487




Some filter discussion for the future

2023-10-14 Thread Aleksandar Lazic

Hi.

As there was the discussion about the future of the SPOE filter, let me 
start a discussion about some possible filter options.


As far as I know we have these filters.

Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace

There is also the httpclient in HAProxy, which looks quite mature, from 
my point of view.
The question I have is how difficult it would be to add an HTTP filter 
based on the httpclient, similar to the SPOE or FCGI filters.


Another option is to add a language-specific filter like the 
haproxy-rs-api shown in this comment: 
https://github.com/khvzak/mlua/issues/320#issuecomment-1762027351 .


The idea behind adding the HTTP filter is that there are so many HTTP-based 
tools out there, and with it HAProxy could make use of such tools over HTTP.


I know it's too late for 2.9, but let's start the discussion for future 
versions of HAProxy. I know that HTTP is not the most efficient protocol 
in the world, but it is widely used and this opens up a lot of possible 
filters for HAProxy. I also know that every HTTP implementation has its 
quirks, but it's still one of the most used protocols for now.


Any opinion on that?

Regards
Alex



Re: HA Proxy

2023-10-13 Thread Aleksandar Lazic

Hi Mohammed.

Yes, HAProxy supports all of the requested capacity and features below. 
For a nice example of what HAProxy is able to handle, you can read this 
blog post: 
https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance


The very detailed documentation can be found on the web at 
https://docs.haproxy.org/ or in the source repository under the doc 
directory: 
https://git.haproxy.org/?p=haproxy.git;a=tree;f=doc;h=9a53977a683fd7e80f23fff2a18ef192ca908636;hb=HEAD


There are very good examples and explanations of HAProxy features on the 
HAProxy.com blog page https://www.haproxy.com/blog and you can also find 
some examples with your favorite search engine. Please take care that some 
search results refer to previous HAProxy versions which are not maintained 
anymore; this means that the solution you find may work as-is or may need 
some rework for the current versions.


HAProxy comes in two versions: the open-source one and the Enterprise one.

If your company wants support and is willing to pay for it, you can get 
in touch with HAProxy sales via the contact form at 
https://www.haproxy.com/contact-us for the HAProxy Enterprise version: 
https://www.haproxy.com/products/haproxy-enterprise.
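
As a starting point, here is a hedged sketch of a configuration covering several 
of the requested items (cookie-based persistence, HTTP GET health checks, 
least-connections balancing, long idle timeouts and SSL bridging). All addresses, 
paths and timeouts below are placeholders, not a recommendation:

```
defaults
    mode http
    timeout connect 5s
    # "idle (execution) timeout of at least 4 hours" from the requirements
    timeout client  4h
    timeout server  4h

frontend fe_app
    # SSL bridging/offload: TLS is terminated here (layer 7 routing)
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend be_app

backend be_app
    balance leastconn
    # session persistence (stickiness) via an inserted cookie
    cookie SRV insert indirect nocache
    # health monitoring with an HTTP GET request
    option httpchk GET /health
    # re-encrypt towards the application servers (SSL bridging)
    server app1 10.0.0.11:8443 ssl verify none check cookie app1
    server app2 10.0.0.12:8443 ssl verify none check cookie app2
```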


Hth with best regards

Alex

On 2023-10-13 (Fr.) 09:41, Mohammed Anees A wrote:


Hi Team

We have a requirement for a software-based NLB to load balance an 
enterprise application.


Following are the required capacity and features of the NLB. Please 
confirm whether HAProxy supports the below capacity and features, and let 
us know the licensing model and support structure.


Capacity :

  * Requests per Second =  5000 RPS
  * Concurrent Connections = 5000 Concurrent Sessions.
  * Throughput = 40 Mbps

Features :

 1. *Routing Profile *

Routing profile can be TCP based (layer 4) or HTTP based (layer 7).


 2. *Load Balancing Method*

All load balancing methods are supported. It is recommended to use 
Least Connections or Round Robin load balancing methods, for better 
distribution between Application servers.


 3. *Session persistence (stickiness)*

The LB must be configured with session persistence to enable a session 
connection with the same application server instance. Configure 
session persistence in all levels of load balancing (for example, if 
there is a global load balancer in front of a few local load balancers).


To achieve session persistence, configure the LB with one of the 
following persistence profiles:


  * HTTP Cookie
  * Client IP (Source address)

 4. *Health monitoring*


An important property of an LB is the ability to perform health 
monitoring checks (heartbeats) on each Application server. By using 
health monitors, the LB verifies the server response or checks for any 
network problems that can prevent a client from reaching a server. By 
doing so, the LB can place the server in or out of service and can 
make reliable load-balancing and high availability decisions.


A common and recommended health monitor is *HTTP GET Request*.

 5. *Idle (execution) timeout*

Setting the execution timeout controls termination of idle 
connections. Configure an execution timeout of at least 4 hours.


 6. *HTTPS Configuration*

The load balancer supports several HTTPS configuration methods.

These include:

  * SSL bridging
  * SSL offload
  * SSL pass-through

SSL bridging and SSL offload are supported in HTTP based routing 
(layer 7), and require deploying TLS certificate on the LB. SSL 
pass-through is supported in TCP based routing (layer 4), and does not 
require deploying a certificate on the LB.


Regards

Mohammed Anees

+91 9944170656


Re: [ANNOUNCE] haproxy-2.9-dev7

2023-10-10 Thread Aleksandar Lazic

Hi.

On 2023-10-10 (Di.) 09:08, Willy Tarreau wrote:

Hi Tristan,

On Sun, Oct 08, 2023 at 12:15:00PM +, Tristan wrote:

Since this was brought up,


On 7 Oct 2023, at 14:34, Willy Tarreau  wrote:

[...]


Maybe this will then bring up SPOE to a level where the body of a request
can be scanned and bring it to a full WAF level or as WASM filter.


Any thoughts on the feasibility of a WASM based alternative to the current
LUA platform?

 From what I looked there are a few WASM runtimes set up for being embedded in
C applications, though I'm not expert enough on the tradeoffs of each to know
if there are dealbreakers.


I've never had a look yet. I can understand there are pros and cons.
When we added Lua, the goal was to be able to script a little bit more
what was too complicated to implement in rules. And I must say this has
served its purpose well, with dashboards, let's encrypt, authentication,
session management and whatever being done in Lua. Scripting languages
have a great advantage in field, they're easy to adapt or fix. Granted
Lua's syntax is not exactly what I would call awesome, but it's modular
and extensible enough to allow to do lots of things easily and at a low
execution cost.

WASM on the other hand would provide more performance and compile-time
checks but I fear that it could also bring new classes of issues such as
higher memory usage, higher latencies, and would make it less convenient
to deploy updates since these would require to be rebuilt. Also we don't
want to put too much of the application into the load balancer. But as I
said I haven't had a look at the details so I don't know if we can yield
like in Lua, implement our own bindings for internal functions, or limit
the memory usage per request being processed.


Hm, how could WASM be integrated into HAProxy if not with SPOE? I don't 
have any idea right now what the best way could be.


Willy, please take a stable seat :-)

How about using HTTP/(1/2/3), gRPC or FCGI as filter protocols, to be 
able to handle the body, instead of SPOE?



One option could be, as Alex suggested, to move that to an external
agent accessed via SPOE, but I must confess that I'm having an issue
with that: Since I drafted the basic needs in 2016 and Christopher
implemented a first experimental and limited version the same year, it
has not really taken off. It has become a chicken-and-egg problem. It
doesn't support streaming yet so it's not used by content inspection/
wafs/image compressors/on-disk caches etc, so it basically sees zero
adoption. And since it sees zero adoption, it has never been on anyone's
priority list to rework it. Such a rework does require particular knowledge
of the internals and good architectural skills to be able to implement a
v2 that would address all the current design's shortcomings by relying on
the muxes and idle connections, but the rare people who are able to work
on such a thing among the core team are constantly busy on much more
useful and important stuff, and I doubt anyone would have any interest
in working on this thankless thing.

So I feel like it's here to stay with its design limitations making it
unsuitable to many of the tasks it was imagined for, and that it could
actually be much less effort to simply remove it. Of course that's not
something to do between an odd and an even version, but maybe it's not
even too late to drop it from 2.9 if nobody cares anymore.


Well, this could be an option, from my point of view.

@Community: Could you be so kind as to tell us for which use cases you use 
SPOE, similar to Norman ( 
https://www.mail-archive.com/haproxy@formilux.org/msg44127.html ), and 
how big the effort would be to migrate to a Lua filter?



Or to put it in a blunt way: does anyone want that 2.9 still supports
SPOE whose necessary redesign never happened in 7 years despite trying
to find time for this, and will likely never happen ? Or can we just
remove it ? I have nothing against preserving it a little bit more if
there really are users, but it would be nice if their use cases,
successes or issues were known, and even more if the effort could be
spread over multiple persons.


I think it would be nice if the use cases that rely on SPOE were written 
down in 
https://github.com/haproxy/wiki/wiki/SPOE:-Stream-Processing-Offloading-Engine 
to see how often this feature is used in HAProxy.




I also realize that a lot of work went into the current LUA support (a look
at the frighteningly long .c file for it speaks volumes).


My understanding is that many of the recent changes were attempts to
address certain design limitations and dirty corner cases.


But on one hand I find it rather difficult to use correctly in its current
state, in part because of the complete absence (to my knowledge) of something
equivalent to C headers for validation ahead of deployment, and also in part
(and more personally) because I never understood what anyone could possibly
like about LUA itself...


I don't think 

Re: [ANNOUNCE] haproxy-2.9-dev7

2023-10-08 Thread Aleksandar Lazic



On 2023-10-08 (So.) 14:15, Tristan wrote:

Since this was brought up,


On 7 Oct 2023, at 14:34, Willy Tarreau  wrote:

[…]


Maybe this will then bring up SPOE to a level where the body of a request
can be scanned and bring it to a full WAF level or as WASM filter.


Any thoughts on the feasibility of a WASM based alternative to the current LUA
platform?

From what I looked there are a few WASM runtimes set up for being embedded in C
applications, though I’m not expert enough on the tradeoffs of each to know if
there are dealbreakers.

I also realize that a lot of work went into the current LUA support (a look at 
the frighteningly long .c file for it speaks volumes).

But on one hand I find it rather difficult to use correctly in its current 
state,
in part because of the complete absence (to my knowledge) of something 
equivalent
to C headers for validation ahead of deployment, and also in part (and more
personally) because I never understood what anyone could possibly like about LUA
itself…


There are at least 2 issues about the topic WASM and body handling of SPOE.
https://github.com/haproxy/haproxy/issues/1482
https://github.com/haproxy/haproxy/issues/913

From my point of view it would be very helpful if SPOE could handle
the body, but I think this is a huge change, as there should also be some
protection against internal DoS for that topic. The benefit would be that
such a feature could open up more languages within the WASM context, with
all their pros and cons.



[…]


Are there any plans to have something similar to XDS (
https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol ) for
dynamic configs at runtime, similar to the socket api and Data Plane API?


I used to have such plans a long time ago and even participated to a
few udpapi meetings. But at this point I think it's not haproxy's job
to perform service discovery


And that’s a very fair point. I wonder however how feasible it will 
realistically
be from dpapi’s perspective to add that to its remit.

That said I’d definitely be very interested as well. As much as handcrafted
configurations are nice, one quickly reaches their maintainability limits. And 
if
we’re to stop abusing DNS again and again, proper service discovery is the way.


I think that the DNS stuff should be kept there and maybe enhanced, as
it looks to me like some new security topics are using DNS more and more,
like ESNI, ECH, SVCB, ...

Jm2c


Tristan


Regards
Alex



Re: [ANNOUNCE] haproxy-2.9-dev7

2023-10-07 Thread Aleksandar Lazic

Hi Willy.

On 2023-10-07 (Sa.) 14:45, Willy Tarreau wrote:

Hi Alex,

On Sat, Oct 07, 2023 at 01:51:43PM +0200, Aleksandar Lazic wrote:

Hi Willy.

On 2023-10-07 (Sa.) 10:26, Willy Tarreau wrote:

Hi,

HAProxy 2.9-dev7 was released on 2023/10/06. It added 75 new commits
after version 2.9-dev6.

This version fixes a number of issues in previous development releases
and prepares the work for subsequent patch series:


[snip]


- the post-parsing checks for the "mode" keyword were all revisited not
  to consider anymore that TCP and HTTP were mutual opposites. This will
  make it easier to bring new modes.


Does this imply that QUIC config raises a warning, as QUIC is based on UDP?


No, not at all, QUIC is still HTTP regarding this. The UDP part is not
visible at all there, it's really the lowest layer of QUIC.


Great, thanks :-)


Just out of curiosity, which modes do you have in mind?


We figured that the best way to address the log servers checks was in
fact to group them together with their health check method etc... this
looked like a backend. We realized that we didn't feel brave enough when
the log forwarders were implemented to have a "mode log" and that it's
what ought to have been done. It was still not too late to have one
for the backend. Maybe in the future log forwarders will just become
regular frontends, I don't know. Anyway, in the past we've had this
need a few times, for DNS, SPOE. If you remember, ~15 years ago I used
to mention FTP and SMTP as possible modes as well, though these ideas
had long been abandonned. Now the code is clean and ready to welcome
new modes if we figure we need new ones.


Oh yes, the good old times :-)

Hm, this makes me think of also having some modes like mysql, mqtt or 
whatever, to use for monitoring or any other purpose.
Maybe this will then bring SPOE up to a level where the body of a 
request can be scanned, bringing it to a full WAF level or usable as a WASM filter.


Are there any plans to have something similar to XDS ( 
https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol ) for 
dynamic configs at runtime, similar to the socket api and Data Plane API?



Cheers,
Willy


Regards
Alex



Re: [ANNOUNCE] haproxy-2.9-dev7

2023-10-07 Thread Aleksandar Lazic

Hi Willy.

On 2023-10-07 (Sa.) 10:26, Willy Tarreau wrote:

Hi,

HAProxy 2.9-dev7 was released on 2023/10/06. It added 75 new commits
after version 2.9-dev6.

This version fixes a number of issues in previous development releases
and prepares the work for subsequent patch series:


[snip]


   - the post-parsing checks for the "mode" keyword were all revisited not
 to consider anymore that TCP and HTTP were mutual opposites. This will
 make it easier to bring new modes.


Does this imply that QUIC config raises a warning, as QUIC is based on UDP?
Just out of curiosity, which modes do you have in mind?

[snip rest]

Best regards
Alex



Re: Patch sample_conv_json_query in sample.c to return array values

2023-09-15 Thread Aleksandar Lazic

Dear Jens.

Please can you create a patch as mentioned in 
https://github.com/haproxy/haproxy/blob/master/CONTRIBUTING as suggested 
in https://github.com/haproxy/haproxy/issues/2281#issuecomment-1721014384
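
For the archive, a hedged sketch of how such an array result could then be 
consumed in a configuration, assuming the proposed change lands (the claim 
names and JSON path below are purely illustrative):

```
# needs "option http-buffer-request" so that req.body is usable
http-request set-var(txn.roles) req.body,json_query('$.realm_access.roles')
# with the proposed patch the variable holds the whole array as a string,
# e.g. ["manage-account","view-profile"], so a substring match works
http-request deny unless { var(txn.roles) -m sub "view-profile" }
```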


Regards
Alex

On 2023-09-15 (Fr.) 14:57, Jens Popp wrote:

Hi,

currently the method sample_conv_json_query in sample.c returns an empty value 
if the given JSON path leads to a JSON array. There are multiple use cases where 
you need to check the content of an array, e.g. if the array contains a list 
of roles and you want to check whether the array contains a certain role (for 
OIDC). I propose the simple fix below, to copy the complete array (including 
brackets) into the result of the function:

...(Line 4162)
		case MJSON_TOK_ARRAY:
			// We copy the complete array, including square brackets, into the return buffer
			// result looks like: ["manage-account","manage-account-links","view-profile"]
			strncpy(trash->area, token, token_size);
			trash->data = token_size;
			trash->size = token_size;
			smp->data.u.str = *trash;
			smp->data.type = SMP_T_STR;
			return 1;
		case MJSON_TOK_NULL:

... (currently Line 4164)

If possible I would also like to fix this in current stable release 2.8.

Changes are also in my fork,

https://github.com/jenspopp/haproxy/blob/master/src/sample.c#L4162-L4171

Any comment / help is appreciated.

Best regards
Jens


An Elisa camLine Holding GmbH company - www.camline.com


camLine GmbH - Fraunhoferring 9, 85238 Petershausen, Germany
Amtsgericht München HRB 88821
Managing Directors: Frank Bölstler, Evelyn Tag, Bernhard Völker


The content of this message is CAMLINE CONFIDENTIAL. If you are not the 
intended recipient, please notify me, delete this email and do not use or 
distribute this email.








Re: HAProxy and musl (was: Re: HAproxy Error)

2023-09-14 Thread Aleksandar Lazic

Hi.

Resurrecting this old thread with a musl libc update.

https://musl.libc.org/releases.html

```
musl-1.2.4.tar.gz (sig) - May 1, 2023

This release adds TCP fallback to the DNS stub resolver, fixing the 
longstanding inability to query large DNS records and incompatibility 
with recursive nameservers that don't give partial results in truncated 
UDP responses. It also makes a number of other bug fixes and 
improvements in DNS and related functionality, including making both the 
modern and legacy API results differentiate between NODATA and NxDomain 
conditions so that the caller can handle them differently.




```

Regards
Alex


On 2020-04-16 (Do.) 13:26, Willy Tarreau wrote:

On Thu, Apr 16, 2020 at 12:29:42PM +0200, Tim Düsterhus wrote:

FWIW musl seems to work OK here when building for linux-glibc-legacy.


Yes. HAProxy linked against Musl is smoke tested as part of the Docker
Official Images program, because the Alpine-based Docker images use Musl
as their libc. In fact you can even use TARGET=linux-glibc + USE_BACKTRACE=.


By the way, I initially thought I was the only one building with musl
for my EdgeRouter-x that I'm using as a distcc load balancer for the
build farm at work. But if there are other users, we'd rather add
a linux-musl target, as the split between OS and library was precisely
made for this purpose!

Anyone objects against something like this (+ the appropriate entries
in other places and doc) ?


diff --git a/Makefile b/Makefile
index d5841a5..a3dad36 100644
--- a/Makefile
+++ b/Makefile
@@ -341,6 +341,18 @@ ifeq ($(TARGET),linux-glibc-legacy)
  USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_GETADDRINFO)
  endif
  
+# For linux >= 2.6.28 and musl

+ifeq ($(TARGET),linux-musl)
+  set_target_defaults = $(call default_opts, \
+USE_POLL USE_TPROXY USE_LIBCRYPT USE_DL USE_RT USE_CRYPT_H USE_NETFILTER  \
+USE_CPU_AFFINITY USE_THREAD USE_EPOLL USE_FUTEX USE_LINUX_TPROXY  \
+USE_ACCEPT4 USE_LINUX_SPLICE USE_PRCTL USE_THREAD_DUMP USE_NS USE_TFO \
+USE_GETADDRINFO)
+ifneq ($(shell echo __arm__/__aarch64__ | $(CC) -E -xc - | grep '^[^\#]'),__arm__/__aarch64__)
+  TARGET_LDFLAGS=-latomic
+endif
+endif
+
  # Solaris 8 and above
  ifeq ($(TARGET),solaris)
# We also enable getaddrinfo() which works since solaris 8.

Willy




Re: HaProxy does not updating DNS cache

2023-09-13 Thread Aleksandar Lazic

Hi.

On 2023-09-13 (Mi.) 14:39, Henning Svane wrote:

Hi

I have tried using a DNS with a TTL of 600 sec. and the DNS changes once 
in a while, but every time I have to restart Haproxy to get the updated 
DNS to work.


Even if I wait for hours. I can see with nslookup that the server can 
see the updated DNS correctly.


So is there a setting that makes HAProxy TTL-aware, so that HAProxy 
re-resolves the DNS record every time the TTL expires?


Please always add the output of `haproxy -vv`, thanks.
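
In the meantime, here is a hedged sketch of the runtime DNS resolution setup 
(HAProxy re-resolves at runtime through a resolvers section, on its own 
schedule, rather than needing a restart; all names and addresses below are 
placeholders):

```
resolvers mydns
    nameserver dns1 10.0.0.53:53
    hold valid 10s

backend be_app
    # the FQDN is re-resolved at runtime via the resolvers section above
    server app1 app.example.com:443 resolvers mydns init-addr last,libc,none check
```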


Regards

Henning


Regards
Alex



Re: how to upgrade haproxy

2023-08-28 Thread Aleksandar Lazic

Hi.


On 2023-08-28 (Mo.) 22:30, Atharva Shripad Dudwadkar wrote:

Hi Haproxy team,

Can we install HAProxy from source code on Ubuntu 20.04, and how?


You can follow the INSTALL file to compile HAProxy:

https://git.haproxy.org/?p=haproxy.git;a=blob;f=INSTALL;h=8492a4f37208a6099629101466fec3378a28e73c;hb=HEAD

Regards
Alex

On Thu, 24 Aug 2023 at 4:00 PM, Aleksandar Lazic <mailto:al-hapr...@none.at>> wrote:


Hi Atharva Shripad Dudwadkar.

On 2023-08-24 (Do.) 12:08, Willy Tarreau wrote:
 > Hi,
 >
 > On Thu, Aug 24, 2023 at 03:23:59PM +0530, Atharva Shripad
Dudwadkar wrote:
 >> Hi haproxy Team,
 >>
 >> Can you please help me with the upgrading process regarding
haproxy from
 >> 2.0.7 to 2.5. in RHEL. Could you please share with me upgrading
process?
 >
 > Please note that 2.5 is no longer supported, it was a short-lived
 > version. You should consider upgrading to a long term supported one
 > to replace your 2.0, these are 2.4, 2.6 or 2.8. Please look at the
 > packages here for various distros and from various maintainers:
 >
 > https://github.com/haproxy/wiki/wiki/Packages
<https://github.com/haproxy/wiki/wiki/Packages>

In addition to that site, you can also open a RH case and ask the vendor
if there is an updated package, in case you expect some support for the
RHEL package :-).

https://access.redhat.com/support/cases/
<https://access.redhat.com/support/cases/>

 > Regards,
 > Willy

Regards
Alex

--
Sahil Shripad Dudwadkar Sent from iphone




Re: [ANNOUNCE] haproxy-2.9-dev4

2023-08-25 Thread Aleksandar Lazic

Hi.

On 2023-08-25 (Fr.) 19:35, Willy Tarreau wrote:

Hi,

HAProxy 2.9-dev4 was released on 2023/08/25. It added 59 new commits
after version 2.9-dev3.

Some interesting new stuff continues to arrive in this version:



[snipp]


   - reverse HTTP: see below for a complete description. I hope it will
 answer Alex's question :-)


Thank you :-)


   - xxhash was updated to 0.8.2 (we were on 0.8.1) because it fixes a
 build issue on ppc64le.

   - various doc/regtest/CI updates as usual.

Now, regarding reverse HTTP: that's a feature that we've been repeatedly
asked for over the last decade, constantly responding "not possible yet".
But with the flexibility of the current architecture, it appeared that
there was no more big show-stopper and it was about time to respond to
this demand. What is this ? The principle is to permit a server to
establish a connection to haproxy, then to switch the connection
direction on both sides, so that haproxy can send requests to that
server. There was a trend around this 20 years ago on HTTP/1 and it
didn't work well, to be honest. And we were counting on H2 to do that
because it allows to multiplex streams over a connection and to reset
a stream without breaking a connection.


[snipp good explanation]

Looks like that "Reverse HTTP Transport" will be only possible with H2 & 
H3 for now, right. This looks then to me that quic + H3 will be 
implemented also for server as "proto h3", right?


Will HAProxy be the first one which will have this or is there anybody 
else which have also implemented this into there SW?


Regards
Alex



Please what is 'new protocol named "reverse_connect"' for?

2023-08-24 Thread Aleksandar Lazic

Hi.

I have just seen some commits about a protocol for active reverse connect 
and asked myself what the main use case for that protocol could be. As far 
as I have seen it is for now about H2 settings, but I'm not sure if I 
understood the commits right.


Regards
Alex



Re: how to upgrade haproxy

2023-08-24 Thread Aleksandar Lazic

Hi Atharva Shripad Dudwadkar.

On 2023-08-24 (Do.) 12:08, Willy Tarreau wrote:

Hi,

On Thu, Aug 24, 2023 at 03:23:59PM +0530, Atharva Shripad Dudwadkar wrote:

Hi haproxy Team,

Can you please help me with the upgrading process regarding haproxy from
2.0.7 to 2.5. in RHEL. Could you please share with me upgrading process?


Please note that 2.5 is no longer supported, it was a short-lived
version. You should consider upgrading to a long term supported one
to replace your 2.0, these are 2.4, 2.6 or 2.8. Please look at the
packages here for various distros and from various maintainers:

 https://github.com/haproxy/wiki/wiki/Packages


In addition to that site, you can also open a RH case and ask the vendor 
if there is an updated package, in case you expect some support for the 
RHEL package :-).


https://access.redhat.com/support/cases/


Regards,
Willy


Regards
Alex



Re: WebTransport support/roadmap

2023-08-17 Thread Aleksandar Lazic

Hi.

On 2023-08-17 (Do.) 10:14, Artur wrote:

Feature request submitted: https://github.com/haproxy/haproxy/issues/2256


Thank you. I have added a simple picture based on your E-Mails, hope I 
have understood your request properly.


Regards
Alex



Re: WebTransport support/roadmap

2023-08-16 Thread Aleksandar Lazic

Hi.

On 2023-08-16 (Mi.) 17:29, Artur wrote:

Hello !

I wonder if there is a roadmap to support WebTransport protocol in haproxy.

There are some explanations/references (if needed) from socket.io dev 
team that started to support it :


https://socket.io/get-started/webtransport


Looks like that's WebSocket for UDP/QUIC, simply because the WebSocket 
protocol does not work over QUIC, imho.


Cite from https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-http2/

```
By relying only on generic HTTP semantics, this protocol might allow 
deployment using any HTTP version. However, this document only defines 
negotiation for HTTP/2 [HTTP2] as the current most common TCP-based 
fallback to HTTP/3.

```

Please can you open a Feature request on 
https://github.com/haproxy/haproxy/issues so that anybody, maybe you 
:-), can pick it and implement it.


When I look back at what a nightmare WebSocket was to implement across the 
different versions, this variant for QUIC will not be much easier, from my 
point of view.


Jm2c


--
Best regards,
Artur


Regards
Alex



Re: Problems using custom error files with HTTP/2

2023-08-07 Thread Aleksandar Lazic

Hi.

On 2023-08-07 (Mo.) 18:35, Nick Wood wrote:

Hello all,


I'm not sure if anything further happened with this, but after upgrading 
from 2.6 to 2.8.1, custom pages are now broken by default over HTTP/2.


Please can you specify in more detail what you mean by "broken by default"?

What does not work anymore?
What's your config?
Is the custom page also broken when you activate H2 on 2.6?

Has HTTP/2 support been enabled by default? If so how would one turn it 
off so we don't have to downgrade back to v2.6?


The 2.8 announcement describes how to deactivate H2:
https://www.mail-archive.com/haproxy@formilux.org/msg43600.html

```
- HTTP/2 is advertised by default in ALPN on TLS listeners. It was about
  time, 5 years have passed since it was introduced, it's been enabled by
  default in clear text as an HTTP/1 upgrade for 4 years, yet some users
  do not know how to enable it. From now on, ALPN defaults to "h2,http/1.1"
  on TCP and "h3" on QUIC so that these protocol versions work by default.
  It's still possible to set/reset the ALPN to disable them of course. The
  old concern some users were having about window sizes was addressed by
  having a setting for each side (front vs back).
```

And here is the doc link to the alpn keyword:
http://docs.haproxy.org/2.8/configuration.html#5.1-alpn
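
For example, a minimal, hedged sketch of a bind line that only advertises 
HTTP/1.1 and therefore does not offer H2 anymore (the certificate path is a 
placeholder):

```
frontend fe_https
    # only HTTP/1.1 is advertised via ALPN, so H2 is effectively disabled here
    bind :443 ssl crt /etc/haproxy/site.pem alpn http/1.1
```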


Thanks,

Nick


Regards
Alex


On 17/04/2023 15:09, Aleksandar Lazic wrote:



On 17.04.23 15:08, Willy Tarreau wrote:

On Mon, Apr 17, 2023 at 03:04:05PM +0200, Lukas Tribus wrote:

On Sat, 15 Apr 2023 at 23:08, Willy Tarreau  wrote:


On Sat, Apr 15, 2023 at 10:59:42PM +0200, Willy Tarreau wrote:

Hi Nick,

On Sat, Apr 15, 2023 at 09:44:32PM +0100, Nick Wood wrote:
And here is my configuration - I've slimmed it down to the 
absolute minimum

to reproduce the problem:

If the back end is down, the custom 503.http page should be served.

This works on HTTP/1.1 but not over HTTP/2:


Very useful, thank you. In fact it's irrelevant to the errorfile but
it's the 503 that is not produced in this case. I suspect that it's
interpreted on the server side as only a retryable connection error
and that if the HTTP/1 client had faced it on its second request it
would have been the same (in H1 there's a special case for the first
request on a connection, that is not automatically retryable, but
after the first one we have the luxry of closing silently to force
the client to retry, something that H2 supports natively).

I'm still trying to figure when this problem appeared, and it looks
like even 2.4.0 did behave like this. I'm still digging.


And indeed, this issue appeared with this commit in 1.9-dev10 4 
years ago:


   746fb772f ("MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new 
conn_streams.")


So it makes h2 behave like the second and more H1 requests which 
are silent
about this. We overlooked this specificity, it would need to be 
rethought a

little bit I guess.


Even though we had this issue for a long time and nobody noticed, we
should probably not enable H2 on a massive scale with new 2.8 defaults
before this is fixed to avoid silently breaking this error condition.


I totally agree ;-)


Well, I would prefer to keep on the line so that such bugs could be 
found much earlier :-).


Jm2c


Willy







libcrypt may be removed completely in future Glibc releases

2023-08-02 Thread Aleksandar Lazic

Hi.

I have seen these lines in the current glibc release notes:

https://sourceware.org/glibc/wiki/Release/2.38
```
2.1. Building libcrypt is disabled by default

If you still need Glibc libcrypt, pass --enable-crypt to the configure 
script.


Note that libcrypt may be removed completely in future Glibc releases. 
Distributions are encouraged to provide libcrypt via libxcrypt[1], 
instead of relying on Glibc libcrypt.

```

The libxcrypt page mentions being backward compatible, but we should keep 
an eye on this, IMHO.
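
As a reference, a hedged sketch of how a build could simply drop the glibc 
libcrypt dependency if a distribution removes it (the flags follow the usual 
Makefile options; adjust to your own build):

```
# USE_LIBCRYPT is enabled by default for the linux-glibc target;
# overriding it on the command line builds without libcrypt
make -j$(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_LIBCRYPT=
```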


Regards
Alex

[1] https://github.com/besser82/libxcrypt



Re: QUIC with a fcgi backend

2023-07-24 Thread Aleksandar Lazic

Yaacov.

On 2023-07-24 (Mo.) 15:08, Christopher Faulet wrote:

Le 7/24/23 à 12:24, Yaacov Akiba Slama a écrit :

Hi Christopher,

Thanks for report. It is not a known issue, but I can confirm it. When
H3 HEADERS frames are converted to the internal HTTP representation
(HTX), a flag is missing to specify a content-length was found.

I pushed a flag, it should be fixed:

commit e42241ed2b1df77beb1817eb9bcc46bab793f25c (HEAD -> master,
haproxy.org/master)
Author: Christopher Faulet 
Date:   Mon Jul 24 11:37:10 2023 +0200


Thanks for the fix. I just tested and it works but I can still see a
weird behavior when using curl (I still didn't test with a browser):
when the uploaded data is big (bigger than bufsize), the connection is
not immediately closed but only after a timeout:

curl --http3-only -d @ 



curl: (55) ngtcp2_conn_handle_expiry returned error: ERR_IDLE_CLOSE



This time, I'm unable to reproduce. I guess we need help of the quic men 
(Fred or Amaury).


Are HAProxy and the FCGI server on the same host/network, or is there 
any firewall or anything in between?


What's the error message on HAProxy and on the FCGI server when the 
timeout occurs?


Regards
Alex



Re: QUIC with a fcgi backend

2023-07-22 Thread Aleksandar Lazic

Hi.

On 2023-07-22 (Sa.) 21:48, Yaacov Akiba Slama wrote:

Hi,

It seems that there is a bug in QUIC when using a fastcgi backend:

As soon as the size of the uploaded data is more than bufsize, the 
server returns 400 Bad request and shows PH-- in the logs.


The problem occurs with both haproxy 2.8.1 and 2.9-dev2 (both build 
quictls OpenSSL_1_1_1u-quic1).


When using h2 or an http backend, everything is ok.

Is it a known problem?


Please can you share the config you use so that we are able to reproduce 
the issue? I think it's not known, but it would be good to be able to 
reproduce it.



Thanks,

--yas


Regards
Alex



Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-13 Thread Aleksandar Lazic

Hi Andrew.

Thank you for your answers.

On 2023-07-13 (Do.) 08:22, Hopkins, Andrew wrote:

Hi Alex, thanks for taking a look at this change, to answer your questions:

* Do you plan to make releases with a stable ABI that we can rely on?
Yes, we have releases on GitHub that follow semantic versioning and 
within minor versions everything is backward compatible. Internal 
details of structs may change in an API compatible way over time but 
might not be ABI. This would be signaled in the release notes and 
version number.


Okay.


* Do you plan to add QUIC (server part) faster than OpenSSL?

I have not looked into quic benchmarks but it uses the same 
cryptographic primitives as TLS so I imagine we'd be faster for a lot of 
the algorithms. It might not be useful for HAProxy which is all C, but 
AWS also launched s2n-quic [1] which does have extensive testing for 
correctness and performance. s2n-quic evenuses AWS-LC's libcrypto for 
all of the cryptographic operations [2] though our rust bindings 
aws-lc-rs [3].


Hm, this implies a dependency on Rust, which increases the complexity of 
building HAProxy. From my point of view this isn't very helpful for 
bringing the library into HAProxy.


* Will there be some packages for debian/ubuntu/RHEL/... so that 
users of HAProxy can "just install and run" HAProxy with that SSL lib?


In the near future no. Currently AWS-LC does not support enough packages 
to fully replace libcrypto for the entire operating system, and 
balancing different programs using different library paths and libcrypto 
implementations is tricky. Eventually distributing static archives and 
shared libraries once we have more support makes sense. There is more 
context/history in this issue [4].


Uh, that's a show stopper, at least from my point of view. This implies 
the same work as the HAProxy team already has for wolfSSL, BoringSSL and 
quictls, and that's a lot of work.


As the patch looks quite small and AWS-LC is based on BoringSSL: do you 
handle the BoringSSL changes so that the API (and, less often, ABI) 
changes are absorbed by AWS-LC?


[1] https://github.com/aws/s2n-quic 
[2] https://github.com/aws/s2n-quic/pull/1840

[3] https://github.com/aws/aws-lc-rs
[4] https://github.com/aws/aws-lc/issues/804

Thanks, Andrew

----
*From:* Aleksandar Lazic 
*Sent:* Wednesday, July 12, 2023 1:14 AM
*To:* Hopkins, Andrew; haproxy@formilux.org
*Subject:* RE: [EXTERNAL][PATCH] BUILD: ssl: Build with new 
cryptographic library AWS-LC
CAUTION: This email originated from outside of the organization. Do not 
click links or open attachments unless you can confirm the sender and 
know the content is safe.




Hi Andrew.

On 2023-07-12 (Mi.) 02:26, Hopkins, Andrew wrote:

Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project [1].
Our goal is to improve the cryptography we use internally at AWS and help our
customers externally. In the spirit of helping people use good crypto we know
it’s important to make it easy to use AWS-LC everywhere they use cryptography.
This is why we are interested in integrating AWS-LC into HAProxy.

AWS-LC is a fork of BoringSSL which you already partially support. We recently
merged in several PRs (Full OCSP support [2] and custom extension support [3])
to fully support HAProxy the same as OpenSSL. To ensure we continue to support
HAProxy long term we added HAProxy built with AWS-LC to our CI [4].

In our early testing we see modest improvements in overall throughput when
compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as this
blog [5] I observe a small (~2.5%) increase in requests per second for 5 kb
requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES 256 GCM. 
For
both tests I used
`taskset -c 2-47 ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500 https://[c6i 
or c6g ip]:[aws-lc or openssl port]/?s=5k`.

This small difference in this symmetric crypto workload comes down to AWS-LC
and OpenSSL having similar AES implementations. We observe larger performance
improvements with our micro-benchmarks for algorithms related to the TLS
handshake such as 15% reduction for ECDH with P-256, and 40% reduction for
P-521 on a C6i. This comes from our s2n-bignum library[6], a formally verified
bignum library with a focus on performance and correctness.

When built with AWS-LC all current regression tests pass. I have included a
small patch to update your documentation with AWS-LC as an option and I
attempted to add AWS-LC to your CI. I need a little help figuring out how to
test that part. Lastly from your excellent contributing guide I am not 
subscribed
so I would like to be cc’d on all responses.


Sounds quite interesting library.

I have a few questions about the future plans of the library.

* Do you plan to make releases with a stable ABI that we can rely on?
    That's one of the pain points with BoringSSL.
* Do you plan to add quic (Ser

Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-12 Thread Aleksandar Lazic

Hi Andrew.

On 2023-07-12 (Mi.) 02:26, Hopkins, Andrew wrote:

Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project [1].
Our goal is to improve the cryptography we use internally at AWS and help our
customers externally. In the spirit of helping people use good crypto we know
it’s important to make it easy to use AWS-LC everywhere they use cryptography.
This is why we are interested in integrating AWS-LC into HAProxy.

AWS-LC is a fork of BoringSSL which you already partially support. We recently
merged in several PRs (Full OCSP support [2] and custom extension support [3])
to fully support HAProxy the same as OpenSSL. To ensure we continue to support
HAProxy long term we added HAProxy built with AWS-LC to our CI [4].

In our early testing we see modest improvements in overall throughput when
compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as this
blog [5] I observe a small (~2.5%) increase in requests per second for 5 kb
requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES 256 GCM. 
For
both tests I used
`taskset -c 2-47 ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500 https://[c6i or 
c6g ip]:[aws-lc or openssl port]/?s=5k`.

This small difference in this symmetric crypto workload comes down to AWS-LC
and OpenSSL having similar AES implementations. We observe larger performance
improvements with our micro-benchmarks for algorithms related to the TLS 
handshake such as 15% reduction for ECDH with P-256, and 40% reduction for 
P-521 on a C6i. This comes from our s2n-bignum library[6], a formally verified

bignum library with a focus on performance and correctness.

When built with AWS-LC all current regression tests pass. I have included a
small patch to update your documentation with AWS-LC as an option and I
attempted to add AWS-LC to your CI. I need a little help figuring out how to
test that part. Lastly from your excellent contributing guide I am not 
subscribed
so I would like to be cc’d on all responses.


Sounds like quite an interesting library.

I have a few questions about the future plans for the library.

* Do you plan to make releases with a stable ABI that we can rely on?
  That's one of the pain points with BoringSSL.
* Do you plan to add QUIC (server part) faster than OpenSSL?
* Will there be some packages for debian/ubuntu/RHEL/... so that 
users of HAProxy can "just install and run" HAProxy with that SSL lib?



Thanks, Andrew


Regards
Alex


[1] https://github.com/aws/aws-lc
[2] https://github.com/aws/aws-lc/pull/1054
[3] https://github.com/aws/aws-lc/pull/1071
[4] https://github.com/aws/aws-lc/pull/1083
[5] 
https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
[6] https://github.com/awslabs/s2n-bignum






Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-07 Thread Aleksandar Lazic

Hi.

Just an addendum below to my last mail.

On 2023-07-07 (Fr.) 00:33, Aleksandar Lazic wrote:

Hi Willy

On 2023-07-06 (Do.) 22:05, Willy Tarreau wrote:

Hi all,

as the subject says it, Fred managed to make QUIC mostly work on top of
a regular OpenSSL. Credit goes to the NGINX team who found a clever and
absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
data from/to the TLS hello messages. It does have limitations, such as
0-RTT not being supported, and maybe other ones we're not aware of. I'm
hesitating in merging it because there are some non-negligible impacts
for the QUIC ecosystem itself in doing this, ranging from a possibly
lower performance or reliability that could disappoint some users of the
protocol, to discouraging the efforts to get a real alternative stack
working.

I've opened the discussion on the QUIC working group here to collect
various opinions and advices:

   
https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo


Unsurprizingly, the perception for now is mostly aligned with my first
feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
a bad idea". But I also know that on the WG we exclusively speak between
implementors, who don't always have the users' perspective.

I would encourage those who really want to ease QUIC adoption to read
the thread above (possibly even share their opinion, that would be
welcome) so that we can come to a consensus regarding this (e.g. merge,
drop, merge conditioned at build time, or with an expert runtime option,
anything else, I don't know). I feel like it's a difficult stretch to
find the best approach. The "it's not possible at all with openssl,
period" excuse is no longer true, however "it's only a degraded approach"
remains true.

I wouldn't like end-users to just think "pwah, all that for this, I'm
not impressed" without realizing that they wouldn't be benefitting from
everything. But maybe it would be good enough for most of those who are
not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
do welcome opinions.


Amazing work from the nginx team :-)

 From my point of view, wolfSSL is the way to go, as the direction OpenSSL 
is taking does not look very promising for the future, at least to me. This 
implies that HAProxy will need different packages per OS and creates much 
more work for the nice packaging persons :-(. I don't know how big the 
challenge is to run HAProxy completely with wolfSSL, if it's not already 
done, but having packages like "haproxy-openssl" and "haproxy-quic" (the 
latter implying wolfSSL) would be a nice solution for HAProxy users, imho. 
A nice change would be if nginx and Apache HTTPd also moved to wolfSSL :-).
What's not clear to me is what the future of wolfSSL will be, as the 
company behind the lib currently looks very open towards open-source 
projects, but who knows the future.


Maybe another option could be GnuTLS, as it added a QUIC API in 3.7.0, 
but I think moving from OpenSSL to GnuTLS is an even bigger challenge than 
moving to wolfSSL, just because there is not even a single line of GnuTLS 
code in HAProxy.


https://lists.gnupg.org/pipermail/gnutls-help/2020-December/004670.html
...
** libgnutls: Added a new set of API to enable QUIC implementation 
(#826, #849, #850).

...

ngtcp2 have examples with different TLS library, just fyi.
https://github.com/ngtcp2/ngtcp2/tree/main/examples

Another question is whether the TLS/SSL layer in HAProxy is separated 
enough to add another TLS implementation. I'm pretty sure that a lot of 
people know this, but just for the archive let me share how curl handles 
different TLS backends.


https://github.com/curl/curl/tree/master/lib/vtls

All in all, from my point of view OpenSSL was a good library in the past, 
but for the future a more modern and open (in organisation and mindset) 
library should be used. Jm2c.


Interesting point, at least for me: it looks like OpenSSL is starting to 
implement QUIC. Is there any official info from OpenSSL about this part 
for this year? Is there also a statement about the performance issue with 
3.x?


https://github.com/openssl/openssl/tree/master/ssl/quic


Cheers,
Willy


Regards
Alex





Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-06 Thread Aleksandar Lazic

Hi Willy

On 2023-07-06 (Do.) 22:05, Willy Tarreau wrote:

Hi all,

as the subject says it, Fred managed to make QUIC mostly work on top of
a regular OpenSSL. Credit goes to the NGINX team who found a clever and
absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
data from/to the TLS hello messages. It does have limitations, such as
0-RTT not being supported, and maybe other ones we're not aware of. I'm
hesitating in merging it because there are some non-negligible impacts
for the QUIC ecosystem itself in doing this, ranging from a possibly
lower performance or reliability that could disappoint some users of the
protocol, to discouraging the efforts to get a real alternative stack
working.

I've opened the discussion on the QUIC working group here to collect
various opinions and advices:

   
https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo

Unsurprizingly, the perception for now is mostly aligned with my first
feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
a bad idea". But I also know that on the WG we exclusively speak between
implementors, who don't always have the users' perspective.

I would encourage those who really want to ease QUIC adoption to read
the thread above (possibly even share their opinion, that would be
welcome) so that we can come to a consensus regarding this (e.g. merge,
drop, merge conditioned at build time, or with an expert runtime option,
anything else, I don't know). I feel like it's a difficult stretch to
find the best approach. The "it's not possible at all with openssl,
period" excuse is no longer true, however "it's only a degraded approach"
remains true.

I wouldn't like end-users to just think "pwah, all that for this, I'm
not impressed" without realizing that they wouldn't be benefitting from
everything. But maybe it would be good enough for most of those who are
not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
do welcome opinions.


Amazing work from the nginx team :-)

From my point of view, wolfSSL is the way to go, as the direction OpenSSL 
is taking does not look very promising for the future, at least to me. This 
implies that HAProxy will need different packages per OS and creates much 
more work for the nice packaging persons :-(. I don't know how big the 
challenge is to run HAProxy completely with wolfSSL, if it's not already 
done, but having packages like "haproxy-openssl" and "haproxy-quic" (the 
latter implying wolfSSL) would be a nice solution for HAProxy users, imho. 
A nice change would be if nginx and Apache HTTPd also moved to wolfSSL :-).
What's not clear to me is what the future of wolfSSL will be, as the 
company behind the lib currently looks very open towards open-source 
projects, but who knows the future.


Maybe another option could be GnuTLS, as it added a QUIC API in 3.7.0, 
but I think moving from OpenSSL to GnuTLS is an even bigger challenge than 
moving to wolfSSL, just because there is not even a single line of GnuTLS 
code in HAProxy.


https://lists.gnupg.org/pipermail/gnutls-help/2020-December/004670.html
...
** libgnutls: Added a new set of API to enable QUIC implementation 
(#826, #849, #850).

...

ngtcp2 have examples with different TLS library, just fyi.
https://github.com/ngtcp2/ngtcp2/tree/main/examples

Another question is whether the TLS/SSL layer in HAProxy is separated 
enough to add another TLS implementation. I'm pretty sure that a lot of 
people know this, but just for the archive let me share how curl handles 
different TLS backends.


https://github.com/curl/curl/tree/master/lib/vtls

All in all, from my point of view OpenSSL was a good library in the past, 
but for the future a more modern and open (in organisation and mindset) 
library should be used. Jm2c.




Cheers,
Willy


Regards
Alex



Re: [PATCH 1/1] MEDIUM: ssl: new sample fetch method to get curve name

2023-06-20 Thread Aleksandar Lazic

Hi.

On 2023-06-20 (Di.) 18:50, Mariam John wrote:

Adds a new sample fetch method to get the curve name used in the
key agreement to enable better observability. In OpenSSLv3, the function
`SSL_get_negotiated_group` returns the NID of the curve and from the NID,
we get the curve name by passing the NID to OBJ_nid2sn. This was not
available in v1.1.1. SSL_get_curve_name(), which returns the curve name
directly was merged into OpenSSL master branch last week but will be available
only in its next release.
---
  doc/configuration.txt|  8 +
  reg-tests/ssl/ssl_client_samples.vtc |  2 ++
  reg-tests/ssl/ssl_curves.vtc |  4 +++
  src/ssl_sample.c | 46 
  4 files changed, 60 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8bcfc3c06..d944ac132 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -20646,6 +20646,10 @@ ssl_bc_cipher : string
over an SSL/TLS transport layer. It can be used in a tcp-check or an
http-check ruleset.
  
+ssl_bc_curve : string

+  Returns the name of the curve used in the key agreement when the outgoing
+  connection was made over an SSL/TLS transport layer.
+
  ssl_bc_client_random : binary
Returns the client random of the back connection when the incoming 
connection
was made over an SSL/TLS transport layer. It is useful to to decrypt traffic
@@ -20944,6 +20948,10 @@ ssl_fc_cipher : string
Returns the name of the used cipher when the incoming connection was made
over an SSL/TLS transport layer.
  
+ssl_fc_curve : string

+  Returns the name of the curve used in the key agreement when the incoming
+  connection was made over an SSL/TLS transport layer.
+
  ssl_fc_cipherlist_bin([]) : binary
Returns the binary form of the client hello cipher list. The maximum
returned value length is limited by the shared capture buffer size


Please can you sort the keywords in proper alphabetical order?

Please can you add "Requires OpenSSL >= 3..." with the right version, similar to 
https://docs.haproxy.org/2.8/configuration.html#7.3.4-ssl_fc_server_handshake_traffic_secret ?




diff --git a/reg-tests/ssl/ssl_client_samples.vtc 
b/reg-tests/ssl/ssl_client_samples.vtc
index 5a84e4b25..1f078ea98 100644
--- a/reg-tests/ssl/ssl_client_samples.vtc
+++ b/reg-tests/ssl/ssl_client_samples.vtc
@@ -46,6 +46,7 @@ haproxy h1 -conf {
  http-response add-header x-ssl-s_serial %[ssl_c_serial,hex]
  http-response add-header x-ssl-key_alg %[ssl_c_key_alg]
  http-response add-header x-ssl-version %[ssl_c_version]
+http-response add-header x-ssl-curve-name %[ssl_fc_curve]
  
  bind "${tmpdir}/ssl.sock" ssl crt ${testdir}/common.pem ca-file ${testdir}/ca-auth.crt verify optional crt-ignore-err all crl-file ${testdir}/crl-auth.pem
  
@@ -69,6 +70,7 @@ client c1 -connect ${h1_clearlst_sock} {

  expect resp.http.x-ssl-s_serial == "02"
  expect resp.http.x-ssl-key_alg == "rsaEncryption"
  expect resp.http.x-ssl-version == "1"
+expect resp.http.x-ssl-curve-name == "X25519"
  } -run
  
  
diff --git a/reg-tests/ssl/ssl_curves.vtc b/reg-tests/ssl/ssl_curves.vtc

index 5cc70df14..3dbe47c4d 100644
--- a/reg-tests/ssl/ssl_curves.vtc
+++ b/reg-tests/ssl/ssl_curves.vtc
@@ -75,6 +75,7 @@ haproxy h1 -conf {
  listen ssl1-lst
  bind "${tmpdir}/ssl1.sock" ssl crt ${testdir}/common.pem ca-file 
${testdir}/set_cafile_rootCA.crt verify optional curves P-256:P-384
  server s1 ${s1_addr}:${s1_port}
+http-response add-header x-ssl-fc-curve-name %[ssl_fc_curve]
  
  # The prime256v1 curve, which is used by default by a backend when no

  # 'curves' or 'ecdhe' option is specified, is not allowed on this listener
@@ -98,6 +99,7 @@ haproxy h1 -conf {
  
  bind "${tmpdir}/ssl-ecdhe-256.sock" ssl crt ${testdir}/common.pem ca-file ${testdir}/set_cafile_rootCA.crt verify optional ecdhe prime256v1

  server s1 ${s1_addr}:${s1_port}
+http-response add-header x-ssl-fc-curve-name %[ssl_fc_curve]
  
  } -start
  
@@ -105,6 +107,7 @@ client c1 -connect ${h1_clearlst_sock} {

txreq
rxresp
expect resp.status == 200
+  expect resp.http.x-ssl-fc-curve-name == "prime256v1"
  } -run
  
  # The backend tries to use the prime256v1 curve that is not accepted by the

@@ -129,6 +132,7 @@ client c4 -connect ${h1_clearlst_sock} {
txreq -url "/ecdhe-256"
rxresp
expect resp.status == 200
+  expect resp.http.x-ssl-fc-curve-name == "prime256v1"
  } -run
  
  syslog Slg_cust_fmt -wait


Please can you create a dedicated test file for that feature so that the
test can be excluded when the required OpenSSL version is not used.
I think the "openssl_version_atleast(1.1.1)" which is in the ssl_curves.vtc
file should require "3." instead.
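For example, the dedicated test file could be gated like this (just a sketch;
the file name and the exact required OpenSSL version are assumptions on my side):

```
# hypothetical ssl_curve_samples.vtc header
varnishtest "Test the ssl_fc_curve sample fetch"

feature cmd "$HAPROXY_PROGRAM -cc 'feature(OPENSSL) && openssl_version_atleast(3.0.0)'"
feature ignore_unknown_macro
```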




diff --git a/src/ssl_sample.c b/src/ssl_sample.c
index 5aec97fef..d7a7a09f9 100644
--- a/src/ssl_sample.c
+++ b/src/ssl_sample.c
@@ -1304,6 +1304,46 @@ 

Re: OCSP renewal with 2.8

2023-06-03 Thread Aleksandar Lazic

Hi.

On 2023-06-02 (Fr.) 22:42, Lukas Tribus wrote:

On Fri, 2 Jun 2023 at 21:55, Willy Tarreau  wrote:

Initially during the design phase we thought about having 3 states:
"off", "on", "auto", with the last one only enabling updates for certs
that already had a .ocsp file. But along discussions with some users
we were told that it was not going to be that convenient (I don't
remember why, but I think that Rémi and/or William probably remember
the reason), and it ended up dropping "auto".

Alternately maybe instead of enabling for all certs, what would be
useful would be to just change the default, because if you have 100k
certs, it's likely that 99.9k work one way and the other ones the other
way, and what you want is to indicate the default and only mention the
exception for those concerned.


I suggest we make it configurable on the bind line like other ssl
options, so it will work for the common use cases that don't involve
crt-lists, like a simple crt statement pointing to a certificate or a
directory.

It could also be a global option *as well*, but imho it does need to
be a bind line configuration option, just like strict-sni, alpn and
ciphers, so we can enable it specifically (per frontend, per bind
line) without requiring crt-list.


+1 to this suggestion.



Lukas





@Wolfssl: any plans to add "ECH (Encrypted client hello) support" and question about Roadmap

2023-06-01 Thread Aleksandar Lazic

Hi,

As we now have a shiny new LTS, let's take a look into the future :-)

As WolfSSL looks like a good future alternative to OpenSSL, is there any
plan to add ECH (Encrypted Client Hello) (
https://github.com/haproxy/haproxy/issues/1924 ) to WolfSSL?

Is there any idea which of the features from the feature requests at
https://github.com/haproxy/haproxy/labels/type%3A%20feature are planned to
be added by the HAProxy company?


Regards
Alex



Re: Followup on openssl 3.0 note seen in another thread

2023-05-29 Thread Aleksandar Lazic

Hi Shawn.

On 2023-05-28 (So.) 05:30, Shawn Heisey wrote:

On 5/27/23 18:03, Shawn Heisey wrote:

On 5/27/23 14:56, Shawn Heisey wrote:
Yup.  It was using keepalive.  I turned keepalive off and repeated 
the tests.


I did the tests again with 200 threads.  The system running the tests 
has 12 hyperthreaded cores, so this definitely pushes its capabilities.


I had forgotten a crucial fact that means all my prior testing work was 
invalid:  Apache HttpClient 4.x defaults to a max simultaneous 
connection count of 2.  Not going to exercise concurrency with that!


I have increased that to 1024, my program's max thread count, and now 
the test is a LOT faster ... it's actually running 200 threads at the 
same time.  Two runs per branch here, one with 200 threads and one with 
24 threads.


Still no smoking gun showing 3.0 as the slowest of the bunch.  In fact, 
3.0 is giving the best results!  So my test method is still probably the 
wrong approach.


Maybe you can change the setup in this way:

HAProxy FE => HAProxy BE => Destination Servers

where the destination servers are also HAProxy instances which just return
static content, or any other high-performance, low-latency HTTPS server.

With such a setup you can also test the client mode of OpenSSL; a rough
sketch follows below.
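A minimal sketch of what I mean (ports, cert paths and timeouts are just
placeholders):

```
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

# first layer: terminates TLS from the test client and re-encrypts upstream
frontend fe
    bind :8443 ssl crt /etc/haproxy/test.pem alpn h2,http/1.1
    default_backend be_middle

backend be_middle
    server middle 127.0.0.1:8444 ssl verify none

# second layer: plays the "destination server" and just returns static content
frontend origin
    bind :8444 ssl crt /etc/haproxy/test.pem
    http-request return status 200 content-type text/plain string "ok"
```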

Regards
Alex


1.1.1t:
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest Count 20 234.54/s
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 10th % 54 ms
21:06:45.388 [main] INFO  o.e.t.h.MainSSLTest 25th % 94 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest Median 188 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 75th % 991 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 95th % 3698 ms
21:06:45.389 [main] INFO  o.e.t.h.MainSSLTest 99th % 6924 ms
21:06:45.390 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11983 ms
-
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest Count 24000 355.56/s
21:20:35.400 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 25th % 46 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest Median 57 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 75th % 71 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 95th % 126 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99th % 168 ms
21:20:35.401 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 721 ms

3.0.8:
20:50:12.916 [main] INFO  o.e.t.h.MainSSLTest Count 20 244.69/s
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 10th % 56 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 25th % 93 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest Median 197 ms
20:50:12.917 [main] INFO  o.e.t.h.MainSSLTest 75th % 949 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 95th % 3425 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99th % 6679 ms
20:50:12.918 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 11582 ms
-
21:23:22.076 [main] INFO  o.e.t.h.MainSSLTest Count 24000 404.78/s
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 10th % 40 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest Median 53 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 75th % 63 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 95th % 90 ms
21:23:22.077 [main] INFO  o.e.t.h.MainSSLTest 99th % 121 ms
21:23:22.078 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 671 ms

3.1.0+locks:
20:33:32.805 [main] INFO  o.e.t.h.MainSSLTest Count 20 238.02/s
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 10th % 58 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 25th % 95 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest Median 196 ms
20:33:32.806 [main] INFO  o.e.t.h.MainSSLTest 75th % 1001 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 95th % 3475 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99th % 6288 ms
20:33:32.807 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 10700 ms
-
21:26:24.555 [main] INFO  o.e.t.h.MainSSLTest Count 24000 402.89/s
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 10th % 39 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 25th % 45 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest Median 52 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 75th % 64 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 95th % 93 ms
21:26:24.556 [main] INFO  o.e.t.h.MainSSLTest 99th % 127 ms
21:26:24.557 [main] INFO  o.e.t.h.MainSSLTest 99.9 % 689 ms





Re: unsubscribe

2023-05-14 Thread Aleksandar Lazic

Hi.

On 14.05.23 22:07, Roman Gelfand wrote:




Here is the unsubscribe address.
https://www.haproxy.org/#tact

Regards
Alex



Re: equivalent of url32+src for hdr_ip(x-forwarded-for)?

2023-05-11 Thread Aleksandar Lazic
 [TRACE] trace


Hope that helps

Regards
Alex

On Thu, May 11, 2023 at 11:21 PM Aleksandar Lazic <mailto:al-hapr...@none.at>> wrote:


Dear Nathan.

On 11.05.23 23:59, Nathan Rixham wrote:
 > Hi All,
 >
 > I've run into an issue I can't figure out, essentially need to use
 > url32+src in stick tables, but where src is the x-forwarded-for
address
 > rather than the connecting source - any advice would be appreciated.

As this is a quite generic question, please send us the following info:

* haproxy -vv
* your config, reduced and without any sensitive data
* a more detailed explanation of what exactly you want to do and what does
not work.

 > Cheers,
 >
 > Nathan

Regards
Alex





Re: equivalent of url32+src for hdr_ip(x-forwarded-for)?

2023-05-11 Thread Aleksandar Lazic

Dear Nathan.

On 11.05.23 23:59, Nathan Rixham wrote:

Hi All,

I've run into an issue I can't figure out, essentially need to use 
url32+src in stick tables, but where src is the x-forwarded-for address 
rather than the connecting source - any advice would be appreciated.


As this is a quite generic question, please send us the following info:

* haproxy -vv
* your config, reduced and without any sensitive data
* a more detailed explanation of what exactly you want to do and what does
not work.



Cheers,

Nathan


Regards
Alex



Re: Drain L4 host that fronts a L7 cluster

2023-05-05 Thread Aleksandar Lazic
Isn't this a similar request to
https://github.com/haproxy/haproxy/issues/969 , as I mentioned in the
issue https://github.com/haproxy/haproxy/issues/2149 ?


On 06.05.23 01:18, Abhijeet Rastogi wrote:

Thanks for the response Tristan.

For the future reader of this thread, a feature request was created
for this. https://github.com/haproxy/haproxy/issues/2146


On Fri, May 5, 2023 at 4:09 PM Tristan  wrote:

however, our reason to migrate to HAproxy is adding gRPC
compliance to the stack, so H2 support is a must. Thanks for the
workarounds, indeed interesting, I'll check them out.

  From a cursory look at the gRPC spec it seems like you would indeed
really need the GOAWAY to get anywhere


trigger the GOAWAY H2 frames (which isn't possible at

the moment, as far as I can tell)

*Is this a valid feature request for HAProxy?*
Maybe, we can provide "setting CLO mode" via the "http-request" directive?

I can't make that call, but at least it sounds quite useful to me indeed.

And in particular, being able to set CLO mode is likely a little bit
nicer in the long run than something like a hypothetical 'http-request
send-h2-goaway', since CLO mode can account for future protocols or spec
changes transparently as those eventually get added to HAProxy.

Interesting problem either way!

Cheers,
Tristan



--
Cheers,
Abhijeet (https://abhi.host)





Any Roadmap for "Server weight modulation based on smoothed average measurement" ( https://github.com/haproxy/haproxy/issues/1977 )

2023-04-28 Thread Aleksandar Lazic

Hi.

Is there any plan for when the work on this part will start, or will this
just move forward smoothly :-)

Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Willy.

On 28.04.23 11:14, Aleksandar Lazic wrote:

Hi Will.

On 28.04.23 11:07, Willy Tarreau wrote:


[snipp]


So from what I'm reading above, the regtest is fake and doesn't test
the presence of digits in the returned value. Could you please correct
it so that it properly verifies that your patch works, and then I'm
fine with merging it.


Okay will take a look and create a new patch.


Attached the new patch.

Regards
Alex

From 01b0561f0aad6ecf14e1bef552d9c2ad66ad1d67 Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 28 Apr 2023 11:39:12 +0200
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

This Patch adds fetch samples for backends round trip time.
---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 32 +++
 3 files changed, 87 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 32d2fec17..28f308f9d 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -19642,6 +19642,22 @@ be_name : string
   frontends with responses to check which backend processed the request. It can
   also be used in a tcp-check or an http-check ruleset.
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_server_timeout : integer
   Returns the configuration value in millisecond for the server timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..93300d528
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev8)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt(us)]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+expect resp.http.x-test2 ~ "[0-9]+"
+} -run
\ No newline at end of file
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 12eb25c4e..393e39e93 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -401,6 +401,35 @@ smp_fetch_fc_rttvar(const struct arg *args, struct sample *smp, const char *kw,
 	return 1;
 }
 
+/* get the mean rtt of a backend connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+
 #if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__Ope

Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Will.

On 28.04.23 11:07, Willy Tarreau wrote:

Hi Alex,

On Fri, Apr 28, 2023 at 10:59:46AM +0200, Aleksandar Lazic wrote:

Hi Willy.

On 30.03.23 06:23, Willy Tarreau wrote:

On Thu, Mar 30, 2023 at 06:16:34AM +0200, Willy Tarreau wrote:

Hi Alex,

On Wed, Mar 29, 2023 at 04:06:10PM +0200, Aleksandar Lazic wrote:

Ping?


thanks for the ping, I missed it a few times when being busy with some
painful bugs in the past. I've pushed it to a topic branch to verify
what it does on the CI for non-linux OS; we might have to add a
"feature cmd" filter in the regtest to check for linux, and I don't
think we directly have this right now (though we could rely on
LINUX_SPLICE for now as a proxy). Or even simpler, we still have
the ability to use "EXCLUDE_TARGETS=freebsd,osx,generic" so I may
adapt your regtest to that as well if it fails on the CI.


Ah so... it passes because we have TCP_INFO on macos as well, and on
Windows we don't run vtest. However the "expect" rule is only for a
status code 200 :-)  I think it would be nice to check for the presence
of digits in these 4 headers. I'll try to do it as time permits but if
you beat me to it I'll take your proposal!


I'm not sure if I get your answer.
Do you need another patch from me?


Damn, I continue to forget about this one :-(  Actually it's extremely
difficult for me to dedicate time to modify stuff that I didn't create
because it's not in my radar.

So from what I'm reading above, the regtest is fake and doesn't test
the presence of digits in the returned value. Could you please correct
it so that it properly verifies that your patch works, and then I'm
fine with merging it.


Okay will take a look and create a new patch.


Thank you!
Willy


Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-04-28 Thread Aleksandar Lazic

Hi Willy.

On 30.03.23 06:23, Willy Tarreau wrote:

On Thu, Mar 30, 2023 at 06:16:34AM +0200, Willy Tarreau wrote:

Hi Alex,

On Wed, Mar 29, 2023 at 04:06:10PM +0200, Aleksandar Lazic wrote:

Ping?


thanks for the ping, I missed it a few times when being busy with some
painful bugs in the past. I've pushed it to a topic branch to verify
what it does on the CI for non-linux OS; we might have to add a
"feature cmd" filter in the regtest to check for linux, and I don't
think we directly have this right now (though we could rely on
LINUX_SPLICE for now as a proxy). Or even simpler, we still have
the ability to use "EXCLUDE_TARGETS=freebsd,osx,generic" so I may
adapt your regtest to that as well if it fails on the CI.


Ah so... it passes because we have TCP_INFO on macos as well, and on
Windows we don't run vtest. However the "expect" rule is only for a
status code 200 :-)  I think it would be nice to check for the presence
of digits in these 4 headers. I'll try to do it as time permits but if
you beat me to it I'll take your proposal!


I'm not sure if I get your answer.
Do you need another patch from me?


Thanks,
Willy


Regards
Alex



Re: Reproducible ERR_QUIC_PROTOCOL_ERROR with all QUIC-enabled versions (2.6 to latest 2.8-dev)

2023-04-18 Thread Aleksandar Lazic

Hi Bob.

On 18.04.23 17:07, Zakharychev, Bob wrote:
While experimenting with enabling QUIC in HAProxy sitting in front of 
our closed-source application I stumbled upon a reproducible QUIC 
protocol failure/malfunction while accessing specific CSS resource, 
which is served via internal application proxy: accessing it over QUIC 
results either in ERR_QUIC_PROTOCOL_FAILURE in the browser and no 
mention of request in HAProxy log or incomplete resource being download 
and CD-- request termination flags in HAProxy log (and logged request 
looks a bit different from other, successful, H3 requests). Accessing 
the same resource over HTTP/2 works fine. I need help with setting up a 
proper debug session so that I could capture all necessary information 
which may help with fixing this issue: HAProxy internal 
debugging/tracing flags to enable, etc. I don’t want to open a bug on 
GitHub for this and would appreciate if anyone from HAProxy team could 
reach out to me directly so that I could share relevant information and 
attempt to debug under your direction.


In case you use HAProxy Enterprise, you can get in touch via
https://www.haproxy.com/contact-us/ or
https://my.haproxy.com/portal/cust/login


Here are the support options listed.
https://www.haproxy.com/support/support-options/

In case you use the Open Source version please run `haproxy -vv` (with 
two `v`).


What is your configuration?
* Include as much configuration as possible, including global and 
default sections.

* Replace confidential data like domain names and IP addresses.



Thanks in advance,

    Vladimir “Bob” Zakharychev


Regards
Alex



Re: Puzzlement : empty field vs. ,field() -m

2023-04-17 Thread Aleksandar Lazic

Hi.

On 18.04.23 00:55, Jim Freeman wrote:

In splitting out fields from req.cook, populated fields work well, but
detecting an unset field has me befuddled:

   acl COOK_META_MISSING  req.cook(cook2hdr),field(3,\#) ! -m found -m str ''

does not detect that a cookie/field is empty ?

Running the attached 'hdrs' script against the attached haproxy.cfg sees :
===
...
cookie: cook2hdr=#
bar: bar
baz: baz
meta: ,bar,baz
foo:
===
when foo: should not be created, and meta: should only have 2 fields.

Am I just getting the idiom/incantation wrong ?

[ stock/current haproxy 2.6 from Debian/Ubuntu LTS backports ]


A `haproxy -vv` is better than guessing which version this is :-)

Looks like the doc does not mention the empty field case.

https://docs.haproxy.org/2.6/configuration.html#7.3.1-field

From the code it looks like the data is set to 0:
https://github.com/haproxy/haproxy/blob/master/src/sample.c#L2432

I would just try a plain '! -m found', but that's untested; I'm pretty
sure some people on this list have much more experience with testing empty
return values.
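Something along these lines, completely untested:

```
# sketch: distinguish a missing 3rd '#'-separated field from an empty one
acl COOK_META_MISSING req.cook(cook2hdr),field(3,\#) ! -m found
acl COOK_META_EMPTY   req.cook(cook2hdr),field(3,\#) -m len 0
```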


Regards
Alex



Re: Problems using custom error files with HTTP/2

2023-04-17 Thread Aleksandar Lazic




On 17.04.23 15:08, Willy Tarreau wrote:

On Mon, Apr 17, 2023 at 03:04:05PM +0200, Lukas Tribus wrote:

On Sat, 15 Apr 2023 at 23:08, Willy Tarreau  wrote:


On Sat, Apr 15, 2023 at 10:59:42PM +0200, Willy Tarreau wrote:

Hi Nick,

On Sat, Apr 15, 2023 at 09:44:32PM +0100, Nick Wood wrote:

And here is my configuration - I've slimmed it down to the absolute minimum
to reproduce the problem:

If the back end is down, the custom 503.http page should be served.

This works on HTTP/1.1 but not over HTTP/2:


Very useful, thank you. In fact it's irrelevant to the errorfile but
it's the 503 that is not produced in this case. I suspect that it's
interpreted on the server side as only a retryable connection error
and that if the HTTP/1 client had faced it on its second request it
would have been the same (in H1 there's a special case for the first
request on a connection, that is not automatically retryable, but
after the first one we have the luxry of closing silently to force
the client to retry, something that H2 supports natively).

I'm still trying to figure when this problem appeared, and it looks
like even 2.4.0 did behave like this. I'm still digging.


And indeed, this issue appeared with this commit in 1.9-dev10 4 years ago:

   746fb772f ("MEDIUM: mux_h2: Always set CS_FL_NOT_FIRST for new 
conn_streams.")

So it makes h2 behave like the second and more H1 requests which are silent
about this. We overlooked this specificity, it would need to be rethought a
little bit I guess.


Even though we had this issue for a long time and nobody noticed, we
should probably not enable H2 on a massive scale with new 2.8 defaults
before this is fixed to avoid silently breaking this error condition.


I totally agree ;-)


Well, I would prefer to keep it on by default so that such bugs can be
found much earlier :-).


Jm2c


Willy





Re: Opinions desired on HTTP/2 config simplification

2023-04-15 Thread Aleksandar Lazic

Hi.

On 15.04.23 11:32, Willy Tarreau wrote:

Hi everyone,

I was discussing with Tristan a few hours ago about the widespread
deployment of H2 and H3, with Cloudflare showing that H1 only accounts
for less than 7% of their traffic and H3 getting close to 30% [1],
and the fact that on the opposite yesterday I heard someone say "we
still have not tried H2, so H3..." (!).

Tristan said something along the lines of "if only proxies would enable
it by default by now", which resonated to me like when we decided to
switch some defaults on (keep-alive, http-reuse, threads, etc).

And it's true that at the beginning there was not even a question about
enabling H2 by default on the edge, but nowadays it's as reliable as H1
and used by virtually everyone, yet it still requires admins to know
about this TLS-specific extension called "ALPN" and the exact syntax of
its declaration, in order to enable H2 over TLS, while it's already on
by default for clear traffic.


Is there any experience regarding the backends and what protocol they use?
As far as I can see, QUIC/H3 is not there yet; what's the plan for adding
QUIC/H3 as a backend protocol?


```
podman run --rm --network host --name haproy-test --entrypoint /bin/bash 
-it haproxy:latest


Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
         h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
       fcgi : mode=HTTP  side=BE     mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
  <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
         h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
  <default> : mode=TCP   side=FE|BE  mux=PASS  flags=
       none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG
```

Do you see any benefit when there is a QUIC end-to-end connection?
Something like:

client - Q/H3 - HAProxy - Q/H3 - Backend


Thus you're seeing me coming with my question: does anyone have any
objection against turning "alpn h2,http/1.1" on by default for HTTP
frontends, and "alpn h3" by default for QUIC frontends, and have a new
"no-alpn" option to explicitly turn off ALPN negotiation on HTTP
frontends e.g. for debugging ? This would mean that it would no longer
be necessary to know the ALPN strings to configure these protocols. I
have not looked at the code but I think it should not be too difficult.
ALPN is always driven by the client anyway so the option states what we
do with it when it's presented, thus it will not make anything magically
fail.


You get a +1 from me for turning on the new default settings.

This must be highlighted in the documentation, as it could break some
working setups which have not activated H2 on some listeners for specific
reasons.



And if we change this default, do you prefer that we do it for 2.8 that
will be an LTS release and most likely to be shipped with next year's
LTS distros, or do you prefer that we skip this one and start with 2.9,
hence postpone to LTS distros of 2026 ?


+1 for 2.8 .


Even if I wouldn't share my feelings, some would consider that I'm
trying to influence their opinion, so I'll share them anyway :-)  I
think that with the status change from "experimental-but-supported" to
"production" for QUIC in 2.8, having to manually and explicitly deal
with 3 HTTP versions in modern configs while the default (h1) only
corresponds to 7% of what clients prefer is probably an indicator that
it's the right moment to simplify these a little bit. But I'm open to
any argument in any direction.


As history shows that a lot of people reuse sample configs, I would also
consider adding an example quic+h2 setup to the examples directory, because
the current quick-test example config looks somehow wrong:


http://git.haproxy.org/?p=haproxy.git;a=blob;f=examples/quick-test.cfg;h=f27eeff432de116132d2df36121356af0938b8a4;hb=HEAD
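Something roughly like this could go into the examples directory (cert
path, port and alt-svc max-age are just placeholders):

```
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend www
    # TCP listener for HTTP/1.1 and HTTP/2
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    # QUIC listener for HTTP/3 (needs a QUIC-capable TLS lib, e.g. quictls)
    bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3
    # advertise HTTP/3 to clients connected over TCP
    http-response set-header alt-svc 'h3=":443"; ma=900'
    default_backend servers

backend servers
    server s1 127.0.0.1:8080
```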

It would be nice if the package owners of the distributions would also
adapt their config examples, but that is a decision made outside of
haproxy :-)



It would be nice to be able to decide (and implement a change if needed)
before next week's dev8, so that it leaves some time to collect feedback
before end of May, so please voice in!

Thanks!
Willy

[1] https://radar.cloudflare.com/adoption-and-usage


Regards
Alex




Re: Problems using custom error files with HTTP/2

2023-04-15 Thread Aleksandar Lazic

Hi Nic,

On 15.04.23 19:35, Nick Wood wrote:

Hello all,


I have recently enabled HTTP/2 on our HAProxy server by adding the 
following to the bind line:



alpn h2,http/1.1


Everything appears to be working fine, apart from our custom error pages.

Rather than serving the custom page as before, browsers just report an 
error. In Chrome its ERR_HTTP2_SERVER_REFUSED_STREAM. In Firefox its a 
more generic response about the data being invalid.



Here is the content of /etc/haproxy/errorpages/503.http:



[snipp]



I've searched the archives but not found anyone else with this issue - 
apart from someone who didn't have the correct HTTP headers defined at 
the top of their error file - but mine look OK. I've tried using 
HTTP/1.1 instead of HTTP/1.0 and also removing the Connection: close 
header, but nothing makes a difference.



Any clues as to what I'm doing wrong would be much appreciated.


Please can you share the haproxy version `haproxy -vv`.

What is your configuration? Include as much configuration as possible, 
including global and default sections. Replace confidential data like 
domain names and IP addresses.



Thanks,


Nick


Best regards
Alex



Re: Interest in HA Proxy from Sonicwall

2023-04-05 Thread Aleksandar Lazic

Hi Kenny.

On 05.04.23 20:04, Kenny Lederman wrote:

Hi team,

Do you have an account rep assigned to Sonicwall that could help me with 
getting a POC set up?


This is the open source mailing list; if you want to get in touch with
the company behind HAProxy, please use this:

https://www.haproxy.com/contact-us/

Of course your team can set up the open source HAProxy themselves; the
documentation is hosted at this URL:


http://docs.haproxy.org/


Thank you,

Kenny Lederman


Best Regards
Alex


Enterprise Account Manager

(206) 455-6488 - Office

(847) 932-9771 - Cell

kenny.leder...@softchoice.com 





Softchoice 



415 1st Avenue North, Suite 300
Seattle, WA  98109










Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-03-29 Thread Aleksandar Lazic

Ping?

On 10.01.23 21:27, Aleksandar Lazic wrote:



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer 
to get some rtt which are not 0 as the rtt is 0.


Here the updated Patch without the EWMA reference.


Regards
Alex




Re: RFQ HAPROXY SERVER for CTBC Bank

2023-03-29 Thread Aleksandar Lazic

HI.

On 29.03.23 05:02, Procurement - TTSolution wrote:

Hi Sir/Madam,

Please help to provide quotation below for:

 1. *HAPROXY SERVER – QTY: 1*


As Willy already wrote, this list is mainly for the open source HAProxy.
You can get in touch about the Enterprise version on this page:

https://www.haproxy.com/contact-us/


Thanks & Best Regards,

Najihah


Best regards
Alex



Re: HAProxy CE Docker Debian and Ubuntu images with QUIC

2023-03-20 Thread Aleksandar Lazic

Hi Dinko.

On 19.03.23 19:54, Dinko Korunic wrote:

Dear community,

As previously requested, we have also started building HAProxy CE  for 
2.6, 2.7 and 2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic 
Release 1) built on top of Debian 11 Bullseye and Ubuntu 22.04 Jammy 
Jellyfish base images.


Thank you for the fast build.

Images are being built for only two architectures listed below due to 
build/stability issues (as opposed to Alpine variant, which is also 
built for linux/arm/v6 and linux/arm/v7):

- linux/amd64
- linux/arm64

Images are available at the usual Docker Hub repositories:
- 
https://hub.docker.com/repository/docker/haproxytech/haproxy-debian-quic 

- 
https://hub.docker.com/repository/docker/haproxytech/haproxy-ubuntu-quic 



The corresponding Github repositories with update scripts, Dockerfiles, 
configurations and GA workflows are at the respective places:
- https://github.com/haproxytech/haproxy-docker-debian-quic 

- https://github.com/haproxytech/haproxy-docker-ubuntu-quic 



Let me know if you spot any issues and/or have any problems with these.

As other our haproxytech Docker images, these will auto-rebuild on:
- dataplaneapi releases
- HAProxy CE releases

including also:
- QUICTLS/OpenSSL releases


Kind regards,
D.

--
Dinko Korunic                   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha





Re: HAProxy CE Docker Alpine image with QUIC

2023-03-18 Thread Aleksandar Lazic

Hi Dinko.

On 17.03.23 20:59, Dinko Korunic wrote:

Dear community,

Upon many requests, we have started building HAProxy CE for 2.6, 2.7 and 
2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic Release 1) as 
Docker Alpine 3.17 images.


That's great news :-).

What one should keep in mind is that Alpine's musl libc does not handle TCP
DNS queries, which limits the answers for DNS queries to roughly 30 entries.


https://www.linkedin.com/pulse/musl-libc-alpines-greatest-weakness-rogan-lynch/
=> https://twitter.com/richfelker/status/994629795551031296?lang=en

```
My choice not to do TCP in musl's stub resolver was based on an 
interpretation that truncated results are not just acceptable but better 
ux - not only do you save major round-trip delays to DNS but you also 
get a reasonable upper bound on # of addrs in result.


-Rich Felker (via twitter)
```
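As a partial workaround on such images, the resolvers section can at least
raise the UDP payload size via EDNS0, e.g. (a sketch; the nameserver address
is a placeholder and the DNS server must support it):

```
resolvers dns
    nameserver ns1 10.0.0.2:53
    accepted_payload_size 8192
    hold valid 10s
```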

Any chance to also get a glibc-based image with QUIC?

Regards
Alex


All these are being built for several architectures, namely:
- linux/amd64
- linux/arm/v6
- linux/arm/v7
- linux/arm64

As usual, Docker pull will fetch appropriate image for your architecture 
if it exists.


These images are available at Docker Hub as usual (and they have 
dataplaneapi binary as well):


hub.docker.com





And sources (scripts, Dockerfiles, GA workflows etc.) are available below:

haproxytech/haproxy-docker-alpine-quic: HAProxy CE Docker Alpine image
with QUIC (quictls) (github.com)




Kind regards,
D.

--
Dinko Korunic                   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha





Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-02-16 Thread Aleksandar Lazic

Hi.

Any chance to add this Patch?

Regards
Alex

On 10.01.23 21:27, Aleksandar Lazic wrote:



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer 
to get some rtt which are not 0 as the rtt is 0.


Here the updated Patch without the EWMA reference.


Regards
Alex




Re: proxy

2023-01-11 Thread Aleksandar Lazic

Hi Adam.

On 12.01.23 01:30, Adam wrote:

Dear Friend
I have a service to broadcast channels and movies over the Internet
by panel iptv
And I have servers that I want to hide the real IP of in order to 
protect them from attacks

It is on the other hand a complaint of abuse
How do you help me with that
I have more than 10 Ubuntu servers
I am waiting for your reply


You can use haproxy for that and there are quite good blog posts about 
protection of backend servers.


https://www.haproxy.com/blog/category/security/

As you have also contacted cont...@haproxy.com, you could get an offer for
the HAProxy Enterprise product
https://www.haproxy.com/products/haproxy-enterprise/ .


Regards
Alex



Re: [ANNOUNCE] haproxy-2.8-dev1

2023-01-10 Thread Aleksandar Lazic

Hi Willy.

On 07.01.23 19:49, Willy Tarreau wrote:

Hi Alex,

On Sat, Jan 07, 2023 at 06:31:40PM +0100, Aleksandar Lazic wrote:



On 07.01.23 10:38, Willy Tarreau wrote:

Hi,

HAProxy 2.8-dev1 was released on 2023/01/07. It added 206 new commits
after version 2.8-dev0.


[snipp]

Any chance to add this patch to 2.8?

[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar
https://www.mail-archive.com/haproxy@formilux.org/msg42962.html

What's the plan for this feature request?


We can merge it. I think the reason it's been let rotting is that it
seems from its commit message to be quite strongly tied to the EWMA
stuff and in my opinion it should not. As you mentioned in the message
above, it has plenty of use cases, one of which is simply logging. Some
may want it to be backported just for logging and we don't want to put
such confusing references there. So let's just adjust the commit message
to be more factual about what it does (i.e. provide bc_rtt and bc_rtt_avg
to report the RTT measured over a TCP backend connection) and be done
with it.


That's a good point. I have sent the patch without the EWMA commit message
in the original mail thread.
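Just for logging, the fetch could then be used like this (a sketch added
to the usual HTTP log format):

```
# log the backend connection RTT in microseconds next to the usual timers
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B bc_rtt=%[bc_rtt(us)]"
```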



Server weight modulation based on smoothed average measurement
https://github.com/haproxy/haproxy/issues/1977

which looks a per-requirement for

New Balancing algorithm (Peak) EWMA
https://github.com/haproxy/haproxy/issues/1570


I really have no status for all this. Feature requests accumulate faster
than bug reports and the only cases where I create one is to make sure
to dump what I have in mind after a discussion so that I have somewhere
to look for the details when trying to get back to it :-/


Okay, thanks for the explanation.


Cheers,
Willy


Regards
Alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2023-01-10 Thread Aleksandar Lazic



On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get some rtt which are not 0 as the rtt is 0.


Here the updated Patch without the EWMA reference.


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

This Patch adds the fetch sample for backends round trip time.

---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 33 
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) /

Re: [ANNOUNCE] haproxy-2.8-dev1

2023-01-07 Thread Aleksandar Lazic




On 07.01.23 10:38, Willy Tarreau wrote:

Hi,

HAProxy 2.8-dev1 was released on 2023/01/07. It added 206 new commits
after version 2.8-dev0.


[snipp]

Any chance to add this patch to 2.8?

[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar
https://www.mail-archive.com/haproxy@formilux.org/msg42962.html

What's the plan for this feature request?

Server weight modulation based on smoothed average measurement
https://github.com/haproxy/haproxy/issues/1977

which looks a per-requirement for

New Balancing algorithm (Peak) EWMA
https://github.com/haproxy/haproxy/issues/1570

regards
alex



Re: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-14 Thread Aleksandar Lazic

Hi,

Any feedback on that patch?

On 09.12.22 13:17, Aleksandar Lazic wrote:

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get some rtt which are not 0 as the rtt is 0.


Regards
Alex




[PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

2022-12-09 Thread Aleksandar Lazic

Hi.

As I still think that the Balancing algorithm (Peak) EWMA ( 
https://github.com/haproxy/haproxy/issues/1570 ) could help to make a 
"better" decision to which server should the request be send, here the 
beginning of the patches.


In any cases it would be nice to know the rtt from the backend, Imho.

Does anybody know how I can "delay/sleep/wait" for the server answer to 
get some rtt which are not 0 as the rtt is 0.


Regards
Alex

From 7610bb7234bd324e06e56732a67bf8a0e65d7dbc Mon Sep 17 00:00:00 2001
From: Aleksandar Lazic 
Date: Fri, 9 Dec 2022 13:05:52 +0100
Subject: [PATCH] MINOR: sample: Add bc_rtt and bc_rttvar

To be able to implement "Balancing algorithm (Peak) EWMA" it is
necessary to know the round trip time to the backend.

This Patch adds the fetch sample for the backend server.

Part of GH https://github.com/haproxy/haproxy/issues/1570

---
 doc/configuration.txt| 16 ++
 reg-tests/sample_fetches/tcpinfo_rtt.vtc | 39 
 src/tcp_sample.c | 33 
 3 files changed, 88 insertions(+)
 create mode 100644 reg-tests/sample_fetches/tcpinfo_rtt.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c45f0b4b6..e8526de7f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -18854,6 +18854,22 @@ be_server_timeout : integer
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
   also the "cur_server_timeout".
 
+bc_rtt(<unit>) : integer
+  Returns the Round Trip Time (RTT) measured by the kernel for the backend
+  connection. <unit> is facultative, by default the unit is milliseconds. <unit>
+  can be set to "ms" for milliseconds or "us" for microseconds. If the server
+  connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
+bc_rttvar(<unit>) : integer
+  Returns the Round Trip Time (RTT) variance measured by the kernel for the
+  backend connection. <unit> is facultative, by default the unit is milliseconds.
+  <unit> can be set to "ms" for milliseconds or "us" for microseconds. If the
+  server connection is not established, if the connection is not TCP or if the
+  operating system does not support TCP_INFO, for example Linux kernels before
+  2.4, the sample fetch fails.
+
 be_tunnel_timeout : integer
   Returns the configuration value in millisecond for the tunnel timeout of the
   current backend. This timeout can be overwritten by a "set-timeout" rule. See
diff --git a/reg-tests/sample_fetches/tcpinfo_rtt.vtc b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
new file mode 100644
index 0..f28a2072e
--- /dev/null
+++ b/reg-tests/sample_fetches/tcpinfo_rtt.vtc
@@ -0,0 +1,39 @@
+varnishtest "Test declaration of TCP rtt fetches"
+
+# feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(v2.8-dev1)'"
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+txresp
+}  -start
+
+haproxy h1 -conf {
+  defaults common
+  mode http
+  timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
+  timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"
+
+  frontend fe from common
+  bind "fd@${feh1}"
+
+  default_backend be
+
+  backend be from common
+
+  http-response set-header x-test1 "%[fc_rtt]"
+  http-response set-header x-test2 "%[bc_rtt]"
+  http-response set-header x-test3 "%[fc_rttvar]"
+  http-response set-header x-test4 "%[bc_rttvar]"
+
+  server s1 ${s1_addr}:${s1_port}
+
+} -start
+
+client c1 -connect ${h1_feh1_sock} {
+txreq -req GET -url /
+rxresp
+expect resp.status == 200
+#expect resp.http.x-test2 ~ " ms"
+} -run
diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 925b93291..bf0d538ea 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -373,6 +373,34 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
+/* get the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 0))
+		return 0;
+
+	/* By default or if explicitly specified, convert rtt to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0].data.sint == TIME_UNIT_MS)
+		smp->data.u.sint = (smp->data.u.sint + 500) / 1000;
+
+	return 1;
+}
+
+/* get the variance of the mean rtt of a backend/server connection */
+static int
+smp_fetch_bc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
+{
+	if (!get_tcp_info(args, smp, 1, 1))
+		return 0;
+
+	/* By default or if explicitly specified, convert rttvar to ms */
+	if (!args || args[0].type == ARGT_STOP || args[0]

Re: Haproxy send-proxy probes error

2022-11-23 Thread Aleksandar Lazic
Hi.

There is already a bug entry in apache bz from 2019 about that message.

https://bz.apache.org/bugzilla/show_bug.cgi?id=63893

Regards
Alex

23.11.2022 21:36:26 Marcello Lorenzi :

> Hi All,
> we use haproxy 2.2.17-dd94a25 in our development environment and we configure 
> a backend with proxy protocol v2 to permit the source IP forwarding to a TLS 
> backend server. All the configuration works fine but we notice this error 
> reported on backend Apache error logs:
> 
> AH03507: RemoteIPProxyProtocol: unsupported command 20
> 
> We configure the options check-send-proxy on backend probes but the issue 
> persists. 
> 
> Is it possible to remove this persistent error?
> 
> Thanks,
> Marcello



Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 23:19, Branitsky, Norman wrote:

A "computationally expensive" request is a request sent to our Public Search
service - no login required so it seems to be the target of abuse.
For example:
https:///datamart/searchByName.do?anchor=169a72e.0


Okay, let me rephrase your question.

How can an IP be blocked which creates a request that takes $too_much_time
to respond?

Where could $too_much_time be defined?
Could it be the "timeout server ..." config parameter?

Could "%Tr" or "%TR" from the logformat be used for that?
https://docs.haproxy.org/2.6/configuration.html#8.2.6

Or the case where the request gets a 504 as internal state.

Idea:

backend block_bad_client
  stick-table  type ip size 100k expire 30s store http_req_rate(10s)
  http-request track-sc0 src unless { $too_much_time }

and call the table block_bad_client in the frontend config.

Is this what you would like to do?

I'm not sure if this is possible with HAProxy.
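What should already be possible today is a much tighter per-IP limit on
just that search URL, e.g. (a sketch, the numbers are made up):

```
frontend fe
    acl is_search path_beg /datamart/searchByName.do
    http-request track-sc1 src table per_ip_search if is_search
    http-request deny deny_status 429 if is_search { sc_http_req_rate(1) gt 10 }

backend per_ip_search
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
```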

Regards
Alex


Norman Branitsky
Senior Cloud Architect
P: 416-916-1752

-----Original Message-----
From: Aleksandar Lazic 
Sent: Tuesday, November 22, 2022 4:27 PM
To: Branitsky, Norman 
Cc: HAProxy 
Subject: Re: Rate Limit a specific HTML request

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:

I have the following "generic" rate limit defined - 150 requests in
10s from the same IP address:

  stick-table  type ip size 100k expire 30s store
http_req_rate(10s)
  http-request track-sc0 src unless { src -f
/etc/CONFIG/haproxy/cidr.lst }
  http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150
}

Is it possible to rate limit a specific "computationally expensive"
HTML request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger picture and tell us what version of HAProxy
you use.

The upcoming 2.7 also has a "Bandwidth limitation" feature; maybe this could
help to solve your issue.
https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a description language, therefore I think you actually want to
restrict HTTP requests/responses, don't you?

https://www.rfc-editor.org/rfc/rfc1866


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
http://www.tylertech.com
Tyler Technologies






Re: Rate Limit a specific HTML request

2022-11-22 Thread Aleksandar Lazic

Hi.

On 22.11.22 21:57, Branitsky, Norman wrote:
I have the following "generic" rate limit defined - 150 requests in 10s 
from the same IP address:


 stick-table  type ip size 100k expire 30s store http_req_rate(10s)
 http-request track-sc0 src unless { src -f 
/etc/CONFIG/haproxy/cidr.lst }

 http-request deny deny_status 429 if { sc_http_req_rate(0) gt 150 }

Is it possible to rate limit a specific "computationally expensive" HTML 
request from the same IP address to a much smaller number?


What do you define as a "computationally expensive" request?

Maybe you could draw a bigger picture and tell us what version of
HAProxy you use.

The upcoming 2.7 also has a "Bandwidth limitation" feature; maybe this
could help to solve your issue.

https://docs.haproxy.org/dev/configuration.html#9.7

HTML is a description language, therefore I think you actually want to
restrict HTTP requests/responses, don't you?

https://www.rfc-editor.org/rfc/rfc1866


*Norman Branitsky*
Senior Cloud Architect
Tyler Technologies, Inc.


Regards
Alex


P: 416-916-1752
C: 416.843.0670
www.tylertech.com
Tyler Technologies 





Re: How to return 429 Status Code instead of 503

2022-11-17 Thread Aleksandar Lazic
Hi.

But there is a 429 error code in the source:

https://git.haproxy.org/?p=haproxy.git;a=search;h=HEAD;st=grep;s=HTTP_ERR_429

As you haven't written which version you use, maybe you can use the latest
2.6 version and give the error code 429 a chance :-)
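If replacing the status line is all that is needed, the errorfile approach
Jarno mentioned should already do it, e.g. (a sketch; paths are placeholders):

```
backend be
    # serve this file whenever haproxy would have produced a 503 (e.g. queue timeout)
    errorfile 503 /etc/haproxy/errors/429.http
```

where /etc/haproxy/errors/429.http is a complete HTTP response whose status
line already says "HTTP/1.0 429 Too Many Requests", so that is what the
client receives.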

regards
alex

17.11.2022 16:29:02 Chilaka Ramakrishna :

> Thanks Jarno, for the reply.
> 
> But i don't think this would work for me, I just want to change the status 
> code (return 429 instead of 503) that i can return, if queue timeout occurs 
> for a request..
> 
> Please confirm, if this is possible or this sort of provision is even exposed 
> by HAP.
> 
> On Thu, Nov 17, 2022 at 12:43 PM Jarno Huuskonen  
> wrote:
>> Hello,
>> 
>> On Tue, 2022-11-08 at 09:30 +0530, Chilaka Ramakrishna wrote:
>>> On queue timeout, currently HAProxy throws 503, But i want to return 429,
>>> I understand that 4xx means a client problem and client can't help here.
>>> But due to back compatibility reasons, I want to return 429 instead of
>>> 503. Is this possible ?
>> 
>> errorfile 503 /path/to/429.http
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#4-errorfile)
>> 
>> Or maybe it's possible with http-error
>> (http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#http-error)
>> 
>> -Jarno
>> 


