Strange output with version 2.4.27 and 'prepare map' socket command

2024-06-19 Thread Jens Wahnes

Hi,

after updating from HAProxy 2.4.26 to 2.4.27, I noticed a strange 
behavior change when issuing commands via the socket. I have a script 
that calls the "prepare map" command and looks at the output to 
determine the new map version number. This script failed after upgrading 
to 2.4.27. Going back to 2.4.26, everything is fine.


The command used in the script is:

/usr/bin/socat $SOCKET stdio <<< "prepare map ${MAPFILE}"

Previously, the output of the socket command would look like this:

```
New version created: 1

```

Note the empty line after the actual output.
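
For context, the version-parsing step in my script boils down to 
something like this (a minimal sketch, not the original script; in the 
real thing, $OUTPUT comes from the socat call shown above):

```shell
# Extract the new map version number from the "prepare map" response.
# $OUTPUT stands in for the text read back from the stats socket.
OUTPUT="New version created: 1"
VERSION=$(printf '%s\n' "$OUTPUT" | sed -n 's/^New version created: \([0-9][0-9]*\)$/\1/p')
echo "$VERSION"   # prints "1"
```

With the extra garbage line present, a strict match like this yields 
nothing useful, which is how the script ends up failing.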

With version 2.4.27, more often than not, the output from the socket 
command looks like this:


```
New version created: 1
[…random HTTP request fragment here — varies between runs…]
```

The random garbage output does not appear every time I try this. When it 
appears, it looks like an excerpt from an HTTP request that is being 
processed at the same time, so it is different every time I run the 
socket command. Sometimes, it's just an empty line, like it used to be 
with version 2.4.26.


I did not check whether this happens with other socket commands as 
well, so I cannot say which commands exhibit this behavior, only that 
"prepare map" does. My question really is whether I'm the only one 
seeing this or whether it affects others as well.


The other version I tried, 2.8.10, did not show this error when issuing 
the "prepare map" command multiple times. So for me, it's just version 
2.4.27 that does not work as expected.


In the haproxy.cfg file, the socket definition looks like this:

```
stats socket /var/lib/haproxy/stats mode 600 expose-fd listeners level admin
```

Any ideas on this?
Thanks,
Jens




Re: Suggestion

2022-07-10 Thread Jens Wahnes

Henning Svane wrote:

It would be useful to have constants, so that e.g. bind IP addresses 
could be defined under "global" and the constant then used in the 
frontend; this way, you only have to modify things in one place.


Are you aware that you can use environment variables in HAProxy's 
configuration? With variables being available, I don't see the need for 
constants.


You just need to remember that variable names need to be put into 
"quoted strings" to be used inside the configuration file.


To make use of variables, you can use either "setenv" or "presetenv" to 
set them up and use them later on. Going for "presetenv" allows you to 
use the same HAProxy config file on several servers and adjust some of 
the variables outside of HAProxy (e.g. through systemd settings) if 
needed.


You can even chain these setenv or presetenv statements to make things 
even more variable. Consider this:


```
presetenv HERMESAGENTADDR 192.168.22.33
presetenv DEFAULTPORT 8080
presetenv HERMESPORT "$DEFAULTPORT"
presetenv HERMESHTTP "${HERMESAGENTADDR}:${HERMESPORT}"
```

And then later in the config file:

```
backend foo
    server hermes "$HERMESHTTP" check agent-addr "$HERMESAGENTADDR"
```

This example is meant to show how you can make the agent-check default 
to the same IP address as the actual service, but allow easy overrides 
to that default for both the port and the server's address.


It is because I will use a stick-table to control who logs in, and the 
stick-table needs to be configured for either IPv4 or IPv6.


I'm not sure I understand your requirements correctly, but in general, 
a stick-table of "type ipv6" can of course hold representations of both 
IPv4 and IPv6 addresses. So I don't see the need to distinguish between 
IPv4 and IPv6 addresses with regard to stick tables.


Clearly, IPv6 addresses will need a few bits more to store, which you'd 
waste if you put only IPv4 addresses into a table that can hold IPv6 
addresses, but when you're going to deal with both types of IP addresses 
anyway, I don't think that trying to squeeze out a few bits here and 
there by putting them into separate tables is worthwhile. Allocating 
memory for two tables that are both sufficiently sized to leave some 
margin for high usage situations is probably going to waste more memory 
than one large table that can hold both types of addresses.
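
For illustration, a single "type ipv6" table covering both address 
families might look like this (a minimal sketch; the names, sizes, and 
certificate path are made up, not taken from any real setup):

```
backend per_client_tracking
    # One table holds both address families; IPv4 clients are stored
    # as IPv4-mapped IPv6 entries.
    stick-table type ipv6 size 100k expire 10m store conn_cur,http_req_rate(10s)

frontend www
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem
    # Track every client, regardless of IPv4 or IPv6, in the same table.
    http-request track-sc0 src table per_client_tracking
```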



Jens



Re:

2022-06-17 Thread Jens Wahnes

Anh Luong:
I want my HAProxy to listen on a TCP port (5672) and HTTP ports (8082, 
3050, 3051, 3055, 8093, 9200) at the same time. I don't know whether I 
can enable both modes at the same time or not.


Yes, you can do that. The simplest and most straightforward way would be 
to define two different frontends in your HAProxy configuration, one for 
TCP mode and the other for HTTP mode.
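
For example, a sketch along those lines (the frontend and backend names 
are made up; the ports are the ones from your message) could look like 
this:

```
frontend amqp_in
    mode tcp
    bind :5672
    default_backend amqp_servers

frontend http_in
    mode http
    bind :8082,:3050,:3051,:3055,:8093,:9200
    default_backend web_servers
```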



Jens



Trouble with HTTP request queueing when using HTTP/2 frontend and HTTP/1.1 backend

2022-03-21 Thread Jens Wahnes

Hello,

I'm a happy user of HAProxy and so far have been able to resolve any 
issues I've had by reading the docs and following the answers given on 
this mailing list or online tutorials. But now I've run into a problem I 
cannot resolve myself and hope someone could help me figure out what 
might be wrong.


The setup I have has been running fine for many months now. HAProxy 
terminates TLS for HTTPS requests and forwards these requests to a 
couple of backends. The backend servers are HTTP/1.1 only. As long as 
the frontend is limited to HTTP/1.1, the special backend I use for file 
uploads is operating exactly as intended. After enabling HTTP/2 on the 
frontend, however, the file upload backends are not working as before. 
The requests will not be processed properly and run into timeouts.


These file upload backends in my HAProxy configuration are somewhat 
special. They try to "serialize" certain HTTP file upload requests made 
by browser clients via AJAX calls (i.e. drag-and-drop of several files 
at once into the browser window). These files need to be processed one 
after the other per client (not in parallel). So the HAProxy backend in 
question uses a number of servers with a "maxconn 1" setting each, which 
will process the first request immediately but queue subsequent HTTP 
requests coming in at the same time until the previous request is 
finished. This approach certainly is not perfect in design, but has been 
working for me when using a somewhat high arbitrary number of 
pseudo-servers to realize it, so that each client making these file 
upload requests will be served by one "server" exclusively. This is what 
the backend definition looks like:


```
backend upload_ananas
    option forwardfor if-none header X-Client-IP #except 127.0.0.0/8
    stick-table type string len 32 size 10k expire 2m
    stick match req.cook(mycookie),url_dec
    stick store-request req.cook(mycookie),url_dec
    timeout server 10m
    timeout queue 20s
    balance hdr(Cookie)
    default-server no-check maxconn 1 maxqueue 20 send-proxy-v2 track xy/ananas source "ipv6@[${BACKENDUSEIPV6}]"

    server-template a 32 "${ANANAS}":9876
```


Once I switched the HTTP frontend to use HTTP/2 (using "alpn 
h2,http/1.1"), this special backend is not working as expected anymore. 
All is fine as long as there is only one request present for a certain 
server at any given time. However, when there are two or more requests 
at the same time, i.e. as soon as the queueing mechanism is supposed to 
kick in, the setup is not working the way it does with an HTTP/1.1 
frontend. The parallel requests aren't properly put into the queue (or 
taken out of the queue) in this case. From what I can see in the log 
file, the requests seem to be blocking one another, and nothing is 
happening until the timeout set by "timeout queue" is reached. At that 
point, 1 or 2 out of 4 requests in an example call may succeed, but the 
others will fail.


Clearly, this kind of setup is quite the opposite of what most people 
will be using. In my case, I'm deliberately trying to stuff requests 
into a queue, whereas normally, one would try to move requests to a 
server that has got slots open for processing. So I think that my use 
case is hitting different code paths than most other setups.


I've read in previous emails on the mailing list that the "maxconn" 
setting nowadays does not limit the number of TCP sessions to the 
backend server, but the number of parallel HTTP requests. This made me 
wonder if the trouble I'm seeing might have to do with the way 
multiplexed HTTP/2 requests are mapped to HTTP/1.1 backends. Could it be 
that when the backend server finishes processing the first request, 
this isn't generating a proper event in HAProxy's backend logic, so that 
the next request is not being processed when it could? Or maybe there is 
something special about the number of "0" free slots in the server 
definition in this case, once the first slot has been taken?


Trying to work around the problem, I've switched on and off quite a few 
settings that may influence the way processing takes place, but still 
haven't been able to come up with a working configuration in the HTTP/2 
case. Some of the settings I tried were:

  * default-server max-reuse 0
  * http-reuse never
  * option http-server-close
  * option httpclose
  * option http-buffer-request
  * retry-on conn-failure 408 503
  * http-request wait-for-body time 15s at-least 16k if METH_POST

With HTTP/2 active, there will be log entries like this (I re-ordered 
them by the start of processing):


Mar 18 17:30:31 localhost haproxy[478250]: fd90:1234::21a:52738 [18/Mar/2022:17:29:51.411] Loadbalancer~ zyx_ananas/a24 12/0/1/40112/40125 200 1217 mycookie=3BRoK6tyijmqndBJRzLyT9Lq7dsiPmeT - 356/356/1/0/0 0/0 {serialize.example.com} "POST https://serialize.example.com/services/ajax.php/file/upload HTTP/2.0"
Mar 18 17:30:11 localhost haproxy[478250]: fd90:1234::21a:52738