Re: Should we change the -c output?
On Thu, Nov 9, 2023 at 5:00 PM William Lallemand wrote:
> Hello,
>
> haproxy -c seems to be too verbose in the systemd logs by
> showing "Configuration file is valid" for every reload.
>
> Is there anyone against removing this message by default?
> This will still output the alerts and warnings if some exist, but the
> "Configuration file is valid" message will only be displayed in
> combination with -V.
>
> People tend to use the return code of the command and not the output,
> but I prefer to ask.
>
> The change will only be applied starting from 2.9. Patch attached.
>
> --
> William Lallemand

Hi William,

I have been using this message for 13 years while manually checking confs :)
I think it may impact admins/devs who run these manual checks, but not too
hard, as we all look for "ERROR" or "WARNING" by default.
I think it's "ok" to change this. I will just miss it :D

Baptiste
Multiple http-check in backend
Hi,

I am trying to figure out how to use multiple http-check directives in my
backend, but I can't figure out the proper syntax. Any help is appreciated.

backend avax-mainnet
    option httpchk
    stick-table type ip size 1m expire 1h
    stick on src
    balance leastconn
    http-check send meth POST uri /ext/info hdr Content-Type application/json body "{"jsonrpc":"2.0","method":"info.isBootstrapped","params":[{"chain": "C"}],"i>
    http-check expect rstring "isBootstrapped":true
    http-check send meth POST uri /ext/info hdr Content-Type application/json body "{"jsonrpc":"2.0","method":"info.isBootstrapped","params":[{"chain": "q2aTwKuy>
    http-check expect rstring "isBootstrapped":true
    http-check send meth POST uri /ext/info hdr Content-Type application/json body "{"jsonrpc":"2.0","method":"info.isBootstrapped","params":[{"chain": "2K33xS9A>
    http-check expect rstring "isBootstrapped":true
    default-server inter 5s fall 3 rise 2 on-marked-down shutdown-sessions
    server mun2np001 10.0.2.10:9650 check
Re: Source IP in Status WEBpage
On Wed, Jun 8, 2022 at 10:28 AM Henning Svane wrote:
> Hi
>
> In the pfSense implementation of HAProxy it is possible to see who, with
> their IP address, is sending traffic through the load balancer.
>
> How can I do the same? I have looked at their autogenerated configuration,
> but cannot get the same to work with HAProxy under Ubuntu.
>
> I am using HAProxy version 2.5.7-1ppa1~focal 2022/05/14
>
> Regards
> Henning

Hi Henning,

I guess that you want to configure logging in HAProxy and check your
/var/log/syslog file. You may want to configure syslogd to send the HAProxy
log messages to a dedicated file.

Baptiste
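A minimal logging setup along the lines Baptiste describes (a sketch only; the facility, format and backend names are assumptions, not from the thread):

```haproxy
global
    # send logs to the local syslog daemon; local0 is a common choice
    log /dev/log local0 info

defaults
    log global
    mode http
    # httplog includes the client source IP and port in each log line
    option httplog

frontend fe_web
    bind :80
    default_backend be_web

backend be_web
    server web1 192.0.2.10:8080 check
```

With `option httplog`, every request line logged to syslog starts with the client's IP and port, which is the information visible in the pfSense status page.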
Re: SV: Traffic from HAproxy get error 401 and 500
Hi Henning,

Please remove this "option http-server-close" from your configuration,
entirely :)

Baptiste
Re: New .NET SPOE Library
On Thu, Jun 2, 2022 at 10:00 PM Sébastien Crocquesel <s.crocque...@inulogic.com> wrote:
> Dear all,
>
> I created a .NET library to build SPOP agents and released it under the
> MIT Licence. The library has been happily used in production for more than
> 2 years now and serves more than 10K req/s per agent node.
>
> I would be pleased if it could be referenced on the SPOE wiki page with
> the other current implementations.
>
> You may find more information at
> https://github.com/inulogic/HAProxy.StreamProcessingOffload.AgentFramework
>
> Best regards,
> Sebastien

Hi Sebastien!

Thx a lot for your contribution! I just updated the wiki page:
https://github.com/haproxy/wiki/wiki/SPOE:-Stream-Processing-Offloading-Engine

Baptiste
Re: Traffic from HAproxy get error 401 and 500
On Mon, May 30, 2022 at 11:58 PM Henning Svane wrote:
> Hi
>
> I have a strange problem.
>
> I have an HAProxy with 2 NICs:
> NIC 1, VLAN 110: HAProxy has IP 10.40.152.10/28
> NIC 2, VLAN 120: HAProxy has IP 10.40.252.10/28; this is also the VLAN
> for the Exchange server, IP 10.40.252.11/28
>
> I have an Outlook client in VLAN 100, 10.40.2.1/24.
>
> I have 2 cases for testing:
>
> Case 1: VLAN 100 <-> FW <-> (NIC 1, VLAN 110) HAProxy (NIC 2, Exchange VLAN 120) <-> Exchange Server
> Autodiscover.domain.com 10.40.152.10
> Mail.domain.com 10.40.152.10
>
> Frontend:
> acl XMail hdr(host) -i mail.domain.com autodiscover.domain.com domain.com
> acl XMail_Autodiscover url_beg -i /Autodiscover
> use_backend HA_DAG_XMail_Autodiscover if XMail XMail_Autodiscover
>
> Backend HA_DAG_XMail_Autodiscover:
> server XMailDB01 XMailDB01.domain.com:443 maxconn 100 ssl ca-file /etc/haproxy/crt/mail_domain_com.pem
>
> Case 2: VLAN 100 <-> FW <-> VLAN 120 Exchange Server
> Autodiscover.domain.com 10.40.252.11
> Mail.domain.com 10.40.252.11
>
> Case 1 gives HTTP errors 401 and 500; Case 2 works as it should.
>
> For Case 1, I have tried with Fiddler to find out what goes on but have
> not found out why I get errors 401 and 500. I am capturing traffic from
> both NIC 1 and NIC 2, but I cannot really find out what is going on and
> how to see what the problem is.
>
> Hope somebody has an idea how to fix this.
>
> Regards
> Henning

Hi Henning,

You can start HAProxy in debug mode to check what happens, and also share
the generated log lines; they may contain useful information such as the
termination status code for the session.

Baptiste
Re: [ANNOUNCE] haproxy-2.4.0
WOW, amazing release! So many new toys to play with, and a solid foundation for future improvements! Thank you all. Baptiste
Re: DNS service discovery and consistent hashing
On Thu, May 13, 2021 at 10:58 PM Andrew Rodland wrote:
> At Vimeo we have had a custom tool since 2015 that monitors the membership
> of clusters of servers, templates out a config with servers assigned to
> backends, and manages reloading haproxy. We're looking into replacing this
> with something a bit more off-the-shelf, and one of the options is
> HAProxy's own DNS service discovery support.
>
> We're also using URI-based load balancing with consistent hashing, and the
> stability of that mapping is important to us. Temporary disagreements while
> membership is changing are inevitable, but we want the portion of the hash
> space that a backend server sees to change as little as possible during its
> lifetime, and for multiple haproxies running the same config, against the
> same cluster, to converge on the same mapping. Our existing tool assigns a
> persistent ID to each server, which is mapped to an "id" option in the
> server line, which has worked quite well.
>
> From what we've seen in testing so far, using "server-template" with DNS
> *doesn't* give us the behavior we want — the assignment of servers to slots
> seems inconsistent, maybe depending on some combination of the order of
> answers in the DNS packet or the order that new server appearances are
> observed by haproxy.
>
> Long story short:
>
> 1. Is my interpretation right?
>
> 2. Would you be open to a patch to change that? I'm thinking of something
> like setting puid from a hash of the SRV name or the A address, "open
> addressing" style, with who goes first in case of a collision determined by
> lexicographic order — but I'm quite open to guidance.
>
> Or should I just look somewhere other than the DNS service discovery?
>
> Thanks,
>
> Andrew Rodland
>
> (Please CC, I'm not on the list.)

Hi Andrew,

The inconsistency of server ordering in the HAProxy configuration is related
to the DNS server implementation: HAProxy simply processes the records in
the order they appear in the DNS response, and assigns them accordingly.
DNS servers round-robin the AN records so that clients on the internet are
themselves round-robined across the servers. The point is simple: each
client individually uses the first AN record found in the payload, so the
changing order matters.

So first, if HAProxy (or your internal infrastructure) is the only client of
this DNS server, check whether it has an option to disable round-robining of
the AN records. That should do the trick.

Internally, HAProxy simply takes the first available server slot when a new
AN record is discovered. You could also influence this behavior in HAProxy
itself: the function resolv_validate_dns_response() (in resolvers.c) is the
place where we turn the buffer payload into an internal DNS structure. If
you can influence the ordering there, it should help solve this issue.

I would also recommend using server-state to ensure the ordering and
parameters are preserved across reloads.

You also have some other options, such as:
- use a third-party tool, outside of HAProxy, to perform the DNS resolution,
  sort the result in a consistent way and push it into HAProxy (you can use
  the Go client library https://github.com/haproxytech/client-native)
- implement your consistent hash in Lua and apply it to a use-server
  directive in your backend (this might impact performance)

Baptiste
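The third-party-tool option above can be sketched like this (illustrative only; a real tool would query DNS with a resolver library and push the result through the HAProxy runtime API or the client-native library):

```python
# Sketch: resolve SRV records outside HAProxy, sort them deterministically,
# and map each record to a stable server slot. The record tuples and the
# sort key are assumptions for illustration.

# An SRV answer as (priority, weight, port, target), e.g. from a resolver.
answers = [
    (5, 500, 80, "a2.tld."),
    (5, 500, 86, "a3.tld."),
    (5, 500, 80, "a1.tld."),
]

# Sorting by (target, port) gives every HAProxy instance the same slot
# assignment, no matter how the DNS server rotated the answer section.
stable = sorted(answers, key=lambda r: (r[3], r[2]))

for slot, (prio, weight, port, target) in enumerate(stable, start=1):
    # each line would then be pushed to HAProxy, e.g. via the stats socket:
    # "set server be/srv<slot> addr <resolved_ip> port <port>"
    print(slot, target, port)
```

The key point is that the ordering is decided by the tool, not by the arrival order of DNS answers, so all HAProxy instances converge on the same mapping.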
Re: Setup HAProxy as a Forward Proxy for SMTP
Hi,

From the first link, I understand you're trying to do the following:

user MUA ==> HAProxy ==> fleet of PowerMTA servers ==> Internet ==> destination MTA

Is this correct?

Baptiste

On Thu, May 6, 2021 at 5:13 AM Brizz Bane wrote:
> I want to set up HAProxy to act as a proxy for PowerMTA. I do not want a
> reverse-proxy or load-balancing setup, so what I'm trying to do is
> atypical and I've not found much online.
>
> Here are a couple of links describing PowerMTA's integration with HAProxy:
>
> https://www.sparkpost.com/docs/tech-resources/pmta-50-features/#outbound-proxy-support
> https://www.postmastery.com/powermta-5-0-using-a-proxy-for-email-delivery/
>
> I have searched for hours and asked everywhere that I could think of.
> I've not made any progress.
>
> How can I go about doing this? If you need any more information please
> let me know. ANY help or guidance would be greatly appreciated.
>
> Thank you,
>
> brizz
Re: [PATCH] BUG/MINOR: sample: Rename SenderComID/TargetComID to SenderCompID/TargetCompID
On Wed, Mar 10, 2021 at 5:15 AM Daniel Corbett wrote:
> Hello,
>
> The recently introduced Financial Information eXchange (FIX)
> converters have some hard-coded tags based on the specification that
> were misspelled. Specifically, SenderComID and TargetComID should
> be SenderCompID and TargetCompID according to the specification [1][2].
>
> This patch updates all references, which includes the converters
> themselves, the regression test, and the documentation.
>
> [1] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag49.html
> [2] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag56.html
>
> Thanks,
> -- Daniel

Hi,

Thank you Daniel for reporting / fixing this. The patch looks correct and
may be applied.

Baptiste
[PATCH] DNS: SRV resolution ignores duplicated AR records
Hi,

Since 2.2, the HAProxy runtime resolver uses the Additional records of an
SRV response when available. That said, there was a small bug when multiple
records point to the same hostname and IP: the subsequent ones may become
"unsynchronized". This patch fixes this issue.

This is also related to github issue 971. Backport status is 2.2 and above.

Baptiste

From 78ddb9c32a1bb09e05ac592f8f8862491465aa69 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Wed, 25 Nov 2020 08:17:59 +0100
Subject: [PATCH] BUG/MINOR: dns: SRV records ignores duplicated AR records

This bug happens when a service has multiple records on the same host and
the server provides the A/AAAA resolution in the response as AR (Additional
Records). In such a condition, the first occurrence of the host will be
taken from the Additional section, while the second (and next ones) will be
processed by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the resolution
may diverge, as described in github issue #971.

Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname. IE, with the
following type of response:

;; ANSWER SECTION:
_http._tcp.be2.tld.    3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 85 A3.tld.

;; ADDITIONAL SECTION:
A2.tld.                3600 IN A 192.168.0.2
A3.tld.                3600 IN A 192.168.0.3
A1.tld.                3600 IN A 192.168.0.1
A3.tld.                3600 IN A 192.168.0.3

the first A3 host is resolved using the Additional section and the second
one through a dedicated A request.

When linking the SRV records to their respective Additional ones, a
condition was missing (check whether said SRV record is already attached to
an Additional one), leading to stop processing SRV records only when the
target SRV field matches the Additional record name. Hence only the first
occurrence of a target was managed by an additional record. This patch adds
a condition in this loop to ensure the record being parsed is not already
linked to an Additional record. If so, we can carry on the parsing to find
a possible next one with the same target field value.

backport status: 2.2 and above
---
 src/dns.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/dns.c b/src/dns.c
index 3d484263c..90efdad34 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1027,6 +1027,10 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 			dns_answer_record->data_len = len;
 			memcpy(dns_answer_record->target, tmpname, len);
 			dns_answer_record->target[len] = 0;
+			if (dns_answer_record->ar_item != NULL) {
+				pool_free(dns_answer_item_pool, dns_answer_record->ar_item);
+				dns_answer_record->ar_item = NULL;
+			}
 			break;

 		case DNS_RTYPE_AAAA:
@@ -1276,6 +1280,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 		// looking for the SRV record in the response list linked to this additional record
 		list_for_each_entry(tmp_record, &dns_p->answer_list, list) {
 			if (tmp_record->type == DNS_RTYPE_SRV &&
+			    tmp_record->ar_item == NULL &&
 			    !dns_hostname_cmp(tmp_record->target, dns_answer_record->name, tmp_record->data_len)) {
 				/* Always use the received additional record to refresh info */
 				if (tmp_record->ar_item)
-- 
2.25.1
Re: [PATCH] BUG/MINOR: dns: SRV records ignores duplicated records
On Fri, Nov 27, 2020 at 4:57 AM Baptiste wrote:
> Hi,
>
> This patch should fix github issue 971. I was not able to reproduce the
> bug myself, but the behavior of HAProxy in Hynek's environment makes me
> think this is a good candidate.
>
> In short, when a server returns an SRV response with multiple SRV records
> sharing the same target field value (the destination hostname) with a
> different service port, and also provides the A/AAAA resolution in the
> Additional section, then the first occurrence of the hostname is managed
> by the Additional section, while the second (and next) occurrence(s) will
> be managed by a dedicated A/AAAA resolution.
> This can lead to a divergence of resolution at some point, and in Hynek's
> case, the server managed via the A/AAAA resolution was not updated.
>
> This patch ensures that every SRV record is linked to its Additional
> records, so no divergence can happen anymore.
>
> This should be backported to 2.2 and 2.3.

Hi,

Please don't apply right now; I think I can still do some clean up in there
:) By the way, I was able to install a nomad + consul cluster and confirm
this patch fixes the bug in github issue 971.

Baptiste
[PATCH] BUG/MINOR: dns: SRV records ignores duplicated records
Hi,

This patch should fix github issue 971. I was not able to reproduce the bug
myself, but the behavior of HAProxy in Hynek's environment makes me think
this is a good candidate.

In short, when a server returns an SRV response with multiple SRV records
sharing the same target field value (the destination hostname) with a
different service port, and also provides the A/AAAA resolution in the
Additional section, then the first occurrence of the hostname is managed by
the Additional section, while the second (and next) occurrence(s) will be
managed by a dedicated A/AAAA resolution. This can lead to a divergence of
resolution at some point, and in Hynek's case, the server managed via the
A/AAAA resolution was not updated.

This patch ensures that every SRV record is linked to its Additional
records, so no divergence can happen anymore.

This should be backported to 2.2 and 2.3.

From b3c9ba9a7bf207c7f648a9885decc2631308850c Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Wed, 25 Nov 2020 08:17:59 +0100
Subject: [PATCH] BUG/MINOR: dns: SRV records ignores duplicated AR records

This bug happens when a service has multiple records on the same host and
the server provides the A/AAAA resolution in the response as AR (Additional
Records). In such a condition, the first occurrence of the host will be
taken from the Additional section, while the second (and next ones) will be
processed by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the resolution
may diverge, as described in github issue #971.

Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname. IE, with the
following type of response:

;; ANSWER SECTION:
_http._tcp.be2.tld.    3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld.    3600 IN SRV 5 500 85 A3.tld.

;; ADDITIONAL SECTION:
A2.tld.                3600 IN A 192.168.0.2
A3.tld.                3600 IN A 192.168.0.3
A1.tld.                3600 IN A 192.168.0.1
A3.tld.                3600 IN A 192.168.0.3

the first A3 host is resolved using the Additional section and the second
one through a dedicated A request.

When linking the SRV records to their respective Additional ones, a
condition was missing (check whether said SRV record is already attached to
an Additional one), leading to stop processing SRV records only when the
target SRV field matches the Additional record name. Hence only the first
occurrence of a target was managed by an additional record. This patch adds
a condition in this loop to ensure the record being parsed is not already
linked to an Additional record. If so, we can carry on the parsing to find
a possible next one with the same target field value.

backport status: down to 2.2
---
 src/dns.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/dns.c b/src/dns.c
index 3d484263c..63faf1561 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1027,6 +1027,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 			dns_answer_record->data_len = len;
 			memcpy(dns_answer_record->target, tmpname, len);
 			dns_answer_record->target[len] = 0;
+			dns_answer_record->ar_item = NULL;
 			break;

 		case DNS_RTYPE_AAAA:
@@ -1276,6 +1277,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 		// looking for the SRV record in the response list linked to this additional record
 		list_for_each_entry(tmp_record, &dns_p->answer_list, list) {
 			if (tmp_record->type == DNS_RTYPE_SRV &&
+			    tmp_record->ar_item == NULL &&
 			    !dns_hostname_cmp(tmp_record->target, dns_answer_record->name, tmp_record->data_len)) {
 				/* Always use the received additional record to refresh info */
 				if (tmp_record->ar_item)
-- 
2.25.1
Re: [ANNOUNCE] haproxy-2.4-dev1
Hi,

Cool release, and another +1 for the backport of "del-header -m".

Baptiste
[PATCH] dns: major bug fix for 2.2
Hi,

A couple of patches for the DNS runtime resolver:

#1 is just a typo cleanup.
#2 fixes a "regression" introduced with the parsing of the Additional
section of the SRV record responses. Basically, when HAProxy uses SRV
records and Additional sections together, a server may not recover from its
MAINT status after a scale-down and scale-up operation sequence. This can
lead to all servers going into MAINT in a backend.

Both should be backported to 2.2.

Baptiste

From 04e6e0941f1e84ca3d41dfac00cd253c010a9422 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Tue, 4 Aug 2020 10:54:14 +0200
Subject: [PATCH 1/2] CLEANUP: dns: typo in reported error message

"record" instead of "recrd".

This should be backported to 2.2.
---
 src/dns.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index c8f34874d..c97c7dc69 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -634,10 +634,10 @@ static void dns_check_dns_response(struct dns_resolution *res)
 			switch (item->ar_item->type) {
 				case DNS_RTYPE_A:
-					update_server_addr(srv, &(((struct sockaddr_in*)&item->ar_item->address)->sin_addr), AF_INET, "DNS additional recrd");
+					update_server_addr(srv, &(((struct sockaddr_in*)&item->ar_item->address)->sin_addr), AF_INET, "DNS additional record");
 					break;
 				case DNS_RTYPE_AAAA:
-					update_server_addr(srv, &(((struct sockaddr_in6*)&item->ar_item->address)->sin6_addr), AF_INET6, "DNS additional recrd");
+					update_server_addr(srv, &(((struct sockaddr_in6*)&item->ar_item->address)->sin6_addr), AF_INET6, "DNS additional record");
 					break;
 			}
-- 
2.17.1

From 3ec65443136714e4549886e5d970b47f0f52b41c Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Tue, 4 Aug 2020 10:57:21 +0200
Subject: [PATCH 2/2] MAJOR: dns: disabled servers through SRV records never
 recover

A regression was introduced by 13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b when
I added support for the Additional section of the SRV responses.

Basically, when a server is managed through the SRV records' Additional
section and it is disabled (because its associated Additional record has
disappeared), it never leaves its MAINT state and so never comes back into
production.

This patch updates the snr_update_srv_status() function to clear the MAINT
status when the server now has an IP address, and also ensures this function
is called when parsing Additional records (and associating them to new
servers).

This can cause a severe outage for people using HAProxy + consul (or any
other service registry) through DNS service discovery.

This should fix issue #793. This should be backported to 2.2.
---
 src/dns.c    | 3 +++
 src/server.c | 6 ++++++
 2 files changed, 9 insertions(+)

diff --git a/src/dns.c b/src/dns.c
index c97c7dc69..333780293 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -648,6 +648,9 @@ static void dns_check_dns_response(struct dns_resolution *res)
 			if (msg)
 				send_log(srv->proxy, LOG_NOTICE, "%s", msg);

+			/* now we have an IP address associated to this server, we can update its status */
+			snr_update_srv_status(srv, 0);
+
 			srv->svc_port = item->port;
 			srv->flags &= ~SRV_F_MAPPORTS;
 			if ((srv->check.state & CHK_ST_CONFIGURED) &&
diff --git a/src/server.c b/src/server.c
index 918294b2f..3f26104cc 100644
--- a/src/server.c
+++ b/src/server.c
@@ -3733,6 +3733,12 @@ int snr_update_srv_status(struct server *s, int has_no_ip)
 	/* If resolution is NULL we're dealing with SRV records Additional records */
 	if (resolution == NULL) {
+		/* since this server has an IP, it can go back in production */
+		if (has_no_ip == 0) {
+			srv_clr_admin_flag(s, SRV_ADMF_RMAINT);
+			return 1;
+		}
+
 		if (s->next_admin & SRV_ADMF_RMAINT)
 			return 1;
-- 
2.17.1
Re: SRV records resolution failure if Authority section is present
On Tue, Jul 28, 2020 at 2:59 PM Jerome Magnin wrote:
> Hi,
>
> On Sun, Jul 26, 2020 at 10:41:18PM +0200, Willy Tarreau wrote:
> > Thanks Jérôme,
> >
> > CCing Baptiste for approval (in case we've missed anything, I'm clueless
> > about DNS).
>
> Baptiste just reviewed my patch and made a couple of suggestions, so
> please find an update attached to this email.

Hi,

Patch approved :) Thx again Jerome!

Baptiste
Re: [PATCH] BUG/MAJOR: dns: fix null pointer dereference in snr_update_srv_status
Hi Jerome,

Thanks a lot for the debugging and the fix. This is all good and can be
applied.

Baptiste

On Tue, Jul 28, 2020 at 2:09 PM Jerome Magnin wrote:
> Hi,
>
> this is a patch for issue #775.
>
> --
> Jérôme
Re: Termination state: CL--
On Mon, Jun 1, 2020 at 1:40 PM Gaetan Deputier <gaetan.deput...@googlemail.com> wrote:
> Hello!
>
> We have recently observed that a very small number of our connections
> ended with the following state: CL--. Those connections are coming from
> browsers and are correlated with weird behaviours observed in our
> downstream application (where an HTTP header and a body seem to be
> exchanged with another request).
>
> Looking at the documentation, this state means:
>
> C : the TCP session was unexpectedly aborted by the client.
> L : the proxy was still transmitting LAST data to the client while the
>     server had already finished. This one is very rare as it can only
>     happen when the client dies while receiving the last packets.
>
> Does someone have more details about the L state specifically? What
> should we expect in our application in terms of sessions/packets/requests?
> Thanks!
> G-

Hi Gaetan,

As Alexandar said, we would need your anonymized configuration and your
HAProxy version.

Baptiste
Re: Time applied on DNS resolution with valid response
On Thu, May 21, 2020 at 11:47 AM Ricardo Fraile wrote:
> Hello,
>
> I'm facing a strange behaviour with DNS resolution and timeout/hold
> times. As a testing environment, I use HAProxy 1.8.25 and this sample
> conf:
>
> global
>     master-worker
>     log /dev/log local5 info
>     pidfile /var/run/haproxy.pid
>     nbproc 1
>
> resolvers dns
>     nameserver dns1 1.1.1.1:53
>     resolve_retries 3
>     timeout resolve 5s
>     timeout retry   10s
>     hold other      10s
>     hold valid      60s
>     hold obsolete   10s
>     hold refused    10s
>     hold nx         10s
>     hold timeout    10s
>
> listen proxy-tcp
>     mode tcp
>     bind *:80
>     default-server check resolvers dns init-addr none resolve-prefer ipv4
>     server host1 host1:80
>
> On the DNS server, the entry for host1 is valid, as noted here:
>
> # dig host1 @1.1.1.1
>
> ;; ANSWER SECTION:
> host1. 300 IN A 7.7.7.7
>
> But capturing the network traffic on the DNS server I can see the
> following:
>
> 11:29:31.064136 IP [bal_ip].49967 > dns1: 121+ [1au] A? host1. (62)
> 11:29:36.065749 IP [bal_ip].49967 > dns1: 14393+ [1au] A? host1. (62)
> 11:29:41.067816 IP [bal_ip].49967 > dns1: 35337+ [1au] A? host1. (62)
>
> Every 5 seconds, as defined in "timeout resolve", it sends a query.
> But as the response is valid, why doesn't HAProxy hold it for the time
> defined in "hold valid", 60 seconds?
>
> Thanks,

Hi Ricardo,

"hold valid" means that we keep the last valid response for said period if
the server becomes unresponsive or returns NX. HAProxy carries on performing
queries at the "timeout resolve" period to ensure a faster convergence when
the response is updated.

Baptiste
Re: reverse proxy with dynamic servers without restart
On Wed, Apr 29, 2020 at 7:49 AM Michal Vala wrote:
> Hello,
>
> I would like to do a reverse proxy with haproxy so that part of the path
> resolves to the server, and I need to do it dynamically: servers are
> created and destroyed on the fly. I also need to do a path rewrite so
> that the server part is removed from the path (I can do that with
> http-request set-path %[path,regsub(^/ws-.+\//?,/)] on the backend).
>
> For example, 'http://example.com/server-123abc/api/users' requests
> server 'server-123abc' on '/api/users'.
>
> Is it possible to do this completely dynamically, without knowing server
> addresses ahead of time (we generate them dynamically with "random"
> names) AND without restarting haproxy to load a new configuration? I was
> thinking about some regex matching, but I don't know how to do a dynamic
> server address, or whether it is somehow possible.
>
> Thanks
>
> --
> Michal Vala
> Software Engineer, Eclipse Che
> Red Hat Czech

Hi Michal,

Yes, you can do this with the HTTP action 'do-resolve':
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-http-request%20do-resolve

You can extract whatever information you need, do a DNS resolution on it,
and use the resulting IP as the destination.

Baptiste
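A rough sketch of what that could look like (untested; the resolvers section, the ".example.internal" domain suffix and all names are assumptions, not from the thread):

```haproxy
resolvers internal
    nameserver dns1 10.0.0.2:53

frontend fe_ws
    bind :80
    # grab the first path segment, e.g. "server-123abc" from /server-123abc/api/users
    http-request set-var(txn.srv_name) path,field(2,/)
    # resolve "<segment>.example.internal" at runtime (domain is an assumption)
    http-request do-resolve(txn.srv_ip,internal,ipv4) var(txn.srv_name),concat(,,.example.internal)
    # refuse the request if the name did not resolve
    http-request deny if !{ var(txn.srv_ip) -m found }
    http-request set-dst var(txn.srv_ip)
    # strip the server segment before forwarding
    http-request set-path %[path,regsub(^/[^/]+/,/)]
    default_backend be_dynamic

backend be_dynamic
    # the destination address comes from set-dst; 0.0.0.0:0 is a placeholder
    server dynamic 0.0.0.0:0
```

The design point is that no server address ever appears in the configuration: the first path segment is turned into a hostname, resolved per request, and the result feeds set-dst.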
Re: Server weight in server-template and consul dns
On Mon, Apr 27, 2020 at 3:05 AM Igor Cicimov wrote:
> Hi,
>
> On Mon, Apr 20, 2020 at 10:25 PM Igor Cicimov <ig...@encompasscorporation.com> wrote:
>> Hi,
>>
>> I have the following template in a server backend:
>>
>> server-template tomcats 10 _tomcat._tcp.service.consul resolvers consul resolve-prefer ipv4 check
>>
>> This is the SRV records resolution:
>>
>> # dig +short @127.0.0.1 -p 8600 _tomcat._tcp.service.consul SRV
>> 1 10 8080 ip-10-20-3-21.node.dc1.consul.
>> 1 10 8080 ip-10-20-4-244.node.dc1.consul.
>>
>> The server's weight reported by haproxy is 1, where I expected to see 10.
>> Just to clarify, is this expected, or is there a mixup between priority
>> and weight?
>>
>> Thanks,
>> Igor
>
> Giving this another try. Maybe Baptiste can help to clarify which part of
> the SRV record is considered as the server weight: the record priority or
> the record weight?
>
> Thanks,
> Igor

Hi,

It is the record weight. There is a trick for weights: the DNS weight range
is from 0 to 65535, while the HAProxy weight range is from 0 to 256. So
basically, your DNS weight is divided by 256 before being applied; adjust
your DNS weight accordingly.

Baptiste
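A quick sanity check of that mapping (assuming plain integer division by 256, as described above; HAProxy's exact rounding may differ slightly):

```python
# DNS SRV weight range is 0..65535; HAProxy server weight range is 0..256.
# Per the explanation above, the DNS weight is divided by 256 before being
# applied (assumption: plain integer division).
def haproxy_weight(dns_weight):
    return dns_weight // 256

# Igor's records carry a DNS weight of 10, which collapses to almost
# nothing, hence the tiny effective weight he observed.
print(haproxy_weight(10))     # small, not 10
print(haproxy_weight(2560))   # to get an HAProxy weight of 10, use 10 * 256
print(haproxy_weight(65535))  # maximum DNS weight lands near the HAProxy max
```

So to make HAProxy report a weight of 10, the SRV record should carry a weight of 2560, not 10.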
Re: random 502's
Hi,

First, you need to set a global maxconn to 3000, otherwise it may be
limited by your system. In any case, the frontend maxconn will never be
reachable with your current config.

Do you know if this happens on keep-alive requests, or on the first request
of the connection? Do you have some timers provided by Apache for this
session? How many connections are established between Apache and HAProxy?

Baptiste
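A minimal sketch of the maxconn point (the 3000 value comes from the thread; section names are illustrative): the per-frontend limit only becomes effective when the global one is at least as large.

```haproxy
global
    # process-wide connection limit; without this, the effective limit can
    # come from compiled-in or system defaults and sit below the frontend's
    maxconn 3000

frontend fe_web
    bind :80
    # per-frontend limit; only reachable if the global maxconn allows it
    maxconn 3000
    default_backend be_apache

backend be_apache
    server apache1 192.0.2.20:8080 check
```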
[PATCH] MINOR: http_fetch: capture.req.ver not compatible with H2
The function smp_fetch_capture_req_ver, called when using the fetch
capture.req.ver, doesn't return the right protocol version when H2 is in
use: it returns only "HTTP/1.1". This patch fixes this behavior; now the
expected string is returned, whatever protocol is used.

Baptiste

From 496563f9fd06fb41b9c90dd1d1a9dd2c48c46ed9 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Fri, 24 Apr 2020 09:14:54 +0200
Subject: [PATCH] MINOR: http_fetch: capture.req.ver not compatible with H2

The function smp_fetch_capture_req_ver, called when using the fetch
capture.req.ver, doesn't return the right protocol version when H2 is in
use: it returns only "HTTP/1.1". This patch fixes this behavior; now the
expected string is returned, whatever protocol is used.
---
 src/http_fetch.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/src/http_fetch.c b/src/http_fetch.c
index 9cfcee2ab..ec611c16a 100644
--- a/src/http_fetch.c
+++ b/src/http_fetch.c
@@ -1439,20 +1439,40 @@ static int smp_fetch_capture_req_uri(const struct arg *args, struct sample *smp,
 static int smp_fetch_capture_req_ver(const struct arg *args, struct sample *smp, const char *kw, void *private)
 {
 	struct http_txn *txn = smp->strm->txn;
+	struct ist version;
+	const char *ptr;

-	if (!txn || txn->req.msg_state >= HTTP_MSG_BODY)
+	if (!txn || !txn->uri)
 		return 0;

-	if (txn->req.flags & HTTP_MSGF_VER_11)
-		smp->data.u.str.area = "HTTP/1.1";
-	else
-		smp->data.u.str.area = "HTTP/1.0";
+	ptr = txn->uri;

-	smp->data.u.str.data = 8;
-	smp->data.type = SMP_T_STR;
+	/* find and skip first space */
+	while (*ptr != ' ' && *ptr != '\0')
+		ptr++;
+	if (!*ptr)
+		return 0;
+	++ptr;
+
+	/* find and skip second space */
+	while (*ptr != ' ' && *ptr != '\0')
+		ptr++;
+	if (!*ptr)
+		return 0;
+	++ptr;
+
+	/* find the end of the string */
+	version = ist2(ptr, 0);
+	while (*ptr != '\0')
+		ptr++;
+	version.len = ptr - version.ptr;
+
+	smp->data.u.str.area = version.ptr;
+	smp->data.u.str.data = version.len;
+	smp->data.type = SMP_T_STR;
 	smp->flags = SMP_F_CONST;
-	return 1;
+
+	return 1;
 }

 /* Retrieves the HTTP version from the response (either 1.0 or 1.1) and emits it
-- 
2.17.1
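The parsing logic of the patch, sketched in Python for clarity (an illustration only; it mirrors the C loops that skip the two spaces of the captured request line and keep the trailing token):

```python
# txn->uri holds the captured request line, e.g. "GET /index.html HTTP/1.1".
# The patch skips to the first space, then the second, and returns the rest,
# so the version string comes from the request itself instead of a constant.
def capture_req_ver(request_line):
    first = request_line.find(' ')
    if first == -1:
        return None   # malformed: no method/URI separator
    second = request_line.find(' ', first + 1)
    if second == -1:
        return None   # malformed: no URI/version separator
    return request_line[second + 1:]

print(capture_req_ver("GET /index.html HTTP/1.1"))  # HTTP/1.1
print(capture_req_ver("GET /index.html HTTP/2.0"))  # HTTP/2.0
```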
Re: server-state application failed for server 'x/y', invalid srv_admin_state value '32'
Hi Piba,

My answers inline.

> Using 2.2-dev5-c3500c3, I've got both a server and a
> servertemplate/server that are marked 'down' due to dns not replying
> with (enough) records. That by itself is alright.. (and likely has been
> like that for a while so i don't think its a regression.)

You're right, this has always been like that.

> But when i perform a 'seemless reload' with a serverstates file it
> causes the warnings below for both server and template.:
> [WARNING] 095/150909 (74796) : server-state application failed for
> server 'x/y', invalid srv_admin_state value '32'
> [WARNING] 095/150909 (74796) : server-state application failed for
> server 'x2/z3', invalid srv_admin_state value '32'
>
> Is there a way to get rid of these warnings, and if 32 is a invalid
> value, how did it get into the state file at all?

I can confirm this is not supposed to happen! And I could reproduce this
behavior since HAProxy 1.8.

> Not sure if its a bug or a feature request, but i do think it should be
> changed :). Can it be added to some todo list? Thanks.

This is a bug from my point of view. I'll check this. Could you please open
a github issue and tag me in there?

Baptiste
Re: Multiple balance statements in a backend
On Fri, Apr 3, 2020 at 5:21 AM Igor Cicimov wrote: > Hi all, > > Probably another quite basic question that I can't find an example of in > the docs (at least as a warning not to do that as it does not make sense or > bad practise) or on the net. It is regarding the usage of multiple balance > statements in a backend like this: > > balance leastconn > balance hdr(Authorization) > > So basically is this a valid use case where we can expect both options to > get considered when load balancing or one is ignored as a duplicate (in > which case which one)? > > And in general how are duplicate statements being handled in the code, > .i.e. the first one or the last one is considered as valid, and are there > maybe any special statements that are exempt from the rule (like hopefully > balance :-) ) > > Thanks in advance. > > Igor > > Hi Igor, duplicate statement processing depends on the keyword: very few are cumulative, and most of them use "last found match" semantics. To come back to the original point, you already have a chance to combine 2 LB algorithms: if you do 'balance hdr(Authorization)' and no Authorization header can be found, then HAProxy falls back to round robin mode. Now, if you need persistence, I think you can enable "balance leastconn" and then use a stick table to route known Authorization headers to the right server. More information here: https://www.haproxy.com/fr/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/ Baptiste
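The fallback described above — header-based balancing reverting to round robin when the header is absent — can be sketched in a few lines of Python. The server names and the CRC32 hash are illustrative only; they are not HAProxy's internal hash:

```python
import itertools
import zlib

servers = ["s1", "s2", "s3"]
_round_robin = itertools.cycle(servers)

def pick_server(auth_header):
    """balance hdr(Authorization): hash the header value onto a server;
    when the header is missing or empty, fall back to round robin."""
    if not auth_header:
        return next(_round_robin)
    return servers[zlib.crc32(auth_header.encode()) % len(servers)]
```

The key property is that the same Authorization value always lands on the same server, while header-less requests are spread evenly.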
[PATCH] Converter to support Financial eXchange protocol
Hi there, These patches introduce a few functions to the ist API and also a converter to validate a FIX message and to extract data from a FIX payload. Thanks to Christopher for his help during this development. Baptiste From 4e9de7128c7065dc01b423dcce13b18487f1f353 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Tue, 17 Mar 2020 10:18:41 +0100 Subject: [PATCH 4/4] MINOR: conv: parses Financial Information eXchange messages This patch implements a couple of converters to validate and extract data from a FIX message. The validation consists in a few checks such as mandatory fields and checksum computation. The extraction can get any tag value based on a tag string or tag id. --- doc/configuration.txt | 36 include/proto/fix.h | 200 ++ include/types/fix.h | 55 src/sample.c | 72 +++ 4 files changed, 363 insertions(+) create mode 100644 include/proto/fix.h create mode 100644 include/types/fix.h diff --git a/doc/configuration.txt b/doc/configuration.txt index 8347e8a4d..81b53c59f 100644 --- a/doc/configuration.txt +++ b/doc/configuration.txt @@ -13926,6 +13926,42 @@ field(<index>,<delimiters>[,<count>]) str(f1_f2_f3__f5),field(-2,_,3) # f2_f3_ str(f1_f2_f3__f5),field(-3,_,0) # f1_f2_f3 +fix_tag_value(<tag>) + Parses a FIX (Financial Information eXchange) message and extracts the value + from the tag <tag>. + <tag> can be a string or an integer pointing to the desired tag. Any integer + value is accepted, but only the following strings are translated into their + integer equivalent: BeginString, BodyLength, MsgType, SenderComID, + TagetComID, CheckSum. If more are needed, we can add them in proto/fix.h + easily. + + Note: only the first message sent by the client and the server can be parsed.
+ + Example: + tcp-request inspect-delay 10s + acl data_in_buffer req.len gt 10 + # MsgType tag ID is 35, so both lines below will return the same content + tcp-request content set-var(txn.foo) req.payload(0,0),fix_tag_value(35) \ + if data_in_buffer + tcp-request content set-var(txn.bar) req.payload(0,0),fix_tag_value(MsgType) \ + if data_in_buffer + +fix_validate + Parses a binary payload and performs sanity checks regarding FIX (Financial + Information eXchange): + - checks the BeginString tag + - checks that all tag IDs are numeric + - checks that the last tag in the message is the CheckSum one + - validates that the checksum is right + + This converter returns a boolean, true if the payload contains a valid FIX + message, false if not. + + Example: + tcp-request inspect-delay 10s + acl data_in_buffer req.len gt 10 + tcp-request content reject if data_in_buffer !{ req.payload(0,0),fix_validate } + hex Converts a binary input sample to a hex string containing two hex digits per input byte. It is used to log or transfer hex dumps of some binary input data diff --git a/include/proto/fix.h b/include/proto/fix.h new file mode 100644 index 0..e7b8cf5ac --- /dev/null +++ b/include/proto/fix.h @@ -0,0 +1,200 @@ +/* + * include/proto/fix.h + * This file contains functions and macros declarations for FIX protocol decoding. + * + * Copyright 2020 Baptiste Assmann + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation, version 2.1 + * exclusively. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details.
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#ifndef _PROTO_FIX_H +#define _PROTO_FIX_H + +#include +#include + +#include + + +/* + * Return a FIX tag ID ptr from if one found, NULL if not. + * + * full list of tag ID available here, just in case we need to support more "string" equivalent in the future: + * https://www.onixs.biz/fix-dictionary/4.2/fields_by_tag.html + */ +static inline struct ist fix_tagid(struct ist tag) +{ + if (istisnumeric(tag)) + return tag; + + else if (strcasecmp(tag.ptr, "BeginString") == 0) + return FIX_TAG_BeginString; + + else if (strcasecmp(tag.ptr, "BodyLength") == 0) + return FIX_TAG_BodyLength; + + else if (strcasecmp(tag.ptr, "CheckSum") == 0) + return FIX_TAG_CheckSum; + + else if (strcasecmp(tag.ptr, "MsgType") == 0) + return FIX_TAG_MsgType; + + else if (strcasecmp(tag.ptr, "SenderComID") == 0) + return FIX_TAG_SenderComID; + + else if (strcasecmp(tag.ptr, "TagetComID") == 0) + return FIX_TAG_TargetComID; + + retu
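The checksum rule the converter validates is simple: CheckSum(10) is the byte sum of everything up to and including the SOH delimiter that precedes the `10=` field, modulo 256, printed as three ASCII digits. A standalone Python sketch of that check (my own helper names, not the patch's C code):

```python
SOH = b"\x01"  # FIX field delimiter

def fix_checksum(data):
    """CheckSum(10): byte sum of the message up to and including the SOH
    preceding the CheckSum field, modulo 256, as three ASCII digits."""
    return b"%03d" % (sum(data) % 256)

def checksum_is_valid(msg):
    """The last check fix_validate performs: the trailing 10=NNN<SOH>
    field must match the byte sum of everything before it."""
    head, sep, tail = msg.rpartition(b"10=")
    if not sep or not head.endswith(SOH) or not tail.endswith(SOH):
        return False
    return tail[:-1] == fix_checksum(head)

# Build a minimal (not semantically complete) message and self-check it:
body = SOH.join([b"8=FIX.4.2", b"9=5", b"35=A"]) + SOH
msg = body + b"10=" + fix_checksum(body) + SOH
```

A tampered checksum field makes `checksum_is_valid` return False, which is what causes the converter to reject the payload.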
[PATCHES] dns related
Hi there, A couple of patches here to clean up and fix some bugs introduced by 13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b. Baptiste From 801e4f1d7ad1f9858f4b646fc4badebab3b46715 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Wed, 19 Feb 2020 00:53:26 +0100 Subject: [PATCH 1/2] CLEANUP: remove obsolete comments This patch removes some old comments introduced by 13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b. Those comments are related to issues already fixed. --- src/dns.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/dns.c b/src/dns.c index bbc4f4ac1..3e52e1731 100644 --- a/src/dns.c +++ b/src/dns.c @@ -1030,7 +1030,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, /* now parsing additional records */ nb_saved_records = 0; - //TODO: check with Dinko for DNS poisoning for (i = 0; i < dns_p->header.arcount; i++) { if (reader >= bufend) return DNS_RESP_INVALID; @@ -1202,7 +1201,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, continue; tmp_record->ar_item = dns_answer_record; } - //TODO: there is a leak for now, since we don't clean up AR records LIST_ADDQ(&dns_p->ar_list, &dns_answer_record->list); } -- 2.17.1 From 9c5f4f464380a1f67c7d5d802d6c05c0086cebfe Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Wed, 19 Feb 2020 01:08:51 +0100 Subject: [PATCH 2/2] BUG/MEDIUM: dns: improper parsing of additional records 13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b introduced parsing of the Additional DNS response section to pick up IP addresses when available. That said, this introduced a side effect for other query types (A and AAAA), leading to those responses being considered invalid when parsing the Additional section. This patch avoids this situation by ensuring the Additional section is parsed only for SRV queries.
--- src/dns.c | 26 ++ 1 file changed, 6 insertions(+), 20 deletions(-) diff --git a/src/dns.c b/src/dns.c index 3e52e1731..953f9414c 100644 --- a/src/dns.c +++ b/src/dns.c @@ -1028,7 +1028,9 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, /* Save the number of records we really own */ dns_p->header.ancount = nb_saved_records; - /* now parsing additional records */ + /* now parsing additional records for SRV queries only */ + if (dns_query->type != DNS_RTYPE_SRV) + goto skip_parsing_additional_records; nb_saved_records = 0; for (i = 0; i < dns_p->header.arcount; i++) { if (reader >= bufend) @@ -1043,25 +1045,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, if (len == 0) { pool_free(dns_answer_item_pool, dns_answer_record); - return DNS_RESP_INVALID; - } - - /* Check if the current record dname is valid. previous_dname - * points either to queried dname or last CNAME target */ - if (dns_query->type != DNS_RTYPE_SRV && memcmp(previous_dname, tmpname, len) != 0) { - pool_free(dns_answer_item_pool, dns_answer_record); - if (i == 0) { -/* First record, means a mismatch issue between - * queried dname and dname found in the first - * record */ -return DNS_RESP_INVALID; - } - else { -/* If not the first record, this means we have a - * CNAME resolution error */ -return DNS_RESP_CNAME_ERROR; - } - + continue; } memcpy(dns_answer_record->name, tmpname, len); @@ -1206,6 +1190,8 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, } } /* for i 0 to arcount */ + skip_parsing_additional_records: + /* Save the number of records we really own */ dns_p->header.arcount = nb_saved_records; -- 2.17.1
Re: Understanding resolvers usage
On Fri, Mar 20, 2020 at 5:02 PM Veiko Kukk wrote: > Hi > > I'd like to have better understanding how server-template and resolvers > work together. HAproxy 1.9.14. > > Relevant sections from config: > > resolvers dns >accepted_payload_size 1232 >parse-resolv-conf >hold valid 90s >resolve_retries 3 >timeout resolve 1s >timeout retry 1s > > server-template srv 4 _foo._tcp.server.name.tld ssl check resolvers dns > resolve-prefer ipv4 resolve-opts prevent-dup-ip > > After some time, when I check statistics from socket: > > echo "show resolvers" |/usr/bin/socat /var/run/haproxy.sock.stats1 stdio > > Resolvers section dns > nameserver 127.0.0.1: >sent:33508 >snd_error: 0 >valid: 33502 >update: 2 >cname: 0 >cname_error: 0 >any_err: 0 >nx: 0 >timeout: 0 >refused: 0 >other: 0 >invalid: 0 >too_big: 0 >truncated: 0 >outdated:6 > nameserver 8.8.8.8: >sent:33508 >snd_error: 0 >valid: 0 >update: 0 >cname: 0 >cname_error: 0 >any_err: 0 >nx: 0 >timeout: 0 >refused: 0 >other: 0 >invalid: 0 >too_big: 0 >truncated: 0 >outdated:33508 > nameserver 8.8.4.4: >sent:33508 >snd_error: 0 >valid: 0 >update: 0 >cname: 0 >cname_error: 0 >any_err: 0 >nx: 0 >timeout: 0 >refused: 0 >other: 0 >invalid: 0 >too_big: 0 >truncated: 0 >outdated:33508 > nameserver 64.6.64.6: >sent:33508 >snd_error: 0 >valid: 6 >update: 0 >cname: 0 >cname_error: 0 >any_err: 0 >nx: 0 >timeout: 0 >refused: 0 >other: 0 >invalid: 0 >too_big: 0 >truncated: 0 >outdated:33502 > > What I wonder about here is why are all nameservers used instead of only > the first one when there are no issues/errors with local caching server > 127.0.0.1:53. From the statistics, the 'sent:' value leaves me > impression that all DNS servers get all requests. I that true? > > /etc/resolv.conf itself: > > nameserver 127.0.0.1 > > nameserver 8.8.8.8 > nameserver 8.8.4.4 > nameserver 64.6.64.6 > > options timeout:1 attempts:2 > > I'd like to achieve situation where other nameservers would be used only > when local caching server fails. 
Don't want to manually configure only > local one in resolvers section (no failover) and would very much prefer > not to duplicate name server config in resolv.conf and HAproxy config. > > -- > Veiko > > > Hi Veiko You are correct, all servers are queried at the same time and we pick up the fastest non-error response. Other responses will be simply ignored. So if your local cache answers faster than google DNS servers, then you're already covered. Baptiste
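The behavior Baptiste describes — query every nameserver at once and keep the fastest non-error answer — can be sketched with a thread pool. The query functions below are stand-ins (not real DNS), and this is a toy model of the race, not HAProxy's actual event loop:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def resolve(nameserver_queries):
    """Fire the same query at every nameserver simultaneously and return
    the first non-error response; slower answers are simply discarded,
    which is roughly what the 'outdated' counter reflects."""
    with ThreadPoolExecutor(max_workers=len(nameserver_queries)) as pool:
        futures = [pool.submit(q) for q in nameserver_queries]
        for fut in as_completed(futures):
            try:
                return fut.result()
            except Exception:
                continue  # an error response doesn't win the race
    return None

def local_cache():  return "answer from 127.0.0.1"
def google():       time.sleep(0.2); return "answer from 8.8.8.8"
def failing():      raise RuntimeError("SERVFAIL")
```

With a fast local cache in the list, the remote servers never "win", matching the stats output above where only 127.0.0.1 accumulates valid responses.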
Re: SRV Record Priority Values
> > I also think we wanted to have "server groups" first in HAProxy before > using the priority. The idea before server groups is that a bunch of server > should be used all together until they fail (or enough have failed), and in > such case, we want to fail over to the next group, and so on (unless first > group recovers, of course). > > > This would be amazing for us! We're struggling with occasionally having > all servers "up" in a pool (but struggling), and requests not getting moved > to the next (backup) pool when they fail. Having groups we could use to > control failover more closely would be really nice for us. SRV records, or > not. :) > I have a solution for this, using a map and some http rules (because I wanted to set up a Lua free implementation): Create the map called "failover.map": # public host header list of backends by order of priority myapp1.dom.net myapp1.na.dom.net:myapp1.emea.dom.net:myapp1.apac.dom.net myapp2.dom.net myapp2.na.dom.net:myapp2.apac.dom.net Then, in your configuration: frontend fe ... # get the failover list from the map http-request set-var(txn.dst) hdr(Host),lower,word(1,:),map(failover.map) http-request capture var(txn.dst) len 128 # check each failover option, from left to right (could be done in Lua to avoid "hardcoding") acl beFound var(txn.dstbe) -m found http-request set-var(txn.dstbe) var(txn.dst),word(1,:) if ! beFound { var(txn.dst),word(1,:),nbsrv ge 1 } http-request set-var(txn.dstbe) var(txn.dst),word(2,:) if ! beFound { var(txn.dst),word(2,:),nbsrv ge 1 } http-request set-var(txn.dstbe) var(txn.dst),word(3,:) if !
beFound { var(txn.dst),word(3,:),nbsrv ge 1 } # backends for myapp1 backend myapp1.na.dom.net server w1 a.b.c.d:80 check backend myapp1.emea.dom.net server w1 a.b.c.e:80 check backend myapp1.apac.dom.net server w1 a.b.c.f:80 check # backends for myapp2 backend myapp2.na.dom.net server w1 a.b.c.g:80 check backend myapp2.apac.dom.net server w1 a.b.c.h:80 check Note that you can update the MAP through the runtime API. Hopefully this helps. Baptiste
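The chained word(N,:) rules above implement a left-to-right scan of the map value, stopping at the first backend with at least one server up. The same selection logic in Python (map contents and nbsrv values are illustrative):

```python
failover_map = {
    "myapp1.dom.net": "myapp1.na.dom.net:myapp1.emea.dom.net:myapp1.apac.dom.net",
    "myapp2.dom.net": "myapp2.na.dom.net:myapp2.apac.dom.net",
}

def pick_backend(host, nbsrv):
    """Walk the colon-separated candidates left to right (like the
    chained word(N,:) rules) and keep the first backend that still
    has at least one server up (nbsrv >= 1)."""
    for candidate in failover_map.get(host.lower(), "").split(":"):
        if nbsrv.get(candidate, 0) >= 1:
            return candidate
    return None
```

If the first-priority backend loses all its servers, traffic transparently moves to the next entry, which is the failover behavior the map encodes.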
Re: SRV Record Priority Values
> > What we can do for now, is consider "active" a priority 0 and backup, any > value greater than 0. > > I think that's perfectly acceptable for us. I'm not sure of anyone else on > the mailing list using SRV records, so I don't know who else we could ask > about that. > > Would I have all I need to begin a patch for this in src/dns.c or will it > require bringing in more pieces to accomplish the task? If it's going to be > involved, a few pointers before I dive in would be helpful. My C is rusty > (using mostly Rust now, anyways ;-) ), and my knowledge of the HAProxy > codebase is weak right now. > Hi Luke, Have a look at src/dns.c, function dns_check_dns_response. It must be done at 2 places. Just search for "weight" and do it right after. On latest commit, these are lines 590 and 660. Baptiste
Re: [PATCH] BUG/MINOR: dns: ignore trailing dot
On Thu, Feb 27, 2020 at 3:47 PM Lukas Tribus wrote: > As per issue #435 a hostname with a trailing dot confuses our DNS code, > as for a zero length DNS label we emit a null-byte. This change makes us > ignore the zero length label instead. > > Must be backported to 1.8. > --- > > As discussed in issue #435 > > --- > src/dns.c | 6 ++ > 1 file changed, 6 insertions(+) > > diff --git a/src/dns.c b/src/dns.c > index c131f08..e2fa387 100644 > --- a/src/dns.c > +++ b/src/dns.c > @@ -1208,6 +1208,12 @@ int dns_str_to_dn_label(const char *str, int > str_len, char *dn, int dn_len) > if (i == offset) > return -1; > > + /* ignore trailing dot */ > + if (i + 2 == str_len) { > + i++; > + break; > + } > + > dn[offset] = (i - offset); > offset = i+1; > continue; > -- > 2.7.4 > > Patch approved! Baptiste
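What the patch changes is the wire-format encoding of a name ending in a dot: instead of emitting a zero-length label (a stray null byte), the trailing dot is ignored. An equivalent encoder sketched in Python (not HAProxy's C code):

```python
def str_to_dn_label(name):
    """Encode a hostname as DNS wire-format labels (length byte + label),
    ignoring a trailing dot instead of emitting a zero-length label."""
    out = b""
    for label in name.rstrip(".").split("."):
        if not 0 < len(label) <= 63:
            raise ValueError("bad label length")
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"   # the root label terminates the name
```

With the fix, "example.com." and "example.com" encode to the same byte sequence, which is what a resolver expects.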
Re: SRV Record Priority Values
> > I suspect that it's more a property of the resolvers than the servers. > I mean, if you know that you're using your DNS servers this way, this > should really have the same meaning for all servers. So you shouldn't > have a per-server option to adjust this behavior but a per-resolvers > section. > > > That's even better! And probably more easily implemented. I'll wait for > Baptiste's response. > Hi There, When we first designed support for SRV records, we thought about use cases for this "priority" field. At that time, the conclusion was something like "it is not possible to map a 'backup' state onto an integer without wasting information": backup status would use priority 0 or 1 or so, but we would burn the remaining 65534 values of this field. I also think we wanted to have "server groups" first in HAProxy before using the priority. The idea behind server groups is that a bunch of servers should be used all together until they fail (or enough have failed), and in such a case, we want to fail over to the next group, and so on (unless the first group recovers, of course). Then, priority could be used to set up the groups, because HAProxy would assign all servers with the same priority to the same group. What we can do for now is consider "active" a priority of 0 and backup any value greater than 0. Baptiste
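The interim mapping proposed above (priority 0 = active, anything greater = backup) is trivial to express. This sketches the proposal under discussion, not shipped HAProxy behavior:

```python
def srv_state_from_priority(priority):
    """Proposed mapping: SRV priority 0 means an active server,
    any non-zero priority marks the server as backup."""
    return "active" if priority == 0 else "backup"

# SRV records as (priority, weight, port, target) tuples:
records = [(0, 50, 80, "a1"), (0, 50, 80, "a2"), (10, 50, 80, "b1")]
states = {t: srv_state_from_priority(p) for p, _, _, t in records}
```

The trade-off discussed in the thread is visible here: every non-zero priority collapses into the same "backup" state, discarding the ordering the remaining 65534 values could express.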
Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)
Hi, I found a couple of bugs in that part of the code. Can you please try the attached patches? (0001 is useless but I share it too just in case.) It will allow parsing of additional records for SRV queries only and when done, will silently ignore any records which are not A or AAAA. @maint team, please don't apply the patch yet, I want to test it much more before. Baptiste On Tue, Feb 18, 2020 at 2:03 PM Baptiste wrote: > Hi guys, > > Thx Tim for investigating. > I'll check the PCAP and see why such behavior happens. > > Baptiste > > > On Tue, Feb 18, 2020 at 12:09 AM Tim Düsterhus wrote: > >> Pieter, >> >> Am 09.02.20 um 15:35 schrieb PiBa-NL: >> > Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional >> > records from SRV responses)' i get seemingly proper working resolving of >> > server a name. >> > After this commit all responses are counted as 'invalid' in the socket >> > stats. >> >> I can confirm the issue with the provided configuration. The 'if (len == >> 0) {' check in line 1045 of the commit causes HAProxy to consider the >> responses 'invalid': >> >> >> https://github.com/haproxy/haproxy/commit/13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b#diff-b2ddf457bc423779995466f7d8b9d147R1045-R1048 >> >> Best regards >> Tim Düsterhus >> > From fa0b9563c40006be83c3fa1b52eeb3dbbb1b028b Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Wed, 19 Feb 2020 00:53:26 +0100 Subject: [PATCH 1/2] CLEANUP: remove obsolete comments --- src/dns.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/src/dns.c b/src/dns.c index 86147a417..9e49babf1 100644 --- a/src/dns.c +++ b/src/dns.c @@ -1030,7 +1030,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, /* now parsing additional records */ nb_saved_records = 0; - //TODO: check with Dinko for DNS poisoning for (i = 0; i < dns_p->header.arcount; i++) { if (reader >= bufend) return DNS_RESP_INVALID; @@ -1202,7 +1201,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char
*bufend, continue; tmp_record->ar_item = dns_answer_record; } - //TODO: there is a leak for now, since we don't clean up AR records LIST_ADDQ(&dns_p->ar_list, &dns_answer_record->list); } -- 2.17.1 From 96a09ab7538af2644c7247be2313fc0cc294949b Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Wed, 19 Feb 2020 01:08:51 +0100 Subject: [PATCH 2/2] BUG/MEDIUM: dns: improper parsing of aditional records --- src/dns.c | 26 ++ 1 file changed, 6 insertions(+), 20 deletions(-) diff --git a/src/dns.c b/src/dns.c index 9e49babf1..5550ab976 100644 --- a/src/dns.c +++ b/src/dns.c @@ -1028,7 +1028,9 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, /* Save the number of records we really own */ dns_p->header.ancount = nb_saved_records; - /* now parsing additional records */ + /* now parsing additional records for SRV queries only */ + if (dns_query->type != DNS_RTYPE_SRV) + goto skip_parsing_additional_records; nb_saved_records = 0; for (i = 0; i < dns_p->header.arcount; i++) { if (reader >= bufend) @@ -1043,25 +1045,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, if (len == 0) { pool_free(dns_answer_item_pool, dns_answer_record); - return DNS_RESP_INVALID; - } - - /* Check if the current record dname is valid. 
previous_dname - * points either to queried dname or last CNAME target */ - if (dns_query->type != DNS_RTYPE_SRV && memcmp(previous_dname, tmpname, len) != 0) { - pool_free(dns_answer_item_pool, dns_answer_record); - if (i == 0) { -/* First record, means a mismatch issue between - * queried dname and dname found in the first - * record */ -return DNS_RESP_INVALID; - } - else { -/* If not the first record, this means we have a - * CNAME resolution error */ -return DNS_RESP_CNAME_ERROR; - } - + continue; } memcpy(dns_answer_record->name, tmpname, len); @@ -1206,6 +1190,8 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend, } } /* for i 0 to arcount */ + skip_parsing_additional_records: + /* Save the number of records we really own */ dns_p->header.arcount = nb_saved_records; -- 2.17.1
Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)
Hi guys, Thx Tim for investigating. I'll check the PCAP and see why such behavior happens. Baptiste On Tue, Feb 18, 2020 at 12:09 AM Tim Düsterhus wrote: > Pieter, > > Am 09.02.20 um 15:35 schrieb PiBa-NL: > > Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional > > records from SRV responses)' i get seemingly proper working resolving of > > server a name. > > After this commit all responses are counted as 'invalid' in the socket > > stats. > > I can confirm the issue with the provided configuration. The 'if (len == > 0) {' check in line 1045 of the commit causes HAProxy to consider the > responses 'invalid': > > > https://github.com/haproxy/haproxy/commit/13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b#diff-b2ddf457bc423779995466f7d8b9d147R1045-R1048 > > Best regards > Tim Düsterhus >
Re: [PATCH v4] BUG/MINOR: dns: allow 63 char in hostname
On Tue, Jan 28, 2020 at 12:19 AM Miroslav Zagorac wrote: > On 01/28/2020 12:02 AM, Baptiste wrote: > > On Sun, Jan 26, 2020 at 7:53 PM William Dauchy > wrote: > > > >> hostname were limited to 62 char, which is not RFC1035 compliant; > >> - the parsing loop should stop when above max label char > >> - fix len label test where d[i] was wrongly used > >> - simplify the whole function to avoid using two extra char* variable > >> > >> this should fix github issue #387 > >> ... > > > > This patch is "approved". > > Willy, you can apply. > > > > Baptiste > > > > Hello, > > whether in this function is sufficient to check the length of the label > and its contents (uppercase and lowercase letters, numbers and hyphen) > or whether RFC1035 should be followed where it states the following: > > "The labels must follow the rules for ARPANET host names. They must > start with a letter, end with a letter or digit, and have as interior > characters only letters, digits, and hyphen. There are also some > restrictions on the length. Labels must be 63 characters or less." > > -- > Zaga > > What can change the nature of a man? > Thanks Miroslav for the feedback. I am creating a github issue with this content so we can track it. Baptiste
Re: [PATCH v3] BUG/MINOR: dns: allow 63 char in hostname
On Sun, Jan 26, 2020 at 8:15 PM Илья Шипицин wrote: > > > вс, 26 янв. 2020 г. в 23:12, William Dauchy : > >> On Sun, Jan 26, 2020 at 7:08 PM Илья Шипицин >> wrote: >> > such things are fragile. once fixed, they can silently break during >> further refactoring. >> > on other hand, such functions are good candidates to write unit tests. >> >> I considered it but to my knowledge, this is currently not possible >> with varnishtest, as we would need to mock a dns resolution, and make >> haproxy starts. I don't know whether there are other plans for haproxy >> tests. >> > > > I do not mean varnishtest here. > > varnishtest is "full stack functional test", it is too expensive. > > I mean lightweight unit testing, for example, cmocka. > > >> -- >> William >> > On a side note, I am working on building tests for the DNS in HAProxy using socat + script as a DNS server in vtest. I am at a point where dig can query my socat+script, then I'll try HAProxy, then I'll do the vtest integration. Baptiste
Re: [PATCH v4] BUG/MINOR: dns: allow 63 char in hostname
On Sun, Jan 26, 2020 at 7:53 PM William Dauchy wrote: > hostname were limited to 62 char, which is not RFC1035 compliant; > - the parsing loop should stop when above max label char > - fix len label test where d[i] was wrongly used > - simplify the whole function to avoid using two extra char* variable > > this should fix github issue #387 > > Signed-off-by: William Dauchy > --- > src/dns.c | 31 +-- > 1 file changed, 13 insertions(+), 18 deletions(-) > > diff --git a/src/dns.c b/src/dns.c > index eefd8d0dc..28d47d26c 100644 > --- a/src/dns.c > +++ b/src/dns.c > @@ -1470,7 +1470,6 @@ int dns_str_to_dn_label(const char *str, int > str_len, char *dn, int dn_len) > */ > int dns_hostname_validation(const char *string, char **err) > { > - const char *c, *d; > int i; > > if (strlen(string) > DNS_MAX_NAME_SIZE) { > @@ -1479,36 +1478,32 @@ int dns_hostname_validation(const char *string, > char **err) > return 0; > } > > - c = string; > - while (*c) { > - d = c; > - > + while (*string) { > i = 0; > - while (*d != '.' && *d && i <= DNS_MAX_LABEL_SIZE) { > - i++; > - if (!((*d == '-') || (*d == '_') || > - ((*d >= 'a') && (*d <= 'z')) || > - ((*d >= 'A') && (*d <= 'Z')) || > - ((*d >= '0') && (*d <= '9')))) { > + while (*string && *string != '.' && i < > DNS_MAX_LABEL_SIZE) { > + if (!(*string == '-' || *string == '_' || > + (*string >= 'a' && *string <= 'z') || > + (*string >= 'A' && *string <= 'Z') || > + (*string >= '0' && *string <= '9'))) { > if (err) > *err = DNS_INVALID_CHARACTER; > return 0; > } > - d++; > + i++; > + string++; > } > > - if ((i >= DNS_MAX_LABEL_SIZE) && (d[i] != '.')) { > + if (!(*string)) > + break; > + > + if (*string != '.' && i >= DNS_MAX_LABEL_SIZE) { > if (err) > *err = DNS_LABEL_TOO_LONG; > return 0; > } > > - if (*d == '\0') > - goto out; > - > - c = ++d; > + string++; > } > - out: > return 1; > } > > -- > 2.24.1 > > This patch is "approved". Willy, you can apply. Baptiste
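For reference, the patched validation logic expressed in Python: whole name capped at 255 chars, each label at most 63 chars (the off-by-one this patch fixes) drawn from letters, digits, '-' and '_'. This is an illustrative equivalent, not the C function itself:

```python
import string

ALLOWED = set(string.ascii_letters + string.digits + "-_")
MAX_NAME, MAX_LABEL = 255, 63   # DNS_MAX_NAME_SIZE / DNS_MAX_LABEL_SIZE

def dns_hostname_validation(name):
    """Accept a 63-char label (RFC1035 maximum), reject 64 and above,
    and reject any character outside the allowed set."""
    if len(name) > MAX_NAME:
        return False
    for label in name.split("."):
        if len(label) > MAX_LABEL:
            return False
        if any(c not in ALLOWED for c in label):
            return False
    return True
```

Before the patch, a perfectly legal 63-character label was rejected as too long.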
Re: "check-sni" doesn't seems to have effect on "tcp-check connect ssl"
On Mon, Jan 27, 2020 at 7:50 PM Nelson Branco wrote: > Do anyone know if “check-sni” should have effect as well on “tcp-check > connect ssl” at version “HAProxy version 1.8.8-1ubuntu0.9, released > 2019/12/02”? > Hi, What do you mean by "effect" ? Baptiste
Re: [PATCH] MEDIUM: dns: support for Additional section
Oops, I did fix all those points before sending the final version and I forgot to clean up the comments. Will send a patch to clean them up. Baptiste
[PATCH] MEDIUM: dns: support for Additional section
Hi there, For those using DNS service discovery through SRV records, you might be aware that HAProxy is quite verbose with your DNS server: it does one SRV query + 1 A/AAAA query per server found in the SRV response. This patch aims at improving this behavior by first using the Additional records if available and relevant. If none are found, the previous behavior will apply (on a per server basis). This is the behavior defined in RFC 2782 for DNS SRV records. Baptiste From a18ab5880ee04b75234eb65ca8a8be4a425d5ba6 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Fri, 7 Jun 2019 09:40:55 +0200 Subject: [PATCH] MEDIUM: dns: use Additional records from SRV responses Most DNS servers provide A/AAAA records in the Additional section of a response, which correspond to the SRV records from the Answer section: ;; QUESTION SECTION: ;_http._tcp.be1.domain.tld. IN SRV ;; ANSWER SECTION: _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A1.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A8.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A5.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A6.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A4.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A3.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A2.domain.tld. _http._tcp.be1.domain.tld. 3600 IN SRV 5 500 80 A7.domain.tld. ;; ADDITIONAL SECTION: A1.domain.tld. 3600IN A 192.168.0.1 A8.domain.tld. 3600IN A 192.168.0.8 A5.domain.tld. 3600IN A 192.168.0.5 A6.domain.tld. 3600IN A 192.168.0.6 A4.domain.tld. 3600IN A 192.168.0.4 A3.domain.tld. 3600IN A 192.168.0.3 A2.domain.tld. 3600IN A 192.168.0.2 A7.domain.tld. 3600IN A 192.168.0.7 SRV record support was introduced in HAProxy 1.8 and the first design did not take into account the records from the Additional section. Instead, a new resolution is associated to each server with its relevant FQDN. This behavior generates a lot of DNS requests (1 SRV + 1 A/AAAA per server associated).
This patch aims at fixing this by: - when a DNS response is validated, we associate A/AAAA records to relevant SRV ones - set a flag on associated servers to prevent them from running a DNS resolution for said FQDN - update server IP address with information found in the Additional section If no relevant record can be found in the Additional section, then HAProxy will fall back to running a dedicated resolution for this server, as it used to do. This behavior is the one described in RFC 2782. --- include/types/dns.h| 4 +- include/types/server.h | 1 + src/dns.c | 216 + src/server.c | 9 ++ 4 files changed, 229 insertions(+), 1 deletion(-) diff --git a/include/types/dns.h b/include/types/dns.h index 8347e93ab..7e592b285 100644 --- a/include/types/dns.h +++ b/include/types/dns.h @@ -151,6 +151,7 @@ struct dns_answer_item { struct sockaddr address; /* IPv4 or IPv6, network format */ char target[DNS_MAX_NAME_SIZE+1]; /* Response data: SRV or CNAME type target */ time_t last_seen; /* When was the answer was last seen */ + struct dns_answer_item *ar_item; /* pointer to a RRset from the additionnal section, if exists */ struct list list; }; @@ -158,7 +159,8 @@ struct dns_response_packet { struct dns_header header; struct list query_list; struct list answer_list; - /* authority and additional_information ignored for now */ + struct list ar_list; /* additional records */ + /* authority ignored for now */ }; /* Resolvers section and parameters.
It is linked to the name servers diff --git a/include/types/server.h b/include/types/server.h index 842e033ad..598dfe6d8 100644 --- a/include/types/server.h +++ b/include/types/server.h @@ -142,6 +142,7 @@ enum srv_initaddr { #define SRV_F_COOKIESET0x0100/* this server has a cookie configured, so don't generate dynamic cookies */ #define SRV_F_FASTOPEN 0x0200/* Use TCP Fast Open to connect to server */ #define SRV_F_SOCKS4_PROXY 0x0400/* this server uses SOCKS4 proxy */ +#define SRV_F_NO_RESOLUTION 0x0800 /* disable runtime DNS resolution on this server */ /* configured server options for send-proxy (server->pp_opts) */ #define SRV_PP_V1 0x0001 /* proxy protocol version 1 */ diff --git a/src/dns.c b/src/dns.c index 5ecb46905..eefd8d0dc 100644 --- a/src/dns.c +++ b/src/dns.c @@ -516,6 +516,14 @@ static void dns_check_dns_response(struct dns_resolution *res) struct server *srv; struct dns_srvrq *srvrq; + /* clean up obsolete Ad
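The strategy in this patch — trust the Additional section where possible, and only fall back to per-server resolutions for targets it does not cover — boils down to the following (record shapes heavily simplified, names mine):

```python
def attach_additional(srv_targets, additional):
    """For each SRV target, take the A/AAAA record already present in
    the Additional section; only targets without one need their own
    follow-up resolution (the pre-patch behavior for every target)."""
    resolved, need_query = {}, []
    for target in srv_targets:
        if target in additional:
            resolved[target] = additional[target]   # no extra DNS query
        else:
            need_query.append(target)               # fall back per RFC 2782
    return resolved, need_query

srv = ["A1.domain.tld.", "A2.domain.tld.", "A9.domain.tld."]
extra = {"A1.domain.tld.": "192.168.0.1", "A2.domain.tld.": "192.168.0.2"}
resolved, pending = attach_additional(srv, extra)
```

With a complete Additional section, the follow-up query list is empty and the whole backend is populated from a single SRV response.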
[PATCH] MINOR: http_act: enforce capture rule id checking in frontends only
Hi there, There is an issue with the configuration parser for http-request capture rules based on id and placed in a backend. (I introduced this "bug" in e9544935e86278dfa3d49fb4b97b860774730625.) The config parser mistakenly checks the capture slot id in the backend structure (while it's only available in the frontend one). This patch enforces this check on frontends only and also updates the documentation accordingly, warning admins to configure the relevant capture slots in the frontends pointing to such a backend. If the slot ID does not exist at runtime, it's not a big deal: the rule will simply be ignored. Baptiste From c8192107c7055e36a6b6ab9b262b448a52346776 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Thu, 16 Jan 2020 14:34:22 +0100 Subject: [PATCH] MINOR: http_act: don't check capture id in backend A wrong behavior was introduced by e9544935e86278dfa3d49fb4b97b860774730625, preventing any configuration where a capture slot id is used in a backend from loading. I.e., the configuration below does not parse: frontend f bind *:80 declare capture request len 32 default_backend webserver backend webserver http-request capture req.hdr(Host) id 1 The point is that such type of configuration is valid and should run. This patch enforces the check of capture slot id only if the action rule is configured in a frontend. The point is that at configuration parsing time, it is impossible to check which frontend could point to this backend (furthermore if we use dynamic backend name resolution at runtime). The documentation has been updated to warn the user to ensure that relevant frontends have required declaration when such rule has to be used in a backend. If no capture slot can be found, then the action will just not be executed and HAProxy will process the next one in the list, as expected.
---
 doc/configuration.txt | 14 +++++++++-----
 src/http_act.c        |  4 +++-
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9ac898517..48248bc95 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -4316,9 +4316,11 @@ http-request capture <sample> [ len <length> | id <id> ]
   If the keyword "id" is used instead of "len", the action tries to store the
   captured string in a previously declared capture slot. This is useful to run
   captures in backends. The slot id can be declared by a previous directive
-  "http-request capture" or with the "declare capture" keyword. If the slot
-  doesn't exist, then HAProxy fails parsing the configuration to prevent
-  unexpected behavior at run time.
+  "http-request capture" or with the "declare capture" keyword.
+  When using this action in a backend, double check that the relevant
+  frontend(s) have the required capture slots otherwise, this rule will be
+  ignored at run time. This can't be detected at configuration parsing time
+  due to HAProxy's ability to dynamically resolve backend name at runtime.
 
 http-request del-acl(<file-name>) [ { if | unless } <condition> ]
 
@@ -4997,8 +4999,10 @@ http-response capture <sample> id <id> [ { if | unless } <condition> ]
   This is useful to run captures in backends. The slot id can be declared by a
   previous directive "http-response capture" or with the "declare capture"
   keyword.
-  If the slot doesn't exist, then HAProxy fails parsing the configuration
-  to prevent unexpected behavior at run time.
+  When using this action in a backend, double check that the relevant
+  frontend(s) have the required capture slots otherwise, this rule will be
+  ignored at run time. This can't be detected at configuration parsing time
+  due to HAProxy's ability to dynamically resolve backend name at runtime.
 http-response del-acl(<file-name>) [ { if | unless } <condition> ]

diff --git a/src/http_act.c b/src/http_act.c
index c8d9220fe..41f9a2e7e 100644
--- a/src/http_act.c
+++ b/src/http_act.c
@@ -424,7 +424,9 @@ static int check_http_req_capture(struct act_rule *rule, struct proxy *px, char
 	if (rule->action_ptr != http_action_req_capture_by_id)
 		return 1;
 
-	if (rule->arg.capid.idx >= px->nb_req_cap) {
+	/* capture slots can only be declared in frontends, so we can't check their
+	 * existence in backends at configuration parsing step */
+	if (px->cap & PR_CAP_FE && rule->arg.capid.idx >= px->nb_req_cap) {
 		memprintf(err, "unable to find capture id '%d' referenced by http-request capture rule",
 			  rule->arg.capid.idx);
 		return 0;
-- 
2.17.1
Re: [RFC PATCH] MINOR: debug: allow debug converter in default build
On Mon, Dec 16, 2019 at 9:22 AM Willy Tarreau wrote:

> Hi Lukas,
>
> On Sun, Dec 15, 2019 at 05:23:38PM +0100, Lukas Tribus wrote:
> > Currently this debug converter is only enabled when DEBUG_EXPR is
> > defined at build time (which is different than other debug build
> > options and unclear from the documentation).
> >
> > This moves the patch to the default build, so everyone can use it.
>
> I was thinking about repurposing this converter to use the ring buffer
> instead, so that it can be used even with live traffic and let one
> consult some history from the CLI. We could then have a few info
> such as the request ID, and config file+line number where the
> converter is present, followed by the pattern.
>
> We could imagine adding one (or multiple) optional arguments to force
> the output to another destination (e.g. stdout, stderr, other ring),
> and maybe another one for the format or to prepend a prefix. Then most
> likely we'd use the ring buffer by default as it's the least impacting
> one and the only self-sustaining output. And probably that we could
> switch to stderr by default in backports (or make it mandatory to
> force the destination).
>
> What do you think ?
>
> Cheers,
> Willy

Hi,

My 2 cents: I personally use this converter a lot, so I'd be more than happy to have it available in the default build! I think Willy's idea to route its output wherever we want is great too for production purposes. Can we also use an env variable, so we can easily switch from stdout to the ring buffer without updating the config file?

Baptiste
Re: Haproxy nbthreads + multi-threading lua?
On Mon, Dec 2, 2019 at 5:15 PM Dave Chiluk wrote: > Since 2.0 nbproc and nbthreads are now mutually exclusive, are there > any ways to make lua multi-threaded? > > One of our proxy's makes heavy use of lua scripting. I'm not sure if > this is still the case, but in earlier versions of HAProxy lua was > single threaded per process. Because of this we were running that > proxy with nbproc=4, and nbthread=4. This allowed us to scale without > being limited by lua. > > Has lua single-threaded-ness now been solved? Are there other options > I should be aware of related to that? What's the preferred way around > this? > > Thanks, > Dave. > > Hi Dave, (I think we met at kubecon) What's your use case for Lua exactly? Can't it be replaced by SPOE at some point? (which is compatible with nbthread and can run heavy processing outside of the HAProxy process)? You can answer me privately if you don't want such info to be public. Baptiste
Re: DNS resolution every second - v2.0.10
On Thu, Nov 28, 2019 at 2:17 PM Julien Pivotto wrote: > On 28 Nov 11:02, Baptiste wrote: > > On Thu, Nov 28, 2019 at 10:56 AM Julien Pivotto > > wrote: > > > > > On 28 Nov 10:38, Baptiste wrote: > > > > 'hold valid' still prevents HAProxy from changing the status of the > > > server > > > > in current Valid status to an other status for that period of time. > > > > Imagine your server is UP, DNS is valid, then your server returns NX > for > > > 2 > > > > minutes, then the status of the server won't change. If NX is > returned > > > for > > > > more than 5 minutes (as stated in your config), then it will change. > > > > > > > > Baptiste > > > > > > That is really great. Does it mean that with > > > > > > hold valid 1h > > > timeout resolve 30s > > > > > > we can have: > > > 1h of DNS downtime without impact on haproxy > > > > > > but if DNS is up, any change will be picked after 30 seconds? > > > > > > > > yep exactly! > > Previous behavior was wrong (using hold valid as timeout resolve). > > hold > Defines during which the last name resolution should be kept > based on last resolution > > So ... I guess the documentation is not clear here. > Would you mind clarifying it? I read it as: > > host valid 300s > > define a period of 300s during which the last name resolution should be > kept based > on last valid resolution > > I understand: if we get a valid resolution, we keep the last name > resolution for 300s. > > > > > > Baptiste > > -- > (o-Julien Pivotto > //\Open-Source Consultant > V_/_ Inuits - https://www.inuits.eu Actually, it's the status pointed to by "hold" of the latest resolution which is kept. I'll update the documentation to make it clearer. Baptiste
Re: DNS resolution every second - v2.0.10
On Thu, Nov 28, 2019 at 10:56 AM Julien Pivotto wrote: > On 28 Nov 10:38, Baptiste wrote: > > 'hold valid' still prevents HAProxy from changing the status of the > server > > in current Valid status to an other status for that period of time. > > Imagine your server is UP, DNS is valid, then your server returns NX for > 2 > > minutes, then the status of the server won't change. If NX is returned > for > > more than 5 minutes (as stated in your config), then it will change. > > > > Baptiste > > That is really great. Does it mean that with > > hold valid 1h > timeout resolve 30s > > we can have: > 1h of DNS downtime without impact on haproxy > > but if DNS is up, any change will be picked after 30 seconds? > > yep exactly! Previous behavior was wrong (using hold valid as timeout resolve). Baptiste
Re: DNS resolution every second - v2.0.10
'hold valid' still prevents HAProxy from changing the status of a server currently in Valid status to another status for that period of time. Imagine your server is UP and its DNS resolution is valid, then your server returns NX for 2 minutes: the status of the server won't change. If NX is returned for more than 5 minutes (as stated in your config), then it will change.

Baptiste
Re: DNS resolution every second - v2.0.10
@Willy, since 1.8 (I think), the DNS task is autonomous and no longer triggered by the checks. Second, HAProxy never follows DNS TTLs. Third, I "fixed" a bug in 2.0.10 which triggered this change of behavior: "timeout resolve", which is supposed to be the interval between 2 DNS resolutions, was not applied when the response was valid (f50e1ac4442be41ed8b9b7372310d1d068b85b33). So to recover the previous behavior, just increase this value, which is 1s by default.

Baptiste
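To illustrate how the two timers interact, a minimal resolvers sketch could look like the following (the section name, nameserver address and values are examples, not taken from the reporter's configuration):

```
resolvers mydns
    nameserver dns1 192.168.0.1:53
    # interval between two DNS resolutions while answers keep coming back valid
    timeout resolve 30s
    # how long the last valid status is kept if the nameserver starts
    # returning errors (NX, timeout, ...) before the server status may change
    hold valid 1h
```

With such a section referenced from a server line ("resolvers mydns"), a DNS change is picked up within 30 seconds, while up to 1 hour of DNS downtime has no impact on the server status.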
Re: [PATCH] CLEANUP: dns: resolution can never be null
I am personally all confused by this report :)

Furthermore, as mentioned, the test on 'eb' was already done. If the fix is to remove the useless test on 'res', then William's patch is right. (Thx for handling it, William)

Baptiste
Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"
Hi there,

Since a short-term reliable solution can't be found, we can apply this patch as a workaround.

Baptiste
Re: Firewall and Haproxy
On Sun, Nov 17, 2019 at 2:41 PM TomK wrote:

> Hey All,
>
> When adding hosts to a F/W behind a VIP (keepalived for example) to
> which Haproxy is bound, should just the VIP be added to the F/W or would
> all member hosts behind Haproxy need to be added as well?
>
> If all member hosts behind haproxy need to be added, why?
>
> Only reason I can think of adding individual host members is for
> troubleshooting purposes. Other then that, can't think of a valid
> reason why each member host would connect separately.
>
> --
> Thx,
> TK.

Hi,

You should just open traffic to the ports configured on the VIP in HAProxy. The member hosts behind HAProxy don't need to be opened to clients, since HAProxy establishes its own connections to them.

Baptiste
Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"
On Mon, Nov 18, 2019 at 2:37 PM Daniel Corbett wrote: > Hello, > > > On 11/18/19 7:05 AM, Willy Tarreau wrote: > > On Mon, Nov 18, 2019 at 12:06:08PM +0100, Baptiste wrote: > >> When we first designed this feature, we did it with this in mind "if > admins > >> can update a SRV record in a DNS server, they can adjust the weight > >> accordingly". > >> > >> I understand the need, but the response is way too short. It's a global > >> question of precedence in HAProxy from my point of view. > >> I am scared that if we start to adjust things this way, we'll end up > with > >> 1000s of flags overlapping each others and adding complexity on top of > >> complexity. > >> > >> The real question is "what prevents an admin from updating a DNS > record?" > >> Or why they don't failover to A/ records only? > > I must admit I understand a valid use case : have the DNS set up to > advertise > > the list of servers, and let the agent adjust the servers' health based > on > > their load, the fact that they're running backup or OS updates etc. Thus > in > > my opinion, the *use case* makes sense. What I'm unsure about is the > proper > > way to do it, because as you mention, it's more a matter of overall > > consistency between all sources. We could very well instead have a > per-backend > > setting indicating what source to fetch the weight from (agent, dns, > health, > > other?), where to fetch the maxconn from etc. Some may even want to > combine > > these (average, multiply, ...). I'm fine if you prefer to postpone it. > If in > > the end we decide to merge it as-is we could also backport it, and if we > > decide to address it differently, at least we won't have to maintain one > > extra short-lived flag. > > > > Thanks, > > Willy > > > I'm open to ideas on implementation method, definitely not stuck on this > method :)To be honest I was trying to find some "good first issues" > to tackle. 
> > GitHub request here: https://github.com/haproxy/haproxy/issues/48 > > > Thanks for taking the time to review and provide your input guys! > > > Thanks, > > -- Daniel > > > I replied back on the github issue to re-start conversation on the topic. Based on the answer, I'll give you my go or not :) If the go happens after the release, we can still backport this quick change if this is really useful to people. Baptiste
Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"
On Mon, Nov 18, 2019 at 11:57 AM Willy Tarreau wrote: > Hi Daniel, > > On Sun, Nov 17, 2019 at 10:06:32AM -0500, Daniel Corbett wrote: > > Hello, > > > > > > I realize that new features are not preferred at the moment but I think > this > > might be a usability issue and hopefully it can be considered for > 2.1-dev, > > however, it's perfectly fine if it's decided to wait till next. > > > > It was noted in GitHub issue #48 that there are times when a > configuration > > may use the server-template directive with SRV records and simultaneously > > want to control weights separately using an agent-check or through the > > runtime api. This patch adds a new option "ignore-weight" to the > > "resolve-opts" directive. > > > > When specified, any weight indicated within an SRV record will be > ignored. > > This is for both initial resolution and ongoing resolution. > > In my opinion it's small enough to be mergeable. However I have no opinion > whether this is the best way to handle it or not, so I'll leave it to > others > to judge. For example it could be imagined that some would want to keep the > weight zero as special to take a server out of the farm maybe (though this > could complicate the logic). If others agree with the patch, I'm fine with > getting it merged even this late given that it doesn't seem to have side > effects. > > > I wanted to include VTC test with this, however, I could not think of an > > appropriate way to do it as I suspect we may need a "fake dns server" > > similar to what was made for syslog. > > vtest is really made to test proxies, i.e. have a client on one side, a > server on the other one, and synchronize them to make sure that what is > observed precisely corresponds to what is tested, which is the hardest > part to test on a proxy. It's really not suitable to run other types of > tests at the moment. 
Maybe we could imagine improving it to implement a > DNS responder, but even by doing this we'd start to add some entropy > (timing between checks causing ordering issues) making the tests harder > to reproduce. > > If we want to implement some testability for anything related to DNS, > we'll need to specify in very fine details how we want it to work I > guess. > > Thanks, > Willy > > Hi, When we first designed this feature, we did it with this in mind "if admins can update a SRV record in a DNS server, they can adjust the weight accordingly". I understand the need, but the response is way too short. It's a global question of precedence in HAProxy from my point of view. I am scared that if we start to adjust things this way, we'll end up with 1000s of flags overlapping each others and adding complexity on top of complexity. The real question is "what prevents an admin from updating a DNS record?" Or why they don't failover to A/ records only? Baptiste
Re: [PR/FEATURE] support for virtual hosts / Host header per server
> > What do others think ? Igor maybe you have a particular opinion on
> > this one ? Baptiste, anything from the dynamic use cases you're aware
> > of ?

Hi Willy,

I went through the backlog, and yes, the use case around "external LB to multiple kubernetes clusters" is real (it's even a common use case). Now, I may be missing some information to fully understand the current limitations: such a rewrite should happen at the Ingress Controller layer, from my point of view.

Something that confuses me in the patch is that we use the configured server name (and not the fqdn), so all servers must have a different name. This prevents us from sending the same name to different servers in the backend. I don't know if that is a valid case for later or not.

About the impact on dynamic changes of the servers in HAProxy: I would say yes, we want this to be dynamic, but this implies the ability to change the server name. An easy solution would be to enforce using the fqdn, since it is the parameter which makes more sense here: the same fqdn can be used for multiple servers, and the fqdn can be updated using the CLI (and it can be set even when DNS resolution is not used).

Baptiste
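As a side note on the CLI part, a server's fqdn can already be updated at runtime through the stats socket with "set server". A hypothetical session could look like the following (the backend/server names, fqdn and socket path are examples):

```
$ echo "set server be_app/srv1 fqdn app.cluster2.example.com" | \
      socat stdio /var/run/haproxy.sock
```

If I remember correctly, this requires the server to have runtime DNS resolution enabled (a "resolvers" parameter on its server line), otherwise the command is refused.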
PATCH: DNS: enforce resolve timeout for all cases
Hi,

Please find in attachment a new patch related to github issue #345. Basically, when the resolution status was VALID, we ignored the "timeout resolve", which goes against the documentation... And as stated in the github issue, there were some impacts: an entire backend could go down when the nameserver is not very reliable...

Baptiste

From d278cff87aa9037f1d05216ea14e2bc8bab5cd2a Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Thu, 7 Nov 2019 11:02:18 +0100
Subject: [PATCH] BUG: dns: timeout resolve not applied for valid resolutions

Documentation states that the interval between 2 DNS resolutions is driven by the "timeout resolve" directive. From a code point of view, this was applied unless the latest status of the resolution was VALID; in such a case, "hold valid" was enforced instead. This is a bug, because "hold" timers are not there to drive how often we want to trigger a DNS resolution, but rather how long we want to keep an information when the status of the resolution itself has changed. This avoids flapping and prevents shutting down an entire backend when a DNS server is not answering.

This issue was reported by hamshiva in github issue #345.

Backport status: 1.8
---
 src/dns.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 15d40a1..78349a2 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -150,10 +150,7 @@ static inline uint16_t dns_rnd16(void)
 
 static inline int dns_resolution_timeout(struct dns_resolution *res)
 {
-	switch (res->status) {
-	case RSLV_STATUS_VALID: return res->resolvers->hold.valid;
-	default:                return res->resolvers->timeout.resolve;
-	}
+	return res->resolvers->timeout.resolve;
 }
 
 /* Updates a resolvers' task timeout for next wake up and queue it */
-- 
2.7.4
Re: [PATCH] bugfix to make do-resolve to use DNS cache
Hi Willy, Please find the patch updated. I also cleared a '{' '}' that I added on a if condition. This would make the code "cleaner" but should not be part of this patch at all. The new patch is in attachment. Sorry again for the mess. Baptiste On Wed, Nov 6, 2019 at 2:35 PM Baptiste wrote: > Hi Willy, Jarno, > > Sorry, I did forgot those 2 printf that were here for debugging purpose > only. > I can resend the patch tonight. > > Baptiste > > On Wed, Nov 6, 2019 at 7:43 AM Willy Tarreau wrote: > >> Hi Baptiste, >> >> thanks for the fix, but before taking it, are you really sure it's >> the version you wanted to send ? There are a couple of debugging >> printf() left so I could remove them by hand but maybe you intended >> to send a different patch, thus I'd rather let you double-check. >> >> thanks, >> Willy >> >> On Tue, Nov 05, 2019 at 10:04:30AM +0100, Baptiste wrote: >> > diff --git a/src/action.c b/src/action.c >> > index 7684202..36eedc8 100644 >> > --- a/src/action.c >> > +++ b/src/action.c >> > @@ -73,6 +73,7 @@ int check_trk_action(struct act_rule *rule, struct >> proxy *px, char **err) >> > int act_resolution_cb(struct dns_requester *requester, struct >> dns_nameserver *nameserver) >> > { >> > struct stream *stream; >> > +printf("%s %d\n", __FUNCTION__, __LINE__); >> > >> > if (requester->resolution == NULL) >> > return 0; >> > @@ -89,6 +90,7 @@ int act_resolution_cb(struct dns_requester >> *requester, struct dns_nameserver *na >> > int act_resolution_error_cb(struct dns_requester *requester, int >> error_code) >> > { >> > struct stream *stream; >> > +printf("%s %d\n", __FUNCTION__, __LINE__); >> > >> > if (requester->resolution == NULL) >> > return 0; >> > diff --git a/src/dns.c b/src/dns.c >> > index 15d40a1..d5bf449 100644 >> > --- a/src/dns.c >> > +++ b/src/dns.c >> > @@ -363,8 +363,9 @@ void dns_trigger_resolution(struct dns_requester >> *req) >> >* valid */ >> > exp = tick_add(res->last_resolution, resolvers->hold.valid); >> > if (resolvers->t && 
(res->status != RSLV_STATUS_VALID || >> > - !tick_isset(res->last_resolution) || tick_is_expired(exp, >> now_ms))) >> > + !tick_isset(res->last_resolution) || tick_is_expired(exp, >> now_ms))) { >> > task_wakeup(resolvers->t, TASK_WOKEN_OTHER); >> > + } >> > } >> > >> > >> > @@ -2150,8 +2151,13 @@ enum act_return dns_action_do_resolve(struct >> act_rule *rule, struct proxy *px, >> > struct dns_resolution *resolution; >> > struct sample *smp; >> > char *fqdn; >> > + struct dns_requester *req; >> > + struct dns_resolvers *resolvers; >> > + struct dns_resolution *res; >> > + int exp; >> > >> > /* we have a response to our DNS resolution */ >> > + use_cache: >> > if (s->dns_ctx.dns_requester && >> s->dns_ctx.dns_requester->resolution != NULL) { >> > resolution = s->dns_ctx.dns_requester->resolution; >> > if (resolution->step == RSLV_STEP_RUNNING) { >> > @@ -2211,6 +2217,22 @@ enum act_return dns_action_do_resolve(struct >> act_rule *rule, struct proxy *px, >> > >> > s->dns_ctx.parent = rule; >> > dns_link_resolution(s, OBJ_TYPE_STREAM, 0); >> > + >> > + /* Check if there is a fresh enough response in the cache of our >> associated resolution */ >> > + req = s->dns_ctx.dns_requester; >> > + if (!req || !req->resolution) { >> > + dns_trigger_resolution(s->dns_ctx.dns_requester); >> > + return ACT_RET_YIELD; >> > + } >> > + res = req->resolution; >> > + resolvers = res->resolvers; >> > + >> > + exp = tick_add(res->last_resolution, resolvers->hold.valid); >> > + if (resolvers->t && res->status == RSLV_STATUS_VALID && >> tick_isset(res->last_resolution) >> > +&& !tick_is_expired(exp, now_ms)) { >> > + goto use_cache; >> > + } >> > + >> > dns_trigger_resolution(s->dns_ctx.dns_requester)
Re: [PATCH] bugfix to make do-resolve to use DNS cache
Hi Willy, Jarno, Sorry, I did forgot those 2 printf that were here for debugging purpose only. I can resend the patch tonight. Baptiste On Wed, Nov 6, 2019 at 7:43 AM Willy Tarreau wrote: > Hi Baptiste, > > thanks for the fix, but before taking it, are you really sure it's > the version you wanted to send ? There are a couple of debugging > printf() left so I could remove them by hand but maybe you intended > to send a different patch, thus I'd rather let you double-check. > > thanks, > Willy > > On Tue, Nov 05, 2019 at 10:04:30AM +0100, Baptiste wrote: > > diff --git a/src/action.c b/src/action.c > > index 7684202..36eedc8 100644 > > --- a/src/action.c > > +++ b/src/action.c > > @@ -73,6 +73,7 @@ int check_trk_action(struct act_rule *rule, struct > proxy *px, char **err) > > int act_resolution_cb(struct dns_requester *requester, struct > dns_nameserver *nameserver) > > { > > struct stream *stream; > > +printf("%s %d\n", __FUNCTION__, __LINE__); > > > > if (requester->resolution == NULL) > > return 0; > > @@ -89,6 +90,7 @@ int act_resolution_cb(struct dns_requester *requester, > struct dns_nameserver *na > > int act_resolution_error_cb(struct dns_requester *requester, int > error_code) > > { > > struct stream *stream; > > +printf("%s %d\n", __FUNCTION__, __LINE__); > > > > if (requester->resolution == NULL) > > return 0; > > diff --git a/src/dns.c b/src/dns.c > > index 15d40a1..d5bf449 100644 > > --- a/src/dns.c > > +++ b/src/dns.c > > @@ -363,8 +363,9 @@ void dns_trigger_resolution(struct dns_requester > *req) > >* valid */ > > exp = tick_add(res->last_resolution, resolvers->hold.valid); > > if (resolvers->t && (res->status != RSLV_STATUS_VALID || > > - !tick_isset(res->last_resolution) || tick_is_expired(exp, > now_ms))) > > + !tick_isset(res->last_resolution) || tick_is_expired(exp, > now_ms))) { > > task_wakeup(resolvers->t, TASK_WOKEN_OTHER); > > + } > > } > > > > > > @@ -2150,8 +2151,13 @@ enum act_return dns_action_do_resolve(struct > act_rule *rule, struct 
proxy *px, > > struct dns_resolution *resolution; > > struct sample *smp; > > char *fqdn; > > + struct dns_requester *req; > > + struct dns_resolvers *resolvers; > > + struct dns_resolution *res; > > + int exp; > > > > /* we have a response to our DNS resolution */ > > + use_cache: > > if (s->dns_ctx.dns_requester && > s->dns_ctx.dns_requester->resolution != NULL) { > > resolution = s->dns_ctx.dns_requester->resolution; > > if (resolution->step == RSLV_STEP_RUNNING) { > > @@ -2211,6 +2217,22 @@ enum act_return dns_action_do_resolve(struct > act_rule *rule, struct proxy *px, > > > > s->dns_ctx.parent = rule; > > dns_link_resolution(s, OBJ_TYPE_STREAM, 0); > > + > > + /* Check if there is a fresh enough response in the cache of our > associated resolution */ > > + req = s->dns_ctx.dns_requester; > > + if (!req || !req->resolution) { > > + dns_trigger_resolution(s->dns_ctx.dns_requester); > > + return ACT_RET_YIELD; > > + } > > + res = req->resolution; > > + resolvers = res->resolvers; > > + > > + exp = tick_add(res->last_resolution, resolvers->hold.valid); > > + if (resolvers->t && res->status == RSLV_STATUS_VALID && > tick_isset(res->last_resolution) > > +&& !tick_is_expired(exp, now_ms)) { > > + goto use_cache; > > + } > > + > > dns_trigger_resolution(s->dns_ctx.dns_requester); > > return ACT_RET_YIELD; > > } > > -- > > 2.7.4 > > > >
[PATCH] bugfix to make do-resolve to use DNS cache
Hi there,

David Birdsong reported a bug last week about the http do-resolve action not using the DNS cache. The patch in attachment fixes this issue. There is no github issue associated to this bug. Backport status is up to 2.0.

Baptiste

From 74e1328ef08de6740c30b5b5989d1413bb904742 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann
Date: Wed, 30 Oct 2019 16:06:53 +0100
Subject: [PATCH] BUG/MINOR: action: do-resolve now uses cached response

As reported by David Birdsong on the ML, the HTTP action do-resolve does not use the DNS cache. Actually, the action is "registered" to the resolution for said name to be resolved and waits until another requester triggers it. Once the resolution is finished, the action is updated with the result. To trigger this, you must have a server with runtime DNS resolution enabled running a do-resolve action with the same fqdn, AND they must use the same resolvers section. This patch fixes this behavior by ensuring the resolution associated to the action has a valid answer which is not considered as expired. If those conditions are met, then we can use it (it's the "cache"). 
Backport status: 2.0
---
 src/action.c |  2 ++
 src/dns.c    | 24 +++++++++++++++++++++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/src/action.c b/src/action.c
index 7684202..36eedc8 100644
--- a/src/action.c
+++ b/src/action.c
@@ -73,6 +73,7 @@ int check_trk_action(struct act_rule *rule, struct proxy *px, char **err)
 int act_resolution_cb(struct dns_requester *requester, struct dns_nameserver *nameserver)
 {
 	struct stream *stream;
+printf("%s %d\n", __FUNCTION__, __LINE__);
 
 	if (requester->resolution == NULL)
 		return 0;
@@ -89,6 +90,7 @@ int act_resolution_cb(struct dns_requester *requester, struct dns_nameserver *na
 int act_resolution_error_cb(struct dns_requester *requester, int error_code)
 {
 	struct stream *stream;
+printf("%s %d\n", __FUNCTION__, __LINE__);
 
 	if (requester->resolution == NULL)
 		return 0;
diff --git a/src/dns.c b/src/dns.c
index 15d40a1..d5bf449 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -363,8 +363,9 @@ void dns_trigger_resolution(struct dns_requester *req)
 	 * valid */
 	exp = tick_add(res->last_resolution, resolvers->hold.valid);
 	if (resolvers->t && (res->status != RSLV_STATUS_VALID ||
-	    !tick_isset(res->last_resolution) || tick_is_expired(exp, now_ms)))
+	    !tick_isset(res->last_resolution) || tick_is_expired(exp, now_ms))) {
 		task_wakeup(resolvers->t, TASK_WOKEN_OTHER);
+	}
 }
 
 
@@ -2150,8 +2151,13 @@ enum act_return dns_action_do_resolve(struct act_rule *rule, struct proxy *px,
 	struct dns_resolution *resolution;
 	struct sample *smp;
 	char *fqdn;
+	struct dns_requester *req;
+	struct dns_resolvers *resolvers;
+	struct dns_resolution *res;
+	int exp;
 
 	/* we have a response to our DNS resolution */
+ use_cache:
 	if (s->dns_ctx.dns_requester && s->dns_ctx.dns_requester->resolution != NULL) {
 		resolution = s->dns_ctx.dns_requester->resolution;
 		if (resolution->step == RSLV_STEP_RUNNING) {
@@ -2211,6 +2217,22 @@ enum act_return dns_action_do_resolve(struct act_rule *rule, struct proxy *px,
 
 	s->dns_ctx.parent = rule;
 	dns_link_resolution(s, OBJ_TYPE_STREAM, 0);
+
+	/* Check if there is a fresh enough response in the cache of our associated resolution */
+	req = s->dns_ctx.dns_requester;
+	if (!req || !req->resolution) {
+		dns_trigger_resolution(s->dns_ctx.dns_requester);
+		return ACT_RET_YIELD;
+	}
+	res = req->resolution;
+	resolvers = res->resolvers;
+
+	exp = tick_add(res->last_resolution, resolvers->hold.valid);
+	if (resolvers->t && res->status == RSLV_STATUS_VALID && tick_isset(res->last_resolution)
+	    && !tick_is_expired(exp, now_ms)) {
+		goto use_cache;
+	}
+
 	dns_trigger_resolution(s->dns_ctx.dns_requester);
 	return ACT_RET_YIELD;
 }
-- 
2.7.4
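For context, the kind of configuration affected by this bug resolves the Host header at request time and routes to the resulting address; a minimal sketch could look like the following (resolver name, addresses and variable names are examples, not taken from the report):

```
resolvers mydns
    nameserver dns1 8.8.8.8:53

frontend fe
    bind *:8080
    # resolve the Host header and store the result in txn.myip
    http-request do-resolve(txn.myip,mydns,ipv4) hdr(Host),lower
    default_backend be

backend be
    # route the request to the address resolved above
    http-request set-dst var(txn.myip)
    server clear 0.0.0.0:0
```

Before this fix, each such request registered to the resolution and waited for the next response instead of reusing a still-valid cached one, hence the multi-second delays reported.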
Re: http-request do-resolve Woes
On Wed, Oct 30, 2019 at 4:48 PM David Birdsong wrote:

> On Wed, Oct 30, 2019 at 11:39 AM Baptiste wrote:
>
>> Thanks!
>>>
>>> It had that feel to it...seemed like a cache lock timeout and/or somehow
>>> tied to the request interval.
>>>
>> I think I know where to fix this behavior in the code. I will work on the
>> "how to fix it" later tonight.
>> In the meantime, you can apply the workaround below. This is doable
>> because the DNS cache is per resolvers section:
>> 1. create a second dummy DNS section:
>> resolvers main_resolver_do-resolve
>> nameserver dns1 8.8.8.8:53
>>
>> which is a copy of the first one with a different name.
>>
>> 2. reference this new resolvers section in your do-resolve action:
>> http-request do-resolve(txn.myip,main_resolver_do-resolve,ipv4)
>> hdr(Host),lower
>>
>> And you should be good until I fix it and it's backported.
>
> Awesome, thanks!
>
> Quick question: should I pay attention to timer: Tr as a proxy for both
> request received and DNS latency? I'm guessing that the capture and
> dns-resolve cause delays in haproxy fully reading the request in, is that
> right?

Capture should not generate any delays; do-resolve does :) And yes, from what I saw, it is reported in HAProxy's Tr, so you are correct.

Baptiste
Re: http-request do-resolve Woes
> Thanks!
>
> It had that feel to it...seemed like a cache lock timeout and/or somehow
> tied to the request interval.

I think I know where to fix this behavior in the code. I will work on the "how to fix it" later tonight. In the meantime, you can apply the workaround below. This is doable because the DNS cache is per resolvers section:

1. create a second dummy DNS section, which is a copy of the first one with a different name:

    resolvers main_resolver_do-resolve
        nameserver dns1 8.8.8.8:53

2. reference this new resolvers section in your do-resolve action:

    http-request do-resolve(txn.myip,main_resolver_do-resolve,ipv4) hdr(Host),lower

And you should be good until I fix it and it's backported.

Baptiste
Re: http-request do-resolve Woes
On Tue, Oct 29, 2019 at 8:18 PM David Birdsong wrote: > I should have put the haproxy version in the mail too: > > haproxy 2.0.8 > > On Tue, Oct 29, 2019 at 3:07 PM David Birdsong > wrote: > >> I've narrowed down a behavior that I think might be a bug, but is >> definitely not ideal. >> >> This minimal configuration copies header: X-Host into Host and performs a >> dynamic DNS query against that field name, stores the output in a txn var, >> and then uses a backend whic sets the dest ip to that txn var. >> >> For any requests with an X-Host header that matches a name already >> tracked by DNS in a backend, I see that haproxy spends 4-9 seconds reading >> the request from the client while any X-Host values which are not currently >> tracked by a backend show haproxy spending 1ms reading in the request from >> the client (normal.) >> >> unnamed, fast: curl -v -H "X-Host: google.com" http://127.0.0.1:8080/foo >> >> named, very slow: curl -v -H "X-Host: mixpanel.com" >> http://127.0.0.1:8080/foo >> >> Config: >> https://gist.github.com/davidbirdsong/1c3ec695fdbab10f64783437ffab901c >> haproxy -vv >> https://gist.github.com/davidbirdsong/d4c1c71e715d8461ad73a4891caca6f1 >> >> cat /etc/lsb-release >> DISTRIB_ID=Ubuntu >> DISTRIB_RELEASE=16.04 >> DISTRIB_CODENAME=xenial >> DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS" >> >> >> david@david-VirtualBox:~/tls_demo/CA$ uname -a >> Linux david-VirtualBox 4.15.0-65-generic #74~16.04.1-Ubuntu SMP Wed Sep >> 18 09:51:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux >> >> Hi David, I confirm I can reproduce the issue and from my first quick look, it is related to DNS code in HAProxy. Basically, there is a cache of the valid responses and from what I observed, your do-resolve session is registered to the resolution and instead of pulling info from the cache, it's waiting until the next request is sent and gets updated with the next response. Let me fix this. Baptiste
Re: haproxy, Windows, pull requests
My 2 cents: "let's wait for Windows to adopt the Linux kernel"..
Re: haproxy doesn't bring failed server up
On Mon, Oct 7, 2019 at 9:09 PM Lukas Tribus wrote: > Hello, > > On Mon, Oct 7, 2019 at 10:00 AM rihad wrote: > > > > BTW, all these resolver hold settings are a bit confusing, is there a > way to tell > > haproxy to rely on the TTL it gets from DNS servers/resolvers? It seems > to be > > relying on some hard-coded default values instead. > > I don't think TTL is currently considered, no. How long it will cache > is configurable and defaults to 10 seconds ("valid"). Because you'd > use very low values here anyway (and have your recursive resolver do > proper TTL considering caching), I don't believe there is a huge > impact because of this. > > But I agree, it would be better to consider the TTL. > > > Lukas > > Hi, That is correct, the runtime resolver does not follow the TTL. This is on purpose and by design: it lets admins decide for themselves when they want to trigger a new request, and it avoids problems with DNS relays that rewrite TTLs to very long values (my ISP raises anything lower than 20 minutes to 20 minutes). We could add TTL support to the roadmap, as an option, but I first need to understand the use case. Baptiste
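As Lukas notes, the caching duration is configured in the resolvers section rather than taken from the DNS TTL; a sketch of the relevant knobs (the values shown are illustrative, not a recommendation):

```
resolvers mydns
    nameserver dns1 192.168.0.53:53
    # how long a valid response may be reused before a new query is sent;
    # this setting plays the role the TTL would otherwise play (default 10s)
    hold valid 10s
    # retry behavior while a resolution is failing
    timeout retry   1s
    resolve_retries 3
```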
Re: Status of 1.5 ?
On Mon, Oct 28, 2019 at 3:54 PM Aleksandar Lazic wrote: > Hi. > > Am 25.10.2019 um 11:27 schrieb Willy Tarreau: > > Hi all, > > > > I'm just wondering what to do with 1.5. I've checked and it didn't > > receive any fix in almost 3 years. The ones recently merged into 1.6 > > that were possible candidates for 1.5 were not critical enough to > > warrant a new release for a long time. > > > > Now I'm wondering, is anyone interested in this branch to still be > > maintained ? Should I emit a new release with a few pending fixes > > just to flush the pipe and pursue its "critical fixes only" status a > > bit further, or should we simply declare it unmaintained ? I'm fine > > with either option, it's just that I hate working for no reason, and > > this version was released a bit more than 5 years ago now, so I can > > easily expect that it has few to no user by now. > > > > Please just let me know what you think, > > Well from my point of view is 1.5 not bad bud pretty old. There are some > distributions which still use 1.5 and maintain it, from my point of view > should > they switch to 1.8 as this is a LTS version. I know that's a pretty easy > statement but, that's it. > > Due to the fact that we have now 5 (1.6,1.7,1.8,1.9,2.0,2.1) Versions > which are > maintained I suggest to declare 1.5 as EOL, maybe we should also consider > to do > this also with 1.6 and 1.7. > > When we look into the current changes of the Network and the current and > upcoming challenges, QUIC/HTTP/3, ESNI, Containerized Setups, dynamic > reconfiguration and so on I would like to see that the focus is mainly on > that > new challenges. > > Jm2c. > > > Thanks, > > Willy > > Best Regards > Aleks > > Hi, I tend to agree on setting 1.5 as EOL. About 1.6 and 1.7, they could be EOLed in the next 2 years too, as Aleks stated, it will "enforce" people to use the latest shiny releases :) Baptiste
Re: Deprecating a few keywords for 2.1+
I was about to argue against this as well for the monitor-* keywords, for now. There is no "simple" way to replace them currently, and the 'return' thing will be the simple way. Note that you can use a Lua service for this purpose currently, but it's not as simple as having a single small directive in an HAProxy config file. Baptiste On Tue, Oct 29, 2019 at 4:34 AM Willy Tarreau wrote: > On Tue, Oct 29, 2019 at 12:40:52AM +0100, Aleksandar Lazic wrote: > > > Or maybe something like: > > > http-request deny deny_status 500 if { path_beg /health } { > nbsrv(yourbackend) lt 1 } > > > http-request deny deny_status 200 if { path_beg /health } > > > > Looks good but 'deny' and '200' feels wrong. > > > > Maybe we should have a 'http-request monitor ...' which replaces the > monitor* stuff? > > Well, guys you convinced me for monitor-uri. We still don't have the > "return" directive which would have been more suitable for this, but > in any case I agree that transcoding the monitor-fail rules to anything > else will be painful. > > Also, the code dealing with monitor-uri isn't the ugliest one as it's > still handled by the streams and could be converted to HTX lately. It's > just that seeing it being tested in the CLI code irritates me a little > bit. > > However, for "mode health" and "monitor-net", it's another story and > these ones cannot work in SSL nor with muxes :-/ > > To give you an idea, this is what we have in the FD accept code : > > if (p->mode == PR_MODE_HTTP || > (p->mode == PR_MODE_HEALTH && (p->options2 & PR_O2_CHK_ANY) == > PR_O2_HTTP_CHK)) > send(cfd, "HTTP/1.0 200 OK\r\n\r\n", 19, > MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE); > else if (p->mode == PR_MODE_HEALTH) > send(cfd, "OK\n", 3, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_MORE); > > Sending these ones raw on the socket with SSL or H2 makes no sense, and > I'd rather stop hacking the socket at this level. That's why I'm really > impatient to drop these ones. > > Thanks, > Willy > >
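The monitor-uri replacement Aleksandar sketches in the quoted exchange can be written out as a complete config fragment (the backend name is an assumption; on versions that later gained the 'return' directive, 'http-request return status 200' would replace the second deny):

```
frontend fe_main
    bind :8080
    # fail the health URI when the backend has no usable server left
    http-request deny deny_status 500 if { path /health } { nbsrv(be_app) lt 1 }
    # otherwise answer the health check directly, without touching a server
    http-request deny deny_status 200 if { path /health }
    default_backend be_app
```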
Re: [PATCH] BUG/MINOR: dns: allow srv record weight set to 0
Hi Luke, I remember I first did that intentionally, to avoid weights below 256 being "rounded" down to 0... And I assumed people would remove servers from their DNS if they wanted a weight of 0. Now, with some feedback, I can see I was wrong. Next time, don't hesitate to ask the question on the ML, on github, or by mail to me directly. Baptiste On Mon, Oct 21, 2019 at 8:51 PM Luke Seelenbinder < luke.seelenbin...@stadiamaps.com> wrote: > Thank you for this bug fix…we're more than a little excited! > > When I initially found it, I was under the assumption it was on purpose. > :-) > > Best, > Luke > > — > Luke Seelenbinder > Stadia Maps | Founder > stadiamaps.com > > On 21 Oct 2019, at 16:35, Christopher Faulet wrote: > > On 21/10/2019 at 16:20, Baptiste wrote: > > Thx to 2 people who spotted a bug in my patch, (missing parenthesis). > here is the updated version. > On Mon, Oct 21, 2019 at 3:59 PM Baptiste mailto:bed...@gmail.com >> wrote: >hi there, >Following up some recent discussion about SRV record's weight and server >weight in HAProxy, we spotted a bug in the current code: when weight in > SRV >record is set to 0, then server weight in HAProxy was 1... >Thanks to Willy for proposing the solution applied into that patch. >Baptiste > > > Baptiste, > > I don't know if the comment is wrong or not. But with your patch, the > weight is now between 0 and 256. The function > server_parse_weight_change_request() is ok with that. So I can amend your > comment if you want. I just want to have a confirmation. > > -- > Christopher Faulet > > >
Re: [PATCH] BUG/MINOR: dns: allow srv record weight set to 0
My comment is wrong. A server weight can have a value of 256. Please update the comment :) Baptiste On Mon, Oct 21, 2019 at 4:35 PM Christopher Faulet wrote: > On 21/10/2019 at 16:20, Baptiste wrote: > > Thx to 2 people who spotted a bug in my patch, (missing parenthesis). > > > > here is the updated version. > > > > On Mon, Oct 21, 2019 at 3:59 PM Baptiste > <mailto:bed...@gmail.com>> wrote: > > > > hi there, > > > > Following up some recent discussion about SRV record's weight and > server > > weight in HAProxy, we spotted a bug in the current code: when weight > in SRV > > record is set to 0, then server weight in HAProxy was 1... > > Thanks to Willy for proposing the solution applied into that patch. > > > > Baptiste > > > > Baptiste, > > I don't know if the comment is wrong or not. But with your patch, the > weight is > now between 0 and 256. The function server_parse_weight_change_request() > is ok > with that. So I can amend your comment if you want. I just want to have a > confirmation. > > -- > Christopher Faulet >
Re: [PATCH] BUG/MINOR: dns: allow srv record weight set to 0
Thx to 2 people who spotted a bug in my patch, (missing parenthesis). here is the updated version. On Mon, Oct 21, 2019 at 3:59 PM Baptiste wrote: > hi there, > > Following up some recent discussion about SRV record's weight and server > weight in HAProxy, we spotted a bug in the current code: when weight in SRV > record is set to 0, then server weight in HAProxy was 1... > Thanks to Willy for proposing the solution applied into that patch. > > Baptiste > From a8467daeb5cf2129f5471ef117039f778c2842fd Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Mon, 21 Oct 2019 15:13:48 +0200 Subject: [PATCH] BUG/MINOR: dns: allow srv record weight set to 0 Processing of SRV record weight was inaccurate and when a SRV record's weight was set to 0, HAProxy enforced it to '1'. This patch aims at fixing this without breaking compability with previous behavior. Backport status: 1.8 to 2.0 --- src/dns.c | 16 ++-- 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/src/dns.c b/src/dns.c index 0ce6e8302..3e1bb421e 100644 --- a/src/dns.c +++ b/src/dns.c @@ -543,10 +543,12 @@ static void dns_check_dns_response(struct dns_resolution *res) !memcmp(srv->hostname_dn, item->target, item->data_len)) { int ha_weight; - /* Make sure weight is at least 1, so - * that the server will be used. + /* DNS weight range if from 0 to 65535 + * HAProxy weight is from 0 to 255 + * The rule below ensures that weight 0 is well respected + * while allowing a "mapping" from DNS weight into HAProxy's one. */ - ha_weight = item->weight / 256 + 1; + ha_weight = (item->weight + 255) / 256; if (srv->uweight != ha_weight) { char weight[9]; @@ -590,10 +592,12 @@ static void dns_check_dns_response(struct dns_resolution *res) !(srv->flags & SRV_F_CHECKPORT)) srv->check.port = item->port; -/* Make sure weight is at least 1, so - * that the server will be used. 
+/* DNS weight range if from 0 to 65535 + * HAProxy weight is from 0 to 255 + * The rule below ensures that weight 0 is well respected + * while allowing a "mapping" from DNS weight into HAProxy's one. */ -ha_weight = item->weight / 256 + 1; +ha_weight = (item->weight + 255) / 256; snprintf(weight, sizeof(weight), "%d", ha_weight); server_parse_weight_change_request(srv, weight); -- 2.17.1
[PATCH] BUG/MINOR: dns: allow srv record weight set to 0
hi there, Following up some recent discussion about SRV record's weight and server weight in HAProxy, we spotted a bug in the current code: when weight in SRV record is set to 0, then server weight in HAProxy was 1... Thanks to Willy for proposing the solution applied into that patch. Baptiste From 35598ed8ffce74e4cc834566566957dde5ede167 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Mon, 21 Oct 2019 15:13:48 +0200 Subject: [PATCH] BUG/MINOR: dns: allow srv record weight set to 0 Processing of SRV record weight was inaccurate and when a SRV record's weight was set to 0, HAProxy enforced it to '1'. This patch aims at fixing this without breaking compability with previous behavior. Backport status: 1.8 to 2.0 --- src/dns.c | 16 ++-- 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/src/dns.c b/src/dns.c index 0ce6e8302..813dfc3fc 100644 --- a/src/dns.c +++ b/src/dns.c @@ -543,10 +543,12 @@ static void dns_check_dns_response(struct dns_resolution *res) !memcmp(srv->hostname_dn, item->target, item->data_len)) { int ha_weight; - /* Make sure weight is at least 1, so - * that the server will be used. + /* DNS weight range if from 0 to 65535 + * HAProxy weight is from 0 to 255 + * The rule below ensures that weight 0 is well respected + * while allowing a "mapping" from DNS weight into HAProxy's one. */ - ha_weight = item->weight / 256 + 1; + ha_weight = item->weight + 255 / 256; if (srv->uweight != ha_weight) { char weight[9]; @@ -590,10 +592,12 @@ static void dns_check_dns_response(struct dns_resolution *res) !(srv->flags & SRV_F_CHECKPORT)) srv->check.port = item->port; -/* Make sure weight is at least 1, so - * that the server will be used. +/* DNS weight range if from 0 to 65535 + * HAProxy weight is from 0 to 255 + * The rule below ensures that weight 0 is well respected + * while allowing a "mapping" from DNS weight into HAProxy's one. 
*/ -ha_weight = item->weight / 256 + 1; +ha_weight = item->weight + 255 / 256; snprintf(weight, sizeof(weight), "%d", ha_weight); server_parse_weight_change_request(srv, weight); -- 2.17.1
Re: [PATCH] BUG/MEDIUM: dns: Correctly use weight specified in SRV record
On Thu, Oct 17, 2019 at 2:32 PM Daniel Corbett wrote: > Hello, > On 10/17/19 1:47 AM, Baptiste wrote: > > > > Hi Daniel, > > Thanks for the patch, but I don't think it's accurate. > What this part of the code aims to do is to "map" a DNS weight into an > HAProxy weight. > There is a ratio of 256 between both: DNS being in range "0-65535" and > HAProxy in range "0-255". > What your code does, is that it ignores any DNS weight above 256 and force > them to 1... > > The only "bug" I can see here now is that a server's weight can never be > 0. But nobody reported this as an issue yet. > > I'll check what question is asked into #48 and answer it. > > > Ah ha! Thanks for the explanation and my apologies for the noise. This > makes sense now. > > I have put together another patch that I will send later for the > "resolve-opts ignore-weight" within that same issue report but wanted to > get this one out first. > > Thanks for taking the time to review this Baptiste. > > -- Daniel > No problem! I'll fix the documentation and add some comments in the code for the short-term "fix". I'll also fix the weight of 0 asap. That said, this may be a legitimate feature request: a "custom" ratio to apply to the DNS weight to map it into HAProxy's weight. The default ratio would be 256; if people want to say "DNS weight 50 must match HAProxy weight 50", they could set the ratio to 1, and values above 256 would of course be truncated to 256 (this idea was provided by Willy). This is a mid-term fix, to consider if we see that people need it (i.e. broken DNS server implementations, etc.). Baptiste
Re: [PATCH] BUG/MEDIUM: dns: Correctly use weight specified in SRV record
On Thu, Oct 17, 2019 at 5:35 AM Daniel Corbett wrote: > Hello, > > > In #48 it was reported that when using the server-template > > directive combined with an SRV record that HAProxy would > always set the weight to "1" regardless of what the SRV record > contains. > > It was found that in an attempt to force a minimum value of "1" > actually ended up forcing "1" in all situations. This was due to > an improper equation: ( x / 256 ) + 1 > > This patch should be backported to 1.8 and 1.9 > > > > Thanks, > > -- Daniel > > > Hi Daniel, Thanks for the patch, but I don't think it's accurate. What this part of the code aims to do is to "map" a DNS weight into an HAProxy weight. There is a ratio of 256 between the two: DNS weights being in the range "0-65535" and HAProxy weights in the range "0-255". What your code does is ignore any DNS weight above 256 and force it to 1... The only "bug" I can see here now is that a server's weight can never be 0. But nobody has reported this as an issue yet. I'll check what question is asked in #48 and answer it. As a conclusion, please don't apply this patch. Baptiste
BUG/MINOR: action: do-resolve does not yield when requests carry body
Hi, This patch fixes the issue reported by David in github issue 227. Basically, when the task is woken up by the scheduler because there is some data in the request body, it mistakenly concludes there is nothing to do, so it cleans up the resolution and tells the scheduler it's done with its tasks. This patch now checks whether the associated resolution is still in RUNNING state and, if so, tells the scheduler to wake the task up again later. Baptiste From 53461e0e39cbba85adca545c33497e944f0ee426 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Tue, 1 Oct 2019 15:32:40 +0200 Subject: [PATCH] BUG/MINOR: action: do-resolve does not yield on requests with body @davidmogar reported a github issue (#227) about problems with the do-resolve action when the request contains a body. The variable was never populated in such a case, even though tcpdump showed a valid DNS response coming back. The do-resolve action is a task in HAProxy, so it is woken by the scheduler each time the scheduler thinks the task may have some work to do. When a simple HTTP request is sent, the task is called, it sends the DNS request, and the scheduler wakes the task up again later once the DNS response is there. Now, when the client sends a PUT or a POST request (or any other type) with a BODY, the do-resolve action is first woken up once the headers are processed. It sends the DNS request. Then, when the bytes of the body are processed by HAProxy AND the DNS response has not yet been received, the action simply terminates and cleans up all the data associated with this resolution... This patch detects this behavior: if the action is woken up while a DNS resolution is in RUNNING state, the action tells the scheduler to wake it up again later.
Backport status: 2.0 and above --- src/dns.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/src/dns.c b/src/dns.c index ef840e50c..0ce6e8302 100644 --- a/src/dns.c +++ b/src/dns.c @@ -2150,6 +2150,9 @@ enum act_return dns_action_do_resolve(struct act_rule *rule, struct proxy *px, /* we have a response to our DNS resolution */ if (s->dns_ctx.dns_requester && s->dns_ctx.dns_requester->resolution != NULL) { resolution = s->dns_ctx.dns_requester->resolution; + if (resolution->step == RSLV_STEP_RUNNING) { + return ACT_RET_YIELD; + } if (resolution->step == RSLV_STEP_NONE) { /* We update the variable only if we have a valid response. */ if (resolution->status == RSLV_STATUS_VALID) { -- 2.17.1
Re: [PR/FEATURE] support for virtual hosts / Host header per server
Hi Romain, Can you tell us (or me individually) why you can't use HAProxy with Kubernetes because of this? I am interested in the use case. Baptiste On Tue, Oct 1, 2019 at 2:10 PM Morotti, Romain D < romain.d.moro...@jpmorgan.com> wrote: > What is the status on this? > > The lack of this functionality is a blocker to use HAProxy with kubernetes. > > -Original Message- > From: Willy Tarreau [mailto:w...@1wt.eu] > Sent: 01 August 2019 04:59 > To: Morotti, Romain D > Cc: haproxy@formilux.org > Subject: Re: [PR/FEATURE] support for virtual hosts / Host header per > server > > Hello Romain, > > On Wed, Jul 31, 2019 at 04:02:04PM +, Morotti, Romain D wrote: > > Hello, > > > > Didn't get any reply here. Is anybody reviewing this mailing list? > > Sorry about this but I simply think that most developers are busy chasing > complex bugs and since it's the holiday period it's more difficult to find > time to review patches. > > Regards, > Willy
Re: [cache] allow caching of OPTIONS request
On Mon, Aug 12, 2019 at 10:19 PM Willy Tarreau wrote: > Hi Baptiste, > > On Mon, Aug 12, 2019 at 09:35:56PM +0200, Baptiste wrote: > > The use case is to avoid too many requests hitting an application server > > for "preflight requests". > > But does this *really* happen to a point of being a concern with OPTIONS > requests ? I mean, if OPTIONS represent a small percentage of the traffic > I'd rather not start to hack around the standards and regret in 2 versions > later... > > > It seems it owns its own header for caching: > > https://www.w3.org/TR/cors/#access-control-max-age-response-header. > > Some description here: > https://www.w3.org/TR/cors/#preflight-result-cache-0 > > But all this spec is explicitly for user-agents and not at all for > intermediaries. And it doesn't make use of any single Cache-Control > header field, it solely uses its own set of headers precisely to > avoid mixing the two! And it doesn't suggest to violate the HTTP > standards. > > > I do agree we should disable this by default and add an option > > "enable-caching-cors-responses" to enable it on demand and clearly state > in > > the doc that this is not RFC compliant. > > Let me know if that is ok for you. > > I still feel extremely uncomfortable with this because given that it > requires to violate the basic standards to achieve something that is > expected to be normal, that smells strongly like there is a wrong > assumption somewhere in the chain, either regarding how it's being > used or about some requirements. > > If you don't mind I'd rather bring the question on the HTTP working > group to ask if we're missing something obvious or if user-agents > suddenly decided to break the internet by purposely making non- > cacheable requests, which is totally contrary to their tradition. > > As you know we've known a period many years ago where people used > to say "I inserted haproxy and my application stopped working". 
Now > these days are over (the badmouth will say haproxy stopped working) > in main part because we took care of properly dealing with the > standards. And clearly I'm extremely cautious not to revive these > bad memories. > > Thanks, > Willy > Hi Willy, Yes, I understand. It would be great to have the feedback from the HTTP working group. In the meantime, if some people here would like to share with Willy and me, privately, some numbers on what percentage of their traffic OPTIONS requests represent, that would be helpful. Baptiste
Re: [cache] allow caching of OPTIONS request
On Mon, Aug 12, 2019 at 8:14 AM Willy Tarreau wrote: > Guys, > > On Wed, Aug 07, 2019 at 02:07:09PM +0200, Baptiste wrote: > > Hi Vincent, > > > > HAProxy does not follow the max-age in the Cache-Control anyway. > > I know it's a bit late but I'm having an objection against this change. > The reason is simple, OPTIONS is explicitly documented as being > non-cacheable : https://tools.ietf.org/html/rfc7231#section-4.3.7 > > So not only by implementing it we're going to badly break a number > of properly running applications, but in addition we cannot expect > any cache-control from the server in response to an OPTIONS request > precisely because this is forbidden by the HTTP standard. > > When I search for OPTIONS and cache on the net, I only find AWS's > Cloudfront which offers an option to enable it, and a number of > feature requests responded to by "don't do that you're wrong". So > at the very least we need to disable this by default, and possibly > condition it with a well visible option such as "yes-i-know-i-am- > breaking-the-cache-and-promise-never-to-file-a-bug-report" but what > would be better would be to understand the exact use case and why it > is considered to be valid despite being a blatant violation of the > HTTP standard! History tells us that purposely violating standards > only happens for bad reasons and systematically results in security > issues. > > Thanks, > Willy > Hi Willy, The use case is to avoid too many requests hitting an application server for "preflight requests". It seems it has its own header for caching: https://www.w3.org/TR/cors/#access-control-max-age-response-header. Some description here: https://www.w3.org/TR/cors/#preflight-result-cache-0 I do agree we should disable this by default and add an option "enable-caching-cors-responses" to enable it on demand, and clearly state in the doc that this is not RFC compliant. Let me know if that is ok for you. Baptiste
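For context, a CORS preflight exchange carries its own caching header rather than Cache-Control; an illustrative (made-up) request/response pair:

```
OPTIONS /api/items HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: content-type
Access-Control-Max-Age: 86400
```

Access-Control-Max-Age only instructs the browser's own preflight result cache; as Willy points out in the quoted message, the spec says nothing about intermediaries, which is the crux of the disagreement.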
Re: [cache] allow caching of OPTIONS request
On Wed, Aug 7, 2019 at 3:18 PM William Lallemand wrote: > On Wed, Aug 07, 2019 at 12:38:05PM +0200, Baptiste wrote: > > Hi there, > > > > Please find in attachement a couple of patches to allow caching responses > > to OPTIONS requests, used in CORS pattern. > > In modern API where CORS is applied, there may be a bunch of OPTIONS > > requests coming in to the API servers, so caching these responses will > > improve API response time and lower the load on the servers. > > Given that HAProxy does not yet support the Vary header, this means this > > feature is useful in a single case, when the server send the following > > header "set access-control-allow-origin: *". > > > > William, can you check if my patches look correct, or if this is totally > > wrong and then I'll open an issue on github for tracking this one. > > > > Looks good to me, pushed in master. > > -- > William Lallemand > Great, thanks! Baptiste
Re: [cache] allow caching of OPTIONS request
Hi Vincent, HAProxy does not follow the max-age in the Cache-Control anyway. Here is what the configuration would look like:

backend X
    http-request cache-use cors if METH_OPTIONS
    http-response cache-store cors if METH_OPTIONS

cache cors
    total-max-size 64
    max-object-size 1024
    max-age 60

You see, the time the object will be cached by HAProxy is defined in your cache storage bucket. Baptiste On Wed, Aug 7, 2019 at 1:47 PM GALLISSOT VINCENT wrote: > Hi there, > > > May I add that, in the CORS implementation, there is a specific header > used for the caching duration: *Access-Control-Max-Age* > > This header is supported by most of browsers and its specification is > available : https://fetch.spec.whatwg.org/#http-access-control-max-age > > One would think of using this header value instead of the well known > Cache-Control header when dealing with CORS and OPTIONS requests. > > Cheers, > Vincent > > -- > *From:* Baptiste > *Sent:* Wednesday, August 7, 2019 12:38 > *To:* HAProxy; William Lallemand > *Subject:* [cache] allow caching of OPTIONS request > > Hi there, > > Please find in attachement a couple of patches to allow caching responses > to OPTIONS requests, used in CORS pattern. > In modern API where CORS is applied, there may be a bunch of OPTIONS > requests coming in to the API servers, so caching these responses will > improve API response time and lower the load on the servers. > Given that HAProxy does not yet support the Vary header, this means this > feature is useful in a single case, when the server send the following > header "set access-control-allow-origin: *". > > William, can you check if my patches look correct, or if this is totally > wrong and then I'll open an issue on github for tracking this one. > > Baptiste >
[cache] allow caching of OPTIONS request
Hi there, Please find in attachment a couple of patches to allow caching responses to OPTIONS requests, used in the CORS pattern. In modern APIs where CORS is applied, there may be a bunch of OPTIONS requests coming in to the API servers, so caching these responses will improve API response time and lower the load on the servers. Given that HAProxy does not yet support the Vary header, this feature is useful in a single case, when the server sends the following header: "access-control-allow-origin: *". William, can you check if my patches look correct, or if this is totally wrong, in which case I'll open an issue on github for tracking this one. Baptiste From b1ed59901522dc32fa112e77c93c9a723ecc2189 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Wed, 7 Aug 2019 12:24:36 +0200 Subject: [PATCH 2/2] MINOR: http: allow caching of OPTIONS request Allow HAProxy to cache responses to OPTIONS HTTP requests. This is useful in the "Cross-Origin Resource Sharing" (CORS) use case, to cache CORS responses from API servers. Since HAProxy does not support the Vary header for now, this is only useful for the "access-control-allow-origin: *" use case.
--- src/cache.c | 13 - 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/src/cache.c b/src/cache.c index 5b4062384..001532651 100644 --- a/src/cache.c +++ b/src/cache.c @@ -560,8 +560,8 @@ enum act_return http_action_store_cache(struct act_rule *rule, struct proxy *px, if (!(txn->req.flags & HTTP_MSGF_VER_11)) goto out; - /* cache only GET method */ - if (txn->meth != HTTP_METH_GET) + /* cache only GET or OPTIONS method */ + if (txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_OPTIONS) goto out; /* cache key was not computed */ @@ -1058,6 +1058,9 @@ int sha1_hosturi(struct stream *s) ctx.blk = NULL; switch (txn->meth) { + case HTTP_METH_OPTIONS: + chunk_memcat(trash, "OPTIONS", 7); + break; case HTTP_METH_HEAD: case HTTP_METH_GET: chunk_memcat(trash, "GET", 3); @@ -1093,10 +1096,10 @@ enum act_return http_action_req_cache_use(struct act_rule *rule, struct proxy *p struct cache_flt_conf *cconf = rule->arg.act.p[0]; struct cache *cache = cconf->c.cache; - /* Ignore cache for HTTP/1.0 requests and for requests other than GET - * and HEAD */ + /* Ignore cache for HTTP/1.0 requests and for requests other than GET, + * HEAD and OPTIONS */ if (!(txn->req.flags & HTTP_MSGF_VER_11) || - (txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_HEAD)) + (txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_HEAD && txn->meth != HTTP_METH_OPTIONS)) txn->flags |= TX_CACHE_IGNORE; http_check_request_for_cacheability(s, &s->req); -- 2.17.1 From e3aee8fe302e108e2652842f537dc850978d2e59 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Mon, 5 Aug 2019 16:55:32 +0200 Subject: [PATCH 1/2] MINOR: http: add method to cache hash Current HTTP cache hash contains only the Host header and the url path. That said, request method should also be added to the mix to support caching other request methods on the same URL. IE GET and OPTIONS. 
--- src/cache.c | 16 +--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/src/cache.c b/src/cache.c index 9cef0cab6..5b4062384 100644 --- a/src/cache.c +++ b/src/cache.c @@ -1041,9 +1041,9 @@ enum act_parse_ret parse_cache_store(const char **args, int *orig_arg, struct pr return ACT_RET_PRS_OK; } -/* This produces a sha1 hash of the concatenation of the first - * occurrence of the Host header followed by the path component if it - * begins with a slash ('/'). */ +/* This produces a sha1 hash of the concatenation of the HTTP method, + * the first occurrence of the Host header followed by the path component + * if it begins with a slash ('/'). */ int sha1_hosturi(struct stream *s) { struct http_txn *txn = s->txn; @@ -1056,6 +1056,16 @@ int sha1_hosturi(struct stream *s) trash = get_trash_chunk(); ctx.blk = NULL; + + switch (txn->meth) { + case HTTP_METH_HEAD: + case HTTP_METH_GET: + chunk_memcat(trash, "GET", 3); + break; + default: + return 0; + } + if (!http_find_header(htx, ist("Host"), &ctx, 0)) return 0; chunk_memcat(trash, ctx.value.ptr, ctx.value.len); -- 2.17.1
Re: load-server-state-from-file "automatic" transfer?
On Wed, Jul 24, 2019 at 1:38 PM Daniel Schneller < daniel.schnel...@centerdevice.com> wrote: > Hi! > > I have been looking into load-server-state-from-file to prevent 500 errors > being reported after a service reload. Currently we are seeing these, because > the new instance comes up and first wants to see the minimum configured number of > health checks for a backend server to succeed, before it hands requests to it. > > From what I can tell, the state file needs to be saved manually before a > service reload, so that the new process coming up can read it back. I can do that, > of course, but I was wondering what the reasoning was to not have this data > transferred to a new process in a similar fashion as file handles or stick-tables (via > peers)? > > Thanks a lot! > > Daniel > > -- > Daniel Schneller > Principal Cloud Engineer > GPG key at https://keybase.io/dschneller > > CenterDevice GmbH > Rheinwerkallee 3 > 53227 Bonn > www.centerdevice.com
Hi Daniel, You're making a good point. Using the file system was the simplest and fastest way to go when we first designed this feature 4 or 5 years ago. I do agree that now, with the master/worker and threaded models being pushed, using the runtime API may make sense and would be even more "cloud native". Maybe @William would have advice on this one. Baptiste
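For readers who land on this thread: until such a transfer mechanism exists, the usual workaround is to dump the state over the stats socket right before each reload. A minimal sketch, where the socket path, state file path, and the systemd reload command are illustrative assumptions, not from the thread:

```
# haproxy.cfg
global
    stats socket /var/run/haproxy.sock mode 600 level admin
    server-state-file /var/lib/haproxy/server-state

defaults
    load-server-state-from-file global

# Shell, run just before every reload (e.g. from a systemd ExecReload hook):
#   echo "show servers state" | socat stdio /var/run/haproxy.sock \
#       > /var/lib/haproxy/server-state
#   systemctl reload haproxy
```

With this in place, the new process starts from the last known check states instead of re-learning them, which avoids the 500s described above.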
Re: http-request do-resolve for rDNS queries
Hi Luke, It is not yet doable with do-resolve. That said, you can easily write an agent to do this. I can help if you need to. Baptiste On Fri, Jun 21, 2019 at 3:25 PM, Luke Seelenbinder wrote: > Hello all, > > Is it possible to use the new `http-request do-resolve` to do reverse DNS > lookups? It's left unspecified in the documentation, and I think it'd be > helpful to clarify for posterity. > > I'd like to integrate this as part of an IP blocking methodology, but that > would depend on rDNS being supported. > > Thanks! > > Luke > > — > *Luke Seelenbinder* > SermonAudio.com <http://sermonaudio.com> | Senior Software Engineer
Re: [PATCH] server state: cleanup and load global file in a tree
On Friday, June 14, 2019, Willy Tarreau wrote: > Hi Baptiste, > > On Thu, Jun 13, 2019 at 04:29:43PM +0200, Baptiste wrote: > > Last mail, this is not backportable. HAProxy 2.0+ only. > > The second one is quite a substantial change at this stage where we're > finalizing cleanup and minor fixes. Given that it's only about improving > the load time, I consider it as an optimization and prefer to postpone > it for 2.1. I've quickly glanced over it and saw a few minor mistakes > in the error checks indicating that a bit more review time should be > assigned to it. And if really needed later we may imagine backporting > it once it has cooked long enough in 2.1. > > The first one however seems OK since it will affect how state files are > loaded, we'll definitely not change this in the middle of a release. > > So I'm taking the first one right now, please resubmit the second one > later, or ping me so that I show you the minor things to address. > > Thanks, > Willy > Sure. Let's sync after the release. Baptiste
Re: [PATCH] server state: cleanup and load global file in a tree
Last mail: this is not backportable, HAProxy 2.0+ only. On Thu, Jun 13, 2019 at 4:12 PM Baptiste wrote: > These patches replace the 2 previous ones. I fixed a compilation warning > about a possible use of an uninitialized variable in the second patch. > I also ran the reg-tests successfully. > > Cheers
Re: [PATCH] server state: cleanup and load global file in a tree
These patches replace the 2 previous ones. I fixed a compilation warning about a possible use of an uninitialized variable in the second patch. I also ran the reg-tests successfully. Cheers > From 0c5b17976ec703b12040d813bdd6ac975af7b4d7 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Thu, 13 Jun 2019 13:24:29 +0200 Subject: [PATCH 2/2] MEDIUM: server: server-state global file stored in a tree Server states can be recovered from either a "global" file (all backends) or a "local" file (per backend). The way the algorithm to parse the state file was first implemented was good enough for a low number of backends and servers per backend. Basically, for each backend the state file (global or local) is opened, parsed entirely and for each line we check if it contains data related to a server from the backend we're currently processing. We must read the file entirely, just in case some lines for the current backend are stored at the end of the file. This does not scale at all! This patch changes the behavior above for the "global" file only. Now, the global file is read and parsed once and all lines it contains are stored in a tree, for faster discovery. This results in far fewer fopen, fgets, and strcmp calls, which makes loading of very big state files very quick now. --- include/types/server.h | 11 ++ src/server.c | 412 - 2 files changed, 294 insertions(+), 129 deletions(-) diff --git a/include/types/server.h b/include/types/server.h index 4a0772685..568528976 100644 --- a/include/types/server.h +++ b/include/types/server.h @@ -345,6 +345,17 @@ struct server { struct sockaddr_storage socks4_addr; /* the address of the SOCKS4 Proxy, including the port */ }; + +/* Storage structure to load server-state lines from a flat file into + * an ebtree, for faster processing + */ +struct state_line { + char *line; + struct ebmb_node name_name; + /* WARNING don't put anything after name_name, it's used by the key */ +}; + + /* Descriptor for a "server" keyword.
The ->parse() function returns 0 in case of * success, or a combination of ERR_* flags if an error is encountered. The * function pointer can be NULL if not implemented. The function also has an diff --git a/src/server.c b/src/server.c index 66fba992d..1a9dd1617 100644 --- a/src/server.c +++ b/src/server.c @@ -47,10 +47,14 @@ #include #include +#include + static void srv_update_status(struct server *s); static void srv_update_state(struct server *srv, int version, char **params); static int srv_apply_lastaddr(struct server *srv, int *err_code); static int srv_set_fqdn(struct server *srv, const char *fqdn, int dns_locked); +static void srv_state_parse_line(char *buf, const int version, char **params, char **srv_params); +static int srv_state_get_version(FILE *f); /* List head of all known server keywords */ static struct srv_kw_list srv_keywords = { @@ -69,6 +73,9 @@ struct dict server_name_dict = { .values = EB_ROOT_UNIQUE, }; +/* tree where global state_file is loaded */ +struct eb_root state_file = EB_ROOT; + int srv_downtime(const struct server *s) { if ((s->cur_state != SRV_ST_STOPPED) && s->last_change < now.tv_sec) // ignore negative time @@ -3363,6 +3370,130 @@ static void srv_update_state(struct server *srv, int version, char **params) } } + +/* + * read next line from file and return the server state version if one found. + * If no version is found, then 0 is returned + * Note that this should be the first read on + */ +static int srv_state_get_version(FILE *f) { + char buf[2]; + int ret; + + /* first character of first line of the file must contain the version of the export */ + if (fgets(buf, 2, f) == NULL) { + return 0; + } + + ret = atoi(buf); + if ((ret < SRV_STATE_FILE_VERSION_MIN) || + (ret > SRV_STATE_FILE_VERSION_MAX)) + return 0; + + return ret; +} + + +/* + * parses server state line stored in and supposedly in version . + * Set and accordingly. + * In case of error, params[0] is set to NULL. 
+ */ +static void srv_state_parse_line(char *buf, const int version, char **params, char **srv_params) +{ + int buflen, arg, srv_arg; + char *cur, *end; + + buflen = strlen(buf); + cur = buf; + end = cur + buflen; + + /* we need at least one character */ + if (buflen == 0) { + params[0] = NULL; + return; + } + + /* ignore blank characters at the beginning of the line */ + while (isspace(*cur)) + ++cur; + + /* Ignore empty or commented lines */ + if (cur == end || *cur == '#') { + params[0] = NULL; + return; + } + + /* truncated lines */ + if (buf[buflen - 1] != '\n') { + //ha_warning("server-state file '%s': truncated line\n", filepath); + params[0] = NULL; + return; + } + + /* Removes trailing '\n' */ + buf[buflen - 1] = '\0'; + + /* we're now ready to move the line into *srv_params[] *
[PATCH] server state: cleanup and load global file in a tree
Hi all, Please find enclosed to this email a couple of patches: 0001: cleans up the server state code to match on server names only (since 7da71293e431b5ebb3d6289a55b0102331788ee6, the server name is a reliable piece of information) 0002: loads the global server state file into a tree for fast processing. As an example, I set up a config file with 1000 backends holding a random number of servers (from 1 to 1000+), resulting in a state file of 240K lines... Loading this file without server state enabled takes 5.2s on my laptop, and 5.6s with server state enabled. As a measure of comparison, HAProxy 1.9.x takes around 1m35s to load the same file (no tree involved)... Baptiste From f8ed4d51f8aadd61baec4094caec2e1e11a957ab Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Thu, 13 Jun 2019 13:24:29 +0200 Subject: [PATCH 2/2] MEDIUM: server: server-state global file stored in a tree Server states can be recovered from either a "global" file (all backends) or a "local" file (per backend). The way the algorithm to parse the state file was first implemented was good enough for a low number of backends and servers per backend. Basically, for each backend the state file (global or local) is opened, parsed entirely, and for each line we check if it contains data related to a server from the backend we're currently processing. We must read the file entirely, just in case some lines for the current backend are stored at the end of the file. This does not scale at all! This patch changes the behavior above for the "global" file only. Now, the global file is read and parsed once and all lines it contains are stored in a tree, for faster discovery. This results in far fewer fopen, fgets, and strcmp calls, which makes loading of very big state files very quick now.
--- include/types/server.h | 11 ++ src/server.c | 411 - 2 files changed, 293 insertions(+), 129 deletions(-) diff --git a/include/types/server.h b/include/types/server.h index 4a0772685..568528976 100644 --- a/include/types/server.h +++ b/include/types/server.h @@ -345,6 +345,17 @@ struct server { struct sockaddr_storage socks4_addr; /* the address of the SOCKS4 Proxy, including the port */ }; + +/* Storage structure to load server-state lines from a flat file into + * an ebtree, for faster processing + */ +struct state_line { + char *line; + struct ebmb_node name_name; + /* WARNING don't put anything after name_name, it's used by the key */ +}; + + /* Descriptor for a "server" keyword. The ->parse() function returns 0 in case of * success, or a combination of ERR_* flags if an error is encountered. The * function pointer can be NULL if not implemented. The function also has an diff --git a/src/server.c b/src/server.c index 66fba992d..777d0d0dc 100644 --- a/src/server.c +++ b/src/server.c @@ -47,10 +47,14 @@ #include #include +#include + static void srv_update_status(struct server *s); static void srv_update_state(struct server *srv, int version, char **params); static int srv_apply_lastaddr(struct server *srv, int *err_code); static int srv_set_fqdn(struct server *srv, const char *fqdn, int dns_locked); +static void srv_state_parse_line(char *buf, const int version, char **params, char **srv_params); +static int srv_state_get_version(FILE *f); /* List head of all known server keywords */ static struct srv_kw_list srv_keywords = { @@ -69,6 +73,9 @@ struct dict server_name_dict = { .values = EB_ROOT_UNIQUE, }; +/* tree where global state_file is loaded */ +struct eb_root state_file = EB_ROOT; + int srv_downtime(const struct server *s) { if ((s->cur_state != SRV_ST_STOPPED) && s->last_change < now.tv_sec) // ignore negative time @@ -3363,6 +3370,130 @@ static void srv_update_state(struct server *srv, int version, char **params) } } + +/* + * read next line from file 
and return the server state version if one found. + * If no version is found, then 0 is returned + * Note that this should be the first read on + */ +static int srv_state_get_version(FILE *f) { + char buf[2]; + int ret; + + /* first character of first line of the file must contain the version of the export */ + if (fgets(buf, 2, f) == NULL) { + return 0; + } + + ret = atoi(buf); + if ((ret < SRV_STATE_FILE_VERSION_MIN) || + (ret > SRV_STATE_FILE_VERSION_MAX)) + return 0; + + return ret; +} + + +/* + * parses server state line stored in and supposedly in version . + * Set and accordingly. + * In case of error, params[0] is set to NULL. + */ +static void srv_state_parse_line(char *buf, const int version, char **params, char **srv_params) +{ + int buflen, arg, srv_arg; + char *cur, *end; + + buflen = strlen(buf); + cur = buf; + end = cur + buflen; + + /* we need at least one character */ + if (buflen == 0) { + params[0] = NULL; + return; + } + + /* ignore blank characters at the beginning of the line */ + while (isspace(*c
[PATCH] Enable set-dst and set-dst-port at tcp-request content layer
Hi, For some reason, 'tcp-request content' can't execute set-dst and set-dst-port. This patch fixes this issue. Note that this patch will be useful for the do-resolve action. Baptiste From c384d381dbbfa0adae04137238b4fd11593bd2bf Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Thu, 18 Apr 2019 16:21:13 +0200 Subject: [PATCH] MINOR: proto_tcp: tcp-request content: enable set-dst and set-dst-port The set-dst and set-dst-port actions are available at both the 'tcp-request connection' and 'http-request' layers, but not at the layer in the middle. This patch fixes this omission and enables both set-dst and set-dst-port at the 'tcp-request content' layer. --- doc/configuration.txt | 5 + src/proto_tcp.c | 4 +++- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/doc/configuration.txt b/doc/configuration.txt index c07961a..47edf38 100644 --- a/doc/configuration.txt +++ b/doc/configuration.txt @@ -9745,6 +9745,8 @@ tcp-request content [{if | unless} ] - sc-inc-gpc0() - sc-inc-gpc1() - sc-set-gpt0() +- set-dst +- set-dst-port - set-var() - unset-var() - silent-drop @@ -9778,6 +9780,9 @@ tcp-request content [{if | unless} ] wait until the inspect delay expires when the data to be tracked is not yet available. + The "set-dst" and "set-dst-port" actions are used to set respectively the destination + IP and port. More information on how to use them is available at "http-request set-dst". + The "set-var" is used to set the content of a variable. The variable is declared inline. For "tcp-request session" rules, only session-level variables can be used, without any layer7 contents.
diff --git a/src/proto_tcp.c b/src/proto_tcp.c index 6a5fdef..cb895a2 100644 --- a/src/proto_tcp.c +++ b/src/proto_tcp.c @@ -2008,7 +2008,9 @@ static struct action_kw_list tcp_req_sess_actions = {ILH, { INITCALL1(STG_REGISTER, tcp_req_sess_keywords_register, &tcp_req_sess_actions); static struct action_kw_list tcp_req_cont_actions = {ILH, { - { "silent-drop", tcp_parse_silent_drop }, + { "silent-drop", tcp_parse_silent_drop }, + { "set-dst" , tcp_parse_set_src_dst }, + { "set-dst-port", tcp_parse_set_src_dst }, { /* END */ } }}; -- 2.7.4
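Once this patch is applied, the new rules can be combined with the constant sample fetches to force a fixed destination. A hypothetical snippet (all addresses and names below are made up for illustration):

```
frontend fe_tcp
    mode tcp
    bind :8080
    # rewrite the destination address and port before connecting out
    tcp-request content set-dst ipv4(10.0.0.10)
    tcp-request content set-dst-port int(9090)
    default_backend be_dynamic

backend be_dynamic
    mode tcp
    # address 0.0.0.0:0 tells HAProxy to connect to the stream's
    # current destination, i.e. the one set by the rules above
    server transparent 0.0.0.0:0
```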
[PATCH] http-request do-resolve
Hi all, Willy, Please find attached to this email the 4 patches for the http-request do-resolve action I submitted a few months ago. I integrated all feedback from Willy and also now support tcp-request content do-resolve. Baptiste From e96ff49ee05dbdc15dc7582349e6314dcfccb20e Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Tue, 30 Jan 2018 08:10:20 +0100 Subject: [PATCH 3/5] MINOR: obj_type: new object type for struct stream This patch creates a new obj_type for the struct stream in HAProxy. --- include/proto/obj_type.h | 13 + include/types/obj_type.h | 1 + include/types/stream.h | 4 +++- 3 files changed, 17 insertions(+), 1 deletion(-) diff --git a/include/proto/obj_type.h b/include/proto/obj_type.h index 47273ca..19865bb 100644 --- a/include/proto/obj_type.h +++ b/include/proto/obj_type.h @@ -30,6 +30,7 @@ #include #include #include +#include #include static inline enum obj_type obj_type(enum obj_type *t) @@ -158,6 +159,18 @@ static inline struct dns_srvrq *objt_dns_srvrq(enum obj_type *t) return __objt_dns_srvrq(t); } +static inline struct stream *__objt_stream(enum obj_type *t) +{ + return container_of(t, struct stream, obj_type); +} + +static inline struct stream *objt_stream(enum obj_type *t) +{ + if (!t || *t != OBJ_TYPE_STREAM) + return NULL; + return __objt_stream(t); +} + static inline void *obj_base_ptr(enum obj_type *t) { switch (obj_type(t)) { diff --git a/include/types/obj_type.h b/include/types/obj_type.h index e141d69..9410718 100644 --- a/include/types/obj_type.h +++ b/include/types/obj_type.h @@ -41,6 +41,7 @@ enum obj_type { OBJ_TYPE_CONN, /* object is a struct connection */ OBJ_TYPE_SRVRQ,/* object is a struct dns_srvrq */ OBJ_TYPE_CS, /* object is a struct conn_stream */ + OBJ_TYPE_STREAM, /* object is a struct stream */ OBJ_TYPE_ENTRIES /* last one : number of entries */ } __attribute__((packed)) ; diff --git a/include/types/stream.h b/include/types/stream.h index 93a39a3..b6a3e84 100644 --- a/include/types/stream.h +++ 
b/include/types/stream.h @@ -151,7 +151,9 @@ struct stream { struct stktable *table; } store[8]; /* tracked stickiness values to store */ int store_count; - /* 4 unused bytes here */ + + enum obj_type obj_type; /* object type == OBJ_TYPE_STREAM */ + /* 3 unused bytes here */ struct stkctr stkctr[MAX_SESS_STKCTR]; /* content-aware stick counters */ -- 2.7.4 From bd4bf0c60a8b78555c050b4ffbd399a239de8be6 Mon Sep 17 00:00:00 2001 From: Baptiste Assmann Date: Mon, 21 Jan 2019 08:34:50 +0100 Subject: [PATCH 4/5] MINOR: action: new '(http-request|tcp-request content) do-resolve' action The 'do-resolve' action is an http-request or tcp-request content action which allows to run DNS resolution at run time in HAProxy. The name to be resolved can be picked up in the request sent by the client and the result of the resolution is stored in a variable. The time the resolution is being performed, the request is on pause. If the resolution can't provide a suitable result, then the variable will be empty. It's up to the admin to take decisions based on this statement (return 503 to prevent loops). Read carefully the documentation concerning this feature, to ensure your setup is secure and safe to be used in production. This patch creates a global counter to track various errors reported by the action 'do-resolve'. --- doc/configuration.txt | 57 ++ include/proto/action.h | 3 + include/proto/dns.h| 4 + include/types/action.h | 7 ++ include/types/stats.h | 1 + include/types/stream.h | 9 ++ src/action.c | 34 ++ src/dns.c | 301 + src/proto_http.c | 1 + src/stats.c| 3 + src/stream.c | 11 ++ 11 files changed, 431 insertions(+) diff --git a/doc/configuration.txt b/doc/configuration.txt index 357a67e..a36103a 100644 --- a/doc/configuration.txt +++ b/doc/configuration.txt @@ -4186,6 +4186,60 @@ http-request deny [deny_status ] [ { if | unless } ] those that can be overridden by the "errorfile" directive. No further "http-request" rules are evaluated. 
+http-request do-resolve(,,[ipv4,ipv6]) : + + This action performs a DNS resolution of the output of and stores + the result in the variable . It uses the DNS resolvers section + pointed by . + It is possible to choose a resolution preference using the optional + arguments 'ipv4' or 'ipv6'. + When performing the DNS resolution, the client side connection is on + pause waiting till the end of the resolution. + If an IP address can be found, it is stored into . If any kind of + error occurs, then is not set. + One can use this action to discover a server IP address at run time and + based on information found in the request (IE a Host header). + If th
Re: DNS Resolver Issues
> > A reload of the HAProxy instance also forces the instances to query all > records from the resolver. > > Hi Bruno, Actually, this is true only when you don't use the 'resolvers' section, or for parameters that don't benefit from the resolvers section, such as the 'addr' parameter here. Baptiste
Re: DNS Resolver Issues
Hi all, Thanks @Daniel for your very detailed report and @PiBa for your help. As PiBa pointed out, the issue is related to the 'addr' parameter. Currently, the only component in HAProxy which can benefit from dynamic resolution at run time is the 'server', which means any other object using a DNS hostname which does not resolve at start up may trigger an error, like you discovered with 'addr'. @PiBa, feel free to file a feature request on GitHub and Cc me there, so we can discuss this point. Baptiste On Sat, Mar 23, 2019 at 2:53 PM PiBa-NL wrote: > Hi Daniel, Baptiste, > > @Daniel, can you remove the 'addr loadbalancer-internal.xxx.yyy' from > the server check? It seems to me that that name is not being resolved by > the 'resolvers'. And even if it would, it would be kinda redundant as it > is in the example as it is the same as the servername.?. Not sure how > far below scenarios are all explained by this though.. > > @Baptiste, is it intentional that a wrong 'addr' dns name makes haproxy > fail to start despite having the supposedly never failing > 'default-server init-addr last,libc,none' ? Is it possibly a good > feature request to support re-resolving a dns name for the addr setting > as well ? > > Regards, > PiBa-NL (Pieter) > > On 21-3-2019 at 20:37, Daniel Schneller wrote: > > Hi! > > > > Thanks for the response. I had looked at the "hold" directives, but > since they all seem to have reasonable defaults, I did not touch them. > > I specified 10s explicitly, but it did not make a difference. > > > > I did some more tests, however, and it seems to have more to do with the > number of responses for the initial(?) DNS queries. > > Hopefully these three tables make sense and don't get mangled in the > mail. The "templated" > > proxy is defined via "server-template" with 3 "slots". The "regular" one > just as "server". > > > > > > Test 1: Start out with both "valid" and "broken" DNS entries. Then > comment out/add back > > one at a time as described in (1)-(5).
> > Each time after changing /etc/hosts, restart dnsmasq and check haproxy via hatop.
> > Haproxy started fresh once dnsmasq was set up to (1).
> >
> >            | state        state
> > /etc/hosts | regular      templated
> > -----------|--------------------------
> > (1) BRK    | UP/L7OK      DOWN/L4TOUT
> >     VALID  |              MAINT/resolution
> >            |              UP/L7OK
> >            |
> > (2) BRK    | DOWN/L4TOUT  DOWN/L4TOUT
> >     #VALID |              MAINT/resolution
> >            |              MAINT/resolution
> >            |
> > (3) #BRK   | UP/L7OK      UP/L7OK
> >     VALID  |              MAINT/resolution
> >            |              MAINT/resolution
> >            |
> > (4) BRK    | UP/L7OK      UP/L7OK
> >     VALID  |              DOWN/L4TOUT
> >            |              MAINT/resolution
> >            |
> > (5) BRK    | DOWN/L4TOUT  DOWN/L4TOUT
> >     #VALID |              MAINT/resolution
> >            |              MAINT/resolution
> >
> > This all looks normal and as expected. As soon as the "VALID" DNS entry is present, the
> > UP state follows within a few seconds.
> >
> > Test 2: Start out "valid only" (1) and proceed as described in (2)-(5), again restarting
> > dnsmasq each time, and haproxy reloaded after dnsmasq was set up to (1).
> >
> >            | state        state
> > /etc/hosts | regular      templated
> >            |
> > (1) #BRK   | UP/L7OK      MAINT/resolution
> >     VALID  |              MAINT/resolution
> >            |              UP/L7OK
> >            |
> > (2) BRK    | UP/L7OK      DOWN/L4TOUT
> >     VALID  |              MAINT/resolution
> >            |              UP/L7OK
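Stepping back from the test matrix: since only 'server'/'server-template' lines benefit from runtime resolution, a setup along these lines keeps startup resilient to DNS entries that are missing at boot. A sketch with invented names and addresses (the 'init-addr last,libc,none' fallback chain is the one discussed in the thread):

```
resolvers mydns
    nameserver ns1 10.0.0.2:53
    hold valid 10s

backend be_app
    default-server resolvers mydns init-addr last,libc,none check
    # three slots, filled and emptied as the DNS answers change
    server-template app 3 app.example.local:8080
    # note: an 'addr' parameter pointing at another hostname is resolved
    # by libc at startup only, not by the resolvers section
```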
Re: read async auth date from file
Hi Jeff, If the file is only stored on the filesystem and you can't put its content into HAProxy's memory at run time (using a map, as explained by Jarno), then you may want to use SPOE, so that the blocking file I/O is done in a process running outside of HAProxy. There is an SPOA (agent) example in HAProxy's source code, written in C. If you want an SPOA in another language, I would say "stay tuned" :) Baptiste On Sun, Mar 3, 2019 at 9:20 AM Jeff wrote: > I need to add an authorization header for a target server, e.g. > http-request add-header Authorization Bearer\ MYTOKENDATA > > where MYTOKENDATA is read from a file for each proxy message. > (MYTOKENDATA is written asynchronously to the file by another > process.) > > How to do this in HAProxy? > > thanks, > Jeff
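For completeness, the map approach mentioned above avoids file I/O on the request path entirely: keep the token in a map loaded at startup, and let the external writer push updates over the stats socket. A sketch where the map path, the fixed key "bearer", and the socket path are all assumptions:

```
# haproxy.cfg
global
    stats socket /var/run/haproxy.sock mode 600 level admin

frontend fe_api
    bind :8080
    # look up the current token under the fixed key "bearer";
    # /etc/haproxy/token.map must contain "bearer <initial-token>" at startup
    http-request set-header Authorization "Bearer %[str(bearer),map(/etc/haproxy/token.map)]"
    default_backend be_target

# Shell: the token writer refreshes the in-memory map without a reload:
#   echo "set map /etc/haproxy/token.map bearer NEWTOKENDATA" \
#       | socat stdio /var/run/haproxy.sock
```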
Re: Does anyone *really* use 51d or WURFL ?
> > One could argue that 1) not building, 2) not working, and 3) > not being maintained doesn't exactly qualify as "stable". So maybe in the > end I'll do it there as well. And I'm sure it will not affect distros! > But I'm open to opinions on the subject. > > This seems to go against the #1 quality of HAProxy: reliability... So you have my +1 :) Baptiste
Re: [PATCH] runtime do-resolve http action
On Fri, Jan 25, 2019 at 3:28 PM Willy Tarreau wrote: > On Fri, Jan 25, 2019 at 03:09:52PM +0100, Baptiste wrote: > > Hi Willy, > > > > Thanks for the review!!! > > I fixed most of the problems, but I have 3 points I'd like to discuss: > > > > > + If an IP address can be found, it is stored into . If any kind > of > > > > + error occurs, then is not set. > > > > > > Just to be sure, it is not set or not modified ? I guess the latter, which > > > is fine. > > > > > > > Yes, not set. So '-m found' can be used. > > So you actually *remove* the variable if you don't get a response, > that's it ? I would have possibly found it more convenient to just > stay on the not modified approach so that you could possibly chain > multiple do-resolve actions and hope that at least one of them could > pick the response. Think about environments where you have multiple > sets of resolvers (internal, admin, internet) and for unknown names > you don't know which one to ask so you ask all of them with 3 > different rules. > The code leaves the variable untouched. I just call vars_set_by_name() if an IP is returned. http-request do-resolve(txn.myip,internal_dns,ipv4) hdr(Host),lower http-request do-resolve(txn.myip,external_dns,ipv4) hdr(Host),lower unless { var(txn.myip) -m found } should work.
> > You could but then it'd be better to perform some form of rate-limiting. > It is possible that the same reason causes the function to fail in loops > for all requests and it's not very cool to spam logs with info that are > already present in the request's failure anyway. In general an alert log > is made so that someone can do something about it. What could be done > however is to emit this error once if it's a matter of config, and to > increment a counter reported in "show info". We already do this at some > places, I just don't remember which ones :-) > Ok, I set up a global counter to track those errors. I called the field INF_DORESOLVE_ERRORS and the global variable dns_doresolve_errors. A "show info" now shows the following line: DoResolveErrors: 0 Let me know if this is OK for you. Also, I am planning to allow this action at the "tcp-request content" layer, to be able to execute it using SNI information. Baptiste
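Putting the pieces of this thread together, a hypothetical end-to-end configuration for the action could look like the following. Only the do-resolve and '-m found' syntax comes from the thread itself; the resolver address, variable name, and backend layout are invented:

```
resolvers internal_dns
    nameserver ns1 10.0.0.2:53

frontend fe_main
    bind :8080
    # resolve the Host header at run time and store the result
    http-request do-resolve(txn.dstip,internal_dns,ipv4) hdr(Host),lower
    # the variable is only set on success, so guard against loops/failures
    http-request deny deny_status 503 unless { var(txn.dstip) -m found }
    http-request set-dst var(txn.dstip)
    default_backend be_dynamic

backend be_dynamic
    # connect to whatever destination was set on the stream
    server target 0.0.0.0:0
```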
Re: Tune HAProxy in front of a large k8s cluster
On Wed, Feb 20, 2019 at 3:14 PM Joao Morais wrote: > > > > On Feb 20, 2019, at 03:30, Baptiste wrote: > > > > Hi Joao, > > > > I do have a question for you about your ingress controller design and > the "chained" frontends, summarized below: > > * The first frontend is in tcp mode binding :443, inspecting SNI and > doing a triage; > > There is also an ssl-passthrough config - from the triage frontend > straight to a tcp backend. > > * The second frontend is binding a unix socket with ca-file (tls > authentication); > > * The last frontend is binding another unix socket, doing ssl-offload > but without ca-file. > > > > What feature is missing in HAProxy to allow merging these 3 frontends > into a single one? > > I understand that the ability to do ssl deciphering and ssl passthrough > on a single bind line is one of them. Is there anything else we could > improve? > > I wonder if crt-list would be useful in your case: > https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.1-crt-list > > > Hi Baptiste, I'm changing the approach of the frontend creation - if the > user configuration just needs one, this one will listen on :443 without needing to > chain another one. Regarding switching to more frontends - or at least more > bind lines in the same frontend - and creating the mode-tcp one, here are > the current rules: > > * conflict on timeout client - and perhaps on other frontend configs - > distinct frontends will be created for each one > * if one really wants to use a certificate that doesn't match its domain - > crt-list sounds to solve this > * tls auth (bind with ca-file) and no tls auth - I don't want to mix them > in the same frontend because of security - tls auth uses sni, no tls auth > uses the host header > * ssl-passthrough as you have mentioned > > ~jm > > Hi Joao, I am not worried about having many frontends in a single HAProxy configuration, I am more worried about "chaining" frontends, for performance reasons.
So having one frontend per app because each uses different settings is fine, from my point of view, unless you must chain a TCP frontend to route traffic to the application frontend based on SNI. I don't understand the point about TLS auth: crt-list allows you to load multiple certificates and to define custom parameters for each of them, including ca-file. It's a powerful feature. What I am trying to figure out is what the recommendation would be for a high-performance deployment of your ingress controller. Baptiste
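As an illustration of the crt-list suggestion, per-certificate SSL settings (including client authentication) can live on a single bind line. All file names, hostnames, and backends below are hypothetical:

```
# /etc/haproxy/certs.list
# <pem file> [per-certificate ssl options] <sni filters>
api.example.com.pem [ca-file /etc/haproxy/clients-ca.pem verify required] api.example.com
www.example.com.pem www.example.com

# haproxy.cfg
frontend fe_https
    bind :443 ssl crt-list /etc/haproxy/certs.list
    use_backend be_api if { ssl_fc_sni api.example.com }
    default_backend be_www
```

This covers the "TLS auth" and "no TLS auth" cases in one frontend; the ssl-passthrough case would still need a separate tcp-mode path.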
Re: Tune HAProxy in front of a large k8s cluster
On Wed, Feb 20, 2019 at 3:25 PM Joao Morais wrote: > > > > On Feb 20, 2019, at 02:51, Igor Cicimov < > ig...@encompasscorporation.com> wrote: > > > > > > On Wed, 20 Feb 2019 3:39 am Joao Morais > Hi Willy, > > > > > On Feb 19, 2019, at 01:55, Willy Tarreau wrote: > > > > > > use_backend foo if { var(req.host) ssl:www.example.com } > > > > > This is a nice trick that I'm planning to use with dynamic use_backend. > I need to concat host (sometimes ssl_fc_sni) and path. The question is: how > do I concatenate two strings? > > > > Something like this: > > http-request set-header X-Concat > %[req.fhdr(Authorization),word(3,.)]_%[src] > > > Hi Igor, this is almost the same workaround I'm using - the difference is > that I'm moving the result to a req.var and removing the header after that. > Wondering if 1.8 has a better option > > ~jm > > Well, set-var should do the trick, unless I missed something. Baptiste
Re: Idea for the Wiki
On Tue, Feb 19, 2019 at 9:36 AM Willy Tarreau wrote: > Hi Baptiste, > > On Wed, Feb 06, 2019 at 03:55:37PM +0100, Baptiste wrote: > > I think one of the most important pieces is guidelines on integrating > > HAProxy with third parties, IE: Observing HAProxy with influxdb, HAProxy as > > a Kubernetes External Load-balancer, Service discovery with consul, and so > > on. > > I don't really know where to put those in the summary you proposed, but > > that's what I want to see in such a wiki :) > > For me it falls perfectly into the advanced use cases which aim at > covering interfacing with third-party products. > > Since I got no objection to the proposed plan, I've just created the > wiki's home page, and copy-pasted the proposed plan into a temporary > page that will serve as a guide about what can be worked on. I'll try > to devote a bit of time to this, those who always dreamed about > revamping the old architecture manual are welcome if they want to work > on this. My hope is that we can quickly delete the architecture.txt > file from the source repository :-) > > Cheers, > Willy > I just cloned the repo :) How should we organize directories and pages? E.g. for TLS offloading: /common/acceleration/tls_offloading.md ? I think it's quite important to agree on it now, because the folders will be part of the URL indexed by Google :) I am not a fan of the "advanced use cases" title, but we can brainstorm this later. And I wonder how/where we should put integration with third parties (kubernetes, docker, consul, influxdb, grafana, prometheus, etc...). I would like to have a page for each of these items. This will also help third-party maintainers to push their integration documentation into this wiki, even if the page is just a link to their own documentation. Baptiste
Re: Tune HAProxy in front of a large k8s cluster
Hi Joao, I do have a question for you about your ingress controller design and the "chained" frontends, summarized below:

* The first frontend is in tcp mode, binding :443, inspecting the SNI and doing a triage; there is also an ssl-passthrough path, from the triage frontend straight to a tcp backend.
* The second frontend binds a unix socket with a ca-file (TLS client authentication).
* The last frontend binds another unix socket, doing ssl offloading but without a ca-file.

What feature is missing in HAProxy to allow collapsing these three frontends into a single one? I understand that the ability to do ssl deciphering and ssl passthrough on a single bind line is one of them. Is there anything else we could improve? I wonder if crt-list would be useful in your case: https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#5.1-crt-list Baptiste
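To make the crt-list suggestion concrete, here is a minimal sketch. The file paths, hostnames and ca-file are hypothetical, and whether ca-file/verify are accepted as per-entry options inside the crt-list depends on the HAProxy version, so treat this as an illustration of the idea rather than a drop-in config:

```
# /etc/haproxy/crt-list.txt: one certificate per line, with optional
# [ssl options] and SNI filters
/etc/ssl/private/auth.pem [ca-file /etc/ssl/clients-ca.pem verify required] auth.example.com
/etc/ssl/private/www.pem www.example.com

frontend fe_https
    bind :443 ssl crt-list /etc/haproxy/crt-list.txt
```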
Re: Tune HAProxy in front of a large k8s cluster
I would use a variable instead of a header. Note that the <var> argument of concat() must be a full variable name (scope included), so the path has to be stored in a variable first:

http-request set-var(req.path) path
http-request set-var(req.myvar) req.hdr(host),concat(,req.path)

Baptiste
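To show where such a variable typically ends up, here is a hedged routing sketch; the map file and default backend name are illustrative, not from the thread. Since concat()'s middle argument must be a variable name, the path is captured into a variable first:

```
http-request set-var(req.path) path
http-request set-var(req.myvar) req.hdr(host),concat(,req.path)
use_backend %[var(req.myvar),map_beg(/etc/haproxy/routes.map,be_default)]
```

The map file would contain host+path prefixes mapped to backend names, one pair per line.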
Re: %[] in use-server directives
On Tue, Feb 19, 2019 at 9:54 PM Bruno Henc wrote: > Hi, > > > The following links should be able to help you out: > > > https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/#dynamically-scaling-backend-servers > > > https://www.haproxy.com/blog/dynamic-scaling-for-microservices-with-runtime-api/#runtime-api > > You might need to build a development version of HAProxy to take > advantage of the latest features. > > Hi Bruno, Actually, those features are stable! Baptiste
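For context, the dynamic scaling described in those articles is driven by commands on HAProxy's stats socket. A minimal sketch, assuming the socket is at /var/run/haproxy.sock and a backend be_app with a server slot srv1 (hypothetical names), sent with socat:

```
echo "set server be_app/srv1 addr 10.0.0.5 port 8080" | socat stdio /var/run/haproxy.sock
echo "enable server be_app/srv1" | socat stdio /var/run/haproxy.sock
```

`set server` and `enable server` are standard stats-socket commands in modern releases, which is what makes filling server-template slots without a reload possible.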
Re: Allowing more codes for `errorfile` (like 404) (that can be later re-used with `http-request deny deny_status 404`)
> > Again this use-case is geared more towards CDN custom error pages or > service routers. > > I would add a +1 to Ciprian on the "service router" use case. I also see API gateways returning 404s when a host or URL path is not known. Since HAProxy can be used in both of these cases, from my point of view it would make sense to make it return 404 out of the box (without a hack). Baptiste
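For completeness, on versions where 404 is an accepted status for these directives (it was not in the historical deny_status list, which is what prompted this thread; recent releases, around 2.2, lifted the restriction), the pairing looks like this. The backend name, file path and ACL are illustrative:

```
backend be_api
    errorfile 404 /etc/haproxy/errors/404.http
    http-request deny deny_status 404 if !{ path_beg /api/ }
```

The .http file must contain a complete raw HTTP response, status line and headers included.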
Re: Anyone heard about DPDK?
Hi, HAProxy requires a TCP stack underneath it; DPDK by itself does not provide one. Baptiste
Re: Using server-template for DNS resolution
On Fri, Feb 8, 2019 at 6:09 AM Igor Cicimov wrote: > On Fri, Feb 8, 2019 at 2:29 PM Igor Cicimov < > ig...@encompasscorporation.com> wrote: > >> Hi, >> >> I have a Jetty frontend exposed for couple of ActiveMQ servers behind SSL >> terminating Haproxy-1.8.18. They share same storage and state via lock file >> and there is only one active AMQ at any given time. I'm testing this now >> with dynamic backend using Consul DNS resolution: >> >> # dig +short @127.0.0.1 -p 8600 activemq.service.consul >> 10.140.4.122 >> 10.140.3.171 >> >> # dig +short @127.0.0.1 -p 8600 _activemq._tcp.service.consul SRV >> 1 1 61616 ip-10-140-4-122.node.dc1.consul. >> 1 1 61616 ip-10-140-3-171.node.dc1.consul. >> >> The backends status, the current "master": >> >> root@ip-10-140-3-171:~/configuration-management# netstat -tuplen | grep >> java >> tcp0 0 0.0.0.0:81610.0.0.0:* >> LISTEN 5031374919617256/java >> tcp0 0 0.0.0.0:6161 0.0.0.0:* >> LISTEN 5031374919317256/java >> >> and the "slave": >> >> root@ip-10-140-4-122:~# netstat -tuplen | grep java >> >> So the service ports are not available on the second one. >> >> This is the relevant part of the HAP config that I think might be of >> interest: >> >> global >> server-state-base /var/lib/haproxy >> server-state-file hap_state >> >> defaults >> load-server-state-from-file global >> default-server init-addrlast,libc,none >> >> listen amq >> bind ... ssl crt ... 
>> mode http >> >> option prefer-last-server >> >> # when this is on the backend is down >> #option tcp-check >> >> default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s >> maxconn 25 maxqueue 256 weight 100 >> >> # working but both show as up >> server-template amqs 2 activemq.service.consul:8161 check >> >> # working old static setup >> #server ip-10-140-3-171 10.140.3.171:8161 check >> #server ip-10-140-4-122 10.140.4.122:8161 check >> >> This is working but the thing is I see both servers as UP in the HAP >> console: >> [image: amqs.png] >> Is this normal for this kind of setup or I'm doing something wrong? >> >> Another observation, when I have tcp check enabled like: >> >> option tcp-check >> >> the way I had it with the static lines like: >> >> server ip-10-140-3-171 10.140.3.171:8161 check >> server ip-10-140-4-122 10.140.4.122:8161 check >> >> then both servers show as down. >> Thanks in advance for any kind of input. >> Igor >> >> Ok, the state has changed now, I have correct state on one haproxy: > > [image: amqs_hap1.png] > but on the second the whole backend is down: > > [image: amqs_hap2.png] > I confirmed via telnet that I can connect to port 8161 to the running amq > server from both haproxy servers. > > Hi Igor, You're using the libc resolver function at startup time to resolve your backend, this is not recommended integration with Consul. You will find some good explanations in this blog article: https://www.haproxy.com/fr/blog/haproxy-and-consul-with-dns-for-service-discovery/ Basically, you should first create a "resolvers" section, in order to allow HAProxy to perform DNS resolution at runtime too. 
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192

Then, you need to adjust your server-template line, like this:

    server-template amqs 10 _activemq._tcp.service.consul resolvers consul resolve-prefer ipv4 check

In the example above, I am using SRV records on purpose, because HAProxy supports them and will use all the information available in the response to update each server's IP, weight and port. I hope this will help you. Baptiste
Re: Opinions about DoH (=DNS over HTTPS) as resolver for HAProxy
Hi there, I don't have much of an opinion about this one :) And I have not met anybody needing such a solution so far. From an implementation point of view, as far as I understand, the idea is to write/read a DNS payload to/from an HTTP request. We already have the primitives to do this. The "most" complicated part would be to be able to link the resolver scheduler to a backend. (maybe we could use this trick to do DNS over TCP too...) I will follow the thread on GitHub and may jump in if anybody wants to implement it :) Baptiste On Mon, Feb 4, 2019 at 10:46 PM Aleksandar Lazic wrote: > Hi Lukas. > On 04.02.2019 at 21:39, Lukas Tribus wrote: > > Hello, > > > > On Mon, 4 Feb 2019 at 12:14, Aleksandar Lazic > wrote: > >> > >> Hi. > >> > >> I have just opened a new Issue about DoH for resolving. > >> > >> https://github.com/haproxy/haproxy/issues/33 > >> > >> As I know that this is a major change in the Infrastructure I would > like to hear what you think about this suggestion. > >> > >> My opinion was at the beginning against this change as there was only > some big provider but now there are some tutorials and other providers for > DoH I think now it's a good Idea. > > > > Frankly I don't see a real use-case. DoH is interesting for clients > > roaming around networks that don't have a local DNS resolver or with a > > completely untrusted or compromised connectivity to their DNS server. > > A haproxy instance on the other hand is usually something installed in > > a stable datacenter, often with a local resolver, and it is resolving > > names you configured with destination IP's that are visible to an > > attacker anyway. > > A possible use-case is: > > Let's say you have a hybrid cloud setup (on-prem, AWS, Azure, ...) and the > networks are connected via an unsecured L2/L3 internet connectivity.
> > The networks are routed and the HAProxy VM/Container must resolve an > internal Backend via DNS but some regulations does not allow to send > plain DNS via the internet. > > Internal APP <-> INTERNET <-> HAProxy Pub Cloud <-> Client > || > Internal DNS <-> DoH<-> > > The Solution is to use a DoH on-prem which resolves the internal Backend > via classic DNS internally and send the answer back to HAProxy via HTTPS. > > Such a Setup helps to keep some VPN/IPSec setups out of the game. > I hope I have described the use-case in understandable words. > > > The DNS implementation is still lacking an important feature (TCP > > mode), which Baptiste does not really have time to work on as far as I > > can tell and would actually address a problem for certain huge > > deployments. At the same time I'm not sure I can up with a *real* > > use-case for DoH in haproxy - and there is always the possibility to > > install a local UDP to DoH resolver. Also a lot of setups nowadays are > > either systemd or docker managed, both of which ship their own > > resolver anyway (providing a local UDP/TCP service). > > Ack. It's not a small part, imho. > > On this wiki are some DOH Tools which show how DoH could be implemented. > > https://github.com/curl/curl/wiki/DNS-over-HTTPS > > > I'm not sure what the complexity of DoH is. I assume it's non trivial > > to do in a non-blocking way, without question more complicated than > > TCP mode. > > I don't agree on this as I think there are more or less equal hard to > implement. But I must say I'm only a "sometimes" Developer so I'm sure > I miss all the detail which make the difference. > > > So I'm not a fan of pushing DoH into haproxy. Especially if the > > use-case is unclear. But those are just my two cents. > > Thank you. > > > Also CC'ing Baptiste. > > > > > > cheers, > > lukas > > Regards > aleks >
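The implementation sketch Baptiste describes above, writing/reading a DNS payload to/from an HTTP request, is easy to picture. Below is a small, self-contained Python illustration (not HAProxy code; the resolver URL is made up) that builds an RFC 1035 wire-format query and the RFC 8484 GET-style URL that would carry it:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 DNS query message (the payload DoH carries)."""
    # id=0 (RFC 8484 recommends 0 for cache friendliness), RD=1, one question
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_url(base: str, name: str) -> str:
    """RFC 8484 GET form: base64url-encode the message and strip the padding."""
    payload = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=")
    return f"{base}?dns={payload.decode('ascii')}"

print(len(build_dns_query("example.com")))  # -> 29
print(doh_url("https://dns.example/dns-query", "example.com"))
```

The POST form of RFC 8484 would instead send the same bytes as a request body with Content-Type application/dns-message; either way, the DNS message itself is unchanged, which is why the existing resolver code could in principle be reused behind an HTTP transport.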