Best way to maintain ACL configuration in HAProxy
Hi all, I'm looking for a way to maintain ACLs in the HAProxy configuration. I would like to maintain a list of domain names that are assigned to a specific backend, for example 100 domains on Backend01 and 200 on Backend02. I want to manage each backend's list dynamically, so that I can remove a domain from one list and move it to another backend. All other domains would use the default backend. Can I include a file in the HAProxy configuration? I was thinking of having one configuration file per backend, with something like this:

acl is_dom01 hdr_dom(host) -i dom01.fr
acl is_dom02 hdr_dom(host) -i dom02.fr
acl is_dom03 hdr_dom(host) -i dom03.fr
use_backend Backend01 if is_dom01 || is_dom02 || is_dom03

Thanks for your help.

Benoit
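One approach worth considering (a sketch, not tested against this setup — the file paths and backend names below are hypothetical): HAProxy can load ACL patterns from an external file with the -f flag, so each backend gets its own domain-list file, and moving a domain between backends is just moving a line from one file to the other, followed by a reload (the files are read at startup).

```
# Hypothetical pattern files, one domain per line:
#   /etc/haproxy/backend01.domains   contains  dom01.fr, dom02.fr, ...
#   /etc/haproxy/backend02.domains   contains  dom50.fr, dom51.fr, ...

frontend public
    bind :80
    # -f loads all patterns for the ACL from a file
    acl is_backend01 hdr_dom(host) -i -f /etc/haproxy/backend01.domains
    acl is_backend02 hdr_dom(host) -i -f /etc/haproxy/backend02.domains
    use_backend Backend01 if is_backend01
    use_backend Backend02 if is_backend02
    # every other domain falls through to the default
    default_backend DefaultBackend
```

Any domain not listed in either file falls through to default_backend, which matches the behaviour described above.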
How to explain 503 and 504 errors
Hi all. I would like to understand why I get so many 503 and 504 errors from HAProxy. According to the documentation:

503: no server was available to handle the request, or the response to monitoring requests which match the "monitor fail" condition
504: the response timeout struck before the server responded

In my case there are 6 web servers running apache2, and they are available. But 503 errors appear in the log. Same for the 504s: I can't explain why so many of my websites get this error. Here is an extract from haproxy.log (debug level):

Dec 29 18:54:18 127.0.0.1 haproxy[24349]: 66.249.72.182:47472 [29/Dec/2011:18:53:48.816] public webserver/http-16 0/0/0/-1/3 504 270 - - sHNN 517/517/46/6/0 0/0 {www.webpedigrees.com} GET /inbreeding.php?nid=446368nidp=nidm=nbgen=12 HTTP/1.1
Dec 29 18:54:47 127.0.0.1 haproxy[24349]: 94.142.131.19:52345 [29/Dec/2011:18:54:46.191] public webserver/http-19 942/0/-1/-1/943 503 288 - - CCNN 504/503/41/5/0 0/0 {www.cordeaucou.com} POST /photos_mariage_zoom_comment.php?idp=374image=336x=318tr=v HTTP/1.0
Dec 29 18:55:16 127.0.0.1 haproxy[24349]: 207.46.199.52:49370 [29/Dec/2011:18:55:16.121] public webserver/http-14 0/0/0/20/20 503 410 - - --NI 465/463/35/13/0 0/0 {judaicultures.info} GET /les-arts-judaiques/arts-artistes-artisans/evenements-litteraires-parutions/Parution-du-Midrash-Rabba-sur-l?debut_article_rubrique_numerotes=20 HTTP/1.0
Dec 29 18:55:39 127.0.0.1 haproxy[24349]: 66.249.72.225:36265 [29/Dec/2011:18:55:09.550] public webserver/http-17 0/0/4/-1/30006 504 270 - - sHNN 488/488/43/12/0 0/0 {www.webpedigrees.com} GET /inbreeding.php?nid=446374nidp=nidm=nbgen=11 HTTP/1.1
Dec 29 18:55:51 127.0.0.1 haproxy[24349]: 94.142.131.19:49698 [29/Dec/2011:18:55:50.811] public webserver/http-19 942/0/-1/-1/943 503 288 - - CCNN 423/422/43/3/0 0/0 {www.jeunes-sapeurs-pompiers.fr} POST /index.php?option=com_joomlaboardItemid=43func=post HTTP/1.0
Dec 29 18:56:45 127.0.0.1 haproxy[24349]: 213.111.38.127:60110 [29/Dec/2011:18:56:39.165] public webserver/http-17 6585/0/-1/-1/6585 503 288 - - CCVN 491/491/49/6/0 0/0 {demaistre.fr} GET /gallery2/d/49262-2/20110917_163242_IMG_2776.jpg HTTP/1.1
Dec 29 18:57:28 127.0.0.1 haproxy[24349]: 178.33.204.50:38933 [29/Dec/2011:18:56:58.380] public webserver/http-14 0/0/0/-1/3 504 270 - - sHNN 456/448/28/8/0 0/0 {meteor-center.com} POST /blog/wp-cron.php?doing_wp_cron HTTP/1.0
Dec 29 19:12:04 127.0.0.1 haproxy[24349]: 193.199.4.148:52226 [29/Dec/2011:19:12:03.482] public webserver/http-18 1189/0/-1/-1/1189 503 288 - - CCVN 536/536/41/9/0 0/0 {www.playzgame.com} GET /online-flash-games/picture_70x60/scnclrx_70x60.jpg HTTP/1.1
Dec 29 19:15:18 127.0.0.1 haproxy[24349]: 41.110.231.242:1613 [29/Dec/2011:19:14:45.657] public webserver/http-19 3183/0/2/-1/33186 504 270 - - sHVN 453/453/44/2/0 0/0 {www.thenia.net} POST /phpwebgallery/picture.php?/2247/category/28 HTTP/1.1

Most of the 503 errors carry the termination flags CCNN:
- C: the TCP session was unexpectedly aborted by the client.
- C: the proxy was waiting for the CONNECTION to establish on the server. The server might at most have noticed a connection attempt.
- N: the client provided NO cookie. This is usually the case for new visitors, so counting occurrences of this flag in the logs generally indicates a valid trend for site traffic.
- N: NO cookie was provided by the server, and none was inserted either.

The 504 errors carry sHNN:
- s: the server-side timeout expired while waiting for the server to send or receive data.
- H: the proxy was waiting for complete, valid response HEADERS from the server (HTTP only).
- N: the client provided NO cookie.
- N: NO cookie was provided by the server, and none was inserted either.
For example, this URL always gets a 504 error: http://www.webpedigrees.com/inbreeding.php?nid=446594nidp=nidm=nbgen=11 I don't understand why this URL specifically (in this case). Does anyone have time to explain how to solve this, or how to find where the problem might be? It's now the only remaining problem on my server and I can't solve it myself :( Thank you very much! Cheers, Benoît Georgelin
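As a quick way to see which termination states dominate, the 4-character session-state flags (CCNN, sHNN, etc.) can be counted per status code with a short script. A minimal sketch, assuming the default HTTP log format shown in the extract above (status code, bytes, two captured-cookie fields, then the flags):

```python
import re
from collections import Counter

def count_termination_flags(lines):
    """Count HAProxy session-state flags (e.g. CCNN, sHNN) per status code.

    Assumes the default HTTP log format: the status code is followed by the
    byte count, two captured-cookie fields ("- -" here), then the 4-char flags.
    """
    # status_code  bytes  cookie  cookie  flags
    pat = re.compile(r"\s(\d{3})\s+\d+\s+\S+\s+\S+\s+([a-zA-Z-]{4})\s")
    counts = Counter()
    for line in lines:
        m = pat.search(line)
        if m:
            status, flags = m.groups()
            counts[(status, flags)] += 1
    return counts

# Example with two of the log lines above:
sample = [
    "Dec 29 18:54:18 127.0.0.1 haproxy[24349]: 66.249.72.182:47472 "
    "[29/Dec/2011:18:53:48.816] public webserver/http-16 0/0/0/-1/3 "
    "504 270 - - sHNN 517/517/46/6/0 0/0 {www.webpedigrees.com} "
    "GET /inbreeding.php HTTP/1.1",
    "Dec 29 18:54:47 127.0.0.1 haproxy[24349]: 94.142.131.19:52345 "
    "[29/Dec/2011:18:54:46.191] public webserver/http-19 942/0/-1/-1/943 "
    "503 288 - - CCNN 504/503/41/5/0 0/0 {www.cordeaucou.com} "
    "POST /photos.php HTTP/1.0",
]
for (status, flags), n in count_termination_flags(sample).items():
    print(status, flags, n)
```

Feeding it the full log would show whether CCNN (client aborts while waiting for a server connection) or sHNN (server header timeout) dominates, which points at queueing versus slow-application problems respectively.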
Re: Haproxy 502 errors, all the time on specific sites or backend
Thanks Cyril for these elements. Here are the modules enabled in apache2: actions alias auth_basic auth_mysql auth_pam authn_file authz_default authz_groupfile authz_host authz_user autoindex cache cgi deflate dir env expires headers include mime mod-evasive negotiation php5 python rewrite rpaf setenvif ssl status. Maybe one of them is causing trouble. I will look into the Content-Length header.

Regards,
Benoît Georgelin
Web 4 all, non-profit web host
+33 977 218 005 / +1 514 463 7255
benoit.georgelin@web 4 all.fr
To help protect the environment, please print this email only if necessary.

- Original message -
From: Cyril Bonté cyril.bo...@free.fr
To: Benoit GEORGELIN (web4all) benoit.george...@web4all.fr
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011, 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

Hi Benoit,

On Thursday, November 3, 2011 at 14:46:10, Benoit GEORGELIN wrote:
Hi! My name is Benoît and I'm part of an associative project that provides web hosting. We are using HAProxy and we have a lot of problems with 502 errors :( I would like to know how to really debug this and find solutions :) There are some cases in the mailing list archives, but I would appreciate it if someone could walk me through a real case on our infrastructure.

My first observations, if they can help someone target the issue: in your servers' responses there is no Content-Length header, and this can cause trouble. 502 errors occur when asking for compressed data:

curl -si -H "Accept-Encoding: gzip,deflate" http://sandka.org/portfolio/
=> HTTP/1.0 502 Bad Gateway
curl -si http://sandka.org/portfolio/
=> results in a truncated page without a Content-Length header

We'll have to find out why your backends don't provide a Content-Length header (and what happens with compression, which should be sent in chunks).
Details:
HAProxy Stable 1.4.18
OS: Debian Lenny
Configuration file:

global
    log 127.0.0.1 local0 notice #debug
    maxconn 2 # count about 1 GB per 2 connections
    ulimit-n 40046
    tune.bufsize 65536 # Necessary for lots of CMS pages like Prestashop :(
    tune.maxrewrite 1024
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #nbproc 4
    #debug
    #quiet

defaults
    log global
    mode http
    retries 3 # 2 -> 3 on 06/10/2011
    # maxconn 19500 # Should be slightly smaller than global.maxconn.

    ## OPTIONS ##
    option dontlognull
    option abortonclose
    #option redispatch # Disabled 06/10/2011 because balance is in source mode, not round-robin
    # option tcpka
    #option log-separate-errors
    #option logasap

    ## TIMEOUT ##
    timeout client 30s          #1m 40s Client and server timeout must match the longest
    timeout server 30s          #1m 40s time we may wait for a response from the server.
    timeout queue 30s           #1m 40s Don't queue requests too long if saturated.
    timeout connect 5s          #10s 5s There's no reason to change this one.
    timeout http-request 5s     #10s 5s A complete request may never take that long
    timeout http-keep-alive 10s
    timeout check 10s           #10s

### FRONTEND PUBLIC BEGIN ###
frontend public
    bind 123.456.789.123:80
    default_backend webserver

    ## OPTIONS ##
    option dontlognull
    #option httpclose
    option httplog
    option http-server-close
    # option dontlog-normal
    # URL-based rules — all commented out on 21/10/2011
    # log the name of the virtual server
    capture request header Host len 60
### FRONTEND PUBLIC END ###

### BACKEND WEBSERVER BEGIN ###
backend webserver
    balance source # Re-enabled 06/10/2011
    #balance roundrobin # Disabled 06/10/2011

    ## OPTIONS ##
    option httpchk
    option httplog
    option forwardfor
    #option httpclose # Disabled 06/10/2011
    option http-server-close
    option http-pretend-keepalive
    retries 5
    cookie SERVERID insert indirect

    # Detect an ApacheKiller-like attack
    acl killerapache hdr_cnt(Range) gt 10
    # Clean up the request
    reqidel ^Range if killerapache

    server http-A 192.168.0.1:80 cookie http-A check inter 5000
    server http-B 192.168.1.1:80 cookie http-B check inter 5000
    server http-C 192.168.2.1:80 cookie http-C check inter 5000
    server http-D 192.168.3.1:80 cookie http-D check inter 5000
    server http-E 192.168.4.1:80 cookie http-E check inter 5000

    # Every header should end with a colon followed by one space.
    reqideny ^[^:\ ]*[\ ]*$
    # block Apache chunk exploit
    reqideny ^Transfer-Encoding:[\ ]*chunked
    reqideny ^Host:\ apache-
    # block annoying worms that fill the logs...
    reqideny ^[^:\ ]*\ .*(\.|%2e)(\.|%2e)(%2f|%5c|/| )
    reqideny ^[^:\ ]*\ ([^\ ]*\ [^\ ]*\ |.*%00)
    reqideny ^[^:\ ]*\ .*script
    reqideny ^[^:\ ]*\ .*/(root\.exe\?|cmd\.exe\?|default\.ida\?)
    # allow other syntactically valid requests, and block any other method
    reqipass ^(GET|POST|HEAD|OPTIONS)\ /.*\ HTTP/1\.[01]$
    reqipass ^OPTIONS\ \\*\ HTTP/1\.[01]$

    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc
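Cyril's curl-based diagnosis (502 on compressed responses, truncated pages without Content-Length) comes down to how the response body's end is delimited: by a Content-Length header, by chunked encoding, or only by the connection closing, in which case truncation is undetectable. A minimal sketch of classifying that framing, assuming nothing about the real backends — it spins up a throwaway local server that, like the misbehaving ones, omits Content-Length:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoLengthHandler(BaseHTTPRequestHandler):
    """Throwaway server mimicking a backend that omits Content-Length."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Deliberately no Content-Length: the client only learns the body
        # ended when the connection closes, so a truncated body looks complete.
        self.end_headers()
        self.wfile.write(b"<html>truncated?</html>")

    def log_message(self, fmt, *args):  # silence request logging
        pass

def response_framing(host, port, path="/"):
    """Classify how the response body length is delimited."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("GET", path, headers={"Accept-Encoding": "gzip,deflate"})
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    if resp.getheader("Content-Length") is not None:
        return "content-length", len(body)
    if "chunked" in (resp.getheader("Transfer-Encoding") or "").lower():
        return "chunked", len(body)
    return "close-delimited", len(body)

server = HTTPServer(("127.0.0.1", 0), NoLengthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
framing, size = response_framing("127.0.0.1", server.server_port)
print(framing, size)
server.shutdown()
```

Pointing response_framing at a real backend (instead of the local stand-in) shows which of the three cases each site falls into; the "close-delimited" case is the fragile one that interacts badly with mod_deflate behind a proxy.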
Re: Haproxy 502 errors, all the time on specific sites or backend
Hmm, very interesting: I disabled mod_deflate and now it's working like a charm :( Do you know why?

Regards,
Benoît Georgelin

- Original message -
From: Cyril Bonté cyril.bo...@free.fr
To: Benoit GEORGELIN (web4all) benoit.george...@web4all.fr
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011, 10:32:06
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
[...]
Re: Haproxy 502 errors, all the time on specific sites or backend
It's working better, but now I have some blank pages.

Regards,

- Original message -
From: Benoit GEORGELIN (web4all) benoit.george...@web4all.fr
To: Cyril Bonté cyril.bo...@free.fr
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011, 10:47:57
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend
[...]
Re: Haproxy 502 errors, all the time on specific sites or backend
Can you give me more details about your analysis (examples)? I will try to understand better what's happening. Is it the response body that is incomplete, or only the headers? Thanks.

Regards,
Benoît Georgelin

- Original message -
From: Cyril Bonté cyril.bo...@free.fr
To: Benoit GEORGELIN (web4all) benoit.george...@web4all.fr
Cc: haproxy@formilux.org
Sent: Thursday, November 3, 2011, 10:54:46
Subject: Re: Haproxy 502 errors, all the time on specific sites or backend

On Thursday, November 3, 2011 at 15:53:50, Benoit GEORGELIN wrote:
It's working better, but now I have some blank pages.

Yes, responses are still truncated most of the time.

[...]