Re: http-keep-alive broken?

2014-01-10 Thread Willy Tarreau
Hi Sander,

On Fri, Jan 10, 2014 at 08:57:18AM +0100, Sander Klein wrote:
 Hi,
 
 I'm sorry you haven't heard from me yet, but I haven't had time to look 
 into this issue. I hope to get to it this weekend.

Don't rush on it. Baptiste has reported a reproducible issue in his lab
which seems to match your problem, and which is caused by the way the
polling works right now (which is why I want to address this before the
release). I'm currently working on it. The fix is far from trivial, but
necessary.

Thanks,
Willy




Re: http-keep-alive broken?

2014-01-10 Thread Sander Klein

Heyz,

On 10.01.2014 09:14, Willy Tarreau wrote:

Hi Sander,

On Fri, Jan 10, 2014 at 08:57:18AM +0100, Sander Klein wrote:

Hi,

I'm sorry you haven't heard from me yet, but I haven't had time to look
into this issue. I hope to get to it this weekend.


Don't rush on it. Baptiste has reported a reproducible issue in his lab
which seems to match your problem, and which is caused by the way the
polling works right now (which is why I want to address this before the
release). I'm currently working on it. The fix is far from trivial, but
necessary.


Do you still want me to bisect? Or should I wait? If you think the 
problem is the same I'll just test the fix :-)


Sander



Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7

2014-01-10 Thread Ge Jin
Hi, Lukas!

 Like I said, you will need to reproduce the problem on a box with
 no traffic at all - so the impact of a single connection can be analyzed
 (sockets status on the frontend/backend, for example).

 It's nearly impossible to do this on a busy box with a lot of production traffic.

 Also, the configuration needs to be trimmed down to a single, specific use
 case (you already said you suspect a specific backend).

I followed your suggestion, but it seems a single connection couldn't
reproduce the problem. The socket states disappeared very quickly when I
opened my browser, and when I analyzed the traffic with Wireshark I didn't
see any abnormal flows or packets.



Re: http-keep-alive broken?

2014-01-10 Thread Willy Tarreau
On Fri, Jan 10, 2014 at 10:47:51AM +0100, Sander Klein wrote:
 Heyz,
 
 On 10.01.2014 09:14, Willy Tarreau wrote:
 Hi Sander,
 
 On Fri, Jan 10, 2014 at 08:57:18AM +0100, Sander Klein wrote:
 Hi,
 
 I'm sorry you haven't heard from me yet, but I haven't had time to look
 into this issue. I hope to get to it this weekend.
 
 Don't rush on it. Baptiste has reported a reproducible issue in his lab
 which seems to match your problem, and which is caused by the way the
 polling works right now (which is why I want to address this before the
 release). I'm currently working on it. The fix is far from trivial, but
 necessary.
 
 Do you still want me to bisect? Or should I wait? If you think the 
 problem is the same I'll just test the fix :-)

Don't waste your time bisecting; I'll ask you to test the patch instead.
The problem I've seen is always the same, and it is related to the fact
that the SSL layer might have read all pending data from a socket but not
delivered everything to the buffer, for lack of space for example. Once
the buffer is flushed and we want to read what follows, we re-enable
polling. But there is no more data pending on the socket, so poll() does
not wake the reader up anymore. It happened to work thanks to the
speculative polling we've been using a lot, but sometimes that was not
intentional and caused some syscalls to be attempted for no reason
(resulting in many EAGAIN in recvfrom). By fixing this, we also broke
what made SSL happen to work :-(
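
To make the trap concrete, here is a minimal sketch (illustrative only, not
HAProxy code; plain OpenSSL plus poll(), error handling omitted): once the
SSL layer has drained the socket into its internal buffer, poll() will never
report the fd readable again, so the reader must check SSL_pending() before
going back to sleep in the poller.

    #include <poll.h>
    #include <unistd.h>
    #include <openssl/ssl.h>

    ssize_t read_more(SSL *ssl, int fd, char *buf, size_t len)
    {
        /* Bytes already decrypted and buffered inside OpenSSL? Then the
         * kernel socket may be empty and poll() would block forever. */
        if (SSL_pending(ssl) > 0)
            return SSL_read(ssl, buf, (int)len);

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
            return SSL_read(ssl, buf, (int)len);
        return -1;
    }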

For about a year I've been wanting to change the polling to include the
FD's status (EAGAIN or not). But since it's complex and there were not
many reasons to do it, I preferred to delay it. Now is a good opportunity,
given that I've broken this area several times in the last few weeks.

Hoping this clarifies things a little bit.

Willy




Re: Unix socket question

2014-01-10 Thread Willy Tarreau
Hello Craig,

On Thu, Jan 09, 2014 at 10:54:52AM -0500, Craig Smith wrote:
 Hello.
 
 I'm attempting to use HAProxy with some custom scripts and auto-scaling
 groups on EC2. If I run the 'disable server' command on the Unix socket,
 what will happen to the active connections to that server? Will HAProxy
 wait until those connections are closed before marking the server down?

No, it will immediately mark the server as down without affecting existing
connections, so no new connections will be sent to it anymore.
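
For reference, a minimal sketch of issuing that command programmatically
(assumptions: the stats socket is bound with admin rights, e.g. "stats
socket /tmp/haproxy.sock level admin", and "webapp/web1" is a hypothetical
backend/server pair):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        const char *cmd = "disable server webapp/web1\n";
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/haproxy.sock", sizeof(addr.sun_path) - 1);

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("stats socket");
            return 1;
        }
        write(fd, cmd, strlen(cmd));

        char buf[256];                            /* empty reply on success */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
        close(fd);
        return 0;
    }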

Regards,
Willy




Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7

2014-01-10 Thread Jose María Zaragoza
2014/1/7 Ge Jin altman87...@gmail.com:
 Hi, all!


 Recently we started using haproxy 1.5-dev21 in production, and we want to
 get the benefit of http-keep-alive. But after we added the option
 http-keep-alive and deployed the new version of haproxy, we found that the
 number of connections in FIN_WAIT_2, CLOSED and ESTABLISHED states
 increased quickly. When we changed to tunnel mode, it decreased.

Hi:

I think the FIN_WAIT_2 state relates to the backend socket's endpoint,
which is waiting for the remote server to close its end of the
connection.
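
That half-close is easy to demonstrate (a hedged sketch, not from this
thread; error handling omitted): after one side calls close(), that side
sits in FIN_WAIT_2 until the peer closes too.

    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in a = { .sin_family = AF_INET,
                                 .sin_port = htons(8099),
                                 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        bind(lfd, (struct sockaddr *)&a, sizeof(a));
        listen(lfd, 1);

        int cfd = socket(AF_INET, SOCK_STREAM, 0);
        connect(cfd, (struct sockaddr *)&a, sizeof(a));
        int sfd = accept(lfd, NULL, NULL);

        close(cfd);                      /* client side goes FIN_WAIT_2 ... */
        system("ss -tan | grep 8099");   /* ... server side shows CLOSE_WAIT */
        close(sfd);                      /* only now do the states clear */
        close(lfd);
        return 0;
    }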

Why doesn't the remote server perform a close() when http-keep-alive is
enabled in HAProxy? It doesn't make any sense to me.

One question: what is the advantage of using http-keep-alive over tunnel
mode?

Thanks and regards



RE: Hardware recommendations for HAProxy on large-scale site

2014-01-10 Thread Daniel Wilson
Thanks for the reply, Steven.

I think we are looking at a 2-arm NAT mode, but I'm not certain yet.

We do expect to handle a lot of sessions.  From what I've read on the HAProxy 
site, 108,000 connections/second is the current record for a single HAProxy 
instance.  As I understand it, that's limited by the NIC.  But will 8-16 GB of 
RAM allow us to get the most out of our server?  Or should we look at a lot 
more?
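
As a rough, hedged back-of-envelope (assuming 1.5's default tune.bufsize of
16384, two buffers per connection, and on the order of 1 kB of per-session
bookkeeping; SSL termination costs considerably more per connection):

    100,000 concurrent connections
      x (2 x 16 kB buffers + ~1 kB session)  ~=  3.4 GB of RAM

so 8-16 GB would leave plenty of headroom for plain HTTP at that scale.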

Daniel

-Original Message-
From: Steven Le Roux [mailto:ste...@le-roux.info] 
Sent: Friday, January 10, 2014 4:28 AM
To: Daniel Wilson
Cc: haproxy
Subject: Re: Hardware recommendations for HAProxy on large-scale site

Hi,

Multicore would certainly not be wasted, since you can bind the NIC IRQs to
some cores and haproxy to others.
Whether you need 1 or 2 NICs depends on your architecture. Are you in
one-arm mode or two-arm mode?

For RAM, will your service need to handle many sessions?

Even for hardware, it's really a per-infrastructure/per-service decision.


On Fri, Jan 10, 2014 at 2:35 AM, Daniel Wilson dan...@theewhiteboard.com 
wrote:
 What resources should we look to maximize when building a server to 
 get the most out of HAProxy?  I read in some forums that more than a 
 2-core processor would be wasted on HAProxy.  Is that true? Should we 
 get the most RAM we can (e.g. 100+ GB)?  Or would some other resource 
 saturate much faster?  Perhaps the NICs? Speaking of NICs, what do you 
 recommend?  I'm looking at 10 Gbps NICs, but should I look at 2?  Or 
 more?  Any particular brand well-proven?  Or any to avoid?



 Thanks for the help!



 Daniel Wilson

 Lead Software Developer

 The eWhiteboard Company





--
Steven Le Roux
Jabber-ID : ste...@jabber.fr
0x39494CCB ste...@le-roux.info
2FF7 226B 552E 4709 03F0  6281 72D7 A010 3949 4CCB







Bug report for latest dev release, 1.5-dev21, segfault when using http expect string x and large 404 page (includes GDB output)

2014-01-10 Thread Steve Ruiz
I'm experimenting with haproxy on a CentOS 6 VM here.  I found that when I
specified a health check page (option httpchk GET /url) and that page
didn't exist, we got a large 404 page returned, and that caused haproxy to
quickly segfault (seemingly on the second attempt to GET and parse the
page).  I couldn't figure out from the website where to submit a bug, so I
figured I'd try here first.

Steps to reproduce:
- Set up an http backend with option httpchk and http-check expect string x.
Make option httpchk point to a non-existent page
- On the backend server, serve a large 404 response (in my case, the 404
page is 186 kB, as it has an inline graphic and inline CSS)
- Start haproxy, and wait for it to segfault

I wasn't sure exactly what was causing this at first, so I did some work to
narrow it down with GDB.  The variable values from gdb led me to the cause
on my side, and hopefully they can help you fix the issue.  I could not
reproduce this with simply a large page for the http response - in that
case, it works as advertised, only inspecting the response up to
tune.chksize (default 16384, as I've left it).  But if I do this with a
404, it seems to kill it.  Let me know what additional information you
need, if any.  Thanks and kudos for the great bit of software!


#haproxy config:
#-
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#-

# Help in developing config here:
# https://www.twilio.com/engineering/2013/10/16/haproxy


#-
# Global settings
#-
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events.  This is done
#by adding the '-r' option to the SYSLOGD_OPTIONS in
#/etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
#   file. A line like the following can be added to
#   /etc/sysconfig/syslog
#
#local2.*   /var/log/haproxy.log
#
log 127.0.0.1 local2 info

chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user    haproxy
group   haproxy
daemon

#enable stats
stats socket /tmp/haproxy.sock

listen ha_stats :8088
balance source
mode http
timeout client 3ms
stats enable
stats auth haproxystats:foobar
stats uri /haproxy?stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode    http
log global
option  httplog
option  dontlognull
#keep persistent client connections open
option  http-server-close
option forwardfor   except 127.0.0.0/8
option  redispatch
# Limit number of retries - total time trying to connect = connect timeout * (#retries + 1)
retries 2
timeout http-request    10s
timeout queue   1m
#timeout opening a tcp connection to server - should be shorter than timeout client and server
timeout connect 3100
timeout client  30s
timeout server  30s
timeout http-keep-alive 10s
timeout check   10s
maxconn 3000

#-
# main frontend which proxies to the backends
#-
frontend https_frontend
bind :80
 redirect scheme https if !{ ssl_fc }

#config help:
#https://github.com/observing/balancerbattle/blob/master/haproxy.cfg
 bind *:443 ssl crt /etc/certs/mycert.pem ciphers RC4-SHA:AES128-SHA:AES:!ADH:!aNULL:!DH:!EDH:!eNULL
mode http
 default_backend webapp

#-
# Main backend for web application servers
#-
backend webapp
balance roundrobin
#Insert cookie SERVERID to pin it to one leg
cookie SERVERID insert nocache indirect
#http check should pull url below
option httpchk GET /cp/testcheck.html HTTP/1.0
#option httpchk GET /cp/testcheck.php HTTP/1.0
#http check should find string below in response to be considered up
http-check expect string good
#Define servers - inter=interval of 5s, rise 2=become avail after 2 successful checks, fall 3=take out after 3 fails

Re: Bug report for latest dev release, 1.5-dev21, segfault when using http expect string x and large 404 page (includes GDB output)

2014-01-10 Thread Baptiste
Hi Steve,

Could you give tcp-check a try and tell us if you have the same issue?
In your backend, turn your httpchk-related directives into:
  option tcp-check
  tcp-check send GET\ /cp/testcheck.html\ HTTP/1.0\r\n
  tcp-check send \r\n
  tcp-check expect string good

Baptiste


On Fri, Jan 10, 2014 at 11:16 PM, Steve Ruiz ste...@mirth.com wrote:
 I'm experimenting with haproxy on a CentOS 6 VM here.  I found that when I
 specified a health check page (option httpchk GET /url) and that page
 didn't exist, we got a large 404 page returned, and that caused haproxy to
 quickly segfault (seemingly on the second attempt to GET and parse the
 page).  I couldn't figure out from the website where to submit a bug, so I
 figured I'd try here first.

 [...]

Re: Bug report for latest dev release, 1.5-dev21, segfault when using http expect string x and large 404 page (includes GDB output)

2014-01-10 Thread Steve Ruiz
Made those changes, and it seems to be working properly, no segfault yet
after ~2 minutes of checks.  Thanks!

Steve Ruiz
Manager - Hosting Operations
Mirth
ste...@mirth.com


On Fri, Jan 10, 2014 at 3:06 PM, Baptiste bed...@gmail.com wrote:

 Hi Steve,

 Could you give tcp-check a try and tell us if you have the same
 issue?
 In your backend, turn your httpchk-related directives into:
   option tcp-check
   tcp-check send GET\ /cp/testcheck.html\ HTTP/1.0\r\n
   tcp-check send \r\n
   tcp-check expect string good

 Baptiste


 On Fri, Jan 10, 2014 at 11:16 PM, Steve Ruiz ste...@mirth.com wrote:
  [...]

Re: Bug report for latest dev release, 1.5-dev21, segfault when using http expect string x and large 404 page (includes GDB output)

2014-01-10 Thread Baptiste
Well, let's say this is a workaround...
We'll definitely have to fix the bug ;)

Baptiste

On Sat, Jan 11, 2014 at 12:24 AM, Steve Ruiz ste...@mirth.com wrote:
 Made those changes, and it seems to be working properly, no segfault yet
 after ~2 minutes of checks.  Thanks!

 Steve Ruiz
 Manager - Hosting Operations
 Mirth
 ste...@mirth.com


 On Fri, Jan 10, 2014 at 3:06 PM, Baptiste bed...@gmail.com wrote:

  [...]

Re: Thousands of FIN_WAIT_2 CLOSED ESTABLISHED in haproxy1.5-dev21-6b07bf7

2014-01-10 Thread Baptiste
Hi Jose,

On Fri, Jan 10, 2014 at 1:10 PM, Jose María Zaragoza
demablo...@gmail.com wrote:
 One question: what is the advantage of using http-keep-alive over
 tunnel mode?

With http-keep-alive you can manipulate the requests and responses within
the connection :)
It consumes a bit more resources, but logging, rewrite rules, etc. will
keep working.
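
For illustration, a minimal config sketch of the two modes being compared
(option names as documented in the 1.5 configuration manual; in dev21,
tunnel is simply what you get when no keep-alive option is set):

    defaults
        mode http
        # keep-alive: haproxy parses every request/response exchanged
        # on the connection
        option http-keep-alive
        # tunnel: only the first request/response is analyzed, the rest
        # of the connection is forwarded blindly
        # option http-tunnel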

Baptiste



Re: Bug report for latest dev release, 1.5-dev21, segfault when using http expect string x and large 404 page (includes GDB output)

2014-01-10 Thread Steve Ruiz
Thanks for the workaround + super fast response, and glad to help :).

Steve Ruiz
Manager - Hosting Operations
Mirth
ste...@mirth.com


On Fri, Jan 10, 2014 at 3:53 PM, Baptiste bed...@gmail.com wrote:

 Well, let's say this is a workaround...
 We'll definitely have to fix the bug ;)

 Baptiste

 On Sat, Jan 11, 2014 at 12:24 AM, Steve Ruiz ste...@mirth.com wrote:
  [...]