Hi Steve,

On Fri, Jan 10, 2014 at 02:16:48PM -0800, Steve Ruiz wrote:
> I'm experimenting with haproxy on a centos6 VM here.  I found that when I
> specified a health check page (option httpchk GET /url), and that page
> didn't exist, we have a large 404 page returned, and that causes haproxy to
> quickly segfault (seems like on the second try GET'ing and parsing the
> page).  I couldn't figure out from the website where to submit a bug, so I
> figure I'll try here first.
> 
> Steps to reproduce:
> - Set up an http backend with option httpchk and http-check expect string x,
> making option httpchk point to a non-existent page
> - On backend server, set it up to serve large 404 response (in my case, the
> 404 page is 186kB, as it has an inline graphic and inline css)
> - Start haproxy, and wait for it to segfault
> 
> I wasn't sure exactly what was causing this at first, so I did some work to
> narrow it down with GDB.  The variable values from gdb led me to the cause
> on my side, and hopefully can help you fix the issue.  I could not make
> this work with simply a large page for the http response - in that case, it
> seems to work as advertised, only inspecting the response up to
> tune.chksize (default 16384, as I've left it).  But if I do this with a 404,
> it seems to kill it.  Let me know what additional information you need if
> any.  Thanks and kudos for the great bit of software!
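For reference, the steps above correspond to a configuration along these
lines (server address, URL and expect string are illustrative, not taken
from Steve's setup):

```
# Illustrative repro config; addresses and paths are made up
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

backend app
    # health check points at a page that does not exist
    option httpchk GET /no-such-page
    http-check expect string alive
    # the backend serves a large (~186kB) 404 page for unknown URLs
    server srv1 192.168.1.10:80 check inter 2s
```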

Thanks for all these details. I remember that the http-check expect code
puts a zero at the end of the received buffer prior to looking up the
string. But there may be cases where it doesn't do that, or where it dies
after restoring the byte. Another thing I'm thinking about is that we use
the trash buffer for many operations, and I'm realizing that the check
buffer's size might possibly be larger than the trash buffer :-/

In your case the check indeed died on the second request, so it's out of
context.

I'll try to reproduce this and fix it, thanks very much for your valuable
information!

Willy

