Re: Feature request: Scheduled load balancing

2011-03-01 Thread Sven Büsing
Hello,

Thanks for your responses. I will try it that way first, but I think it's more 
a workaround than a solution. Both ways are inconsistent with the haproxy 
configuration and more error-prone than a scheduling function in the software 
itself would be. In a large environment with a large configuration, it may 
also confuse others when servers do not behave as they would expect.

Regards,
Sven



Re: balance url_param with POST

2011-03-01 Thread Bryan Talbot
 I'm not seeing how to use reqrep to alter a POST URI by appending an
 'a=1' parameter to the end, since there is no support for substitution
 groups.  Any pointers?

 We can't modify the contents of a POST request but we can indeed alter
 the URI. And yes, it does support substitution groups. For instance, you
 could duplicate the only url param you currently have, e.g. something
 approximately like this:

     reqrep ^(.*)\?([^&]*)$ \1?\2&\2
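
For illustration (values invented here, and as noted the rule is only
approximate), the intended effect on a query string such as a=1 is:

     a=1   ->   a=1&a=1

i.e. the single parameter is appended a second time.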



You're right, of course.  I don't know why I was thinking that I
couldn't use substitution groups.  Thanks for the pointer; this
will work for me as a workaround until a proper fix can be made.  I
only need to alter the request line and not the POST content.

-Bryan



Re: 1.5 status

2011-03-01 Thread Willy Tarreau
On Tue, Mar 01, 2011 at 03:30:08PM +0800, Delta Yeh wrote:
 Hi Willy,
 
 Do you have any plans to add an HTTP compression feature into haproxy?

Yes, we'll probably implement it here at Exceliance once we're done
with SSL. The internal reworks needed to address SSL are the same as for
compression. For a long time I've been against compression because
of the added latency that freezes the whole process while compressing
a buffer. With today's processors, compressing a 16kB buffer should
take less than a millisecond and will not slow the whole process down
too much. Also, the internal scheduler supports priorities, so we can
lower the priority of the compression tasks.
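
As a rough sanity check of that figure, here is a standalone timing
sketch using zlib (illustrative only; nothing here reflects how haproxy
will actually implement compression). Build with: gcc -O2 zbench.c -lz

    /* zbench.c: time zlib compression of a 16 kB buffer */
    #include <stdio.h>
    #include <time.h>
    #include <zlib.h>

    int main(void)
    {
        static unsigned char in[16384], out[32768];
        unsigned int i;
        struct timespec t0, t1;
        uLongf outlen = sizeof(out);
        int rc;

        /* fill the input with something vaguely HTTP-like */
        for (i = 0; i < sizeof(in); i++)
            in[i] = "GET /index.html HTTP/1.1\r\n"[i % 26];

        clock_gettime(CLOCK_MONOTONIC, &t0);
        rc = compress2(out, &outlen, in, sizeof(in), Z_DEFAULT_COMPRESSION);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("rc=%d, %u -> %lu bytes in %.3f ms\n", rc,
               (unsigned int)sizeof(in), (unsigned long)outlen,
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
        return 0;
    }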

 And what is the status of the SSL feature? I read a post
 on the status of SSL, in 2009, or maybe early 2010.

Development should start here in a few months and take several months.

Regards,
Willy




Re: 1.5 status

2011-03-01 Thread Joe Williams

On Mar 1, 2011, at 12:46 AM, Willy Tarreau wrote:

 On Tue, Mar 01, 2011 at 03:30:08PM +0800, Delta Yeh wrote:
 Hi Willy,
 
 Do you have any plans to add an HTTP compression feature into haproxy?
 
 Yes, we'll probably implement it here at Exceliance once we're done
 with SSL. The internal reworks needed to address SSL are the same as for
 compression. For a long time I've been against compression because
 of the added latency that freezes the whole process while compressing
 a buffer. With today's processors, compressing a 16kB buffer should
 take less than a millisecond and will not slow the whole process down
 too much. Also, the internal scheduler supports priorities, so we can
 lower the priority of the compression tasks.
 
 And what is the status of the SSL feature? I read a post
 on the status of SSL, in 2009, or maybe early 2010.
 
 Development should start here in a few months and take several months.


Thanks for all the details Willy, glad to hear things are easing up for you. :)

-Joe


Name: Joseph A. Williams
Email: j...@joetify.com
Blog: http://www.joeandmotorboat.com/
Twitter: http://twitter.com/williamsjoe




Re: PROXY protocol for stunnel 4.34 + haproxy 1.4.10

2011-03-01 Thread Cyril Bonté
Hi Hervé,

On Thursday, February 24, 2011 at 19:09:06, Hervé COMMOWICK wrote:
 It seems that it misses some things to work correctly. It generates
 warnings.

Thanks for the feedback, I didn't notice those warnings.

I've applied your updated patch today to haproxy 1.4.11 (in case someone wants 
to know whether it works).

-- 
Cyril Bonté



Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Nathaniel Irvin
We have been looking at modifying HAProxy to be able to create ACLs that
read in the request body and forward based upon whether or not a certain
string is contained within it.

It seems like there is everything needed except the acl_fetch_line
function.  In this function, we have been able to read in most requests, but
requests bigger than 1000k seem to break because the entire message is not
read in before the function is called.

The flag ACL_TEST_F_MAY_CHANGE appears to be intended for this purpose,
but setting this flag doesn't seem to do anything because the flag
ACL_PARTIAL is never set for HTTP ACLs, and there doesn't seem to be any
support for calling the fetch function again once the rest of the message is
received.

Is this assessment accurate?  And how should we go about solving this?  Any
help that you can offer would be great.



-- 
Nathaniel Irvin
Junior Developer
True North Service, Inc.


Re: Stick on src defaults to option persist?

2011-03-01 Thread Willy Tarreau
Hi Malcolm,

sorry I did not notice your first report.

First, Cyril is right: weight 0 will never affect persistence, and that
is fortunate, because weight only applies to load balancing while
persistence is a way to bypass load balancing.
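
As a configuration sketch (all names and addresses invented), a server
drained from load balancing but still reachable through persistence
could look like:

    backend app
        balance roundrobin
        stick-table type ip size 200k expire 30m
        stick on src
        server web1 10.0.0.1:80 weight 0    # no new LB traffic, persistence still applies
        server web2 10.0.0.2:80 weight 10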

On Tue, Mar 01, 2011 at 04:17:30PM +, Malcolm Turnbull wrote:
 Cyril et al.
 Just to confirm: after further testing, this is definitely to do with
 keep-alive from browsers not closing the current connection to a
 keep-alive server.

Yes indeed. As long as the connection remains active between both ends,
the server does not have to be re-selected, so at no point is there any
behaviour change.

 So if you set a server to maintenance mode, connections will
 effectively be drained (which is normally a good thing).
 It would be nice to have the option of killing the connections
 instantly as well, though, i.e. drain or kill (especially for things
 like RDP/Exchange etc.)

Good point. I know I will have to handle that for HTTP keep-alive on
the server side, but that will be different: we'll simply force connections
to close upon the last response if the server is disabled.

What you're describing is much more complicated and would require that
we had a "kill now" flag in every session that would preempt any other
processing. It should be feasible on a session basis (e.g. kill one
session from the command line) but it will be tougher work to be
able to enumerate them by server (right now sessions are not listed
by server).
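
Purely as an illustration of that idea (hypothetical; no such flag
exists in haproxy at this point), the shape would be a per-session flag
checked before anything else:

    #include <stdio.h>

    #define SESS_KILL_NOW 0x01u            /* hypothetical flag bit */

    struct sess {
        unsigned int flags;
        int id;
    };

    /* the per-session processing entry point checks the flag first,
     * preempting all other work */
    static void process_session(struct sess *s)
    {
        if (s->flags & SESS_KILL_NOW) {
            printf("session %d: killed immediately\n", s->id);
            return;
        }
        printf("session %d: normal processing\n", s->id);
    }

    int main(void)
    {
        struct sess a = { 0, 1 };
        struct sess b = { SESS_KILL_NOW, 2 };
        process_session(&a);
        process_session(&b);
        return 0;
    }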

I'm adding that to the "nice to have" section in the roadmap.

Thanks,
Willy




Re: balance url_param with POST

2011-03-01 Thread Willy Tarreau
Hi Bryan,

Just to keep you updated: I'm pushing the fix into 1.4.12. While fixing
it, I discovered that check_post got broken when I implemented client-side
keep-alive, because the content-length we're relying on is reset by the
forwarding code before the LB code is called. So I had to fix that too.

Regards,
Willy




Re: Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Willy Tarreau
Hi Nathaniel,

On Tue, Mar 01, 2011 at 11:15:48AM -0800, Nathaniel Irvin wrote:
 We have been looking at modifying HAProxy to be able to create ACLs that
 read in the request body and forward based upon whether or not a certain
 string is contained within it.

It's planned for 1.5 but not done yet. I don't think there's too much work
left to do that. I would have preferred to do it *after* we convert
the ACLs to use the pattern framework in order to avoid duplicating the
work, but given how late we are, having it right now should be fine.

 It seems like there is everything needed except the acl_fetch_line
 function.  In this function, we have been able to read in most requests, but
 requests bigger than 1000k seem to break because the entire message is not
 read in before the function is called.

POST body analysis is limited to the request buffer length anyway (16 kB by
default). You should never expect to buffer or analyse large POSTs such as
the ones you're talking about above. That does not scale at all. At 10000
concurrent connections, you're eating 10 gigs of RAM! Some proxy products
even store POST requests on disk to save memory.

 The flag ACL_TEST_F_MAY_CHANGE appears to be made to cover this purpose,
 but setting this flag doesn't seem to do anything because the flag
 ACL_PARTIAL is never set for http acl's, and there doesn't seem to be any
 support for calling the fetch function again once the rest of the message is
 received.

I see. That's one of the reasons it would have been better to convert the
ACL framework first :-/

Right now, what I'd suggest would be to simply enable the request POST body
analyser when at least one such ACL is needed. This analyser will wait for
either the whole content-length to be read or for the buffer to be full.
Then you can run your ACL on the contents and see if the content is there
or not.

It will also avoid the risk of rechecking the whole buffer upon each read
(potentially every 1460 bytes when dealing with remote clients sending one
segment at a time).

Regards,
Willy




Re: Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Nathaniel Irvin
Thanks for the quick response!

Just a quick correction and question.

When I mentioned requests above 1000k, I actually meant 1000 bytes.
Basically it seemed like it didn't work whenever the request was big enough
to force a 100-Continue.  I'm not sure if this would change anything, since
we still want to be capable of handling large requests, but I wanted to make
sure that I provided as much information as possible.

You mentioned enabling the request POST body analyzer.  What is the best way
to use this?

On Tue, Mar 1, 2011 at 12:19 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi Nathaniel,

 On Tue, Mar 01, 2011 at 11:15:48AM -0800, Nathaniel Irvin wrote:
  We have been looking at modifying HAProxy to be able to create ACLs that
  read in the request body and forward based upon whether or not a certain
  string is contained within it.

 It's planned for 1.5 but not done yet. I don't think there's too much work
 left to do that. I would have preferred to do it *after* we convert
 the ACLs to use the pattern framework in order to avoid duplicating the
 work, but given how late we are, having it right now should be fine.

  It seems like there is everything needed except the acl_fetch_line
  function.  In this function, we have been able to read in most requests, but
  requests bigger than 1000k seem to break because the entire message is not
  read in before the function is called.

 POST body analysis is limited to the request buffer length anyway (16 kB by
 default). You should never expect to buffer or analyse large POSTs such as
 the ones you're talking about above. That does not scale at all. At 10000
 concurrent connections, you're eating 10 gigs of RAM! Some proxy products
 even store POST requests on disk to save memory.

  The flag ACL_TEST_F_MAY_CHANGE appears to be intended for this purpose,
  but setting this flag doesn't seem to do anything because the flag
  ACL_PARTIAL is never set for HTTP ACLs, and there doesn't seem to be any
  support for calling the fetch function again once the rest of the message
  is received.

 I see. That's one of the reasons it would have been better to convert the
 ACL framework first :-/

 Right now, what I'd suggest would be to simply enable the request POST body
 analyser when at least one such ACL is needed. This analyser will wait for
 either the whole content-length to be read or for the buffer to be full.
 Then you can run your ACL on the contents and see if the content is there
 or not.

 It will also avoid the risk of rechecking the whole buffer upon each read
 (potentially every 1460 bytes when dealing with remote clients sending one
 segment at a time).

 Regards,
 Willy




-- 
Nathaniel Irvin
Junior Developer
True North Service, Inc.


Re: Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Willy Tarreau
On Tue, Mar 01, 2011 at 01:14:21PM -0800, Nathaniel Irvin wrote:
 Thanks for the quick response!
 
 Just a quick correction and question.
 
 When I mentioned requests above 1000k, I actually meant 1000 bytes.
  Basically it seemed like it didn't work whenever the request was big enough
 to force a 100-Continue.  I'm not sure if this would change anything, since
 we still want to be capable of handling large requests, but I wanted to make
 sure that I provided as much information as possible.
 
 You mentioned enabling the request POST body analyzer.  What is the best way
 to use this?

Right now, you should modify http_process_request so that it adds the
flag ANA_REQ_HTTP_BODY (or something like this) to the list of req->analysers
if the method is a POST. In the same code, there are other places where this
is done, e.g. when there's a non-null url_param_len. This analyser already
exists, and if enabled, will wait for the data to come into the buffer.

We need to check whether your ACLs will be processed after that point,
but I think it is the case. They're called in http_request_process_common,
I think. I'm sorry, I don't have the code in front of my eyes right now,
hence my approximate response. But the principle really is:

  1) detect if you need to wait for a body. Let's consider that any POST
 does for now.

  2) wait for that body

  3) freely apply any analysis on that body once you have it.
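
A minimal sketch of step 1 (assuming 1.4-era internals, and that the
flag is spelled AN_REQ_HTTP_BODY in the tree, close to the name above;
verify both against your source):

    /* in http_process_request(): enable the body analyser for every
     * POST so the body is waited for before the ACLs run */
    if (txn->meth == HTTP_METH_POST)
        req->analysers |= AN_REQ_HTTP_BODY;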

Regards,
Willy




HAProxy considers key listing HTTP response from Riak invalid

2011-03-01 Thread Jason J. W. Williams
Hello,

Can you tell me why HAProxy considers this response from a Riak
backend server invalid? https://gist.github.com/850204

I suspect it's the length of the Link header. Thank you in advance.

-J



Re: Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Nathaniel Irvin
It seems like even after I set the flag ANA_REQ_HTTP_BODY, the fetch
function still does not work with the entire message in the buffer.  What it
seems to be getting is just the CRLF in the 100-Continue message.  Also,
in our fetch function, it seems like the buffer from previous requests is
preserved and merely written over, so sometimes we will have leftover data
from a previous buffer.

On Tue, Mar 1, 2011 at 2:29 PM, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Mar 01, 2011 at 01:14:21PM -0800, Nathaniel Irvin wrote:
  Thanks for the quick response!
 
  Just a quick correction and question.
 
  When I mentioned requests above 1000k, I actually meant 1000 bytes.
  Basically it seemed like it didn't work whenever the request was big enough
  to force a 100-Continue.  I'm not sure if this would change anything, since
  we still want to be capable of handling large requests, but I wanted to make
  sure that I provided as much information as possible.

  You mentioned enabling the request POST body analyzer.  What is the best way
  to use this?

 Right now, you should modify http_process_request so that it adds the
 flag ANA_REQ_HTTP_BODY (or something like this) to the list of
 req->analysers if the method is a POST. In the same code, there are other
 places where this is done, e.g. when there's a non-null url_param_len.
 This analyser already exists, and if enabled, will wait for the data to
 come into the buffer.

 We need to check whether your ACLs will be processed after that point,
 but I think it is the case. They're called in http_request_process_common,
 I think. I'm sorry, I don't have the code in front of my eyes right now,
 hence my approximate response. But the principle really is:

  1) detect if you need to wait for a body. Let's consider that any POST
 does for now.

  2) wait for that body

  3) freely apply any analysis on that body once you have it.

 Regards,
 Willy




-- 
Nathaniel Irvin
Junior Developer
True North Service, Inc.


Re: Modifying HAProxy to be able to proxy based upon the POST request body

2011-03-01 Thread Willy Tarreau
On Tue, Mar 01, 2011 at 05:39:18PM -0800, Nathaniel Irvin wrote:
 It seems like even after I set the flag ANA_REQ_HTTP_BODY, the fetch
 function still does not work with the entire message in the buffer.

You should add some markers in the HTTP request body parser to ensure
it is properly called and does its job.
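
For instance (an illustrative marker only, using the variables already
present in that function):

    /* temporary debug marker at the top of http_process_request_body() */
    fprintf(stderr, "body analyser: req->l=%d limit=%lld\n",
            (int)req->l, limit);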

 What it
 seems like it is getting is just the CRLF in the 100-Continue message.

The body parser should be able to deal with Expect: 100-continue and send
the 100 Continue response itself. So most likely you did not pass through it in time.
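
For reference, this is the exchange in question (path and length
invented for the example); the client sends only the headers, waits for
the interim response, then sends the body:

    POST /upload HTTP/1.1
    Host: example.com
    Content-Length: 2048
    Expect: 100-continue

    HTTP/1.1 100 Continue

    (the 2048-byte body follows only after the 100 Continue)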

Oh, I see what might be happening:

int http_process_request_body(struct session *s, struct buffer *req, int an_bit)
{
    struct http_txn *txn = &s->txn;
    struct http_msg *msg = &s->txn.req;
    long long limit = s->be->url_param_post_limit;

See this limit above? Since you have no url_param, by default it waits for
zero bytes. Change that to a full buffer size:

    long long limit = req->size;

Also, in session.c, you'll probably have to move the call to
http_process_request_body() earlier (e.g. before process_switching_rules)
so that you have the data when you want to do the job.

 Also,
 in our fetch function, it seems like the buffer from previous requests is
 preserved and merely written over, so sometimes we will have leftover data
 from a previous buffer.

It is normal and expected: as the buffer has a fixed size, everything
beyond your data size is undefined. If you noticed that, I suspect that
your parsing function did not take the buffer's length (req->l) into
account to detect the end of the known data.
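
A sketch of such a bounded scan (illustrative only: it assumes the body
sits contiguously at the start of req->data, ignoring buffer wrapping
and the header offset, and body_contains/pattern are invented names):

    /* scan only the req->l valid bytes; anything beyond that is
     * leftover from previous requests and undefined */
    static int body_contains(const struct buffer *req,
                             const char *pattern, int pattern_len)
    {
        int i;

        for (i = 0; i + pattern_len <= (int)req->l; i++)
            if (memcmp(req->data + i, pattern, pattern_len) == 0)
                return 1;
        return 0;
    }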

Regards,
Willy




Re: HAProxy considers key listing HTTP response from Riak invalid

2011-03-01 Thread Willy Tarreau
On Tue, Mar 01, 2011 at 05:24:18PM -0700, Jason J. W. Williams wrote:
 Hello,
 
 Can you tell me why HAProxy considers this response from a Riak
 backend server invalid? https://gist.github.com/850204
 
 I suspect it's the length of the Link header.

Yes, your header has set a new world record: 58 kB! Many systems
will never accept more than 4 kB in a single header. Apache accepts
up to 8 kB. Haproxy accepts 8 kB by default for a whole request, so
the header itself was larger than the maximum request.

Also, there are 1000 different values in this header; some products
will experience issues because this is the same as concatenating the
same header 1000 times, and some products have low limits on the number
of headers (typically 100).

You can work around this for now by setting tune.bufsize 65536 and
tune.maxrewrite 1024 in your global section, but if I were you, I'd
fix the application before putting it into production so that it does
not do such stupid things, otherwise it will be a nightmare to maintain
and debug in the long run.
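
In configuration form, that stop-gap is simply:

    global
        tune.bufsize    65536
        tune.maxrewrite 1024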

Regards,
Willy




Re: HAProxy considers key listing HTTP response from Riak invalid

2011-03-01 Thread Jason J. W. Williams
Hi Willy,

I figured out it was the line length of the Link header a couple of hours ago. 
That also explains why my client lib was bombing (I put HAProxy in to try to 
debug it). I fixed it by telling Riak to stream, which skips the Link header 
and chunks the payload. 
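
(For anyone curious, the streamed listing is requested along these
lines; host and bucket are invented here, and this is from memory of
the Riak HTTP API of that era, so check the Riak docs:)

    curl 'http://riak.example.com:8098/riak/mybucket?keys=stream&props=false'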

Thank you for your help. :) Glad to make a new record. ;)

-J

Sent via iPhone


On Mar 1, 2011, at 23:20, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Mar 01, 2011 at 05:24:18PM -0700, Jason J. W. Williams wrote:
 Hello,
 
 Can you tell me why HAProxy considers this response from a Riak
 backend server invalid? https://gist.github.com/850204
 
 I suspect it's the length of the Link header.
 
 Yes, your header has set a new world record: 58 kB! Many systems
 will never accept more than 4 kB in a single header. Apache accepts
 up to 8 kB. Haproxy accepts 8 kB by default for a whole request, so
 the header itself was larger than the maximum request.
 
 Also, there are 1000 different values in this header; some products
 will experience issues because this is the same as concatenating the
 same header 1000 times, and some products have low limits on the number
 of headers (typically 100).
 
 You can work around this for now by setting tune.bufsize 65536 and
 tune.maxrewrite 1024 in your global section, but if I were you, I'd
 fix the application before putting it into production so that it does
 not do such stupid things, otherwise it will be a nightmare to maintain
 and debug in the long run.
 
 Regards,
 Willy