Re: mailing list archives dead

2016-04-04 Thread Willy Tarreau
Hi Patrick,

On Mon, Apr 04, 2016 at 04:57:49PM -0400, Patrick Hemmer wrote:
> It looks like the mailing list archives stopped working mid-December.
> 
> https://marc.info/?l=haproxy

The people at marc.info are currently working on fixing this.
In the meantime you can use gmane instead:

   http://news.gmane.org/gmane.comp.web.haproxy

I'll probably change the link on the home page BTW, as this is starting to
bother people.

Willy




mailing list archives dead

2016-04-04 Thread Patrick Hemmer
It looks like the mailing list archives stopped working mid-December.

https://marc.info/?l=haproxy

-Patrick


Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread Baptiste
One is process-wide, the other is per frontend, and both count the maximum
number of accepted incoming connections.
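
For illustration, a minimal haproxy.cfg sketch of the two (the numbers are only
examples, not taken from this thread):

global
    maxconn 65535     # process-wide hard limit; without it the built-in default of 2000 applies

defaults
    maxconn 60000     # per-frontend soft limit, inherited by every frontend that doesn't set its own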

Baptiste

On Mon, Apr 4, 2016 at 9:07 PM, CJ Ess  wrote:
> Funny you should mention that, I pushed out the revised config and
> immediately got warnings about session usage from our monitoring. Turns out you
> need maxconn defined in global for the hard limit and in defaults for the soft
> limit. In this case I'm not completely clear why the global maxconn is
> different from the defaults maxconn - I almost think it would make more sense
> to have different keywords. But I'll write it off as a learning experience
> in our transition to using keepalives.
>
>
> On Mon, Apr 4, 2016 at 1:44 PM, Cyril Bonté  wrote:
>>
>> Hi,
>>
>> On 04/04/2016 19:14, CJ Ess wrote:
>>>
>>> Moving the setting to global worked perfectly AND it upped the ulimit-n
>>> to a more appropriate value:
>>
>>
>> I feel uncomfortable with the "Moving the setting" part.
>> Did you really MOVE the maxconn declaration from defaults (or
>> listen/frontend) to the global section? Or did you ADD one to the global
>> section?
>>
>> This is important, as the effect is not the same at all ;-)
>>
>>>
>>> ...
>>> Ulimit-n: 131351
>>> Maxsock: 131351
>>> Maxconn: 65535
>>> Hard_maxconn: 65535
>>> ...
>>>
>>> So we'll write this down as a learning experience. We recently
>>> transitioned from doing one request per connection to using keep-alives
>>> to the fullest, so I suspect that we've always had this problem but just
>>> never saw it because our connections turned over so quickly.
>>>
>>>
>>> On Sun, Apr 3, 2016 at 3:59 AM, Baptiste wrote:
>>>
>>>
>>> On 3 Apr 2016 03:45, "CJ Ess" wrote:
>>>  >
>>>  > Oops, that is important - I have both the maxconn and fullconn
>>> settings in the defaults section.
>>>  >
>>>  > On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL wrote:
>>>  >>
>>>  >> On 2-4-2016 at 22:32, CJ Ess wrote:
>>>  >>>
>>>  >>> So in my config file I have:
>>>  >>>
>>>  >>> maxconn 65535
>>>  >>
>>>  >> Where do you have that maxconn setting? In frontend, global, or
>>> both?
>>>  >>
>>>  >>> fullconn 64511
>>>  >>>
>>>  >>> However, "show info" still has a maxconn 2000 limit and that
>>> caused a blow up because I exceeded the limit =(
>>>  >>>
>>>  >>> So my questions are 1)  is there a way to raise maxconn without
>>> restarting haproxy with the -P parameter (can I add -P when I do a
>>> reload?) 2) Are there any other related gotchas I need to take care
>>> of?
>>>  >>>
>>>  >>> I notice that ulimit-n and maxsock both show 4495 despite
>>> "ulimit -n" for the user showing 65536 (which is probably half of
>>> what I really want since each "session" is going to consume two
>>> sockets)
>>>  >>>
>>>  >>> I'm using haproxy 1.5.12
>>>  >>>
>>>  >>
>>>  >
>>>
>>> So add a maxconn in your global section.
>>> Your process is limited by default to 2000 connections forwarded.
>>>
>>> Baptiste
>>>
>>>
>>
>>
>> --
>> Cyril Bonté
>
>



Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread CJ Ess
Funny you should mention that, I pushed out the revised config and
immediately got warnings about session usage from our monitoring. Turns out
you need maxconn defined in global for the hard limit and in defaults for the soft
limit. In this case I'm not completely clear why the global maxconn is
different from the defaults maxconn - I almost think it would make more
sense to have different keywords. But I'll write it off as a learning
experience in our transition to using keepalives.


On Mon, Apr 4, 2016 at 1:44 PM, Cyril Bonté  wrote:

> Hi,
>
> On 04/04/2016 19:14, CJ Ess wrote:
>
>> Moving the setting to global worked perfectly AND it upped the ulimit-n
>> to a more appropriate value:
>>
>
> I feel uncomfortable with the "Moving the setting" part.
> Did you really MOVE the maxconn declaration from defaults (or
> listen/frontend) to the global section? Or did you ADD one to the global
> section?
>
> This is important, as the effect is not the same at all ;-)
>
>
>> ...
>> Ulimit-n: 131351
>> Maxsock: 131351
>> Maxconn: 65535
>> Hard_maxconn: 65535
>> ...
>>
>> So we'll write this down as a learning experience. We recently
>> transitioned from doing one request per connection to using keep-alives
>> to the fullest, so I suspect that we've always had this problem but just
>> never saw it because our connections turned over so quickly.
>>
>>
>> On Sun, Apr 3, 2016 at 3:59 AM, Baptiste wrote:
>>
>>
>> On 3 Apr 2016 03:45, "CJ Ess" wrote:
>>  >
>>  > Oops, that is important - I have both the maxconn and fullconn
>> settings in the defaults section.
>>  >
>>  > On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL wrote:
>>  >>
>>  >> On 2-4-2016 at 22:32, CJ Ess wrote:
>>  >>>
>>  >>> So in my config file I have:
>>  >>>
>>  >>> maxconn 65535
>>  >>
>>  >> Where do you have that maxconn setting? In frontend, global, or
>> both?
>>  >>
>>  >>> fullconn 64511
>>  >>>
>>  >>> However, "show info" still has a maxconn 2000 limit and that
>> caused a blow up because I exceeded the limit =(
>>  >>>
>>  >>> So my questions are 1)  is there a way to raise maxconn without
>> restarting haproxy with the -P parameter (can I add -P when I do a
>> reload?) 2) Are there any other related gotchas I need to take care
>> of?
>>  >>>
>>  >>> I notice that ulimit-n and maxsock both show 4495 despite
>> "ulimit -n" for the user showing 65536 (which is probably half of
>> what I really want since each "session" is going to consume two
>> sockets)
>>  >>>
>>  >>> I'm using haproxy 1.5.12
>>  >>>
>>  >>
>>  >
>>
>> So add a maxconn in your global section.
>> Your process is limited by default to 2000 connections forwarded.
>>
>> Baptiste
>>
>>
>>
>
> --
> Cyril Bonté
>


Re: Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-04 Thread Lukas Tribus

Hi Nenad,



I suggest you try reverting commit 7610073a. I have seen very
similar issues and everything points to this commit (which was Willy's
first suspect).


So I assume this affects 1.6 and 1.7-dev as well, the bug is not specific to
the 1.5 backport, right?



Thanks,

Lukas







[SPAM] Mini prices on your perfumes and beauty products!

2016-04-04 Thread Beauty on the Moon par Beautymoon
Title: Beauty on the Moon





Up to 70% off and an extra 10% discount with our promo code!

View the online version



This newsletter was sent to you by the Beautymoon programme

To unsubscribe, go to this page.

© Beautymoon 2016




Re: Haproxy running on 100% CPU and slow downloads

2016-04-04 Thread Lukas Tribus

Hi Sachin,


(due to email troubles on my side this may look like a new thread, sorry
about that)


> We have quite a few regex and acls in our config, is there a way to profile
> haproxy and see what could be slowing it down?

You can use strace for syscalls or ltrace for library calls to see if something
in particular shows up, but perf may be the better tool for this job (I never
used it though).
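
For example, a rough sketch of what that could look like (the PID is a
placeholder and the strace/perf invocations are generic examples, not taken
from this thread):

PID=1234                              # replace with one of the haproxy process PIDs
strace -c -p "$PID"                   # attach for a while, then Ctrl-C to print a syscall summary
perf record -g -p "$PID" -- sleep 30  # sample on-CPU call stacks for 30 seconds
perf report                           # browse where the time is spent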


Like Pavlos said, let's collect some basic information first:

- haproxy -vv output
- uname -a
- configuration (replace proprietary information but leave everything else intact)

- does TLS resumption correctly work? Check with rfc5077-client:

git clone https://github.com/vincentbernat/rfc5077.git
cd rfc5077
make rfc5077-client


./rfc5077-client 



There's a chance that it is SSL/TLS related.



Regards,

Lukas




Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread Cyril Bonté

Hi,

On 04/04/2016 19:14, CJ Ess wrote:

Moving the setting to global worked perfectly AND it upped the ulimit-n
to a more appropriate value:


I feel uncomfortable with the "Moving the setting" part.
Did you really MOVE the maxconn declaration from defaults (or
listen/frontend) to the global section? Or did you ADD one to the
global section?


This is important, as the effect is not the same at all ;-)



...
Ulimit-n: 131351
Maxsock: 131351
Maxconn: 65535
Hard_maxconn: 65535
...

So we'll write this down as a learning experience. We recently
transitioned from doing one request per connection to using keep-alives
to the fullest, so I suspect that we've always had this problem but just
never saw it because our connections turned over so quickly.


On Sun, Apr 3, 2016 at 3:59 AM, Baptiste wrote:


On 3 Apr 2016 03:45, "CJ Ess" wrote:
 >
 > Oops, that is important - I have both the maxconn and fullconn
settings in the defaults section.
 >
 > On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL wrote:
 >>
 >> On 2-4-2016 at 22:32, CJ Ess wrote:
 >>>
 >>> So in my config file I have:
 >>>
 >>> maxconn 65535
 >>
 >> Where do you have that maxconn setting? In frontend, global, or
both?
 >>
 >>> fullconn 64511
 >>>
 >>> However, "show info" still has a maxconn 2000 limit and that
caused a blow up because I exceeded the limit =(
 >>>
 >>> So my questions are 1)  is there a way to raise maxconn without
restarting haproxy with the -P parameter (can I add -P when I do a
reload?) 2) Are there any other related gotchas I need to take care of?
 >>>
 >>> I notice that ulimit-n and maxsock both show 4495 despite
"ulimit -n" for the user showing 65536 (which is probably half of
what I really want since each "session" is going to consume two sockets)
 >>>
 >>> I'm using haproxy 1.5.12
 >>>
 >>
 >

So add a maxconn in your global section.
Your process is limited by default to 2000 connections forwarded.

Baptiste





--
Cyril Bonté



Re: KA-BOOM! Hit MaxConn despite higher setting in config file

2016-04-04 Thread CJ Ess
Moving the setting to global worked perfectly AND it upped the ulimit-n to
a more appropriate value:

...
Ulimit-n: 131351
Maxsock: 131351
Maxconn: 65535
Hard_maxconn: 65535
...

So we'll write this down as a learning experience. We recently transitioned
from doing one request per connection to using keep-alives to the fullest,
so I suspect that we've always had this problem but just never saw it
because our connections turned over so quickly.


On Sun, Apr 3, 2016 at 3:59 AM, Baptiste  wrote:

>
> On 3 Apr 2016 03:45, "CJ Ess" wrote:
> >
> > Oops, that is important - I have both the maxconn and fullconn settings
> in the defaults section.
> >
> > On Sat, Apr 2, 2016 at 4:37 PM, PiBa-NL  wrote:
> >>
> >> On 2-4-2016 at 22:32, CJ Ess wrote:
> >>>
> >>> So in my config file I have:
> >>>
> >>> maxconn 65535
> >>
> >> Where do you have that maxconn setting? In frontend, global, or both?
> >>
> >>> fullconn 64511
> >>>
> >>> However, "show info" still has a maxconn 2000 limit and that caused a
> blow up because I exceeded the limit =(
> >>>
> >>> So my questions are 1)  is there a way to raise maxconn without
> restarting haproxy with the -P parameter (can I add -P when I do a reload?)
> 2) Are there any other related gotchas I need to take care of?
> >>>
> >>> I notice that ulimit-n and maxsock both show 4495 despite "ulimit -n"
> for the user showing 65536 (which is probably half of what I really want
> since each "session" is going to consume two sockets)
> >>>
> >>> I'm using haproxy 1.5.12
> >>>
> >>
> >
>
> So add a maxconn in your global section.
> Your process is limited by default to 2000 connections forwarded.
>
> Baptiste
>


Re: Haproxy running on 100% CPU and slow downloads

2016-04-04 Thread Pavlos Parissis
On 04/04/2016 05:23 PM, Sachin Shetty wrote:
> Hi,
> 
> I am chasing some weird capacity issues in our setup. 
> 
> Haproxy, which also does SSL, is forwarding requests to various other
> servers upstream. I am seeing a simple 100MB file download from our
> upstream components start to slow down from time to time, hitting as low
> as 1MBPS, while usually it is greater than 100MBPS. When this happens, I tried
> downloading the file from the upstream component bypassing haproxy from
> the same box, and that is fast enough – 100MBPS. So it seems like
> haproxy is getting jammed on something.

Did you use HTTPS on the server as well?

> 
> The only suspicious thing I see is that haproxy will be spinning at 100%
> CPU. So we added nbproc 4, and I still see the same pattern: when the
> speed drops, all haproxy processes are hitting 80-100%. The request rate
> when the speed drops is about 5K/minute, which is only 2X the rate when
> things are normal and download speeds are fine.

What is the user and sys level of the CPU usage?
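
For example, something like the following would show it (pidstat comes from the
sysstat package; the PID is a placeholder):

pidstat -u -p 1234 1     # per-process %usr / %system, sampled every second
mpstat 1                 # system-wide user/sys/idle breakdown, once per second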

> 
> We have quite a few regex and acls in our config, is there a way to
> profile haproxy and see what could be slowing it down?
> 

You'd better include the actual config; it will increase the level of
support that you may get.

Cheers,
Pavlos





Haproxy running on 100% CPU and slow downloads

2016-04-04 Thread Sachin Shetty
Hi,

I am chasing some weird capacity issues in our setup.

Haproxy, which also does SSL, is forwarding requests to various other servers
upstream. I am seeing a simple 100MB file download from our upstream
components start to slow down from time to time, hitting as low as 1MBPS,
while usually it is greater than 100MBPS. When this happens, I tried downloading
the file from the upstream component bypassing haproxy from the same box,
and that is fast enough - 100MBPS. So it seems like haproxy is getting
jammed on something.

The only suspicious thing I see is that haproxy will be spinning at 100%
CPU. So we added nbproc 4, and I still see the same pattern: when the speed
drops, all haproxy processes are hitting 80-100%. The request rate when the
speed drops is about 5K/minute, which is only 2X the rate when things are
normal and download speeds are fine.

We have quite a few regex and acls in our config, is there a way to profile
haproxy and see what could be slowing it down?

Thanks
Sachin





Re: Question about Keep-Alive behaviour

2016-04-04 Thread Baptiste
Hi Craig,

This is partially handled by the "http-reuse" feature of HAProxy 1.6.
A real connection pool is on its way, as it's a requirement for HTTP/2.
That said, no idea when we'll have it.
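
For illustration, a minimal sketch of how that feature is enabled (the backend
and server names are made up, not taken from Craig's configuration):

backend be_app
    http-reuse safe                    # reuse idle server-side connections when it is safe
    server app1 192.0.2.10:8080 check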

Baptiste



On Thu, Mar 31, 2016 at 5:11 PM, Craig McLure  wrote:
> Hi Baptiste,
>
> Thanks for the answer, it does help!
>
> There have been discussions on the list about maintaining a connection pool
> with backend servers for the purposes of keep-alive; are there any plans for
> this in the near future? If not, can you recommend a way to handle such
> behaviour outside of haproxy?
>
> Thanks.
>
> On 22 March 2016 at 20:44, Baptiste  wrote:
>>
>> On Tue, Mar 22, 2016 at 2:17 PM, Craig McLure  wrote:
>> > Hi,
>> >
>> > I'm hoping to experiment with enabling keep-alive on my service, but the
>> > documentation isn't entirely clear for my use case, the general
>> > implementation is as follows:
>> >
>> > 1) A HTTP request comes in
>> > 2) A LUA script grabs the request body, does some analysis on it, and
>> > injects a Cookie: header into the request
>> > 3) The request goes to a backend, where the cookie is used to determine
>> > the server the request should be dispatched to.
>> >
>> > This behaviour seems to work fine with the http-server-close or httpclose
>> > options, but I'm not entirely sure what would happen in a keep-alive session
>> > when the backend server switches. I've set http-reuse to 'safe', but when
>> > the second request goes to a different backend server than the first, what
>> > happens to the original socket on the first server? Will it be reused by
>> > other connections or does it just get dropped in a 1:1 mapping style? Given
>> > that it's rare that two subsequent requests on a single connection will
>> > arrive at the same server, is it even worth having keep-alive support on the
>> > backends?
>> >
>> > Hopefully you guys can help.
>> >
>> > Thanks!
>>
>> Hi Craig,
>>
>> We're missing the backend configuration and how you perform this persistence,
>> which we would need in order to deliver the best support.
>> As far as I can tell, persistence will take precedence over
>> keep-alive connections, if that helps. So imagine a client whose
>> first request has been routed to server 1, where the connection
>> is now established; a second request comes from this same client and
>> your lua script sets a cookie to point it to server 2; HAProxy
>> will then close the first connection and establish a new one on the new
>> server.
>>
>> Baptiste
>
>



[CLEANUP]: proto_http

2016-04-04 Thread David CARLIER
Hi all,

After the important cleanup of this weekend, here is a much more modest one.
Basically some gcc warning suppressions.

Hope it is useful.
From 39d833189abea2f6c4671cc969302365dc7d9c45 Mon Sep 17 00:00:00 2001
From: David Carlier 
Date: Mon, 4 Apr 2016 11:54:42 +0100
Subject: [PATCH] CLEANUP: proto_http: few corrections for gcc warnings.

First, we modify the signatures of http_msg_forward_body and
http_msg_forward_chunked_body as they are declared as inline
below. Secondly, just verify the return value of the chunk initialization
which holds the Authorization method (although it is unlikely to fail ...).
Both from gcc warnings.
---
 src/proto_http.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index 74cd260..0c37736 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -275,8 +275,8 @@ fd_set http_encode_map[(sizeof(fd_set) > (256/8)) ? 1 : ((256/8) / sizeof(fd_set
 
 static int http_apply_redirect_rule(struct redirect_rule *rule, struct stream *s, struct http_txn *txn);
 
-static int http_msg_forward_body(struct stream *s, struct http_msg *msg);
-static int http_msg_forward_chunked_body(struct stream *s, struct http_msg *msg);
+static inline int http_msg_forward_body(struct stream *s, struct http_msg *msg);
+static inline int http_msg_forward_chunked_body(struct stream *s, struct http_msg *msg);
 
 /* This function returns a reason associated with the HTTP status.
  * This function never fails, a message is always returned.
@@ -1589,7 +1589,9 @@ get_http_auth(struct stream *s)
 	if (!p || len <= 0)
 		return 0;
 
-	chunk_initlen(&auth_method, h, 0, len);
+	if (chunk_initlen(&auth_method, h, 0, len) != 1)
+		return 0;
+
 	chunk_initlen(&txn->auth.method_data, p + 1, 0, ctx.vlen - len - 1);
 
 	if (!strncasecmp("Basic", auth_method.str, auth_method.len)) {
-- 
2.7.4



Re: Increased CPU usage after upgrading 1.5.15 to 1.5.16

2016-04-04 Thread Janusz Dziemidowicz
2016-03-31 9:46 GMT+02:00 Janusz Dziemidowicz :
> About the CPU problem. Reverting 7610073a indeed fixes my problem. If
> anyone has any idea what is the problem with this commit I am willing
> to test patches:)
> Some more details about my setup. All servers have moderate traffic
> (200-500mbit/s in peak). I do both plain HTTP and HTTPS + some small
> traffic in TCP mode (also both with and without TLS). I also make an
> extensive use of unix sockets for HTTP/2 support (decrypted HTTP/2
> traffic is routed via unix socket to nghttpx and then arrives back on
> another socket as HTTP/1.1).

Back to the original problem as the TLS ticket discussion has ended.
Does anyone have any idea why 7610073a seems to increase CPU usage? I've
tried looking into this, but unfortunately I am not that familiar with
haproxy internals.

-- 
Janusz Dziemidowicz



Re: send-proxy behavior when the client closes the connection prematurely

2016-04-04 Thread Willy Tarreau
Hi Frederik,

On Thu, Mar 31, 2016 at 12:37:03PM -0700, Frederik Deweerdt wrote:
> >> It seems that we would be a bit more efficient if we also aborted when
> >> si_b->state was SI_ST_INI: that is, don't even try to open a connection
> >> to the backend if we're shutting down the frontend.
> >
> > No, we should not do this. You can already force this behaviour with
> > "option abortonclose".
> 
> Mmm, adding "option abortonclose" does work in "mode http", but not in
>  "mode tcp", which I've been using.

Why are you saying this? I'm not seeing anything restricting abortonclose
to HTTP mode only.

> I can however confirm that if the
> check becomes:
> (s->be->options & PR_O_ABRT_CLOSE || channel_is_empty(req))
> rather than
> channel_is_empty(req)
> it does close the connection there.

Then you definitely need to enable the option :-)
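
For illustration, a minimal sketch of where the option would go (section and
server names are made up; abortonclose is a backend-side option, so it belongs
in defaults, listen or backend):

defaults
    mode tcp
    option abortonclose        # drop the server-side work when the client has already aborted

backend be_tcp
    server srv1 192.0.2.20:6379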

> > I'm thinking about two possibilities :
> >   - either we consider that if we can't retrieve a connection's address
> > for a proxy protocol line we must fail and abort the connection ;
> >   - or we consider that when we're closing a front connection early
> > (with "early" still to be defined, maybe with something in the
> > request buffer), then we retrieve the destination address prior
> > to closing. Or maybe we should retrieve this each time the client
> > closes first (read 0 or error caught) except when the session is
> > idle.
> >
> > I guess we should do both. First ensure that we always have the
> > socket's addresses before closing on error, and then cover the
> > possible remaining cases by aborting outgoing connections with
> > an incomplete proxy proto.
> >
> > What do you think ?
> 
> I hadn't considered this - I was trying to address the "don't open
> a connection if the peer fd is closed" case - but I think that
> both fixes sound good, with a preference for retrieving each
> time the client closes first, since that has simpler semantics.
> At the very least, it sounds like the proxy code should be looking at
> the CO_FL_ADDR_*_SET flags.

It's always what is looked at before performing the address lookup,
it's just that we want to avoid doing it if we don't need it. I guess
we can find a way to mark that a stream is idle and thus that the
client may safely close without causing its address to be retrieved
first. This state would happen after logging or after a stream is
recycled while waiting for a new request over a connection. I'll need
to think about it a bit more, because it might become easier with the
next changes needed to progress towards HTTP/2.

Regards,
Willy