Re: Confused by the performance result in tcp mode of haproxy

2012-09-25 Thread Godbach
Hi, Willy

Actually, last time I tested sending the network interrupts to the other
three cores (all except the one haproxy runs on), but found nothing
different from sending them to a single core. And the NICs didn't drop
any packets.

However, I will test it again later, and observe the output of 'ethtool
-S', the network interrupt counts and so on.

I found some warning messages from dmesg:

[142654.793193] [ cut here ]
[142654.793395] WARNING: at net/ipv4/tcp.c:1301 tcp_cleanup_rbuf+0x54/0x150()
[142654.793972] Hardware name: System Product Name
[142654.794573] cleanup rbuf bug: copied 68D1EF11 seq 68CFF65F rcvnxt 68D3565D
[142654.795165] Modules linked in: ixgbe(O) binfmt_misc 8021q fcoe
garp stp llc libfcoe libfc scsi_transport_fc scsi_tgt ip6t_REJECT nf
_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
ip6_tables snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_intel
snd_hda_codec nouveau snd_hwdep igb snd_seq snd_seq_device snd_pcm
eeepc_wmi asus_wmi ttm drm_kms_helper sparse_keymap snd_timer snd drm
coretemp rfkill i2c_algo_bit mxm_wmi wmi video lpc_ich mfd_core
crc32c_intel r8169 i2c_i801 i2c_core soundcore snd_page_alloc mii mdio
pcspkr serio_raw ghash_clmulni_intel microcode uinput [last unloaded:
ixgbe]
[142654.798215] Pid: 18374, comm: haproxy Tainted: GW  O 3.5.0 #1
[142654.798838] Call Trace:
[142654.799440]  [] warn_slowpath_common+0x7f/0xc0
[142654.800031]  [] warn_slowpath_fmt+0x46/0x50
[142654.800653]  [] ? sock_pipe_buf_release+0x20/0x20
[142654.801237]  [] tcp_cleanup_rbuf+0x54/0x150
[142654.801847]  [] tcp_read_sock+0x1b1/0x200
[142654.802440]  [] ? sock_sendpage+0x27/0x30
[142654.803037]  [] ? tcp_done+0x90/0x90
[142654.803644]  [] tcp_splice_read+0xc0/0x250
[142654.804239]  [] sock_splice_read+0x62/0x80
[142654.804843]  [] do_splice_to+0x7b/0xa0
[142654.805457]  [] sys_splice+0x540/0x560
[142654.806040]  [] system_call_fastpath+0x16/0x1b
[142654.806646] ---[ end trace 46d7fb693af33fde ]---

It seems that this bug should have been resolved by commit
1ca7ee30630e1022dbcf1b51be20580815ffab73 before 3.5.0 was released. But it
still appears in kernel 3.5.0, and has even been reported against kernel
3.5.3, as https://bugzilla.redhat.com/show_bug.cgi?id=854367 shows.

In my opinion, given the nature of this bug, it is possible that
splice will not work very well.

Best Regards,
Godbach



Re: capturing arbitrary cookies and multiple X-Forwarded-For values

2012-09-25 Thread Willy Tarreau
Hi Scott,

On Tue, Sep 25, 2012 at 03:43:54PM -0700, Scott Francis wrote:
> one question/feature request, and one possible bug.
> 
> first, the bug:
> 
> haproxy is logging to a local syslog process with
> 
> 
> global
> log 127.0.0.1 local5
> 
> 
> and syslog listening with:
> 
> 
> local5.* /var/log/haproxy
> 
> 
> haproxy.cfg contains the following frontend definition:
> 
> frontend http_proxy
> bind *:80
> mode http
> option forwardfor
> option http-server-close
> option http-pretend-keepalive
> option httplog
> default_backend apache_ui
> 
> 
> if I add the following two directives to my frontend definition, I get
> no log output *at all* (although "haproxy -c" returns success):
> 
> capture request header X-Forwarded-For len 15
> capture cookie openx3_access_token len 63
> 
> 
> however, if I instead add the following two directives, I get both an
> X-Forwarded-For value and a cookie value (presumably the last cookie
> specified if there are multiple) in my log output (pipe-delimited
> inside braces), along with the rest of the typical output for "option
> httplog":
> 
> capture request header X-Forwarded-For len 15
> capture request header Cookie len 63
> 
> 
> is this a bug? should haproxy fail to log any output using "capture
> request header" and "capture cookie" directives in the same frontend?
> it appears to be legal syntax.

There was a bug more or less related to this in 1.5-dev11, what's your
version ? The bug was not exactly the same, though the log format was
changed and possibly disabled under some conditions.

> now the question: is there a method to log (as you can see I'm
> attempting above) multiple cookies in log output?

Right now, either you use "capture cookie" which logs exactly one cookie
in both requests and responses (almost useless, it was developed to track
an application bug which was causing session crossing by sending a wrong
cookie in some responses), or you can use "capture request header" and
log the full Cookie header, but be careful, this can be large sometimes.
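
For example, a minimal sketch (the length of 128 is just an illustration,
pick whatever suits your logs) :

    capture request header Cookie len 128

placed in your frontend next to your existing X-Forwarded-For capture.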

> what about arbitrary cookie names?

There is no such thing right now, though it should not be terribly
difficult to implement since we already have the fetch functions for
cookies.

> (software devs have stated that
> they'd like all cookies sent by the client dumped, even if they're not
> ones we're expecting, which means I can't specify the cookie names
> ahead of time because I don't know what they might be).

Precisely, in that case you need to log the complete Cookie header.

> in a similar vein, is there a method to log the entirety of the
> X-Forwarded-For header as passed in the HTTP request, and not just the
> first instance of the last value?

I'm a bit shocked by what I found in the doc :
  "Only the first value of the last occurrence ..."

This does not make any sense at all, it's pretty useless. We could capture
the first one, the last one, or the whole header, but the first value of the last
occurrence makes no sense at all, it's totally random. And it does not
match my experience since I log multiple occurrences at home. I've just
checked the code and it captures the full line of the last occurrence of
the header. I'll have to fix the doc.

So in practice, if your visitors pass through a chain of squids which
each set an XFF header, you'll get the whole chain. The only issue you'll
get is if some of the last proxies add a line of their own (as haproxy
does), in which case you'll only get this line. But quite frankly, this
is not common at all. And if you pass through some proxies in your
infrastructures, most often they fold them again.

However, the captured header will still be truncated to 64
bytes (CAPTURE_LEN). If that's not enough, you can change this in
include/common/defaults.h (beware of memory usage). I think we should
turn this into a global setting now.
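
As an illustration, the change would look like this (a sketch only; the
exact location and surroundings of the define may vary between versions) :

    /* include/common/defaults.h */
    #define CAPTURE_LEN    128    /* was 64; raise with care, each capture
                                     slot consumes this much memory */

followed by a rebuild of haproxy.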

> We frequently get X-Forwarded-For
> headers that have 3-4 comma-separated values, and cannot currently
> change the rest of the infrastructure to transparently pass HTTP
> requests (multiple L7 proxies involved; no way to avoid multiple
> values in X-Forwarded-For, and we'd like to log the entire chain for
> forensic purposes).

I understand your requirements pretty well, haproxy has grown in exactly
similar environments :-)

Regards,
Willy




Re: invalid "/" in content-type causes 502 error

2012-09-25 Thread Willy Tarreau
Hi David,

On Wed, Sep 26, 2012 at 09:45:32AM +0900, David Blomberg wrote:
> Recently had an issue develop.  A page made long ago had a 
> "Content/Type" instead of Content-type.  It was working until recently 
> and then started showing 502 errors. Issue resolved by user fixing their 
> page to have "Content-Type".

I don't believe it has worked recently, because invalid characters in
header names have been rejected since at least 1.3.15, maybe even earlier.
I have implemented "option accept-invalid-http-request" and -response
for this exact mistake which I had already encountered at a customer's.

> Was looking at the RFC, and it seems this could be treated as an
> invalid/non-existent Content-Type:
> *RFC2045*
> 
> Default RFC 822 messages without a MIME Content-Type header are taken
>by this protocol to be plain text in the US-ASCII character set,
>which can be explicitly specified as:
> 
>  Content-type: text/plain; charset=us-ascii
> 
>This default is assumed if no Content-Type header field is specified.
>It is also recommend that this default be assumed when a
>syntactically invalid Content-Type header field is encountered. In
>the presence of a MIME-Version header field and the absence of any
>Content-Type header field,
> ***end of RFC2045 quote***
> 
> Or is it that stray slashes in the header fields will cause these 
> failures?  Not necessarily a haproxy bug but looks like older versions 
> of haproxy may have allowed for overlooking these slash issues.

Don't confuse a slash in the header's value and a slash in the header's
name.

The correct header name is "Content-Type". Yours is called "Content/Type".
"Content-Type" is a token, "Content/Type" are two tokens with a "/" delimitor
in between.

I suspect that someone has set the header name by hand in the application
and mangled it.

You can force haproxy to let this pass through by setting
"option accept-invalid-http-response" but still the header will possibly
be rejected one step later, or be totally ignored anyway and be useless.
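
For example (sketch only, the backend name is made up) :

    backend my_backend
        # let the malformed "Content/Type" header name pass through
        option accept-invalid-http-response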

Quite clearly the server configuration must be fixed.

Regards,
Willy




Re: Can't configure haproxy

2012-09-25 Thread Buri Arslon
Thanks!

I sent a message to haproxy@formilux.org 3 times but it didn't appear in
the mailing list archive. And I saw several mails titled "Subscribe" and
decided to try it. :)

Since I got your response after sending "Subscribe", maybe it worked ;D

I added "option http-server-close" to my frontend and backend servers. That
fixed my problem.

Thanks for your support,
-- buriwoy


On Tue, Sep 25, 2012 at 10:31 PM, Baptiste  wrote:

> Hey,
>
> First, you don't need to subscribe to send a mail to the ML :)
>
> I think you're missing an "option http-server-close" in your default
> section (or locally in your frontend / backends).
> Currently, you're using the tunnel mode, where a connection is
> established to a server and acts like a tunnel where HAProxy can't see
> anything.
> For more info about HAProxy HTTP connection modes, you can read:
>
> http://www.exceliance.fr/sites/default/files/biblio/aloha_load_balancer_http_connection_mode_memo2.pdf
>
> It applies to the Aloha load-balancer, which uses HAProxy :)
>
> By the way, you should add a "cookie <value>" keyword (replace <value>
> by whatever you want, usually the server name) if you want to enable
> cookie persistence. Also add a check parameter to allow haproxy to run
> health checks (and look for httpchk in the documentation).
> You can disable cookie persistence on the static farm, it is useless.
>
> cheers
>
>
> On Wed, Sep 26, 2012 at 2:36 AM, Buri Arslon  wrote:
> > Hi everybody,
> >
> > I'm new to haproxy. I can't figure out how to correctly configure
> haproxy.
> >
> > I bought ssl certificate for my site and www subdomain was included. I
> can't
> > afford buying wildcard certificate for my project. That's why I want to
> use
> > https://www.mysite.com/static for static files which is served by Nginx
> at
> > the port :81.
> >
> > And http must be redirected to https. (I think it is working).
> >
> > Here is my config: https://gist.github.com/3785284
> >
> > So, if http://mysite.com then it should be redirected to
> https://mysite.com
> >
> > https://www.mysite.com/static/css/mystyle.css should use static backend
> >
> > https://www.mysite.com/notstatic/link should be redirected to
> > https://mysite.com/notstatic/link
> >
> >
> >
> > What am I doing wrong? Any help, hint would be appreciated.
> >
> > Thanks,
> > buriwoy
>


Re: Can't configure haproxy

2012-09-25 Thread Baptiste
Hey,

First, you don't need to subscribe to send a mail to the ML :)

I think you're missing an "option http-server-close" in your default
section (or locally in your frontend / backends).
Currently, you're using the tunnel mode, where a connection is
established to a server and acts like a tunnel where HAProxy can't see
anything.
For more info about HAProxy HTTP connection modes, you can read:
http://www.exceliance.fr/sites/default/files/biblio/aloha_load_balancer_http_connection_mode_memo2.pdf

It applies to the Aloha load-balancer, which uses HAProxy :)

By the way, you should add a "cookie <value>" keyword (replace <value>
by whatever you want, usually the server name) if you want to enable
cookie persistence. Also add a check parameter to allow haproxy to run
health checks (and look for httpchk in the documentation).
You can disable cookie persistence on the static farm, it is useless.
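As a pure illustration (server addresses, cookie name and check URI are
made up, adapt them to your setup) :

    backend dynamic
        option http-server-close
        option httpchk GET /health
        cookie SRVID insert indirect nocache
        server web1 10.0.0.1:8080 cookie web1 check
        server web2 10.0.0.2:8080 cookie web2 check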

cheers


On Wed, Sep 26, 2012 at 2:36 AM, Buri Arslon  wrote:
> Hi everybody,
>
> I'm new to haproxy. I can't figure out how to correctly configure haproxy.
>
> I bought ssl certificate for my site and www subdomain was included. I can't
> afford buying wildcard certificate for my project. That's why I want to use
> https://www.mysite.com/static for static files which is served by Nginx at
> the port :81.
>
> And http must be redirected to https. (I think it is working).
>
> Here is my config: https://gist.github.com/3785284
>
> So, if http://mysite.com then it should be redirected to https://mysite.com
>
> https://www.mysite.com/static/css/mystyle.css should use static backend
>
> https://www.mysite.com/notstatic/link should be redirected to
> https://mysite.com/notstatic/link
>
>
>
> What am I doing wrong? Any help, hint would be appreciated.
>
> Thanks,
> buriwoy



Subscribe

2012-09-25 Thread Buri Arslon
My message didn't appear in the archive


Re: invalid "/" in content-type causes 502 error

2012-09-25 Thread Buri Arslon
Sorry, it's not a reply. I'm just testing whether my messages are going to
the mailing-list.

On Tue, Sep 25, 2012 at 6:45 PM, David Blomberg  wrote:

> Recently had an issue develop.  A page made long ago had a "Content/Type"
> instead of Content-type.  It was working until recently and then started
> showing 502 errors. Issue resolved by user fixing their page to have
> "Content-Type".
>
> Was looking at the RFC, and it seems this could be treated as an
> invalid/non-existent Content-Type:
> *RFC2045*
>
> Default RFC 822 messages without a MIME Content-Type header are taken
>by this protocol to be plain text in the US-ASCII character set,
>which can be explicitly specified as:
>
>  Content-type: text/plain; charset=us-ascii
>
>This default is assumed if no Content-Type header field is specified.
>It is also recommend that this default be assumed when a
>syntactically invalid Content-Type header field is encountered. In
>the presence of a MIME-Version header field and the absence of any
>Content-Type header field,
> ***end of RFC2045 quote***
>
> Or is it that stray slashes in the header fields will cause these
> failures?  Not necessarily a haproxy bug but looks like older versions of
> haproxy may have allowed for overlooking these slash issues.
>
> --
> Thank You
>
> David Blomberg
>
>
>


invalid "/" in in content-type causes 502 error

2012-09-25 Thread David Blomberg
Recently had an issue develop.  A page made long ago had a 
"Content/Type" instead of Content-type.  It was working until recently 
and then started showing 502 errors. Issue resolved by user fixing their 
page to have "Content-Type".


   Was looking at the RFC, and it seems this could be treated as an
invalid/non-existent Content-Type:

*RFC2045*

Default RFC 822 messages without a MIME Content-Type header are taken
   by this protocol to be plain text in the US-ASCII character set,
   which can be explicitly specified as:

 Content-type: text/plain; charset=us-ascii

   This default is assumed if no Content-Type header field is specified.
   It is also recommend that this default be assumed when a
   syntactically invalid Content-Type header field is encountered. In
   the presence of a MIME-Version header field and the absence of any
   Content-Type header field,
***end of RFC2045 quote***

Or is it that stray slashes in the header fields will cause these 
failures?  Not necessarily a haproxy bug but looks like older versions 
of haproxy may have allowed for overlooking these slash issues.


--
Thank You

David Blomberg




Re: Match when a header is missing?

2012-09-25 Thread Shawn Heisey

On 9/25/2012 3:58 PM, Willy Tarreau wrote:

On Tue, Sep 25, 2012 at 11:26:53PM +0200, Baptiste wrote:

1.5-dev branch may be broken because it is the development branch
version. For example, dev12 is broken on SSL if no SNI is sent (there
may be other bugs).

I would add that a number of people do use 1.5 in production, but they
closely follow the list to be aware of issues and are able to quickly
rebuild and deploy. Also, these people generally wait for changes to
settle a bit after a new dev release because they don't want to risk
their production on stupid bugs.


Good general advice.  The dev7 version had been out for a while before I 
upgraded.  As for what Baptiste said, I am not using SSL.  Version 1.4 
would work for me as far as features go.  I can no longer remember what
prompted me to go with a 1.5 dev version, but it had to be something 
specific.


If this discussion continues much further, it'll need a new thread on 
the mailing list.  I didn't mean to get this far off topic, I foolishly 
thought it would be a quick thing.


Thanks,
Shawn




capturing arbitrary cookies and multiple X-Forwarded-For values

2012-09-25 Thread Scott Francis
one question/feature request, and one possible bug.

first, the bug:

haproxy is logging to a local syslog process with


global
log 127.0.0.1 local5


and syslog listening with:


local5.* /var/log/haproxy


haproxy.cfg contains the following frontend definition:

frontend http_proxy
bind *:80
mode http
option forwardfor
option http-server-close
option http-pretend-keepalive
option httplog
default_backend apache_ui


if I add the following two directives to my frontend definition, I get
no log output *at all* (although "haproxy -c" returns success):

capture request header X-Forwarded-For len 15
capture cookie openx3_access_token len 63


however, if I instead add the following two directives, I get both an
X-Forwarded-For value and a cookie value (presumably the last cookie
specified if there are multiple) in my log output (pipe-delimited
inside braces), along with the rest of the typical output for "option
httplog":

capture request header X-Forwarded-For len 15
capture request header Cookie len 63


is this a bug? should haproxy fail to log any output using "capture
request header" and "capture cookie" directives in the same frontend?
it appears to be legal syntax.

now the question: is there a method to log (as you can see I'm
attempting above) multiple cookies in log output?

what about arbitrary cookie names? (software devs have stated that
they'd like all cookies sent by the client dumped, even if they're not
ones we're expecting, which means I can't specify the cookie names
ahead of time because I don't know what they might be).

in a similar vein, is there a method to log the entirety of the
X-Forwarded-For header as passed in the HTTP request, and not just the
first instance of the last value? We frequently get X-Forwarded-For
headers that have 3-4 comma-separated values, and cannot currently
change the rest of the infrastructure to transparently pass HTTP
requests (multiple L7 proxies involved; no way to avoid multiple
values in X-Forwarded-For, and we'd like to log the entire chain for
forensic purposes).

thanks all!
-- 
   Scott Francis | darkuncle(at)darkuncle(dot)net | 0x5537F527
Less and less is done
 until non-action is achieved
 when nothing is done, nothing is left undone.
-- the Tao of Sysadmin



Re: Match when a header is missing?

2012-09-25 Thread Willy Tarreau
On Tue, Sep 25, 2012 at 11:26:53PM +0200, Baptiste wrote:
> >   I looked into upgrading recently to a
> > newer dev snapshot, but I see that the download section says "may be broken"
> > for 1.5dev12.  How is it broken?  Would I be OK running 1.5dev12 as a front
> > end for Solr servers?
> >
> 
> Hi,
> 
> 1.5-dev branch may be broken because it is the development branch
> version. For example, dev12 is broken on SSL if no SNI is sent (there
> may be other bugs).

I would add that a number of people do use 1.5 in production, but they
closely follow the list to be aware of issues and are able to quickly
rebuild and deploy. Also, these people generally wait for changes to
settle a bit after a new dev release because they don't want to risk
their production on stupid bugs.

Regards,
Willy




Re: Match when a header is missing?

2012-09-25 Thread Baptiste
>   I looked into upgrading recently to a
> newer dev snapshot, but I see that the download section says "may be broken"
> for 1.5dev12.  How is it broken?  Would I be OK running 1.5dev12 as a front
> end for Solr servers?
>

Hi,

1.5-dev branch may be broken because it is the development branch
version. For example, dev12 is broken on SSL if no SNI is sent (there
may be other bugs).

Concerning your checks, please enable the logs, they will give you some
clues about why the check did not succeed.

cheers



Re: Badrequest in 1.5-dev12

2012-09-25 Thread Willy Tarreau
Hi Alexey,

[trimmed the CC list not to pollute mailboxes]

On Tue, Sep 25, 2012 at 02:37:52PM +0400, Alexey Vlasov wrote:
> Here's another dump:
> 
> [25/Sep/2012:12:54:00.380] frontend backend_pool610 (#15): invalid request
>   backend backend_pool610 (#15), server  (#-1), event #0
>   src xx.xx.143.35:57191, session #27, session flags 0x0080
>   HTTP msg state 26, msg flags 0x, tx flags 0x
>   HTTP chunk len 0 bytes, HTTP body len 0 bytes
>   buffer flags 0x00908002, out 0 bytes, total 938 bytes
>   pending 938 bytes, wrapping at 16384, error at position 23:
>   00000  GET /phpinfo.php?PATH=/Катал▒
>   00034+ ▒г/&&pid=42 HTTP/1.1
> ...
>   00622  X-FORWARDED-URI: /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&pid=42
>   00691+
>   00692  X-FORWARDED-REQUEST: GET 
> /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&
>   00762+ pid=42 HTTP/1.1
> 
> The full request should be like the following:
> GET /phpinfo.php?PATH=/Каталог/&&pid=42 HTTP/1.1
> 
> But as can be seen from the dump, show errors displays an unknown
> symbol "▒"; as I understand it, this is the 23rd byte. But why it appears
> here is really hard to understand.

Even if those bytes don't display well in your terminal, they're
definitely invalid. I'm surprised they were not encoded before
being reported, I'll have to check why. However I suspect they're
the same as what appears in X-FORWARDED-REQUEST. If so, you'll
note that the component which builds and emits those requests also
does something strange with the double "&" when there was only one
in the original request.

> With 1.5-dev7 there are no such errors either.

Because 1.5-dev7 did not comply with the standard, which was an
error that needed to be fixed.

> Here's tcpdump:
> # tcpdump -A -s 2000 -n -i lo dst host xx.xx.143.35 and dst port 9610

tcpdump -A is not usable here because it replaces unprintable chars
with dots. Better use tcpdump -X instead. Also you can safely use
-s0 instead of -s2000: 0 will dump the exact packet size.
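
For example, adapted from your command (the output file name is arbitrary) :

    tcpdump -X -s0 -n -i lo dst host xx.xx.143.35 and dst port 9610

or, to produce a capture file directly :

    tcpdump -s0 -n -i lo -w bad.cap dst host xx.xx.143.35 and dst port 9610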

> I tried to add "option accept-invalid-http-request" to the defaults
> section, but haproxy still gives a bad request.

This is strange. I think the tcpdump -X will help us (I'll try to
reproduce). It is possible there is a bug anyway. If it's more convenient
for you, you can send me the cap file privately.

Regards,
Willy




Re: Match when a header is missing?

2012-09-25 Thread Bryan Talbot
On Tue, Sep 25, 2012 at 12:30 PM, Shawn Heisey  wrote:

> I have a need to cause haproxy to match an ACL when a header (User-Agent)
> is missing.  Can that be written with the configuration language in its
> current state?  I'm running 1.4.18 here.
>


How about
acl has_useragent  hdr_cnt(User-Agent) gt 0
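
or, to match directly when the header is missing (an untested sketch;
hdr_cnt returns the number of occurrences, so 0 means absent):
acl missing_useragent  hdr_cnt(User-Agent) eq 0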

-Bryan


Match when a header is missing?

2012-09-25 Thread Shawn Heisey
I have a need to cause haproxy to match an ACL when a header 
(User-Agent) is missing.  Can that be written with the configuration 
language in its current state?  I'm running 1.4.18 here.


In another installation, I am running 1.5.dev7, modified with a BUFSIZE 
of 65536 because we have some really long URLs sent to Solr.  I'm having 
a problem where my checks are occasionally failing, but I don't see any 
apparent problem on the Solr servers.  I looked into upgrading recently 
to a newer dev snapshot, but I see that the download section says "may 
be broken" for 1.5dev12.  How is it broken?  Would I be OK running 
1.5dev12 as a front end for Solr servers?


Thanks,
Shawn




Re: 1.5-dev12 TLS1.2

2012-09-25 Thread haproxy
Hi Willy,

Thanks for the response. It prompted me to look a bit closer and I found that 
whilst the compile was completing OK, and obviously the haproxy -vv was showing 
the 1.0.1c version, some mucked up symbolic links on the machine were still 
pointing to 0.9.8 shared libraries, so in reality it was probably executing
as a 0.9.8 build, which would explain why TLS1.2 wasn't happening! I've fixed 
those symlinks and recompiled - hey presto, all now working! Sorry for any 
confusion this may have caused - I will now get on with testing the great new 
features. Thanks once again!
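
(For anyone hitting the same thing: assuming a dynamically linked build, a
quick way to check which libssl the binary really loads is

    ldd $(which haproxy) | grep -i 'ssl\|crypto'

and to confirm the paths point at the 1.0.1 libraries.)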

Best Regards,


Andy

---
posted at http://www.serverphorums.com
http://www.serverphorums.com/read.php?10,569111,569478#msg-569478



Re: Confused by the performance result in tcp mode of haproxy

2012-09-25 Thread Willy Tarreau
Hi Godbach,

On Tue, Sep 25, 2012 at 06:30:25PM +0800, Godbach wrote:
> Hi, Willy
> 
> I have done performance of haproxy again today.
> 
> My tester has eight gigabit ports,  four gigabit ports aggregated
> to emulate clients, and the other four gigabit ports aggregated to
> emulate servers.  So the max throughput is expected to be 4Gbps.
> 
> No matter whether splice is enabled or disabled (either with the -dS
> option or at compile time), the throughput is more or less 2.8Gbps
> under the following conditions:
> 1) HTTP object size is 1MB
> 2) max concurrent session is 10,000
> 3) one HTTP transaction on each connection.
> The throughput was not improved by enabling splice.
> 
> The following settings are executed according to your suggestions:
> 1. kernel version: 3.5.0
> 2. haproxy version: 1.5-dev12
> 3. haproxy config added: tune.pipesize: 524288
> 4. sysctl:
>   net.ipv4.tcp_rmem = 4096  262144  16745216
>   net.ipv4.tcp_wmem = 4096  262144  16745216
> 5. haproxy running on core 0, and network interrupts are sent to core 1.
> 6. LRO is enabled
> Offload parameters for eth1(eth3):
> rx-checksumming: on
> tx-checksumming: on
> scatter-gather: on
> tcp-segmentation-offload: on
> udp-fragmentation-offload: off
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: on
> rx-vlan-offload: off
> tx-vlan-offload: off
> ntuple-filters: off
> receive-hashing: on
> 
> The following are CPU usage and interrupts on different cores:
> 
> top - 17:12:30 up 23:10,  3 users,  load average: 0.62, 0.60, 0.53
> Tasks:  99 total,   3 running,  96 sleeping,   0 stopped,   0 zombie
> Cpu0  :  4.8%us, 30.3%sy,  0.0%ni, 63.8%id,  0.0%wa,  0.3%hi,  0.7%si,  0.0%st
> Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  7.0%id,  0.0%wa,  2.3%hi, 90.6%si,  0.0%st
> Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

As you can see, most of the time is spent in softirq (network stack+driver).
Do you know how many packets were processed per second ? And the interrupt rate
needs to be checked too.

You can get some advanced stats using ethtool -S on each device (just one on
the input path will be enough already).
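
For example (using your client-side interface ; the grep is only there to
narrow the output) :

    ethtool -S eth1 | grep -Ei 'drop|err|miss'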

If you can't make the interrupt processing consume less CPU, you can try to
spread your interrupts over multiple cores, provided that you're not using
the same core as haproxy.
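
For instance, reusing the IRQ numbers from your /proc/interrupts output
(sketch only ; the values are CPU bitmasks, so 2 = CPU1, 4 = CPU2, 8 = CPU3) :

    echo 2 > /proc/irq/50/smp_affinity    # eth1-TxRx-0 -> CPU1
    echo 4 > /proc/irq/51/smp_affinity    # eth1-TxRx-1 -> CPU2
    echo 8 > /proc/irq/52/smp_affinity    # eth1-TxRx-2 -> CPU3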

You need to saturate either the network or enough CPUs, but right now given
that haproxy runs roughly at 35%, there is some room for improvement.

Regards,
Willy




Re: Badrequest in 1.5-dev12

2012-09-25 Thread Alexey Vlasov
On Sun, Sep 23, 2012 at 02:16:49PM +0200, Cyril Bonté wrote:
> And Willy added some documentation about that (with a note about Apache 
> allowing non-ascii characters) :
> http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=2f1feb99a5499510183f398730cddc2a7e7df863
> 
> So, it looks like the only alternative is to add "option 
> accept-invalid-http-request" in your frontend configuration.

I tried to add "option accept-invalid-http-request" to the defaults
section, but haproxy still gives a bad request.
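
Roughly what I added (a simplified sketch ; "mode http" is just assumed
here) :

    defaults
        mode http
        option accept-invalid-http-request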

-- 
BRGDS. Alexey Vlasov.



Re: Badrequest in 1.5-dev12

2012-09-25 Thread Alexey Vlasov
Here's another dump:

[25/Sep/2012:12:54:00.380] frontend backend_pool610 (#15): invalid request
  backend backend_pool610 (#15), server  (#-1), event #0
  src xx.xx.143.35:57191, session #27, session flags 0x0080
  HTTP msg state 26, msg flags 0x, tx flags 0x
  HTTP chunk len 0 bytes, HTTP body len 0 bytes
  buffer flags 0x00908002, out 0 bytes, total 938 bytes
  pending 938 bytes, wrapping at 16384, error at position 23:
  00000  GET /phpinfo.php?PATH=/Катал▒
  00034+ ▒г/&&pid=42 HTTP/1.1
...
  00622  X-FORWARDED-URI: /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&pid=42
  00691+
  00692  X-FORWARDED-REQUEST: GET /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&
  00762+ pid=42 HTTP/1.1

The full request should be like the following:
GET /phpinfo.php?PATH=/Каталог/&&pid=42 HTTP/1.1

But as can be seen from the dump, show errors displays an unknown
symbol "▒"; as I understand it, this is the 23rd byte. But why it appears
here is really hard to understand.

With 1.5-dev7 there are no such errors either.

Here's tcpdump:
# tcpdump -A -s 2000 -n -i lo dst host xx.xx.143.35 and dst port 9610

13:00:26.573281 IP xx.xx.143.35.38567 > xx.xx.143.35.9610: S 
4221045014:4221045014(0) win 32792 
E..<..@.@.i+Q..#Q..#..%...  ...@
.'..
13:00:26.573297 IP xx.xx.143.35.38567 > xx.xx.143.35.9610: . ack 3246742477 win 
257 
E..4..@.@.i2Q..#Q..#..%...  ...[
.'...'..
13:00:26.573357 IP xx.xx.143.35.38567 > xx.xx.143.35.9610: P 0:938(938) ack 1 
win 257 
E.@.@.e.Q..#Q..#..%...  ...[..y.
.'...'..GET /phpinfo.php?PATH=/../&&pid=42 HTTP/1.1
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, 
image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
Accept-Language: ru,ru-RU;q=0.9,en;q=0.8
Accept-Encoding: gzip, deflate
Cookie: __ptca=137351919.6tcw1SGtta9M.1334637172.1334637172.1334637172.1; 
__ptv_3S8nQr=6tcw1SGtta9M; __pti_3S8nQr=6tcw1SGtta9M; 
__ptcz=137351919.1334637172.1.0.ptmcsr=(direct)|ptmcmd=(none)|ptmccn=(direct)
Cache-Control: no-cache
X-FORWARDED-URI: /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&pid=42
X-FORWARDED-REQUEST: GET /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&pid=42 
HTTP/1.1

On Fri, Sep 21, 2012 at 08:59:57PM +0200, Baptiste wrote:
> HAProxy clearly says that the error is at position 23, which looks to
> be a P, but I guess this is due to the copy/paste.
> A tcpdump may help understanding what type of character is at this position.
> 
> That said, sounds weird that it works with HAProxy 1.4 and does not
> anymore with 1.5-dev12.
> Could you give a try to 1.5-dev7 ?
> 
> cheers
> 
> 
> On Fri, Sep 21, 2012 at 8:17 PM, Alexey Vlasov  wrote:
> > Yes, it's 400 error. But the tuning unfortunately doesn't help.
> >
> > --
> > BRGDS. Alexey Vlasov.
> >
> > On Fri, Sep 21, 2012 at 06:50:30PM +0200, Thomas Heil wrote:
> >> Hi,
> >>
> >> If this is error 400. Maybe your Get Request become too long.
> >> Would you mind try increasing your buffsize but leaving maxrewrite on 1024.
> >> e.g
> >>
> >> --
> >> global
> >>    tune.bufsize 32768
> >>tune.maxrewrite 1024
> >> --
> >>
> >> cheers,
> >> thomas
> >>
> >>
> >> On 21.09.2012 18:17, Alexey Vlasov wrote:
> >> > [21/Sep/2012:20:12:41.265] frontend backend_pool610 (#15): invalid 
> >> > request
> >> >   backend backend_pool610 (#15), server  (#-1), event #0
> >> >   src xx.xx.143.35:37769, session #71, session flags 0x0080
> >> >   HTTP msg state 26, msg flags 0x, tx flags 0x
> >> >   HTTP chunk len 0 bytes, HTTP body len 0 bytes
> >> >   buffer flags 0x00808002, out 0 bytes, total 913 bytes
> >> >   pending 913 bytes, wrapping at 16384, error at position 23:
> >> >
> >> >   0  GET /phpinfo.php?PATH=/РР°СалР
> >> >   00034+ ѕРі/&&pid=42 HTTP/1.1
> >> >
> >> >   00057  Host: test-l24-apache-aux4.p2
> >> >
> >> >   00092  User-Agent: Opera/9.80 (Windows NT 6.1; WOW64; U; ru) 
> >> > Presto/2.10.289
> >> >   00162+ Version/12.02
> >> >
> >> >   00177  Accept: text/html, application/xml;q=0.9, 
> >> > application/xhtml+xml, image
> >> >   00247+ /png, image/webp, image/jpeg, image/gif, image/x-xbitmap, 
> >> > */*;q=0.1
> >> >   00315+
> >> >
> >> >   00316  Accept-Language: ru,ru-RU;q=0.9,en;q=0.8
> >> >
> >> >   00358  Accept-Encoding: gzip, deflate
> >> >
> >> >   00390  Cookie: 
> >> > __ptca=137351919.6tcw1SGtta9M.1334637172.1334637172.1334637172
> >> >   00460+ .1; __ptv_3S8nQr=6tcw1SGtta9M; __pti_3S8nQr=6tcw1SGtta9M; 
> >> > __ptcz=13735
> >> >   00530+ 
> >> > 1919.1334637172.1.0.ptmcsr=(direct)|ptmcmd=(none)|ptmccn=(direct)
> >> >
> >> >   00597  X-FORWARDED-URI: 
> >> > /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&pid=42
> >> >   00666+
> >> >
> >> >   00667  X-FORWARDED-REQUEST: GET 
> >> > /%D0%9A%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3/&
> >> >   00737+ pid=42 HTTP/1.1
> >> >
> >> >   00754  X-Forwarded-For: xx.x.248.121
> >> >
> >> >   00787  X-Forwarded-Host: test-l24-apache-aux4.p2
> >> >
> >> >   00834  X-Forwarded-Server: ww

Re: Confused by the performance result in tcp mode of haproxy

2012-09-25 Thread Godbach
Hi, Willy

I have done performance of haproxy again today.

My tester has eight gigabit ports,  four gigabit ports aggregated
to emulate clients, and the other four gigabit ports aggregated to
emulate servers.  So the max throughput is expected to be 4Gbps.

   No matter whether splice is enabled or disabled (either with the -dS
option or at compile time), the throughput is more or less 2.8Gbps
under the following conditions:
1) HTTP object size is 1MB
2) max concurrent session is 10,000
3) one HTTP transaction on each connection.
 The throughput was not improved by enabling splice.

The following settings are executed according to your suggestions:
1. kernel version: 3.5.0
2. haproxy version: 1.5-dev12
3. haproxy config added: tune.pipesize: 524288
4. sysctl:
  net.ipv4.tcp_rmem = 4096  262144  16745216
  net.ipv4.tcp_wmem = 4096  262144  16745216
5. haproxy running on core 0, and network interrupts are sent to core 1.
6. LRO is enabled
Offload parameters for eth1(eth3):
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: off
tx-vlan-offload: off
ntuple-filters: off
receive-hashing: on

The following are CPU usage and interrupts on different cores:

top - 17:12:30 up 23:10,  3 users,  load average: 0.62, 0.60, 0.53
Tasks:  99 total,   3 running,  96 sleeping,   0 stopped,   0 zombie
Cpu0  :  4.8%us, 30.3%sy,  0.0%ni, 63.8%id,  0.0%wa,  0.3%hi,  0.7%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  7.0%id,  0.0%wa,  2.3%hi, 90.6%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

   CPU0   CPU1   CPU2   CPU3
 50: 25   19986125 17  0   PCI-MSI-edge  eth1-TxRx-0
 51:  0   20002074 13  0   PCI-MSI-edge  eth1-TxRx-1
 52:  0   20004145 16  0   PCI-MSI-edge  eth1-TxRx-2
 53:  2   20004083 13  0   PCI-MSI-edge  eth1-TxRx-3
 54:  0  0  1  0   PCI-MSI-edge  eth1
 55: 14   16075935  7  0   PCI-MSI-edge  eth3-TxRx-0
 56:  3   16070740  3  0   PCI-MSI-edge  eth3-TxRx-1
 57:  5   16091911  3  0   PCI-MSI-edge  eth3-TxRx-2
 58:  4   16077275  3  0   PCI-MSI-edge  eth3-TxRx-3
 59:  2  0  0  0   PCI-MSI-edge  eth3

From these results, the network interrupts were indeed sent to CPU1 and
haproxy was running on CPU0.

I am wondering what else I can do to explain this confusing result. If
I send all network interrupts to one core and make haproxy run on
another core, there may be CPU cache misses, so I am also unsure about
this setting.
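
(For reference, the pinning is done roughly like this - a sketch, my exact
command line and config path differ :

    taskset -c 0 haproxy -f /etc/haproxy/haproxy.cfg

with the eth1/eth3 IRQ affinities pointed at CPU1 through
/proc/irq/<N>/smp_affinity.)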

BTW, you can ignore the result from my first mail where the throughput
was only 2Gbps with a 1MB object size: after checking my settings, I
found I had only enabled two gigabit ports.

Thank you!

Godbach