Re: haproxy does not correctly handle MSS on Freebsd

2016-08-21 Thread k simon

Thank you, Lukas. I'll investigate it a bit more.

Simon
20160821



Re: haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon

Hi Lukas,




Hi Simon,



Am 19.08.2016 um 12:41 schrieb k simon:

Hi, list:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not handle the MSS correctly on FreeBSD.


Your kernel decides the segment size of a TCP packet. A TCP application
such as haproxy can give hints, like limiting the MSS further, but it
definitely does not segment TCP payload itself. I think your investigation
is going in the wrong direction ...




1. When haproxy is bound to a physical interface and net.inet.tcp.mssdflt
is changed to a large value, haproxy uses this value as the effective size
of outgoing segments and ignores the advertised value.


Do you have a tcpdump capture to show that? If your TCP segments are larger
than the negotiated MSS, then it's a FreeBSD kernel bug, not a haproxy one.
It's not the application's job to segment TCP packets.




2. When haproxy is bound to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.


What's irregular? In your loopback tcpdump capture, I don't see any
packets with a segment size larger than 16344, so no irregularity there.



The packet segments should be 16344 bytes, as advertised. I have seen other
applications work as expected.





3. MSS option is invalid on FreeBSD.


Again, can you elaborate? What does "invalid" mean?




I have tested it with MSS 1200 and found that haproxy's advertised value
did not change. The value is equal to the client's advertised value, e.g. 1460.





When path_mtu_discovery=1, it worked as expected.


Haproxy is not aware of this parameter; your kernel is. Is your CPU
usage problem gone with this setting, or do you just not see any "MSS
irregularities" anymore?



Please do elaborate on what *exactly* you think is wrong with haproxy's
behavior, because just saying "invalid/irregular MSS behavior" without
specifying what exactly you mean isn't helpful.



Lukas





Re: haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon





Hi, list:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not handle the MSS correctly on FreeBSD.
1. When haproxy is bound to a physical interface and net.inet.tcp.mssdflt
is changed to a large value, haproxy uses this value as the effective size
of outgoing segments and ignores the advertised value.

When path_mtu_discovery=1, it works as expected.


2. When haproxy is bound to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.

Whether path_mtu_discovery is set to 0 or 1, it behaves strangely.


3. MSS option is invalid on FreeBSD.
  I'm running the haproxy instance inside a VIMAGE jail, which should
behave the same as running on a bare box. It's really a serious problem
and easy to reproduce.


Regards
Simon







haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon

Hi, list:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not handle the MSS correctly on FreeBSD.
1. When haproxy is bound to a physical interface and net.inet.tcp.mssdflt
is changed to a large value, haproxy uses this value as the effective size
of outgoing segments and ignores the advertised value.
2. When haproxy is bound to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.

3. MSS option is invalid on FreeBSD.
  I'm running the haproxy instance inside a VIMAGE jail, which should
behave the same as running on a bare box. It's really a serious problem
and easy to reproduce.



Regards
Simon






P.S.
1.
FreeBSD ha-l0-j2 10.3-STABLE FreeBSD 10.3-STABLE #0 r303988: Fri Aug 12 
16:48:21 CST 2016 
root@cache-farm-n2:/usr/obj/usr/src/sys/10-stable-r303988  amd64


2.
HA-Proxy version 1.6.8 2016/08/14
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = clang37
  CFLAGS  = -O2 -pipe -fno-omit-frame-pointer -march=corei7 
-fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 
USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2h  3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.



3.
frontend tcp-in
mode tcp
bind :1301

frontend virtual-frontend
mode http
bind 127.0.0.1:1000 accept-proxy
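
For reference, the MSS under test is applied per bind line in the haproxy configuration; a hedged sketch of what that would look like (the mss value of 1200 is the one mentioned in the thread, not taken from the actual config):

```
frontend virtual-frontend
    mode http
    bind 127.0.0.1:1000 accept-proxy mss 1200
```

On FreeBSD this is where the "cannot set MSS" warning discussed elsewhere in this archive originates.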





4.
17:41:36.515924 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [S], seq 
1769628266, win 65535, options [mss 16344], length 0
17:41:36.515954 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [S.], seq 
360367860, ack 1769628267, win 65535, options [mss 16344], length 0
17:41:36.515957 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
773322:777418, ack 211, win 65535, length 4096
17:41:36.515985 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [.], ack 1, 
win 65535, length 0
17:41:36.515994 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq 
1:49, ack 1, win 65535, length 48
17:41:36.516001 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq 
49:914, ack 1, win 65535, length 865
17:41:36.516085 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [.], ack 914, 
win 65535, length 0
17:41:36.516095 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
777418:778878, ack 211, win 65535, length 1460
17:41:36.516203 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
778878, win 65535, length 0
17:41:36.516403 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
778878:784978, ack 211, win 65535, length 6100
17:41:36.516424 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [F.], seq 
477, ack 274, win 65535, length 0
17:41:36.516435 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [.], ack 478, 
win 65535, length 0
17:41:36.516466 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [F.], seq 
274, ack 478, win 65535, length 0
17:41:36.516487 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [.], ack 275, 
win 65534, length 0
17:41:36.516515 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
784978:789074, ack 211, win 65535, length 4096
17:41:36.516532 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
789074, win 65535, length 0
17:41:36.516922 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
789074:790534, ack 211, win 65535, length 1460
17:41:36.516960 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
790534:793170, ack 211, win 65535, length 2636
17:41:36.516971 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
793170, win 65535, length 0
17:41:36.517270 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
793170:796942, ack 211, win 65535, length 3772
17:41:36.517351 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
796942:798402, ack 211, win 65535, length 1460
17:41:36.517368 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
798402, win 65535, length 0
17:41:36.517529 IP 127.0.0.1.1000 > 127.0.0.1.12405: Flags [P.], seq 
482640:483712, ack 401, win 65535, length 1072
17:41:36.517536 IP 127.0.0.1.12405 > 127.0.0.1.1000: Flags [.], ack 
483712, win 65535, length 0
17:41:36.518827 IP 127.0.0.1.1000 > 127.0.0.1.12405: Flags [P.]

Does haproxy use regex for balance url_param lookup?

2016-06-26 Thread k simon
Hi, lists,
   I noticed that haproxy 1.6.5 hogs the CPU periodically on FreeBSD 10,
with 800K-1M syscalls. Changing the balance algorithm to "uri" and deleting
all the regular expressions works around it. There may be a bug in PCRE on
FreeBSD or a bug in haproxy, but I can't confirm which.
   Also, does haproxy support wildcards in ACL string matching? Then I could
rewrite my ACLs to avoid the PCRE library entirely.


Simon
20160626


subscribe

2016-06-26 Thread k simon


Re: can not set mss on FreeBSD 10

2014-05-13 Thread k simon

Thank you, Lukas. Maybe I can work around it on the front router.

Regards
Simon

On 2014-05-13 23:29, Lukas Tribus wrote:

Hi Simon,



Hi, lists,
I found that haproxy 1.4.25 cannot set the MSS on FreeBSD 10-stable, as shown below:

# /usr/local/sbin/haproxy -f /opt/etc/haproxy.conf
[WARNING] 132/170407 (71806) : Starting frontend http-in: cannot set MSS


The MSS-setting code is straightforward; it's your OS's TCP stack that doesn't
let you do this.

Did this work in an older release of haproxy? I don't think so.


Check this out; it seems FreeBSD doesn't support setting the MSS on listening
sockets:
http://www.freebsd.org/cgi/query-pr.cgi?pr=144000




Regards,

Lukas







Re: Socket Read Errors and Timeouts on FreeBSD

2014-05-13 Thread k simon

Hi,Willy,


Oh, and BTW, are you running with PF? I have some old memories of PF
abusively randomizing sequence numbers and preventing new connections
from being initiated using the same source port from the same client. It
was so odd that I had to disable it on my home reverse-proxy running
OpenBSD! That is easy to test: simply run "pfctl -d" to disable it and
test again.



  I have a similar problem to John's, but I use ipfw instead of pf. Since
haproxy cannot set the MSS on FreeBSD, maybe using pf's scrub rule
is a good idea.

  BTW, pf has a state option named "sloppy" that does not check sequence numbers.
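
A minimal pf.conf sketch of the MSS-clamping workaround mentioned above (the interface name em0 and the value 1200 are assumptions for illustration, not from the thread):

```
# Clamp the MSS of inbound TCP on the front router
scrub in on em0 all max-mss 1200
```

This clamps the MSS in the firewall instead of in haproxy, which sidesteps the listening-socket limitation.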


Regards
Simon



can not set mss on FreeBSD 10

2014-05-13 Thread k simon
Hi, lists,
  I found that haproxy 1.4.25 cannot set the MSS on FreeBSD 10-stable, as shown below:

# /usr/local/sbin/haproxy -f /opt/etc/haproxy.conf
[WARNING] 132/170407 (71806) : Starting frontend http-in: cannot set MSS

# haproxy -vv
HA-Proxy version 1.4.25 2014/03/27
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = gcc47
  CFLAGS  = -g -Wall -O2 -pipe -msse3 -I/usr/local/include
-L/usr/local/lib -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.


Regards
Simon



about pcre

2014-05-07 Thread k simon
Hi, lists,
  I found that I cannot share the same regex text file between haproxy and
squid. I noticed that haproxy uses the OS libc's regex by default, and that
this can be changed with the compile parameter "REGEX=pcre".
  Should I recompile haproxy so it can share the same regex file with squid?


Regards
Simon



Re: 1.5 dev22 issue on freebsd10-stable

2014-04-16 Thread k simon



On 2014-04-16 21:35, Willy Tarreau wrote:

On Wed, Apr 16, 2014 at 02:32:03PM +0100, Simon Dick wrote:

On 16 April 2014 13:41, Ghislain  wrote:

On 16/04/2014 08:39, Willy Tarreau wrote:


On a personal note, I'd say that I consider support for strace and
tcpdump an absolute prerequisite when it comes to any platform going into
production, to the point of even reconsidering the platform if it misses
them.

Willy



Well, FreeBSD has dtrace and truss for that, so the same follow-up is
possible :)


ktrace is quite useful too...


Sure, but I mean that the level of precision you get with strace is so nice
that I'd prefer to run in 32-bit mode to have it than in a blind 64-bit mode.

Willy



  OK, I'm not a developer and have never used dtrace or ktrace before. Would
some gurus kindly give me some tips on using them?


Simon



Re: 1.5 dev22 issue on freebsd10-stable

2014-04-15 Thread k simon

Hi,Willy,
  I'm sorry, but strace only supports i386 on FreeBSD, and I'm working
on amd64.


# uname -a
FreeBSD ha-l1-n2 10.0-STABLE FreeBSD 10.0-STABLE #0 r264098: Fri Apr  4 
10:57:19 CST 2014 
root@ha-l1-n2:/usr/obj/usr/src/sys/10-stable-r264098  amd64



Simon

On 2014-04-16 13:40, Willy Tarreau wrote:

Hi Simon,

On Wed, Apr 16, 2014 at 10:25:46AM +0800, k simon wrote:

Hi,Willy,


You must never have timewaits on a client, only on a server. So if
on your haproxy box you're seeing timewaits for connections going
to the backend servers, there's something wrong. Haproxy goes to
great lengths to avoid them by doing a setsockopt(SO_LINGER) to
force the system to close with a reset. If you still get them after
upgrading, please run strace on the process so that we can find what
could be causing them, as it would be abnormal.



   It seems that the TIME_WAIT states occur in the client direction.

# netstat -an | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a,S[a]}'
LISTEN 3
FIN_WAIT_1 1558
FIN_WAIT_2 53
SYN_SENT 4
LAST_ACK 780
CLOSING 77
CLOSE_WAIT 52
CLOSED 9
SYN_RCVD 80
TIME_WAIT 7743
ESTABLISHED 7722


The numbers are not very high. I'm surprised that you have so many
FIN_WAIT_1 and LAST_ACK, though.

You'll need to log some strace output to a file for each process
(please log timestamps using strace -tt as well) so that we can
compare the behaviour between 1.4 and 1.5.

Regards,
Willy





Re: 1.5 dev22 issue on freebsd10-stable

2014-04-15 Thread k simon

Hi,Willy,

> You must never have timewaits on a client, only on a server. So if
> on your haproxy box you're seeing timewaits for connections going
> to the backend servers, there's something wrong. Haproxy deploys
> great efforts at avoiding them by doing a setsockopt(SO_LINGER) to
> force the system to close with a reset. If you still get them after
> upgrading, please run strace on the process so that we find what could
> be causing them, as it would be abnormal.
>

  It seems that the TIME_WAIT states occur in the client direction.

# netstat -an | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a,S[a]}'
LISTEN 3
FIN_WAIT_1 1558
FIN_WAIT_2 53
SYN_SENT 4
LAST_ACK 780
CLOSING 77
CLOSE_WAIT 52
CLOSED 9
SYN_RCVD 80
TIME_WAIT 7743
ESTABLISHED 7722

My backend interface:
# ifconfig vlan60
vlan60: flags=8843 metric 0 mtu 1500
options=3
ether 00:1b:21:36:62:1b
inet 192.168.130.84 netmask 0xff00 broadcast 192.168.130.255
inet 192.168.130.85 netmask 0x broadcast 192.168.130.85
inet 192.168.130.86 netmask 0x broadcast 192.168.130.86
media: Ethernet 1000baseT 
status: active
vlan: 60 parent interface: igb1

# netstat -an |grep 192.168.130 |more
tcp4   0  0 192.168.130.85.16416   192.168.130.33.3004 
ESTABLISHED
tcp4   0  0 192.168.130.85.15506   192.168.130.33.3004 
ESTABLISHED
tcp4   0  0 192.168.130.85.56697   192.168.130.53.3005 
ESTABLISHED
tcp4   0  0 192.168.130.85.19907   192.168.130.34.3005 
ESTABLISHED
tcp4   0  0 192.168.130.85.18708   192.168.130.34.3005 
ESTABLISHED
tcp4   0  0 192.168.130.85.17137   192.168.130.33.3004 
ESTABLISHED
tcp4   0  0 192.168.130.85.17950   192.168.130.33.3004 
ESTABLISHED
tcp4   0  0 192.168.130.85.19640   192.168.130.34.3005 
ESTABLISHED
tcp4   0  0 192.168.130.85.41590   192.168.130.52.3003 
ESTABLISHED
tcp4   0  0 192.168.130.85.22277   192.168.130.35.3006 
ESTABLISHED
tcp4   0  0 192.168.130.85.36508   192.168.130.52.3002 
ESTABLISHED
tcp4   0  0 192.168.130.85.12990   192.168.130.32.3003 
ESTABLISHED
tcp4   0  0 192.168.130.85.26643   192.168.130.40.3003 
ESTABLISHED
tcp4   0  0 192.168.130.85.51775   192.168.130.53.3004 
ESTABLISHED
tcp4   0  0 192.168.130.85.44149   192.168.130.50.3002 
ESTABLISHED
tcp4   0  0 192.168.130.85.57427   192.168.130.53.3006 
ESTABLISHED
tcp4   0  0 192.168.130.85.42355   192.168.130.50.3002 
ESTABLISHED
tcp4   0  0 192.168.130.85.21283   192.168.130.35.3006 
ESTABLISHED
tcp4   0  0 192.168.130.85.24548   192.168.130.40.3003 
ESTABLISHED
tcp4   0  0 192.168.130.85.23880   192.168.130.35.3006 
ESTABLISHED
tcp4   0  0 192.168.130.85.31224   192.168.130.54.3005 
ESTABLISHED
tcp4   0  0 192.168.130.85.13662   192.168.130.32.3003 
ESTABLISHED





# netstat -an |grep TIME_WAIT |more
tcp4   0  0 114.80.234.108.80  10.100.1.4.2577TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4149TIME_WAIT
tcp4   0  0 114.80.234.72.80   10.100.1.3.2331TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2576TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4148TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2575TIME_WAIT
tcp4   0  0 114.80.234.72.80   10.100.1.3.2330TIME_WAIT
tcp4   0  0 114.80.234.73.80   10.100.1.2.38769   TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4147TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2574TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4146TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2573TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4145TIME_WAIT
tcp4   0  0 114.80.234.72.80   10.100.1.3.2329TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4144TIME_WAIT
tcp4   0  0 114.80.234.73.80   10.100.1.2.38768   TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2572TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4143TIME_WAIT
tcp4   0  0 114.80.234.72.80   10.100.1.3.2328TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4142TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2571TIME_WAIT
tcp4   0  0 114.112.66.220.80  221.234.47.81.53770TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4141TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.4.2570TIME_WAIT
tcp4   0  0 114.80.234.72.80   10.100.1.3.2327TIME_WAIT
tcp4   0  0 114.80.234.73.80   10.100.1.2.38767   TIME_WAIT
tcp4   0  0 114.80.234.108.80  10.100.1.2.4140TIME

Re: 1.5 dev22 issue on freebsd10-stable

2014-04-15 Thread k simon

Hi,Willy,
  Do you mean "BUG/MINOR: tcpcheck connect wrong behavior" or
"BUG/MEDIUM: checks: immediately report a connection success"?
  I have not used tcp-check, only http-check. Does it have the
same bug? And there are only about 900+ outgoing connections to the
server farm; is the TIME_WAIT state really a problem? I have set the
portrange from 12000 to 6.


Simon


On 2014-04-15 18:15, Willy Tarreau wrote:

Hi Simon,

On Tue, Apr 15, 2014 at 04:22:35PM +0800, k simon wrote:

Hi, list,
I hit a 1.5-dev22 issue on FreeBSD 10-stable. It reports errors like the
ones below: about 2-3 errors per minute when using "http-keep-alive",
and about 5-8 errors per minute with "http-server-close". I tried using
"source ip:port1-port2" in the "server" section, but nothing helped. Then I
stopped it, compiled haproxy 1.4.25, and ran that instead, and the error
messages disappeared. Is it a version 1.5 bug?


I suspect this is caused by the health check bug which doesn't immediately
close the connections in raw TCP mode, and which probably marks them in
TIME_WAIT state, preventing you from reusing these ports.

Please check with latest snapshot if it goes away.

Willy





1.5 dev22 issue on freebsd10-stable

2014-04-15 Thread k simon
Hi, list,
   I hit a 1.5-dev22 issue on FreeBSD 10-stable. It reports errors like the
ones below: about 2-3 errors per minute when using "http-keep-alive",
and about 5-8 errors per minute with "http-server-close". I tried using
"source ip:port1-port2" in the "server" section, but nothing helped. Then I
stopped it, compiled haproxy 1.4.25, and ran that instead, and the error
messages disappeared. Is it a version 1.5 bug?

Regards
Simon




Apr 15 14:56:05 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:10 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:12 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:17 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:20 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:24 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.
Apr 15 14:56:26 localhost haproxy[17725]: Connect() failed for backend
squid3-bulk-keepalive: local address already in use.




net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.first: 12000
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange.hifirst: 12000
net.inet.ip.portrange.hilast: 65535


# sockstat -4 |wc -l
   13630


# netstat -an | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a,S[a]}'
LISTEN 3
FIN_WAIT_1 1406
FIN_WAIT_2 41
SYN_SENT 2
LAST_ACK 540
CLOSING 131
CLOSE_WAIT 41
CLOSED 5
SYN_RCVD 53
TIME_WAIT 2183
ESTABLISHED 8557


# haproxy -vv
HA-Proxy version 1.5-dev22-1a34d57 2014/02/03
Copyright 2000-2014 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -pipe -O2 -fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
Running on OpenSSL version : OpenSSL 1.0.1g 7 Apr 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.34 2013-12-15
PCRE library supports JIT : yes

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.




recent test for dev22 on BSD

2014-03-20 Thread k simon
Hi, lists,
  I tested dev22 on FreeBSD 10-stable recently and found:
1. "ipfw fwd" works well with dev22+tproxy. There is a nice guide in
/usr/local/share/examples.
But pf's divert-to and divert-reply can't work with haproxy. Maybe
haproxy does not use "getsockname(2)" and "setsockopt(2)".

2. There is an issue with "option http-server-close": haproxy crashed
after a while, whether it was set on the frontend or the backend.

3. It sometimes stalled with "tcp-smart-connect" and "tcp-smart-accept";
when I removed them, it worked normally. But I am not sure about this.

4. dev22 compiles on DragonflyBSD, but it stalls silently.
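
The "ipfw fwd" interception mentioned in point 1 can be sketched roughly like this (a hypothetical rule; the rule number, interface name igb0, and the assumption that haproxy listens transparently on local port 8080 are all illustrative, not from the message):

```
# Redirect inbound port-80 traffic to a local haproxy tproxy listener
ipfw add 100 fwd 127.0.0.1,8080 tcp from any to any 80 in via igb0
```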



Regards
Simon



Re: Does http-request worked with tunnel mode?

2014-03-14 Thread k simon
 Is it possible to add X-Forwarded-For to each request in http-tunnel mode?

Simon


On 2014-03-11 11:53, k simon wrote:
> Hi, list,
> 
> I am puzzled about setting a header for each request in "tunnel mode".
> As I understand it, tunnel mode only analyzes the first transaction. But
> the "tcp-request content" documentation says its rules can be evaluated
> again for the next request.
> However, "tcp-request content" can only "accept/reject" or track the
> sc1/sc2 counters. Now the question: can "http-request" do a similar thing
> at this early stage?
> 
> P.S.
>    I found that "tcp-request content" works well with "http-tunnel" and
> "httpclose". When using it with "server-close" or "http-keep-alive", it
> results in a lot of CLOSED/LAST_ACK states, etc.
> 
> Regards
> Simon
> 



Does http-request worked with tunnel mode?

2014-03-10 Thread k simon
Hi, list,

   I am puzzled about setting a header for each request in "tunnel mode".
As I understand it, tunnel mode only analyzes the first transaction. But
the "tcp-request content" documentation says its rules can be evaluated
again for the next request.
   However, "tcp-request content" can only "accept/reject" or track the
sc1/sc2 counters. Now the question: can "http-request" do a similar thing
at this early stage?

P.S.
  I found that "tcp-request content" works well with "http-tunnel" and
"httpclose". When using it with "server-close" or "http-keep-alive", it
results in a lot of CLOSED/LAST_ACK states, etc.

Regards
Simon



Re: HAProxy graceful restart old process not going away

2014-01-27 Thread k simon
 We had a similar problem; we captured the traffic and found it was caused
by WebSocket connections. So we had to kill the old process manually after
the graceful restart finished.




On 28/1/14 2:37 PM, Willy Tarreau wrote:

On Mon, Jan 27, 2014 at 11:24:46PM +, Wei Kong wrote:

We use

  /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf 


in production to gracefully restart haproxy. But sometimes we notice that the
old haproxy process takes a long time to go away, and if we make multiple
updates it results in multiple haproxy processes running for a long time.
How can we make sure the old haproxy goes away in a reasonable amount of
time?


Maybe you have long transfers going on, or long keep-alive timeouts ?

Willy






Re: Feature request: TOS based ACL.

2014-01-02 Thread k simon

"man ip" on the freebsd box:

If the IP_RECVTTL option is enabled on a SOCK_DGRAM socket, the
recvmsg(2) call will return the IP TTL (time to live) field for a UDP
datagram. The msg_control field in the msghdr structure points to a
buffer that contains a cmsghdr structure followed by the TTL. The
cmsghdr fields have the following values:

cmsg_len = CMSG_LEN(sizeof(u_char))
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVTTL

If the IP_RECVTOS option is enabled on a SOCK_DGRAM socket, the
recvmsg(2) call will return the IP TOS (type of service) field for a UDP
datagram. The msg_control field in the msghdr structure points to a
buffer that contains a cmsghdr structure followed by the TOS. The
cmsghdr fields have the following values:

cmsg_len = CMSG_LEN(sizeof(u_char))
cmsg_level = IPPROTO_IP
cmsg_type = IP_RECVTOS


FreeBSD only supports receiving the TOS or TTL for UDP packets. If you want
to split off some TCP request traffic for a special purpose, maybe you can
set the TTL or TOS on the front router/firewall, then capture it with the
"ipfw" tool and redirect it to a dedicated "frontend". But that leads to a
complex configuration.


Simon


On 2/1/14 11:56 PM, Lukas Tribus wrote:

Hi,



That's great, but can there be anything like this?

acl bad_guys tos-acl 0x20
block if bad_guys

Ah ok, you want to match incoming TOS.

That is indeed not supported currently.


Also, not all *nixes provide an API for this. Linux has
IP_RECVTOS/IPV6_RECVTCLASS to do it, but BSD doesn't; also see:
http://stackoverflow.com/questions/1029849/what-is-the-bsd-or-portable-way-to-get-tos-byte-like-ip-recvtos-from-linux


Not sure what effort it would be to implement this.



Regards,

Lukas   





Re: HAProxy Next?

2013-12-17 Thread k simon
- haproxy is a good TCP proxy; now it can classify HTTP traffic, and it would
be cool for it to classify other types of traffic too, such as telnet/ssh/ftp, etc.






On 17/12/13 4:14 PM, Annika Wickert wrote:

Hi all,

we did some thinking about how to improve haproxy and which features
we'd like to see in the next versions.

We came up with the following list and would like to discuss if they 
can be done/should be done or not.
- One global stats socket which can be switched through to see the stats of
every bind process, and also an overall overview summed up from all
backends and frontends.
- One global control socket to control every backend server and set 
them inactive or active on the fly.

- In general better nbproc > 1 support
- Include possibility in configfile to maintain one configfile for 
each backend / frontend pair

- CPU pinning in haproxy without manually using taskset/cpuset
- sflow output
- latency metrics at stats interface (frontend and backend, avg, 95%, 
90%, max, min)

- accesslist for statssocket or ldap authentication for stats socket
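
On the CPU-pinning wish above: later haproxy releases did grow configuration-level pinning, so for reference here is a minimal sketch. The process-to-core numbers are illustrative, and it requires a version that supports the cpu-map keyword:

```
global
    nbproc 4
    # pin worker process N to CPU core N-1, without taskset/cpuset
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
```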

Are there any other things which would be cool? I hope we can have a 
nice discussion about a “fancy” feature set which could be provided by 
lovely haproxy.


Best regards,
Annika
---
Systemadministration

Travian Games GmbH
Wilhelm-Wagenfeld-Str. 22
80807 München
Germany

a.wick...@traviangames.com 
www.traviangames.de 

Sitz der Gesellschaft München
AG München HRB: 173511
Geschäftsführer: Siegfried Müller
USt-IdNr.: DE246258085


This email and its attachments are strictly confidential and are
intended solely for the attention of the person to whom it is addressed.
If you are not the intended recipient of this email, please delete it
including its attachments immediately and inform us accordingly.





Re: RES: RES: RES: RES: RES: RES: RES: RES: High CPU Usage (HaProxy)

2013-11-07 Thread k simon
I ran haproxy (nbproc=6) on FreeBSD 10-beta2, with each frontend bound to its 
own socket and all of them sharing the same backend. Context switches are 
normally 60k+. But the load and throughput confuse me: in the past days I ran 
a single haproxy instance (nbproc=1), and it could handle up to 500 Mbps of traffic.


The info below is not from a busy period:
# uptime
 3:10PM  up 4 days,  4:34, 3 users, load averages: 3.41, 3.82, 3.77

# top -HPS
last pid: 16550;  load averages:  3.92,  4.01,  3.82
up 4+04:31:33  15:07:55
123 processes: 15 running, 77 sleeping, 1 zombie, 30 waiting
CPU 0:  0.0% user,  0.0% nice,  0.0% system, 34.8% interrupt, 65.2% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system, 41.0% interrupt, 59.0% idle
CPU 2: 15.6% user,  0.0% nice, 22.3% system,  0.4% interrupt, 61.7% idle
CPU 3: 17.2% user,  0.0% nice, 31.6% system,  0.0% interrupt, 51.2% idle
CPU 4: 17.2% user,  0.0% nice, 20.3% system,  0.0% interrupt, 62.5% idle
CPU 5: 21.5% user,  0.0% nice, 26.2% system,  0.0% interrupt, 52.3% idle
CPU 6: 12.9% user,  0.0% nice, 21.5% system,  0.0% interrupt, 65.6% idle
CPU 7: 14.1% user,  0.0% nice, 24.6% system,  0.0% interrupt, 61.3% idle
Mem: 871M Active, 692M Inact, 560M Wired, 417M Buf, 1820M Free
Swap: 8192M Total, 8192M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   11 root       155 ki31     0K   128K CPU2    2  79.0H  61.96% [idle{idle: cpu2}]
   11 root       155 ki31     0K   128K CPU6    6  76.1H  58.98% [idle{idle: cpu6}]
   11 root       155 ki31     0K   128K RUN     7  75.5H  58.98% [idle{idle: cpu7}]
   11 root       155 ki31     0K   128K RUN     5  77.1H  56.98% [idle{idle: cpu5}]
   11 root       155 ki31     0K   128K CPU4    4  77.6H  55.96% [idle{idle: cpu4}]
   11 root       155 ki31     0K   128K RUN     0  79.6H  54.98% [idle{idle: cpu0}]
   11 root       155 ki31     0K   128K CPU3    3  78.6H  52.98% [idle{idle: cpu3}]
   11 root       155 ki31     0K   128K RUN     1  79.6H  51.95% [idle{idle: cpu1}]
   12 root       -92    -     0K   496K WAIT    1  20.8H  50.98% [intr{irq257: bge1}]
15786 root        86    0   287M   249M CPU3    3  73:07  48.97% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570
   12 root       -92    -     0K   496K CPU0    0  20.7H  48.00% [intr{irq256: bge0}]
15787 root        86    0   239M   204M CPU4    4  73:53  46.97% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570
15789 root        85    0   251M   214M CPU6    6  76:12  46.00% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570
15788 root        52    0   246M   211M kqread  5  75:49  46.00% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570
15785 root        84    0   315M   275M CPU2    2  74:23  42.97% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570
15790 root        52    0   240M   205M RUN     7  73:22  40.97% /usr/local/sbin/haproxy -q -f /usr/local/etc/haproxy.conf -p /var/run/haproxy.pid -sf 1570


# netstat -b 1
            input        (Total)           output
   packets  errs idrops      bytes    packets  errs      bytes colls
    375218     0      0  232261646     424134     0  231909684     0
    366811     0      0  226394383     420581     0  229453967     0
    354549     0      0  218934804     398562     0  221149945     0
    343143     0      0  207114219     386270     0  208563374     0

# vmstat 1
 procs     memory       page                     disks     faults          cpu
 r b w    avm    fre   flt  re  pi  po    fr   sr ad0 pa0    in     sy    cs us sy id
 4 0 0  2335M  1821M    89   0   0   0    72  133   0   0  1950   6575 11313  7 15 78
 2 0 0  2335M  1821M     0   0   0   0     0  385   0   0 23955 262207 63652 16 34 50
 4 0 0  2335M  1821M     0   0   0   0     0  385   6   0 24381 270892 64521 14 34 51
 1 0 0  2335M  1821M     0   0   0   0     0  385   0   0 23866 263830 62999 12 38 50
 4 0 0  2335M  1821M     0   0   0   0     0  385   0   0 24315 269539 63936 13 36 51


# ifstat -b
      bge0                bge1               vlan67             vlan705            vlan708            vlan709              lagg0
 Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out   Kbps in  Kbps out
315561.8  321251.0  291125.2  293214.0      1.96    726.08  552540.0  64954.99      0.00      0.00  54150.98  548661.9  621974.4  628548.3
281826.2  299858.7  316745.4  300973.6      1.46    705.18  543350.4  65773.75      0.00      0.00  55186.89  534110.3  589127.2  588204.5
279791.3  287347.3  272280.2  269952.9      1.50    686.32  502532.9  59567.50      0.00      0.00  49501.31  496878.4  539236.8  550590.8
272555.7  282285.0  295684.0  285037.7      2.39    650.30  515627.9  62465.26      0.00      0.00
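
The "each frontend bound to its own socket, all sharing one backend" layout described above can be sketched like this. The ports and the backend server are placeholders, and the bind-process keyword needs haproxy 1.5 or later:

```
global
    nbproc 2

# one listening socket per worker process; both feed the same backend
frontend web1
    bind-process 1
    bind :8081
    default_backend pool

frontend web2
    bind-process 2
    bind :8082
    default_backend pool

backend pool
    server ngx1 192.168.10.1:80 check
```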

Does haproxy in transparent mode support FreeBSD's divert mechanism ?

2013-11-06 Thread k simon
Hi, All:
In the past days I have wanted to use pf’s “reply-to” on FreeBSD to solve an IP 
address overlapping problem. But it seems that pf’s “divert-to” and “divert-reply” 
cannot work with haproxy on the same machine. Does haproxy in transparent mode 
support FreeBSD’s divert mechanism?


Regards
Simon
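
For what it's worth, the usual way to put haproxy's transparent mode behind pf is an rdr rule rather than divert(4). A rough sketch follows; the interface em0 and the ports are placeholders:

```
# /etc/pf.conf: redirect inbound web traffic to haproxy on this host
rdr pass on em0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080

# haproxy.conf: accept the redirected connections and, with
# "option transparent", connect out to the original destination
frontend redirected
    bind 127.0.0.1:8080
    mode tcp
    default_backend direct

backend direct
    mode tcp
    option transparent
```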



Re: ACL HTTP not capture all the HTTP traffic ?

2013-07-24 Thread k simon
Hi, Willy,

Yesterday we tested tunnel mode: we removed every "http-server-close" and 
"httpclose" option from the configuration, and haproxy worked well with content 
inspection. We can now confirm that content inspection cannot distinguish "HTTP" 
from "non-HTTP" traffic exactly when "http-server-close" is set on the HTTP backend.
We use haproxy in a special scenario, that is, as both a classifier and a 
forward proxy, so "http-server-close" is important to us too. Can you tell us 
whether there is some way to make "http-server-close" and content inspection 
work together?


configuration:
frontend tcp-in
  bind : 
  mode tcp
  log global
  option tcplog
  tcp-request inspect-delay 30s
  tcp-request content accept if HTTP
  use_backend NginxCluster if HTTP 
  default_backend Direct

backend NginxCluster
  mode http
  option abortonclose
  option http-server-close
  balance uri whole
  log global
  server ngx1 192.168.10.1:80 weight 20 check inter 5s maxconn 1
  server ngx2 192.168.10.2:80 weight 20 check inter 5s maxconn 1
  server ngx3 192.168.10.3:80 weight 20 check inter 5s maxconn 1

backend Direct
  mode tcp
  log global
  option tcplog
  no option httpclose
  no option http-server-close
  no option accept-invalid-http-response
  option transparent
  option abortonclose



Regards
Simon




On 2013-7-21, 6:32 PM, k simon wrote:

> Hi all,
> 
>   We changed "http-server-close" to "httpclose", and found this resolved
> the problem. Now haproxy can accurately distinguish "http" from "non-http"
> traffic. Obviously content inspection works well with short connections, but
> not with long connections. And the 20k+ connections in "fin_wait_2" and
> "close_wait" state have now disappeared.
> 
> 
> Regards
> 
> Simon



Re: ACL HTTP not capture all the HTTP traffic ?

2013-07-21 Thread k simon
Hi all,

   We changed "http-server-close" to "httpclose", and found this resolved 
the problem. Now haproxy can accurately distinguish "http" from "non-http" 
traffic. Obviously content inspection works well with short connections, but 
not with long connections. And the 20k+ connections in "fin_wait_2" and 
"close_wait" state have now disappeared.


Regards

Simon