Hi,

I am running haproxy 1.4.18 on RHEL6 (64-bit). I am trying to sustain a
large number of concurrent connections (I have long-polling backends),
ideally several hundred thousand. My servers have 24G of RAM and are
doing nothing else.
I have two problems.

Firstly, if I increase the "nofile" hard limit in
/etc/security/limits.conf to anything over 1048576 (2^20; I believe
this is a count of open files, not bytes), I can't SSH to the box or
start haproxy: new SSH connections are immediately closed without any
entries in the logs. The kernel-wide limit (/proc/sys/fs/file-max) is
set to >200k without problem. Haproxy balks as follows:

Starting haproxy: [WARNING] 276/012759 (2924) :
[/usr/sbin/haproxy.main()] Cannot raise FD limit to 2000046.
[WARNING] 276/012759 (2924) : [/usr/sbin/haproxy.main()] FD limit
(1048576) too low for maxconn=1000000/maxsock=2000046. Please raise
'ulimit-n' to 2000046 or more to avoid any trouble.
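
For reference, the changes I'm making look roughly like this (a sketch;
the 2000046 figure comes from haproxy's warning above, and the exact
values on my box may differ slightly):

```
# /etc/security/limits.conf -- per-user FD limits; anything over
# 1048576 here is what triggers the SSH lock-out described above
*    soft    nofile    2000046
*    hard    nofile    2000046

# Kernel-wide open-file limit, via /etc/sysctl.conf
# (or: echo 2000046 > /proc/sys/fs/file-max)
fs.file-max = 2000046
```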

Secondly, I notice that as my number of connections rises I get "Out
of socket memory" errors in the kernel log. Google led me to believe
that this is usually caused by excessive orphan sockets, but that
seems not to be the case here:

[root@frontend2 log]# cat /proc/net/sockstat | grep orphan
TCP: inuse 134376 orphan 0 tw 87 alloc 150249 mem 100217
(output taken while the problem was occurring, when haproxy was mostly
inaccessible and most connections were being refused)

I also seem to have plenty of spare sockets:

[root@frontend2 log]# cat /proc/sys/net/ipv4/tcp_mem
2303712 3071616 4607424
[root@frontend2 log]# cat /proc/net/sockstat
sockets: used 150458
TCP: inuse 134376 orphan 1 tw 69 alloc 150250 mem 100217
UDP: inuse 12 mem 3
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

(100217 < 4607424 by some margin).
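
If I understand tcp(7) correctly, both the sockstat "mem" figure and
the tcp_mem thresholds are counted in pages (typically 4096 bytes),
not bytes, so the actual usage works out to roughly:

```shell
# Convert the sockstat "mem" page count above into MiB.
# (Assumes the usual 4096-byte page size; `getconf PAGESIZE` confirms it.)
pages=100217
page_size=4096
echo "$(( pages * page_size / 1024 / 1024 )) MiB of TCP buffer memory in use"
# ...against a tcp_mem ceiling of 4607424 pages, i.e. about 17 GiB.
```

So usage is well under the tcp_mem maximum, which makes the "Out of
socket memory" message all the more puzzling.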

Ideally I'd like to set the number of open files and sockets as close
to unlimited as possible, and allow them to use all the system RAM.
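
On the haproxy side, my understanding from the warning above is that
the global section needs something like this (values illustrative;
ulimit-n has to be at least maxsock, i.e. roughly 2*maxconn plus a few
extra FDs):

```
global
    maxconn  1000000
    ulimit-n 2000046    # must cover maxsock=2000046 per the warning
```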

Would anybody with more knowledge in this area be able to shed any light?

- Alex
