Hello,

Over the past day I have been searching the net for a definitive answer on the recommended kernel tweaks for IPVS, something along the lines of "if you are doing this, then change that", but that does not exist. :)
I see suggestions on the net for changing all of the following:

/proc/sys/net/core/wmem_max
/proc/sys/net/core/wmem_default
/proc/sys/net/core/rmem_max
/proc/sys/net/core/rmem_default
/proc/sys/fs/file-max
/proc/sys/net/ipv4/tcp_tw_recycle
/proc/sys/net/ipv4/tcp_tw_reuse
/proc/sys/net/ipv4/tcp_max_tw_buckets
/proc/sys/net/ipv4/tcp_fin_timeout
/proc/sys/net/ipv4/tcp_max_syn_backlog
/proc/sys/net/ipv4/tcp_syncookies
/proc/sys/net/ipv4/ip_local_port_range
/proc/sys/net/core/netdev_max_backlog

I assume the right values are very specific to the goal your load balancer is trying to achieve.

On our LVS active/passive pair I am currently seeing a drop in connections every 10 minutes under high load. Turning off the ipvs sync daemon made that 10-minute spike in failures disappear. However, failures now seem to track the number of connections: higher connection counts mean higher error rates. We have a full GigE internet connection and pass TBs of traffic per day in an LVS-NAT setup, and I expect that tuning the magic parameters above will increase the maximum number of connections we can handle.

We do see this error:

IPVS: ip_vs_send_async error

Checking prior posts, this should help and we can implement it:
http://www.mail-archive.com/[email protected]/msg04479.html

Can a few experts on this list suggest which of the values above to target tuning first? (A rough sketch of what I have been experimenting with is below my signature.)

Thank you,
Neal
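P.S. For reference, here is the kind of sysctl fragment I have been experimenting with. The values below are placeholders I picked purely for illustration, not recommendations; the names are just the dotted equivalents of the /proc paths listed above.

    # /etc/sysctl.conf fragment -- placeholder values, apply with "sysctl -p"
    net.core.wmem_max = 8388608            # max socket send buffer (bytes)
    net.core.wmem_default = 8388608        # default socket send buffer (bytes)
    net.core.rmem_max = 8388608            # max socket receive buffer (bytes)
    net.core.rmem_default = 8388608        # default socket receive buffer (bytes)
    fs.file-max = 262144                   # system-wide limit on open file descriptors
    net.ipv4.tcp_tw_recycle = 0            # aggressive TIME_WAIT recycling (known to break clients behind NAT)
    net.ipv4.tcp_tw_reuse = 1              # allow reuse of TIME_WAIT sockets for new outgoing connections
    net.ipv4.tcp_max_tw_buckets = 360000   # cap on sockets held in TIME_WAIT
    net.ipv4.tcp_fin_timeout = 30          # seconds to hold sockets in FIN-WAIT-2
    net.ipv4.tcp_max_syn_backlog = 8192    # pending SYNs per listening socket
    net.ipv4.tcp_syncookies = 1            # fall back to syncookies when the SYN backlog overflows
    net.ipv4.ip_local_port_range = 1024 65535   # ephemeral port range
    net.core.netdev_max_backlog = 4096     # packets queued from the NIC before the stack processes them

The same settings can be changed one at a time with "sysctl -w name=value", or by echoing into the /proc files, e.g. "echo 8388608 > /proc/sys/net/core/wmem_default".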
