On Sun, 12 Jun 2011, Jenny Lee wrote:

With tcp_fin_timeout set to its theoretical minimum of 12 secs, we can do 5K req/s 
with 64K ports.
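
As a rough sanity check on that figure (assuming ~64,000 usable ephemeral ports, and that each request burns one local port which then sits in TIME_WAIT), the port pool bounds the sustainable request rate:

```python
# Back-of-the-envelope bound: steady-state request rate when every request
# holds one ephemeral port for the TIME_WAIT interval. Figures illustrative.

PORTS = 64_000  # assumed usable ephemeral port pool ("64K ports")

def max_req_per_sec(time_wait_secs):
    """Rate at which ports can be recycled in steady state."""
    return PORTS / time_wait_secs

print(max_req_per_sec(12))  # ~5333 req/s with a 12 s TIME_WAIT, i.e. "5K req/s"
print(max_req_per_sec(60))  # ~1066 req/s with the default 60 s
```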

Setting tcp_fin_timeout had no effect for me. Apparently there is conflicting / 
outdated information everywhere: tcp_fin_timeout actually governs FIN-WAIT-2, not 
TIME_WAIT, and I could not lower TIME_WAIT from its default of 60 secs, which is 
hardcoded (as TCP_TIMEWAIT_LEN) into include/net/tcp.h. But I doubt this would 
have any effect when you are constantly loading the machine.

Making localhost-to-localhost connections didn't help either.

I am not a network guru, so of course I am probably doing things wrong. But no 
matter how wrong you do things, nothing escapes brute-forcing :) And I have 
tried everything!

I can't do more than 450-470 reqs/sec even with 200K in "/proc/sys/net/netfilter/nf_conntrack_max" 
and "/sys/module/nf_conntrack/parameters/hashsize". This lets me bypass "conntrack 
table full" issues, but my ports run out.
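
For what it's worth, that 450-470 ceiling looks like the default ephemeral port range running dry rather than conntrack. A sketch of the arithmetic, assuming the common default ip_local_port_range of 32768-61000 (an assumption; check /proc/sys/net/ipv4/ip_local_port_range on your box) and the hardcoded 60 s TIME_WAIT:

```python
# Sketch: does the default ephemeral range explain ports "running out"?
# Assumes the stock range 32768-61000 and the hardcoded 60 s TIME_WAIT.

port_range = 61000 - 32768 + 1   # ~28K usable client ports
time_wait = 60                   # seconds a closed socket occupies its port

sustainable = port_range / time_wait
print(sustainable)               # ~470 req/s before the pool is exhausted
```

Which lands right on the observed 450-470 reqs/sec, so widening ip_local_port_range (or spreading the load across more source IPs) would be the thing to try.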

Could you be kind enough to specify which OS you are using and if you are 
running the benches for extended periods of time?

Any TCP tuning options you are doing also would be very useful. Of course, when 
you are back in the office.

As I mentioned, we find your work on ACLs and workers valuable.

I'm running Debian with custom built kernels.

In the testing that I have done over the years, I have had tests run at 6000+ 
connections/sec through forking proxies. (The proxies log only when they get a 
new connection, and I calculate connection rates from those logs, so I know the 
clients aren't using persistent or keep-alive connections.)

Unfortunately, the machine in my lab with squid on it is unplugged right now. I 
can get at the machines running ab and apache remotely, so I can hopefully get 
logged in and give you the kernel settings in the next couple of days. (Things 
are _extremely_ hectic through most of Monday, so it'll probably be Monday night 
or Tuesday before I get a chance.)

David Lang
