Hi Steve,

We had those settings in place before I made the post, and so far I have stuck with the same numbers.
Btw, there is a good article talking about tcp_tw_recycle and tcp_tw_reuse that you may want to check out; sometimes tw_recycle is evil: https://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html

Thanks,
Di Li

> On Dec 13, 2016, at 11:08 AM, Lerner, Steve <[email protected]> wrote:
>
> We use Ubuntu Server and in the end the only tuning was:
>
> /etc/sysctl.conf:
>     net.ipv4.tcp_tw_recycle = 1
>     net.core.somaxconn = 65535
>     net.ipv4.tcp_fin_timeout = 15
>     net.ipv4.tcp_keepalive_time = 300
>     net.ipv4.tcp_keepalive_probes = 5
>     net.ipv4.tcp_keepalive_intvl = 15
>
> But of that batch I think only SOMAXCONN made the difference. Try with just that tuning and then add the rest.
>
> The test was simply:
>
>     ab -p post.txt -l -r -n 1000000 -c 20000 -k -H "Host: [apache httpd server IP]" http://[apache traffic server forward proxy IP]:8080/index.html
>
> where post.txt is the file to post.
>
> You can study the apache bench manpage to understand the flags used and vary them to see the results. I'd use multiple client VMs running posts via apache bench targeting the single proxy server and could easily hit 9 Gbps and above.
>
> To see the performance, we used the commands ss -s and top.
> Run these on all the machines involved to keep an eye on everything.
>
> This was all run manually and quickly.
>
> -Steve
>
> Steve Lerner | Director / Architect - Performance Engineering | m 212.495.9212 | [email protected]
>
> From: <[email protected]> on behalf of Di Li <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Tuesday, December 13, 2016 at 1:20 PM
> To: "[email protected]" <[email protected]>
> Subject: Re: benchmark ATS
>
> Hey Steve,
>
> Can you share some details on config, performance tuning, or results?
>
> Thanks,
> Di Li
>
> On Dec 13, 2016, at 9:46 AM, Lerner, Steve <[email protected]> wrote:
>
> I've benchmarked ATS forward-proxy POSTs with cache disabled to near 10 Gbps on an OpenStack VM with a 10 Gbps NIC.
> I used Apache Bench for this.
>
> Steve Lerner | Director / Architect - Performance Engineering | m 212.495.9212 | [email protected]
>
> From: <[email protected]> on behalf of Di Li <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Tuesday, December 13, 2016 at 12:32 PM
> To: "[email protected]" <[email protected]>
> Subject: Re: benchmark ATS
>
> using 6.2.0, repeatable
>
> Thanks,
> Di Li
>
> On Dec 13, 2016, at 1:28 AM, Reindl Harald <[email protected]> wrote:
>
> On 13.12.2016 at 09:45, Di Li wrote:
>
> > When I was doing some benchmarking of the outbound proxy with http_cache enabled, first of all the performance was pretty low; I guess I didn't do it right with the cache enabled. Second, when I used wrk with 512 connections and 40 threads going through the proxy over HTTP, it caused a core dump; here's the trace.
> >
> > And when I disabled http.cache, the performance went up a lot, and there were no more core dumps at all.
> >
> >     FATAL: CacheRead.cc:249: failed assert `w->alternate.valid()`
> >     traffic_server: using root directory '/ngs/app/oproxy/trafficserver'
> >     traffic_server: Aborted (Signal sent by tkill() 20136 1001)
> >     traffic_server - STACK TRACE
>
> is this repeatable?
> which version of ATS?
>
> At the least, mentioning the software version should be common sense.
>
> I had one such crash after upgrading to 7.0.0 and was not able to reproduce it, even with an "ab -k -n 10000000 -c 500" benchmark
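
For anyone trying Steve's tuning, a minimal sketch of applying those sysctl settings on an Ubuntu host, assuming shell access with sudo. tcp_tw_recycle is left commented out here because it is exactly the setting the Bernat article above warns about (and it was removed from Linux entirely in 4.12):

    # Back up, then append Steve's settings to /etc/sysctl.conf
    sudo cp /etc/sysctl.conf /etc/sysctl.conf.bak
    sudo tee -a /etc/sysctl.conf <<'EOF'
    # net.ipv4.tcp_tw_recycle = 1   # read the time-wait article before enabling
    net.core.somaxconn = 65535
    net.ipv4.tcp_fin_timeout = 15
    net.ipv4.tcp_keepalive_time = 300
    net.ipv4.tcp_keepalive_probes = 5
    net.ipv4.tcp_keepalive_intvl = 15
    EOF

    # Load the new values and spot-check the one Steve says mattered most
    sudo sysctl -p
    sysctl net.core.somaxconn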

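And a rough sketch of the benchmark loop itself. The addresses below are documentation-range placeholders standing in for the bracketed IPs in Steve's command, and the payload size, test duration, and the wrk invocation are illustrative guesses, not values from the thread:

    PROXY=203.0.113.10    # ATS forward proxy (placeholder)
    ORIGIN=203.0.113.20   # Apache httpd origin (placeholder)

    # A payload to POST; size it however you like
    head -c 1024 /dev/urandom > post.txt

    # Steve's ab test: 1M keep-alive POSTs over 20k concurrent connections;
    # -r tolerates receive errors, -l tolerates variable response lengths.
    # -c 20000 usually needs a raised open-file limit (ulimit -n) first.
    ab -p post.txt -l -r -n 1000000 -c 20000 -k \
       -H "Host: ${ORIGIN}" "http://${PROXY}:8080/index.html"

    # Di Li's crash setup used wrk with 40 threads and 512 connections;
    # an equivalent invocation might look like:
    wrk -t40 -c512 -d60s -H "Host: ${ORIGIN}" "http://${PROXY}:8080/index.html"

    # While either runs, watch every machine involved with:
    ss -s    # socket summary: totals and TIME-WAIT counts
    top      # per-process CPU and memory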