Doesn’t seem I can get more than 50,000 req/s with ab or with wrk
I have cache disabled, and I’m only running a 1G link right now. So far with wrk or ab, the real destination is a simple nginx server running HTTP only, and I can’t get more than 50,000 req/s. I will dig into it more, but is there something obvious that needs to be turned on on the proxy server side?

./wrk -c 256 -t 256 -d 10 -s ./scripts/via_proxy_get1.lua http://x.x.x.x:8080
Running 10s test @ http://x.x.x.x:8080
  256 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.63ms   35.20ms 777.98ms   97.53%
    Req/Sec    169.26     55.34   530.00    74.19%
  429181 requests in 10.10s, 354.04MB read
Requests/sec:  42492.09
Transfer/sec:     35.05MB

Benchmarking 10.12.17.58 [through :8080] (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
^C

Server Software:        ATS/6.2.0
Server Hostname:        10.12.17.58
Server Port:            80

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      500
Time taken for tests:   88.196 seconds
Complete requests:      3674058
Failed requests:        0
Write errors:           0
Keep-Alive requests:    3674058
Total transferred:      3156015822 bytes
HTML transferred:       2248523496 bytes
Requests per second:    41657.91 [#/sec] (mean)
Time per request:       12.003 [ms] (mean)
Time per request:       0.024 [ms] (mean, across all concurrent requests)
Transfer rate:          34945.45 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0      64
Processing:     1   12  10.9      7     687
Waiting:        1   12  10.9      7     687
Total:          1   12  11.0      7     702

Percentage of the requests served within a certain time (ms)

Thanks,
Di Li


> On Dec 13, 2016, at 10:20 AM, Di Li <[email protected]> wrote:
>
> Hey Steve,
>
> Can you share some details on config or performance tuning or results?
>
> Thanks,
> Di Li
>
>> On Dec 13, 2016, at 9:46 AM, Lerner, Steve <[email protected]> wrote:
>>
>> I’ve benchmarked ATS forward proxy post with cache disabled to near 10Gbps
>> on an OpenStack VM with a 10Gbps NIC.
>> I used Apache Bench for this.
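For reference, a common way to drive requests through a forward proxy with Apache Bench is its -X option, which matches the "[through :8080]" line in the ab output above. A minimal sketch, assuming the proxy listens on x.x.x.x:8080 and the nginx origin is 10.12.17.58 as in the runs above (request count and concurrency simply mirror that run):

  ab -X x.x.x.x:8080 -k -n 3000000 -c 500 http://10.12.17.58/

Keeping -k (keep-alive) matters for this kind of test; without it, connection setup usually dominates and both the load generator and the proxy top out well below their real ceiling.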
>> From: [email protected] on behalf of Di Li <[email protected]>
>> Reply-To: [email protected]
>> Date: Tuesday, December 13, 2016 at 12:32 PM
>> To: [email protected]
>> Subject: Re: benchmark ATS
>>
>> using 6.2.0, repeatable
>>
>>
>> Thanks,
>> Di Li
>>
>>
>> On Dec 13, 2016, at 1:28 AM, Reindl Harald <[email protected]> wrote:
>>
>>
>> On 13.12.2016 at 09:45, Di Li wrote:
>>
>> When I was doing some benchmarking of the outbound proxy with http_cache
>> enabled, first of all the performance was pretty low (I guess I didn't do
>> it right with the cache enabled), and second, when I used wrk with 512
>> connections and 40 threads going through the proxy over HTTP, it caused a
>> core dump. Here's the trace:
>>
>> And when I disable http.cache, the performance goes up a lot, and there
>> are no more core dumps at all.
>>
>>
>> FATAL: CacheRead.cc:249: failed assert `w->alternate.valid()`
>> traffic_server: using root directory '/ngs/app/oproxy/trafficserver'
>> traffic_server: Aborted (Signal sent by tkill() 20136 1001)
>> traffic_server - STACK TRACE
>>
>> is this repeatable?
>> which version of ATS?
>>
>> at least mentioning the software version should be common sense
>>
>> had one such crash after upgrade to 7.0.0 and was not able to reproduce it,
>> not even with an "ab -k -n 10000000 -c 500" benchmark
>>
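Coming back to the question at the top of the thread about what might need to be turned on on the proxy side: below is a rough records.config sketch of the knobs most often checked for pure-proxy (cache-off) throughput. The keys are standard ATS settings, but the values are assumptions for illustration, not taken from the poster's box, and should be verified against the local records.config and hardware.

  CONFIG proxy.config.http.cache.http INT 0
      # cache stays off, matching the test above
  CONFIG proxy.config.exec_thread.autoconfig INT 0
  CONFIG proxy.config.exec_thread.limit INT 16
      # assumed value: roughly one net thread per core is a common starting point
  CONFIG proxy.config.http.keep_alive_enabled_in INT 1
  CONFIG proxy.config.http.keep_alive_enabled_out INT 1
      # reuse both client-side and origin-side connections
  CONFIG proxy.config.http.server_session_sharing.pool STRING thread
      # share origin sessions per worker thread rather than per transaction

One more data point from the numbers already posted: at roughly 35 MB/s of transfer (about 280 Mbit/s) the 1G link is under a third utilized, so the ceiling here is more likely thread count, connection handling, or the load generator itself than raw bandwidth.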
