On 30/03/2015 07:13 AM, Krishna Kumar Unnikrishnan (Engineering) wrote:
> Hi all,
> 
> I am testing haproxy as follows:
> 
> System1: 24 Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz, 64 GB. This system
>     is running a 3.19.0 kernel and hosts the following servers:
>         1. nginx1 server - cpu 1-2, 1G memory, runs as a Linux
>             container (LXC) pinned via the cpuset.cpus feature; a rough
>             sketch of the pinning is shown after this list.
>         2. nginx2 server - cpu 3-4, 1G memory, runs via LXC.
>         3. nginx3 server - cpu 5-6, 1G memory, runs via LXC.
>         4. nginx4 server - cpu 7-8, 1G memory, runs via LXC.
>         5. haproxy - cpu 9-10, 1G memory, runs via LXC. Runs haproxy
>             ver 1.5.8, configured with the above 4 containers' IP
>             addresses as its backend.
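> 
>     The per-container CPU and memory pinning is done through the LXC
>     cgroup settings; roughly, per container, something along these lines
>     (values shown for nginx1, adjusted per container as listed above):
> 
>         lxc.cgroup.cpuset.cpus           = 1-2
>         lxc.cgroup.memory.limit_in_bytes = 1G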
> 
> System2: 56 Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz, 128 GB. This system
>     is running 3.19.0 and runs 'ab', either against the haproxy node or
>     directly against an nginx container. System1 & System2 are locally
>     connected via a switch with Intel 10G cards.
> 
> With a very small response size of 64 bytes, I am getting the following
> results:
> 
> A. ab -n 100000 -c 4096 http://nginx1:80/64
> -----------------------------------------
> 
> Concurrency Level:      4096
> Time taken for tests:   3.232 seconds
> Complete requests:      100000
> Failed requests:        0
> Total transferred:      28800000 bytes
> HTML transferred:       6400000 bytes
> Requests per second:    30943.26 [#/sec] (mean)
> Time per request:       132.371 [ms] (mean)
> Time per request:       0.032 [ms] (mean, across all concurrent requests)
> Transfer rate:          8702.79 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        9   65 137.4     45    1050
> Processing:     4   52  25.3     51     241
> Waiting:        3   37  19.2     35     234
> Total:         16  117 146.1    111    1142
> 
> Percentage of the requests served within a certain time (ms)
>   50%    111
>   66%    119
>   75%    122
>   80%    124
>   90%    133
>   95%    215
>   98%    254
>   99%   1126
>  100%   1142 (longest request)
> 
> B. ab -n 100000 -c 4096 http://haproxy:80/64
> ----------------------------------------------
> 
> Concurrency Level:      4096
> Time taken for tests:   5.503 seconds
> Complete requests:      100000
> Failed requests:        0
> Total transferred:      28800000 bytes
> HTML transferred:       6400000 bytes
> Requests per second:    18172.96 [#/sec] (mean)
> Time per request:       225.390 [ms] (mean)
> Time per request:       0.055 [ms] (mean, across all concurrent requests)
> Transfer rate:          5111.15 [Kbytes/sec] received
> 
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0  134 358.3     23    3033
> Processing:     2   61  47.7     51     700
> Waiting:        2   50  43.0     42     685
> Total:          7  194 366.7     79    3122
> 
> Percentage of the requests served within a certain time (ms)
>   50%     79
>   66%    105
>   75%    134
>   80%    159
>   90%    318
>   95%   1076
>   98%   1140
>   99%   1240
>  100%   3122 (longest request)
> 
> I expected haproxy to deliver better results with multiple connections,
> since haproxy will round-robin between the 4 servers. I have done no
> tuning and have used the config file at the end of this mail. With a
> 256K file size, the times are slightly better for haproxy vs nginx. I
> notice that the percentage of requests served is similar for both cases
> up to about the 90th percentile.
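> 
> The relevant part of that config is essentially just a round-robin
> backend listing the four containers, roughly along these lines
> (addresses are placeholders here; the actual file is at the end of the
> mail):
> 
>     frontend http-in
>         bind *:80
>         default_backend nginx-pool
> 
>     backend nginx-pool
>         balance roundrobin
>         server nginx1 <nginx1-ip>:80
>         server nginx2 <nginx2-ip>:80
>         server nginx3 <nginx3-ip>:80
>         server nginx4 <nginx4-ip>:80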
> 
> Any help is very much appreciated.
> 

You haven't mentioned the CPU load on the host and on the guest systems.
Use pidstat -p $(pgrep -d ',' haproxy) -u 1 to monitor the CPU usage of
the haproxy processes, and mpstat -P ALL 1 to check the per-CPU load from
software interrupts.
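
For example, run both on System1 (they are part of the sysstat package)
while the ab test is in progress:

    # per-process CPU usage of all haproxy processes, one sample per second
    pidstat -p $(pgrep -d ',' haproxy) -u 1

    # per-CPU utilisation, one sample per second; the %soft column shows
    # time spent servicing software interrupts
    mpstat -P ALL 1

A single core saturated in %soft, or one haproxy process stuck at 100%,
would be the first thing to look for.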


Cheers,
Pavlos

