Re: VM benchmarks

2010-10-29 Thread Ariel
On Oct 28, 2010, at 5:38 PM, Cyril Bonté wrote:
 I reproduced nearly the same environment as you described and could not 
 reproduce this latency (only 1 nginx instance in my case).

First, I want to say thank you for your tests!  I learned a lot from seeing 
what you did.

The VirtualBox server I was using before is in another building.  I've asked
many times but still don't know much about how it is set up; I just know that
if I ask for a new development VM, the guy will set one up for me.  So to come
closer to the tests you did, I installed VirtualBox at home.  I also added
option http-server-close to haproxy.cfg, because this is closer to what it
would be in my desired environment.  (And I also realize my first test didn't
need two backends: without this option, haproxy maintains keepalive with the
backend server and doesn't switch until my keepalive expires.  I feel a little
silly!)
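
For reference, this is roughly the shape of my haproxy.cfg now.  It's a
minimal sketch; the frontend/backend names and the backend address are
placeholders, not my real config:

    defaults
        mode http
        option http-server-close   # close the server side, keep client keepalive
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind :9091                 # the port I tested against from the office
        default_backend nginx

    backend nginx
        server web1 192.168.1.10:80   # placeholder address for the nginx VM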

My computer at home:
-
OS: Windows 7 Ultimate x64
CPU: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz
Memory: 4 GB DDR3-1066

VM1:
--
Virtualbox 3.2.10
OS: Ubuntu 10.04 (new install)
Running haproxy 1.4.8
Kernel: 2.6.32-25-server #45-Ubuntu SMP x86_64 GNU/Linux
1 CPU, 512 MB RAM, VT-x enabled
Adapter Type: Intel PRO/1000 MT Desktop (82540EM)
(Bridged)

VM2 (same as above):
--
Running nginx 0.7.65

I ran my browser from the host OS, clicking the button that fires the ajax
call, and my results were much like yours: almost always 2-3ms for every
HTTP 200 response directly from nginx (no proxy), and a very consistent 3-4ms
through haproxy.

I tried connecting from my office to the computer at home: directly to nginx I
got 10-11ms (8ms low, 14ms high).  Through haproxy it was exactly the same.

So I think the high latency I saw in my first VirtualBox environment (I still
get the same strange results today, 150ms or so when going through haproxy) is
because other VMs on that computer are using lots of resources, or maybe it is
not configured correctly.  I tried running rose...@home on my computer at home
(on the host OS) to keep CPU usage very high and ran the tests again.  I got
the same results as before... everything was very fast and haproxy was barely
noticeable.

I also tried the `ab` utility you showed me, but at home I have a D-Link
router and it explodes when it gets 10 requests per second.
This was the best run that finished without crashing, from the office:
`ab -n1000 -c1 http://my public ip address at home:9091/ajax.txt`
Requests per second:    8.48 [#/sec] (mean)
Time per request:       117.906 [ms] (mean)

I will try to build a network as close as possible to what we have in
production right now and keep testing.  Thank you very much for showing me how
you did your tests!  What other tools like ab should I try?  I see JMeter a
lot on Google, and one person mentioned httperf.
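
If I read the httperf docs right, a run roughly equivalent to my ab test above
would look something like the line below; I haven't actually tried it yet, so
take the parameters as a guess on my part:

    httperf --server <my home ip> --port 9091 --uri /ajax.txt \
            --num-conns 1000 --rate 10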

-a


RE: VM benchmarks

2010-10-29 Thread Mike Hoffs
Hi Ariel,

If you want, I can do some tests on an Intel Modular Server with empty VTrak
storage on a VMware virtualization platform.


With kind regards,


Mike Hoffs




RE: VM benchmarks

2010-10-28 Thread Angelo Höngens
I'm wondering what the difference would be between the standard slow e1000
virtual network card and the fast paravirtualized vmxnet3 virtual network
card. In theory, the latter should be much, much faster.
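
For anyone who wants to test this: as far as I know, selecting the adapter is
a one-line change in the VM's .vmx file (assuming VMware Tools are installed
in the guest so the vmxnet3 driver is available), something like:

    ethernet0.virtualDev = "vmxnet3"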

-- 

 
With kind regards,

Angelo Höngens
Systems Administrator

--
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
T: +31 (0)13 5811088
F: +31 (0)13 5821239

mailto:a.hong...@netmatch.nl
http://www.netmatch.nl
--


 -Original Message-
 From: Les Stroud [mailto:l...@lesstroud.com]
 Sent: Wednesday, 27 October 2010 21:55
 To: Ariel
 Cc: haproxy
 Subject: Re: VM benchmarks
 
 Check out this thread I had earlier in the month on the same topic:
 http://www.formilux.org/archives/haproxy/1010/3910.html
 
 Bottom line: vmware will slow down your upper level transaction limit
 by a significant amount (like an order of magnitude).  The software
 drivers underneath the network stack and the system stack add enough
 overhead to reduce your maximum transaction ceiling to around 6000
 trans/sec on haproxy (this is without a backend constraint).  On a
 hardware device, I am seeing much higher numbers (50k).
 
 LES
 
 
 On Oct 26, 2010, at 10:38 AM, Ariel wrote:
 
  Does anyone know of studies done comparing haproxy on dedicated
 hardware vs virtual machine?  Or perhaps some virtual machine specific
 considerations?
  -a
 




Re: VM benchmarks

2010-10-28 Thread Willy Tarreau
On Thu, Oct 28, 2010 at 07:10:32AM +, Angelo Höngens wrote:
 I'm wondering what the difference would be between the standard slow e1000 
 virtual network card and the fast paravirtualized vmxnet3 virtual network 
 card. In theory, the latter one should be much, much faster.. 

We've tested that at Exceliance. Yes, it's a lot faster, but still a lot
slower than the native machine. To give you an idea, you can get about
6000 connections per second under ESX on a machine that natively supports
between 25000 and 40000 depending on the NICs.

Regards,
Willy




Re: VM benchmarks

2010-10-28 Thread Cyril Bonté
On Thursday, 28 October 2010 at 15:58:55, Ariel wrote:
 Hi Cyril,
 My test wasn't designed to look at higher load averages (many users at
 once) since the problem I was looking at was just increased latency for
 all requests.

You mean that with only 1 request at a time through haproxy you obtain a
response in 150ms where a direct request gives a response in 10 to 30ms?
I agree, this looks really strange.

I reproduced nearly the same environment as you described and could not 
reproduce this latency (only 1 nginx instance in my case).
To be clear on the config I used (I didn't take time to set up a clean and
tuned installation):
- 1 server running VirtualBox 3.2.8
  OS : Mandriva Cooker (not recently updated)
  Kernel : Linux localhost 2.6.35.6-server-1mnb #1 SMP ... x86_64 GNU/Linux
  CPU : Intel(R) Core(TM)2 Duo CPU E6750  @ 2.66GHz
  Memory : 4 GB
  IP : 192.168.0.128

  With 2 small VMs based on a Debian Lenny 5.0.6 :
Kernel : 2.6.26-2-amd64 #1 SMP ... x86_64 GNU/Linux

- Instance 1 :
  1 CPU allocated
  Memory : 512 MB
  IP : 192.168.0.23
  HAProxy 1.4.8 installed with your configuration (only one backend server
  pointing to the second VM instance)

- Instance 2 :
  1 CPU allocated
  Memory : 384 MB
  IP : 192.168.0.24
  nginx 0.7.65 embedding your ajax test

- 1 laptop used as the client
  OS : Ubuntu 10.10
  Kernel : 2.6.35-22-generic #35-Ubuntu SMP ... i686 GNU/Linux
  Memory : 2 GB

TEST 1 : Firefox/Firebug
- direct access to nginx via 192.168.0.24 : firebug shows response times of
about 2ms
- access to haproxy via 192.168.0.23 : response times are about 3ms

TEST 2 : Chromium/Firebug lite
- direct access to nginx via 192.168.0.24 : response times between 10 and 15ms
- access to haproxy via 192.168.0.23 : response times still between 10 and 
15ms

TEST 3 : using ab for 10000 requests with a concurrency of 1 (no keepalive)
- via nginx : ab -n10000 -c1 http://192.168.0.24/ajax.txt
Percentage of the requests served within a certain time (ms)
  50%  2
  66%  2
  75%  2
  80%  2
  90%  2
  95%  2
  98%  2
  99%  3
 100% 16 (longest request)

- via haproxy : ab -n10000 -c1 http://192.168.0.23/ajax.txt
Percentage of the requests served within a certain time (ms)
  50%  3
  66%  3
  75%  4
  80%  4
  90%  4
  95%  4
  98%  5
  99%  5
 100% 14 (longest request)
The results are similar.

TEST 4 : using ab for 10000 requests with a concurrency of 10 (no keepalive)
- via nginx : ab -n10000 -c10 http://192.168.0.24/ajax.txt
Percentage of the requests served within a certain time (ms)
  50%  6
  66%  6
  75%  6
  80%  6
  90%  7
  95%  8
  98%  8
  99%  9
 100% 25 (longest request)

- via haproxy : ab -n10000 -c10 http://192.168.0.23/ajax.txt
Percentage of the requests served within a certain time (ms)
  50% 18
  66% 21
  75% 23
  80% 24
  90% 30
  95% 35
  98% 40
  99% 43
 100% 56 (longest request)
Ok, it starts to be less responsive, but this is because the VirtualBox server
now uses nearly 100% of its 2 CPU cores.
This is still far from what you observe, though.

TEST 5 : using ab for 10000 requests with a concurrency of 100 (no keepalive)
Just to be quite aggressive with the VMs.
- via nginx : ab -n10000 -c100 http://192.168.0.24/ajax.txt
Percentage of the requests served within a certain time (ms)
  50% 54
  66% 55
  75% 57
  80% 65
  90% 76
  95% 78
  98% 79
  99% 81
 100%    268 (longest request)

- via haproxy : ab -n10000 -c100 http://192.168.0.23/ajax.txt
Percentage of the requests served within a certain time (ms)
  50%    171
  66%    184
  75%    192
  80%    198
  90%    217
  95%    241
  98%    287
  99%    314
 100%   3153 (longest request)

I can't help you much more, but I hope these results will give you some points
of comparison. What is the hardware of your VirtualBox server?

-- 
Cyril Bonté



Re: VM benchmarks

2010-10-27 Thread Cyril Bonté
Hi Ariel,

On Wednesday, 27 October 2010 at 16:58:19, Ariel wrote:
 It's really strange.  I notice a huge improvement in non-virtualized
 environments as well.
 
 I modeled my network on all old laptops (like sub-500MHz era) using haproxy
 pointed to two backend nginx servers and I get 10-30ms response for static
 content (client and servers all on the same 100Mbit LAN).  I then modeled
 the same setup in VirtualBox (all on the same computer) from client (host
 OS) to servers (three guest OSes) and I have an average time to fully
 downloaded content of over 150ms.  And yes, the CPU supports VT-x and the
 virtualization is configured to use it.

Can you describe precisely the benchmark you used to measure the response time
(concurrency, number of requests, duration, static file(s) size(s), ...) and
provide your configuration file?

Also, which network mode did you use in VirtualBox, and which haproxy version
was running?

For simple tests, I've already played with haproxy in VirtualBox, KVM and
OpenVZ, but never saw such differences. That can depend on your tests, though.
By the way, I still have my VirtualBox VMs available, so I can try to
reproduce your tests.

-- 
Cyril Bonté



VM benchmarks

2010-10-26 Thread Ariel
Does anyone know of studies done comparing haproxy on dedicated hardware vs 
virtual machine?  Or perhaps some virtual machine specific considerations?
-a


RE: VM benchmarks

2010-10-26 Thread Simon Green - Centric IT Ltd
Hi,

I'd be interested to see the same test with all devices in the same location.
There shouldn't be any reason for this much difference! We run HAProxy on ESX,
so I might take a spare server to the DC and V2P the servers over to it for
testing.

Will let you know on this one...



-Original Message-
From: Daniel Storjordet [mailto:dan...@desti.no] 
Sent: 26 October 2010 22:00
To: Ariel; haproxy@formilux.org
Subject: Re: VM benchmarks


We just moved HAProxy from ESXi servers onto two dedicated Atom servers.

In the first setup, the HAProxy installations balanced two webservers in the
same ESXi environment. Web access times for this config were between
120-150ms (Connect, Request, Download).

In the new config, the dedicated HAProxy boxes are located in a separate
datacenter 500km away from the same ESXi web servers. With this config we get
lower web access times: between 110-130ms (Connect, Request, Download).

I expect that also moving the web servers to the new datacenter will result in
even better results.

--
Kind regards,

Daniel Storjordet

D E S T ! N O :: Strandgata 117 :: 4307 Sandnes
Mob 45 51 73 71 :: Tel 51 62 50 14
dan...@desti.no :: http://www.desti.no
www.destinet.no - Web publishing online
www.func.no - Flight search online



On 26.10.2010 16:38, Ariel wrote:
 Does anyone know of studies done comparing haproxy on dedicated hardware vs 
 virtual machine?  Or perhaps some virtual machine specific considerations?
 -a





Re: VM benchmarks

2010-10-26 Thread Hank A. Paulson
I don't have benchmarks, but I have sites running haproxy on Xen VMs, with
apache also on Xen VMs, and I can pump 120 Mbps and 80 million hits a day
through one haproxy VM. That is with haproxy rsyslogging all requests to 2
remote rsyslog servers on top of serving the requests, with some layer 7 ACLs
routing requests to different backends (see the sketch below). Only 50-75
backend servers total, though.
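
Roughly, the relevant pieces look like the sketch below; the addresses, ACL
and backend names are made up for illustration, not my production config:

    global
        log 10.0.0.1 local0     # first remote rsyslog server (example address)
        log 10.0.0.2 local0     # second remote rsyslog server (example address)

    defaults
        mode http
        log global              # send request logs to both servers above
        option httplog
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend public
        bind :80
        acl is_static path_beg /static   # example layer 7 rule
        use_backend static_farm if is_static
        default_backend app_farm

    backend static_farm
        server s1 10.0.1.10:80

    backend app_farm
        server a1 10.0.1.20:80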


HTTP keepalive helped a lot with the type of requests that haproxy serves, so
it reduced the workload somewhat compared to the non-keepalive version.


I also use auto-splicing (option splice-auto) on there to reduce overhead
somewhat.

On 10/26/10 7:38 AM, Ariel wrote:

Does anyone know of studies done comparing haproxy on dedicated hardware vs 
virtual machine?  Or perhaps some virtual machine specific considerations?
-a