Re: [Users] How to make oVirt I/O write faster than Virtualbox and others?

2013-03-15 Thread Itamar Heim

On 03/13/2013 05:53 PM, Adrian Gibanel wrote:

Although I haven't done a proper check, I think it has improved a lot by 
disabling CPU frequency scaling and setting the governor to performance.

Take a look at:
http://forums.fedoraforum.org/showthread.php?t=272109

Proxmox with a direct LV as the hard disk is still faster, but that makes sense 
because oVirt 3.1 only worked with files on a filesystem and not with LVs. Maybe 
3.2's direct LUN support also implies LV support.


You can check whether that's the difference by using an LV for the disk 
(via a custom hook). If the performance change is dramatic, it may be worth 
considering local storage via LVM rather than a local filesystem.
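
A rough sketch of such a manual test outside oVirt (the volume group, LV name, image path and the qemu-kvm binary name below are illustrative only and will differ per host):

# Carve out an LV to hold the guest disk (name and size are examples).
lvcreate -L 300G -n vm_test_disk vg_data

# Optionally seed it from the existing preallocated image file.
dd if=/path/to/guest.img of=/dev/vg_data/vm_test_disk bs=1M

# Boot the guest straight from the LV, keeping cache=none as oVirt does,
# but with aio=native, which generally suits block storage.
qemu-kvm -m 2048 -smp 1 \
  -drive file=/dev/vg_data/vm_test_disk,if=virtio,format=raw,cache=none,aio=native

If the mysql test then drops back to Proxmox-like numbers, the file-based storage path is the likely culprit.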

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How to make oVirt I/O write faster than Virtualbox and others?

2013-03-13 Thread Adrian Gibanel
- Original Message - 

> From: "Yaniv Kaul" 
> To: "Adrian Gibanel" 
> CC: "users" 
> Sent: Thursday, March 7, 2013 8:08:55
> Subject: Re: [Users] How to make oVirt I/O write faster than
> Virtualbox and others?

> - Original Message -
> >
> > Every benchmark out there features KVM as the best virtualisation
> > technology, even in the I/O write category. My results with oVirt
> > are disappointing. So I'm going to describe my test machine and setup,
> > and ask you for advice to find out what's wrong. If you need any more
> > data, please ask for it. I like oVirt mostly because of its
> > datacentre-aware web manager, but if it gets unusable I would have
> > to take a look at other systems.
> > Some random questions:
> >
> > * Is it a problem that the Sandy Bridge architecture is being detected
> > as an Intel Conroe architecture?
> > * Is there any easy way to test aio=native in oVirt when running
> > virtual machines just for testing it?

> Our testing showed that aio=threads works better for file-based
> storage (vs. aio=native for block storage).

> > * Should I test oVirt 3.2? Is there any improvement in I/O writing?

> Probably not, but you should try with oVirt 3.2 nevertheless, for the
> wealth of other features that might be useful now or later (direct
> LUN for example).
Interesting.

> > * What about Fedora 18? Any improvements in I/O writing or, I don't
> > know, the Virtio system?

> Probably a newer QEMU and KVM can provide better performance. Did not
> go through the complete changelog to verify it.
Ok.

> > * Any ovirt-node package for Debian/Ubuntu? The wiki seems like a
> > draft (http://www.ovirt.org/Ovirt_build_on_debian/ubuntu).
> > * Any I/O-write-heavy package that I should remove from stock
> > Fedora just before installing it from the web manager?
> >
> > So... Any idea?

> I'd start with comparing the complete QEMU command line between the
> instances. I don't think it's only the -cpu that'll affect the
> performance (unless the test is CPU bound?). Specifically, what
> about the cache= setting? We use, for data safety, cache=none.

That's what I did in the manually run QEMUs on Proxmox.
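
(A sketch of one way to make that comparison; log locations vary between a Proxmox host and a libvirt/VDSM host, and the VM name below is a placeholder:)

# Print the full command line of every running QEMU process.
ps -eo args | grep '[q]emu'

# On a libvirt/VDSM host the command line is also written to the guest's
# log when the domain starts:
less /var/log/libvirt/qemu/<vm-name>.log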

Some news:
-

Although I haven't done a proper check, I think it has improved a lot by 
disabling CPU frequency scaling and setting the governor to performance.

Take a look at:
http://forums.fedoraforum.org/showthread.php?t=272109
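
(Disabling scaling basically means forcing the performance governor on every host core. A minimal sketch using the standard sysfs knob and the cpupower tool from kernel-tools; which governors are available depends on the cpufreq driver:)

# Force the performance governor on all cores (does not survive a reboot).
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

# Or, equivalently, with the cpupower utility:
cpupower frequency-set -g performance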

Proxmox with a direct LV as the hard disk is still faster, but that makes sense 
because oVirt 3.1 only worked with files on a filesystem and not with LVs. Maybe 
3.2's direct LUN support also implies LV support.

-- 
Adrián Gibanel 
I.T. Manager 

+34 675 683 301 
www.btactic.com 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How to make oVirt I/O write faster than Virtualbox and others?

2013-03-06 Thread Yaniv Kaul
- Original Message -
> 
>   Every benchmark out there features KVM as the best virtualisation
>   technology, even in the I/O write category. My results with oVirt
>   are disappointing. So I'm going to describe my test machine and setup,
>   and ask you for advice to find out what's wrong. If you need any more
>   data, please ask for it. I like oVirt mostly because of its
>   datacentre-aware web manager, but if it gets unusable I would have
>   to take a look at other systems.
> 
> Hardware machine for the host OS
> -
> * Sandy Bridge E
> * CPU : Intel Xeon E5-1620 (10 MB Intel Smart Cache)
> * Cores / Threads : 4 / 8
> * Frequency : 3.6GHz / 3.8GHz Turbo Boost
> * RAM : 64 GB DDR3 ECC
> * Hard Disk : 2x 2TB SATA3
> * VT technology: Intel VT
> 
> Common OS Setup for the host OS
> -
> * 2 hard disks in software RAID
> 
> Guest OS common setup
> -
> * 1 socket x 1 core x 1 thread
> * 2 GB RAM
> * 300 GB Preallocated hard disk
> * Virtualmin installed (Just an excuse to have a mysql server)
> * Ubuntu 12.04 64bit
> 
> Write I/O test
> --
> The write I/O test is not a standard one but a custom one. One of our
> needs is to create new MySQL InnoDB tables. These tables have to be
> created in less than PHP's maximum execution time so that some web
> installations don't time out when creating their databases.
> DISCLAIMER: If you want to evaluate oVirt, don't rely on these times;
> please do your own tests.
> 
> So the test creates table0 with two int columns, which is then
> filled with 100 INSERT INTOs. That is repeated for 99 more tables.
> Finally all the created tables are dropped (deleted).
> 
> What I run is:
> 
> mysql -u root -p -e "drop database test_create_tables"
> mysql -u root -p -e "create database test_create_tables"
> time mysql -u root -p test_create_tables < test_mysql.sql
> 
> I attach test_mysql.sql gzipped just in case anyone is curious.
> 
> Note that all the Proxmox tests are: Debian Squeeze + Proxmox (KVM).
> OpenVZ was never used as the virtualisation technology.
> 
> Test A - Debian Squeeze + Proxmox (KVM)
> -
> Description: This is Proxmox booting a machine as KVM (not as
> OpenVZ). As said before, only 1 socket and 1 core.
> 
> real0m9.453s
> user0m0.104s
> sys 0m0.076s
> 
> Test B - Proxmox (KVM, aio=threads)
> 
> Description: Proxmox again. QEMU was run by hand, changing the aio
> parameter to aio=threads (as oVirt uses) instead of aio=native.
> 
> real0m9.510s
> user0m0.080s
> sys 0m0.096s
> 
> Test C - Proxmox (linux-image-virtual kernel installed, aio=threads)
> 
> Description: If we install the linux-image-virtual kernel inside the
> guest machine, times improve a bit.
> 
> real0m8.691s
> user0m0.104s
> sys 0m0.080s
> 
> Test D - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu
> kvm64,+lahf_lm,+ssse3,-cx16)
> ---
> 
> real0m8.790s
> user0m0.084s
> sys 0m0.096s
> 
> Test E - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu
> kvm64,+lahf_lm,+ssse3,-cx16 -M pc-01.4)
> -
> real0m8.720s
> user0m0.100s
> sys 0m0.080s
> 
> Test F - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu
> kvm64,+lahf_lm,+ssse3,-cx16 -M pc-01.4 -rtc
> base=2013-02-22T02:26:29,driftfix=slew)
> 
> 
> real0m8.790s
> user0m0.096s
> sys 0m0.084s
> 
> Test G - Virtualbox
> --
> Description: Ubuntu 12.04 64bit as the host. This is VirtualBox 4.2
> with the extension pack installed. Note that I installed neither
> the guest additions in the guest machine nor the
> linux-image-virtual kernel.
> 
> real0m36.176s
> user0m0.612s
> sys 0m0.468s
> 
> Test H - Fedora 17 64bit - oVirt 3.1
> ---
> Description: This is Fedora 17 64bit with oVirt 3.1 installed on the
> host. The web manager is installed on another machine. SELinux is in
> Permissive mode. The virtual machine runs on the same host where the
> storage is. The datacenter is set up as the "Localhost on host"
> type. The linux-image-virtual kernel is installed inside the guest machine.
> 
> real0m52.246s
> user0m0.200s
> sys 0m0.128s
> 
> Test I - Fedora - oVirt 3.1 - vdsmd stopped.
> ---
> Description: This is Fedora 17 64bit with oVirt 3.1 installed on the
> host. The web manager is installed on another machine. SELinux is in
> Permissive mode. The virtual machine runs on the same host where the
> storage is. The datacenter is set up as the "Localhost on host"
> type. The linux-image-virtual kernel is installed inside the guest machine.
> The vdsmd daemon on the host was stopped, just in case it was the
> reason for the I/O decrease.
> 
> real0m45.932s
> user0m0.216s
> sys 0m0.100s
> 
> Some bits about the test:
> 
> * If you're asking, yes, I've repeated the test several times and the
> times I give here are representative.
> * The password was entered manually when running the mysql commands but
> that's only

[Users] How to make oVirt I/O write faster than Virtualbox and others?

2013-03-02 Thread Adrian Gibanel

  Every benchmark out there features KVM as the best virtualisation technology, 
even in the I/O write category. My results with oVirt are disappointing. So I'm 
going to describe my test machine and setup, and ask you for advice to find out 
what's wrong. If you need any more data, please ask for it. I like oVirt mostly 
because of its datacentre-aware web manager, but if it gets unusable I would 
have to take a look at other systems.

Hardware machine for the host OS
-
* Sandy Bridge E
* CPU : Intel Xeon E5-1620 (10 MB Intel Smart Cache)
* Cores / Threads : 4 / 8
* Frequency : 3.6GHz / 3.8GHz Turbo Boost
* RAM : 64 GB DDR3 ECC
* Hard Disk : 2x 2TB SATA3
* VT technology: Intel VT

Common OS Setup for the host OS
-
* 2 hard disks in software RAID

Guest OS common setup
-
* 1 socket x 1 core x 1 thread
* 2 GB RAM
* 300 GB Preallocated hard disk
* Virtualmin installed (Just an excuse to have a mysql server)
* Ubuntu 12.04 64bit

Write I/O test
--
The write I/O test is not a standard one but a custom one. One of our needs is 
to create new MySQL InnoDB tables. These tables have to be created in less than 
PHP's maximum execution time so that some web installations don't time out when 
creating their databases.
DISCLAIMER: If you want to evaluate oVirt, don't rely on these times; please do 
your own tests.

So the test creates table0 with two int columns, which is then filled with 100 
INSERT INTOs. That is repeated for 99 more tables. Finally all the created 
tables are dropped (deleted).

What I run is:

mysql -u root -p -e "drop database test_create_tables"
mysql -u root -p -e "create database test_create_tables"
time mysql -u root -p test_create_tables < test_mysql.sql

I attach test_mysql.sql gzipped just in case anyone is curious.
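
(For anyone without the attachment, a script in this spirit should regenerate an equivalent workload; the real test_mysql.sql may use different column definitions and values:)

# Build test_mysql.sql: 100 InnoDB tables with 100 rows each, dropped at the end.
(
  for t in $(seq 0 99); do
    echo "CREATE TABLE table$t (a INT, b INT) ENGINE=InnoDB;"
    for r in $(seq 1 100); do
      echo "INSERT INTO table$t VALUES ($r, $r);"
    done
  done
  for t in $(seq 0 99); do
    echo "DROP TABLE table$t;"
  done
) > test_mysql.sql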

Note that all the Proxmox tests are: Debian Squeeze + Proxmox (KVM). OpenVZ was 
never used as the virtualisation technology.

Test A - Debian Squeeze + Proxmox (KVM)
-
Description: This is Proxmox booting a machine as KVM (not as OpenVZ). As said 
before, only 1 socket and 1 core.

real0m9.453s
user0m0.104s
sys 0m0.076s

Test B - Proxmox (KVM, aio=threads)

Description: Proxmox again. QEMU was run by hand, changing the aio parameter to 
aio=threads (as oVirt uses) instead of aio=native.

real0m9.510s
user0m0.080s
sys 0m0.096s
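
(The aio change in Test B comes down to a single flag on the disk definition; these -drive lines are illustrative only, with placeholder paths, and are not the exact Proxmox invocation:)

# File-backed image on a local filesystem, thread-pool AIO (what oVirt uses):
-drive file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=virtio,format=raw,cache=none,aio=threads

# Same disk with native Linux AIO (the default in this Proxmox setup):
-drive file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=virtio,format=raw,cache=none,aio=native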

Test C - Proxmox (linux-image-virtual kernel installed, aio=threads)

Description: If we install the linux-image-virtual kernel inside the guest 
machine, times improve a bit.

real0m8.691s
user0m0.104s
sys 0m0.080s

Test D - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu 
kvm64,+lahf_lm,+ssse3,-cx16)
---

real0m8.790s
user0m0.084s
sys 0m0.096s

Test E - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu 
kvm64,+lahf_lm,+ssse3,-cx16 -M pc-01.4)
-
real0m8.720s
user0m0.100s
sys 0m0.080s

Test F - Proxmox (ubuntu, linux-virtual, aio=threads and: -cpu 
kvm64,+lahf_lm,+ssse3,-cx16 -M pc-01.4 -rtc 
base=2013-02-22T02:26:29,driftfix=slew)


real0m8.790s
user0m0.096s
sys 0m0.084s

Test G - Virtualbox
--
Description: Ubuntu 12.04 64bit as the host. This is VirtualBox 4.2 with the 
extension pack installed. Note that I installed neither the guest additions in 
the guest machine nor the linux-image-virtual kernel.

real0m36.176s
user0m0.612s
sys 0m0.468s

Test H - Fedora 17 64bit - oVirt 3.1
---
Description: This is Fedora 17 64bit with oVirt 3.1 installed on the host. The 
web manager is installed on another machine. SELinux is in Permissive mode. The 
virtual machine runs on the same host where the storage is. The datacenter is 
set up as the "Localhost on host" type. The linux-image-virtual kernel is 
installed inside the guest machine.

real0m52.246s
user0m0.200s
sys 0m0.128s

Test I - Fedora - oVirt 3.1 - vdsmd stopped.
---
Description: This is Fedora 17 64bit with oVirt 3.1 installed on the host. The 
web manager is installed on another machine. SELinux is in Permissive mode. The 
virtual machine runs on the same host where the storage is. The datacenter is 
set up as the "Localhost on host" type. The linux-image-virtual kernel is 
installed inside the guest machine. The vdsmd daemon on the host was stopped, 
just in case it was the reason for the I/O decrease.

real0m45.932s
user0m0.216s
sys 0m0.100s

Some bits about the test:

* If you're asking, yes, I've repeated the test several times and the times I 
give here are representative.
* The password was entered manually when running the mysql commands, but that 
only accounts for 1 to 2 seconds, which doesn't explain the huge differences 
between the tests.
* I also tried Fedora 64bit and CentOS 64bit as guest systems and the results 
were worse.
* I also tried other less powerful machines which work OK with VirtualBox but 
have poor I/O write results with oVirt.
* The B-F Pr