Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-11 Thread ml ml
Hi Vijay,

I deleted the Cluster/Datacenter and set it up with two new (physical)
hosts, and now the performance looks great.

I don't know what I did wrong. Thanks a lot.


On Mon, Feb 10, 2014 at 6:10 PM, Vijay Bellur  wrote:

> On 02/09/2014 11:08 PM, ml ml wrote:
>
>> Yes, the only thing which brings the write I/O almost up to my host's level
>> is enabling viodiskcache = writeback.
>> As far as I can tell, this enables caching for both the guest and the host,
>> which is critical if a sudden power loss happens.
>> Can I turn this on if I have a BBU in my host system?
>>
>>
> I was referring to the set of gluster volume tunables in [1]. These
> options can be enabled through the "volume set" interface in the gluster CLI.
>
> The quorum options provide tolerance against split-brain, and the remaining
> ones are generally recommended for performance.
>
> -Vijay
>
> [1] https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
>
>


Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-10 Thread Vijay Bellur

On 02/09/2014 11:08 PM, ml ml wrote:

Yes, the only thing which brings the write I/O almost up to my host's level
is enabling viodiskcache = writeback.
As far as I can tell, this enables caching for both the guest and the host,
which is critical if a sudden power loss happens.
Can I turn this on if I have a BBU in my host system?



I was referring to the set of gluster volume tunables in [1]. These
options can be enabled through the "volume set" interface in the gluster CLI.


The quorum options provide tolerance against split-brain, and the remaining
ones are generally recommended for performance.


-Vijay

[1] 
https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
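
For reference, a rough sketch of what applying those tunables from the CLI might look like (option names copied from group-virt.example as it stood at the time, and "Repl2" taken from this thread; please verify the exact list against the linked file and your GlusterFS version before applying):

# Apply the virt-store tunables to the volume, one option at a time.
gluster volume set Repl2 performance.quick-read off
gluster volume set Repl2 performance.read-ahead off
gluster volume set Repl2 performance.io-cache off
gluster volume set Repl2 performance.stat-prefetch off
gluster volume set Repl2 cluster.eager-lock enable
gluster volume set Repl2 network.remote-dio enable
# The quorum options guard the replica pair against split-brain.
gluster volume set Repl2 cluster.quorum-type auto
gluster volume set Repl2 cluster.server-quorum-type server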




Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-09 Thread ml ml
Yes, the only thing which brings the write I/O almost up to my host's level is
enabling viodiskcache = writeback.
As far as I can tell, this enables caching for both the guest and the host,
which is critical if a sudden power loss happens.
Can I turn this on if I have a BBU in my host system?
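
(Side note: a quick, generic way to check which cache mode a running VM actually got is to inspect it on the hypervisor with plain libvirt tools; the VM name below is just a placeholder.)

# On the host: dump the domain XML and look at the disk <driver> line.
# cache='none' avoids the host page cache; cache='writeback' uses it.
virsh dumpxml <vm-name> | grep "cache="
# Or check the cache= setting on the running qemu process directly:
ps -ef | grep qemu | grep -o "cache=[a-z]*" | sort -u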


On Sun, Feb 9, 2014 at 6:25 PM, Vijay Bellur  wrote:

> On 02/09/2014 09:11 PM, ml ml wrote:
>
>> I am on CentOS 6.5 and I am using:
>>
>> [root@node1 ~]# rpm -qa | grep gluster
>> glusterfs-rdma-3.4.2-1.el6.x86_64
>> glusterfs-server-3.4.2-1.el6.x86_64
>> glusterfs-fuse-3.4.2-1.el6.x86_64
>> glusterfs-libs-3.4.2-1.el6.x86_64
>> glusterfs-3.4.2-1.el6.x86_64
>> glusterfs-api-3.4.2-1.el6.x86_64
>> glusterfs-cli-3.4.2-1.el6.x86_64
>> vdsm-gluster-4.13.3-3.el6.noarch
>>
>
> Have you turned on "Optimize for Virt Store" for the gluster volume?
>
> -Vijay
>
>


Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-09 Thread Vijay Bellur

On 02/09/2014 09:11 PM, ml ml wrote:

I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch


Have you turned on "Optimize for Virt Store" for the gluster volume?

-Vijay
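
For anyone who prefers the CLI over the oVirt UI, "Optimize for Virt Store" roughly corresponds to applying the virt option group to the volume. A sketch, assuming the example group file from the glusterfs sources has been saved on each server node (paths and volume name from this thread; the oVirt button may set additional options such as the brick ownership):

# Save group-virt.example (linked elsewhere in this thread) as the 'virt' group
# on each gluster server, then apply the whole group in one command.
cp group-virt.example /var/lib/glusterd/groups/virt
gluster volume set Repl2 group virt
gluster volume info Repl2    # confirm the reconfigured options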



Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-09 Thread ml ml
I am on CentOS 6.5 and I am using:

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
vdsm-gluster-4.13.3-3.el6.noarch

[root@node1 ~]# uname -a
Linux node1.hq.imos.net 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09
UTC 2013 x86_64 x86_64 x86_64 GNU/Linux


Thanks, Mario


On Sat, Feb 8, 2014 at 10:06 PM, Samuli Heinonen wrote:

> Hello,
>
> What version of GlusterFS are you using?
>
> ml ml  wrote on 8.2.2014 at 21:24:
>
> anyone?
>
> On Friday, February 7, 2014, ml ml  wrote:
>
>> Hello List,
>>
>> I set up a cluster with 2 nodes and GlusterFS.
>>
>>
>> gluster> volume info all
>>
>> Volume Name: Repl2
>> Type: Replicate
>> Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: node1.local:/data
>> Brick2: node2.local:/data
>> Options Reconfigured:
>> auth.allow: *
>> user.cifs: enable
>> nfs.disable: off
>>
>>
>> I turned node2 off, just to make sure there is no network bottleneck and
>> that it does not replicate during my first benchmarks.
>>
>>
>> My first test with bonnie on my local raw disk of node1 gave me 130MB/sec
>> write speed.
>> Then I did the same test on my cluster dir /data: 130MB/sec.
>> Then I did the write test in a freshly installed Debian 7 VM: 10MB/sec.
>>
>> This is terrible, and I wonder why?!
>>
>> My tests were made with:
>> bonnie++ -u root -s 
>>
>> Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
>>
>>
>> Since node2 is turned off, this can't be a network bottleneck.
>>
>> Any ideas?
>>
>> Thanks,
>> Mario
>>


Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-08 Thread Samuli Heinonen
Hello,

What version of GlusterFS are you using?

ml ml  wrote on 8.2.2014 at 21:24:

> anyone?
> 
> On Friday, February 7, 2014, ml ml  wrote:
> Hello List,
> 
> I set up a cluster with 2 nodes and GlusterFS.
> 
> 
> gluster> volume info all
>  
> Volume Name: Repl2
> Type: Replicate
> Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: node1.local:/data
> Brick2: node2.local:/data
> Options Reconfigured:
> auth.allow: *
> user.cifs: enable
> nfs.disable: off
> 
> 
> I turned node2 off, just to make sure there is no network bottleneck and that
> it does not replicate during my first benchmarks.
> 
> 
> My first test with bonnie on my local raw disk of node1 gave me 130MB/sec
> write speed.
> Then I did the same test on my cluster dir /data: 130MB/sec.
> Then I did the write test in a freshly installed Debian 7 VM: 10MB/sec.
>
> This is terrible, and I wonder why?!
>
> My tests were made with:
> bonnie++ -u root -s 
> 
> Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
> 
> 
> Since node2 is turned off, this can't be a network bottleneck.
> 
> Any ideas? 
> 
> Thanks,
> Mario


Re: [Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-08 Thread ml ml
anyone?

On Friday, February 7, 2014, ml ml  wrote:

> Hello List,
>
> I set up a cluster with 2 nodes and GlusterFS.
>
>
> gluster> volume info all
>
> Volume Name: Repl2
> Type: Replicate
> Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: node1.local:/data
> Brick2: node2.local:/data
> Options Reconfigured:
> auth.allow: *
> user.cifs: enable
> nfs.disable: off
>
>
> I turned node2 off, just to make sure there is no network bottleneck and
> that it does not replicate during my first benchmarks.
>
>
> My first test with bonnie on my local raw disk of node1 gave me 130MB/sec
> write speed.
> Then I did the same test on my cluster dir /data: 130MB/sec.
> Then I did the write test in a freshly installed Debian 7 VM: 10MB/sec.
>
> This is terrible, and I wonder why?!
>
> My tests were made with:
> bonnie++ -u root -s 
>
> Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg
>
>
> Since node2 is turned off, this can't be a network bottleneck.
>
> Any ideas?
>
> Thanks,
> Mario
>


[Users] Very bad write performance in VM (ovirt 3.3.3)

2014-02-07 Thread ml ml
Hello List,

I set up a cluster with 2 nodes and GlusterFS.


gluster> volume info all

Volume Name: Repl2
Type: Replicate
Volume ID: 8af9b282-8b60-4d71-a0fd-9116b8fdcca7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1.local:/data
Brick2: node2.local:/data
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off


I turned node2 off, just to make sure there is no network bottleneck and
that it does not replicate during my first benchmarks.


My first test with bonnie on my local raw disk of node1 gave me 130MB/sec
write speed.
Then I did the same test on my cluster dir /data: 130MB/sec.
Then I did the write test in a freshly installed Debian 7 VM: 10MB/sec.

This is terrible, and I wonder why?!

My tests were made with:
bonnie++ -u root -s 

Here are my bonnie results: http://oi62.tinypic.com/20aara0.jpg


Since node2 is turned off, this can't be a network bottleneck.

Any ideas?

Thanks,
Mario
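
For anyone trying to reproduce the comparison, a minimal sketch of the three write tests described above (directory paths and sizes are placeholders; bonnie++ is usually run with -s set to at least twice the RAM of the machine under test so the page cache does not skew the numbers):

# 1. Raw local disk on node1 (replace the path with a directory on that disk):
bonnie++ -u root -d /mnt/localtest -s 16384     # -s is the file size in MB

# 2. Same test against the gluster-backed path used above:
bonnie++ -u root -d /data -s 16384

# 3. Same test inside the Debian 7 guest:
bonnie++ -u root -d /root/bench -s 2048

# Optional cross-check that bypasses the page cache entirely:
dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=direct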