Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Ivan Dimitrov
There is a big difference between working with small files (around 16 KB)
and big files (2 MB). Performance is much better with big files. Which is
too bad for me ;(
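
The small-vs-big gap falls out of simple per-file overhead arithmetic. A rough single-client model (the ~1 ms per-file overhead and ~117 MB/s usable gigabit bandwidth are assumed figures, not measurements from this thread):

```python
def effective_throughput(file_size, overhead_s=0.001, bandwidth=117e6):
    """Bytes/s a single sequential client sees when every file costs a
    fixed per-file overhead (round trips, FUSE) on top of transfer time."""
    transfer = file_size / bandwidth
    return file_size / (overhead_s + transfer)

small = effective_throughput(16 * 1024)       # ~16 KB files
big = effective_throughput(2 * 1024 * 1024)   # ~2 MB files
print(f"16 KB: {small / 1e6:.1f} MB/s, 2 MB: {big / 1e6:.1f} MB/s")
```

With these assumed numbers the 16 KB case is stuck around 14 MB/s while the 2 MB case nearly saturates the wire, which matches the shape of what Ivan reports.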


On 8/11/12 2:15 AM, Gandalf Corvotempesta wrote:

What do you mean by small files? 16 KB? 160 KB? 16 MB?
Do you know any workaround, or any other software for this?

Me too, I'm trying to create clustered storage for many
small files.

2012/8/10 Philip Poten philip.po...@gmail.com


Hi Ivan,

that's because Gluster has really bad performance with many small files
due to its architecture.

On all stat() calls (which rsync is doing plenty of), all replicas are
being checked for integrity.

regards,
Philip


Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Fernando Frediani (Qube)
I think Gluster, as it stands now at its current level of development, is more
for multimedia and archival files, not for small files or for running virtual
machines. It still requires a fair amount of development, which hopefully Red Hat
will put in place.

Fernando

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ivan Dimitrov
Sent: 13 August 2012 08:33
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster speed sooo slow

There is a big difference between working with small files (around 16 KB) and big
files (2 MB). Performance is much better with big files. Which is too bad for me
;(


Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Brian Candler
On Mon, Aug 13, 2012 at 09:40:49AM +, Fernando Frediani (Qube) wrote:
I think Gluster as it stands now and current level of development is
more for Multimedia and Archival files, not for small files nor for
running Virtual Machines. It requires still a fair amount of
development which hopefully RedHat will put in place.

I know a large ISP is using gluster successfully for Maildir storage - or at
least was a couple of years ago when I last spoke to them about it - which
means very large numbers of small files.

I think you need to be clear on the difference between throughput and
latency.

Any networked filesystem is going to have latency, and gluster maybe suffers
more than most because of the FUSE layer at the client.  This will show as
poor throughput if a single client is sequentially reading or writing lots
of small files, because it has to wait a round trip for each request.

However, if you have multiple clients accessing at the same time, you can
still have high total throughput.  This is because the wasted time between
requests from one client is used to service other clients.

If gluster were to do aggressive client-side caching then it might be able
to make responses appear faster to a single client, but this would be at the
risk of data loss (e.g.  responding that a file has been committed to disk,
when in fact it hasn't).  But this would make no difference to total
throughput with multiple clients, which depends on the available bandwidth
into the disk drives and across the network.

So it all depends on your overall usage pattern. Only make your judgement
based on a single-threaded benchmark if that's what your usage pattern is
really going to be like: i.e.  are you really going to have a single user
accessing the filesystem, and their application reads or writes one file
after the other rather than multiple files concurrently.
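
Brian's throughput-vs-latency argument can be sketched numerically. A toy model (assumed figures: 1 ms round trip per 16 KB file, 117 MB/s shared bandwidth) shows a single client stuck at latency-bound speed while the idle time between its requests lets concurrent clients approach the wire:

```python
def aggregate_throughput(clients, file_size=16 * 1024,
                         rtt=0.001, bandwidth=117e6):
    """Total bytes/s across concurrent clients: each client alone is
    latency-bound, but their requests interleave until the shared
    bandwidth into the disks/network saturates."""
    per_client = file_size / (rtt + file_size / bandwidth)
    return min(clients * per_client, bandwidth)

for c in (1, 4, 16, 64):
    print(c, f"{aggregate_throughput(c) / 1e6:.0f} MB/s")
```

One client sits near 14 MB/s in this model, while a dozen or more together saturate the assumed 117 MB/s, which is exactly the single-threaded-benchmark caveat above.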

Regards,

Brian.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Fernando Frediani (Qube)
I heard from a large ISP, talking to someone who works there, that they were
trying to use GlusterFS for Maildir, and they had hell because of the many small
files and had customers complaining all the time.
Latency is acceptable on a networked filesystem, but the results people are
reporting are beyond any latency problem; they are due to the way Gluster is
structured, and that was already confirmed by some people on this list, so
changes are indeed needed in the code. Even on a Gigabit network the round trip
isn't that much really (not more than a quarter of a millisecond), so it
shouldn't be a big thing.
Yes, FUSE might also contribute to lower performance, but the performance
problems are still in the architecture of the filesystem.
One thing that is new to Gluster, and that in my opinion could help
performance, is Distributed-Striped volumes, but that still doesn't work for
all environments.
So as it stands: fine for multimedia or archive files; for other usages I
wouldn't bet my chips and would rather test thoroughly first.
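
For scale, even a quarter-millisecond round trip adds up at rsync volumes, though not to hours. A back-of-envelope sketch (the 10 round trips per file is an assumed figure covering replica lookups, open, write, and close, not a measured Gluster number):

```python
files = 32768          # Ivan's test set
rtt = 0.00025          # 0.25 ms per round trip, the figure granted above
trips_per_file = 10    # assumed: lookups on both replicas, open, write, close
latency_total = files * trips_per_file * rtt
print(f"{latency_total:.0f} s of pure network latency")
```

That comes to roughly 82 seconds of raw wire latency against Ivan's reported 3+ hours, which supports the point that the slowdown is more than round trips alone.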



Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Ivan Dimitrov
I have a low-traffic free hosting service, and I converted some x,000 users to
GlusterFS a few months ago. I'm not impressed at all and would probably not
convert any more users. It works OK for now, but with 88 GB used of a 2 TB
volume it's kind of pointless... :(
I'm researching a way to convert my paid hosting users, but I can't find any
system suitable for the job.


Fernando, what Gluster structure are you talking about?

Best Regards
Ivan Dimitrov



Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Fernando Frediani (Qube)
3.2, Ivan.



Re: [Gluster-users] Gluster speed sooo slow

2012-08-13 Thread Brian Candler
 One thing that is new to Gluster and that in my opinion could contribute
 to increase performance is the Distributed-Striped volumes,

Only if you have a single huge file and you are doing a large read or write
to it - i.e. exactly the opposite case of lots of small files.

 for other usages I wouldn't bet my chips and would
 rather test thoroughly first.

I agree 100% - for whatever solution you choose.


[Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Ivan Dimitrov

Hello
What am I doing wrong?!?

I have a test setup with 4 identical servers with 2 disks each in 
distribute-replicate 2. All servers are connected to a GB switch.


I am experiencing really slow speeds at anything I do. Slow write, slow 
read, not to mention random write/reads.


Here is an example:
random-files is a directory with 32768 files with average size 16kb.
[root@gltclient]:~# rsync -a /root/speedtest/random-files/ /home/gltvolume/
^^ This will take more than 3 hours.

On any of the servers if I do iostat the disks are not loaded at all:
https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png

This is similar result for all servers.

Here is an example of simple ls command on the content.
[root@gltclient]:~# unalias ls
[root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc -l

2.81 seconds
5393

Almost 3 seconds to display 5,000 files?!? When there are 32,000, the ls
takes around 35-45 seconds.


This directory is on local disk:
[root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /root/speedtest/random-files/ | wc -l

1.45 seconds
32768

[root@gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/* > /dev/null

190.50 seconds

[root@gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/
126M    /home/gltvolume/
75.23 seconds
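
For anyone wanting to reproduce these numbers, a sketch of generating an equivalent test set (the helper name is illustrative, not from the original post; the commented commands mirror the paths above):

```shell
# make_testset DIR COUNT: create COUNT files of ~16 KB of random data
# in DIR, mirroring the random-files set used in this thread.
make_testset() {
    mkdir -p "$1"
    for i in $(seq 1 "$2"); do
        head -c 16384 /dev/urandom > "$1/file$i"
    done
}

# As in the post (32768 files; generating them takes a while):
# make_testset /root/speedtest/random-files 32768
# /usr/bin/time -f "%e seconds" rsync -a /root/speedtest/random-files/ /home/gltvolume/
```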


Here is the volume information.

[root@glt1]:~# gluster volume info

Volume Name: gltvolume
Type: Distributed-Replicate
Volume ID: 16edd852-8d23-41da-924d-710b753bb374
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 1.1.74.246:/home/sda3
Brick2: glt2.network.net:/home/sda3
Brick3: 1.1.74.246:/home/sdb1
Brick4: glt2.network.net:/home/sdb1
Brick5: glt3.network.net:/home/sda3
Brick6: gltclient.network.net:/home/sda3
Brick7: glt3.network.net:/home/sdb1
Brick8: gltclient.network.net:/home/sdb1
Options Reconfigured:
performance.io-thread-count: 32
performance.cache-size: 256MB
cluster.self-heal-daemon: on


[root@glt1]:~# gluster volume status all detail
Status of volume: gltvolume
--
Brick: Brick 1.1.74.246:/home/sda3
Port : 24009
Online   : Y
Pid  : 1479
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11901550
--
Brick: Brick glt2.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 1589
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11901550
--
Brick: Brick 1.1.74.246:/home/sdb1
Port : 24010
Online   : Y
Pid  : 1485
File System  : ext4
Device   : /dev/sdb1
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 228.8GB
Total Disk Space : 229.2GB
Inode Count  : 15269888
Free Inodes  : 15202933
--
Brick: Brick glt2.network.net:/home/sdb1
Port : 24010
Online   : Y
Pid  : 1595
File System  : ext4
Device   : /dev/sdb1
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 228.8GB
Total Disk Space : 229.2GB
Inode Count  : 15269888
Free Inodes  : 15202933
--
Brick: Brick glt3.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 28963
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11906058
--
Brick: Brick gltclient.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 3145
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11906058
--
Brick: Brick glt3.network.net:/home/sdb1
Port : 

Re: [Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Ivan Dimitrov
So I stopped a node to check the BIOS, and after it came back up, the
rebalance kicked in. I was looking for that kind of speed on a normal
write. The rebalance is much faster than my rsync/cp.


https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png

Best Regards
Ivan Dimitrov


Re: [Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Philip Poten
Hi Ivan,

that's because Gluster has really bad performance with many small files
due to its architecture.

On every stat() call (and rsync makes plenty of them), all replicas are
checked for integrity.
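
Since rsync stats every file before deciding whether to transfer it, each directory entry pays that replica-check round trip. A small sketch for comparing per-stat cost on a local directory versus the mount (the helper is illustrative; the commented paths are the ones from this thread):

```python
import os
import time

def mean_stat_latency(directory):
    """Average wall-clock seconds per os.stat() across a directory."""
    entries = [os.path.join(directory, e) for e in os.listdir(directory)]
    start = time.perf_counter()
    for path in entries:
        os.stat(path)
    elapsed = time.perf_counter() - start
    return elapsed / max(len(entries), 1)

# e.g. compare local disk against the Gluster mount:
# print(mean_stat_latency("/root/speedtest/random-files"))
# print(mean_stat_latency("/home/gltvolume"))
```

Multiplying the per-stat difference by 32768 files gives a feel for how much of the rsync wall time is stat traffic alone.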

regards,
Philip
