Re: [Gluster-users] Problem creating volume

2012-08-10 Thread Jeff Williams
Thanks, done.

Also, I noticed that even when you successfully create a volume and then delete 
it, it still leaves the extended attributes on the directories. Is this by 
design, or should I report it as a bug (it seems like a bug to me!)?
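
For reference, a minimal sketch of how the leftover markers can be inspected and
cleared by hand (standard getfattr/setfattr usage; the brick path is the one
mentioned in this thread, and removing the attributes manually is only safe once
the volume really has been deleted):

# show the GlusterFS extended attributes left behind on a former brick
getfattr -d -m . -e hex /content/sg13/vd00

# remove the leftover markers so the directory can be reused as a brick
setfattr -x trusted.glusterfs.volume-id /content/sg13/vd00
setfattr -x trusted.gfid /content/sg13/vd00
rm -rf /content/sg13/vd00/.glusterfs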

Jeff

On Aug 9, 2012, at 6:58 PM, Joe Julian wrote:

 On 08/09/2012 07:26 AM, Jeff Williams wrote:
 Also, it would be great if the volume create command gave a message like:
 
 srv14:/content/sg13/vd00 or a prefix of it is already marked as part of a 
 volume (extended attribute trusted.glusterfs.volume-id exists on 
 /content/sg13/vd00)
 
 So we could be pointed in the right direction.
 
 (Per Vijay) You can suggest changes under:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=815194
 


[Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Ivan Dimitrov

Hello
What am I doing wrong?!?

I have a test setup with 4 identical servers with 2 disks each in 
distribute-replicate 2. All servers are connected to a GB switch.


I am experiencing really slow speeds with everything I do: slow writes, slow 
reads, not to mention random writes/reads.


Here is an example:
random-files is a directory with 32768 files with an average size of 16 KB.
[root@gltclient]:~# rsync -a /root/speedtest/random-files/ /home/gltvolume/
^^ This will take more than 3 hours.
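
For the initial seeding, a tar stream is a common alternative to rsync, since it
skips rsync's per-file existence checks on the destination; whether it helps
much depends on the setup (a sketch, assuming GNU tar on the client):

# stream the files onto the Gluster mount without per-file rsync bookkeeping
tar -C /root/speedtest/random-files -cf - . | tar -C /home/gltvolume -xf -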

On any of the servers, if I run iostat, the disks are not loaded at all:
https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%201.08.54%20PM.png

The result is similar on all servers.

Here is an example of a simple ls command on the content.
[root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /home/gltvolume/ | wc -l

2.81 seconds
5393

Almost 3 seconds to display 5000 files?!?! When there are 32,000 files, the ls 
takes around 35-45 seconds.


This directory is on local disk:
[root@gltclient]:~# /usr/bin/time -f "%e seconds" ls /root/speedtest/random-files/ | wc -l

1.45 seconds
32768

[root@gltclient]:~# /usr/bin/time -f "%e seconds" cat /home/gltvolume/* > /dev/null

190.50 seconds

[root@gltclient]:~# /usr/bin/time -f "%e seconds" du -sh /home/gltvolume/
126M    /home/gltvolume/
75.23 seconds
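
Much of the delay on a Gluster mount comes from per-entry lookups rather than
the directory read itself. A quick way to separate the two (illustrative
commands, not from the original post): plain ls only reads the directory, while
ls -l additionally stats every entry, and each of those stats has to be answered
by both replicas.

# directory read only
/usr/bin/time -f "%e seconds" sh -c 'ls /home/gltvolume/ > /dev/null'
# directory read plus one stat per entry (roughly what ls -l and rsync do)
/usr/bin/time -f "%e seconds" sh -c 'ls -l /home/gltvolume/ > /dev/null'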


Here is the volume information.

[root@glt1]:~# gluster volume info

Volume Name: gltvolume
Type: Distributed-Replicate
Volume ID: 16edd852-8d23-41da-924d-710b753bb374
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 1.1.74.246:/home/sda3
Brick2: glt2.network.net:/home/sda3
Brick3: 1.1.74.246:/home/sdb1
Brick4: glt2.network.net:/home/sdb1
Brick5: glt3.network.net:/home/sda3
Brick6: gltclient.network.net:/home/sda3
Brick7: glt3.network.net:/home/sdb1
Brick8: gltclient.network.net:/home/sdb1
Options Reconfigured:
performance.io-thread-count: 32
performance.cache-size: 256MB
cluster.self-heal-daemon: on


[root@glt1]:~# gluster volume status all detail
Status of volume: gltvolume
--
Brick: Brick 1.1.74.246:/home/sda3
Port : 24009
Online   : Y
Pid  : 1479
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11901550
--
Brick: Brick glt2.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 1589
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11901550
--
Brick: Brick 1.1.74.246:/home/sdb1
Port : 24010
Online   : Y
Pid  : 1485
File System  : ext4
Device   : /dev/sdb1
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 228.8GB
Total Disk Space : 229.2GB
Inode Count  : 15269888
Free Inodes  : 15202933
--
Brick: Brick glt2.network.net:/home/sdb1
Port : 24010
Online   : Y
Pid  : 1595
File System  : ext4
Device   : /dev/sdb1
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 228.8GB
Total Disk Space : 229.2GB
Inode Count  : 15269888
Free Inodes  : 15202933
--
Brick: Brick glt3.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 28963
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11906058
--
Brick: Brick gltclient.network.net:/home/sda3
Port : 24009
Online   : Y
Pid  : 3145
File System  : ext4
Device   : /dev/sda3
Mount Options: rw,noatime
Inode Size   : 256
Disk Space Free  : 179.3GB
Total Disk Space : 179.7GB
Inode Count  : 11968512
Free Inodes  : 11906058
--
Brick: Brick glt3.network.net:/home/sdb1
Port : 
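
To see where the time actually goes per brick, the built-in profiling can be
turned on while the slow workload runs (the usual gluster volume profile
sequence; its output shows per-brick latency broken down by operation):

# enable profiling, reproduce the slow copy/ls, then read and stop it
gluster volume profile gltvolume start
gluster volume profile gltvolume info
gluster volume profile gltvolume stop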

Re: [Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Ivan Dimitrov
So I stopped a node to check the BIOS, and after it came back up, the 
rebalance kicked in. I was looking for that kind of speed on a normal 
write. The rebalance is much faster than my rsync/cp.


https://dl.dropbox.com/u/282332/Screen%20Shot%202012-08-10%20at%202.04.09%20PM.png

Best Regards
Ivan Dimitrov

On 8/10/12 1:23 PM, Ivan Dimitrov wrote:

Hello
What am I doing wrong?!?

I have a test setup with 4 identical servers with 2 disks each in 
distribute-replicate 2. All servers are connected to a GB switch.
[...]

Re: [Gluster-users] Gluster speed sooo slow

2012-08-10 Thread Philip Poten
Hi Ivan,

that's because Gluster has really poor performance with many small files,
due to its architecture.

On every stat() call (and rsync issues plenty of them), all replicas are
checked for integrity.
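
One way to confirm that per-file metadata round trips dominate is to copy a
small sample under strace and count the syscalls rsync issues (a diagnostic
sketch; the sample directory is hypothetical and strace is assumed to be
installed):

# build a ~200-file sample and count rsync's syscalls while copying it
mkdir /root/speedtest/sample
find /root/speedtest/random-files/ -type f | head -n 200 | xargs -I{} cp {} /root/speedtest/sample/
strace -c -f rsync -a /root/speedtest/sample/ /home/gltvolume/sample/ 2>&1 | tail -n 30

Each of those metadata calls on the mount is answered by all replicas, which
is where the hours go with 32768 small files.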

regards,
Philip

2012/8/10 Ivan Dimitrov dob...@amln.net:
 So I stopped a node to check the BIOS, and after it came back up, the
 rebalance kicked in. I was looking for that kind of speed on a normal write.
 The rebalance is much faster than my rsync/cp.
 [...]

Re: [Gluster-users] Problem creating volume

2012-08-10 Thread Brian Candler
On Fri, Aug 10, 2012 at 11:12:54AM +0200, Jeff Williams wrote:
 Also, I noticed that even when you successfully create a volume and then
 delete it, it still leaves the extended attributes on the directories.  Is
 this by design, or should I report it as a bug (it seems like a bug to me!)?

https://bugzilla.redhat.com/show_bug.cgi?id=812214


[Gluster-users] 1/4 glusterfsd's runs amok; performance suffers;

2012-08-10 Thread Harry Mangalam
Running 3.3 distributed over IPoIB on 4 nodes, 1 brick per node.  Any idea
why, on one of those nodes, glusterfsd would go berserk, running up to 370%
CPU and driving the load to 30 (file performance on the clients slows to a
crawl)? While very slow, it continued to serve out files. This is the
second time this has happened in about a week. I had turned on the Gluster
NFS service, but wasn't using it when this happened.  It's now off.
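
When a single brick's glusterfsd spins like that, the volume top command can
show what that brick is being asked to do (a sketch; the volume and brick
names below are placeholders, since the post doesn't give them):

# busiest files and open-fd counts on the suspect brick (placeholder names)
gluster volume top VOLNAME open brick pbs3:/path/to/brick list-cnt 10
gluster volume top VOLNAME read brick pbs3:/path/to/brick list-cnt 10
gluster volume top VOLNAME write brick pbs3:/path/to/brick list-cnt 10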

kill -HUP did nothing to either glusterd or glusterfsd, so I had to kill
both and restart glusterd. That solved the overload on glusterfsd, and
performance is back to near normal. I'm now doing a rebalance/fix-layout,
which is running as expected but will take the weekend to complete.  I did
notice that the affected node (pbs3) has more files than the others, though
I'm not sure that this is significant.

Filesystem       Size  Used Avail Use% Mounted on
pbs1:/dev/sdb    6.4T  1.9T  4.6T  29% /bducgl
pbs2:/dev/md0    8.2T  2.4T  5.9T  30% /bducgl
pbs3:/dev/md127  8.2T  5.9T  2.3T  73% /bducgl  <---
pbs4:/dev/sda    6.4T  1.8T  4.6T  29% /bducgl
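
For reference, the fix-layout pass mentioned above is normally started and
monitored like this (the volume name is a placeholder, since the post only
shows the mount points):

gluster volume rebalance VOLNAME fix-layout start
gluster volume rebalance VOLNAME status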


-- 
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)