Re: [Gluster-users] NFS secondary groups not working.

2011-09-14 Thread Saurabh Jain
Hello Di Pe,


I tried this on 3.2.3; it does not happen there.
[saurabhj@Centos1 nfs-test]$ ls -lah
total 76K
drwxr-xr-x 7 root root   8.0K Sep  8 05:50 .
drwxr-xr-x 9 root root   4.0K Sep  8 05:55 ..
-rw-r--r-- 1 root root  0 Sep  8 01:23 a
-rw-r--r-- 1 root root  0 Sep  8 01:23 b
-rw-r--r-- 1 root root  0 Sep  8 01:23 c
drwxr-xr-x 3 root root   4.0K Sep  6 05:52 fstest_832a4924d2073113d2422ecfc194abce
drwxr-xr-x 2 root root   4.0K Sep  8 01:23 share
drwxrwsr-x 2 root 503    4.0K Sep  8 01:28 share1
drwxrwsr-x 2 root admins 4.0K Sep  8 05:55 share2
drwxrwxr-x 2 root staff  4.0K Sep  9 01:01 share3
[saurabhj@Centos1 nfs-test]$
[saurabhj@Centos1 nfs-test]$
[saurabhj@Centos1 nfs-test]$ cd share2
[saurabhj@Centos1 share2]$ ls -lah
total 44K
drwxrwsr-x 2 root admins 8.0K Sep  8 05:56 .
drwxr-xr-x 7 root root   8.0K Sep  8 05:50 ..
-rwxrwsr-x 1 saurabhj admins 0 Sep  8 02:04 test
-rw-r--r-- 1 saurabhj admins 0 Sep  8 05:55 test1
-rw-r--r-- 1 saurabhj admins 0 Sep  9 01:01 testn
[saurabhj@Centos1 share2]$ touch test3
[saurabhj@Centos1 share2]$ ls -lia
total 48
 1566447115294410427 drwxrwsr-x 2 root admins 8192 Sep  9 01:03 .
   1 drwxr-xr-x 7 root root   8192 Sep  8 05:50 ..
11949810116864770188 -rwxrwsr-x 1 saurabhj admins 0 Sep  8 02:04 test
 3472874673268323252 -rw-r--r-- 1 saurabhj admins 0 Sep  8 05:55 test1
16864971545077150130 -rw-r--r-- 1 saurabhj admins 0 Sep  9 01:03 test3
18402121251818432703 -rw-r--r-- 1 saurabhj admins 0 Sep  9 01:01 testn
[saurabhj@Centos1 share2]$

If I am missing something, please let me know, or provide steps to
reproduce this issue.

Thanks,
Saurabh
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] error trying to create replicated volume on EC2

2011-09-14 Thread Brandon Simmons
I'm not sure I'm doing this right. I have two identical machines, A and B:

--- CONSOLE ---
A ~$: gluster peer probe domU-BB-BB-BB-BB.compute-1.internal
Probe successful
A ~$: gluster peer status
Number of Peers: 1

Hostname: domU-BB-BB-BB-BB.compute-1.internal
Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
State: Peer in Cluster (Connected)

B ~$: gluster peer status
Number of Peers: 1

Hostname: AA-AA-AA-AA (the IP of host A)
Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
State: Peer in Cluster (Connected)

A ~$: sudo gluster volume create test-vol replica 2 transport tcp
domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1
domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1
Brick: domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1,
domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1 in the arguments mean
the same
--- CONSOLE ---

After I do the peer probe from server A, server A shows up as a peer
in server B, but it has the same UUID.

What have I done wrong here?

Thanks,
Brandon


Re: [Gluster-users] error trying to create replicated volume on EC2

2011-09-14 Thread Rahul C S
Hi Brandon,

Are those two cloned instances? If you cloned B from A after booting it,
then this can happen. Before cloning, remove the /etc/glusterd
directory from A and then clone it.

To fix it, stop glusterd, remove the /etc/glusterd directory on both
machines, and then start glusterd again. The peer probe should then work
and you can continue with volume creation.
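As a quick check before wiping anything: each node stores its identity in /etc/glusterd/glusterd.info, and clones carry byte-identical copies of it. A sketch of the comparison (simulated below with sample files in a temp directory, since the real path needs root; the service commands in the trailing comment are the actual fix, with init-script names assumed for a 2011-era distro):

```shell
#!/bin/sh
# Simulate glusterd.info from two cloned nodes; on real hosts you would
# compare /etc/glusterd/glusterd.info on A and B instead.
tmp=$(mktemp -d)
echo "UUID=8d5d9af4-6a92-4d56-b063-c8fc9ac17a45" > "$tmp/nodeA.info"
echo "UUID=8d5d9af4-6a92-4d56-b063-c8fc9ac17a45" > "$tmp/nodeB.info"

if cmp -s "$tmp/nodeA.info" "$tmp/nodeB.info"; then
    echo "duplicate UUID: nodes were cloned after glusterd first started"
fi
rm -rf "$tmp"

# The fix itself (run on both nodes; init-script name assumed):
#   service glusterd stop
#   rm -rf /etc/glusterd
#   service glusterd start
```

glusterd regenerates a fresh UUID on first start after the directory is removed, which is why the subsequent peer probe succeeds.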

On Tue, Sep 13, 2011 at 10:44 PM, Brandon Simmons
bsimm...@labarchives.com wrote:

 I'm not sure I'm doing this right. I have two identical machines, A and B.:

 --- CONSOLE ---
 A ~$: gluster peer probe domU-BB-BB-BB-BB.compute-1.internal
 Probe successful
 A ~$: gluster peer status
 Number of Peers: 1

 Hostname: domU-BB-BB-BB-BB.compute-1.internal
 Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
 State: Peer in Cluster (Connected)

 B ~$: gluster peer status
 Number of Peers: 1

 Hostname: AA-AA-AA-AA (the IP of host A)
 Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
 State: Peer in Cluster (Connected)

 A ~$: sudo gluster volume create test-vol replica 2 transport tcp
 domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1
 domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1
 Brick: domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1,
 domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1 in the arguments mean
 the same
 --- CONSOLE ---

 After I do the peer probe from server A, server A shows up as a peer
 in server B, but it has the same UUID.

 What have I done wrong here?

 Thanks,
 Brandon




-- 
Regards,
Rahul C S
Engineer @ Gluster.
Ph: +919591407901


Re: [Gluster-users] Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3

2011-09-14 Thread Pavan T C

On Friday 09 September 2011 10:30 AM, Thomas Jackson wrote:

Hi everyone,


Hello Thomas,

Try the following:

1. In the fuse volume file, try:

Under write-behind:
option cache-size 16MB

Under read-ahead:
option page-count 16

Under io-cache:
option cache-size 64MB
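For reference, those three options would sit in the client-side volume file roughly as below. This is a sketch only: the volume names and subvolume wiring are assumed from a typical 3.2-era fuse volfile, not taken from Thomas's actual file.

```
volume cluster-volume-write-behind
    type performance/write-behind
    option cache-size 16MB
    subvolumes cluster-volume-replicate
end-volume

volume cluster-volume-read-ahead
    type performance/read-ahead
    option page-count 16
    subvolumes cluster-volume-write-behind
end-volume

volume cluster-volume-io-cache
    type performance/io-cache
    option cache-size 64MB
    subvolumes cluster-volume-read-ahead
end-volume
```

Remount the client after changing the file so the new options take effect.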

2. Did you get 9Gbits/Sec with iperf with a single thread or multiple 
threads?


3. Can you give me the output of:
sysctl -a | egrep 'rmem|wmem'

4. If it is not a problem for you, can you please create a pure 
distribute setup (instead of distributed-replicate) and then report the 
numbers?


5. What is the inode size with which you formatted your XFS filesystem?
This last point might not be related to your throughput problem, but if
you are planning to use this setup for a large number of files, you
might be better off using an inode size of 512 bytes instead of the
default 256. To do that, your mkfs command should be:


mkfs -t xfs -i size=512 /dev/<disk device>

Pavan



I am seeing slower-than-expected performance in Gluster 3.2.3 between 4
hosts with 10 gigabit eth between them all. Each host has 4x 300GB SAS 15K
drives in RAID10, 6-core Xeon E5645 @ 2.40GHz and 24GB RAM running Ubuntu
10.04 64-bit (I have also tested with Scientific Linux 6.1 and Debian
Squeeze - same results on those as well). All of the hosts mount the volume
using the FUSE module. The base filesystem on all of the nodes is XFS,
however tests with ext4 have yielded similar results.

Command used to create the volume:
gluster volume create cluster-volume replica 2 transport tcp
node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/
node04:/mnt/local-store/

Command used to mount the Gluster volume on each node:
mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume

Creating a 40GB file onto a node's local storage (ie no Gluster
involvement):
dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40000
41943040000 bytes (42 GB) copied, 92.9264 s, 451 MB/s

Getting the same file off the node's local storage:
dd if=/mnt/local-store/test.file of=/dev/null
41943040000 bytes (42 GB) copied, 81.858 s, 512 MB/s

40GB file onto the Gluster storage:
dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40000
41943040000 bytes (42 GB) copied, 226.934 s, 185 MB/s

Getting the same file off the Gluster storage
dd if=/mnt/cluster-volume/test.file of=/dev/null
41943040000 bytes (42 GB) copied, 661.561 s, 63.4 MB/s
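As a sanity check, the reported figures are internally consistent; dd's MB/s is simply bytes over seconds (SI megabytes):

```shell
# 40 GB test file: 40000 MiB written by dd
bytes=$((40000 * 1024 * 1024))                        # 41943040000 bytes (~42 GB)
echo "local write:   $((bytes / 1000000 / 93)) MB/s"  # dd reported 451 MB/s
echo "gluster write: $((bytes / 1000000 / 227)) MB/s" # dd reported 185 MB/s
echo "gluster read:  $((bytes / 1000000 / 662)) MB/s" # dd reported 63.4 MB/s
```

(Integer division truncates slightly; the point is that the write path runs at under half of local disk speed and the read path at about an eighth.)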

I have also tried using Gluster 3.1, with similar results.

According to the Gluster docs, I should be seeing roughly the lesser of the
drive speed and the network speed. The network is able to push 0.9GB/sec
according to iperf so that definitely isn't a limiting factor here, and each
array is able to do 400-500MB/sec as per above benchmarks. I've tried
with/without jumbo frames as well, which doesn't make any major difference.

The glusterfs process is using 120% CPU according to top, and glusterfsd is
sitting at about 90%.

Any ideas / tips of where to start for speeding this config up?

Thanks,

Thomas



Re: [Gluster-users] error trying to create replicated volume on EC2

2011-09-14 Thread Brandon Simmons
That should work great. Thanks!

Brandon

On Wed, Sep 14, 2011 at 5:42 AM, Rahul C S ra...@gluster.com wrote:
 Hi Brandon,

 Are those two cloned instances? If you cloned B from A after booting it,
 then this can happen. Before cloning, remove the /etc/glusterd
 directory from A and then clone it.

 To fix it, stop glusterd, remove the /etc/glusterd directory on both
 machines, and then start glusterd again. The peer probe should then work
 and you can continue with volume creation.

 On Tue, Sep 13, 2011 at 10:44 PM, Brandon Simmons bsimm...@labarchives.com
 wrote:

 I'm not sure I'm doing this right. I have two identical machines, A and
 B.:

 --- CONSOLE ---
 A ~$: gluster peer probe domU-BB-BB-BB-BB.compute-1.internal
 Probe successful
 A ~$: gluster peer status
 Number of Peers: 1

 Hostname: domU-BB-BB-BB-BB.compute-1.internal
 Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
 State: Peer in Cluster (Connected)

 B ~$: gluster peer status
 Number of Peers: 1

 Hostname: AA-AA-AA-AA (the IP of host A)
 Uuid: 8d5d9af4-6a92-4d56-b063-c8fc9ac17a45
 State: Peer in Cluster (Connected)

 A ~$: sudo gluster volume create test-vol replica 2 transport tcp
 domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1
 domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1
 Brick: domU-AA-AA-AA-AA.compute-1.internal:/mnt/brick1,
 domU-BB-BB-BB-BB.compute-1.internal:/mnt/brick1 in the arguments mean
 the same
 --- CONSOLE ---

 After I do the peer probe from server A, server A shows up as a peer
 in server B, but it has the same UUID.

 What have I done wrong here?

 Thanks,
 Brandon



 --
 Regards,
 Rahul C S
 Engineer @ Gluster.
 Ph: +919591407901




Re: [Gluster-users] Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3

2011-09-14 Thread Thomas Jackson
Hi Pavan,

Thanks for the reply - my comments inline below

Regards,

Thomas

-Original Message-
From: Pavan T C [mailto:t...@gluster.com] 
Sent: Wednesday, 14 September 2011 9:19 PM
To: Thomas Jackson
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Slow performance - 4 hosts, 10 gigabit
ethernet, Gluster 3.2.3

 On Friday 09 September 2011 10:30 AM, Thomas Jackson wrote:
 Hi everyone,

 Hello Thomas,

 Try the following:

 1. In the fuse volume file, try:

 Under write-behind:
 option cache-size 16MB

 Under read-ahead:
 option page-count 16

 Under io-cache:
 option cache-size=64MB

TJ: Results here are not pretty!
root@my-host:~# dd if=/dev/zero of=/mnt/cluster-volume/test.file
bs=1M count=10000
10485760000 bytes (10 GB) copied, 107.888 s, 97.2 MB/s


 2. Did you get 9Gbits/Sec with iperf with a single thread or multiple
threads?

TJ: Single thread

 3. Can you give me the output of:
 sysctl -a | egrep 'rmem|wmem'

TJ: root@my-host:~# sysctl -a | egrep 'rmem|wmem'
error: permission denied on key 'vm.compact_memory'
vm.lowmem_reserve_ratio = 256 256 32
error: permission denied on key 'net.ipv4.route.flush'
net.core.wmem_max = 131071
net.core.rmem_max = 131071
net.core.wmem_default = 126976
net.core.rmem_default = 126976
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
error: permission denied on key 'net.ipv6.route.flush'
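(Those ceilings, roughly 128 KB for the core maximums and 4 MB for TCP, are the stock defaults and are on the low side for 10 GbE. A commonly used starting point for kernels of that generation, as an /etc/sysctl.conf fragment; the exact values are illustrative, not tuned for this box:)

```
# raise socket buffer ceilings so TCP can open a larger window on 10GbE
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p` and re-run the dd tests to see whether the window size was the bottleneck.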

 4. If it is not a problem for you, can you please create a pure distribute
setup (instead of distributed-replicate) and then report the numbers?

TJ: I've been able to do this with 2 hosts; while I was at it, I also tested
a pure replica and a pure stripe setup for comparison:
Distribute = 313 MB/sec
Replica = 166 MB/sec
Stripe = 529 MB/sec
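(Those numbers are roughly what replication predicts: with replica 2 the client writes every block to both bricks, so replica throughput should land near half of distribute:)

```shell
# replica 2 sends every written block twice over the client's NIC
distribute=313
echo "expected replica: ~$((distribute / 2)) MB/s"   # measured: 166 MB/s
```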

 5. What is the inode size with which you formatted your XFS filesystem?
 This last point might not be related to your throughput problem, but if
you are planning to use this setup for a large number of files, 
 you might be better off using an inode size of 512 instead of the default
256 bytes. To do that, your mkfs command should be:

 mkfs -t xfs -i size=512 /dev/<disk device>

TJ: This is destined for use with VM images, probably a maximum of 200 files
total. That said, I have tried a bigger inode size and also ext4, with very
similar results each time.

In a totally bizarre turn of events, turning on port bonding (each host has
2x 10gig storage ports) in ACTIVE/BACKUP mode has increased the speed a fair
bit
dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40000
41943040000 bytes (42 GB) copied, 176.459 s, 238 MB/s
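(For anyone reproducing this, an active-backup bond on Ubuntu 10.04 would look roughly like the fragment below; the interface names and address are assumptions, not Thomas's actual config. Note that active-backup uses only one link at a time, so the gain here is presumably not extra bandwidth:)

```
# /etc/network/interfaces sketch (names and addresses assumed)
auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth2 eth3
```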

From some brief debugging, I have noticed that the inodes are getting
locked/unlocked very frequently by AFR; not sure if that is related.
 Pavan


 I am seeing slower-than-expected performance in Gluster 3.2.3 between 
 4 hosts with 10 gigabit eth between them all. Each host has 4x 300GB 
 SAS 15K drives in RAID10, 6-core Xeon E5645 @ 2.40GHz and 24GB RAM 
 running Ubuntu
 10.04 64-bit (I have also tested with Scientific Linux 6.1 and Debian 
 Squeeze - same results on those as well). All of the hosts mount the 
 volume using the FUSE module. The base filesystem on all of the nodes 
 is XFS, however tests with ext4 have yielded similar results.

 Command used to create the volume:
  gluster volume create cluster-volume replica 2 transport tcp 
 node01:/mnt/local-store/ node02:/mnt/local-store/ 
 node03:/mnt/local-store/ node04:/mnt/local-store/

 Command used to mount the Gluster volume on each node:
  mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume

 Creating a 40GB file onto a node's local storage (ie no Gluster
 involvement):
  dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40000
  41943040000 bytes (42 GB) copied, 92.9264 s, 451 MB/s

 Getting the same file off the node's local storage:
  dd if=/mnt/local-store/test.file of=/dev/null
  41943040000 bytes (42 GB) copied, 81.858 s, 512 MB/s

 40GB file onto the Gluster storage:
  dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40000
  41943040000 bytes (42 GB) copied, 226.934 s, 185 MB/s

 Getting the same file off the Gluster storage
  dd if=/mnt/cluster-volume/test.file of=/dev/null
  41943040000 bytes (42 GB) copied, 661.561 s, 63.4 MB/s

 I have also tried using Gluster 3.1, with similar results.

 According to the Gluster docs, I should be seeing roughly the lesser 
 of the drive speed and the network speed. The network is able to push 
 0.9GB/sec according to iperf so that definitely isn't a limiting 
 factor here, and each array is able to do 400-500MB/sec as per above 
 benchmarks. I've tried with/without jumbo frames as well, which doesn't
make any major difference.

 The glusterfs process is