[Gluster-users] Export Gluster backed volume with standard NFS

2012-06-07 Thread Scot Kreienkamp
Hey everyone,

I currently have an NFS server that I need to make highly available.  I was
thinking I would use Gluster, but since there's no way to match Gluster's
built-in NFS server to my current NFS exports file, I can't use the Gluster
NFS server.  So I was thinking I could have two servers, each holding a brick
of a replicated Gluster volume, have each mount the Gluster volume from itself
with the FUSE client (which would give them automatic failover), and then use
standard NFS to re-export the mounts I need.  Is anyone already doing that, or
anything like it?  Any problems or performance issues?
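A minimal sketch of the setup being proposed; the hostnames (nfs1/nfs2),
volume name, paths, and export network below are illustrative, not from the
original post:

# On nfs1, after peering with nfs2: create and start a 2-way replica volume
gluster volume create nfsvol replica 2 nfs1:/bricks/nfsvol nfs2:/bricks/nfsvol
gluster volume start nfsvol

# Disable Gluster's built-in NFS so it doesn't conflict with the kernel
# NFS server on port 2049
gluster volume set nfsvol nfs.disable on

# On each server, mount the volume from itself via FUSE (/etc/fstab)
localhost:/nfsvol  /export/nfsvol  glusterfs  defaults,_netdev  0 0

# Re-export the FUSE mount with the kernel NFS server (/etc/exports);
# exporting a FUSE filesystem needs an explicit fsid
/export/nfsvol  192.168.1.0/24(rw,sync,no_root_squash,fsid=10)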

Thanks

Scot Kreienkamp
skre...@la-z-boy.com






[Gluster-users] /etc/exports functionality?

2012-05-31 Thread Scot Kreienkamp
Hi,

I have an NFS server that I would like to move over to Gluster.  I have a
test Gluster 3.3 setup (thanks for the shiny new release!), and I'm looking to
replicate the setup I have now with my exports file.  I have one main
directory and many subdirectories that are mounted remotely via NFS on various
hosts.  I don't want to mount the entire filesystem on the remote hosts, as
that leaves too much room for error and mischief.  Is there a way to control
access to a specific shared subdirectory, either via NFS or the FUSE client?
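For reference, this is what that control looks like in a plain kernel NFS
exports file, and Gluster's built-in NFS server has per-directory export
options that appear to cover the same need (volume and directory names here
are illustrative; verify the option names against your 3.3 build):

# Kernel NFS: export only specific subdirectories (/etc/exports)
/data/main/app1  host1(rw,sync)
/data/main/app2  host2(ro,sync)

# Gluster built-in NFS: export only the named subdirectories of the volume
gluster volume set testvol nfs.export-dir /app1,/app2
gluster volume set testvol nfs.export-volumes off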

Thanks!

Scot Kreienkamp
skre...@la-z-boy.com






Re: [Gluster-users] Some benchmarks for anyone that's interested..

2012-05-02 Thread Scot Kreienkamp
Amar,

I am using local filesystem access so I can do a one-way rsync of specific
directories to a remote machine.  I can't get geo-replication working anyway,
but even if I could, there doesn't appear to be any way to tell it to
geo-replicate only specific directories.
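The one-way copy being described would look something like this (the mount
point, directory names, and target host are illustrative):

# Push selected directories from the locally mounted volume to a remote
# machine; --delete keeps the remote copy strictly one-way
rsync -a --delete /mnt/glustervol/app1/ remotehost:/backup/app1/
rsync -a --delete /mnt/glustervol/app2/ remotehost:/backup/app2/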

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Amar Tumballi
Sent: Wednesday, May 02, 2012 8:10 AM
To: lejeczek
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Some benchmarks for anyone that's interested..

On 05/02/2012 02:22 PM, lejeczek wrote:
 Thanks for posting.
 I'd be curious to see what kind of disproportion you get between the raw fs
 and a single-brick volume with a local FUSE mountpoint that effectively
 points back to the same raw fs.
 From my quick tests I saw a massive gap between the two.
 Thanks


 Tests are:
 Single Disk (direct, no gluster)
 Gluster Replicated
 Gluster Striped Replicated
 Gluster Distributed Replicated
 Gluster Stripe


Hi All,

I would like to clarify a few things before someone runs benchmarks on
GlusterFS.

First of all, GlusterFS is not designed or intended to be used as a local
filesystem, i.e., it should not be benchmarked without the network in the
picture.  Please do let us know of exact use cases for running GlusterFS
without a network involved, and we can consider them in our designs.

If you are comparing GlusterFS's performance to your local filesystem
(XFS/ext4/btrfs etc.), the numbers will look bad, for sure (at least for the
near future).

This is the main reason we recommend understanding the use case before
deploying GlusterFS.  Run benchmarks with a workload similar to your real
one, because the pattern of fops, the type of volume, and the type of
hardware and network all have an effect on the numbers you will get.
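Concretely, that means putting the network in the picture and shaping the
test like the real workload, e.g. (server and volume names illustrative):

# Mount the volume over the network from a separate client machine
mount -t glusterfs server1:/testvol /mnt/testvol

# A single streaming write measures one access pattern...
dd if=/dev/zero of=/mnt/testvol/big bs=1M count=1024 conv=fdatasync

# ...many small files measure a very different one
for i in $(seq 1 1000); do echo data > /mnt/testvol/small.$i; done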


Regards,
Amar





Re: [Gluster-users] unable to get Geo-replication working

2012-04-30 Thread Scot Kreienkamp
In my case it is already installed.

[root@hptv3130 ~]# rpm -qa|grep -i gluster
glusterfs-fuse-3.2.6-1.x86_64
glusterfs-debuginfo-3.2.6-1.x86_64
glusterfs-geo-replication-3.2.6-1.x86_64
glusterfs-rdma-3.2.6-1.x86_64
glusterfs-core-3.2.6-1.x86_64

Any more ideas?

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com

-Original Message-
From: Greg Swift [mailto:gregsw...@gmail.com]
Sent: Monday, April 30, 2012 10:41 AM
To: Scot Kreienkamp
Cc: Mohit Anchlia; gluster-users@gluster.org
Subject: Re: [Gluster-users] unable to get Geo-replication working

I spent a lot of time troubleshooting this setup.  The resolution for
me was making sure the glusterfs-geo-replication software was
installed on the target system.

http://docs.redhat.com/docs/en-US/Red_Hat_Storage/2/html/User_Guide/chap-User_Guide-Geo_Rep-Preparation-Minimum_Reqs.html
States: Before deploying Geo-replication, you must ensure that both
Master and Slave are Red Hat Storage instances.

I realize that in a strictly literal sense this tells you that you
need the geo-replication software on the slave; however, it would be
better to state it clearly.  A geo-replication target that isn't running
glusterfs needs only glusterfs-{core,geo-replication}, not a full RH
Storage instance.

-greg
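On a RHEL-family slave that would be something like the following, using the
package names from the RPM list quoted earlier in the thread:

# On the geo-replication slave (assumes the glusterfs RPMs are available
# in a configured repository)
yum install glusterfs-core glusterfs-geo-replication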


Re: [Gluster-users] unable to get Geo-replication working

2012-04-30 Thread Scot Kreienkamp
Sure, although as I mentioned in my original post, I already tried this.

[root@retv3130 rets5030]# ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /etc/glusterd/geo-replication/secret.pem hptv3130 
ls -ld /nfs
setterm: $TERM is not defined.
drwxrwxrwx 2 root root 4096 Apr 26 16:06 /nfs

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: Greg Swift [mailto:gregsw...@gmail.com]
Sent: Monday, April 30, 2012 12:09 PM
To: Scot Kreienkamp
Cc: Mohit Anchlia; gluster-users@gluster.org
Subject: Re: [Gluster-users] unable to get Geo-replication working

Okay... so can you try the following:

# ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/etc/glusterd/geo-replication/secret.pem hptv3130 ls -ld /nfs

-greg


Re: [Gluster-users] unable to get Geo-replication working

2012-04-27 Thread Scot Kreienkamp
Sure

[root@retv3130 RMSNFSMOUNT]# gluster peer status
Number of Peers: 1

Hostname: retv3131
Uuid: 450cc731-60be-47be-a42d-d856a03dac01
State: Peer in Cluster (Connected)


[root@hptv3130 ~]# gluster peer status
No peers present


[root@retv3130 ~]# gluster volume geo-replication RMSNFSMOUNT 
root@hptv3130:/nfs status

MASTER       SLAVE               STATUS
--------------------------------------------------
RMSNFSMOUNT  root@hptv3130:/nfs  faulty




Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com

From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Friday, April 27, 2012 10:58 AM
To: Scot Kreienkamp
Subject: Re: [Gluster-users] unable to get Geo-replication working

Can you check the output of gluster volume geo-replication MASTER SLAVE
status?  Also, run gluster peer status on both MASTER and SLAVE, and paste the
results here.
On Fri, Apr 27, 2012 at 6:53 AM, Scot Kreienkamp skre...@la-z-boy.com wrote:
Hey everyone,

I'm trying to get geo-replication working from a two-brick replicated volume
to a single directory on a remote host.  I can ssh to the destination as
either georep-user or root, from either root or georep-user, with no password,
using the default ssh command given by the config command: ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/etc/glusterd/geo-replication/secret.pem.  All the glusterfs RPMs are
installed on the remote host.  There are no firewalls running on any of the
hosts and no firewalls in between them.  The remote_gsyncd command is correct,
as I can copy and paste it to the command line and run it on both source hosts
and the destination host.  I'm running the current production version of
glusterfs, 3.2.6, with rsync 3.0.9, fuse-2.8.3, OpenSSH 5.3, and Python 2.6.6
on RHEL 6.2.  The remote directory is set to 777, world read-write, so there
are no permission errors.

I'm using this command to start replication: gluster volume geo-replication 
RMSNFSMOUNT hptv3130:/nfs start

Whenever I try to initiate geo-replication, the status goes to starting for
about 30 seconds, then goes to faulty.  On the slave I get these messages
repeating in the geo-replication-slaves log:

[2012-04-27 09:37:59.485424] I [resource(slave):201:service_loop] FILE: slave 
listening
[2012-04-27 09:38:05.413768] I [repce(slave):60:service_loop] RepceServer: 
terminating on reaching EOF.
[2012-04-27 09:38:15.35907] I [resource(slave):207:service_loop] FILE: 
connection inactive for 120 seconds, stopping
[2012-04-27 09:38:15.36382] I [gsyncd(slave):302:main_i] top: exiting.
[2012-04-27 09:38:19.952683] I [gsyncd(slave):290:main_i] top: syncing:
file:///nfs
[2012-04-27 09:38:19.955024] I [resource(slave):201:service_loop] FILE: slave 
listening


I get these messages in etc-glusterfs-glusterd.vol.log on the slave:

[2012-04-27 09:39:23.667930] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint is
not connected), peer (127.0.0.1:1021)
[2012-04-27 09:39:43.736138] I [glusterd-handler.c:3226:glusterd_handle_getwd] 
0-glusterd: Received getwd req
[2012-04-27 09:39:43.740749] W [socket.c:1494:__socket_proto_state_machine]
0-socket.management: reading from socket failed. Error (Transport endpoint is
not connected), peer (127.0.0.1:1023)

As I understand it from searching the list, though, that message is benign
and can be ignored.
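When a session flips to faulty like this, the effective settings for the
session can be dumped with the config subcommand mentioned above (option key
names per the 3.2 admin guide; worth verifying against your build):

# Show all effective geo-replication settings for this session
gluster volume geo-replication RMSNFSMOUNT hptv3130:/nfs config

# Individual keys can be read the same way, e.g. the session log file
gluster volume geo-replication RMSNFSMOUNT hptv3130:/nfs config log-file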


Here are tails of all the logs on one of the sources:

[root@retv3130 RMSNFSMOUNT]# tail 
ssh%3A%2F%2Fgeorep-user%4010.2.1.60%3Afile%3A%2F%2F%2Fnfs.gluster.log
+--+
[2012-04-26 16:16:40.804047] E [socket.c:1685:socket_connect_finish] 
0-RMSNFSMOUNT-client-1: connection to  failed (Connection refused)
[2012-04-26 16:16:40.804852] I [rpc-clnt.c:1536:rpc_clnt_reconfig] 
0-RMSNFSMOUNT-client-0: changing port to 24009 (from 0)
[2012-04-26 16:16:44.779451] I [rpc-clnt.c:1536:rpc_clnt_reconfig] 
0-RMSNFSMOUNT-client-1: changing port to 24010 (from 0)
[2012-04-26 16:16:44.855903] I 
[client-handshake.c:1090:select_server_supported_programs] 
0-RMSNFSMOUNT-client-0: Using Program GlusterFS 3.2.6, Num (1298437), Version 
(310)
[2012-04-26 16:16:44.856893] I [client-handshake.c:913:client_setvolume_cbk] 
0-RMSNFSMOUNT-client-0: Connected to 10.170.1.222:24009, attached to remote
volume '/nfs'.
[2012-04-26 16:16:44.856943] I [afr-common.c:3141:afr_notify] 
0-RMSNFSMOUNT-replicate-0: Subvolume 'RMSNFSMOUNT-client-0' came back up; going 
online.
[2012-04-26 16:16:44.866734] I [fuse-bridge.c:3339:fuse_graph_setup] 0-fuse: 
switched to graph 0
[2012-04-26 16:16:44.867391] I [fuse-bridge.c:3241:fuse_thread_proc] 0-fuse: 
unmounting /tmp/gsyncd-aux-mount-8zMs0J
[2012-04-26 16:16:44.868538] W

Re: [Gluster-users] unable to get Geo-replication working

2012-04-27 Thread Scot Kreienkamp
I am trying to set up geo-replication between a gluster volume and a
non-gluster volume, yes.  The command I used to start geo-replication is:

gluster volume geo-replication RMSNFSMOUNT hptv3130:/nfs start


Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com

From: Mohit Anchlia [mailto:mohitanch...@gmail.com]
Sent: Friday, April 27, 2012 12:46 PM
To: Scot Kreienkamp
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] unable to get Geo-replication working

Are you trying to set up geo-replication between a gluster volume and a
non-gluster volume, or between two gluster volumes?

It looks like there might be a configuration issue here. Please share the
commands you used to configure geo-replication.

