't make complete sense. Anyway, thanks for your help,
Deepak
On Wed, Nov 18, 2015 at 1:29 AM, Deepak Ravi wrote:
> Alright, here you go. Slaves xfs1 and xfs2:
>
> *[root@xfs1 ~]*# cat
> /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2
> d7dbe99ee6d5\:glust
s.
>
> On the slave nodes, look for errors in
> /var/log/glusterfs/geo-replication-slaves/*.log and
> /var/log/glusterfs/geo-replication-slaves/*.gluster.log
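For anyone following along, the log check suggested above can be scripted. The log directory is the one quoted in the thread; the error patterns are just a rough guess on my part, not an official list:

```shell
# Scan slave-side geo-replication logs for likely failures.
# LOGDIR default follows the path mentioned in the thread; override as needed.
LOGDIR="${LOGDIR:-/var/log/glusterfs/geo-replication-slaves}"
grep -inE 'error|critical|failed' "$LOGDIR"/*.log "$LOGDIR"/*.gluster.log 2>/dev/null \
  || echo "no matches (or no logs under $LOGDIR)"
```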
>
> regards
> Aravinda
>
> On 11/17/2015 10:02 PM, Deepak Ravi wrote:
>
> I also noted that the second master gf
11:29 0:00 grep
--color=auto gsyncd
[root@xfs2 ec2-user]#
On Tue, Nov 17, 2015 at 12:39 AM, Aravinda wrote:
> One status row should show Active and the other should show Passive. Please
> provide the logs from the gfs1 and gfs2 nodes
> (/var/log/glusterfs/geo-replication/gvol/*.log).
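The Active/Passive rows Aravinda refers to come from the geo-replication status command. A sketch only; SLAVE_HOST and SLAVE_VOL are placeholders I made up, since the thread truncates the real slave names (the master volume gvol is from the thread):

```shell
# Sketch: requires a live Gluster cluster with a configured geo-rep session.
# SLAVE_HOST::SLAVE_VOL are hypothetical placeholders.
gluster volume geo-replication gvol SLAVE_HOST::SLAVE_VOL status
```

With a two-node replicated master, one brick's row should report Active and the other Passive.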
>
>
Hi all,
I'm working on a Geo-replication setup that I'm having issues with.
Situation:
- In the east region of AWS, I created a replicated volume between 2
nodes; let's call this volume *gvol*
- In the west region of AWS, I created another replicated volume between 2
nodes; let's call