Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-04 Thread Saravanakumar Arumugam
Hi, On 02/03/2016 08:09 PM, ML mail wrote: Dear Aravinda, Thank you for the analysis and submitting a patch for this issue. I hope it can make it into the next GlusterFS release 3.7.7. As suggested I ran the find_gfid_issues.py script on my brick on the two master nodes and slave nodes but

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-04 Thread ML mail
That's correct, I had in total 394 files and directories which were not present on either of my two master nodes' bricks. So as you suggested I have now stopped the geo-rep, deleted the affected files and directories on the slave node, and restarted the geo-rep. It's all clean again but I will
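The cleanup described above boils down to finding entries that exist on the slave brick but on neither master brick. A minimal sketch of that comparison (my own hypothetical helper, not the actual find_gfid_issues.py script; file names are made up):

```python
# Hypothetical sketch: list entries present on the slave brick but absent
# from every master brick -- these are the stale files to delete while
# geo-replication is stopped.  Bricks are modelled as sets of file names.
def orphans_on_slave(master_bricks, slave_brick):
    on_masters = set().union(*master_bricks)
    return sorted(set(slave_brick) - on_masters)

masters = [{"a.svg", "b.svg"}, {"a.svg", "b.svg"}]   # two master nodes
slave = {"a.svg", "b.svg", "stale.svg.part"}
print(orphans_on_slave(masters, slave))  # ['stale.svg.part']
```

In practice the listings would come from walking each brick directory; the set difference is the core of the check.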

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-03 Thread ML mail
Dear Aravinda, Thank you for the analysis and submitting a patch for this issue. I hope it can make it into the next GlusterFS release 3.7.7. As suggested I ran the find_gfid_issues.py script on my brick on the two master nodes and slave nodes but the only output it shows to me is the

[Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Hello, I just set up distributed geo-replication to a slave on my two-node replicated volume and noticed quite a few error messages (around 70 of them) in the slave's brick log file: The exact log file is: /var/log/glusterfs/bricks/data-myvolume-geo-brick.log [2016-01-31 22:19:29.524370] E
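Counting how many such errors a brick log contains can be scripted. A small sketch, assuming the `[timestamp] LEVEL ...` prefix format shown in the excerpt above (real GlusterFS log layouts may differ across versions):

```python
import re

# Match the "[2016-01-31 22:19:29.524370] E " style prefix from the log
# excerpt above; E = error, W = warning, I = info.
LOG_LINE = re.compile(r"^\[(?P<ts>[\d\- :.]+)\] (?P<level>[EWI]) ")

def count_errors(lines):
    """Count E-level entries in an iterable of brick log lines."""
    n = 0
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "E":
            n += 1
    return n

sample = [
    "[2016-01-31 22:19:29.524370] E [posix.c] setting gfid failed",
    "[2016-01-31 22:19:30.000001] I [server.c] client connected",
]
print(count_errors(sample))  # 1
```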

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread Saravanakumar Arumugam
On 02/01/2016 07:22 PM, ML mail wrote: I just found out I needed to run the getfattr on a mount and not on the glusterfs server directly. So here is the additional output you asked for: # getfattr -n glusterfs.gfid.string -m . logo-login-09.svg # file: logo-login-09.svg

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
I just found out I needed to run the getfattr on a mount and not on the glusterfs server directly. So here is the additional output you asked for: # getfattr -n glusterfs.gfid.string -m . logo-login-09.svg # file: logo-login-09.svg glusterfs.gfid.string="1c648409-e98b-4544-a7fa-c2aef87f92ad"
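When comparing GFIDs across many files, the getfattr output quoted above can be parsed programmatically. A small sketch (`parse_gfid` is a hypothetical helper of mine, not part of any GlusterFS tooling; the sample string is copied from the message):

```python
# Extract the GFID value from `getfattr -n glusterfs.gfid.string` output,
# which quotes the value as glusterfs.gfid.string="...".
def parse_gfid(getfattr_output):
    for line in getfattr_output.splitlines():
        if line.startswith("glusterfs.gfid.string="):
            return line.split("=", 1)[1].strip('"')
    return None

sample = '''# file: logo-login-09.svg
glusterfs.gfid.string="1c648409-e98b-4544-a7fa-c2aef87f92ad"'''
print(parse_gfid(sample))  # 1c648409-e98b-4544-a7fa-c2aef87f92ad
```

Running this for the same path on a master mount and a slave mount and comparing the two strings is one way to spot a GFID mismatch.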

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Hi, Thank you for your answer, below is the output of the requested commands. There is just one issue with the GFID, as it does not seem to work. I am running the getfattr command on the master, but if I run it on the slave node it also says operation not supported. # getfattr -n
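The "operation not supported" error above is consistent with `glusterfs.gfid.string` being a virtual xattr that only a GlusterFS client mount serves, which is what the follow-up message discovered. On the brick itself the GFID is stored as the raw 16-byte `trusted.gfid` xattr; formatting those bytes as a UUID gives the same string. A sketch of that conversion (the raw bytes here are example data I constructed, not taken from the thread):

```python
import uuid

# trusted.gfid on a brick is 16 raw bytes; uuid.UUID renders them in the
# canonical 8-4-4-4-12 form that glusterfs.gfid.string reports on a mount.
raw_gfid = bytes.fromhex("1c648409e98b4544a7fac2aef87f92ad")
print(str(uuid.UUID(bytes=raw_gfid)))  # 1c648409-e98b-4544-a7fa-c2aef87f92ad
```

On a live brick the bytes would come from something like `os.getxattr(path, "trusted.gfid")`, run as root on the brick directory.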

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread Saravanakumar Arumugam
Hi, On 02/01/2016 02:14 PM, ML mail wrote: Hello, I just set up distributed geo-replication to a slave on my two-node replicated volume and noticed quite a few error messages (around 70 of them) in the slave's brick log file: The exact log file is:

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread Aravinda
Hi ML, We analyzed the issue. It looks like the changelog is replayed, possibly because of a Geo-rep worker crash, an Active/Passive switch, or both Geo-rep workers becoming active. From the changelogs: CREATE logo-login-04.svg.part, RENAME logo-login-04.svg.part logo-login-04.svg. When it is replayed,
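A toy model of the failure mode described above may make the replay problem concrete. This is my own sketch of the mechanism, not GlusterFS code, and it assumes two things the analysis implies: entry operations are keyed by GFID, and a replayed CREATE whose recorded GFID is already taken ends up with a different one (the "Setting gfid failed" error):

```python
import itertools

fresh = itertools.count(1)  # generator for fallback GFIDs

def replay(changelog, passes=2):
    """Apply a changelog `passes` times to an empty slave (name -> gfid)."""
    slave, in_use = {}, set()
    for op in changelog * passes:
        if op[0] == "CREATE":
            _, name, gfid = op
            if gfid in in_use:            # "Setting gfid failed": the
                gfid = f"stray-{next(fresh)}"  # entry gets some other gfid
            slave[name] = gfid
            in_use.add(gfid)
        elif op[0] == "RENAME":           # entry ops keyed by gfid: skip
            _, gfid, src, dst = op        # if the source gfid mismatches
            if slave.get(src) == gfid:
                slave[dst] = slave.pop(src)
    return slave

changelog = [
    ("CREATE", "logo-login-04.svg.part", "gfid-1"),
    ("RENAME", "gfid-1", "logo-login-04.svg.part", "logo-login-04.svg"),
]
print(sorted(replay(changelog)))
# ['logo-login-04.svg', 'logo-login-04.svg.part'] -- a stray .part remains
```

On the second pass the CREATE recreates the .part file under a different GFID, and the replayed RENAME no longer matches it, so the stray .part entry is left on the slave, which would match the orphan files reported later in the thread.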

Re: [Gluster-users] Setting gfid failed on slave geo-rep node

2016-02-01 Thread ML mail
Sure, I will just send it to you through an encrypted cloud storage app and send you the password via private mail. Regards ML On Monday, February 1, 2016 3:14 PM, Saravanakumar Arumugam wrote: On 02/01/2016 07:22 PM, ML mail wrote: > I just found out I needed to run