Re: [Gluster-users] Unable to setup geo replication

2019-12-01 Thread Tan, Jian Chern
Still not working :(

[jiancher@jfsotc22 mnt]$ sudo gluster vol geo-rep jfsotc22-gv0 
pgsotc10.png.intel.com::pgsotc10-gv0 config | grep sync_xattrs
sync_xattrs:false
use_rsync_xattrs:false
[jiancher@jfsotc22 mnt]$ tail -n 30 
/var/log/glusterfs/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.log
[2019-12-02 01:55:06.407781] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Initializing...
[2019-12-02 01:55:06.407979] I [monitor(monitor):157:monitor] Monitor: starting 
gsyncd worker   brick=/data/glusterbrick/gv0
slave_node=pgsotc10.png.intel.com
[2019-12-02 01:55:06.474308] I [gsyncd(agent /data/glusterbrick/gv0):308:main] 
: Using session config file 
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.conf
[2019-12-02 01:55:06.475533] I [changelogagent(agent 
/data/glusterbrick/gv0):72:__init__] ChangelogAgent: Agent listining...
[2019-12-02 01:55:06.477900] I [gsyncd(worker /data/glusterbrick/gv0):308:main] 
: Using session config file
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.conf
[2019-12-02 01:55:06.487359] I [resource(worker 
/data/glusterbrick/gv0):1366:connect_remote] SSH: Initializing SSH connection 
between master and slave...
[2019-12-02 01:55:10.901315] I [resource(worker 
/data/glusterbrick/gv0):1413:connect_remote] SSH: SSH connection between master 
and slave established.  duration=4.4138
[2019-12-02 01:55:10.901653] I [resource(worker 
/data/glusterbrick/gv0):1085:connect] GLUSTER: Mounting gluster volume 
locally...
[2019-12-02 01:55:11.948481] I [resource(worker 
/data/glusterbrick/gv0):1108:connect] GLUSTER: Mounted gluster volume   
duration=1.0466
[2019-12-02 01:55:11.948833] I [subcmds(worker 
/data/glusterbrick/gv0):80:subcmd_worker] : Worker spawn successful. 
Acknowledging back to monitor
[2019-12-02 01:55:13.969974] I [master(worker 
/data/glusterbrick/gv0):1603:register] _GMaster: Working dir  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/data-glusterbrick-gv0
[2019-12-02 01:55:13.970411] I [resource(worker 
/data/glusterbrick/gv0):1271:service_loop] GLUSTER: Register time   
time=1575251713
[2019-12-02 01:55:13.984259] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):281:set_active] GeorepStatus: Worker Status Change  
status=Active
[2019-12-02 01:55:13.984892] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):253:set_worker_crawl_status] GeorepStatus: Crawl Status 
Change  status=History Crawl
[2019-12-02 01:55:13.985096] I [master(worker 
/data/glusterbrick/gv0):1517:crawl] _GMaster: starting history crawl  
turns=1 stime=None  etime=1575251713entry_stime=None
[2019-12-02 01:55:13.985174] I [resource(worker 
/data/glusterbrick/gv0):1287:service_loop] GLUSTER: No stime available, using 
xsync crawl
[2019-12-02 01:55:13.992273] I [master(worker 
/data/glusterbrick/gv0):1633:crawl] _GMaster: starting hybrid crawl   
stime=None
[2019-12-02 01:55:13.993591] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):253:set_worker_crawl_status] GeorepStatus: Crawl Status 
Change  status=Hybrid Crawl
[2019-12-02 01:55:14.994798] I [master(worker 
/data/glusterbrick/gv0):1644:crawl] _GMaster: processing xsync changelog  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/data-glusterbrick-gv0/xsync/XSYNC-CHANGELOG.1575251713
[2019-12-02 01:55:15.470127] I [master(worker 
/data/glusterbrick/gv0):1954:syncjob] Syncer: Sync Time Taken job=1   
num_files=2 return_code=14  duration=0.0165
[2019-12-02 01:55:15.470312] E [syncdutils(worker 
/data/glusterbrick/gv0):809:errlog] Popen: command returned error cmd=rsync 
-aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs 
--existing --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 
22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-47gkywzh/d91d12d424b6c44691090c3a561b932d.sock 
pgsotc10.png.intel.com:/proc/27159/cwd   error=14
[2019-12-02 01:55:15.477034] I [repce(agent 
/data/glusterbrick/gv0):97:service_loop] RepceServer: terminating on reaching 
EOF.
[2019-12-02 01:55:15.953736] I [monitor(monitor):278:monitor] Monitor: worker 
died in startup phase brick=/data/glusterbrick/gv0
[2019-12-02 01:55:15.955809] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Faulty


From: Kotresh Hiremath Ravishankar 
Sent: Monday, December 2, 2019 2:37 AM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Hi,

Please try disabling xattr sync and see if geo-rep works fine:

gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync_xattrs false


On Thu, Nov 28, 2019 at 1:29 PM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
Alright so it seems to work with some 

Re: [Gluster-users] Unable to setup geo replication

2019-12-01 Thread Kotresh Hiremath Ravishankar
Hi,

Please try disabling xattr sync and see if geo-rep works fine:

gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config sync_xattrs false
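For the session used elsewhere in this thread (master volume jfsotc22-gv0, slave 
pgsotc10.png.intel.com::pgsotc10-gv0), that would presumably be:

gluster volume geo-replication jfsotc22-gv0 pgsotc10.png.intel.com::pgsotc10-gv0 config sync_xattrs false
gluster volume geo-replication jfsotc22-gv0 pgsotc10.png.intel.com::pgsotc10-gv0 config | grep sync_xattrs

The second command only re-reads the session config to confirm the new value.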


On Thu, Nov 28, 2019 at 1:29 PM Tan, Jian Chern 
wrote:

> Alright, so it seems to work with some errors, and this is the output I’m
> getting.
>
> [root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc10.png.intel.com:/mnt/
>
> rsync: rsync_xal_set: lsetxattr("/mnt/file1","security.selinux") failed:
> Operation not supported (95)
>
>
>
> Number of files: 1 (reg: 1)
>
> Number of created files: 0
>
> Number of deleted files: 0
>
> Number of regular files transferred: 1
>
> Total file size: 9 bytes
>
> Total transferred file size: 9 bytes
>
> Literal data: 9 bytes
>
> Matched data: 0 bytes
>
> File list size: 0
>
> File list generation time: 0.003 seconds
>
> File list transfer time: 0.000 seconds
>
> Total bytes sent: 152
>
> Total bytes received: 141
>
>
>
> sent 152 bytes  received 141 bytes  65.11 bytes/sec
>
> total size is 9  speedup is 0.03
>
> rsync error: some files/attrs were not transferred (see previous errors)
> (code 23) at main.c(1189) [sender=3.1.3]
>
>
>
> The data is synced over to the other machine when I view the file there
>
> [root@pgsotc10 mnt]# cat file1
>
> testdata
>
> [root@pgsotc10 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Wednesday, November 27, 2019 5:25 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Oh, forgot about that. Just set up passwordless ssh to that particular node
> and try with the default ssh pem key, removing
> /var/lib/glusterd/geo-replication/secret.pem from the command line.
>
>
>
> On Wed, Nov 27, 2019 at 12:43 PM Tan, Jian Chern 
> wrote:
>
> I’m getting this when I run that command, so something’s wrong somewhere, I
> guess.
>
>
>
> [root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/mnt/
>
> gsyncd sibling not found
>
> disallowed rsync invocation
>
> rsync: connection unexpectedly closed (0 bytes received so far) [sender]
>
> rsync error: error in rsync protocol data stream (code 12) at io.c(226)
> [sender=3.1.3]
>
> [root@jfsotc22 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, November 26, 2019 7:22 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Ok, then it should work.
> Could you confirm rsync runs successfully when executed manually, as below?
>
>
>
> 1. On master node,
>  a) # mkdir /mastermnt
>  b) Mount master volume on /mastermnt
>  c) # echo "test data" > /mastermnt/file1
>
> 2. On slave node
>  a) # mkdir /slavemnt
>  b) # Mount slave on /slavemnt
>
>  c) # touch /slavemnt/file1
>
> 3. On master node
>  a) # cd /mastermnt
>
>  b) # rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/slavemnt/
>
> 4. Check for content sync
>
>  a) cat /slavemnt/file1
>
>
>
> On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
> wrote:
>
> Rsync on both the slave and master is version 3.1.3, protocol version 31, so
> both are up to date as far as I know.
>
> The Gluster version on both machines is glusterfs 5.10.
>
> The OS on both machines is Fedora 29 Server Edition.
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, November 26, 2019 3:04 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Error code 14 is related to IPC, where a pipe or fork fails in the rsync
> code. Please upgrade rsync if you have not already, and check that the rsync
> versions on the master and slave are the same.
>
> Which version of gluster are you using?
> What's the host OS?
>
>

Re: [Gluster-users] Unable to setup geo replication

2019-11-27 Thread Tan, Jian Chern
Alright, so it seems to work with some errors, and this is the output I’m getting.
[root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids 
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e 
'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22 
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' 
r...@pgsotc10.png.intel.com:/mnt/
rsync: rsync_xal_set: lsetxattr("/mnt/file1","security.selinux") failed: 
Operation not supported (95)

Number of files: 1 (reg: 1)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 1
Total file size: 9 bytes
Total transferred file size: 9 bytes
Literal data: 9 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.003 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 152
Total bytes received: 141

sent 152 bytes  received 141 bytes  65.11 bytes/sec
total size is 9  speedup is 0.03
rsync error: some files/attrs were not transferred (see previous errors) (code 
23) at main.c(1189) [sender=3.1.3]

The data is synced over to the other machine when I view the file there
[root@pgsotc10 mnt]# cat file1
testdata
[root@pgsotc10 mnt]#
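The lsetxattr failure above is only the security.selinux xattr being refused by
the target mount; the data itself went through, as the cat on the slave shows. A
quick way to confirm that is to repeat the same transfer without --xattrs (a
sketch, assuming the same elided root login on the slave and the gluster mount
at /mnt); this also mirrors the worker's rsync invocation in the 2019-12-01 log
at the top of this archive, where sync_xattrs is false and --xattrs is absent:

rsync -aR0 --inplace --super --stats --numeric-ids --no-implied-dirs --existing --acls --ignore-missing-args file1 -e 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22 -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' root@pgsotc10.png.intel.com:/mnt/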

From: Kotresh Hiremath Ravishankar 
Sent: Wednesday, November 27, 2019 5:25 PM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Oh, forgot about that. Just set up passwordless ssh to that particular node and
try with the default ssh pem key, removing
/var/lib/glusterd/geo-replication/secret.pem from the command line.

On Wed, Nov 27, 2019 at 12:43 PM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
I’m getting this when I run that command, so something’s wrong somewhere, I guess.

[root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids 
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e 
'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22 
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' 
r...@pgsotc11.png.intel.com:/mnt/
gsyncd sibling not found
disallowed rsync invocation
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) 
[sender=3.1.3]
[root@jfsotc22 mnt]#

From: Kotresh Hiremath Ravishankar 
<khire...@redhat.com>
Sent: Tuesday, November 26, 2019 7:22 PM
To: Tan, Jian Chern <jian.chern@intel.com>
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Ok, then it should work.
Could you confirm rsync runs successfully when executed manually, as below?

1. On master node,
 a) # mkdir /mastermnt
 b) Mount master volume on /mastermnt
 c) # echo "test data" > /mastermnt/file1
2. On slave node
 a) # mkdir /slavemnt
 b) # Mount slave on /slavemnt
 c) # touch /slavemnt/file1
3. On master node
 a) # cd /mastermnt
 b) # rsync -aR0 --inplace --super --stats --numeric-ids --no-implied-dirs 
--existing --xattrs --acls --ignore-missing-args file1 -e 'ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22 
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' 
r...@pgsotc11.png.intel.com:/slavemnt/
4. Check for content sync
 a) cat /slavemnt/file1

On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
Rsync on both the slave and master is version 3.1.3, protocol version 31, so
both are up to date as far as I know.
The Gluster version on both machines is glusterfs 5.10.
The OS on both machines is Fedora 29 Server Edition.

From: Kotresh Hiremath Ravishankar 
<khire...@redhat.com>
Sent: Tuesday, November 26, 2019 3:04 PM
To: Tan, Jian Chern <jian.chern@intel.com>
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Error code 14 is related to IPC, where a pipe or fork fails in the rsync code.
Please upgrade rsync if you have not already, and check that the rsync versions
on the master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
I’m new to GlusterFS and trying to set up geo-replication with a master volume
being mirrored to a slave volume on another machine. However, I just can’t seem
to get it to work after starting geo-replication, with the logs showing rsync
failing with error code 14. I can’t seem to find any info about this online, so
any help would be much appreciated.

[2019-11-26 05:46:31.24706] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change  statu

Re: [Gluster-users] Unable to setup geo replication

2019-11-27 Thread Kotresh Hiremath Ravishankar
Oh, forgot about that. Just set up passwordless ssh to that particular node
and try with the default ssh pem key, removing
/var/lib/glusterd/geo-replication/secret.pem from the command line.
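A minimal sketch of that, assuming the slave login is root (it appears elided as
"r..." in this archive) and the default key at ~/.ssh/id_rsa on the master:

# on the master node
ssh-keygen -t rsa                        # only needed if no default key exists yet
ssh-copy-id root@pgsotc11.png.intel.com  # install the public key on the slave
rsync -aR0 --inplace --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22 -oControlMaster=auto' root@pgsotc11.png.intel.com:/mnt/

Dropping -i /var/lib/glusterd/geo-replication/secret.pem matters because that
key is typically restricted to a forced gsyncd command in the slave's
authorized_keys, which is why the manual rsync over it was rejected with
"disallowed rsync invocation".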

On Wed, Nov 27, 2019 at 12:43 PM Tan, Jian Chern 
wrote:

> I’m getting this when I run that command, so something’s wrong somewhere, I
> guess.
>
>
>
> [root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/mnt/
>
> gsyncd sibling not found
>
> disallowed rsync invocation
>
> rsync: connection unexpectedly closed (0 bytes received so far) [sender]
>
> rsync error: error in rsync protocol data stream (code 12) at io.c(226)
> [sender=3.1.3]
>
> [root@jfsotc22 mnt]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, November 26, 2019 7:22 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Ok, then it should work.
> Could you confirm rsync runs successfully when executed manually, as below?
>
>
>
> 1. On master node,
>  a) # mkdir /mastermnt
>  b) Mount master volume on /mastermnt
>  c) # echo "test data" > /mastermnt/file1
>
> 2. On slave node
>  a) # mkdir /slavemnt
>  b) # Mount slave on /slavemnt
>
>  c) # touch /slavemnt/file1
>
> 3. On master node
>  a) # cd /mastermnt
>
>  b) # rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/slavemnt/
>
> 4. Check for content sync
>
>  a) cat /slavemnt/file1
>
>
>
> On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
> wrote:
>
> Rsync on both the slave and master is version 3.1.3, protocol version 31, so
> both are up to date as far as I know.
>
> The Gluster version on both machines is glusterfs 5.10.
>
> The OS on both machines is Fedora 29 Server Edition.
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, November 26, 2019 3:04 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Error code 14 is related to IPC, where a pipe or fork fails in the rsync
> code. Please upgrade rsync if you have not already, and check that the rsync
> versions on the master and slave are the same.
>
> Which version of gluster are you using?
> What's the host OS?
>
> What's the rsync version?
>
>
>
> On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
> wrote:
>
> I’m new to GlusterFS and trying to set up geo-replication with a master
> volume being mirrored to a slave volume on another machine. However, I just
> can’t seem to get it to work after starting geo-replication, with the logs
> showing rsync failing with error code 14. I can’t seem to find any info about
> this online, so any help would be much appreciated.
>
>
>
> [2019-11-26 05:46:31.24706] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change  status=Initializing...
>
> [2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor:
> starting gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0
> slave_node=pgsotc11.png.intel.com
>
> [2019-11-26 05:46:31.90935] I [gsyncd(agent
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.92105] I [changelogagent(agent
> /data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent
> listining...
>
> [2019-11-26 05:46:31.93148] I [gsyncd(worker
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.102422] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH:
> Initializing SSH connection between master and slave...
>
> [2019-11-26 05:46:50.355233] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH
> connection between master and slave established.duration=19.2526
>
> [2019-11-26 05:46:50.355583] I [resource(worker
> /data/glusterimagebrick/jf

Re: [Gluster-users] Unable to setup geo replication

2019-11-26 Thread Tan, Jian Chern
I’m getting this when I run that command, so something’s wrong somewhere, I guess.

[root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids 
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e 
'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22 
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' 
r...@pgsotc11.png.intel.com:/mnt/
gsyncd sibling not found
disallowed rsync invocation
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) 
[sender=3.1.3]
[root@jfsotc22 mnt]#

From: Kotresh Hiremath Ravishankar 
Sent: Tuesday, November 26, 2019 7:22 PM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Ok, then it should work.
Could you confirm rsync runs successfully when executed manually, as below?

1. On master node,
 a) # mkdir /mastermnt
 b) Mount master volume on /mastermnt
 c) # echo "test data" > /mastermnt/file1
2. On slave node
 a) # mkdir /slavemnt
 b) # Mount slave on /slavemnt
 c) # touch /slavemnt/file1
3. On master node
 a) # cd /mastermnt
 b) # rsync -aR0 --inplace --super --stats --numeric-ids --no-implied-dirs 
--existing --xattrs --acls --ignore-missing-args file1 -e 'ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22 
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' 
r...@pgsotc11.png.intel.com:/slavemnt/
4. Check for content sync
 a) cat /slavemnt/file1

On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
Rsync on both the slave and master is version 3.1.3, protocol version 31, so
both are up to date as far as I know.
The Gluster version on both machines is glusterfs 5.10.
The OS on both machines is Fedora 29 Server Edition.

From: Kotresh Hiremath Ravishankar 
<khire...@redhat.com>
Sent: Tuesday, November 26, 2019 3:04 PM
To: Tan, Jian Chern <jian.chern@intel.com>
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Error code 14 is related to IPC, where a pipe or fork fails in the rsync code.
Please upgrade rsync if you have not already, and check that the rsync versions
on the master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
I’m new to GlusterFS and trying to set up geo-replication with a master volume
being mirrored to a slave volume on another machine. However, I just can’t seem
to get it to work after starting geo-replication, with the logs showing rsync
failing with error code 14. I can’t seem to find any info about this online, so
any help would be much appreciated.

[2019-11-26 05:46:31.24706] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change  status=Initializing...
[2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor: starting 
gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0  
slave_node=pgsotc11.png.intel.com
[2019-11-26 05:46:31.90935] I [gsyncd(agent 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.92105] I [changelogagent(agent 
/data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent 
listining...
[2019-11-26 05:46:31.93148] I [gsyncd(worker 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file   
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.102422] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH: Initializing 
SSH connection between master and slave...
[2019-11-26 05:46:50.355233] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH connection 
between master and slave established.duration=19.2526
[2019-11-26 05:46:50.355583] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting gluster 
volume locally...
[2019-11-26 05:46:51.404998] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted gluster 
volume duration=1.0492
[2019-11-26 05:46:51.405363] I [subcmds(worker 
/data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] : Worker spawn 
successful. Acknowledging back to monitor
[2019-11-26 05:46:53.431502] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working dir  
  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-gluste

Re: [Gluster-users] Unable to setup geo replication

2019-11-26 Thread Kotresh Hiremath Ravishankar
Ok, then it should work.
Could you confirm rsync runs successfully when executed manually, as below?

1. On master node,
 a) # mkdir /mastermnt
 b) Mount master volume on /mastermnt
 c) # echo "test data" > /mastermnt/file1
2. On slave node
 a) # mkdir /slavemnt
 b) # Mount slave on /slavemnt
 c) # touch /slavemnt/file1
3. On master node
 a) # cd /mastermnt
 b) # rsync -aR0 --inplace --super --stats --numeric-ids
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
-oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
r...@pgsotc11.png.intel.com:/slavemnt/
4. Check for content sync
 a) cat /slavemnt/file1
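Spelled out as shell commands, a sketch of the same test, assuming the volume
names used in this thread (jfsotc22-gv0 on the master, pgsotc11-gv0 on the
slave), a root login on the slave (elided as "r..." in this archive), and FUSE
mounts at /mastermnt and /slavemnt:

# on the master node
mkdir -p /mastermnt
mount -t glusterfs localhost:/jfsotc22-gv0 /mastermnt
echo "test data" > /mastermnt/file1

# on the slave node
mkdir -p /slavemnt
mount -t glusterfs localhost:/pgsotc11-gv0 /slavemnt
touch /slavemnt/file1    # --existing only updates files already present on the slave

# back on the master node
cd /mastermnt
rsync -aR0 --inplace --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22 -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem' root@pgsotc11.png.intel.com:/slavemnt/

# on the slave node, the file should now contain "test data"
cat /slavemnt/file1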

On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
wrote:

> Rsync on both the slave and master is version 3.1.3, protocol version 31, so
> both are up to date as far as I know.
>
> The Gluster version on both machines is glusterfs 5.10.
>
> The OS on both machines is Fedora 29 Server Edition.
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, November 26, 2019 3:04 PM
> *To:* Tan, Jian Chern 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Error code 14 is related to IPC, where a pipe or fork fails in the rsync
> code. Please upgrade rsync if you have not already, and check that the rsync
> versions on the master and slave are the same.
>
> Which version of gluster are you using?
> What's the host OS?
>
> What's the rsync version?
>
>
>
> On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
> wrote:
>
> I’m new to GlusterFS and trying to set up geo-replication with a master
> volume being mirrored to a slave volume on another machine. However, I just
> can’t seem to get it to work after starting geo-replication, with the logs
> showing rsync failing with error code 14. I can’t seem to find any info about
> this online, so any help would be much appreciated.
>
>
>
> [2019-11-26 05:46:31.24706] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change  status=Initializing...
>
> [2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor:
> starting gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0
> slave_node=pgsotc11.png.intel.com
>
> [2019-11-26 05:46:31.90935] I [gsyncd(agent
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.92105] I [changelogagent(agent
> /data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent
> listining...
>
> [2019-11-26 05:46:31.93148] I [gsyncd(worker
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.102422] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH:
> Initializing SSH connection between master and slave...
>
> [2019-11-26 05:46:50.355233] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH
> connection between master and slave established.duration=19.2526
>
> [2019-11-26 05:46:50.355583] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting
> gluster volume locally...
>
> [2019-11-26 05:46:51.404998] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted
> gluster volume duration=1.0492
>
> [2019-11-26 05:46:51.405363] I [subcmds(worker
> /data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] : Worker spawn
> successful. Acknowledging back to monitor
>
> [2019-11-26 05:46:53.431502] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working
> dir
> path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-glusterimagebrick-jfsotc22-gv0
>
> [2019-11-26 05:46:53.431846] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1271:service_loop] GLUSTER: Register
> time time=1574747213
>
> [2019-11-26 05:46:53.445589] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):281:set_active] GeorepStatus: Worker
> Status Changestatus=Active
>
> [2019-11-26 05:46:53.446184] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):253:set_worker_crawl_status]
> GeorepStatus: Crawl Status Changestatus=History Crawl
>
> [2019-11-26 05:46:53.446367] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1517:crawl] _GMaster: starting
> history crawlturns=1 sti

Re: [Gluster-users] Unable to setup geo replication

2019-11-25 Thread Tan, Jian Chern
Rsync on both the slave and master is version 3.1.3, protocol version 31, so
both are up to date as far as I know.
The Gluster version on both machines is glusterfs 5.10.
The OS on both machines is Fedora 29 Server Edition.

From: Kotresh Hiremath Ravishankar 
Sent: Tuesday, November 26, 2019 3:04 PM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Error code 14 is related to IPC, where a pipe or fork fails in the rsync code.
Please upgrade rsync if you have not already, and check that the rsync versions
on the master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
<jian.chern@intel.com> wrote:
I’m new to GlusterFS and trying to set up geo-replication with a master volume
being mirrored to a slave volume on another machine. However, I just can’t seem
to get it to work after starting geo-replication, with the logs showing rsync
failing with error code 14. I can’t seem to find any info about this online, so
any help would be much appreciated.

[2019-11-26 05:46:31.24706] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change  status=Initializing...
[2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor: starting 
gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0  
slave_node=pgsotc11.png.intel.com
[2019-11-26 05:46:31.90935] I [gsyncd(agent 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.92105] I [changelogagent(agent 
/data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent 
listining...
[2019-11-26 05:46:31.93148] I [gsyncd(worker 
/data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config 
file   
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
[2019-11-26 05:46:31.102422] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH: Initializing 
SSH connection between master and slave...
[2019-11-26 05:46:50.355233] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH connection 
between master and slave established.duration=19.2526
[2019-11-26 05:46:50.355583] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting gluster 
volume locally...
[2019-11-26 05:46:51.404998] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted gluster 
volume duration=1.0492
[2019-11-26 05:46:51.405363] I [subcmds(worker 
/data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] : Worker spawn 
successful. Acknowledging back to monitor
[2019-11-26 05:46:53.431502] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working dir  
  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-glusterimagebrick-jfsotc22-gv0
[2019-11-26 05:46:53.431846] I [resource(worker 
/data/glusterimagebrick/jfsotc22-gv0):1271:service_loop] GLUSTER: Register time 
time=1574747213
[2019-11-26 05:46:53.445589] I [gsyncdstatus(worker 
/data/glusterimagebrick/jfsotc22-gv0):281:set_active] GeorepStatus: Worker 
Status Changestatus=Active
[2019-11-26 05:46:53.446184] I [gsyncdstatus(worker 
/data/glusterimagebrick/jfsotc22-gv0):253:set_worker_crawl_status] 
GeorepStatus: Crawl Status Changestatus=History Crawl
[2019-11-26 05:46:53.446367] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1517:crawl] _GMaster: starting history 
crawlturns=1 stime=(1574669325, 0)   etime=1574747213
entry_stime=None
[2019-11-26 05:46:54.448994] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1546:crawl] _GMaster: slave's time  
stime=(1574669325, 0)
[2019-11-26 05:46:54.928395] I [master(worker 
/data/glusterimagebrick/jfsotc22-gv0):1954:syncjob] Syncer: Sync Time Taken 
  job=1   num_files=1 return_code=14  duration=0.0162
[2019-11-26 05:46:54.928607] E [syncdutils(worker 
/data/glusterimagebrick/jfsotc22-gv0):809:errlog] Popen: command returned error 
  cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids 
--no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-rgpu74f3/de0855b3336b4c3233934fcbeeb3674c.sock 
pgsotc11.png.intel.com:/proc/29549/cwd  error=14
[2019-11-26 05:46:54.935529] I [repce(agent 
/data/glusterimagebrick/jfsotc22-gv0):97:service_loop] RepceServer: terminating 
on reaching EOF.
[2019-11-26 05:46:55.410444] I [monitor(monitor):278:monitor] Monitor: worker 
died in startup phase

Re: [Gluster-users] Unable to setup geo replication

2019-11-25 Thread Kotresh Hiremath Ravishankar
Error code 14 is related to IPC, where a pipe or fork fails in the rsync code.
Please upgrade rsync if you have not already, and check that the rsync versions
on the master and slave are the same.
Which version of gluster are you using?
What's the host OS?
What's the rsync version?

On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
wrote:

> I’m new to GlusterFS and trying to set up geo-replication with a master
> volume being mirrored to a slave volume on another machine. However, I just
> can’t seem to get it to work after starting geo-replication, with the logs
> showing rsync failing with error code 14. I can’t seem to find any info about
> this online, so any help would be much appreciated.
>
>
>
> [2019-11-26 05:46:31.24706] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change  status=Initializing...
>
> [2019-11-26 05:46:31.24891] I [monitor(monitor):157:monitor] Monitor:
> starting gsyncd workerbrick=/data/glusterimagebrick/jfsotc22-gv0
> slave_node=pgsotc11.png.intel.com
>
> [2019-11-26 05:46:31.90935] I [gsyncd(agent
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.92105] I [changelogagent(agent
> /data/glusterimagebrick/jfsotc22-gv0):72:__init__] ChangelogAgent: Agent
> listining...
>
> [2019-11-26 05:46:31.93148] I [gsyncd(worker
> /data/glusterimagebrick/jfsotc22-gv0):308:main] : Using session config
> file
> path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/gsyncd.conf
>
> [2019-11-26 05:46:31.102422] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1366:connect_remote] SSH:
> Initializing SSH connection between master and slave...
>
> [2019-11-26 05:46:50.355233] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1413:connect_remote] SSH: SSH
> connection between master and slave established.duration=19.2526
>
> [2019-11-26 05:46:50.355583] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1085:connect] GLUSTER: Mounting
> gluster volume locally...
>
> [2019-11-26 05:46:51.404998] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1108:connect] GLUSTER: Mounted
> gluster volume duration=1.0492
>
> [2019-11-26 05:46:51.405363] I [subcmds(worker
> /data/glusterimagebrick/jfsotc22-gv0):80:subcmd_worker] : Worker spawn
> successful. Acknowledging back to monitor
>
> [2019-11-26 05:46:53.431502] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1603:register] _GMaster: Working
> dir
> path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc11.png.intel.com_pgsotc11-gv0/data-glusterimagebrick-jfsotc22-gv0
>
> [2019-11-26 05:46:53.431846] I [resource(worker
> /data/glusterimagebrick/jfsotc22-gv0):1271:service_loop] GLUSTER: Register
> time time=1574747213
>
> [2019-11-26 05:46:53.445589] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):281:set_active] GeorepStatus: Worker
> Status Changestatus=Active
>
> [2019-11-26 05:46:53.446184] I [gsyncdstatus(worker
> /data/glusterimagebrick/jfsotc22-gv0):253:set_worker_crawl_status]
> GeorepStatus: Crawl Status Changestatus=History Crawl
>
> [2019-11-26 05:46:53.446367] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1517:crawl] _GMaster: starting
> history crawlturns=1 stime=(1574669325, 0)
> etime=1574747213entry_stime=None
>
> [2019-11-26 05:46:54.448994] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1546:crawl] _GMaster: slave's time
> stime=(1574669325, 0)
>
> [2019-11-26 05:46:54.928395] I [master(worker
> /data/glusterimagebrick/jfsotc22-gv0):1954:syncjob] Syncer: Sync Time
> Taken   job=1   num_files=1 return_code=14  duration=0.0162
>
> [2019-11-26 05:46:54.928607] E [syncdutils(worker
> /data/glusterimagebrick/jfsotc22-gv0):809:errlog] Popen: command returned
> error   cmd=rsync -aR0 --inplace --files-from=- --super --stats
> --numeric-ids --no-implied-dirs --existing --xattrs --acls
> --ignore-missing-args . -e ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> -p 22 -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-rgpu74f3/de0855b3336b4c3233934fcbeeb3674c.sock
> pgsotc11.png.intel.com:/proc/29549/cwd  error=14
>
> [2019-11-26 05:46:54.935529] I [repce(agent
> /data/glusterimagebrick/jfsotc22-gv0):97:service_loop] RepceServer:
> terminating on reaching EOF.
>
> [2019-11-26 05:46:55.410444] I [monitor(monitor):278:monitor] Monitor:
> worker died in startup phase brick=/data/glusterimagebrick/jfsotc22-gv0
>
> [2019-11-26 05:46:55.412591] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change status=Faulty
>
> [2019-11-26 05:47:05.631944] I
> [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
> Change status=Initializing...
>
> ….
>
>
>
> Thanks!
>
> Jian Chern
>
>