Re: [Gluster-users] Trying to fix files that don't want to heal

2019-12-01 Thread Ashish Pandey


- Original Message -

From: "Gudrun Mareike Amedick"  
To: "Ashish Pandey"  
Cc: "Gluster-users"  
Sent: Friday, November 29, 2019 8:45:13 PM 
Subject: Re: [Gluster-users] Trying to fix files that don't want to heal 

Hi Ashish, 

thanks for your reply. To fulfill the "no IO" requirement, I'll have to wait 
until the second week of December (9th–14th). 

We originally planned to update GlusterFS from 4.1.7 to 5 and then to 6 in 
December. Should we do that upgrade before or after running those scripts? 

>> It would be best to run the scripts before upgrading to a newer version. 
BTW, why are you not planning to upgrade to Gluster 7? 
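>> For reference, the pending-heal counts before and after running the 
scripts can be compared with the standard heal-info command (the volume 
name below is a placeholder): 

# gluster volume heal <VOLNAME> info summary 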


Kind regards 

Gudrun

On Friday, 29.11.2019 at 00:38 -0500, Ashish Pandey wrote: 
> Hey Gudrun, 
> 
> Could you please try to resolve it using the scripts? 
> We have written some scripts, and they are in the final phase of being merged - 
> https://review.gluster.org/#/c/glusterfs/+/23380/ 
> 
> You can find the steps to use these scripts in the README.md file. 
> 
> --- 
> Ashish 
> 
> From: "Gudrun Mareike Amedick"  
> To: "Gluster-users"  
> Sent: Thursday, November 28, 2019 3:57:18 PM 
> Subject: [Gluster-users] Trying to fix files that don't want to heal 
> 
> Hi, 
> 
> I have a distributed-dispersed volume with files that don't want to heal. I'm 
> trying to fix them manually. 
> 
> I'm currently working on a file that is present on all bricks; its GFID exists 
> in the .glusterfs structure, and getfattr shows identical attributes on all 
> bricks. They look like this: 
> 
> # getfattr -m. -d -e hex $brick/$somepath/libssl.so.1.1 
> getfattr: Removing leading '/' from absolute path names 
> # file: $brick/$somepath/libssl.so.1.1 
> trusted.ec.config=0x080602000200 
> trusted.ec.dirty=0x0001 
> trusted.ec.size=0x000a 
> trusted.ec.version=0x00040005 
> trusted.gfid=0xdd7dd64f6bb34b5f891a5e32fe83874f 
> trusted.gfid2path.0c3a5b76c518ef60=0x34663064396234332d343730342d343634352d613834342d3338303532336137346632662f6c696273736c2e736f2e312e31 
> trusted.gfid2path.578ce2ec37aa0f9d=0x31636136323433342d396132642d343039362d616265352d6463353065613131333066632f6c696273736c2e736f2e312e31 
> trusted.glusterfs.quota.1ca62434-9a2d-4096-abe5-dc50ea1130fc.contri.3=0x00029201 
> trusted.glusterfs.quota.4f0d9b43-4704-4645-a844-380523a74f2f.contri.3=0x00029201 
> trusted.pgfid.1ca62434-9a2d-4096-abe5-dc50ea1130fc=0x0001 
> trusted.pgfid.4f0d9b43-4704-4645-a844-380523a74f2f=0x0001 
> 
> pgfid is "parent GFID", right? Both GFIDs refer to a directory in my volume, and 
> both of those directories contain a file named libssl.so.1.1. They seem to be 
> hard links: 
> 
> find $brick/$somepath -samefile $brick/$someotherpath/libssl.so.1.1 
> $brick/$somepath/libssl.so.1 
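> 
> A quick way to confirm the hard link (same placeholder paths as above) is 
> to compare inode numbers and link counts with stat: 
> 
> # stat -c '%i %h %n' $brick/$somepath/libssl.so.1.1 $brick/$someotherpath/libssl.so.1.1 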
> 
> This exceeds the limits of my GlusterFS knowledge. Is that something that 
> can/should happen? If not, is it the reason that file won't heal and how 
> do 
> I fix that? 
> 
> Kind regards 
> 
> Gudrun Amedick 


Re: [Gluster-users] Healing completely loss file on replica 3 volume

2019-12-01 Thread Karthik Subrahmanya
Hi Dmitry,

Answers inline.

On Fri, Nov 29, 2019 at 6:26 PM Dmitry Antipov  wrote:

> I'm trying to manually corrupt data on the bricks

First of all, changing data directly on the backend is not recommended and
is not supported. All operations need to be done from the client mount
point. Only a few special cases require changing some data about the file
directly on the backend.
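
For illustration, a minimal sketch reusing the commands from your example
below - the supported way to rewrite the file is through a client mount, so
that gluster can track the operation:

# mount -t glusterfs [local-ip]:/gv0 /mnt/gv0
# openssl rand 65536 > /mnt/gv0/64K    # tracked by gluster

rather than writing to a brick directly:

# openssl rand 65536 > /root/data0/64K # invisible to gluster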

> (when the volume is
> stopped) and then check whether healing is possible. For example:
>
> Start:
>
> # glusterd --debug
>
> Bricks (on EXT4 mounted with 'rw,relatime'):
>
> # mkdir /root/data0
> # mkdir /root/data1
> # mkdir /root/data2
>
> Volume:
>
> # gluster volume create gv0 replica 3 [local-ip]:/root/data0
> [local-ip]:/root/data1  [local-ip]:/root/data2 force
> volume create: gv0: success: please start the volume to access data
> # gluster volume start gv0
> volume start: gv0: success
>
> Mount:
>
> # mkdir /mnt/gv0
> # mount -t glusterfs [local-ip]:/gv0 /mnt/gv0
> WARNING: getfattr not found, certain checks will be skipped..
>
> Create file:
>
> # openssl rand 65536 > /mnt/gv0/64K
> # md5sum /mnt/gv0/64K
> ca53c9c1b6cd78f59a91cd1b0b866ed9 /mnt/gv0/64K
>
> Umount and down the volume:
>
> # umount /mnt/gv0
> # gluster volume stop gv0
> Stopping volume will make its data inaccessible. Do you want to continue?
> (y/n) y
> volume stop: gv0: success
>
> Check data on bricks:
>
> # md5sum /root/data[012]/64K
> ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data0/64K
> ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data1/64K
> ca53c9c1b6cd78f59a91cd1b0b866ed9  /root/data2/64K
>
> Seems OK. Then garbage all:
>
> # openssl rand 65536 > /root/data0/64K
> # openssl rand 65536 > /root/data1/64K
> # openssl rand 65536 > /root/data2/64K
> # md5sum /root/data[012]/64K
> c69096d15007578dab95d9940f89e167  /root/data0/64K
> b85292fb60f1a1d27f1b0e3bc6bfdfae  /root/data1/64K
> c2e90335cc2f600ddab5c53a992b2bb6  /root/data2/64K
>
> Restart the volume and start full heal:
>
> # gluster volume start gv0
> volume start: gv0: success
> # /usr/glusterfs/sbin/gluster volume heal gv0 full
> Launching heal operation to perform full self heal on volume gv0 has been
> successful
> Use heal info commands to check status.
>
> Finally:
>
> # gluster volume heal gv0 info summary
>
> Brick [local-ip]:/root/data0
> Status: Connected
> Total Number of entries: 0
> Number of entries in heal pending: 0
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick [local-ip]:/root/data1
> Status: Connected
> Total Number of entries: 0
> Number of entries in heal pending: 0
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick [local-ip]:/root/data2
> Status: Connected
> Total Number of entries: 0
> Number of entries in heal pending: 0
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Since all 3 copies differ from each other, majority voting is useless,
> and the data (IIUC) should be marked as split-brain at least. But I'm
> seeing just zeroes everywhere above. Why is that so?
>
Since the data was changed directly on the backend, gluster is not aware of
these changes. If changes made from the client mount fail on some bricks,
only those will be recognized and marked by gluster so that it can heal
them when possible. Since this is a replica 3 volume, if you end up in
split-brain while doing the operations on the mount point, that would be a
bug. As far as this case is concerned, it is not a bug or issue on the
gluster side.
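
To see this from gluster's side, here is a minimal check on the brick paths
from your example (assuming the default AFR changelog xattrs): a failed
client-side write would leave non-zero trusted.afr.* entries on the healthy
bricks, e.g.

# getfattr -d -m trusted.afr -e hex /root/data0/64K

Since your writes bypassed gluster entirely, no such pending markers exist,
which is why heal info reports zero entries.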

HTH,
Karthik

>
> Thanks in advance,
> Dmitry


Re: [Gluster-users] [ovirt-users] Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing

2019-12-01 Thread Krutika Dhananjay
Sorry about the late response.

I looked at the logs. These errors originate from the posix-acl
translator:



[2019-11-17 07:55:47.090065] E [MSGID: 115050]
[server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162496:
LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.6
(be318638-e8a0-4c6d-977d-7a937aa84806/5985adcb-0f4d-4317-8a26-1652973a2350.6),
client:
CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0,
error-xlator: data_fast-access-control [Permission denied]

[2019-11-17 07:55:47.090174] I [MSGID: 139001]
[posix-acl.c:263:posix_acl_log_permit_denied] 0-data_fast-access-control:
client:
CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0,
gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
req(uid:36,gid:36,perm:1,ngrps:3),
ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
[Permission denied]

[2019-11-17 07:55:47.090209] E [MSGID: 115050]
[server-rpc-fops_v2.c:158:server4_lookup_cbk] 0-data_fast-server: 162497:
LOOKUP /.shard/5985adcb-0f4d-4317-8a26-1652973a2350.7
(be318638-e8a0-4c6d-977d-7a937aa84806/5985adcb-0f4d-4317-8a26-1652973a2350.7),
client:
CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0,
error-xlator: data_fast-access-control [Permission denied]

[2019-11-17 07:55:47.090299] I [MSGID: 139001]
[posix-acl.c:263:posix_acl_log_permit_denied] 0-data_fast-access-control:
client:
CTX_ID:8bff2d95-4629-45cb-a7bf-2412e48896bc-GRAPH_ID:0-PID:13394-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0,
gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
req(uid:36,gid:36,perm:1,ngrps:3),
ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
[Permission denied]
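
The second message is the telling one: the client issues the request as
uid 36 / gid 36, but the ctx gluster holds is uid 0, gid 0 with perm 000.
As a quick check of the backend state (a sketch only; $BRICK is a
placeholder, not from the original logs):

# stat -c '%u %g %a %n' $BRICK/.shard/5985adcb-0f4d-4317-8a26-1652973a2350.6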

Jiffin/Raghavendra Talur,
Can you help?

-Krutika

On Wed, Nov 27, 2019 at 2:11 PM Strahil Nikolov 
wrote:

> Hi Nir,All,
>
> it seems that 4.3.7 RC3 (and even RC4) are not the problem here (attached
> screenshot of oVirt running on gluster v7).
> It seems strange that both of my serious issues with oVirt were related to
> a gluster issue (first the gluster v3 to v5 migration, and now this one).
>
> I have just updated to gluster v7.0 (CentOS 7 repos) and rebooted all
> nodes.
> Now both the Engine and all my VMs are back online - so if you hit issues
> with 6.6, you should give 7.0 a try (and even 7.1 is coming soon) before
> deciding to wipe everything.
>
> @Krutika,
>
> I guess you will ask for the logs, so let's switch to gluster-users for
> this one?
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, 25 November 2019 at 16:45:48 GMT-5, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>
> Hi Krutika,
>
> I have enabled TRACE log level for the volume data_fast,
>
> but the issue is still not clear to me:
> FUSE reports:
>
> [2019-11-25 21:31:53.478130] I [MSGID: 133022]
> [shard.c:3674:shard_delete_shards] 0-data_fast-shard: Deleted shards of
> gfid=6d9ed2e5-d4f2-4749-839b-2f13a68ed472 from backend
> [2019-11-25 21:32:43.564694] W [MSGID: 114031]
> [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-0:
> remote operation failed. Path:
> /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79
> (----) [Permission denied]
> [2019-11-25 21:32:43.565653] W [MSGID: 114031]
> [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-1:
> remote operation failed. Path:
> /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79
> (----) [Permission denied]
> [2019-11-25 21:32:43.565689] W [MSGID: 114031]
> [client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-data_fast-client-2:
> remote operation failed. Path:
> /.shard/b0af2b81-22cf-482e-9b2f-c431b6449dae.79
> (----) [Permission denied]
> [2019-11-25 21:32:43.565770] E [MSGID: 133010]
> [shard.c:2327:shard_common_lookup_shards_cbk] 0-data_fast-shard: Lookup on
> shard 79 failed. Base file gfid = b0af2b81-22cf-482e-9b2f-c431b6449dae
> [Permission denied]
> [2019-11-25 21:32:43.565858] W [fuse-bridge.c:2830:fuse_readv_cbk]
> 0-glusterfs-fuse: 279: READ => -1 gfid=b0af2b81-22cf-482e-9b2f-c431b6449dae
> fd=0x7fbf40005ea8 (Permission denied)
>
>
> While the BRICK logs on ovirt1/gluster1 report:
> [2019-11-25 21:32:43.564177] D [MSGID: 0] [io-threads.c:376:iot_schedule]
> 0-data_fast-io-threads: LOOKUP scheduled as fast priority fop
> [2019-11-25 21:32:43.564194] T [MSGID: 0]
> [defaults.c:2008:default_lookup_resume] 0-stack-trace: stack-address:
> 0x7fc02c00bbf8, winding from data_fast-io-threads to data_fast-upcall
> [2019-11-25 21:32:43.564206] T [MSGID: 0] [upcall.c:790:up_lookup]
> 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-upcall
> to data_fast-leases
> [2019-11-25 21:32:43.564215] T [MSGID: 0] [defaults.c:2766:default_lookup]
> 0-stack-trace: stack-address: 0x7fc02c00bbf8, winding from data_fast-leases
> to 

Re: [Gluster-users] Unable to setup geo replication

2019-12-01 Thread Tan, Jian Chern
Still not working :(

[jiancher@jfsotc22 mnt]$ sudo gluster vol geo-rep jfsotc22-gv0 
pgsotc10.png.intel.com::pgsotc10-gv0 config | grep sync_xattrs
sync_xattrs:false
use_rsync_xattrs:false
[jiancher@jfsotc22 mnt]$ tail -n 30 
/var/log/glusterfs/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.log
[2019-12-02 01:55:06.407781] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Initializing...
[2019-12-02 01:55:06.407979] I [monitor(monitor):157:monitor] Monitor: starting 
gsyncd worker   brick=/data/glusterbrick/gv0
slave_node=pgsotc10.png.intel.com
[2019-12-02 01:55:06.474308] I [gsyncd(agent /data/glusterbrick/gv0):308:main] 
: Using session config file 
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.conf
[2019-12-02 01:55:06.475533] I [changelogagent(agent 
/data/glusterbrick/gv0):72:__init__] ChangelogAgent: Agent listining...
[2019-12-02 01:55:06.477900] I [gsyncd(worker /data/glusterbrick/gv0):308:main] 
: Using session config file
path=/var/lib/glusterd/geo-replication/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/gsyncd.conf
[2019-12-02 01:55:06.487359] I [resource(worker 
/data/glusterbrick/gv0):1366:connect_remote] SSH: Initializing SSH connection 
between master and slave...
[2019-12-02 01:55:10.901315] I [resource(worker 
/data/glusterbrick/gv0):1413:connect_remote] SSH: SSH connection between master 
and slave established.  duration=4.4138
[2019-12-02 01:55:10.901653] I [resource(worker 
/data/glusterbrick/gv0):1085:connect] GLUSTER: Mounting gluster volume 
locally...
[2019-12-02 01:55:11.948481] I [resource(worker 
/data/glusterbrick/gv0):1108:connect] GLUSTER: Mounted gluster volume   
duration=1.0466
[2019-12-02 01:55:11.948833] I [subcmds(worker 
/data/glusterbrick/gv0):80:subcmd_worker] : Worker spawn successful. 
Acknowledging back to monitor
[2019-12-02 01:55:13.969974] I [master(worker 
/data/glusterbrick/gv0):1603:register] _GMaster: Working dir  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/data-glusterbrick-gv0
[2019-12-02 01:55:13.970411] I [resource(worker 
/data/glusterbrick/gv0):1271:service_loop] GLUSTER: Register time   
time=1575251713
[2019-12-02 01:55:13.984259] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):281:set_active] GeorepStatus: Worker Status Change  
status=Active
[2019-12-02 01:55:13.984892] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):253:set_worker_crawl_status] GeorepStatus: Crawl Status 
Change  status=History Crawl
[2019-12-02 01:55:13.985096] I [master(worker 
/data/glusterbrick/gv0):1517:crawl] _GMaster: starting history crawl  
turns=1 stime=None  etime=1575251713  entry_stime=None
[2019-12-02 01:55:13.985174] I [resource(worker 
/data/glusterbrick/gv0):1287:service_loop] GLUSTER: No stime available, using 
xsync crawl
[2019-12-02 01:55:13.992273] I [master(worker 
/data/glusterbrick/gv0):1633:crawl] _GMaster: starting hybrid crawl   
stime=None
[2019-12-02 01:55:13.993591] I [gsyncdstatus(worker 
/data/glusterbrick/gv0):253:set_worker_crawl_status] GeorepStatus: Crawl Status 
Change  status=Hybrid Crawl
[2019-12-02 01:55:14.994798] I [master(worker 
/data/glusterbrick/gv0):1644:crawl] _GMaster: processing xsync changelog  
path=/var/lib/misc/gluster/gsyncd/jfsotc22-gv0_pgsotc10.png.intel.com_pgsotc10-gv0/data-glusterbrick-gv0/xsync/XSYNC-CHANGELOG.1575251713
[2019-12-02 01:55:15.470127] I [master(worker 
/data/glusterbrick/gv0):1954:syncjob] Syncer: Sync Time Taken job=1   
num_files=2 return_code=14  duration=0.0165
[2019-12-02 01:55:15.470312] E [syncdutils(worker 
/data/glusterbrick/gv0):809:errlog] Popen: command returned error cmd=rsync 
-aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs 
--existing --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 
22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-47gkywzh/d91d12d424b6c44691090c3a561b932d.sock 
pgsotc10.png.intel.com:/proc/27159/cwd   error=14
[2019-12-02 01:55:15.477034] I [repce(agent 
/data/glusterbrick/gv0):97:service_loop] RepceServer: terminating on reaching 
EOF.
[2019-12-02 01:55:15.953736] I [monitor(monitor):278:monitor] Monitor: worker 
died in startup phase brick=/data/glusterbrick/gv0
[2019-12-02 01:55:15.955809] I [gsyncdstatus(monitor):248:set_worker_status] 
GeorepStatus: Worker Status Change status=Faulty


From: Kotresh Hiremath Ravishankar 
Sent: Monday, December 2, 2019 2:37 AM
To: Tan, Jian Chern 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to setup geo replication

Hi,

Please try disabling xattr sync and see if geo-rep works fine:

gluster vol geo-rep <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config sync_xattrs false


On Thu, Nov 28, 2019 at 1:29 PM Tan, Jian Chern
<jian.chern@intel.com> wrote:
Alright so it seems to work with some errors and this 

Re: [Gluster-users] Unable to setup geo replication

2019-12-01 Thread Kotresh Hiremath Ravishankar
Hi,

Please try disabling xattr sync and see if geo-rep works fine:

gluster vol geo-rep <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config sync_xattrs false
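
If it helps, the effective value can be verified afterwards with (same
placeholders as above):

gluster vol geo-rep <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config | grep sync_xattrs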


On Thu, Nov 28, 2019 at 1:29 PM Tan, Jian Chern 
wrote:

> Alright, so it seems to work with some errors, and this is the output I'm
> getting.
>
> [root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc10.png.intel.com:/mnt/
>
> rsync: rsync_xal_set: lsetxattr("/mnt/file1","security.selinux") failed:
> Operation not supported (95)
>
>
>
> Number of files: 1 (reg: 1)
>
> Number of created files: 0
>
> Number of deleted files: 0
>
> Number of regular files transferred: 1
>
> Total file size: 9 bytes
>
> Total transferred file size: 9 bytes
>
> Literal data: 9 bytes
>
> Matched data: 0 bytes
>
> File list size: 0
>
> File list generation time: 0.003 seconds
>
> File list transfer time: 0.000 seconds
>
> Total bytes sent: 152
>
> Total bytes received: 141
>
>
>
> sent 152 bytes  received 141 bytes  65.11 bytes/sec
>
> total size is 9  speedup is 0.03
>
> rsync error: some files/attrs were not transferred (see previous errors)
> (code 23) at main.c(1189) [sender=3.1.3]
>
>
>
> The data is synced over to the other machine when I view the file there
>
> [root@pgsotc10 mnt]# cat file1
>
> testdata
>
> [root@pgsotc10 mnt]#
>
>
>
> From: Kotresh Hiremath Ravishankar 
> Sent: Wednesday, November 27, 2019 5:25 PM
> To: Tan, Jian Chern 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Oh, forgot about that. Just set up passwordless ssh to that particular node,
> try with the default ssh key, and remove the
> -i /var/lib/glusterd/geo-replication/secret.pem option from the command line
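>
> For example (a sketch; the elided address is as it appears in your rsync
> command, and ssh-keygen can generate the default key first if none exists):
>
> # ssh-copy-id r...@pgsotc11.png.intel.com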
>
>
>
> On Wed, Nov 27, 2019 at 12:43 PM Tan, Jian Chern 
> wrote:
>
> I’m getting this when I run that command so something’s wrong somewhere I
> guess.
>
>
>
> [root@jfsotc22 mnt]# rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/mnt/
>
> gsyncd sibling not found
>
> disallowed rsync invocation
>
> rsync: connection unexpectedly closed (0 bytes received so far) [sender]
>
> rsync error: error in rsync protocol data stream (code 12) at io.c(226)
> [sender=3.1.3]
>
> [root@jfsotc22 mnt]#
>
>
>
> From: Kotresh Hiremath Ravishankar 
> Sent: Tuesday, November 26, 2019 7:22 PM
> To: Tan, Jian Chern 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Unable to setup geo replication
>
>
>
> OK, then it should work.
> Could you confirm that rsync runs successfully when executed manually, as below?
>
>
>
> 1. On the master node:
>  a) # mkdir /mastermnt
>  b) Mount the master volume on /mastermnt
>  c) # echo "test data" > /mastermnt/file1
>
> 2. On the slave node:
>  a) # mkdir /slavemnt
>  b) Mount the slave volume on /slavemnt
>  c) # touch /slavemnt/file1
>
> 3. On master node
>  a) # cd /mastermnt
>
>  b) # rsync -aR0 --inplace --super --stats --numeric-ids
> --no-implied-dirs --existing --xattrs --acls --ignore-missing-args file1 -e
> 'ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no  -p 22
> -oControlMaster=auto -i /var/lib/glusterd/geo-replication/secret.pem'
> r...@pgsotc11.png.intel.com:/slavemnt/
>
> 4. Check for content sync
>
>  a) cat /slavemnt/file1
>
>
>
> On Tue, Nov 26, 2019 at 1:19 PM Tan, Jian Chern 
> wrote:
>
> Rsync on both the slave and the master is version 3.1.3, protocol
> version 31, so both are up to date as far as I know.
>
> The Gluster version on both machines is glusterfs 5.10
>
> The OS on both machines is Fedora 29 Server Edition
>
>
>
> From: Kotresh Hiremath Ravishankar 
> Sent: Tuesday, November 26, 2019 3:04 PM
> To: Tan, Jian Chern 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Unable to setup geo replication
>
>
>
> Error code 14 is IPC-related: rsync returns it when a pipe/fork fails.
> Please upgrade rsync if you haven't already. Also check that the rsync
> versions on the master and slave are the same.
>
> Which version of gluster are you using?
> What's the host OS?
>
> What's the rsync version?
>
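> A quick way to compare them (run on both the master and the slave node):
>
> # rsync --version | head -1
>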
>
>
> On Tue, Nov 26, 2019 at 11:34 AM Tan, Jian Chern 
> wrote:
>
> I’m new to GlusterFS and trying to setup geo-replication with a master
> volume being mirrored to a slave volume on another machine. However I just
> can’t seem to get it to work after starting the geo replication volume with
> the logs showing it failing rsync with error code 14. I can’t seem to find
> any info about this online so any help would be much appreciated.
>
>
>
> [2019-11-26 05:46:31.24706] I
>