Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-06 Thread Kotresh Hiremath Ravishankar
Could you append something to this file and check whether it gets synced
now?


On Thu, Sep 6, 2018 at 9:08 AM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> Did you get a chance to look into this?
>
>
>
> For the replicated gluster volume, the master is still not getting synced with the slave.
>
>
>
> At Master :
>
> [root@gluster-poc-noida ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
>
> 1.2G    /repvol/rflowTestInt18.08-b001.t.Z
>
> [root@gluster-poc-noida ~]#
>
>
>
> At Slave:
>
> [root@gluster-poc-sj ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
>
> du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or
> directory
>
> [root@gluster-poc-sj ~]#
>
>
>
> The file has not reached the slave.
>
>
>
> /Krishna
>
>
>
> *From:* Krishna Verma
> *Sent:* Monday, September 3, 2018 4:41 PM
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> Hi Kotresh,
>
>
>
> Gluster Master Site Servers : gluster-poc-noida and noi-poc-gluster
>
> Gluster Slave site servers: gluster-poc-sj and gluster-poc-sj2
>
>
>
> Master Client : noi-foreman02
>
> Slave Client: sj-kverma
>
>
>
> Step 1: Create an LVM partition of 10 GB on all 4 Gluster nodes (2 master,
> 2 slave), format it with an ext4 filesystem, and mount it on each server.
>
>
>
> [root@gluster-poc-noida distvol]# df -hT /data/gluster-dist
>
> FilesystemType  Size  Used Avail Use% Mounted
> on
>
> /dev/mapper/centos-gluster--vol--dist ext4  9.8G  847M  8.4G   9%
> /data/gluster-dist
>
> [root@gluster-poc-noida distvol]#
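Step 1 above can be sketched with the following commands (a hedged outline only; the volume-group name `centos` and logical-volume name `gluster-vol-dist` are assumptions inferred from the `df` output, adjust to your layout):

```shell
# Create a 10 GB logical volume for the brick (run on every Gluster node).
# Assumes a volume group named "centos" already exists.
lvcreate -L 10G -n gluster-vol-dist centos

# Format it with ext4 and mount it at the brick parent directory.
mkfs.ext4 /dev/centos/gluster-vol-dist
mkdir -p /data/gluster-dist
mount /dev/centos/gluster-vol-dist /data/gluster-dist

# Persist the mount across reboots.
echo '/dev/centos/gluster-vol-dist /data/gluster-dist ext4 defaults 0 0' >> /etc/fstab
```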
>
>
>
> Step 2:  Created a Trusted storage pool as below:
>
>
>
> At Master:
>
> [root@gluster-poc-noida distvol]# gluster peer status
>
> Number of Peers: 1
>
>
>
> Hostname: noi-poc-gluster
>
> Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
>
> State: Peer in Cluster (Connected)
>
> [root@gluster-poc-noida distvol]#
>
>
>
> At Slave:
>
> [root@gluster-poc-sj ~]# gluster peer status
>
> Number of Peers: 1
>
>
>
> Hostname: gluster-poc-sj2
>
> Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
>
> State: Peer in Cluster (Connected)
>
> [root@gluster-poc-sj ~]#
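The trusted pools shown above are formed with `gluster peer probe`; a minimal sketch using the hostnames from this thread:

```shell
# On the master site, from gluster-poc-noida:
gluster peer probe noi-poc-gluster

# On the slave site, from gluster-poc-sj:
gluster peer probe gluster-poc-sj2

# Verify the pool on each site:
gluster peer status
```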
>
>
>
> Step 3: Created distributed volume as below:
>
>
>
> At Master:  “gluster volume create glusterdist 
> gluster-poc-noida:/data/gluster-dist/distvol
> noi-poc-gluster:/data/gluster-dist/distvol”
>
>
>
> [root@gluster-poc-noida distvol]# gluster volume info glusterdist
>
>
>
> Volume Name: glusterdist
>
> Type: Distribute
>
> Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster-dist/distvol
>
> Brick2: noi-poc-gluster:/data/gluster-dist/distvol
>
> Options Reconfigured:
>
> changelog.changelog: on
>
> geo-replication.ignore-pid-check: on
>
> geo-replication.indexing: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> [root@gluster-poc-noida distvol]#
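Note that after `gluster volume create`, the volume must also be started before the `Status: Started` shown in the info output appears; a sketch for the master site (the create command is taken verbatim from the thread, the start/info steps are the standard follow-up):

```shell
# Create the 2-brick distributed volume (command from the thread) ...
gluster volume create glusterdist \
    gluster-poc-noida:/data/gluster-dist/distvol \
    noi-poc-gluster:/data/gluster-dist/distvol

# ... then start it so clients can mount it, and confirm.
gluster volume start glusterdist
gluster volume info glusterdist
```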
>
>
>
> At Slave “ gluster volume create glusterdist 
> gluster-poc-sj:/data/gluster-dist/distvol
> gluster-poc-sj2:/data/gluster-dist/distvol”
>
>
>
> Volume Name: glusterdist
>
> Type: Distribute
>
> Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-sj:/data/gluster-dist/distvol
>
> Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
>
> Options Reconfigured:
>
> transport.address-family: inet
>
> nfs.disable: on
>
>
>
> Step 4 : Gluster Geo Replication configuration
>
>
>
> On all Gluster nodes: “yum install glusterfs-geo-replication.x86_64”
>
> On master node where I created session:
>
> *ssh-keygen*
>
> *ssh-copy-id root@gluster-poc-sj*
>
> *cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub*
>
> *scp /var/lib/glusterd/geo-replication/secret.pem
> root@gluster-poc-sj:/var/lib/glusterd/geo-replication/*
>
>
>
> *On Slave Node: *
>
>
>
>  *ln -s /usr/libexec/glusterfs/gsyncd /nonexistent/gsyncd*
>
>
>
>  On Master Node:
>
>
>
> gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist
> create push-pem
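Collected from the steps above, the session setup on the master ends with create/start/status; a hedged sketch (`push-pem` completes the command truncated in the archive, and the start/status steps are the standard continuation, not shown in this message):

```shell
# Create the geo-replication session and distribute the pem keys.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    create push-pem

# Start the session and check worker status.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
```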

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-03 Thread Kotresh Hiremath Ravishankar
06] I [subcmds(worker 
> /data/gluster-dist/distvol):70:subcmd_worker]
> : Worker spawn successful. Acknowledging back to monitor
>
> [2018-09-03 07:27:48.401196] I [master(worker 
> /data/gluster-dist/distvol):1593:register]
> _GMaster: Working dir  path=/var/lib/misc/gluster/
> gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
>
> [2018-09-03 07:27:48.401477] I [resource(worker
> /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time
> time=1535959668
>
> [2018-09-03 07:27:49.176095] I [gsyncdstatus(worker
> /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status
> Change  status=Active
>
> [2018-09-03 07:27:49.177079] I [gsyncdstatus(worker
> /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus:
> Crawl Status Change  status=History Crawl
>
> [2018-09-03 07:27:49.177339] I [master(worker 
> /data/gluster-dist/distvol):1507:crawl]
> _GMaster: starting history crawl  turns=1 stime=(1535701378, 0)
> entry_stime=(1535701378, 0) etime=1535959669
>
> [2018-09-03 07:27:50.179210] I [master(worker 
> /data/gluster-dist/distvol):1536:crawl]
> _GMaster: slave's time  stime=(1535701378, 0)
>
> [2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] : Using
> session config file   path=/var/lib/glusterd/geo-replication/glusterdist_
> gluster-poc-sj_glusterdist/gsyncd.conf
>
> [2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] : Using
> session config file   path=/var/lib/glusterd/geo-
> replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
>
> [2018-09-03 07:27:52.510271] I [master(worker 
> /data/gluster-dist/distvol):1944:syncjob]
> Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2
> return_code=0
>
> [2018-09-03 07:27:52.514487] I [master(worker 
> /data/gluster-dist/distvol):1374:process]
> _GMaster: Entry Time Taken  MKD=0   MKN=0   LIN=0   SYM=0REN=1
> RMD=0   CRE=0   duration=0.2745 UNL=0
>
> [2018-09-03 07:27:52.514615] I [master(worker 
> /data/gluster-dist/distvol):1384:process]
> _GMaster: Data/Metadata Time Taken  SETA=1  SETX=0
> meta_duration=0.2691  data_duration=1.7883  DATA=1  XATT=0
>
> [2018-09-03 07:27:52.514844] I [master(worker 
> /data/gluster-dist/distvol):1394:process]
> _GMaster: Batch Completed   
> changelog_end=1535701379  entry_stime=(1535701378,
> 0)  changelog_start=1535701379  stime=(1535701378, 0)
> duration=2.3353 num_changelogs=1  mode=history_changelog
>
> [2018-09-03 07:27:52.515224] I [master(worker 
> /data/gluster-dist/distvol):1552:crawl]
> _GMaster: finished history crawl  endtime=1535959662
> stime=(1535701378, 0)  entry_stime=(1535701378, 0)
>
> [2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] : Using
> session config file   path=/var/lib/glusterd/geo-replication/glusterdist_
> gluster-poc-sj_glusterdist/gsyncd.conf
>
> [2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] : Using
> session config file   path=/var/lib/glusterd/geo-
> replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
>
> [2018-09-03 07:28:03.521949] I [master(worker 
> /data/gluster-dist/distvol):1507:crawl]
> _GMaster: starting history crawl  turns=2 stime=(1535701378, 0)
> entry_stime=(1535701378, 0) etime=1535959683
>
> [2018-09-03 07:28:03.523086] I [master(worker 
> /data/gluster-dist/distvol):1552:crawl]
> _GMaster: finished history crawl  endtime=1535959677
> stime=(1535701378, 0)  entry_stime=(1535701378, 0)
>
> [2018-09-03 07:28:04.62274] I [gsyncdstatus(worker
> /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus:
> Crawl Status Change   status=Changelog Crawl
>
> [root@gluster-poc-noida distvol]#
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Monday, September 3, 2018 12:44 PM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> The log is not complete. If you are re-trying, could you please try it out
> on 4.1.3 and share the logs.
>
> Thanks,
>
> Kotresh HR
>
>
>
> On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma  wrote:
>
> Hi Kotresh,
>
>
>
> Please find the log files attached.
>
>
>
> Request you to please have a look.
>
>
>
> /Krishna
>
>
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Monday, September 3, 2018 10:19 AM
>
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-03 Thread Kotresh Hiremath Ravishankar
Hi Krishna,

The log is not complete. If you are re-trying, could you please try it out
on 4.1.3 and share the logs.

Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> Please find the log files attached.
>
>
>
> Request you to please have a look.
>
>
>
> /Krishna
>
>
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Monday, September 3, 2018 10:19 AM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> Indexing is the feature used by Hybrid crawl which only makes crawl
> faster. It has nothing to do with missing data sync.
>
> Could you please share the complete log file of the session where the
> issue is encountered ?
>
> Thanks,
>
> Kotresh HR
>
>
>
> On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma  wrote:
>
> Hi Kotresh/Support,
>
>
>
> Request your help to get this fixed. My slave is not getting synced with the
> master. When I restart the session after turning indexing off, only then does
> the file show up at the slave, but it is also blank, with zero size.
>
>
>
> At master: file size is 5.8 GB.
>
>
>
> [root@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_
> 17020_GPLV3.tar.gz
>
> 5.8G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root@gluster-poc-noida distvol]#
>
>
>
> But at the slave, after doing “indexing off”, restarting the session, and
> then waiting for 2 days, it shows only 4.9 GB copied.
>
>
>
> [root@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_
> 17020_GPLV3.tar.gz
>
> 4.9G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root@gluster-poc-sj distvol]#
>
>
>
> Similarly, I tested with a small file of only 1.2 GB; it is still showing
> “0” size at the slave after days of waiting.
>
>
>
> At Master:
>
>
>
> [root@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 1.2G    rflowTestInt18.08-b001.t.Z
>
> [root@gluster-poc-noida distvol]#
>
>
>
> At Slave:
>
>
>
> [root@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 0   rflowTestInt18.08-b001.t.Z
>
> [root@gluster-poc-sj distvol]#
>
>
>
> Below is my distributed volume info :
>
>
>
> [root@gluster-poc-noida distvol]# gluster volume info glusterdist
>
>
>
> Volume Name: glusterdist
>
> Type: Distribute
>
> Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster-dist/distvol
>
> Brick2: noi-poc-gluster:/data/gluster-dist/distvol
>
> Options Reconfigured:
>
> changelog.changelog: on
>
> geo-replication.ignore-pid-check: on
>
> geo-replication.indexing: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> [root@gluster-poc-noida distvol]#
>
>
>
> Please help to fix this; I believe this is not normal behavior for gluster rsync.
>
>
>
> /Krishna
>
> *From:* Krishna Verma
> *Sent:* Friday, August 31, 2018 12:42 PM
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> Hi Kotresh,
>
>
>
> I have tested the geo replication over distributed volumes with 2*2
> gluster setup.
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist status
>
>
>
> MASTER NODE  MASTER VOL MASTER BRICK  SLAVE
> USERSLAVE  SLAVE NODE STATUSCRAWL
> STATUS   LAST_SYNCED
>
> 
> 
> -
>
> gluster-poc-noidaglusterdist/data/gluster-dist/distvol
> root  gluster-poc-sj::glusterdistgluster-poc-sj Active
> Changelog Crawl2018-08-31 10:28:19
>
> noi-poc-gluster  glusterdist/data/gluster-dist/distvol
> root  gluster-poc-sj::glusterdistgluster-poc-sj2Active
> History Crawl  N/A
>
> [root@gluster-poc-noida ~]#
>
>
>
> Now at the client I copied an 848MB file from the local disk to the master
> mounted volume and it took only 1 minute and 15 seconds. It's great.

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-02 Thread Kotresh Hiremath Ravishankar
Hi Krishna,

Indexing is the feature used by Hybrid crawl which only makes crawl faster.
It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the issue
is encountered ?

Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma  wrote:

> Hi Kotresh/Support,
>
>
>
> Request your help to get this fixed. My slave is not getting synced with the
> master. When I restart the session after turning indexing off, only then does
> the file show up at the slave, but it is also blank, with zero size.
>
>
>
> At master: file size is 5.8 GB.
>
>
>
> [root@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_
> 17020_GPLV3.tar.gz
>
> 5.8G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root@gluster-poc-noida distvol]#
>
>
>
> But at the slave, after doing “indexing off”, restarting the session, and
> then waiting for 2 days, it shows only 4.9 GB copied.
>
>
>
> [root@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_
> 17020_GPLV3.tar.gz
>
> 4.9G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
>
> [root@gluster-poc-sj distvol]#
>
>
>
> Similarly, I tested with a small file of only 1.2 GB; it is still showing
> “0” size at the slave after days of waiting.
>
>
>
> At Master:
>
>
>
> [root@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 1.2G    rflowTestInt18.08-b001.t.Z
>
> [root@gluster-poc-noida distvol]#
>
>
>
> At Slave:
>
>
>
> [root@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
>
> 0   rflowTestInt18.08-b001.t.Z
>
> [root@gluster-poc-sj distvol]#
>
>
>
> Below is my distributed volume info :
>
>
>
> [root@gluster-poc-noida distvol]# gluster volume info glusterdist
>
>
>
> Volume Name: glusterdist
>
> Type: Distribute
>
> Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster-dist/distvol
>
> Brick2: noi-poc-gluster:/data/gluster-dist/distvol
>
> Options Reconfigured:
>
> changelog.changelog: on
>
> geo-replication.ignore-pid-check: on
>
> geo-replication.indexing: on
>
> transport.address-family: inet
>
> nfs.disable: on
>
> [root@gluster-poc-noida distvol]#
>
>
>
> Please help to fix this; I believe this is not normal behavior for gluster rsync.
>
>
>
> /Krishna
>
> *From:* Krishna Verma
> *Sent:* Friday, August 31, 2018 12:42 PM
> *To:* 'Kotresh Hiremath Ravishankar' 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> Hi Kotresh,
>
>
>
> I have tested the geo replication over distributed volumes with 2*2
> gluster setup.
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist
> gluster-poc-sj::glusterdist status
>
>
>
> MASTER NODE  MASTER VOL MASTER BRICK  SLAVE
> USERSLAVE  SLAVE NODE STATUSCRAWL
> STATUS   LAST_SYNCED
>
> 
> 
> -
>
> gluster-poc-noidaglusterdist/data/gluster-dist/distvol
> root  gluster-poc-sj::glusterdistgluster-poc-sj Active
> Changelog Crawl2018-08-31 10:28:19
>
> noi-poc-gluster  glusterdist/data/gluster-dist/distvol
> root  gluster-poc-sj::glusterdistgluster-poc-sj2Active
> History Crawl  N/A
>
> [root@gluster-poc-noida ~]#
>
>
>
> Now at the client I copied an 848MB file from the local disk to the master
> mounted volume and it took only 1 minute and 15 seconds. It's great.
>
>
>
> But even after waiting for 2 hrs I was unable to see that file at the slave
> site. Then I again erased the indexing by doing “gluster volume set
> glusterdist indexing off” and restarted the session. Magically, I received
> the file at the slave instantly after doing this.
>
>
>
> Why do I need to do “indexing off” every time for data to appear at the
> slave site? Is there any fix or workaround for it?
>
>
>
> /Krishna
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Friday, August 31, 2018 10:10 AM
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-09-02 Thread Krishna Verma
Hi Kotresh/Support,

Request your help to get this fixed. My slave is not getting synced with the master.
When I restart the session after turning indexing off, only then does the file show
up at the slave, but it is also blank, with zero size.

At master: file size is 5.8 GB.

[root@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root@gluster-poc-noida distvol]#

But at the slave, after doing “indexing off”, restarting the session, and then
waiting for 2 days, it shows only 4.9 GB copied.

[root@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G    17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[root@gluster-poc-sj distvol]#

Similarly, I tested with a small file of only 1.2 GB; it is still showing
“0” size at the slave after days of waiting.

At Master:

[root@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G    rflowTestInt18.08-b001.t.Z
[root@gluster-poc-noida distvol]#

At Slave:

[root@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0   rflowTestInt18.08-b001.t.Z
[root@gluster-poc-sj distvol]#

Below is my distributed volume info :

[root@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[root@gluster-poc-noida distvol]#

Please help to fix this; I believe this is not normal behavior for gluster rsync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' 
Cc: Sunny Kumar ; Gluster Users 
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested the geo replication over distributed volumes with 2*2 gluster 
setup.

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist 
gluster-poc-sj::glusterdist status

MASTER NODE  MASTER VOL MASTER BRICK  SLAVE USER
SLAVE  SLAVE NODE STATUSCRAWL STATUS   
LAST_SYNCED
-
gluster-poc-noidaglusterdist/data/gluster-dist/distvolroot  
gluster-poc-sj::glusterdistgluster-poc-sj ActiveChangelog Crawl
2018-08-31 10:28:19
noi-poc-gluster  glusterdist/data/gluster-dist/distvolroot  
gluster-poc-sj::glusterdistgluster-poc-sj2ActiveHistory Crawl  
N/A
[root@gluster-poc-noida ~]#

Now at the client I copied an 848MB file from the local disk to the master mounted
volume and it took only 1 minute and 15 seconds. It's great.

But even after waiting for 2 hrs I was unable to see that file at the slave site.
Then I again erased the indexing by doing “gluster volume set glusterdist
indexing off” and restarted the session. Magically, I received the file at the
slave instantly after doing this.

Why do I need to do “indexing off” every time for data to appear at the slave
site? Is there any fix or workaround for it?
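The workaround described above amounts to the following command sequence (a sketch only; the full option name `geo-replication.indexing` is assumed from the volume info output earlier in the thread, and per Kotresh's replies indexing is unrelated to the missing data sync, so this should not be needed in a healthy session):

```shell
# Reset the changelog index on the master volume ...
gluster volume set glusterdist geo-replication.indexing off

# ... then restart the geo-replication session.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
```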

/Krishna


From: Kotresh Hiremath Ravishankar <khire...@redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <kve...@cadence.com>
Cc: Sunny Kumar <sunku...@redhat.com>; Gluster Users <gluster-users@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <kve...@cadence.com> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master. geo-rep was
not stopped while the data was being copied to master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while data was being copied
to the master and geo-replication was active. After I restarted the PuTTY session,
my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind
the current time.

I restarted geo-rep and also deleted and re-created the session, but its
“LAST_SYNCED” time is the same.

Unless, geo-rep is Faulty, it would be processing/syncing. You should check 
logs for any errors.
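Checking the logs, as suggested here, can be sketched as follows (the session-directory name is an assumption patterned on the master/slave volume names in this thread; `/var/log/glusterfs/geo-replication/` is the usual default location):

```shell
# Tail the geo-rep worker log on the master node
# (session dir pattern: <mastervol>_<slavehost>_<slavevol>).
tail -f /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log

# Scan for errors or Faulty transitions.
grep -iE 'error|faulty|exception' \
    /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
```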


Please help in this.

Regarding “It's better if the gluster volume has a higher distribute count like
3*3 or 4*3”: are you referring to creating a distributed volume with 3 master
nodes and 3 slave nodes?

Yes,  that's correct. Please do the test with this. I recommend you to run the 
actual workload for which you are planning to use gluster instead of copying 
1GB file and testing.



/krishna

From: Kotresh 

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-31 Thread Krishna Verma
Hi Kotresh,

I have tested the geo replication over distributed volumes with 2*2 gluster 
setup.

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterdist 
gluster-poc-sj::glusterdist status

MASTER NODE  MASTER VOL MASTER BRICK  SLAVE USER
SLAVE  SLAVE NODE STATUSCRAWL STATUS   
LAST_SYNCED
-
gluster-poc-noidaglusterdist/data/gluster-dist/distvolroot  
gluster-poc-sj::glusterdistgluster-poc-sj ActiveChangelog Crawl
2018-08-31 10:28:19
noi-poc-gluster  glusterdist/data/gluster-dist/distvolroot  
gluster-poc-sj::glusterdistgluster-poc-sj2ActiveHistory Crawl  
N/A
[root@gluster-poc-noida ~]#

Now at the client I copied an 848MB file from the local disk to the master mounted
volume and it took only 1 minute and 15 seconds. It's great.

But even after waiting for 2 hrs I was unable to see that file at the slave site.
Then I again erased the indexing by doing “gluster volume set glusterdist
indexing off” and restarted the session. Magically, I received the file at the
slave instantly after doing this.

Why do I need to do “indexing off” every time for data to appear at the slave
site? Is there any fix or workaround for it?

/Krishna


From: Kotresh Hiremath Ravishankar 
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <kve...@cadence.com> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master. geo-rep was
not stopped while the data was being copied to master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while data was being copied
to the master and geo-replication was active. After I restarted the PuTTY session,
my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind
the current time.

I restarted geo-rep and also deleted and re-created the session, but its
“LAST_SYNCED” time is the same.

Unless, geo-rep is Faulty, it would be processing/syncing. You should check 
logs for any errors.


Please help in this.

Regarding “It's better if the gluster volume has a higher distribute count like
3*3 or 4*3”: are you referring to creating a distributed volume with 3 master
nodes and 3 slave nodes?

Yes,  that's correct. Please do the test with this. I recommend you to run the 
actual workload for which you are planning to use gluster instead of copying 
1GB file and testing.



/krishna

From: Kotresh Hiremath Ravishankar <khire...@redhat.com>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <kve...@cadence.com>
Cc: Sunny Kumar <sunku...@redhat.com>; Gluster Users <gluster-users@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <kve...@cadence.com> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master
node is “Active” and the other is “Passive”. Can I set up both masters as
“Active”?

Nope, since it's replica, it's redundant to sync same files from two nodes. 
Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster volume,
which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is
there any way to reduce the time rsync takes to transfer the data?

How did you measure this time? Does this include the time take for you to write 
1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, 
Step2 above equals to time for step1 + network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's 
one way.
2. Or stop geo-rep until the 1GB file is written to master and then start 
geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like  3*3 or 4*3
It will help in two ways.
   1. The files gets distributed on master to multiple bricks
   2. So above will help geo-rep as files on multiple bricks are synced in 
parallel (multiple Actives)
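Point 1 above, tuning the rsync options geo-rep uses internally, is done through the session config; a sketch (the `--compress` flag is an illustrative assumption for a WAN link, not a recommendation from this thread):

```shell
# Show the rsync options geo-rep currently uses for this session ...
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    config rsync-options

# ... and set custom options, e.g. enable compression over the WAN link.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    config rsync-options "--compress"
```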

NOTE: Gluster master server and one client is in Noida, India Location.
 Gluster Slave server and one client is in USA.

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 30, 2018 at 4:55 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> 1. Time to write 1GB to master:   27 minutes and 29 seconds
>
> 2. Time for geo-rep to transfer 1GB to slave.   8 minutes
>
This is hard to believe, considering there is no
distribution and there is only one brick participating in syncing.
Could you retest and confirm.

>
>
> /Krishna
>
>
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Thursday, August 30, 2018 3:20 PM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
>
>
>
>
> On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma  wrote:
>
> Hi Kotresh,
>
>
>
> After fixing the library link on node "noi-poc-gluster", the status of one
> master node is “Active” and the other is “Passive”. Can I set up both
> masters as “Active”?
>
>
>
> Nope, since it's replica, it's redundant to sync same files from two
> nodes. Both replicas can't be Active.
>
>
>
>
>
> Also, when I copy a 1GB file from a gluster client to the master gluster
> volume, which is replicated to the slave volume, it took 35 minutes and
> 49 seconds. Is there any way to reduce the time rsync takes to transfer the data?
>
>
>
> How did you measure this time? Does this include the time take for you to
> write 1GB file to master?
>
> There are two aspects to consider while measuring this.
>
>
>
> 1. Time to write 1GB to master
>
> 2. Time for geo-rep to transfer 1GB to slave.
>
>
>
> In your case, since the setup is 1*2 and only one geo-rep worker is
> Active, Step2 above equals to time for step1 + network transfer time.
>
>
>
> You can measure time in two scenarios
>
> 1. If geo-rep is started while the data is still being written to master.
> It's one way.
>
> 2. Or stop geo-rep until the 1GB file is written to master and then start
> geo-rep to get actual geo-rep time.
>
>
>
> To improve replicating speed,
>
> 1. You can play around with rsync options depending on the kind of I/O
>
> and configure the same for geo-rep as it also uses rsync internally.
>
> 2. It's better if gluster volume has more distribute count like  3*3 or 4*3
>
> It will help in two ways.
>
>1. The files gets distributed on master to multiple bricks
>
>2. So above will help geo-rep as files on multiple bricks are
> synced in parallel (multiple Actives)
>
>
>
> NOTE: Gluster master server and one client is in Noida, India Location.
>
>  Gluster Slave server and one client is in USA.
>
>
>
> Our approach is to transfer data from Noida gluster client will reach to
> the USA gluster client in a minimum time. Please suggest the best approach
> to achieve it.
>
>
>
> [root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img
> /glusterfs/ ; date
>
> Thu Aug 30 12:26:26 IST 2018
>
> sending incremental file list
>
> gentoo_root.img
>
>   1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)
>
>
>
> Is this I/O time to write to master volume?
>
>
>
> sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
>
> total size is 1.07G  speedup is 1.00
>
> Thu Aug 30 13:02:15 IST 2018
>
> [root@noi-dcops ~]#
>
>
>
>
>
>
>
> [root@gluster-poc-noida gluster]#  gluster volume geo-replication status
>
>
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVE  SLAVE NODESTATUS CRAWL
> STATUS   LAST_SYNCED
>
> 
> 
> ---
>
> gluster-poc-noidaglusterep /data/gluster/gv0root
> ssh://gluster-poc-sj::glusterepgluster-poc-sjActive     Changelog
> Crawl2018-08-30 13:42:18
>
> noi-poc-gluster  glusterep /data/gluster/gv0root
> ssh://gluster-poc-sj::glusterepgluster-poc-sjPassive
> N/AN/A
>
> [root@gluster-poc-noida gluster]#
>
>
>
> Thanks in advance for your all time support.
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Thursday, August 30, 2018 10:51 AM
>
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> Yes, this includes the time taken to write the 1GB file to master. geo-rep
> was not stopped while the data was being copied to master.
>

This way, you can't really measure how much time geo-rep took.


>
> But now I am in trouble. My PuTTY session timed out while data was being
> copied to the master and geo-replication was active. After I restarted the
> PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED
> time is 1 hr behind the current time.
>
>
>
> I restarted geo-rep and also deleted and re-created the session, but its
> “LAST_SYNCED” time is the same.
>

Unless, geo-rep is Faulty, it would be processing/syncing. You should check
logs for any errors.


>
> Please help in this.
>
>
>
> Regarding “It's better if the gluster volume has a higher distribute count
> like 3*3 or 4*3”: are you referring to creating a distributed volume with
> 3 master nodes and 3 slave nodes?
>

Yes,  that's correct. Please do the test with this. I recommend you to run
the actual workload for which you are planning to use gluster instead of
copying 1GB file and testing.


>
>
>
> /krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Thursday, August 30, 2018 3:20 PM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
>
>
>
>
> On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma  wrote:
>
> Hi Kotresh,
>
>
>
> After fixing the library link on node "noi-poc-gluster", the status of one
> master node is “Active” and the other is “Passive”. Can I set up both
> masters as “Active”?
>
>
>
> Nope, since it's replica, it's redundant to sync same files from two
> nodes. Both replicas can't be Active.
>
>
>
>
>
> Also, when I copy a 1GB file from a gluster client to the master gluster
> volume (which is geo-replicated to the slave volume), it took 35 minutes
> and 49 seconds. Is there any way to reduce the time rsync takes?
>
>
>
> How did you measure this time? Does this include the time taken for you to
> write the 1GB file to master?
>
> There are two aspects to consider while measuring this.
>
>
>
> 1. Time to write 1GB to master
>
> 2. Time for geo-rep to transfer 1GB to slave.
>
>
>
> In your case, since the setup is 1*2 and only one geo-rep worker is
> Active, step 2 above equals the time for step 1 plus the network transfer
> time.
>
>
>
> You can measure time in two scenarios
>
> 1. If geo-rep is started while the data is still being written to master.
> It's one way.
>
> 2. Or stop geo-rep until the 1GB file is written to master and then start
> geo-rep to get actual geo-rep time.
>
>
>
> To improve replicating speed,
>
> 1. You can play around with rsync options depending on the kind of I/O
>
> and configure the same for geo-rep as it also uses rsync internally.
>
> 2. It's better if gluster volume has more distribute count like  3*3 or 4*3
>
> It will help in two ways.
>
>1. The files get distributed on master to multiple bricks
>
>2. The above helps geo-rep, as files on multiple bricks are
> synced in parallel (multiple Actives)
>
>
>
> NOTE: Gluster master server and one client is in Noida, India Location.
>
>  Gluster Slave server and one client is in USA.
>
>
>
> Our goal is for data written from the Noida gluster client to reach the
> USA gluster client in minimum time. Please suggest the best approach to
> achieve it.
>
>
>
> [root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img
> /glusterfs/ ; date
>
> Thu Aug 30 12:26:26 IST 2018
>
> sending incremental file list
>
> gentoo_root.img
>
>   1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)
>
>
>
> Is this I/O time to write to master volume?
>
>
>
> sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
>
> total size is 1.07G  speedup is 1.00
>
> Thu Aug 30 13:02:15 IST 2018
>
> [root@noi-dcops ~]#
>
>
>
>
>
>
>
> [root@gluster-poc-noida gluster]#  gluster volume geo-replication status
>
>
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVE  SLAVE NODESTATUS CRAWL
> STATUS   LAST_SYNCED
>
> 
> 
> -

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Krishna Verma
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one 
master node is “Active” and the other is “Passive”. Can I set up both masters 
as “Active”?

Also, when I copy a 1GB file from a gluster client to the master gluster 
volume (which is geo-replicated to the slave volume), it took 35 minutes and 49 
seconds. Is there any way to reduce the time rsync takes?

NOTE: Gluster master server and one client is in Noida, India Location.
 Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the 
USA gluster client in minimum time. Please suggest the best approach to 
achieve it.

[root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img 
/glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
  1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)

sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
total size is 1.07G  speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[root@noi-dcops ~]#


[root@gluster-poc-noida gluster]#  gluster volume geo-replication status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
[root@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar 
Sent: Thursday, August 30, 2018 10:51 AM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active”, 
at least for one master node. But it is still in a Faulty state for the 2nd 
master server.

Below is the detail.

[root@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A


[root@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster-poc-noida:/data/gluster/gv0   49152 0  Y   22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0  Y   19471
Self-heal Daemon on localhost   N/A   N/AY   32087
Self-heal Daemon on noi-poc-gluster N/A   N/AY   6272

Task Status of Volume glusterep
--
There are no active volume tasks



[root@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[root@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma mailto:kve...@cadence.com>>
Cc: Sunny Kumar mailto:sunku...@redhat.com>>; Gluster 
Users mailto:gluster-users@gluster.org>

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Krishna Verma
Hi Kotresh,

1. Time to write 1GB to master:   27 minutes and 29 seconds
2. Time for geo-rep to transfer 1GB to slave.   8 minutes

/Krishna


From: Kotresh Hiremath Ravishankar 
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one 
master node is “Active” and the other is “Passive”. Can I set up both masters 
as “Active”?

Nope, since it's replica, it's redundant to sync same files from two nodes. 
Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster 
volume (which is geo-replicated to the slave volume), it took 35 minutes and 49 
seconds. Is there any way to reduce the time rsync takes?

How did you measure this time? Does this include the time taken for you to write 
the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, 
step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's 
one way.
2. Or stop geo-rep until the 1GB file is written to master and then start 
geo-rep to get actual geo-rep time.
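Scenario 2 can be timed precisely with geo-rep's checkpoint feature. A sketch, assuming the volume/slave names from this thread and a client mount at /mnt/glusterep (the mount path is illustrative):

```shell
# Stop geo-rep so the write and the sync are timed separately.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop

time cp /tmp/gentoo_root.img /mnt/glusterep/   # 1. time to write to master

gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
# Set a checkpoint at "now"; status reports the checkpoint as completed once
# the slave has everything written up to this moment -- that is the geo-rep
# transfer time.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config checkpoint now
watch 'gluster volume geo-replication glusterep gluster-poc-sj::glusterep status detail'
```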

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like  3*3 or 4*3
It will help in two ways.
   1. The files get distributed on master to multiple bricks
   2. The above helps geo-rep, as files on multiple bricks are synced in 
parallel (multiple Actives)
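For point 1 above, geo-rep exposes the rsync command line through its config interface; a hedged sketch (the flag shown is an example to tune for your I/O pattern and WAN link, not a recommendation from this thread):

```shell
# List the current geo-rep config values, including the rsync options:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config
# Pass extra flags to the rsync that geo-rep invokes internally, e.g.
# compression for a slow WAN link:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options "--compress"
```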

NOTE: Gluster master server and one client is in Noida, India Location.
 Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the 
USA gluster client in minimum time. Please suggest the best approach to 
achieve it.

[root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img 
/glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
  1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
total size is 1.07G  speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[root@noi-dcops ~]#



[root@gluster-poc-noida gluster]#  gluster volume geo-replication status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
[root@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma mailto:kve...@cadence.com>>
Cc: Sunny Kumar mailto:sunku...@redhat.com>>; Gluster 
Users mailto:gluster-users@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active”, 
at least for one master node. But it is still in a Faulty state for the 2nd 
master server.

Below is the detail.

[root@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE 
   SLAVE NODESTATUSCRAWL STATUS   LAST_SYNCED

gluster-poc-noidaglusterep /data/gluster/gv0root  
gluster-poc-sj::glusterep 

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Krishna Verma
Hi Kotresh,

Just to update on the sync issue:

I erased the indexing (by doing "gluster volume set  indexing off" after
stopping the geo-rep session) and then started the session. Now it is syncing.
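The recovery sequence described above, as a sketch (the volume and slave names are taken from this thread; note that turning indexing off forces a fresh crawl, which may resync data already present on the slave):

```shell
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
gluster volume set glusterep geo-replication.indexing off
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
# LAST_SYNCED should begin advancing again:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
```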

For your query:

Yes, this includes the time taken to write the 1GB file to master; geo-rep was 
not stopped while the data was being copied to master.

…. "It's better if gluster volume has more distribute count like 3*3 or 4*3" :- 
Are you referring to creating a distributed volume with 3 master nodes and 3 
slave nodes?


/Krishna


From: Krishna Verma
Sent: Thursday, August 30, 2018 3:52 PM
To: 'Kotresh Hiremath Ravishankar' 
Cc: Sunny Kumar ; Gluster Users 
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master; geo-rep was 
not stopped while the data was being copied to master.

But now I am in trouble. My PuTTY session timed out while data was being copied 
to the master and geo-replication was active. After I restarted the PuTTY 
session, my master data is not syncing with the slave; its LAST_SYNCED time is 
1 hour behind the current time.

I restarted geo-rep, and also deleted and re-created the session, but the 
“LAST_SYNCED” time stays the same.

Please help in this.

…. "It's better if gluster volume has more distribute count like 3*3 or 4*3" :- 
Are you referring to creating a distributed volume with 3 master nodes and 3 
slave nodes?


/krishna

From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma mailto:kve...@cadence.com>>
Cc: Sunny Kumar mailto:sunku...@redhat.com>>; Gluster 
Users mailto:gluster-users@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one 
master node is “Active” and the other is “Passive”. Can I set up both masters 
as “Active”?

Nope, since it's replica, it's redundant to sync same files from two nodes. 
Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster 
volume (which is geo-replicated to the slave volume), it took 35 minutes and 49 
seconds. Is there any way to reduce the time rsync takes?

How did you measure this time? Does this include the time taken for you to write 
the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, 
step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's 
one way.
2. Or stop geo-rep until the 1GB file is written to master and then start 
geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like  3*3 or 4*3
It will help in two ways.
   1. The files get distributed on master to multiple bricks
   2. The above helps geo-rep, as files on multiple bricks are synced in 
parallel (multiple Actives)
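For reference, a 3*3 (distributed-replicate) master volume as suggested above could be created like this; the hostnames and brick paths are illustrative, not from this thread. Bricks are listed replica-set by replica-set:

```shell
# 9 bricks -> 3 distribute subvolumes x replica 3. Geo-rep then runs one
# Active worker per distribute subvolume, so files sync in parallel.
gluster volume create glusterep replica 3 \
  server1:/data/gluster/b1 server2:/data/gluster/b1 server3:/data/gluster/b1 \
  server1:/data/gluster/b2 server2:/data/gluster/b2 server3:/data/gluster/b2 \
  server1:/data/gluster/b3 server2:/data/gluster/b3 server3:/data/gluster/b3
gluster volume start glusterep
```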

NOTE: Gluster master server and one client is in Noida, India Location.
 Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the 
USA gluster client in minimum time. Please suggest the best approach to 
achieve it.

[root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img 
/glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
  1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
total size is 1.07G  speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[root@noi-dcops ~]#



[root@gluster-poc-noida gluster]#  gluster volume geo-replication status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
[root@gluster-poc-noida glus

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Krishna Verma
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master; geo-rep was 
not stopped while the data was being copied to master.

But now I am in trouble. My PuTTY session timed out while data was being copied 
to the master and geo-replication was active. After I restarted the PuTTY 
session, my master data is not syncing with the slave; its LAST_SYNCED time is 
1 hour behind the current time.

I restarted geo-rep, and also deleted and re-created the session, but the 
“LAST_SYNCED” time stays the same.

Please help in this.

…. "It's better if gluster volume has more distribute count like 3*3 or 4*3" :- 
Are you referring to creating a distributed volume with 3 master nodes and 3 
slave nodes?


/krishna

From: Kotresh Hiremath Ravishankar 
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one 
master node is “Active” and the other is “Passive”. Can I set up both masters 
as “Active”?

Nope, since it's replica, it's redundant to sync same files from two nodes. 
Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster 
volume (which is geo-replicated to the slave volume), it took 35 minutes and 49 
seconds. Is there any way to reduce the time rsync takes?

How did you measure this time? Does this include the time taken for you to write 
the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, 
step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's 
one way.
2. Or stop geo-rep until the 1GB file is written to master and then start 
geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like  3*3 or 4*3
It will help in two ways.
   1. The files get distributed on master to multiple bricks
   2. The above helps geo-rep, as files on multiple bricks are synced in 
parallel (multiple Actives)

NOTE: Gluster master server and one client is in Noida, India Location.
 Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the 
USA gluster client in minimum time. Please suggest the best approach to 
achieve it.

[root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img 
/glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
  1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
total size is 1.07G  speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[root@noi-dcops ~]#



[root@gluster-poc-noida gluster]#  gluster volume geo-replication status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
[root@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma mailto:kve...@cadence.com>>
Cc: Sunny Kumar mailto:sunku...@redhat.com>>; Gluster 
Users mailto:gluster-users@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active”, 
at least for one master node. But it is still in a Faulty state for the 2nd 
master server.

Below i

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-30 Thread Kotresh Hiremath Ravishankar
On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> After fixing the library link on node "noi-poc-gluster", the status of one
> master node is “Active” and the other is “Passive”. Can I set up both
> masters as “Active”?
>

Nope, since it's replica, it's redundant to sync same files from two nodes.
Both replicas can't be Active.


>
> Also, when I copy a 1GB file from a gluster client to the master gluster
> volume (which is geo-replicated to the slave volume), it took 35 minutes
> and 49 seconds. Is there any way to reduce the time rsync takes?
>

How did you measure this time? Does this include the time taken for you to
write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active,
step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master.
It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like  3*3 or 4*3
It will help in two ways.
   1. The files get distributed on master to multiple bricks
   2. The above helps geo-rep, as files on multiple bricks are synced
in parallel (multiple Actives)

>
>
> NOTE: Gluster master server and one client is in Noida, India Location.
>
>  Gluster Slave server and one client is in USA.
>
>
>
> Our goal is for data written from the Noida gluster client to reach the
> USA gluster client in minimum time. Please suggest the best approach to
> achieve it.
>
>
>
> [root@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img
> /glusterfs/ ; date
>
> Thu Aug 30 12:26:26 IST 2018
>
> sending incremental file list
>
> gentoo_root.img
>
>   1.07G 100%  490.70kB/s    0:35:36 (xfr#1, to-chk=0/1)
>

Is this I/O time to write to master volume?

>
>
> sent 1.07G bytes  received 35 bytes  499.65K bytes/sec
>
> total size is 1.07G  speedup is 1.00
>
> Thu Aug 30 13:02:15 IST 2018
>
> [root@noi-dcops ~]#
>


>
>
>
>
> [root@gluster-poc-noida gluster]#  gluster volume geo-replication status
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
>
> [root@gluster-poc-noida gluster]#
>
>
>
> Thanks in advance for your all time support.
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Thursday, August 30, 2018 10:51 AM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Did you fix the library link on node "noi-poc-gluster " as well?
>
> If not, please fix it. Please share the geo-rep log from this node if it's
>
> a different issue.
>
> -Kotresh HR
>
>
>
> On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma 
> wrote:
>
> Hi Kotresh,
>
>
>
> Thank you so much for your input. Geo-replication is now showing “Active”,
> at least for one master node. But it is still in a Faulty state for the 2nd
> master server.
>
>
>
> Below is the detail.
>
>
>
> [root@gluster-poc-noida glusterfs]# gluster volume geo-replication
> glusterep gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVESLAVE NODESTATUSCRAWL STATUS
> LAST_SYNCED
>
> 
> 
> 
>
> glust

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-29 Thread Kotresh Hiremath Ravishankar
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.

-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> Thank you so much for your input. Geo-replication is now showing “Active”,
> at least for one master node. But it is still in a Faulty state for the 2nd
> master server.
>
>
>
> Below is the detail.
>
>
>
> [root@gluster-poc-noida glusterfs]# gluster volume geo-replication
> glusterep gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
>
>
>
>
>
> [root@gluster-poc-noida glusterfs]# gluster volume status
>
> Status of volume: glusterep
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick gluster-poc-noida:/data/gluster/gv0   49152 0  Y
> 22463
>
> Brick noi-poc-gluster:/data/gluster/gv0 49152 0  Y
> 19471
>
> Self-heal Daemon on localhost   N/A   N/AY
> 32087
>
> Self-heal Daemon on noi-poc-gluster N/A   N/AY
> 6272
>
>
>
> Task Status of Volume glusterep
>
> 
> --
>
> There are no active volume tasks
>
>
>
>
>
>
>
> [root@gluster-poc-noida glusterfs]# gluster volume info
>
>
>
> Volume Name: glusterep
>
> Type: Replicate
>
> Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gluster-poc-noida:/data/gluster/gv0
>
> Brick2: noi-poc-gluster:/data/gluster/gv0
>
> Options Reconfigured:
>
> transport.address-family: inet
>
> nfs.disable: on
>
> performance.client-io-threads: off
>
> geo-replication.indexing: on
>
> geo-replication.ignore-pid-check: on
>
> changelog.changelog: on
>
> [root@gluster-poc-noida glusterfs]#
>
>
>
> Could you please help me with that as well?
>
>
>
> It would be really a great help from your side.
>
>
>
> /Krishna
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Wednesday, August 29, 2018 10:47 AM
>
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Answer inline
>
>
>
> On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma  wrote:
>
> Hi Kotresh,
>
>
>
> I created the links before. Below is the detail.
>
>
>
> [root@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
>
> lrwxrwxrwx   1 root root  30 Aug 28 14:59 libgfchangelog.so ->
> /usr/lib64/libgfchangelog.so.1
>
>
>
> The link created is pointing to wrong library. Please fix this
>
>
>
> #cd /usr/lib64
>
> #rm libgfchangelog.so
>
> #ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
>
>
>
> lrwxrwxrwx   1 root root  23 Aug 23 23:35 libgfchangelog.so.0 ->
> libgfchangelog.so.0.0.1
>
> -rwxr-xr-x   1 root root       63384 Jul 24 19:11 libgfchangelog.so.0.0.1
>
> [root@gluster-poc-noida ~]# locate libgfchangelog.so
>
> /usr/lib64/libgfchangelog.so.0
>
> /usr/lib64/libgfchangelog.so.0.0.1
>
> [root@gluster-poc-noida ~]#
>
>
>
> Does this look like what we need, or do I need to create more links? How do
> I get the “libgfchangelog.so” file if it is missing?
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, August 28, 2018 4:22 PM
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> As per 

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-29 Thread Krishna Verma
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active”, 
at least for one master node. But it is still in a Faulty state for the 2nd 
master server.

Below is the detail.

[root@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A


[root@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick gluster-poc-noida:/data/gluster/gv0   49152 0  Y   22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0  Y   19471
Self-heal Daemon on localhost   N/A   N/AY   32087
Self-heal Daemon on noi-poc-gluster N/A   N/AY   6272

Task Status of Volume glusterep
--
There are no active volume tasks



[root@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[root@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar 
Sent: Wednesday, August 29, 2018 10:47 AM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[root@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx   1 root root  30 Aug 28 14:59 libgfchangelog.so -> 
/usr/lib64/libgfchangelog.so.1

The link created is pointing to wrong library. Please fix this

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx   1 root root  23 Aug 23 23:35 libgfchangelog.so.0 -> 
libgfchangelog.so.0.0.1
-rwxr-xr-x   1 root root   63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[root@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[root@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create more links? How do I 
get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma mailto:kve...@cadence.com>>
Cc: Sunny Kumar mailto:sunku...@redhat.com>>; Gluster 
Users mailto:gluster-users@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is 
what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present 
in "/usr/lib64/".
If it is not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root    1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so -> 
libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so.0 -> 
libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I di

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> I created the links before. Below is the detail.
>
>
>
> [root@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
>
> lrwxrwxrwx   1 root root  30 Aug 28 14:59 libgfchangelog.so ->
> /usr/lib64/libgfchangelog.so.1
>

The link you created points to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx   1 root root  23 Aug 23 23:35 libgfchangelog.so.0 ->
> libgfchangelog.so.0.0.1
>
> -rwxr-xr-x   1 root root   63384 Jul 24 19:11 libgfchangelog.so.0.0.1
>
> [root@gluster-poc-noida ~]# locate libgfchangelog.so
>
> /usr/lib64/libgfchangelog.so.0
>
> /usr/lib64/libgfchangelog.so.0.0.1
>
> [root@gluster-poc-noida ~]#
>
>
>
> Is it looks good what we exactly need or di I need to create any more link
> or How to get “libgfchangelog.so” file if missing.
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, August 28, 2018 4:22 PM
> *To:* Krishna Verma 
> *Cc:* Sunny Kumar ; Gluster Users <
> gluster-users@gluster.org>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> As per the output shared, I don't see the file "libgfchangelog.so" which
> is what is required.
>
> I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
> present in "/usr/lib64/".
>
> If not create a symlink similar to "libgfchangelog.so.0"
>
>
>
> It should be something like below.
>
>
>
> #ls -l /usr/lib64 | grep libgfch
> -rwxr-xr-x. 1 root root1078 Aug 28 05:56 libgfchangelog.la
> <https://urldefense.proofpoint.com/v2/url?u=http-3A__libgfchangelog.la&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=77GIqpHy9HY8RQd6lKzSJ-Z1PCuIhZJ3I3IvIuDX-xo&s=kIFnrBaSFV_DdqZezd6PXcDnD8Iy_gVN69ETZYtykEE&e=>
> lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so ->
> libgfchangelog.so.0.0.1
> lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so.0 ->
> libgfchangelog.so.0.0.1
> -rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1
>
>
>
> On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma  wrote:
>
> Hi Kotresh,
>
>
>
> Thanks for the response, I did that also but nothing changed.
>
>
>
> [root@gluster-poc-noida ~]# ldconfig /usr/lib64
>
> [root@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
>
> libgfchangelog.so.0 (libc6,x86-64) =>
> /usr/lib64/libgfchangelog.so.0
>
> [root@gluster-poc-noida ~]#
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep stop
>
> Stopping geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
>
> Starting geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVESLAVE NODESTATUSCRAWL STATUS
> LAST_SYNCED
>
> 
> 
> -
>
> gluster-poc-noidaglusterep     /data/gluster/gv0    root
> gluster-poc-sj::glusterep    N/A   FaultyN/A N/A
>
> noi-poc-gluster  glusterep /data/gluster/gv0root
>gluster-poc-sj::glusterepN/A   Faulty
> N/A N/A
>
> [root@gluster-poc-noida ~]#
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, August 28, 2018 4:00 PM
> *To:* Sunny Kumar 
> *Cc:* Krishna Verma ; Gluster Users <
> gluster-users@gluster.org>
>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> Since your libraries are in /usr/lib64, you should be doing
>
> #ldconfig /usr/lib64
>
> Confirm that below command lists the library
>
> #ldconfig -p | grep libgfchangelog
>
>
>
>
>
> On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar  wrote:
>
> can you do ldconfi

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi Kotresh,

I created the links before. Below is the detail.

[root@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx   1 root root  30 Aug 28 14:59 libgfchangelog.so -> 
/usr/lib64/libgfchangelog.so.1
lrwxrwxrwx   1 root root  23 Aug 23 23:35 libgfchangelog.so.0 -> 
libgfchangelog.so.0.0.1
-rwxr-xr-x   1 root root   63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[root@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[root@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create any more links? And 
how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar 
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma 
Cc: Sunny Kumar ; Gluster Users 
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is 
what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present 
in "/usr/lib64/".
If it is not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root    1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so -> 
libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so.0 -> 
libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma 
mailto:kve...@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response. I did that as well, but nothing changed.

[root@gluster-poc-noida ~]# ldconfig /usr/lib64
[root@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[root@gluster-poc-noida ~]#

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE 
   SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
-
gluster-poc-noidaglusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
noi-poc-gluster  glusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
[root@gluster-poc-noida ~]#
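With both workers stuck in Faulty, it can help to script the status check instead of eyeballing the table. A hypothetical sketch — the here-string below stands in for the real `gluster volume geo-replication glusterep gluster-poc-sj::glusterep status` output, using the host names from this thread:

```shell
# Count workers whose STATUS column (7th field) reads "Faulty".
# In real use, pipe the gluster status command into the awk instead
# of this sample text captured from the thread.
status='gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A'

faulty=$(printf '%s\n' "$status" | awk '$7 == "Faulty"' | wc -l)
echo "faulty workers: $faulty"
```

A non-zero count after a restart means the worker is still crash-looping, and the gsyncd log is the next place to look.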

/Krishna

From: Kotresh Hiremath Ravishankar 
mailto:khire...@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar mailto:sunku...@redhat.com>>
Cc: Krishna Verma mailto:kve...@cadence.com>>; Gluster 
Users mailto:gluster-users@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
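The two commands above can be wrapped into a quick check. Two caveats worth noting: `ldconfig -p` only reads the existing cache, so a path argument after `-p` (as tried earlier with `/usr/local/lib`) is effectively ignored, and rebuilding the cache with `ldconfig /usr/lib64` requires root. A sketch:

```shell
# Report whether the dynamic linker cache already knows libgfchangelog.
# "ldconfig -p" just dumps /etc/ld.so.cache, so it works unprivileged;
# rebuilding the cache ("ldconfig /usr/lib64") must be done as root.
ldc=$(command -v ldconfig || echo /sbin/ldconfig)   # ldconfig may live in /sbin

if "$ldc" -p | grep -q libgfchangelog; then
  echo "libgfchangelog found in linker cache"
else
  echo "libgfchangelog missing from linker cache - run ldconfig as root"
fi
```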


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar 
mailto:sunku...@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma 
mailto:kve...@cadence.com>> wrote:
>
> Hi Sunny,
>
> I did the mentioned changes given in patch and restart the session for 
> geo-replication. But again same errors in the logs.
>
> I have attaching the config files and logs here.
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep stop
> Stopping geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep delete
> Deleting geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-re

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi Sunny,

[root@gluster-poc-noida ~]# ldconfig /usr/local/lib
[root@gluster-poc-noida ~]# ldconfig -p /usr/local/lib | grep libgf
libgfxdr.so.0 (libc6,x86-64) => /lib64/libgfxdr.so.0
libgfrpc.so.0 (libc6,x86-64) => /lib64/libgfrpc.so.0
libgfortran.so.3 (libc6,x86-64) => /lib64/libgfortran.so.3
libgfortran.so.1 (libc6,x86-64) => /lib64/libgfortran.so.1
libgfdb.so.0 (libc6,x86-64) => /lib64/libgfdb.so.0
libgfchangelog.so.0 (libc6,x86-64) => /lib64/libgfchangelog.so.0
libgfapi.so.0 (libc6,x86-64) => /lib64/libgfapi.so.0
[root@gluster-poc-noida ~]#

-Original Message-
From: Sunny Kumar  
Sent: Tuesday, August 28, 2018 3:53 PM
To: Krishna Verma 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


can you do ldconfig /usr/local/lib and share the output of ldconfig -p 
/usr/local/lib | grep libgf

On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> I did the mentioned changes given in patch and restart the session for 
> geo-replication. But again same errors in the logs.
>
> I have attaching the config files and logs here.
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep stop Stopping geo-replication session 
> between glusterep & gluster-poc-sj::glusterep has been successful 
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep delete Deleting geo-replication session 
> between glusterep & gluster-poc-sj::glusterep has been successful 
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep create push-pem force Creating 
> geo-replication session between glusterep & gluster-poc-sj::glusterep 
> has been successful [root@gluster-poc-noida ~]# gluster volume 
> geo-replication glusterep gluster-poc-sj::glusterep start 
> geo-replication start failed for glusterep gluster-poc-sj::glusterep 
> geo-replication command failed [root@gluster-poc-noida ~]# gluster 
> volume geo-replication glusterep gluster-poc-sj::glusterep start 
> geo-replication start failed for glusterep gluster-poc-sj::glusterep 
> geo-replication command failed [root@gluster-poc-noida ~]# vim 
> /usr/libexec/glusterfs/python/syncdaemon/repce.py
> [root@gluster-poc-noida ~]# systemctl restart glusterd 
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start Starting geo-replication session 
> between glusterep & gluster-poc-sj::glusterep has been successful 
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep status
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE   
>  SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
> -
> gluster-poc-noidaglusterep /data/gluster/gv0root  
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> noi-poc-gluster  glusterep /data/gluster/gv0root  
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> [root@gluster-poc-noida ~]#
>
>
> /Krishna.
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 3:17 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not 
> work
>
> EXTERNAL MAIL
>
>
> With same log message ?
>
> Can you please verify that
> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e=
>  patch is present if not can you please apply that.
> and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 
> /usr/lib64/libgfchangelog.so.
>
> Please share the log also.
>
> Regards,
> Sunny
> On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma  wrote:
> >
> > Hi Sunny,
> >
> > Thanks for your response, I tried both, but still I am getting the same 
> > error.
> >
> >
> > [root@noi-poc-gluster ~]# ldconfig /usr/lib [root@noi-poc-gluster 
> > ~]#
> >
> > [root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 
> > /usr/lib64/libgfchangelog.so [root@noi-poc-gluster ~]# ls -l 
> > /usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59 
&g

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
Hi Krishna,

As per the output shared, I don't see the file "libgfchangelog.so", which is
what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If it is not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root    1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so ->
libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root  23 Aug 28 05:56 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1


On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma  wrote:

> Hi Kotresh,
>
>
>
> Thanks for the response, I did that also but nothing changed.
>
>
>
> [root@gluster-poc-noida ~]# ldconfig /usr/lib64
>
> [root@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
>
> libgfchangelog.so.0 (libc6,x86-64) =>
> /usr/lib64/libgfchangelog.so.0
>
> [root@gluster-poc-noida ~]#
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep stop
>
> Stopping geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
>
> Starting geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
>
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep status
>
>
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVESLAVE NODESTATUSCRAWL STATUS
> LAST_SYNCED
>
> 
> 
> -
>
> gluster-poc-noidaglusterep /data/gluster/gv0root
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
>
> noi-poc-gluster  glusterep /data/gluster/gv0root
>gluster-poc-sj::glusterepN/A   Faulty
> N/A N/A
>
> [root@gluster-poc-noida ~]#
>
>
>
> /Krishna
>
>
>
> *From:* Kotresh Hiremath Ravishankar 
> *Sent:* Tuesday, August 28, 2018 4:00 PM
> *To:* Sunny Kumar 
> *Cc:* Krishna Verma ; Gluster Users <
> gluster-users@gluster.org>
>
> *Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
>
>
>
> EXTERNAL MAIL
>
> Hi Krishna,
>
> Since your libraries are in /usr/lib64, you should be doing
>
> #ldconfig /usr/lib64
>
> Confirm that below command lists the library
>
> #ldconfig -p | grep libgfchangelog
>
>
>
>
>
> On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar  wrote:
>
> can you do ldconfig /usr/local/lib and share the output of ldconfig -p
> /usr/local/lib | grep libgf
>
> On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma  wrote:
> >
> > Hi Sunny,
> >
> > I did the mentioned changes given in patch and restart the session for
> geo-replication. But again same errors in the logs.
> >
> > I have attaching the config files and logs here.
> >
> >
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep stop
> > Stopping geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep delete
> > Deleting geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep create push-pem force
> > Creating geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root@gluster-poc-noida ~]# vim /usr/libexec/glusterfs/python/
> syncdaemon/repce.py
> &g

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi Kotresh,

Thanks for the response. I did that as well, but nothing changed.

[root@gluster-poc-noida ~]# ldconfig /usr/lib64
[root@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[root@gluster-poc-noida ~]#

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE 
   SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
-
gluster-poc-noidaglusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
noi-poc-gluster  glusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
[root@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar 
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar 
Cc: Krishna Verma ; Gluster Users 

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar 
mailto:sunku...@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma 
mailto:kve...@cadence.com>> wrote:
>
> Hi Sunny,
>
> I did the mentioned changes given in patch and restart the session for 
> geo-replication. But again same errors in the logs.
>
> I have attaching the config files and logs here.
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep stop
> Stopping geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep delete
> Deleting geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep create push-pem force
> Creating geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root@gluster-poc-noida ~]# vim 
> /usr/libexec/glusterfs/python/syncdaemon/repce.py
> [root@gluster-poc-noida ~]# systemctl restart glusterd
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> Starting geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep status
>
> MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE   
>  SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
> -
> gluster-poc-noidaglusterep /data/gluster/gv0root  
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> noi-poc-gluster  glusterep /data/gluster/gv0root  
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> [root@gluster-poc-noida ~]#
>
>
> /Krishna.
>
> -Original Message-
> From: Sunny Kumar mailto:sunku...@redhat.com>>
> Sent: Tuesday, August 28, 2018 3:17 PM
> To: Krishna Verma mailto:kve...@cadence.com>>
> Cc: gluster-users@gluster.org<mailto:gluster-users@gluster.org>
> Subject: Re: [Gluster-users] Upgrade t

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Kotresh Hiremath Ravishankar
Hi Krishna,

Since your libraries are in /usr/lib64, you should be doing

#ldconfig /usr/lib64

Confirm that below command lists the library

#ldconfig -p | grep libgfchangelog



On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar  wrote:

> can you do ldconfig /usr/local/lib and share the output of ldconfig -p
> /usr/local/lib | grep libgf
> On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma  wrote:
> >
> > Hi Sunny,
> >
> > I did the mentioned changes given in patch and restart the session for
> geo-replication. But again same errors in the logs.
> >
> > I have attaching the config files and logs here.
> >
> >
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep stop
> > Stopping geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep delete
> > Deleting geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep create push-pem force
> > Creating geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed
> > [root@gluster-poc-noida ~]# vim /usr/libexec/glusterfs/python/
> syncdaemon/repce.py
> > [root@gluster-poc-noida ~]# systemctl restart glusterd
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep start
> > Starting geo-replication session between glusterep &
> gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> gluster-poc-sj::glusterep status
> >
> > MASTER NODE  MASTER VOLMASTER BRICK SLAVE USER
> SLAVESLAVE NODESTATUSCRAWL STATUS
> LAST_SYNCED
> > 
> 
> -
> > gluster-poc-noidaglusterep /data/gluster/gv0root
> gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> > noi-poc-gluster  glusterep /data/gluster/gv0root
> gluster-poc-sj::glusterepN/A   FaultyN/A         N/A
> > [root@gluster-poc-noida ~]#
> >
> >
> > /Krishna.
> >
> > -Original Message-
> > From: Sunny Kumar 
> > Sent: Tuesday, August 28, 2018 3:17 PM
> > To: Krishna Verma 
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> work
> >
> > EXTERNAL MAIL
> >
> >
> > With same log message ?
> >
> > Can you please verify that
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
> gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=
> aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_
> 6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=
> fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= patch is present if not
> can you please apply that.
> > and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
> /usr/lib64/libgfchangelog.so.
> >
> > Please share the log also.
> >
> > Regards,
> > Sunny
> > On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma 
> wrote:
> > >
> > > Hi Sunny,
> > >
> > > Thanks for your response, I tried both, but still I am getting the
> same error.
> > >
> > >
> > > [root@noi-poc-gluster ~]# ldconfig /usr/lib [root@noi-poc-gluster ~]#
> > >
> > > [root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1
> > > /usr/lib64/libgfchangelog.so [root@noi-poc-gluster ~]# ls -l
> > > /usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
> > > /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
> > >
> > > /Krishna
> > >
> > > -Original Message-
> > > From: Sunny Kumar 
> > > Sent: Tuesday, August 28, 2018 2:55 PM
> > > 

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
No, it goes Faulty again after rebuilding the session, and I am still getting 
the same library errors in the logs.

[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE 
   SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
-
gluster-poc-noidaglusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
noi-poc-gluster  glusterep /data/gluster/gv0root  
gluster-poc-sj::glusterepN/A   FaultyN/A N/A
[root@gluster-poc-noida ~]#

/krishna  

-Original Message-
From: Sunny Kumar  
Sent: Tuesday, August 28, 2018 3:56 PM
To: Krishna Verma 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Tue, Aug 28, 2018 at 3:55 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> [root@gluster-poc-noida ~]# ldconfig /usr/local/lib 
> [root@gluster-poc-noida ~]# ldconfig -p /usr/local/lib | grep libgf
> libgfxdr.so.0 (libc6,x86-64) => /lib64/libgfxdr.so.0
> libgfrpc.so.0 (libc6,x86-64) => /lib64/libgfrpc.so.0
> libgfortran.so.3 (libc6,x86-64) => /lib64/libgfortran.so.3
> libgfortran.so.1 (libc6,x86-64) => /lib64/libgfortran.so.1
> libgfdb.so.0 (libc6,x86-64) => /lib64/libgfdb.so.0
> libgfchangelog.so.0 (libc6,x86-64) => 
> /lib64/libgfchangelog.so.0
This is linked properly, so geo-rep should be working; please check it again.
> libgfapi.so.0 (libc6,x86-64) => /lib64/libgfapi.so.0 
> [root@gluster-poc-noida ~]#
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 3:53 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not 
> work
>
> EXTERNAL MAIL
>
>
> can you do ldconfig /usr/local/lib and share the output of ldconfig -p 
> /usr/local/lib | grep libgf On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma 
>  wrote:
> >
> > Hi Sunny,
> >
> > I did the mentioned changes given in patch and restart the session for 
> > geo-replication. But again same errors in the logs.
> >
> > I have attaching the config files and logs here.
> >
> >
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep stop Stopping geo-replication session 
> > between glusterep & gluster-poc-sj::glusterep has been successful 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep delete Deleting geo-replication session 
> > between glusterep & gluster-poc-sj::glusterep has been successful 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep create push-pem force Creating 
> > geo-replication session between glusterep & 
> > gluster-poc-sj::glusterep has been successful 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep start geo-replication start failed for 
> > glusterep gluster-poc-sj::glusterep geo-replication command failed 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep start geo-replication start failed for 
> > glusterep gluster-poc-sj::glusterep geo-replication command failed 
> > [root@gluster-poc-noida ~]# vim 
> > /usr/libexec/glusterfs/python/syncdaemon/repce.py
> > [root@gluster-poc-noida ~]# systemctl restart glusterd 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep start Starting geo-replication session 
> > between glusterep & gluster-poc-sj::glusterep has been successful 
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> > gluster-poc-sj::glusterep status
> >
> > MASTER NODE  MASTER VOLMASTER BRICK SLAVE USERSLAVE 
> >SLAVE NODESTATUSCRAWL STATUSLAST_SYNCED
> > -
> > gluster-poc-noidaglusterep /data/gluster/gv0root  
> > gluster-poc-sj::glusterepN/A   FaultyN/A N/A
> > noi-poc-gluster  glusterep /data/gluster/gv0root  
> > gluster-poc-sj::glusterepN/A   Faul

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Sunny Kumar
On Tue, Aug 28, 2018 at 3:55 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> [root@gluster-poc-noida ~]# ldconfig /usr/local/lib
> [root@gluster-poc-noida ~]# ldconfig -p /usr/local/lib | grep libgf
> libgfxdr.so.0 (libc6,x86-64) => /lib64/libgfxdr.so.0
> libgfrpc.so.0 (libc6,x86-64) => /lib64/libgfrpc.so.0
> libgfortran.so.3 (libc6,x86-64) => /lib64/libgfortran.so.3
> libgfortran.so.1 (libc6,x86-64) => /lib64/libgfortran.so.1
> libgfdb.so.0 (libc6,x86-64) => /lib64/libgfdb.so.0
> libgfchangelog.so.0 (libc6,x86-64) => /lib64/libgfchangelog.so.0
This is linked properly, so geo-rep should be working; please check it again.
> libgfapi.so.0 (libc6,x86-64) => /lib64/libgfapi.so.0
> [root@gluster-poc-noida ~]#
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 3:53 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
>
> EXTERNAL MAIL
>
>
> can you do ldconfig /usr/local/lib and share the output of ldconfig -p 
> /usr/local/lib | grep libgf On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma 
>  wrote:
> >
> > Hi Sunny,
> >
> > I made the changes given in the patch and restarted the geo-replication
> > session, but the same errors appear in the logs again.
> >
> > I am attaching the config files and logs here.
> >
> >
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> > gluster-poc-sj::glusterep stop Stopping geo-replication session
> > between glusterep & gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> > gluster-poc-sj::glusterep delete Deleting geo-replication session
> > between glusterep & gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> > gluster-poc-sj::glusterep create push-pem force Creating
> > geo-replication session between glusterep & gluster-poc-sj::glusterep
> > has been successful [root@gluster-poc-noida ~]# gluster volume
> > geo-replication glusterep gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed [root@gluster-poc-noida ~]# gluster
> > volume geo-replication glusterep gluster-poc-sj::glusterep start
> > geo-replication start failed for glusterep gluster-poc-sj::glusterep
> > geo-replication command failed [root@gluster-poc-noida ~]# vim
> > /usr/libexec/glusterfs/python/syncdaemon/repce.py
> > [root@gluster-poc-noida ~]# systemctl restart glusterd
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> > gluster-poc-sj::glusterep start Starting geo-replication session
> > between glusterep & gluster-poc-sj::glusterep has been successful
> > [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep
> > gluster-poc-sj::glusterep status
> >
> > MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> > ------------------------------------------------------------------------------------------------------------------------------------------------------
> > gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> > noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> > [root@gluster-poc-noida ~]#
> >
> >
> > /Krishna.
> >
> > -Original Message-
> > From: Sunny Kumar 
> > Sent: Tuesday, August 28, 2018 3:17 PM
> > To: Krishna Verma 
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> > work
> >
> > EXTERNAL MAIL
> >
> >
> > With the same log message?
> >
> > Can you please verify that the patch
> > https://review.gluster.org/#/c/glusterfs/+/20207/ is present? If not,
> > can you please apply it and try symlinking:
> > ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
> >
> > Please share the log also.
> >
> > Regards,
>

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Sunny Kumar
Can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf?
On Tue, Aug 28, 2018 at 3:45 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> I made the changes given in the patch and restarted the geo-replication
> session, but the same errors appear in the logs again.
>
> I am attaching the config files and logs here.
>
>
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep stop
> Stopping geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep delete
> Deleting geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep create push-pem force
> Creating geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> geo-replication start failed for glusterep gluster-poc-sj::glusterep
> geo-replication command failed
> [root@gluster-poc-noida ~]# vim 
> /usr/libexec/glusterfs/python/syncdaemon/repce.py
> [root@gluster-poc-noida ~]# systemctl restart glusterd
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep start
> Starting geo-replication session between glusterep & 
> gluster-poc-sj::glusterep has been successful
> [root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
> gluster-poc-sj::glusterep status
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> ------------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> [root@gluster-poc-noida ~]#
>
>
> /Krishna.
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 3:17 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
>
> EXTERNAL MAIL
>
>
> With the same log message?
>
> Can you please verify that the patch
> https://review.gluster.org/#/c/glusterfs/+/20207/ is present? If not,
> can you please apply it and try symlinking:
> ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
>
> Please share the log also.
>
> Regards,
> Sunny
> On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma  wrote:
> >
> > Hi Sunny,
> >
> > Thanks for your response. I tried both, but I am still getting the same error.
> >
> >
> > [root@noi-poc-gluster ~]# ldconfig /usr/lib [root@noi-poc-gluster ~]#
> >
> > [root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1
> > /usr/lib64/libgfchangelog.so [root@noi-poc-gluster ~]# ls -l
> > /usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
> > /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
> >
> > /Krishna
> >
> > -Original Message-
> > From: Sunny Kumar 
> > Sent: Tuesday, August 28, 2018 2:55 PM
> > To: Krishna Verma 
> > Cc: gluster-users@gluster.org
> > Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
> > work
> >
> > EXTERNAL MAIL
> >
> >
> > Hi Krish,
> >
> > You can run -
> > #ldconfig /usr/lib
> >
> > If that still does not solve your problem, you can create the symlink
> > manually, like:
> > ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
> >
> > Thanks,
> > Sunny Kumar
> > On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma  wrote:
> > >
> > > Hi
&

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi Sunny,

I made the changes given in the patch and restarted the geo-replication
session, but the same errors appear in the logs again.

I am attaching the config files and logs here.


[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep delete
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep create push-pem force
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
[root@gluster-poc-noida ~]# vim 
/usr/libexec/glusterfs/python/syncdaemon/repce.py
[root@gluster-poc-noida ~]# systemctl restart glusterd
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep 
has been successful
[root@gluster-poc-noida ~]# gluster volume geo-replication glusterep 
gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
[root@gluster-poc-noida ~]#


/Krishna.

-Original Message-
From: Sunny Kumar  
Sent: Tuesday, August 28, 2018 3:17 PM
To: Krishna Verma 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


With the same log message?

Can you please verify that the patch
https://review.gluster.org/#/c/glusterfs/+/20207/ is present? If not,
can you please apply it and try symlinking:
ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so

Please share the log also.

Regards,
Sunny
On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> Thanks for your response. I tried both, but I am still getting the same error.
>
>
> [root@noi-poc-gluster ~]# ldconfig /usr/lib [root@noi-poc-gluster ~]#
>
> [root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 
> /usr/lib64/libgfchangelog.so [root@noi-poc-gluster ~]# ls -l 
> /usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59 
> /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
>
> /Krishna
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 2:55 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not 
> work
>
> EXTERNAL MAIL
>
>
> Hi Krish,
>
> You can run -
> #ldconfig /usr/lib
>
> If that still does not solve your problem, you can create the symlink
> manually, like:
> ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
>
> Thanks,
> Sunny Kumar
> On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma  wrote:
> >
> > Hi
> >
> >
> >
> > I am getting below error in gsyncd.log
> >
> >
> >
> > OSError: libgfchangelog.so: cannot open shared object file: No such 
> > file or directory
> >
> > [2018-08-28 07:19:41.446785] E [repce(worker 
> > /data/gluster/gv0):197:__call__] RepceClient: call failed   
> > call=26469:139794524604224:1535440781.44method=init 
> > error=OSError
> >
> > [2018-08-28 07:19:41.447041] E [syncdutils(worker 
> > /data/gluster/gv0):330:log_raise_exception] : FAIL:
> >
> > Traceback (most recent call last):
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/

Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Sunny Kumar
With the same log message?

Can you please verify that the patch
https://review.gluster.org/#/c/glusterfs/+/20207/ is present? If not,
can you please apply it and try symlinking:
ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
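The symlink advice above is easy to get wrong if the soname being linked does not match the versioned file actually on disk (elsewhere in the thread the installed files are libgfchangelog.so.0 / .so.0.0.1, while one attempt linked to .so.1). A minimal, self-contained sketch of the pitfall, since ln -s never verifies its target:

```shell
set -e
# Sketch only: ln -s creates a link even when the target is absent.
# A link to a soname that is not on disk (e.g. libgfchangelog.so.1 when
# only .so.0 exists) is created silently but resolves to nothing.
tmp=$(mktemp -d)
touch "$tmp/libgfchangelog.so.0.0.1"
ln -s libgfchangelog.so.0.0.1 "$tmp/libgfchangelog.so"   # target exists
ln -s libgfchangelog.so.1     "$tmp/broken.so"           # target missing
test -e "$tmp/libgfchangelog.so" && echo "link resolves"
test -e "$tmp/broken.so" || echo "dangling link"
rm -rf "$tmp"
```

Checking the new link with `ls -lL` or `test -e` before restarting the session avoids chasing the same OSError twice.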

Please share the log also.

Regards,
Sunny
On Tue, Aug 28, 2018 at 3:02 PM Krishna Verma  wrote:
>
> Hi Sunny,
>
> Thanks for your response. I tried both, but I am still getting the same error.
>
>
> [root@noi-poc-gluster ~]# ldconfig /usr/lib
> [root@noi-poc-gluster ~]#
>
> [root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 
> /usr/lib64/libgfchangelog.so
> [root@noi-poc-gluster ~]# ls -l /usr/lib64/libgfchangelog.so
> lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> 
> /usr/lib64/libgfchangelog.so.1
>
> /Krishna
>
> -Original Message-
> From: Sunny Kumar 
> Sent: Tuesday, August 28, 2018 2:55 PM
> To: Krishna Verma 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
>
> EXTERNAL MAIL
>
>
> Hi Krish,
>
> You can run -
> #ldconfig /usr/lib
>
> If that still does not solve your problem, you can create the symlink
> manually, like:
> ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
>
> Thanks,
> Sunny Kumar
> On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma  wrote:
> >
> > Hi
> >
> >
> >
> > I am getting below error in gsyncd.log
> >
> >
> >
> > OSError: libgfchangelog.so: cannot open shared object file: No such
> > file or directory
> >
> > [2018-08-28 07:19:41.446785] E [repce(worker 
> > /data/gluster/gv0):197:__call__] RepceClient: call failed   
> > call=26469:139794524604224:1535440781.44method=init 
> > error=OSError
> >
> > [2018-08-28 07:19:41.447041] E [syncdutils(worker 
> > /data/gluster/gv0):330:log_raise_exception] : FAIL:
> >
> > Traceback (most recent call last):
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311,
> > in main
> >
> > func(args)
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72,
> > in subcmd_worker
> >
> > local.service_loop(remote)
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
> > 1236, in service_loop
> >
> > changelog_agent.init()
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216,
> > in __call__
> >
> > return self.ins(self.meth, *a)
> >
> >   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198,
> > in __call__
> >
> > raise res
> >
> > OSError: libgfchangelog.so: cannot open shared object file: No such
> > file or directory
> >
> > [2018-08-28 07:19:41.457555] I [repce(agent 
> > /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching 
> > EOF.
> >
> > [2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor:
> > worker died in startup phase  brick=/data/gluster/gv0
> >
> >
> >
> > Below is my file location:
> >
> >
> >
> > /usr/lib64/libgfchangelog.so.0
> >
> > /usr/lib64/libgfchangelog.so.0.0.1
> >
> >
> >
> > What can I do to fix it?
> >
> >
> >
> > /Krish
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi Sunny,

Thanks for your response. I tried both, but I am still getting the same error.


[root@noi-poc-gluster ~]# ldconfig /usr/lib
[root@noi-poc-gluster ~]#

[root@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 
/usr/lib64/libgfchangelog.so
[root@noi-poc-gluster ~]# ls -l /usr/lib64/libgfchangelog.so
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> 
/usr/lib64/libgfchangelog.so.1

/Krishna

-Original Message-
From: Sunny Kumar  
Sent: Tuesday, August 28, 2018 2:55 PM
To: Krishna Verma 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


Hi Krish,

You can run -
#ldconfig /usr/lib

If that still does not solve your problem, you can create the symlink
manually, like:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so

Thanks,
Sunny Kumar
On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma  wrote:
>
> Hi
>
>
>
> I am getting below error in gsyncd.log
>
>
>
> OSError: libgfchangelog.so: cannot open shared object file: No such 
> file or directory
>
> [2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] 
> RepceClient: call failed   call=26469:139794524604224:1535440781.44   
>  method=init error=OSError
>
> [2018-08-28 07:19:41.447041] E [syncdutils(worker 
> /data/gluster/gv0):330:log_raise_exception] : FAIL:
>
> Traceback (most recent call last):
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, 
> in main
>
> func(args)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, 
> in subcmd_worker
>
> local.service_loop(remote)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 
> 1236, in service_loop
>
> changelog_agent.init()
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, 
> in __call__
>
> return self.ins(self.meth, *a)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, 
> in __call__
>
> raise res
>
> OSError: libgfchangelog.so: cannot open shared object file: No such 
> file or directory
>
> [2018-08-28 07:19:41.457555] I [repce(agent 
> /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: 
> worker died in startup phase  brick=/data/gluster/gv0
>
>
>
> Below is my file location:
>
>
>
> /usr/lib64/libgfchangelog.so.0
>
> /usr/lib64/libgfchangelog.so.0.0.1
>
>
>
> What can I do to fix it?
>
>
>
> /Krish
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Sunny Kumar
Hi Krish,

You can run -
#ldconfig /usr/lib

If that still does not solve your problem, you can create the symlink
manually, like:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so

Thanks,
Sunny Kumar
On Tue, Aug 28, 2018 at 1:47 PM Krishna Verma  wrote:
>
> Hi
>
>
>
> I am getting below error in gsyncd.log
>
>
>
> OSError: libgfchangelog.so: cannot open shared object file: No such file or 
> directory
>
> [2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] 
> RepceClient: call failed   call=26469:139794524604224:1535440781.44   
>  method=init error=OSError
>
> [2018-08-28 07:19:41.447041] E [syncdutils(worker 
> /data/gluster/gv0):330:log_raise_exception] : FAIL:
>
> Traceback (most recent call last):
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
>
> func(args)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in 
> subcmd_worker
>
> local.service_loop(remote)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in 
> service_loop
>
> changelog_agent.init()
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in 
> __call__
>
> return self.ins(self.meth, *a)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in 
> __call__
>
> raise res
>
> OSError: libgfchangelog.so: cannot open shared object file: No such file or 
> directory
>
> [2018-08-28 07:19:41.457555] I [repce(agent 
> /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
>
> [2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker 
> died in startup phase  brick=/data/gluster/gv0
>
>
>
> Below is my file location:
>
>
>
> /usr/lib64/libgfchangelog.so.0
>
> /usr/lib64/libgfchangelog.so.0.0.1
>
>
>
> What can I do to fix it?
>
>
>
> /Krish
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

2018-08-28 Thread Krishna Verma
Hi

I am getting below error in gsyncd.log

OSError: libgfchangelog.so: cannot open shared object file: No such file or 
directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] 
RepceClient: call failed   call=26469:139794524604224:1535440781.44
method=init error=OSError
[2018-08-28 07:19:41.447041] E [syncdutils(worker 
/data/gluster/gv0):330:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in 
subcmd_worker
local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in 
service_loop
changelog_agent.init()
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in 
__call__
return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in 
__call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or 
directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] 
RepceServer: terminating on reaching EOF.
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker 
died in startup phase  brick=/data/gluster/gv0

Below is my file location:

/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
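For context, gsyncd's changelog agent loads this library at runtime via ctypes, and a missing unversioned name surfaces as exactly the OSError in the traceback above. A minimal sketch of that failure mode (the library name used below is deliberately made up, standing in for the unresolvable libgfchangelog.so):

```python
import ctypes

def try_load(name):
    # ctypes.CDLL raises OSError when the dynamic linker cannot find
    # the requested shared object -- the same exception class seen in
    # the gsyncd traceback above.
    try:
        ctypes.CDLL(name)
        return "loaded"
    except OSError:
        return "OSError"

# Hypothetical, certainly-absent name for illustration only.
print(try_load("libdoes-not-exist-12345.so"))  # OSError
```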

What can I do to fix it?

/Krish
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users