Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-28 Thread Maurya M
Hi,
In my glusterd.log I am seeing the error messages below. Are these related to the
patch I applied, or do I need to open a new thread?

 I [MSGID: 106327] [glusterd-geo-rep.c:4483:glusterd_read_status_file]
0-management: Using passed config
template(/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf).
[2019-03-28 10:39:29.493554] E [MSGID: 106293]
[glusterd-geo-rep.c:679:glusterd_query_extutil_generic] 0-management:
reading data from child failed
[2019-03-28 10:39:29.493589] E [MSGID: 106305]
[glusterd-geo-rep.c:4377:glusterd_fetch_values_from_config] 0-management:
Unable to get configuration data for
vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35:
:vol_e783a730578e45ed9d51b9a80df6c33f(slave)
[2019-03-28 10:39:29.493617] E [MSGID: 106328]
[glusterd-geo-rep.c:4517:glusterd_read_status_file] 0-management: Unable to
fetch config values for vol_75a5fd373d88ba687f591f3353fa05cf(master),
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave). Trying default
config template
[2019-03-28 10:39:29.553846] E [MSGID: 106328]
[glusterd-geo-rep.c:4525:glusterd_read_status_file] 0-management: Unable to
fetch config values for vol_75a5fd373d88ba687f591f3353fa05cf(master),
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave)
[2019-03-28 10:39:29.553836] E [MSGID: 106293]
[glusterd-geo-rep.c:679:glusterd_query_extutil_generic] 0-management:
reading data from child failed
[2019-03-28 10:39:29.553844] E [MSGID: 106305]
[glusterd-geo-rep.c:4377:glusterd_fetch_values_from_config] 0-management:
Unable to get configuration data for
vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35:
:vol_e783a730578e45ed9d51b9a80df6c33f(slave)

Also, while doing a status call, I am not seeing one of the nodes which was
reporting 'Passive' before (I did not change any configuration). Any ideas
how to troubleshoot this?

thanks for your help.

Maurya

On Tue, Mar 26, 2019 at 8:34 PM Aravinda  wrote:

> Please check error message in gsyncd.log file in
> /var/log/glusterfs/geo-replication/
>
> On Tue, 2019-03-26 at 19:44 +0530, Maurya M wrote:
> > Hi Arvind,
> >  Have patched my setup with your fix: re-run the setup, but this time
> > getting a different error where it failed to commit the ssh-port on
> > my other 2 nodes on the master cluster, so manually copied the :
> > [vars]
> > ssh-port = 
> >
> > into gsyncd.conf
> >
> > and status reported back is as shown below :  Any ideas how to
> > troubleshoot this?
> >
> > MASTER NODE  MASTER VOL  MASTER
> > BRICK
> >SLAVE USERSLAVE
> >   SLAVE NODE  STATUS
> >  CRAWL STATUSLAST_SYNCED
> > ---
> > ---
> > ---
> > ---
> > --
> > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
> > b9427fb26f752d9ba8e45e183cb1/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f172.16.201.4
> >   PassiveN/A N/A
> > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
> > b08f0d466d346f8c0b19569736fb/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> >  Faulty N/A N/A
> > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
> > 4c9380cdedac708e27e2c2a443a0/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> >  Initializing...N/A N/A
> >
> >
> >
> >
> > On Tue, Mar 26, 2019 at 1:40 PM Aravinda  wrote:
> > > I got chance to investigate this issue further and identified a
> > > issue
> > > with Geo-replication config set and sent patch to fix the same.
> > >
> > > BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
> > > Patch: https://review.gluster.org/22418
> > >
> > > On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> > > > ran this command :  ssh -p  -i /var/lib/glusterd/geo-
> > > > replication/secret.pem root@gluster volume info --
> > > xml
> > > >
> > > > attaching the output.
> > > >
> > > >
> > > >
> > > > On Mon, Mar 25, 2019 at 2:13 PM Aravinda 
> > > wrote:
> > > > > Geo-rep is running `ssh -i /var/lib/glusterd/geo-
> > > > > replication/secret.pem
> > > > > root@ gluster volume info --xml` and parsing its
> > > output.
> > > > > Please try to to run the command from the same node and let us
> > > know
> > > > > the
> > > > > output.
> > > > >
> > > > >
> > > > > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > > > > > Now the error is on the 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Aravinda
Please check error message in gsyncd.log file in
/var/log/glusterfs/geo-replication/

On Tue, 2019-03-26 at 19:44 +0530, Maurya M wrote:
> Hi Arvind,
>  Have patched my setup with your fix: re-run the setup, but this time
> getting a different error where it failed to commit the ssh-port on
> my other 2 nodes on the master cluster, so manually copied the :
> [vars]
> ssh-port = 
> 
> into gsyncd.conf
> 
> and status reported back is as shown below :  Any ideas how to
> troubleshoot this?
> 
> MASTER NODE  MASTER VOL  MASTER
> BRICK   
>SLAVE USERSLAVE   
>   SLAVE NODE  STATUS   
>  CRAWL STATUSLAST_SYNCED
> ---
> ---
> ---
> ---
> --
> 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
> b9427fb26f752d9ba8e45e183cb1/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f172.16.201.4 
>   PassiveN/A N/A
> 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
> b08f0d466d346f8c0b19569736fb/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A   
>  Faulty N/A N/A
> 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
> 4c9380cdedac708e27e2c2a443a0/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A   
>  Initializing...N/A N/A
> 
> 
> 
> 
> On Tue, Mar 26, 2019 at 1:40 PM Aravinda  wrote:
> > I got chance to investigate this issue further and identified a
> > issue
> > with Geo-replication config set and sent patch to fix the same.
> > 
> > BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
> > Patch: https://review.gluster.org/22418
> > 
> > On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> > > ran this command :  ssh -p  -i /var/lib/glusterd/geo-
> > > replication/secret.pem root@gluster volume info --
> > xml 
> > > 
> > > attaching the output.
> > > 
> > > 
> > > 
> > > On Mon, Mar 25, 2019 at 2:13 PM Aravinda 
> > wrote:
> > > > Geo-rep is running `ssh -i /var/lib/glusterd/geo-
> > > > replication/secret.pem 
> > > > root@ gluster volume info --xml` and parsing its
> > output.
> > > > Please try to to run the command from the same node and let us
> > know
> > > > the
> > > > output.
> > > > 
> > > > 
> > > > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > > > > Now the error is on the same line 860 : as highlighted below:
> > > > > 
> > > > > [2019-03-25 06:11:52.376238] E
> > > > > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > > > > Traceback (most recent call last):
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py",
> > line
> > > > > 311, in main
> > > > > func(args)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py",
> > > > line
> > > > > 50, in subcmd_monitor
> > > > > return monitor.monitor(local, remote)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > > > line
> > > > > 427, in monitor
> > > > > return Monitor().multiplex(*distribute(local, remote))
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > > > line
> > > > > 386, in distribute
> > > > > svol = Volinfo(slave.volume, "localhost", prelude)
> > > > >   File
> > "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > > line
> > > > > 860, in __init__
> > > > > vi = XET.fromstring(vix)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > > 1300, in
> > > > > XML
> > > > > parser.feed(text)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > > 1642, in
> > > > > feed
> > > > > self._raiseerror(v)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > > 1506, in
> > > > > _raiseerror
> > > > > raise err
> > > > > ParseError: syntax error: line 1, column 0
> > > > > 
> > > > > 
> > > > > On Mon, Mar 25, 2019 at 11:29 AM Maurya M 
> > > > wrote:
> > > > > > Sorry my bad, had put the print line to debug, i am using
> > > > gluster
> > > > > > 4.1.7, will remove the print line.
> > > > > > 
> > > > > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda <
> > avish...@redhat.com>
> > > > > > wrote:
> > > > > > > Below print statement looks wrong. Latest Glusterfs code
> > > > doesn't
> > > > > > > have
> > > > > > > this print statement. Please let us know which version of

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Maurya M
Hi Aravinda,
I have patched my setup with your fix and re-run the setup, but this time I am
getting a different error: it failed to commit the ssh-port on my other 2 nodes
on the master cluster, so I manually copied the following:
*[vars]*
*ssh-port = *

into gsyncd.conf
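
(A quick way to confirm what ssh-port gsyncd will actually pick up from that file on
a given node; this is only a sketch, and the session directory below is the one from
the logs in this thread, so adjust the path for your setup.)

```
# Sketch: read the [vars] ssh-port from the geo-rep session config on this node.
# The session directory name is copied from the logs in this thread; adjust it.
try:
    from configparser import ConfigParser        # Python 3
except ImportError:
    from ConfigParser import ConfigParser        # Python 2.7, as in the tracebacks here

conf_path = ("/var/lib/glusterd/geo-replication/"
             "vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_"
             "vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf")

cp = ConfigParser()
cp.read(conf_path)
if cp.has_option("vars", "ssh-port"):
    print("ssh-port = %s" % cp.get("vars", "ssh-port"))
else:
    print("no ssh-port set in %s" % conf_path)
```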

and the status reported back is as shown below. Any ideas how to troubleshoot
this?

MASTER VOL:   vol_75a5fd373d88ba687f591f3353fa05cf
SLAVE:        172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    (SLAVE USER: root)

MASTER NODE      MASTER BRICK                                                                                              SLAVE NODE      STATUS             CRAWL STATUS    LAST_SYNCED
172.16.189.4     /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick   172.16.201.4    *Passive*          N/A             N/A
172.16.189.35    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick   N/A             *Faulty*           N/A             N/A
172.16.189.66    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick   N/A             *Initializing*...  N/A             N/A




On Tue, Mar 26, 2019 at 1:40 PM Aravinda  wrote:

> I got chance to investigate this issue further and identified a issue
> with Geo-replication config set and sent patch to fix the same.
>
> BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
> Patch: https://review.gluster.org/22418
>
> On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> > ran this command :  ssh -p  -i /var/lib/glusterd/geo-
> > replication/secret.pem root@gluster volume info --xml
> >
> > attaching the output.
> >
> >
> >
> > On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:
> > > Geo-rep is running `ssh -i /var/lib/glusterd/geo-
> > > replication/secret.pem
> > > root@ gluster volume info --xml` and parsing its output.
> > > Please try to to run the command from the same node and let us know
> > > the
> > > output.
> > >
> > >
> > > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > > > Now the error is on the same line 860 : as highlighted below:
> > > >
> > > > [2019-03-25 06:11:52.376238] E
> > > > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > > > Traceback (most recent call last):
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> > > > 311, in main
> > > > func(args)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py",
> > > line
> > > > 50, in subcmd_monitor
> > > > return monitor.monitor(local, remote)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > > line
> > > > 427, in monitor
> > > > return Monitor().multiplex(*distribute(local, remote))
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > > line
> > > > 386, in distribute
> > > > svol = Volinfo(slave.volume, "localhost", prelude)
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > line
> > > > 860, in __init__
> > > > vi = XET.fromstring(vix)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > 1300, in
> > > > XML
> > > > parser.feed(text)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > 1642, in
> > > > feed
> > > > self._raiseerror(v)
> > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > > 1506, in
> > > > _raiseerror
> > > > raise err
> > > > ParseError: syntax error: line 1, column 0
> > > >
> > > >
> > > > On Mon, Mar 25, 2019 at 11:29 AM Maurya M 
> > > wrote:
> > > > > Sorry my bad, had put the print line to debug, i am using
> > > gluster
> > > > > 4.1.7, will remove the print line.
> > > > >
> > > > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > > > > wrote:
> > > > > > Below print statement looks wrong. Latest Glusterfs code
> > > doesn't
> > > > > > have
> > > > > > this print statement. Please let us know which version of
> > > > > > glusterfs you
> > > > > > are using.
> > > > > >
> > > > > >
> > > > > > ```
> > > > > >   File
> > > "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > > > > line
> > > > > > 860, in __init__
> > > > > > print "debug varible " %vix
> > > > > > ```
> > > > > >
> > > > > > As a workaround, edit that file and comment the print line
> > > and
> > > > > > test the
> > > > > > geo-rep config command.
> > > > > >
> > > > > >
> > > > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > > > hi Aravinda,
> > > > > > >  had the session 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-26 Thread Aravinda
I got a chance to investigate this issue further, identified an issue
with the Geo-replication config set, and sent a patch to fix it.

BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666
Patch: https://review.gluster.org/22418

On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:
> ran this command :  ssh -p  -i /var/lib/glusterd/geo-
> replication/secret.pem root@gluster volume info --xml 
> 
> attaching the output.
> 
> 
> 
> On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:
> > Geo-rep is running `ssh -i /var/lib/glusterd/geo-
> > replication/secret.pem 
> > root@ gluster volume info --xml` and parsing its output.
> > Please try to to run the command from the same node and let us know
> > the
> > output.
> > 
> > 
> > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > > Now the error is on the same line 860 : as highlighted below:
> > > 
> > > [2019-03-25 06:11:52.376238] E
> > > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > > Traceback (most recent call last):
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> > > 311, in main
> > > func(args)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py",
> > line
> > > 50, in subcmd_monitor
> > > return monitor.monitor(local, remote)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > line
> > > 427, in monitor
> > > return Monitor().multiplex(*distribute(local, remote))
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py",
> > line
> > > 386, in distribute
> > > svol = Volinfo(slave.volume, "localhost", prelude)
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > line
> > > 860, in __init__
> > > vi = XET.fromstring(vix)
> > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > 1300, in
> > > XML
> > > parser.feed(text)
> > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > 1642, in
> > > feed
> > > self._raiseerror(v)
> > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line
> > 1506, in
> > > _raiseerror
> > > raise err
> > > ParseError: syntax error: line 1, column 0
> > > 
> > > 
> > > On Mon, Mar 25, 2019 at 11:29 AM Maurya M 
> > wrote:
> > > > Sorry my bad, had put the print line to debug, i am using
> > gluster
> > > > 4.1.7, will remove the print line.
> > > > 
> > > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > > > wrote:
> > > > > Below print statement looks wrong. Latest Glusterfs code
> > doesn't
> > > > > have
> > > > > this print statement. Please let us know which version of
> > > > > glusterfs you
> > > > > are using.
> > > > > 
> > > > > 
> > > > > ```
> > > > >   File
> > "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > > > line
> > > > > 860, in __init__
> > > > > print "debug varible " %vix
> > > > > ```
> > > > > 
> > > > > As a workaround, edit that file and comment the print line
> > and
> > > > > test the
> > > > > geo-rep config command.
> > > > > 
> > > > > 
> > > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > > hi Aravinda,
> > > > > >  had the session created using : create ssh-port  push-
> > pem
> > > > > and
> > > > > > also the :
> > > > > > 
> > > > > > gluster volume geo-replication
> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config
> > ssh-
> > > > > port
> > > > > > 
> > > > > > 
> > > > > > hitting this message:
> > > > > > geo-replication config-set failed for
> > > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > > > geo-replication command failed
> > > > > > 
> > > > > > Below is snap of status:
> > > > > > 
> > > > > > [root@k8s-agentpool1-24779565-1
> > > > > >
> > > > >
> > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
> > > > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > > > 
> > > > > > MASTER NODE  MASTER VOL 
> > MASTER
> > > > > > BRICK 
> >
> > > > >  
> > > > > >SLAVE USERSLAVE 
> >
> > > > >  
> > > > > >   SLAVE NODESTATUS   
> >  CRAWL
> > > > > STATUS 
> > > > > >   LAST_SYNCED
> > > > > > -
> > 
> > > > > --
> > > > > > -
> > 
> > > > > --
> > > > > > -
> > 
> > > > > --
> > > > > > -
> > 
> > > > > --
> > > > > > 
> > > > > > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf   
> > > > > >
> > > > >
> > 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
Some additional logs from gverify-mastermnt.log & gverify-slavemnt.log:

[2019-03-25 12:13:23.819665] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-vol_75a5fd373d88ba687f591f3353fa05cf-client-2: error returned while
attempting to connect to host:(null), port:0
[2019-03-25 12:13:23.819814] W [dict.c:923:str_to_data]
(-->/usr/lib64/glusterfs/4.1.7/xlator/protocol/client.so(+0x40c0a)
[0x7f3eb4d86c0a] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
[0x7f3ebc334266] -->/lib64/libglusterfs.so.0(str_to_data+0x91)
[0x7f3ebc330ea1] ) 0-dict: *value is NULL [Invalid argument]*


Any idea how to fix this? If there is a patch file I can try, please share it.

thanks,
Maurya


On Mon, Mar 25, 2019 at 3:37 PM Maurya M  wrote:

> ran this command :  ssh -p  -i
> /var/lib/glusterd/geo-replication/secret.pem root@gluster
> volume info --xml
>
> attaching the output.
>
>
>
> On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:
>
>> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
>> root@ gluster volume info --xml` and parsing its output.
>> Please try to to run the command from the same node and let us know the
>> output.
>>
>>
>> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
>> > Now the error is on the same line 860 : as highlighted below:
>> >
>> > [2019-03-25 06:11:52.376238] E
>> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
>> > Traceback (most recent call last):
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> > 311, in main
>> > func(args)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
>> > 50, in subcmd_monitor
>> > return monitor.monitor(local, remote)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 427, in monitor
>> > return Monitor().multiplex(*distribute(local, remote))
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 386, in distribute
>> > svol = Volinfo(slave.volume, "localhost", prelude)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
>> > 860, in __init__
>> > vi = XET.fromstring(vix)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
>> > XML
>> > parser.feed(text)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
>> > feed
>> > self._raiseerror(v)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
>> > _raiseerror
>> > raise err
>> > ParseError: syntax error: line 1, column 0
>> >
>> >
>> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
>> > > Sorry my bad, had put the print line to debug, i am using gluster
>> > > 4.1.7, will remove the print line.
>> > >
>> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
>> > > wrote:
>> > > > Below print statement looks wrong. Latest Glusterfs code doesn't
>> > > > have
>> > > > this print statement. Please let us know which version of
>> > > > glusterfs you
>> > > > are using.
>> > > >
>> > > >
>> > > > ```
>> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
>> > > > line
>> > > > 860, in __init__
>> > > > print "debug varible " %vix
>> > > > ```
>> > > >
>> > > > As a workaround, edit that file and comment the print line and
>> > > > test the
>> > > > geo-rep config command.
>> > > >
>> > > >
>> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
>> > > > > hi Aravinda,
>> > > > >  had the session created using : create ssh-port  push-pem
>> > > > and
>> > > > > also the :
>> > > > >
>> > > > > gluster volume geo-replication
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
>> > > > port
>> > > > > 
>> > > > >
>> > > > > hitting this message:
>> > > > > geo-replication config-set failed for
>> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > geo-replication command failed
>> > > > >
>> > > > > Below is snap of status:
>> > > > >
>> > > > > [root@k8s-agentpool1-24779565-1
>> > > > >
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
>> > > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
>> > > > vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>> > > > >
>> > > > > MASTER NODE  MASTER VOL  MASTER
>> > > > > BRICK
>> > > >
>> > > > >SLAVE USERSLAVE
>> > > >
>> > > > >   SLAVE NODESTATUS CRAWL
>> > > > STATUS
>> > > > >   LAST_SYNCED
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > -
>> > > > --
>> > > > > 
>> > > > > 172.16.189.4 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
ran this command :  ssh -p  -i
/var/lib/glusterd/geo-replication/secret.pem root@gluster
volume info --xml

attaching the output.



On Mon, Mar 25, 2019 at 2:13 PM Aravinda  wrote:

> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
> root@ gluster volume info --xml` and parsing its output.
> Please try to to run the command from the same node and let us know the
> output.
>
>
> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> > Now the error is on the same line 860 : as highlighted below:
> >
> > [2019-03-25 06:11:52.376238] E
> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > Traceback (most recent call last):
> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> > 311, in main
> > func(args)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> > 50, in subcmd_monitor
> > return monitor.monitor(local, remote)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 427, in monitor
> > return Monitor().multiplex(*distribute(local, remote))
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 386, in distribute
> > svol = Volinfo(slave.volume, "localhost", prelude)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> > 860, in __init__
> > vi = XET.fromstring(vix)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
> > XML
> > parser.feed(text)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
> > feed
> > self._raiseerror(v)
> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
> > _raiseerror
> > raise err
> > ParseError: syntax error: line 1, column 0
> >
> >
> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
> > > Sorry my bad, had put the print line to debug, i am using gluster
> > > 4.1.7, will remove the print line.
> > >
> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > > wrote:
> > > > Below print statement looks wrong. Latest Glusterfs code doesn't
> > > > have
> > > > this print statement. Please let us know which version of
> > > > glusterfs you
> > > > are using.
> > > >
> > > >
> > > > ```
> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > > line
> > > > 860, in __init__
> > > > print "debug varible " %vix
> > > > ```
> > > >
> > > > As a workaround, edit that file and comment the print line and
> > > > test the
> > > > geo-rep config command.
> > > >
> > > >
> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > > hi Aravinda,
> > > > >  had the session created using : create ssh-port  push-pem
> > > > and
> > > > > also the :
> > > > >
> > > > > gluster volume geo-replication
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
> > > > port
> > > > > 
> > > > >
> > > > > hitting this message:
> > > > > geo-replication config-set failed for
> > > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > > geo-replication command failed
> > > > >
> > > > > Below is snap of status:
> > > > >
> > > > > [root@k8s-agentpool1-24779565-1
> > > > >
> > > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
> > > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > >
> > > > > MASTER NODE  MASTER VOL  MASTER
> > > > > BRICK
> > > >
> > > > >SLAVE USERSLAVE
> > > >
> > > > >   SLAVE NODESTATUS CRAWL
> > > > STATUS
> > > > >   LAST_SYNCED
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > -
> > > > --
> > > > > 
> > > > > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_
> > > > 116f
> > > > > b9427fb26f752d9ba8e45e183cb1/brickroot
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> > > >
> > > > >  CreatedN/A N/A
> > > > > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_
> > > > 266b
> > > > > b08f0d466d346f8c0b19569736fb/brickroot
> > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> > > >
> > > > >  CreatedN/A N/A
> > > > > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
> > > > >
> > > > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_
> > > > dfa4
> > > > > 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Aravinda
Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem 
root@ gluster volume info --xml` and parsing its output.
Please try to run the command from the same node and let us know the
output.
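
Roughly what gsyncd does with that output, as a simplified sketch (this is not the
actual gsyncd code, and <SLAVE_HOST> is a placeholder for your slave node):

```
# Simplified sketch: run the same command over ssh and parse the reply as XML.
# If the reply is empty, or is an error string instead of XML, ElementTree
# fails in the same way as the traceback quoted below.
import subprocess
import xml.etree.ElementTree as XET

cmd = ["ssh", "-i", "/var/lib/glusterd/geo-replication/secret.pem",
       "root@<SLAVE_HOST>", "gluster", "volume", "info", "--xml"]
out = subprocess.check_output(cmd)

vi = XET.fromstring(out)
print("parsed OK, root element: %s" % vi.tag)
```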


On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
> Now the error is on the same line 860 : as highlighted below:
> 
> [2019-03-25 06:11:52.376238] E
> [syncdutils(monitor):332:log_raise_exception] : FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> 311, in main
> func(args)
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> 50, in subcmd_monitor
> return monitor.monitor(local, remote)
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 427, in monitor
> return Monitor().multiplex(*distribute(local, remote))
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 386, in distribute
> svol = Volinfo(slave.volume, "localhost", prelude)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 860, in __init__
> vi = XET.fromstring(vix)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in
> XML
> parser.feed(text)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in
> feed
> self._raiseerror(v)
>   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
> _raiseerror
> raise err
> ParseError: syntax error: line 1, column 0
> 
> 
> On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:
> > Sorry my bad, had put the print line to debug, i am using gluster
> > 4.1.7, will remove the print line.
> > 
> > On Mon, Mar 25, 2019 at 10:52 AM Aravinda 
> > wrote:
> > > Below print statement looks wrong. Latest Glusterfs code doesn't
> > > have
> > > this print statement. Please let us know which version of
> > > glusterfs you
> > > are using.
> > > 
> > > 
> > > ```
> > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py",
> > > line
> > > 860, in __init__
> > > print "debug varible " %vix
> > > ```
> > > 
> > > As a workaround, edit that file and comment the print line and
> > > test the
> > > geo-rep config command.
> > > 
> > > 
> > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > > > hi Aravinda,
> > > >  had the session created using : create ssh-port  push-pem
> > > and
> > > > also the :
> > > > 
> > > > gluster volume geo-replication
> > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-
> > > port
> > > > 
> > > > 
> > > > hitting this message:
> > > > geo-replication config-set failed for
> > > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > > > geo-replication command failed
> > > > 
> > > > Below is snap of status:
> > > > 
> > > > [root@k8s-agentpool1-24779565-1
> > > >
> > > vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a73057
> > > 8e45ed9d51b9a80df6c33f]# gluster volume geo-replication
> > > vol_75a5fd373d88ba687f591f3353fa05cf
> > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > 
> > > > MASTER NODE  MASTER VOL  MASTER
> > > > BRICK 
> > >  
> > > >SLAVE USERSLAVE 
> > >  
> > > >   SLAVE NODESTATUS CRAWL
> > > STATUS 
> > > >   LAST_SYNCED
> > > > -
> > > --
> > > > -
> > > --
> > > > -
> > > --
> > > > -
> > > --
> > > > 
> > > > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_
> > > 116f
> > > > b9427fb26f752d9ba8e45e183cb1/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_
> > > 266b
> > > > b08f0d466d346f8c0b19569736fb/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf   
> > > >
> > > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_
> > > dfa4
> > > > 4c9380cdedac708e27e2c2a443a0/brickroot 
> > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
> > >
> > > >  CreatedN/A N/A
> > > > 
> > > > any ideas ? where can find logs for the failed commands check
> > > in
> > > > gysncd.log , the trace is as below:
> > > > 
> > > > 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-25 Thread Maurya M
Now the error is on the same line 860, as highlighted below:

[2019-03-25 06:11:52.376238] E
[syncdutils(monitor):332:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in
main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in
subcmd_monitor
return monitor.monitor(local, remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in
monitor
return Monitor().multiplex(*distribute(local, remote))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in
distribute
svol = Volinfo(slave.volume, "localhost", prelude)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860,
in __init__
  *  vi = XET.fromstring(vix)*
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
parser.feed(text)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
self._raiseerror(v)
  File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in
_raiseerror
raise err
ParseError: syntax error: line 1, column 0
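
That "syntax error: line 1, column 0" indicates the text handed to XET.fromstring()
is not XML at all, for example an ssh or bash error string instead of the volume
info output; a trivial reproduction:

```
# Feeding non-XML text to the parser fails at line 1, column 0,
# which is the same error as in the traceback above.
import xml.etree.ElementTree as XET
XET.fromstring("bash: gluster: command not found")
# -> xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
```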


On Mon, Mar 25, 2019 at 11:29 AM Maurya M  wrote:

> Sorry my bad, had put the print line to debug, i am using gluster 4.1.7,
> will remove the print line.
>
> On Mon, Mar 25, 2019 at 10:52 AM Aravinda  wrote:
>
>> Below print statement looks wrong. Latest Glusterfs code doesn't have
>> this print statement. Please let us know which version of glusterfs you
>> are using.
>>
>>
>> ```
>>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
>> 860, in __init__
>> print "debug varible " %vix
>> ```
>>
>> As a workaround, edit that file and comment the print line and test the
>> geo-rep config command.
>>
>>
>> On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
>> > hi Aravinda,
>> >  had the session created using : create ssh-port  push-pem and
>> > also the :
>> >
>> > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
>> > 
>> >
>> > hitting this message:
>> > geo-replication config-set failed for
>> > vol_75a5fd373d88ba687f591f3353fa05cf
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > geo-replication command failed
>> >
>> > Below is snap of status:
>> >
>> > [root@k8s-agentpool1-24779565-1
>> >
>> vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
>> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
>> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>> >
>> > MASTER NODE  MASTER VOL  MASTER
>> > BRICK
>> >SLAVE USERSLAVE
>> >   SLAVE NODESTATUS CRAWL STATUS
>> >   LAST_SYNCED
>> > ---
>> > ---
>> > ---
>> > ---
>> > 
>> > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
>> > b9427fb26f752d9ba8e45e183cb1/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
>> > b08f0d466d346f8c0b19569736fb/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
>> > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
>> > 4c9380cdedac708e27e2c2a443a0/brickroot
>> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
>> >  CreatedN/A N/A
>> >
>> > any ideas ? where can find logs for the failed commands check in
>> > gysncd.log , the trace is as below:
>> >
>> > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] :
>> > Using session config file  path=/var/lib/glusterd/geo-
>> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
>> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > [2019-03-25 04:04:42.387192] E
>> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
>> > Traceback (most recent call last):
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> > 311, in main
>> > func(args)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
>> > 50, in subcmd_monitor
>> > return monitor.monitor(local, remote)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> > 427, in monitor
>> > return Monitor().multiplex(*distribute(local, remote))
>> >   File 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Maurya M
Sorry, my bad; I had put in the print line to debug. I am using gluster 4.1.7
and will remove the print line.

On Mon, Mar 25, 2019 at 10:52 AM Aravinda  wrote:

> Below print statement looks wrong. Latest Glusterfs code doesn't have
> this print statement. Please let us know which version of glusterfs you
> are using.
>
>
> ```
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 860, in __init__
> print "debug varible " %vix
> ```
>
> As a workaround, edit that file and comment the print line and test the
> geo-rep config command.
>
>
> On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> > hi Aravinda,
> >  had the session created using : create ssh-port  push-pem and
> > also the :
> >
> > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
> > 
> >
> > hitting this message:
> > geo-replication config-set failed for
> > vol_75a5fd373d88ba687f591f3353fa05cf
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> > geo-replication command failed
> >
> > Below is snap of status:
> >
> > [root@k8s-agentpool1-24779565-1
> >
> vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> >
> > MASTER NODE  MASTER VOL  MASTER
> > BRICK
> >SLAVE USERSLAVE
> >   SLAVE NODESTATUS CRAWL STATUS
> >   LAST_SYNCED
> > ---
> > ---
> > ---
> > ---
> > 
> > 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
> > b9427fb26f752d9ba8e45e183cb1/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> >  CreatedN/A N/A
> > 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
> > b08f0d466d346f8c0b19569736fb/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> >  CreatedN/A N/A
> > 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
> > /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
> > 4c9380cdedac708e27e2c2a443a0/brickroot
> > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A
> >  CreatedN/A N/A
> >
> > any ideas ? where can find logs for the failed commands check in
> > gysncd.log , the trace is as below:
> >
> > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] :
> > Using session config file  path=/var/lib/glusterd/geo-
> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > [2019-03-25 04:04:42.387192] E
> > [syncdutils(monitor):332:log_raise_exception] : FAIL:
> > Traceback (most recent call last):
> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> > 311, in main
> > func(args)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> > 50, in subcmd_monitor
> > return monitor.monitor(local, remote)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 427, in monitor
> > return Monitor().multiplex(*distribute(local, remote))
> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> > 370, in distribute
> > mvol = Volinfo(master.volume, master.host)
> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> > 860, in __init__
> > print "debug varible " %vix
> > TypeError: not all arguments converted during string formatting
> > [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] :
> > Using session config file   path=/var/lib/glusterd/geo-
> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] : Using
> > session config filepath=/var/lib/glusterd/geo-
> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] :
> > Using session config file   path=/var/lib/glusterd/geo-
> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] :
> > Using session config file   path=/var/lib/glusterd/geo-
> > replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> > 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > [2019-03-25 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Aravinda
The print statement below looks wrong; the latest Glusterfs code doesn't have
this print statement. Please let us know which version of glusterfs you
are using.


```
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
860, in __init__
print "debug varible " %vix
```

As a workaround, edit that file, comment out the print line, and test the
geo-rep config command.
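
In syncdutils.py the edit would look roughly like this (the parse line is the one
that shows up at line 860 in the other tracebacks in this thread; the print fails
because the format string has no %s placeholder, hence the "not all arguments
converted during string formatting" TypeError):

```
# Around line 860 of /usr/libexec/glusterfs/python/syncdaemon/syncdutils.py:
# comment out the stray debug print and keep the original parse line.
#
# print "debug varible " %vix
vi = XET.fromstring(vix)
```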


On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> hi Aravinda,
>  had the session created using : create ssh-port  push-pem and
> also the :
> 
> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
> 
> 
> hitting this message:
> geo-replication config-set failed for
> vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> geo-replication command failed
> 
> Below is snap of status:
> 
> [root@k8s-agentpool1-24779565-1
> vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
>  gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> 
> MASTER NODE  MASTER VOL  MASTER
> BRICK   
>SLAVE USERSLAVE   
>   SLAVE NODESTATUS CRAWL STATUS 
>   LAST_SYNCED
> ---
> ---
> ---
> ---
> 
> 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116f
> b9427fb26f752d9ba8e45e183cb1/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
>  CreatedN/A N/A
> 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266b
> b08f0d466d346f8c0b19569736fb/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
>  CreatedN/A N/A
> 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf   
> /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa4
> 4c9380cdedac708e27e2c2a443a0/brickroot 
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33fN/A 
>  CreatedN/A N/A
> 
> any ideas ? where can find logs for the failed commands check in
> gysncd.log , the trace is as below:
> 
> [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] :
> Using session config file  path=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:42.387192] E
> [syncdutils(monitor):332:log_raise_exception] : FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
> 311, in main
> func(args)
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
> 50, in subcmd_monitor
> return monitor.monitor(local, remote)
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 427, in monitor
> return Monitor().multiplex(*distribute(local, remote))
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line
> 370, in distribute
> mvol = Volinfo(master.volume, master.host)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line
> 860, in __init__
> print "debug varible " %vix
> TypeError: not all arguments converted during string formatting
> [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] :
> Using session config file   path=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] : Using
> session config filepath=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] :
> Using session config file   path=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] :
> Using session config file   path=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 83a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] :
> Using session config file   path=/var/lib/glusterd/geo-
> replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e7
> 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Maurya M
Tried even this; it did not work:

[root@k8s-agentpool1-24779565-1
vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f* config ssh-command
'ssh -p '*
geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
geo-replication command failed

On Mon, Mar 25, 2019 at 9:46 AM Maurya M  wrote:

> hi Aravinda,
>  had the session created using : create ssh-port  push-pem and also
> the :
>
> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 
>
> hitting this message:
> geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> geo-replication command failed
>
> Below is snap of status:
>
> [root@k8s-agentpool1-24779565-1
> vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>
> MASTER NODE  MASTER VOL  MASTER BRICK
>
>  SLAVE USERSLAVE
>   SLAVE NODESTATUS CRAWL STATUSLAST_SYNCED
>
> 
> 172.16.189.4 vol_75a5fd373d88ba687f591f3353fa05cf
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick
>   root  172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> N/A   CreatedN/A N/A
> 172.16.189.35vol_75a5fd373d88ba687f591f3353fa05cf
> /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick
>   root  172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> N/A   CreatedN/A N/A
> 172.16.189.66vol_75a5fd373d88ba687f591f3353fa05cf
> /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick
>   root  172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> N/A   CreatedN/A N/A
>
> any ideas ? where can find logs for the failed commands check in
> gysncd.log , the trace is as below:
>
> [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] : Using
> session config file
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:42.387192] E
> [syncdutils(monitor):332:log_raise_exception] : FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in
> main
> func(args)
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in
> subcmd_monitor
> return monitor.monitor(local, remote)
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in
> monitor
> return Monitor().multiplex(*distribute(local, remote))
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in
> distribute
> mvol = Volinfo(master.volume, master.host)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860,
> in __init__
> print "debug varible " %vix
> TypeError: not all arguments converted during string formatting
> [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] : Using
> session config file
>  
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] : Using
> session config file
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] : Using
> session config file
>  
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] : Using
> session config file
>  
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] : Using
> session config file
>  
> path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] : Using
> session config file
>  

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Maurya M
Hi Aravinda,
I created the session using `create ssh-port  push-pem` and also ran the
following:

gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 

hitting this message:
geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
geo-replication command failed

Below is snap of status:

[root@k8s-agentpool1-24779565-1
vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]#
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status

MASTER VOL:   vol_75a5fd373d88ba687f591f3353fa05cf
SLAVE:        172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    (SLAVE USER: root)

MASTER NODE      MASTER BRICK                                                                                              SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
172.16.189.4     /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick   N/A           Created    N/A             N/A
172.16.189.35    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick   N/A           Created    N/A             N/A
172.16.189.66    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick   N/A           Created    N/A             N/A

Any ideas? Where can I find logs for the failed commands? Checking in
gsyncd.log, the trace is as below:

[2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] : Using
session config file
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:04:42.387192] E
[syncdutils(monitor):332:log_raise_exception] : FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in
main
func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in
subcmd_monitor
return monitor.monitor(local, remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in
monitor
return Monitor().multiplex(*distribute(local, remote))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in
distribute
mvol = Volinfo(master.volume, master.host)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860,
in __init__
print "debug varible " %vix
TypeError: not all arguments converted during string formatting
[2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] : Using
session config file
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] : Using
session config file
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] : Using
session config file
 
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
[2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] : Using
session config 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Aravinda
Use `ssh-port <port>` while creating the Geo-rep session

Ref: 
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session

And set the ssh-port option before start.

```
gluster volume geo-replication <master-volume> \
    [<slave-user>@]<slave-host>::<slave-volume> config ssh-port <port>
```

-- 
regards
Aravinda
http://aravindavk.in


On Sun, 2019-03-24 at 17:13 +0530, Maurya M wrote:
> did all the suggestion as mentioned in the log trace , have another
> setup using root user , but there i have issue on the ssh command as
> i am unable to change the ssh port to use default 22, but my servers
> (azure aks engine)  are configure to using  where i am unable to
> change the ports , restart of ssh service giving me error!
> 
> Is this syntax correct to config the ssh-command:
> gluster volume geo-replication vol_041afbc53746053368a1840607636e97
> xxx.xx.xxx.xx::vol_a5aee81a873c043c99a938adcb5b5781 config ssh-
> command '/usr/sbin/sshd -D  -p '
> 
> On Sun, Mar 24, 2019 at 4:38 PM Maurya M  wrote:
> > Did give the persmission on both "/var/log/glusterfs/" &
> > "/var/lib/glusterd/" too, but seems the directory where i mounted
> > using heketi is having issues:
> > 
> > [2019-03-22 09:48:21.546308] E [syncdutils(worker
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):305:log_raise_exception]
> > : connection to peer is broken
> > [2019-03-22 09:48:21.546662] E [syncdutils(worker
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):309:log_raise_exception]
> > : getting "No such file or directory"errors is most likely due
> > to MISCONFIGURATION, please remove all the public keys added by
> > geo-replication from authorized_keys file in slave nodes and run
> > Geo-replication create command again.
> > [2019-03-22 09:48:21.546736] E [syncdutils(worker
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):316:log_raise_exception]
> > : If `gsec_create container` was used, then run `gluster
> > volume geo-replication 
> > [@]:: config remote-gsyncd
> >  (Example GSYNCD_PATH:
> > `/usr/libexec/glusterfs/gsyncd`)
> > [2019-03-22 09:48:21.546858] E [syncdutils(worker
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):801:errlog] Popen: command
> > returned errorcmd=ssh -oPasswordAuthentication=no
> > -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-
> > replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-
> > aux-ssh-OaPGc3/c784230c9648efa4d529975bd779c551.sock 
> > azureuser@172.16.201.35 /nonexistent/gsyncd slave
> > vol_041afbc53746053368a1840607636e97 azureuser@172.16.201.35::vol_a
> > 5aee81a873c043c99a938adcb5b5781 --master-node 172.16.189.4 --
> > master-node-id dd4efc35-4b86-4901-9c00-483032614c35 --master-brick
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick --local-node 172.16.201.35 --
> > local-node-id 7eb0a2b6-c4d6-41b1-a346-0638dbf8d779 --slave-timeout
> > 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-
> > gluster-command-dir /usr/sbin  error=127
> > [2019-03-22 09:48:21.546977] E [syncdutils(worker
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):805:logerr] Popen: ssh> bash:
> > /nonexistent/gsyncd: No such file or directory
> > [2019-03-22 09:48:21.565583] I [repce(agent
> > /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3
> > eab2394433f02f5617012d4ae3c28f/brick):80:service_loop] RepceServer:
> > terminating on reaching EOF.
> > [2019-03-22 09:48:21.565745] I [monitor(monitor):266:monitor]
> > Monitor: worker died before establishing connection  
> > brick=/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/br
> > ick_b3eab2394433f02f5617012d4ae3c28f/brick
> > [2019-03-22 09:48:21.579195] I
> > [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker
> > Status Change status=Faulty
> > 
> > On Fri, Mar 22, 2019 at 10:23 PM Sunny Kumar 
> > wrote:
> > > Hi Maurya,
> > > 
> > > Looks like hook script is failed to set permissions for azureuser
> > > on
> > > "/var/log/glusterfs".
> > > You can assign permission manually for directory then it will
> > > work.
> > > 
> > > -Sunny
> > > 
> > > On Fri, Mar 22, 2019 at 2:07 PM Maurya M 
> > > wrote:
> > > >
> > > > hi Sunny,
> > > >  Passwordless ssh to :
> > > >
> > > > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> > > /var/lib/glusterd/geo-replication/secret.pem -p 22 
> > > azureuser@172.16.201.35
> > > >
> > > > is login, but when the whole command is run getting permission
> > > issues again::
> > > >
> > > > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> > > /var/lib/glusterd/geo-replication/secret.pem -p 22 
> > > azureuser@172.16.201.35 gluster --xml 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Maurya M
Did all the suggestions as mentioned in the log trace. I have another setup
using the root user, but there I have an issue with the ssh command, as I am
unable to change the ssh port back to the default 22: my servers (azure aks
engine)  are configured to use  where I am unable to change the ports, and a
restart of the ssh service gives me an error!

Is this the correct syntax to configure the ssh-command:
gluster volume geo-replication vol_041afbc53746053368a1840607636e97
xxx.xx.xxx.xx::vol_a5aee81a873c043c99a938adcb5b5781 config ssh-command
'/usr/sbin/sshd -D  -p '

On Sun, Mar 24, 2019 at 4:38 PM Maurya M  wrote:

> Did give the persmission on both "/var/log/glusterfs/" &
> "/var/lib/glusterd/" too, but seems the directory where i mounted using
> heketi is having issues:
>
> [2019-03-22 09:48:21.546308] E [syncdutils(worker
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):305:log_raise_exception]
> : connection to peer is broken
>
> [2019-03-22 09:48:21.546662] E [syncdutils(worker
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):309:log_raise_exception]
> : getting "No such file or directory"errors is most likely due to
> MISCONFIGURATION, please remove all the public keys added by
> geo-replication from authorized_keys file in slave nodes and run
> Geo-replication create command again.
>
> [2019-03-22 09:48:21.546736] E [syncdutils(worker
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):316:log_raise_exception]
> : If `gsec_create container` was used, then run `gluster volume
> geo-replication  [@]:: config
> remote-gsyncd  (Example GSYNCD_PATH:
> `/usr/libexec/glusterfs/gsyncd`)
>
> [2019-03-22 09:48:21.546858] E [syncdutils(worker
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):801:errlog]
> Popen: command returned errorcmd=ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> -p 22 -oControlMaster=auto -S
> /tmp/gsyncd-aux-ssh-OaPGc3/c784230c9648efa4d529975bd779c551.sock
> azureuser@172.16.201.35 /nonexistent/gsyncd slave
> vol_041afbc53746053368a1840607636e97
> azureuser@172.16.201.35::vol_a5aee81a873c043c99a938adcb5b5781
> --master-node 172.16.189.4 --master-node-id
> dd4efc35-4b86-4901-9c00-483032614c35 --master-brick
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick
> --local-node 172.16.201.35 --local-node-id
> 7eb0a2b6-c4d6-41b1-a346-0638dbf8d779 --slave-timeout 120 --slave-log-level
> INFO --slave-gluster-log-level INFO --slave-gluster-command-dir
> /usr/sbin  error=127
>
> [2019-03-22 09:48:21.546977] E [syncdutils(worker
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):805:logerr]
> Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory
>
> [2019-03-22 09:48:21.565583] I [repce(agent
> /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):80:service_loop]
> RepceServer: terminating on reaching EOF.
>
> [2019-03-22 09:48:21.565745] I [monitor(monitor):266:monitor] Monitor:
> worker died before establishing connection
> brick=/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick
>
> [2019-03-22 09:48:21.579195] I
> [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status
> Change status=Faulty
>
> On Fri, Mar 22, 2019 at 10:23 PM Sunny Kumar  wrote:
>
>> Hi Maurya,
>>
>> Looks like hook script is failed to set permissions for azureuser on
>> "/var/log/glusterfs".
>> You can assign permission manually for directory then it will work.
>>
>> -Sunny
>>
>> On Fri, Mar 22, 2019 at 2:07 PM Maurya M  wrote:
>> >
>> > hi Sunny,
>> >  Passwordless ssh to :
>> >
>> > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
>> /var/lib/glusterd/geo-replication/secret.pem -p 22
>> azureuser@172.16.201.35
>> >
>> > is login, but when the whole command is run getting permission issues
>> again::
>> >
>> > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
>> /var/lib/glusterd/geo-replication/secret.pem -p 22
>> azureuser@172.16.201.35 gluster --xml --remote-host=localhost volume
>> info vol_a5aee81a873c043c99a938adcb5b5781 -v
>> > ERROR: failed to create logfile "/var/log/glusterfs/cli.log"
>> (Permission denied)
>> > ERROR: failed to open logfile /var/log/glusterfs/cli.log
>> >
>> > any idea here ?
>> >
>> > thanks,
>> > Maurya
>> >
>> >
>> > On Thu, Mar 21, 2019 at 2:43 PM Maurya M  wrote:
>> >>
>> >> hi Sunny,
>> >>  i did use the [1] link for the setup, when i encountered this error
>> during ssh-copy-id : (so setup the passwordless ssh, by manually copied the
>> private/ public keys to all the nodes , both master & slave)
>> >>
>> >> [root@k8s-agentpool1-24779565-1 ~]# 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-24 Thread Maurya M
Did give the permissions on both "/var/log/glusterfs/" &
"/var/lib/glusterd/" too, but it seems the directory where I mounted using
heketi is having issues:

[2019-03-22 09:48:21.546308] E [syncdutils(worker
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):305:log_raise_exception]
: connection to peer is broken

[2019-03-22 09:48:21.546662] E [syncdutils(worker
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):309:log_raise_exception]
: getting "No such file or directory"errors is most likely due to
MISCONFIGURATION, please remove all the public keys added by
geo-replication from authorized_keys file in slave nodes and run
Geo-replication create command again.

[2019-03-22 09:48:21.546736] E [syncdutils(worker
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):316:log_raise_exception]
: If `gsec_create container` was used, then run `gluster volume
geo-replication  [@]:: config
remote-gsyncd  (Example GSYNCD_PATH:
`/usr/libexec/glusterfs/gsyncd`)

[2019-03-22 09:48:21.546858] E [syncdutils(worker
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):801:errlog]
Popen: command returned errorcmd=ssh -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
-p 22 -oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-OaPGc3/c784230c9648efa4d529975bd779c551.sock
azureuser@172.16.201.35 /nonexistent/gsyncd slave
vol_041afbc53746053368a1840607636e97
azureuser@172.16.201.35::vol_a5aee81a873c043c99a938adcb5b5781 --master-node
172.16.189.4 --master-node-id dd4efc35-4b86-4901-9c00-483032614c35
--master-brick
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick
--local-node 172.16.201.35 --local-node-id
7eb0a2b6-c4d6-41b1-a346-0638dbf8d779 --slave-timeout 120 --slave-log-level
INFO --slave-gluster-log-level INFO --slave-gluster-command-dir
/usr/sbin  error=127

[2019-03-22 09:48:21.546977] E [syncdutils(worker
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):805:logerr]
Popen: ssh> bash: /nonexistent/gsyncd: No such file or directory

[2019-03-22 09:48:21.565583] I [repce(agent
/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick):80:service_loop]
RepceServer: terminating on reaching EOF.

[2019-03-22 09:48:21.565745] I [monitor(monitor):266:monitor] Monitor:
worker died before establishing connection
brick=/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3eab2394433f02f5617012d4ae3c28f/brick

[2019-03-22 09:48:21.579195] I
[gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
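
(If I am reading the log's own hint right, the command it suggests would be
roughly the one below; just a sketch, with the gsyncd path taken from the
log's example:)

```
gluster volume geo-replication vol_041afbc53746053368a1840607636e97 \
    azureuser@172.16.201.35::vol_a5aee81a873c043c99a938adcb5b5781 \
    config remote-gsyncd /usr/libexec/glusterfs/gsyncd
```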

On Fri, Mar 22, 2019 at 10:23 PM Sunny Kumar  wrote:

> Hi Maurya,
>
> Looks like hook script is failed to set permissions for azureuser on
> "/var/log/glusterfs".
> You can assign permission manually for directory then it will work.
>
> -Sunny
>
> On Fri, Mar 22, 2019 at 2:07 PM Maurya M  wrote:
> >
> > hi Sunny,
> >  Passwordless ssh to :
> >
> > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35
> >
> > is login, but when the whole command is run getting permission issues
> again::
> >
> > ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35
> gluster --xml --remote-host=localhost volume info
> vol_a5aee81a873c043c99a938adcb5b5781 -v
> > ERROR: failed to create logfile "/var/log/glusterfs/cli.log" (Permission
> denied)
> > ERROR: failed to open logfile /var/log/glusterfs/cli.log
> >
> > any idea here ?
> >
> > thanks,
> > Maurya
> >
> >
> > On Thu, Mar 21, 2019 at 2:43 PM Maurya M  wrote:
> >>
> >> hi Sunny,
> >>  i did use the [1] link for the setup, when i encountered this error
> during ssh-copy-id : (so setup the passwordless ssh, by manually copied the
> private/ public keys to all the nodes , both master & slave)
> >>
> >> [root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id geou...@xxx.xx.xxx.x
> >> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
> "/root/.ssh/id_rsa.pub"
> >> The authenticity of host ' xxx.xx.xxx.x   ( xxx.xx.xxx.x  )' can't be
> established.
> >> ECDSA key fingerprint is
> SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.
> >> ECDSA key fingerprint is
> MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.
> >> Are you sure you want to continue connecting (yes/no)? yes
> >> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s),
> to filter out any that are already installed
> >> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you
> are prompted now it is to install the new keys
> >> Permission denied (publickey).
> >>
> >> To start afresh 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-22 Thread Sunny Kumar
Hi Maurya,

Looks like the hook script failed to set permissions for azureuser on
"/var/log/glusterfs".
You can assign the permissions manually for that directory and then it will work.
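
Something along these lines, as a rough sketch (the group name here is just an
example; use whatever group your non-root/mountbroker setup already uses):

```
# give the non-root geo-rep user group access to the gluster log directory
groupadd -f geogroup
usermod -a -G geogroup azureuser
chgrp -R geogroup /var/log/glusterfs
chmod -R g+rwX /var/log/glusterfs
```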

-Sunny

On Fri, Mar 22, 2019 at 2:07 PM Maurya M  wrote:
>
> hi Sunny,
>  Passwordless ssh to :
>
> ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35
>
> is login, but when the whole command is run getting permission issues again::
>
> ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35 
> gluster --xml --remote-host=localhost volume info 
> vol_a5aee81a873c043c99a938adcb5b5781 -v
> ERROR: failed to create logfile "/var/log/glusterfs/cli.log" (Permission 
> denied)
> ERROR: failed to open logfile /var/log/glusterfs/cli.log
>
> any idea here ?
>
> thanks,
> Maurya
>
>
> On Thu, Mar 21, 2019 at 2:43 PM Maurya M  wrote:
>>
>> hi Sunny,
>>  i did use the [1] link for the setup, when i encountered this error during 
>> ssh-copy-id : (so setup the passwordless ssh, by manually copied the 
>> private/ public keys to all the nodes , both master & slave)
>>
>> [root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id geou...@xxx.xx.xxx.x
>> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: 
>> "/root/.ssh/id_rsa.pub"
>> The authenticity of host ' xxx.xx.xxx.x   ( xxx.xx.xxx.x  )' can't be 
>> established.
>> ECDSA key fingerprint is SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.
>> ECDSA key fingerprint is MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.
>> Are you sure you want to continue connecting (yes/no)? yes
>> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to 
>> filter out any that are already installed
>> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are 
>> prompted now it is to install the new keys
>> Permission denied (publickey).
>>
>> To start afresh what all needs to teardown / delete, do we have any script 
>> for it ? where all the pem keys do i need to delete?
>>
>> thanks,
>> Maurya
>>
>> On Thu, Mar 21, 2019 at 2:12 PM Sunny Kumar  wrote:
>>>
>>> Hey you can start a fresh I think you are not following proper setup steps.
>>>
>>> Please follow these steps [1] to create geo-rep session, you can
>>> delete the old one and do a fresh start. Or alternative you can use
>>> this tool[2] to setup geo-rep.
>>>
>>>
>>> [1]. 
>>> https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
>>> [2]. http://aravindavk.in/blog/gluster-georep-tools/
>>>
>>>
>>> /Sunny
>>>
>>> On Thu, Mar 21, 2019 at 11:28 AM Maurya M  wrote:
>>> >
>>> > Hi Sunil,
>>> >  I did run the on the slave node :
>>> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser 
>>> > vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
>>> > getting this message "/home/azureuser/common_secret.pem.pub not present. 
>>> > Please run geo-replication command on master with push-pem option to 
>>> > generate the file"
>>> >
>>> > So went back and created the session again, no change, so manually copied 
>>> > the common_secret.pem.pub to /home/azureuser/ but still the 
>>> > set_geo_rep_pem_keys.sh is looking the pem file in different name :  
>>> > COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub , 
>>> > change the name of pem , ran the command again :
>>> >
>>> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser 
>>> > vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
>>> > Successfully copied file.
>>> > Command executed successfully.
>>> >
>>> >
>>> > - went back and created the session , start the geo-replication , still 
>>> > seeing the  same error in logs. Any ideas ?
>>> >
>>> > thanks,
>>> > Maurya
>>> >
>>> >
>>> >
>>> > On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar  wrote:
>>> >>
>>> >> Hi Maurya,
>>> >>
>>> >> I guess you missed last trick to distribute keys in slave node. I see
>>> >> this is non-root geo-rep setup so please try this:
>>> >>
>>> >>
>>> >> Run the following command as root in any one of Slave node.
>>> >>
>>> >> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh  
>>> >>  
>>> >>
>>> >> - Sunny
>>> >>
>>> >> On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
>>> >> >
>>> >> > Hi all,
>>> >> >  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for 
>>> >> > geo-replication, but once have the geo-replication configure the 
>>> >> > status is always on "Created',
>>> >> > even after have force start the session.
>>> >> >
>>> >> > On close inspect of the logs on the master node seeing this error:
>>> >> >
>>> >> > "E [syncdutils(monitor):801:errlog] Popen: command returned error   
>>> >> > cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
>>> >> > /var/lib/glusterd/geo-replication/secret.pem -p 22 
>>> >> > azureu...@x...xxx. gluster --xml --remote-host=localhost 
>>> >> > volume info 

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-22 Thread Maurya M
hi Sunny,
 Passwordless ssh to :

ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35

logs in, but when the whole command is run I am getting permission issues
again:

ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 azureuser@172.16.201.35
gluster --xml --remote-host=localhost volume info
vol_a5aee81a873c043c99a938adcb5b5781 -v
ERROR: failed to create logfile "/var/log/glusterfs/cli.log" (Permission
denied)
ERROR: failed to open logfile /var/log/glusterfs/cli.log

Any idea here?

thanks,
Maurya


On Thu, Mar 21, 2019 at 2:43 PM Maurya M  wrote:

> hi Sunny,
>  i did use the [1] link for the setup, when i encountered this error
> during ssh-copy-id : (so setup the passwordless ssh, by manually copied the
> private/ public keys to all the nodes , both master & slave)
>
> [root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id geou...@xxx.xx.xxx.x
> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
> "/root/.ssh/id_rsa.pub"
> The authenticity of host ' xxx.xx.xxx.x   ( xxx.xx.xxx.x  )' can't be
> established.
> ECDSA key fingerprint is
> SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.
> ECDSA key fingerprint is
> MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.
> Are you sure you want to continue connecting (yes/no)? yes
> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to
> filter out any that are already installed
> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are
> prompted now it is to install the new keys
> Permission denied (publickey).
>
> To start afresh what all needs to teardown / delete, do we have any script
> for it ? where all the pem keys do i need to delete?
>
> thanks,
> Maurya
>
> On Thu, Mar 21, 2019 at 2:12 PM Sunny Kumar  wrote:
>
>> Hey you can start a fresh I think you are not following proper setup
>> steps.
>>
>> Please follow these steps [1] to create geo-rep session, you can
>> delete the old one and do a fresh start. Or alternative you can use
>> this tool[2] to setup geo-rep.
>>
>>
>> [1].
>> https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
>> [2]. http://aravindavk.in/blog/gluster-georep-tools/
>>
>>
>> /Sunny
>>
>> On Thu, Mar 21, 2019 at 11:28 AM Maurya M  wrote:
>> >
>> > Hi Sunil,
>> >  I did run the on the slave node :
>> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
>> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
>> > getting this message "/home/azureuser/common_secret.pem.pub not
>> present. Please run geo-replication command on master with push-pem option
>> to generate the file"
>> >
>> > So went back and created the session again, no change, so manually
>> copied the common_secret.pem.pub to /home/azureuser/ but still the
>> set_geo_rep_pem_keys.sh is looking the pem file in different name :
>> COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub ,
>> change the name of pem , ran the command again :
>> >
>> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
>> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
>> > Successfully copied file.
>> > Command executed successfully.
>> >
>> >
>> > - went back and created the session , start the geo-replication , still
>> seeing the  same error in logs. Any ideas ?
>> >
>> > thanks,
>> > Maurya
>> >
>> >
>> >
>> > On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar 
>> wrote:
>> >>
>> >> Hi Maurya,
>> >>
>> >> I guess you missed last trick to distribute keys in slave node. I see
>> >> this is non-root geo-rep setup so please try this:
>> >>
>> >>
>> >> Run the following command as root in any one of Slave node.
>> >>
>> >> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh  
>> >>  
>> >>
>> >> - Sunny
>> >>
>> >> On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
>> >> >
>> >> > Hi all,
>> >> >  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for
>> geo-replication, but once have the geo-replication configure the status is
>> always on "Created',
>> >> > even after have force start the session.
>> >> >
>> >> > On close inspect of the logs on the master node seeing this error:
>> >> >
>> >> > "E [syncdutils(monitor):801:errlog] Popen: command returned error
>>  cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
>> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureu...@x...xxx.
>> gluster --xml --remote-host=localhost volume info
>> vol_a5ae34341a873c043c99a938adcb5b5781  error=255"
>> >> >
>> >> > Any ideas what is issue?
>> >> >
>> >> > thanks,
>> >> > Maurya
>> >> >
>> >> > ___
>> >> > Gluster-users mailing list
>> >> > Gluster-users@gluster.org
>> >> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-21 Thread Maurya M
hi Sunny,
 I did use the [1] link for the setup, but encountered this error during
ssh-copy-id (so I set up passwordless ssh by manually copying the
private/public keys to all the nodes, both master & slave):

[root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id geou...@xxx.xx.xxx.x
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
"/root/.ssh/id_rsa.pub"
The authenticity of host ' xxx.xx.xxx.x   ( xxx.xx.xxx.x  )' can't be
established.
ECDSA key fingerprint is SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.
ECDSA key fingerprint is
MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to
filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are
prompted now it is to install the new keys
Permission denied (publickey).

To start afresh, what all needs to be torn down / deleted? Do we have any
script for it? And where are all the pem keys that I need to delete?

thanks,
Maurya

On Thu, Mar 21, 2019 at 2:12 PM Sunny Kumar  wrote:

> Hey you can start a fresh I think you are not following proper setup steps.
>
> Please follow these steps [1] to create geo-rep session, you can
> delete the old one and do a fresh start. Or alternative you can use
> this tool[2] to setup geo-rep.
>
>
> [1].
> https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> [2]. http://aravindavk.in/blog/gluster-georep-tools/
>
>
> /Sunny
>
> On Thu, Mar 21, 2019 at 11:28 AM Maurya M  wrote:
> >
> > Hi Sunil,
> >  I did run the on the slave node :
> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> > getting this message "/home/azureuser/common_secret.pem.pub not present.
> Please run geo-replication command on master with push-pem option to
> generate the file"
> >
> > So went back and created the session again, no change, so manually
> copied the common_secret.pem.pub to /home/azureuser/ but still the
> set_geo_rep_pem_keys.sh is looking the pem file in different name :
> COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub ,
> change the name of pem , ran the command again :
> >
> >  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> > Successfully copied file.
> > Command executed successfully.
> >
> >
> > - went back and created the session , start the geo-replication , still
> seeing the  same error in logs. Any ideas ?
> >
> > thanks,
> > Maurya
> >
> >
> >
> > On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar 
> wrote:
> >>
> >> Hi Maurya,
> >>
> >> I guess you missed last trick to distribute keys in slave node. I see
> >> this is non-root geo-rep setup so please try this:
> >>
> >>
> >> Run the following command as root in any one of Slave node.
> >>
> >> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh  
> >>  
> >>
> >> - Sunny
> >>
> >> On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
> >> >
> >> > Hi all,
> >> >  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for
> geo-replication, but once have the geo-replication configure the status is
> always on "Created',
> >> > even after have force start the session.
> >> >
> >> > On close inspect of the logs on the master node seeing this error:
> >> >
> >> > "E [syncdutils(monitor):801:errlog] Popen: command returned error
>  cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureu...@x...xxx.
> gluster --xml --remote-host=localhost volume info
> vol_a5ae34341a873c043c99a938adcb5b5781  error=255"
> >> >
> >> > Any ideas what is issue?
> >> >
> >> > thanks,
> >> > Maurya
> >> >
> >> > ___
> >> > Gluster-users mailing list
> >> > Gluster-users@gluster.org
> >> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-21 Thread Sunny Kumar
Hey, you can start afresh; I think you are not following the proper setup steps.

Please follow these steps [1] to create the geo-rep session; you can
delete the old one and do a fresh start. Alternatively, you can use
this tool [2] to set up geo-rep.


[1]. https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
[2]. http://aravindavk.in/blog/gluster-georep-tools/


/Sunny

On Thu, Mar 21, 2019 at 11:28 AM Maurya M  wrote:
>
> Hi Sunil,
>  I did run the on the slave node :
>  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser 
> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> getting this message "/home/azureuser/common_secret.pem.pub not present. 
> Please run geo-replication command on master with push-pem option to generate 
> the file"
>
> So went back and created the session again, no change, so manually copied the 
> common_secret.pem.pub to /home/azureuser/ but still the 
> set_geo_rep_pem_keys.sh is looking the pem file in different name :  
> COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub , 
> change the name of pem , ran the command again :
>
>  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser 
> vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
> Successfully copied file.
> Command executed successfully.
>
>
> - went back and created the session , start the geo-replication , still 
> seeing the  same error in logs. Any ideas ?
>
> thanks,
> Maurya
>
>
>
> On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar  wrote:
>>
>> Hi Maurya,
>>
>> I guess you missed last trick to distribute keys in slave node. I see
>> this is non-root geo-rep setup so please try this:
>>
>>
>> Run the following command as root in any one of Slave node.
>>
>> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh  
>>  
>>
>> - Sunny
>>
>> On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
>> >
>> > Hi all,
>> >  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for 
>> > geo-replication, but once have the geo-replication configure the status is 
>> > always on "Created',
>> > even after have force start the session.
>> >
>> > On close inspect of the logs on the master node seeing this error:
>> >
>> > "E [syncdutils(monitor):801:errlog] Popen: command returned error   
>> > cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
>> > /var/lib/glusterd/geo-replication/secret.pem -p 22 
>> > azureu...@x...xxx. gluster --xml --remote-host=localhost volume 
>> > info vol_a5ae34341a873c043c99a938adcb5b5781  error=255"
>> >
>> > Any ideas what is issue?
>> >
>> > thanks,
>> > Maurya
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-20 Thread Maurya M
Hi Sunil,
 I did run this on the slave node:
 /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
getting this message "/home/azureuser/common_secret.pem.pub not present.
Please run geo-replication command on master with push-pem option to
generate the file"

So I went back and created the session again, with no change, then manually
copied common_secret.pem.pub to /home/azureuser/, but set_geo_rep_pem_keys.sh
is looking for the pem file under a different name
(COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub), so I
changed the name of the pem file and ran the command again:

 /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser
vol_041afbc53746053368a1840607636e97 vol_a5aee81a873c043c99a938adcb5b5781
Successfully copied file.
Command executed successfully.
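
(Roughly what I did to give the file the name the script expects; a sketch,
assuming the pub key sits where push-pem left it on my master node and
<slave_node> stands for the slave I ran the script on:)

```
# copy the generated pub key to the slave under the
# <master_vol>_<slave_vol>_common_secret.pem.pub name that set_geo_rep_pem_keys.sh expects
scp /var/lib/glusterd/geo-replication/common_secret.pem.pub \
    azureuser@<slave_node>:/home/azureuser/vol_041afbc53746053368a1840607636e97_vol_a5aee81a873c043c99a938adcb5b5781_common_secret.pem.pub
```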


- Went back and created the session, started the geo-replication, and am still
seeing the same error in the logs. Any ideas?

thanks,
Maurya



On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar  wrote:

> Hi Maurya,
>
> I guess you missed last trick to distribute keys in slave node. I see
> this is non-root geo-rep setup so please try this:
>
>
> Run the following command as root in any one of Slave node.
>
> /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh  
>  
>
> - Sunny
>
> On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
> >
> > Hi all,
> >  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for
> geo-replication, but once have the geo-replication configure the status is
> always on "Created',
> > even after have force start the session.
> >
> > On close inspect of the logs on the master node seeing this error:
> >
> > "E [syncdutils(monitor):801:errlog] Popen: command returned error
>  cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureu...@x...xxx.
> gluster --xml --remote-host=localhost volume info
> vol_a5ae34341a873c043c99a938adcb5b5781  error=255"
> >
> > Any ideas what is issue?
> >
> > thanks,
> > Maurya
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication status always on 'Created'

2019-03-20 Thread Sunny Kumar
Hi Maurya,

I guess you missed the last step to distribute keys to the slave nodes. I see
this is a non-root geo-rep setup, so please try this:


Run the following command as root on any one of the Slave nodes.

/usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <slave_user> <master_volume> <slave_volume>
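
For a non-root user named azureuser that would be invoked roughly like this
(a sketch; depending on the package, the script may live under
/usr/libexec/glusterfs/ instead of /usr/local/libexec/glusterfs/):

```
# run as root on one slave node, after 'create push-pem' was run on the master
/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser <master_volume> <slave_volume>
```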

- Sunny

On Wed, Mar 20, 2019 at 10:47 PM Maurya M  wrote:
>
> Hi all,
>  Have setup a 3 master nodes - 3 slave nodes (gluster 4.1) for 
> geo-replication, but once have the geo-replication configure the status is 
> always on "Created',
> even after have force start the session.
>
> On close inspect of the logs on the master node seeing this error:
>
> "E [syncdutils(monitor):801:errlog] Popen: command returned error   cmd=ssh 
> -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem -p 22 azureu...@x...xxx. 
> gluster --xml --remote-host=localhost volume info 
> vol_a5ae34341a873c043c99a938adcb5b5781  error=255"
>
> Any ideas what is issue?
>
> thanks,
> Maurya
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Geo-replication status always on 'Created'

2019-03-20 Thread Maurya M
Hi all,
 Have set up 3 master nodes - 3 slave nodes (gluster 4.1) for
geo-replication, but once the geo-replication is configured the status is
always 'Created',
even after force-starting the session.

On close inspection of the logs on the master node, I am seeing this error:

"E [syncdutils(monitor):801:errlog] Popen: command returned error   cmd=ssh
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 azureu...@x...xxx.
gluster --xml --remote-host=localhost volume info
vol_a5ae34341a873c043c99a938adcb5b5781  error=255"

Any ideas what the issue is?

thanks,
Maurya
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users