Re: [Gluster-users] georeplication woes

2018-07-24 Thread Maarten van Baarsel

On 24-07-2018 06:12:25, Kotresh Hiremath Ravishankar wrote:

Hi Kotresh,


Looks like gsyncd on the slave is failing for some reason.

Please run the command below on the master:

#ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr

It should run gsyncd on the slave. If there is an error, it needs to be fixed.
Please share the output of the above command.


here we go:

root@gluster-3:/home/mrten# ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr

usage: gsyncd.py [-h]
                 {monitor-status,monitor,worker,agent,slave,status,config-check,config-get,config-set,config-reset,voluuidget,delete}
                 ...
gsyncd.py: error: too few arguments
Connection to gluster-4.glstr closed.

(so gsyncd itself does seem to start on the slave; it just receives no arguments)

Maarten.

Re: [Gluster-users] georeplication woes

2018-07-23 Thread Kotresh Hiremath Ravishankar
Looks like gsyncd on the slave is failing for some reason.

Please run the command below on the master:

#ssh -i /var/lib/glusterd/geo-replication/secret.pem georep@gluster-4.glstr

It should run gsyncd on the slave. If there is an error, it needs to be fixed.
Please share the output of the above command.
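
For context: the secret.pem login is tied to a forced command on the slave, which
is why a bare ssh should land in gsyncd rather than a shell. Assuming a standard
mountbroker setup, the authorized_keys entry for the georep user looks roughly
like the line below (the gsyncd path varies by distribution, and the key itself
is a placeholder):

command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAA...(public key)... root@gluster-3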

Regards,
Kotresh

On Mon, Jul 23, 2018 at 9:28 PM, Maarten van Baarsel <
mrten_glusterus...@ii.nl> wrote:

>
>
> On 23/07/18 15:28, Sunny Kumar wrote:
> > Hi,
> > If you run this below command on the master
> >
> > gluster vol geo-rep <mastervol> <slavevol> config
> > slave-gluster-command-dir <gluster-binary-location>
> >
> > (on the slave, run "which gluster" to find the gluster binary location)
>
> Done that, repeatedly, no change :(
>
> > It will make the same entry in the gsyncd.conf file; please recheck and
>
> (what gsyncd.conf? the one in /etc or someplace else?)
>
>
>
> > confirm both entries are the same, and also confirm that both
> > master and slave have the same gluster version.
>
> slave:
>
> root@gluster-4:~$ /usr/sbin/gluster --version
> glusterfs 4.0.2
>
> master:
>
> root@gluster-3:/home/mrten# /usr/sbin/gluster --version
> glusterfs 4.0.2
>
>
> Looking at the slave's cli.log:
>
> [2018-07-23 15:53:26.187547] I [cli.c:767:main] 0-cli: Started running
> /usr/sbin/gluster with version 4.0.2
> [2018-07-23 15:53:26.187611] I [cli.c:646:cli_rpc_init] 0-cli: Connecting
> to remote glusterd at localhost
> [2018-07-23 15:53:26.229756] I [MSGID: 101190] 
> [event-epoll.c:609:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2018-07-23 15:53:26.229871] W [rpc-clnt.c:1739:rpc_clnt_submit]
> 0-glusterfs: error returned while attempting to connect to host:(null),
> port:0
> [2018-07-23 15:53:26.229963] I [socket.c:2625:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2018-07-23 15:53:26.230640] I [cli-rpc-ops.c:8785:gf_cli_mount_cbk]
> 0-cli: Received resp to mount
> [2018-07-23 15:53:26.230825] I [input.c:31:cli_batch] 0-: Exiting with: -1
>
> There's a weird warning there with host:(null), port:0.
>
> M.



-- 
Thanks and Regards,
Kotresh H R

Re: [Gluster-users] georeplication woes

2018-07-23 Thread Maarten van Baarsel



On 23/07/18 15:28, Sunny Kumar wrote:
> Hi,
> If you run this below command on the master
> 
> gluster vol geo-rep <mastervol> <slavevol> config
> slave-gluster-command-dir <gluster-binary-location>
> 
> (on the slave, run "which gluster" to find the gluster binary location)

Done that, repeatedly, no change :(
 
> It will make the same entry in the gsyncd.conf file; please recheck and

(what gsyncd.conf? the one in /etc or someplace else?)



> confirm both entries are the same, and also confirm that both
> master and slave have the same gluster version.

slave:

root@gluster-4:~$ /usr/sbin/gluster --version
glusterfs 4.0.2

master:

root@gluster-3:/home/mrten# /usr/sbin/gluster --version
glusterfs 4.0.2


Looking at the slave's cli.log:

[2018-07-23 15:53:26.187547] I [cli.c:767:main] 0-cli: Started running 
/usr/sbin/gluster with version 4.0.2
[2018-07-23 15:53:26.187611] I [cli.c:646:cli_rpc_init] 0-cli: Connecting to 
remote glusterd at localhost
[2018-07-23 15:53:26.229756] I [MSGID: 101190] 
[event-epoll.c:609:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2018-07-23 15:53:26.229871] W [rpc-clnt.c:1739:rpc_clnt_submit] 0-glusterfs: 
error returned while attempting to connect to host:(null), port:0
[2018-07-23 15:53:26.229963] I [socket.c:2625:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now
[2018-07-23 15:53:26.230640] I [cli-rpc-ops.c:8785:gf_cli_mount_cbk] 0-cli: 
Received resp to mount
[2018-07-23 15:53:26.230825] I [input.c:31:cli_batch] 0-: Exiting with: -1

There's a weird warning there with host:(null), port:0.

M.


Re: [Gluster-users] georeplication woes

2018-07-23 Thread Maarten van Baarsel
On 23/07/18 13:48, Sunny Kumar wrote:

Hi Sunny,

thanks again for replying!


>> Can I test something else? Is the command normally run in a jail?

> Please share gsyncd.log from the master.

[2018-07-23 12:18:19.773240] I [monitor(monitor):158:monitor] Monitor: starting 
gsyncd worker   brick=/var/lib/gluster  slave_node=gluster-4.glstr
[2018-07-23 12:18:19.832611] I [gsyncd(agent /var/lib/gluster):297:main] : 
Using session config file   
path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
[2018-07-23 12:18:19.832674] I [gsyncd(worker /var/lib/gluster):297:main] 
: Using session config file  
path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
[2018-07-23 12:18:19.834259] I [changelogagent(agent 
/var/lib/gluster):72:__init__] ChangelogAgent: Agent listining...
[2018-07-23 12:18:19.848596] I [resource(worker 
/var/lib/gluster):1345:connect_remote] SSH: Initializing SSH connection between 
master and slave...
[2018-07-23 12:18:20.387191] E [syncdutils(worker 
/var/lib/gluster):301:log_raise_exception] : connection to peer is broken
[2018-07-23 12:18:20.387592] E [syncdutils(worker /var/lib/gluster):747:errlog] 
Popen: command returned error   cmd=ssh -oPasswordAuthentication=no 
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 
22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-nN8_GE/2648484453eaadd9d3042ceba9bafa6a.sock 
georep@gluster-4.glstr /nonexistent/gsyncd slave gl0 
georep@gluster-4.glstr::glbackup --master-node gluster-3.glstr --master-node-id 
9650e965-bf4f-4544-a42b-f4d540d23a1f --master-brick /var/lib/gluster 
--local-node gluster-4.glstr --local-node-id 
736f6431-2f9c-4115-9790-68f9a88d99a7 --slave-timeout 120 --slave-log-level INFO 
--slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin/  
error=1
[2018-07-23 12:18:20.37] I [repce(agent /var/lib/gluster):80:service_loop] 
RepceServer: terminating on reaching EOF.
[2018-07-23 12:18:21.389723] I [monitor(monitor):266:monitor] Monitor: worker 
died in startup phase brick=/var/lib/gluster

repeated again and again.
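
For reference, the failing ssh command from that Popen error, joined onto one line:

ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-nN8_GE/2648484453eaadd9d3042ceba9bafa6a.sock georep@gluster-4.glstr /nonexistent/gsyncd slave gl0 georep@gluster-4.glstr::glbackup --master-node gluster-3.glstr --master-node-id 9650e965-bf4f-4544-a42b-f4d540d23a1f --master-brick /var/lib/gluster --local-node gluster-4.glstr --local-node-id 736f6431-2f9c-4115-9790-68f9a88d99a7 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin/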

Maarten.


Re: [Gluster-users] georeplication woes

2018-07-23 Thread Sunny Kumar
Hi Maarten,



On Mon, Jul 23, 2018 at 3:37 PM Maarten van Baarsel
 wrote:
>
> Hi Sunny,
>
> >> Can't run that command on the slave, it doesn't know the gl0 volume:
> >>
> >> root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
> >> ssh://georep@gluster-4.glstr::glbackup config
>
> > please do not use ssh://; just run it as a plain config command:
> > gluster volume geo-rep gl0 georep@gluster-4.glstr::glbackup config
>
> Sorry, that does not make a difference on the geo-rep slave side:
>
> root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
> georep@gluster-4.glstr::glbackup config
> Volume gl0 does not exist
> geo-replication command failed

Apologies if my earlier wording made it sound as if that command should be run on the slave.

The above command should be run on the master only, like this:

gluster vol geo-rep <mastervol> <slavevol> config
slave-gluster-command-dir <gluster-binary-location>

On the slave, run "which gluster" only to find the gluster binary location.
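
With the names from this thread filled in, that would be something like:

# on the master (gluster-3):
gluster volume geo-replication gl0 georep@gluster-4.glstr::glbackup config slave-gluster-command-dir /usr/sbin/

# on the slave (gluster-4), only to confirm the binary location:
which gluster   # expected: /usr/sbin/gluster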

>
>
> It does work on the master side, but the output is the same:
>
> root@gluster-3:/home/mrten# gluster volume geo-rep gl0 
> georep@gluster-4.glstr::glbackup config
> access_mount:false
> allow_network:
> change_detector:changelog
> change_interval:5
> changelog_archive_format:%Y%m
> changelog_batch_size:727040
> changelog_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/changes-${local_id}.log
> changelog_log_level:INFO
> checkpoint:0
> chnagelog_archive_format:%Y%m
> cli_log_file:/var/log/glusterfs/geo-replication/cli.log
> cli_log_level:INFO
> connection_timeout:60
> georep_session_working_dir:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/
> gluster_cli_options:
> gluster_command:gluster
> gluster_command_dir:/usr/sbin/
> gluster_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/mnt-${local_id}.log
> gluster_log_level:INFO
> gluster_logdir:/var/log/glusterfs
> gluster_params:aux-gfid-mount acl
> gluster_rundir:/var/run/gluster
> glusterd_workdir:/var/lib/glusterd
> gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
> ignore_deletes:false
> isolated_slaves:
> log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.log
> log_level:INFO
> log_rsync_performance:false
> master_disperse_count:1
> master_replica_count:1
> max_rsync_retries:10
> meta_volume_mnt:/var/run/gluster/shared_storage
> pid_file:/var/run/gluster/gsyncd-gl0-gluster-4.glstr-glbackup.pid
> remote_gsyncd:
> replica_failover_interval:1
> rsync_command:rsync
> rsync_opt_existing:
> rsync_opt_ignore_missing_args:
> rsync_options:
> rsync_ssh_options:
> slave_access_mount:false
> slave_gluster_command_dir:/usr/sbin/
> slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-${master_node}-${master_brick_id}.log
> slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-mbr-${master_node}-${master_brick_id}.log
> slave_gluster_log_level:INFO
> slave_gluster_params:aux-gfid-mount acl
> slave_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/gsyncd.log
> slave_log_level:INFO
> slave_timeout:120
> special_sync_mode:
> ssh_command:ssh
> ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/secret.pem
> ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
> /var/lib/glusterd/geo-replication/tar_ssh.pem
> ssh_port:22
> state_file:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/monitor.status
> state_socket_unencoded:
> stime_xattr_prefix:trusted.glusterfs.4054e7ad-7eb9-41fe-94cf-b52a690bb655.f7ce9a54-0ce4-4056-9958-4fa3f1630154
> sync_acls:true
> sync_jobs:3
> sync_xattrs:true
> tar_command:tar
> use_meta_volume:true
> use_rsync_xattrs:false
> use_tarssh:false
> working_dir:/var/lib/misc/gluster/gsyncd/gl0_gluster-4.glstr_glbackup/
>
>
> This is supposed to be non-root geo-replication; should this work?
>
> georep@gluster-4:~$ /usr/sbin/gluster volume status all
> Connection failed. Please check if gluster daemon is operational.
No need for that; the unprivileged georep user is not expected to be able to talk to glusterd directly.
>
> (run as the geo-replication user on the slave side)
>
>
> Can I test something else? Is the command normally run in a jail?
Please share gsyncd.log from the master.
>
>
>
> Maarten.
- Sunny


Re: [Gluster-users] georeplication woes

2018-07-23 Thread Maarten van Baarsel
Hi Sunny,

>> Can't run that command on the slave, it doesn't know the gl0 volume:
>>
>> root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
>> ssh://georep@gluster-4.glstr::glbackup config

> please do not use ssh://; just run it as a plain config command:
> gluster volume geo-rep gl0 georep@gluster-4.glstr::glbackup config

Sorry, that does not make a difference on the geo-rep slave side:

root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
georep@gluster-4.glstr::glbackup config
Volume gl0 does not exist
geo-replication command failed


It does work on the master side, but the output is the same:

root@gluster-3:/home/mrten# gluster volume geo-rep gl0 
georep@gluster-4.glstr::glbackup config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/changes-${local_id}.log
changelog_log_level:INFO
checkpoint:0
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin/
gluster_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/mnt-${local_id}.log
gluster_log_level:INFO
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.log
log_level:INFO
log_rsync_performance:false
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-gl0-gluster-4.glstr-glbackup.pid
remote_gsyncd:
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:
rsync_opt_ignore_missing_args:
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin/
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.4054e7ad-7eb9-41fe-94cf-b52a690bb655.f7ce9a54-0ce4-4056-9958-4fa3f1630154
sync_acls:true
sync_jobs:3
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:false
working_dir:/var/lib/misc/gluster/gsyncd/gl0_gluster-4.glstr_glbackup/


This is supposed to be non-root geo-replication; should this work?

georep@gluster-4:~$ /usr/sbin/gluster volume status all
Connection failed. Please check if gluster daemon is operational.

(run as the geo-replication user on the slave side)


Can I test something else? Is the command normally run in a jail?



Maarten.


Re: [Gluster-users] georeplication woes

2018-07-22 Thread Sunny Kumar
Hi Maarten,

On Sat, Jul 21, 2018 at 12:34 AM Maarten van Baarsel
 wrote:
>
>
> Hi Sunny,
>
> thanks for your reply!
>
> > I think the gluster binary location is set up incorrectly here.
> >
> > Can you please try setting it up using the following commands?
> >
> > on master
> > #gluster vol geo-rep <mastervol> <slavevol> config
> > gluster-command-dir <gluster-binary-location>
> >
> > and on slave
> > #gluster vol geo-rep <mastervol> <slavevol> config
> > slave-gluster-command-dir <gluster-binary-location>
>
> Is there a difference between _ and -? I only seem to have the _ ones:
>
> (gluster-3 is one of the master 3-replica, gluster-4 is the slave)
>
> root@gluster-3:/home/mrten# gluster volume geo-rep gl0 
> ssh://georep@gluster-4.glstr::glbackup config | fgrep command
> gluster_command:gluster
> gluster_command_dir:/usr/sbin/
> rsync_command:rsync
> slave_gluster_command_dir:/usr/sbin/
> ssh_command:ssh
> tar_command:tar
>
> Can't run that command on the slave, it doesn't know the gl0 volume:
>
> root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
> ssh://georep@gluster-4.glstr::glbackup config
please do not use ssh://; just run it as a plain config command:
gluster volume geo-rep gl0 georep@gluster-4.glstr::glbackup config
> Volume gl0 does not exist
> geo-replication command failed
>
>
>
> But the location is correct:
>
> master:
>
> root@gluster-3:/home/mrten# which gluster
> /usr/sbin/gluster
>
> slave:
>
> root@gluster-4:/home/mrten# which gluster
> /usr/sbin/gluster
>
>
> and they work:
>
> root@gluster-3:/home/mrten# /usr/sbin/gluster
> gluster>
>
> root@gluster-4:/home/mrten# /usr/sbin/gluster
> gluster>
>
>
> >
> >> [2018-07-20 12:32:13.404048] W [gsyncd(slave 
> >> gluster-2.glstr/var/lib/gluster):293:main] : Session config file not 
> >> exists, using the default config 
> >> path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
>
> this one is weird though; there is nothing in
> /var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
> but
>
>
> [vars]
> stime-xattr-prefix = 
> trusted.glusterfs.4054e7ad-7eb9-41fe-94cf-b52a690bb655.f7ce9a54-0ce4-4056-9958-4fa3f1630154
> use-meta-volume = true
>
> Maarten.
- Sunny


Re: [Gluster-users] georeplication woes

2018-07-20 Thread Maarten van Baarsel


Hi Sunny,

thanks for your reply!

> I think the gluster binary location is set up incorrectly here.
> 
> Can you please try setting it up using the following commands?
> 
> on master
> #gluster vol geo-rep <mastervol> <slavevol> config
> gluster-command-dir <gluster-binary-location>
> 
> and on slave
> #gluster vol geo-rep <mastervol> <slavevol> config
> slave-gluster-command-dir <gluster-binary-location>

Is there a difference between _ and -? I only seem to have the _ ones:

(gluster-3 is one of the master 3-replica, gluster-4 is the slave)

root@gluster-3:/home/mrten# gluster volume geo-rep gl0 
ssh://georep@gluster-4.glstr::glbackup config | fgrep command
gluster_command:gluster
gluster_command_dir:/usr/sbin/
rsync_command:rsync
slave_gluster_command_dir:/usr/sbin/
ssh_command:ssh
tar_command:tar

Can't run that command on the slave, it doesn't know the gl0 volume:

root@gluster-4:/home/mrten# gluster volume geo-rep gl0 
ssh://georep@gluster-4.glstr::glbackup config
Volume gl0 does not exist
geo-replication command failed



But the location is correct:

master:

root@gluster-3:/home/mrten# which gluster
/usr/sbin/gluster

slave:

root@gluster-4:/home/mrten# which gluster
/usr/sbin/gluster


and they work:

root@gluster-3:/home/mrten# /usr/sbin/gluster
gluster>

root@gluster-4:/home/mrten# /usr/sbin/gluster
gluster>


> 
>> [2018-07-20 12:32:13.404048] W [gsyncd(slave 
>> gluster-2.glstr/var/lib/gluster):293:main] : Session config file not 
>> exists, using the default config 
>> path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf

this one is weird though; there is nothing in 
/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf 
but


[vars]
stime-xattr-prefix = 
trusted.glusterfs.4054e7ad-7eb9-41fe-94cf-b52a690bb655.f7ce9a54-0ce4-4056-9958-4fa3f1630154
use-meta-volume = true

Maarten.


Re: [Gluster-users] georeplication woes

2018-07-20 Thread Sunny Kumar
Hi Maarten,

On Fri, Jul 20, 2018 at 9:24 PM Maarten van Baarsel  wrote:
>
> Hi,
>
> I've upgraded my gluster 3.13 cluster to 4.0 on an Ubuntu
> server and it broke my geo-replication because of this missing
> file:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1601532
>
> However, between geo-replication dying and me finding that bug report,
> I tried multiple other things, including setting up geo-rep again,
> so I'm now left with a Faulty geo-replication.
>
> I removed it and created it again with the suggested fix in place, but at
> the moment I see this error on the geo-rep slave, in
> /var/log/glusterfs/glusterd.log:
>
> E [MSGID: 106581] [glusterd-mountbroker.c:579:glusterd_do_mount] 
> 0-management: Missing mspec: Check the corresponding option in glusterd vol 
> file for mountbroker user: georep [No such file or directory]
>
>
> These messages also seem relevant from 
> /var/log/glusterfs/geo-replication-slaves/.../gsyncd.log:
>
> [2018-07-20 12:32:11.452244] I [resource(slave 
> gluster-1.glstr/var/lib/gluster):1064:connect] GLUSTER: Mounting gluster 
> volume locally...
> [2018-07-20 12:32:11.503766] E [resource(slave 
> gluster-1.glstr/var/lib/gluster):973:handle_mounter] MountbrokerMounter: 
> glusterd answered   mnt=
> [2018-07-20 12:32:11.504313] E [syncdutils(slave 
> gluster-1.glstr/var/lib/gluster):747:errlog] Popen: command returned error
>  cmd=/usr/sbin/gluster --remote-host=localhost system:: mount georep 
> user-map-root=georep aux-gfid-mount acl log-level=INFO 
> log-file=/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-gluster-1.glstr-var-lib-gluster.log
>  volfile-server=localhost volfile-id=glbackup client-pid=-1   error=1
> [2018-07-20 12:32:11.504446] E [syncdutils(slave 
> gluster-1.glstr/var/lib/gluster):751:logerr] Popen: /usr/sbin/gluster> 2 : 
> failed with this errno (No such file or directory)

I think the gluster binary location is set up incorrectly here.

Can you please try setting it up using the following commands?

on master
#gluster vol geo-rep <mastervol> <slavevol> config
gluster-command-dir <gluster-binary-location>

and on slave
#gluster vol geo-rep <mastervol> <slavevol> config
slave-gluster-command-dir <gluster-binary-location>
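
For example, with the volume and host names from your mail, the master-side
command would be something like:

#gluster vol geo-rep gl0 georep@gluster-4.glstr::glbackup config gluster-command-dir /usr/sbin/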

> [2018-07-20 12:32:13.404048] W [gsyncd(slave 
> gluster-2.glstr/var/lib/gluster):293:main] : Session config file not 
> exists, using the default config 
> path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
>
>
> Is there anyone out there who can tell me what to do with these
> error messages? I've resorted to checking out the source code,
> but that road did not lead to enlightenment either :(
>
> thanks in advance,
> Maarten.
>
>
> [1] The gluster master is a 3-replica; the slave is a single
> volume at another physical location. The geo-replication user is
> georep; the main volume is called 'gl0', the backup 'glbackup'.
>
>
> root@gluster-4:/var/log/glusterfs# gluster-mountbroker status
> +-----------+-------------+---------------------------------+------------+------------------+
> |   NODE    | NODE STATUS |           MOUNT ROOT            |   GROUP    |      USERS       |
> +-----------+-------------+---------------------------------+------------+------------------+
> | localhost |     UP      | /var/lib/georep-mountbroker(OK) | georep(OK) | georep(glbackup) |
> +-----------+-------------+---------------------------------+------------+------------------+
>
>

- Sunny


[Gluster-users] georeplication woes

2018-07-20 Thread Maarten van Baarsel
Hi,

I've upgraded my gluster 3.13 cluster to 4.0 on an Ubuntu 
server and it broke my geo-replication because of this missing 
file:

https://bugzilla.redhat.com/show_bug.cgi?id=1601532

However, between geo-replication dying and me finding that bug report,
I tried multiple other things, including setting up geo-rep again,
so I'm now left with a Faulty geo-replication.

I removed it and created it again with the suggested fix in place, but at
the moment I see this error on the geo-rep slave, in
/var/log/glusterfs/glusterd.log:

E [MSGID: 106581] [glusterd-mountbroker.c:579:glusterd_do_mount] 0-management: 
Missing mspec: Check the corresponding option in glusterd vol file for 
mountbroker user: georep [No such file or directory]


These messages also seem relevant from 
/var/log/glusterfs/geo-replication-slaves/.../gsyncd.log:

[2018-07-20 12:32:11.452244] I [resource(slave 
gluster-1.glstr/var/lib/gluster):1064:connect] GLUSTER: Mounting gluster volume 
locally...
[2018-07-20 12:32:11.503766] E [resource(slave 
gluster-1.glstr/var/lib/gluster):973:handle_mounter] MountbrokerMounter: 
glusterd answered   mnt=
[2018-07-20 12:32:11.504313] E [syncdutils(slave 
gluster-1.glstr/var/lib/gluster):747:errlog] Popen: command returned error 
cmd=/usr/sbin/gluster --remote-host=localhost system:: mount georep 
user-map-root=georep aux-gfid-mount acl log-level=INFO 
log-file=/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-gluster-1.glstr-var-lib-gluster.log
 volfile-server=localhost volfile-id=glbackup client-pid=-1   error=1
[2018-07-20 12:32:11.504446] E [syncdutils(slave 
gluster-1.glstr/var/lib/gluster):751:logerr] Popen: /usr/sbin/gluster> 2 : 
failed with this errno (No such file or directory)
[2018-07-20 12:32:13.404048] W [gsyncd(slave 
gluster-2.glstr/var/lib/gluster):293:main] : Session config file not 
exists, using the default config 
path=/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.conf
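
For readability, the mount command from that Popen error, joined onto a single line:

/usr/sbin/gluster --remote-host=localhost system:: mount georep user-map-root=georep aux-gfid-mount acl log-level=INFO log-file=/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-gluster-1.glstr-var-lib-gluster.log volfile-server=localhost volfile-id=glbackup client-pid=-1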


Is there anyone out there who can tell me what to do with these
error messages? I've resorted to checking out the source code,
but that road did not lead to enlightenment either :(

thanks in advance,
Maarten.


[1] The gluster master is a 3-replica; the slave is a single
volume at another physical location. The geo-replication user is
georep; the main volume is called 'gl0', the backup 'glbackup'.


root@gluster-4:/var/log/glusterfs# gluster-mountbroker status
+-----------+-------------+---------------------------------+------------+------------------+
|   NODE    | NODE STATUS |           MOUNT ROOT            |   GROUP    |      USERS       |
+-----------+-------------+---------------------------------+------------+------------------+
| localhost |     UP      | /var/lib/georep-mountbroker(OK) | georep(OK) | georep(glbackup) |
+-----------+-------------+---------------------------------+------------+------------------+
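
(As far as I understand, gluster-mountbroker is supposed to write options roughly
like the following into the glusterd vol file on the slave; this is a sketch built
from the values above, not a copy of my actual file:)

option mountbroker-root /var/lib/georep-mountbroker
option mountbroker-geo-replication.georep glbackup
option geo-replication-log-group georep
option rpc-auth-allow-insecure on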

