Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work
Hi Marcus,

Well, there is nothing wrong with setting up a symlink for the gluster
binary location, but there is a geo-rep command to set it so that gsyncd
will search there.

To set on the master:

  # gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config gluster-command-dir <gluster-binary-location>

To set on the slave:

  # gluster vol geo-rep <mastervol> <slavehost>::<slavevol> config slave-gluster-command-dir <gluster-binary-location>

Thanks,
Kotresh HR

On Wed, Jul 18, 2018 at 9:28 AM, Kotresh Hiremath Ravishankar
<khire...@redhat.com> wrote:

> Hi Marcus,
>
> I am testing out 4.1 myself and I will have some update today.
> For this particular traceback, gsyncd is not able to find the library.
> Is it the rpm install? If so, gluster libraries would be in /usr/lib.
> Please run the cmd below.
>
> # ldconfig /usr/lib
> # ldconfig -p /usr/lib | grep libgf   (this should list libgfchangelog.so)
>
> Geo-rep should be fixed automatically.
>
> Thanks,
> Kotresh HR
>
> On Wed, Jul 18, 2018 at 1:27 AM, Marcus Pedersén wrote:
>
>> Hi again,
>>
>> I continue to do some testing, but now I have come to a stage where I
>> need help.
>>
>> gsyncd.log was complaining that /usr/local/sbin/gluster was missing,
>> so I made a link. After that /usr/local/sbin/glusterfs was missing,
>> so I made a link there as well. Both links were done on all slave
>> nodes.
>>
>> Now I have a new error that I cannot resolve myself: it cannot open
>> libgfchangelog.so.
>>
>> Many thanks!
>>
>> Regards
>> Marcus Pedersén
>>
>> Part of gsyncd.log:
>>
>> OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
>> [2018-07-17 19:32:06.517106] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.
>> [2018-07-17 19:32:07.479553] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
>> [2018-07-17 19:32:17.500709] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/urd-gds/gluster slave_node=urd-gds-geo-000
>> [2018-07-17 19:32:17.541547] I [gsyncd(agent /urd-gds/gluster):297:main] : Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
>> [2018-07-17 19:32:17.541959] I [gsyncd(worker /urd-gds/gluster):297:main] : Using session config file path=/var/lib/glusterd/geo-replication/urd-gds-volume_urd-gds-geo-001_urd-gds-volume/gsyncd.conf
>> [2018-07-17 19:32:17.542363] I [changelogagent(agent /urd-gds/gluster):72:__init__] ChangelogAgent: Agent listining...
>> [2018-07-17 19:32:17.550894] I [resource(worker /urd-gds/gluster):1348:connect_remote] SSH: Initializing SSH connection between master and slave...
>> [2018-07-17 19:32:19.166246] I [resource(worker /urd-gds/gluster):1395:connect_remote] SSH: SSH connection between master and slave established. duration=1.6151
>> [2018-07-17 19:32:19.166806] I [resource(worker /urd-gds/gluster):1067:connect] GLUSTER: Mounting gluster volume locally...
>> [2018-07-17 19:32:20.257344] I [resource(worker /urd-gds/gluster):1090:connect] GLUSTER: Mounted gluster volume duration=1.0901
>> [2018-07-17 19:32:20.257921] I [subcmds(worker /urd-gds/gluster):70:subcmd_worker] : Worker spawn successful. Acknowledging back to monitor
>> [2018-07-17 19:32:20.274647] E [repce(agent /urd-gds/gluster):114:worker] : call failed:
>> Traceback (most recent call last):
>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 110, in worker
>>     res = getattr(self.obj, rmeth)(*in_data[2:])
>>   File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 37, in init
>>     return Changes.cl_init()
>>   File "/usr/libexec/glusterfs/python/syncdaemon/changelogagent.py", line 21, in __getattr__
>>     from libgfchangelog import Changes as LChanges
>>   File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 17, in <module>
>>     class Changes(object):
>>   File "/usr/libexec/glusterfs/python/syncdaemon/libgfchangelog.py", line 19, in Changes
>>     use_errno=True)
>>   File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__
>>     self._handle = _dlopen(self._name, mode)
>> OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
>> [2018-07-17 19:32:20.275093] E [repce(worker /urd-gds/gluster):206:__call__] RepceClient: call failed call=6078:139982918485824:1531855940.27 method=init error=OSError
>> [2018-07-17 19:32:20.275192] E [syncdutils(worker /urd-gds/gluster):330:log_raise_exception] : FAIL:
>> Traceback (most recent call last):
>>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
>>     func(args)
>>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
>>     local.service_loop(remote)
>>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
>>     changelog_agent.init()
>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 225, in __call__
>>     return self.ins(self.meth, *a)
>>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 207, in __call__
>>     raise res
>> OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
>> [2018-07-17 19:32:20.286787] I [repce(agent /urd-gds/gluster):89:service_loop] RepceServer: terminating on reaching EOF.
>> [2018-07-17 19:32:21.259891] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/urd-gds/gluster
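[Editorial note: for a session like the one in Marcus's log (master volume urd-gds-volume, slave host urd-gds-geo-001), the config commands Kotresh mentions might look like the sketch below. The session names are taken from the log's gsyncd.conf path, and /usr/sbin as the binary location is an assumption for a typical RPM install, not a confirmed value.]

```shell
# Point gsyncd at the packaged gluster binaries instead of symlinking
# them into /usr/local/sbin. Session names (urd-gds-volume,
# urd-gds-geo-001) come from the log; /usr/sbin is an assumed
# RPM-install location -- adjust for your setup.

# For the master side:
gluster volume geo-replication urd-gds-volume \
    urd-gds-geo-001::urd-gds-volume \
    config gluster-command-dir /usr/sbin

# For the slave side (also run from the master):
gluster volume geo-replication urd-gds-volume \
    urd-gds-geo-001::urd-gds-volume \
    config slave-gluster-command-dir /usr/sbin
```

With these options set, the symlinks Marcus created for /usr/local/sbin/gluster and /usr/local/sbin/glusterfs should no longer be needed. (These commands require a live geo-replication session; they are configuration, not a runnable test.)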
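[Editorial note: Kotresh's ldconfig check can be extended into a small diagnostic. The python3 one-liner below reproduces the dlopen failure from the traceback outside of gsyncd; note the 4.1-era daemon itself ran under Python 2.7, and the versioned-vs-unversioned library-name remark at the end is an assumption worth verifying, not a confirmed root cause.]

```shell
# Ask the dynamic linker whether it can resolve libgfchangelog.so --
# this is exactly the lookup gsyncd's ctypes.CDLL(..., use_errno=True)
# performs. Runs without root; rebuilding the cache does need root.
ldconfig -p | grep libgfchangelog \
    || echo "libgfchangelog.so not in linker cache"

# Reproduce the exact error from gsyncd.log outside the daemon:
python3 -c "import ctypes; ctypes.CDLL('libgfchangelog.so', use_errno=True)" \
    || echo "dlopen failed, same error as in gsyncd.log"

# If the library directory is missing from the cache, refresh it
# (as root), as Kotresh suggests:
# ldconfig /usr/lib
```

If ldconfig lists only a versioned name such as libgfchangelog.so.0 while gsyncd asks for the bare libgfchangelog.so, refreshing the cache alone will not help; in that case check which name ldconfig actually reports before creating any links by hand.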
[Gluster-users] Community Meeting, 18 July
Putting out a call for agenda items: our next Gluster Community Meeting
is on July 18 at 15:00 UTC.

Meeting agenda: https://bit.ly/gluster-community-meetings

Anyone got anything they want to bring up?

- amye

--
Amye Scavarda | a...@redhat.com | Gluster Community Lead

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users