Hi again,
I had a hard time configuring geo-replication. After creating the session,
starting it left the session in a Faulty state. The cause was the
/nonexistent/gsyncd path in
/var/lib/glusterd/geo-replication/data1_node3.example_data2/gsyncd.conf, which
I changed to /usr/libexec/glusterfs/gsyncd.
After that the geo-replication session started, but nothing happened: no file
or directory was synced.
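For reference, the fix boils down to a one-line substitution in the session's gsyncd.conf. A minimal sketch, done on a throwaway copy so it is safe to try (the key name remote-gsyncd is an assumption; check your own file, which lives under /var/lib/glusterd/geo-replication/<session>/gsyncd.conf):

```shell
# Sketch: replace the bogus /nonexistent/gsyncd path with the real helper.
# Uses a throwaway copy; the real file is the session's gsyncd.conf under
# /var/lib/glusterd/geo-replication/.
conf=$(mktemp)
printf 'remote-gsyncd = /nonexistent/gsyncd\n' > "$conf"   # assumed key name
sed -i 's|/nonexistent/gsyncd|/usr/libexec/glusterfs/gsyncd|' "$conf"
cat "$conf"
rm -f "$conf"
```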
The status command

gluster volume geo-replication data1 geoacco...@node3.example.com::data2 status

keeps saying: Changelog Crawl
[2014-12-13 21:57:17.129836] W [master(/mnt/srv1/brick1):1005:process]
_GMaster: incomplete sync, retrying changelogs: CHANGELOG.1418504186
[2014-12-13 21:57:22.648163] W [master(/mnt/srv1/brick1):294:regjob] _GMaster:
Rsync: .gfid/f066ca4a-2d31-4342-bc7e-a37da25b2253 [errcode: 23]
[2014-12-13 21:57:22.648426] W [master(/mnt/srv1/brick1):986:process] _GMaster:
changelogs CHANGELOG.1418504186 could not be processed - moving on...
But new files/directories are synced, so I deleted all the data on the master
cluster and recreated it, and everything was synced.
Still, the status command (above) keeps saying Changelog Crawl.
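In case it helps, `status detail` reports more than the crawl status (files synced, failures, checkpoint information), which may show whether anything is actually stuck. A sketch of the invocation for this session (slave user shortened as in the address above; needs a live gluster cluster, so it is illustrative only):

```shell
gluster volume geo-replication data1 geoacco...@node3.example.com::data2 status detail
```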
On the slave node I have these logs
[2014-12-13 21:11:12.149041] W
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote
operation failed: No data available
[2014-12-13 21:11:12.149067] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 3243: REMOVEXATTR()
/.gfid/cd26bcc2-9b9f-455a-a2b2-9b6358f24203 = -1 (No data available)
[2014-12-13 21:11:12.516674] W
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote
operation failed: No data available
[2014-12-13 21:11:12.516705] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 3325: REMOVEXATTR()
/.gfid/e6142b37-2362-4c95-a291-f396a122b014 = -1 (No data available)
[2014-12-13 21:11:12.517577] W
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote
operation failed: No data available
[2014-12-13 21:11:12.517600] W [fuse-bridge.c:1261:fuse_err_cbk]
0-glusterfs-fuse: 3331: REMOVEXATTR()
/.gfid/e6142b37-2362-4c95-a291-f396a122b014 = -1 (No data available)
and
[2014-12-13 21:57:16.741321] W [syncdutils(slave):480:errno_wrap] top:
reached maximum retries (['.gfid/ba9c75ef-d4f7-4a6b-923f-82a8c7be4443',
'glusterfs.gfid.newfile',
'\x00\x00\x00\x1b\x00\x00\x00\x1bcab3ae81-7b52-4c55-ac33-37814ff374c4\x00\x00\x00\x81\xb0glustercli1.lower-test\x00\x00\x00\x01\xb0\x00\x00\x00\x00\x00\x00\x00\x00'])...
[2014-12-13 21:57:22.400269] W [syncdutils(slave):480:errno_wrap] top:
reached maximum retries (['.gfid/ba9c75ef-d4f7-4a6b-923f-82a8c7be4443',
'glusterfs.gfid.newfile',
'\x00\x00\x00\x1b\x00\x00\x00\x1bcab3ae81-7b52-4c55-ac33-37814ff374c4\x00\x00\x00\x81\xb0glustercli1.lower-test\x00\x00\x00\x01\xb0\x00\x00\x00\x00\x00\x00\x00\x00'])...
I couldn't find what this means. Any ideas?
Regards
On Friday, December 12, 2014, at 7:35 PM, wodel youchi wodel_d...@yahoo.fr
wrote:
Thanks for your reply,
When executing the gverify.sh script I had these errors in slave.log:
[2014-12-12 18:12:45.423669] I [options.c:1163:xlator_option_init_double] 0-fuse: option
attribute-timeout convertion failed value 1.0
[2014-12-12 18:12:45.423689] E [xlator.c:425:xlator_init] 0-fuse:
Initialization of volume 'fuse' failed, review your volfile again
I think the problem is linked to the locale variables; mine were:
LANG=fr_FR.UTF-8
LC_CTYPE=fr_FR.UTF-8
LC_NUMERIC=fr_FR.UTF-8
LC_TIME=fr_FR.UTF-8
LC_COLLATE=fr_FR.UTF-8
LC_MONETARY=fr_FR.UTF-8
LC_MESSAGES=fr_FR.UTF-8
LC_PAPER=fr_FR.UTF-8
LC_NAME=fr_FR.UTF-8
LC_ADDRESS=fr_FR.UTF-8
LC_TELEPHONE=fr_FR.UTF-8
LC_MEASUREMENT=fr_FR.UTF-8
LC_IDENTIFICATION=fr_FR.UTF-8
LC_ALL=
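For what it's worth, this is consistent with strtod() honoring LC_NUMERIC: under fr_FR the radix character is a comma, so "1.0" is not a complete number for glusterfs's option parsing. A small sketch of the difference using only the Python standard library (it skips gracefully if fr_FR is not installed on the machine):

```python
import locale

# In the C locale the radix character is ".", so "1.0" parses cleanly.
locale.setlocale(locale.LC_NUMERIC, "C")
print("C decimal point:", locale.localeconv()["decimal_point"])

# Under fr_FR the radix character is ",", which is why a strtod-based
# parser rejects "1.0" as a double.
try:
    locale.setlocale(locale.LC_NUMERIC, "fr_FR.UTF-8")
    print("fr_FR decimal point:", locale.localeconv()["decimal_point"])
except locale.Error:
    print("fr_FR.UTF-8 locale not installed here")
```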
I changed LC_CTYPE and LC_NUMERIC to C and ran the gverify.sh script again; it
worked, but the gluster vol geo-rep ... command still failed.
I then edited /etc/locale.conf, changed LANG from fr_FR.UTF-8 to C, rebooted
the VM and voilà, the geo-replication session was created successfully.
But I am not sure whether my changes will affect other things.
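If changing the system-wide LANG is a concern, a narrower option might be to force the C locale only for the gluster daemon, e.g. with a systemd drop-in. This is a sketch under the assumption that glusterd runs as a systemd service; the file path and unit name may differ on your distribution:

```ini
# /etc/systemd/system/glusterd.service.d/locale.conf (hypothetical drop-in)
[Service]
Environment=LANG=C
Environment=LC_NUMERIC=C
```

followed by a systemctl daemon-reload and a restart of glusterd.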
Regards
On Friday, December 12, 2014, at 8:30 AM, Kotresh Hiremath Ravishankar
khire...@redhat.com wrote:
Hi,
The setup is failing during the compatibility test between the master and
slave clusters: the gverify.sh script is failing to get the master volume
details.
Could you run the following and paste the output here?
bash -x /usr/local/libexec/glusterfs/gverify.sh <master-vol-name> root
<slave-host-name> <slave-vol-name> <temp-log-file>
If installed from source, gverify.sh is found in the above path; with an RPM
install it is under /usr/libexec/glusterfs/gverify.sh.
If you are sure the master and slave gluster versions and sizes are fine, the
easy workaround is to use force:
gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> create
push-pem force
Thanks and Regards,
Kotresh H R
- Original Message -
From: wodel youchi wodel_d...@yahoo.fr
To: gluster-users@gluster.org
Sent: Friday, December