Re: [Gluster-users] locking in 3.6.1

2014-12-13 Thread Atin Mukherjee
Scott,

We have identified a grey area in the code which is causing this issue.
FYI, fix has been sent in master - http://review.gluster.org/#/c/9269/

Once this patch is merged into master, I will backport it to the 3.6 branch.

~Atin
On 12/12/2014 08:28 PM, Scott Merrill wrote:
 On 11/25/14, 10:11 AM, Scott Merrill wrote:
 On 11/25/14, 10:06 AM, Atin Mukherjee wrote:


 On 11/25/2014 07:08 PM, Scott Merrill wrote:
 On 11/24/14, 11:56 PM, Atin Mukherjee wrote:
 Can you please find/point out the first instance of the command that failed
 to acquire the cluster-wide lock, and its associated glusterd log?


 Can you help me identify what I should be looking for in the logs?
 Grep for the first instance of 'locking failed' in the glusterd log on the
 server where the command failed.
 
 
 This continues to be a problem for us.  Is there any common cause for
 this?  Is there a common workaround that is suggested?
 
 Or should we downgrade to Gluster 3.5?
 
 Thanks.
 
 
 smerrill@gluster2:PRODUCTION:~ sudo grep 'locking failed'
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log*
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20141124:[2014-11-19
 19:05:19.312168] E [glusterd-syncop.c:105:gd_collate_errors] 0-:
 Unlocking failed on gluster1.innova.local. Please check log file for
 details.

 smerrill@gluster3:PRODUCTION:~ sudo grep 'locking failed'
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log*
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20141123:[2014-11-20
 16:40:08.368442] E [glusterd-syncop.c:105:gd_collate_errors] 0-:
 Unlocking failed on gluster2.innova.local. Please check log file for
 details.

 smerrill@gluster1:PRODUCTION:~ sudo grep 'locking failed'
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log*
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log-20141124:[2014-11-22
 02:10:09.795695] E [glusterd-syncop.c:105:gd_collate_errors] 0-:
 Unlocking failed on gluster2.innova.local. Please check log file for
 details.
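
 Each of those messages names the peer on which unlocking failed and ends with
 "Please check log file for details", so the glusterd log on that peer, around
 the same timestamp, is the next place to look. For example, for the first
 entry above (a sketch, to be run on gluster1.innova.local):

 sudo grep -B2 -A2 '2014-11-19 19:05' \
 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log*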

 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Is it possible to start geo-replication between two volumes with data already present in the slave?

2014-12-13 Thread Nathan Aldridge
Hi,

I have a large volume (> 1 TB) that I want to geo-replicate using Gluster. I
have the data rsynced and up to date on both servers. Can I start a
geo-replication session without having to send the whole contents over the wire
to the slave, since it's already there? I'm running Gluster 3.6.1.

I've read through all the various on-line documents I can find but nothing pops 
out that describes this scenario.

Thanks in advance,

Nathan Aldridge
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Problem creating Geo-replication

2014-12-13 Thread wodel youchi
Hi again,

I had a hard time configuring geo-replication. After creating the session,
starting it left the session in a Faulty state. The culprit was the
/nonexistent/gsyncd path in
/var/lib/glusterd/geo-replication/data1_node3.example_data2/gsyncd.conf, which
I had to change to /usr/libexec/glusterfs/gsyncd.
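
(For reference, a minimal sketch of that edit, using the paths quoted above;
back up the file first:

sed -i.bak 's|/nonexistent/gsyncd|/usr/libexec/glusterfs/gsyncd|' \
    /var/lib/glusterd/geo-replication/data1_node3.example_data2/gsyncd.conf
)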

After that, geo-replication started, but nothing happened: no file or directory
was synced.

The command

gluster volume geo-replication data1 geoacco...@node3.example.com::data2 status

keeps reporting: Changelog Crawl

[2014-12-13 21:57:17.129836] W [master(/mnt/srv1/brick1):1005:process] 
_GMaster: incomplete sync, retrying changelogs: CHANGELOG.1418504186
[2014-12-13 21:57:22.648163] W [master(/mnt/srv1/brick1):294:regjob] _GMaster: 
Rsync: .gfid/f066ca4a-2d31-4342-bc7e-a37da25b2253 [errcode: 23]
[2014-12-13 21:57:22.648426] W [master(/mnt/srv1/brick1):986:process] _GMaster: 
changelogs CHANGELOG.1418504186 could not be processed - moving on...
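
(For context: rsync exit code 23 means a partial transfer due to error, i.e.
some files could not be transferred. A quick way to reproduce it locally,
assuming rsync is installed:

rsync /nonexistent/file /tmp/ ; echo "exit code: $?"    # prints 23
)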

New files and directories are synced, though, so I deleted all the data on the
master cluster and recreated it, and everything was synced.
But the status command (above) keeps saying Changelog Crawl.

On the slave node I have these logs:

[2014-12-13 21:11:12.149041] W 
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote 
operation failed: No data available
[2014-12-13 21:11:12.149067] W [fuse-bridge.c:1261:fuse_err_cbk] 
0-glusterfs-fuse: 3243: REMOVEXATTR() 
/.gfid/cd26bcc2-9b9f-455a-a2b2-9b6358f24203 = -1 (No data available)
[2014-12-13 21:11:12.516674] W 
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote 
operation failed: No data available
[2014-12-13 21:11:12.516705] W [fuse-bridge.c:1261:fuse_err_cbk] 
0-glusterfs-fuse: 3325: REMOVEXATTR() 
/.gfid/e6142b37-2362-4c95-a291-f396a122b014 = -1 (No data available)
[2014-12-13 21:11:12.517577] W 
[client-rpc-fops.c:1210:client3_3_removexattr_cbk] 0-data2-client-0: remote 
operation failed: No data available
[2014-12-13 21:11:12.517600] W [fuse-bridge.c:1261:fuse_err_cbk] 
0-glusterfs-fuse: 3331: REMOVEXATTR() 
/.gfid/e6142b37-2362-4c95-a291-f396a122b014 = -1 (No data available)

and
[2014-12-13 21:57:16.741321] W [syncdutils(slave):480:errno_wrap] top: 
reached maximum retries (['.gfid/ba9c75ef-d4f7-4a6b-923f-82a8c7be4443', 
'glusterfs.gfid.newfile', 
'\x00\x00\x00\x1b\x00\x00\x00\x1bcab3ae81-7b52-4c55-ac33-37814ff374c4\x00\x00\x00\x81\xb0glustercli1.lower-test\x00\x00\x00\x01\xb0\x00\x00\x00\x00\x00\x00\x00\x00'])...
[2014-12-13 21:57:22.400269] W [syncdutils(slave):480:errno_wrap] top: 
reached maximum retries (['.gfid/ba9c75ef-d4f7-4a6b-923f-82a8c7be4443', 
'glusterfs.gfid.newfile', 
'\x00\x00\x00\x1b\x00\x00\x00\x1bcab3ae81-7b52-4c55-ac33-37814ff374c4\x00\x00\x00\x81\xb0glustercli1.lower-test\x00\x00\x00\x01\xb0\x00\x00\x00\x00\x00\x00\x00\x00'])...

I couldn't find out what this means.

Any ideas?

Regards


 On Friday, 12 December 2014 at 19:35, wodel youchi wodel_d...@yahoo.fr wrote:

 Thanks for your reply,

When executing the gverify.sh script I had these errors in slave.log:

[2014-12-12 18:12:45.423669] I [options.c:1163:xlator_option_init_double] 0-fuse: option attribute-timeout convertion failed value 1.0
[2014-12-12 18:12:45.423689] E [xlator.c:425:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

I think the problem is linked to the locale variables: with a French locale the
decimal separator is a comma, so parsing the value 1.0 for attribute-timeout
fails. Mine were:
LANG=fr_FR.UTF-8
LC_CTYPE=fr_FR.UTF-8
LC_NUMERIC=fr_FR.UTF-8
LC_TIME=fr_FR.UTF-8
LC_COLLATE=fr_FR.UTF-8
LC_MONETARY=fr_FR.UTF-8
LC_MESSAGES=fr_FR.UTF-8
LC_PAPER=fr_FR.UTF-8
LC_NAME=fr_FR.UTF-8
LC_ADDRESS=fr_FR.UTF-8
LC_TELEPHONE=fr_FR.UTF-8
LC_MEASUREMENT=fr_FR.UTF-8
LC_IDENTIFICATION=fr_FR.UTF-8
LC_ALL=

I changed LC_CTYPE and LC_NUMERIC to C and then executed the gverify.sh script
again, and it worked, but the gluster vol geo-rep ... command still failed.
I then edited /etc/locale.conf, changed LANG from fr_FR.UTF-8 to C, rebooted
the VM, and voila, the geo-replication session was created successfully.
But I am not sure whether these changes will affect other things.
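
(A less invasive alternative might be to set the locale only for the glusterd
service, which is what spawns gverify.sh, rather than system-wide. A sketch,
assuming glusterd runs under systemd:

mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/locale.conf <<'EOF'
[Service]
Environment=LC_ALL=C
EOF
systemctl daemon-reload && systemctl restart glusterd
)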
Regards


 

 On Friday, 12 December 2014 at 8:30, Kotresh Hiremath Ravishankar
khire...@redhat.com wrote:

 Hi,

The setup is failing during the compatibility test between the master and slave
clusters.
The gverify.sh script is failing to get the master volume details.

Could you run the following and paste the output here?

bash -x /usr/local/libexec/glusterfs/gverify.sh <master-vol-name> root
<slave-host-name> <slave-vol-name> <temp-log-file>

If installed from source, gverify.sh is found in the above path, whereas with
an RPM install it is found in /usr/libexec/glusterfs/gverify.sh.

If you are sure the master and slave Gluster versions and sizes are fine, the
easy workaround is to use force:

gluster vol geo-replication <master-vol> <slave-host>::<slave-vol> create
push-pem force
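
After a successful create, the session would then be started and checked with
the same placeholders:

gluster vol geo-replication <master-vol> <slave-host>::<slave-vol> start
gluster vol geo-replication <master-vol> <slave-host>::<slave-vol> status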



Thanks and Regards,
Kotresh H R

- Original Message -
From: wodel youchi wodel_d...@yahoo.fr
To: gluster-users@gluster.org
Sent: Friday, December