Re: [Gluster-users] Is it possible to start geo-replication between two volumes with data already present in the slave?

2014-12-16 Thread Nathan Aldridge
Hello again,

Ok - I managed to get a list of gfids and pathnames transferred to the slave 
server.

I began the process of fixing the GFIDs - but I get a bunch of stale file 
handle errors when I try to run the following on the slave volume:

cd $GLUSTER_SRC/extras/geo-rep
sh slave-upgrade.sh localhost:<SLAVE_VOL> /tmp/master-gfid-values.txt $PWD/gsync-sync-gfid

setxattr on ./serverTempDir/FOO failed (Stale file handle)
...

This maps to a [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse error in 
/var/log/glusterfs/tmp-mount-log.

Any thoughts on this?

Thanks in advance,

Nathan


From: Nathan Aldridge
Sent: Monday, December 15, 2014 11:55 AM
To: 'gluster-users@gluster.org'
Subject: RE: [Gluster-users] Is it possible to start geo-replication between 
two volumes with data already present in the slave?

Hi,

Thanks for your response. I attempted to run the "generate-gfid-file.sh" script, 
but it failed when it encountered whitespace in some of the filenames. I'll 
take a look to see what is breaking in the scripts, but if you know offhand, 
that would be appreciated. I also see a whole lot of getfattr errors due to 
files not being found.
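In case it helps while digging: unquoted variable expansion is the usual cause of shell scripts breaking on filenames containing whitespace. Purely as a debugging aid (this is not the stock script), a whitespace-safe way to enumerate the same paths is to walk the mount in Python and emit them NUL-delimited:

```python
import os
import sys


def list_paths(mount_root):
    """Yield every file path under mount_root. No shell word-splitting
    is involved, so spaces and even newlines in names are safe."""
    for dirpath, _dirnames, filenames in os.walk(mount_root):
        for name in filenames:
            yield os.path.join(dirpath, name)


if __name__ == "__main__" and len(sys.argv) > 1:
    # NUL-delimit the output so downstream tools (xargs -0, etc.) can
    # split it unambiguously, whatever characters the names contain.
    for path in list_paths(sys.argv[1]):
        sys.stdout.write(path + "\0")
```

You could then feed each path to getfattr yourself, quoting it properly.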

Cheers,

Nathan Aldridge
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it possible to start geo-replication between two volumes with data already present in the slave?

2014-12-15 Thread Aravinda
When you sync files directly using rsync, rsync creates any file that does 
not exist on the slave. Because of this, the file on the slave ends up with a 
different GFID than the one on the master. That mismatch prevents geo-rep 
from continuing to sync to the slave.


Before starting geo-replication, run the following prerequisite steps to fix 
the GFID mismatch.
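For context: a GFID is a 16-byte UUID stored in the trusted.gfid xattr of every file on the bricks, and it is how geo-rep identifies a file across volumes. A small illustrative sketch of decoding the raw xattr bytes into the familiar string form (the brick path in the comment is made up):

```python
import uuid


def gfid_to_string(raw):
    """Convert the 16 raw bytes of a trusted.gfid xattr into the
    canonical UUID string form used everywhere else in gluster."""
    return str(uuid.UUID(bytes=raw))


# On a brick you would read the raw bytes with something like:
#   raw = os.getxattr("/exports/brick1/some/file", "trusted.gfid")
raw = bytes(bytearray(range(16)))  # stand-in bytes for illustration
print(gfid_to_string(raw))  # 00010203-0405-0607-0809-0a0b0c0d0e0f
```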


Run this on a master node, assuming you have the glusterfs source directory 
downloaded:

cd $GLUSTER_SRC/extras/geo-rep
sh generate-gfid-file.sh localhost:<MASTER_VOL> $PWD/get-gfid.sh /tmp/master-gfid-values.txt


Copy the generated file to the slave:

scp /tmp/master-gfid-values.txt root@slavehost:/tmp/

Run this script on the slave:

cd $GLUSTER_SRC/extras/geo-rep
sh slave-upgrade.sh localhost:<SLAVE_VOL> /tmp/master-gfid-values.txt $PWD/gsync-sync-gfid


Once all these steps complete, the GFIDs in the master volume match the 
GFIDs in the slave.
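If you want to spot-check the result, you can compare the GFID of the same relative path on a master mount and a slave mount. A sketch, assuming mounts that expose the virtual glusterfs.gfid.string xattr; the mount paths in the comment are hypothetical, and the getter parameter exists only so the comparison logic can be exercised without a live volume:

```python
import os


def gfid_of(path, getter=os.getxattr):
    """Return the GFID string of path, read from the virtual
    glusterfs.gfid.string xattr that a glusterfs mount can expose."""
    return getter(path, "glusterfs.gfid.string").decode("ascii").rstrip("\x00")


def same_gfid(master_path, slave_path, getter=os.getxattr):
    """True if both mounts report the same GFID for the file."""
    return gfid_of(master_path, getter) == gfid_of(slave_path, getter)


# e.g. same_gfid("/mnt/master/dir/file", "/mnt/slave/dir/file")
```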


Now update the stime xattr on each brick root in the master volume. Enclosed 
is a Python script that sets the stime of a brick root to the current time; 
run it on each master node, once per brick:


sudo python set_stime.py <MASTER_VOL_UUID> <SLAVE_VOL_UUID> <BRICK_PATH>



For example,

sudo python set_stime.py f8c6276f-7ab5-4098-b41d-c82909940799 
563681d7-a8fd-4cea-bf97-eca74203a0fe /exports/brick1


You can get the master volume ID and slave volume ID using the gluster 
volume info command:


gluster volume info <MASTER_VOL> | grep -i "volume id"
gluster volume info <SLAVE_VOL> | grep -i "volume id"

Once this is done, create the geo-rep session using the force option:

gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> create push-pem force


Start geo-replication:

gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> start force


From now on, geo-rep picks up only new changes and syncs them to the slave.

Let me know if you face any issues.

--
regards
Aravinda
http://aravindavk.in


On 12/14/2014 12:32 AM, Nathan Aldridge wrote:


Hi,

I have a large volume that I want to geo-replicate using Gluster (> 
1Tb). I have the data rsynced on both servers and up to date. Can I 
start a geo-replication session without having to send the whole 
contents over the wire to the slave, since it’s already there? I’m 
running Gluster 3.6.1.


I’ve read through all the various on-line documents I can find but 
nothing pops out that describes this scenario.


Thanks in advance,

Nathan Aldridge



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


#!/usr/bin/env python
# Requires the third-party "xattr" module (pip install xattr).

import struct
import sys
import time

import xattr


def get_stime(master_uuid, slave_uuid, brick):
    """Read the geo-rep stime xattr from the brick root as (sec, nsec)."""
    stime_key = "trusted.glusterfs.%s.%s.stime" % (master_uuid, slave_uuid)
    try:
        return struct.unpack("!II", xattr.get(brick, stime_key))
    except (OSError, IOError):
        return "N/A"


def set_stime(master_uuid, slave_uuid, brick):
    """Set the geo-rep stime xattr on the brick root to the current time."""
    stime_key = "trusted.glusterfs.%s.%s.stime" % (master_uuid, slave_uuid)
    t = time.time()
    sec = int(t)
    nsec = int((t - sec) * 1000000000)  # fractional part, in nanoseconds
    xattr.set(brick, stime_key, struct.pack("!II", sec, nsec))


if __name__ == "__main__":
    """
    Usage:

        sudo python set_stime.py <MASTER_VOL_UUID> <SLAVE_VOL_UUID> <BRICK_PATH>

    For example:

        sudo python set_stime.py f8c6276f-7ab5-4098-b41d-c82909940799 \
            563681d7-a8fd-4cea-bf97-eca74203a0fe /exports/brick1
    """
    print("BEFORE: %s" % repr(get_stime(sys.argv[1], sys.argv[2], sys.argv[3])))
    set_stime(sys.argv[1], sys.argv[2], sys.argv[3])
    print("AFTER: %s" % repr(get_stime(sys.argv[1], sys.argv[2], sys.argv[3])))