Hello again,
Ok - I managed to get a list of gfids and pathnames transferred to the slave
server.
I began the process of updating the GFIDs - but I get a bunch of stale file
handles when I try to run the following on the slave volume:
cd $GLUSTER_SRC/extras/geo-rep
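For anyone following along: the slave-side step documented alongside that
directory is slave-upgrade.sh, which (as I understand it) mounts the slave
volume with the aux-gfid-mount option and assigns the master's GFIDs via the
gsync-sync-gfid helper. A rough sketch, assuming the gfid list was copied to
/tmp/master_gfid_file.txt (a placeholder path) and the slave volume is
$SLAVEVOL:

    cd $GLUSTER_SRC/extras/geo-rep
    # reads "gfid path" records and applies each master GFID to the
    # corresponding file on the slave volume
    sh slave-upgrade.sh localhost:$SLAVEVOL /tmp/master_gfid_file.txt $PWD/gsync-sync-gfid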
When you sync files directly using rsync, rsync creates the file on the
slave if it does not exist. Because of this, the file on the slave ends up
with a different GFID than the one on the master, which prevents geo-rep
from continuing to sync to the slave.
Before starting geo-replication, as a prerequisite, the GFIDs of all files
must be the same on the master and the slave.
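To see whether a given file is affected, you can compare its GFID on the
master and slave directly; a mismatch confirms the situation described
above. A small sketch, with the brick path as a placeholder:

    # run against a brick backend path (not the client mount); the value
    # should be identical on master and slave for the same file
    getfattr -n trusted.gfid -e hex /bricks/brick1/vol/path/to/file

On a FUSE mount, the virtual xattr glusterfs.gfid.string returns the same
GFID in string form.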
Hi,
Thanks for your response. I attempted to run the generate-gfid-file.sh script,
but it failed when it encountered whitespace in some of the filenames. I'll
take a look and see what is breaking in the scripts... but if you know
offhand, that would be appreciated. I see a whole lot
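If it is the usual shell word-splitting problem, I imagine a null-delimited
loop would sidestep the whitespace in names. Something like this -
hypothetical, not the stock script, just the same idea made whitespace-safe,
reading from a master FUSE mount at /mnt/master:

    # list every file and directory null-delimited, ask the mount for each
    # GFID, and emit "gfid path" records; note that paths with embedded
    # newlines would still break the one-record-per-line format
    find /mnt/master \( -type f -o -type d \) -print0 |
    while IFS= read -r -d '' f; do
        gfid=$(getfattr -n glusterfs.gfid.string --only-values "$f" 2>/dev/null)
        printf '%s %s\n' "$gfid" "$f"
    done > /tmp/master_gfid_file.txt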
Hi,
I have a large volume (~1 TB) that I want to geo-replicate using Gluster. I
have the data rsynced and up to date on both servers. Can I start a
geo-replication session without having to send the whole contents over the wire
to the slave, since it's already there? I'm running Gluster 3.6.1.
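From the docs, it looks like the session would be created with something like
the following once the data is in place (volume and host names are
placeholders); is that the right approach when the slave volume is already
populated?

    # create the session against the pre-seeded slave (force because the
    # slave volume is not empty), then start syncing changes
    gluster volume geo-replication $MASTERVOL $SLAVEHOST::$SLAVEVOL create push-pem force
    gluster volume geo-replication $MASTERVOL $SLAVEHOST::$SLAVEVOL start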