Sounds related to this:
Reminder: 3.5.2 centos7
[2015-06-02 15:26:18.373590] I [master(/export/raid/usr_global):445:crawlwrap]
_GMaster: 20 crawls, 0 turns
[2015-06-02 15:27:20.3036] I [master(/export/raid/usr_global):445:crawlwrap]
_GMaster: 20 crawls, 0 turns
[2015-06-02 15:27:51.6190] I [mas
Sure,
https://dl.dropboxusercontent.com/u/2663552/logs.tgz
Yesterday I restarted the geo-rep (and reset the changelog.changelog option).
Today it looks converged and changelog keeps doing its job.
BUT
hybrid crawl doesn’t seem to update symlinks if they changed on master:
From master:
ll -
Some news,
Looks like changelog is not working anymore. When I touch a file on master it
doesn’t propagate to the slave…
The .processing folder contains thousands of unprocessed changelogs.
I had to stop the geo-rep, reset changelog.changelog to the volume and restart
the geo-rep. It’s now sending mis
So, to sum up, I finally found a workaround:
Get the diff between master and vol for data
rsync -avn --delete src dst > liste.txt
From that list I deleted the removed files from the slave. It was easy.
Now update the mismatched gfid…
I removed the unsynced files from the slave
On master I do a cp myfile
Oh and by the way, I’m using 3.5.2 so I don’t have the
http://review.gluster.org/#/c/9370/ feature you added…
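The data-diff part of this workaround can be sketched like so. This is a minimal, self-contained sketch: temporary directories stand in for the mounted master and slave volumes, and the file names are made up for illustration.

```shell
# Temporary dirs stand in for the mounted master and slave volumes.
MASTER=$(mktemp -d); SLAVE=$(mktemp -d)
touch "$MASTER/kept" "$SLAVE/kept" "$SLAVE/stale"

# Dry run (-n): list what a real sync would transfer or delete,
# without changing anything. "deleting ..." lines mark files that
# exist only on the slave.
rsync -avn --delete "$MASTER/" "$SLAVE/" > liste.txt

# Remove from the slave the files flagged as deleted on the master.
sed -n 's/^deleting //p' liste.txt | while read -r f; do
  rm -f "$SLAVE/$f"
done
```

Running the same rsync without `-n` would do the deletion directly, but the dry-run listing lets you review what goes away first.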
--
Cyril Peponnet
On May 28, 2015, at 8:54 AM, Cyril Peponnet
<cyril.pepon...@alcatel-lucent.com>
wrote:
Hi Kotresh,
Inline.
Again, thanks for your time.
--
Cyril Peponnet
On Ma
One more thing:
nfs.volume-access read-only works only for NFS clients; glusterfs clients
still have write access.
features.read-only on needs a volume restart and sets RO for everyone, but in
that case geo-rep goes faulty.
[2015-05-28 09:42:27.917897] E [repce(/export/raid/usr_global):188:__call__]
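For reference, the two options being compared look roughly like this (the volume name is hypothetical; these are admin commands against a live cluster, not runnable standalone):

```shell
# NFS-only read-only: fuse clients can still write.
gluster volume set slave_vol nfs.volume-access read-only

# Global read-only: applies to all clients, needs a volume restart,
# and on 3.5.2 makes the geo-rep session go faulty as shown above.
gluster volume set slave_vol features.read-only on
gluster volume stop slave_vol
gluster volume start slave_vol
```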
Hi Kotresh,
Inline.
Again, thanks for your time.
--
Cyril Peponnet
> On May 27, 2015, at 10:47 PM, Kotresh Hiremath Ravishankar
> wrote:
>
> Hi Cyril,
>
> Replies inline.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
>> From: "Cyril N PEPONNET (Cyril)"
>> To: "Kot
Hi and thanks again for those explanation.
Due to lots of missing and out-of-date files (sometimes with gfid mismatches),
I reset the index (or I think I did) by:
deleting the geo-rep session, resetting geo-replication.indexing (setting it to
off does not work for me), and recreating it again.
So for now it’s s
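The reset described above would look roughly like this (volume and slave names are hypothetical; these commands need a live cluster):

```shell
# Tear down the geo-rep session.
gluster volume geo-replication master_vol slave::slave_vol stop
gluster volume geo-replication master_vol slave::slave_vol delete

# "set geo-replication.indexing off" failed here, so reset the
# option instead of setting it.
gluster volume reset master_vol geo-replication.indexing

# Recreate and restart the session.
gluster volume geo-replication master_vol slave::slave_vol create push-pem
gluster volume geo-replication master_vol slave::slave_vol start
```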
So, changelog is still active but I noticed that some files were missing.
So I’m running an rsync -avn between the two volumes (master and slave) to sync
them again by touching the missing files (hoping geo-rep will do the rest).
One question: can I set the slave volume to RO? Because if somebody chang
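That re-sync trick can be sketched as follows. Again a self-contained sketch: temporary directories stand in for the real mount points, and the file names are invented for illustration.

```shell
# Temporary dirs stand in for the mounted master and slave volumes.
MASTER=$(mktemp -d); SLAVE=$(mktemp -d)
touch "$MASTER/missing" "$MASTER/synced" "$SLAVE/synced"

# Dry run: list what rsync would still copy from master to slave.
rsync -avn "$MASTER/" "$SLAVE/" > missing.txt

# Touch each listed file on the master so its mtime changes and
# geo-rep picks it up again; rsync's non-file output lines are
# skipped by the -f test.
while read -r f; do
  if [ -f "$MASTER/$f" ]; then touch "$MASTER/$f"; fi
done < missing.txt
```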
Thanks for the clarification.
Regarding our setup, the xsync crawl is done, but I had to force the
change.detector to changelog (it did not switch automatically even after a
geo-rep stop / restart).
Now changelog is enabled, let’s see how it behaves :)
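Forcing the change detector looked roughly like this (session names are hypothetical; `change_detector` is the geo-rep config key for the crawl mode):

```shell
# Force the session from xsync (hybrid crawl) to changelog mode,
# then bounce the session so it takes effect.
gluster volume geo-replication master_vol slave::slave_vol \
  config change_detector changelog
gluster volume geo-replication master_vol slave::slave_vol stop
gluster volume geo-replication master_vol slave::slave_vol start
```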
--
Cyril Peponnet
On May 25, 2015, at 12:43 AM, Kotresh
One last question, correct me if I’m wrong.
When you start a geo-rep process it starts with xsync aka hybrid crawling
(sending files every 60s, with the file window set to 8192 files per batch).
When the crawl is done it should switch to the changelog detector and
dynamically propagate changes to the slaves.
1/ Du
Thanks to JoeJulian / Kaushal I managed to re-enable the changelog option and
the socket is now present.
For the record, I had some clients running the RHS gluster-fuse build while our
nodes are running the glusterfs release, and the op-versions are not “compatible”.
Now I have to wait for the init crawl to see if it sw
Hi,
Unfortunately,
# gluster vol set usr_global changelog.changelog off
volume set: failed: Staging failed on mvdcgluster01.us.alcatel-lucent.com.
Error: One or more connected clients cannot support the feature being set.
These clients need to be upgraded or disconnected before running this com
Hi Gluster Community,
I have a 3 nodes setup at location A and a two node setup at location B.
All running 3.5.2 under Centos-7.
I have one volume that I sync through the geo-replication process.
So far so good, the first step of geo-replication is done (hybrid-crawl).
Now I’d like to use the change log
Hi,
We had an outage in our gluster setup and I had to rm -rf /var/lib/gluster/*
I rebuilt the setup from scratch, but the volumes have both op-version and
client-op-version set to 2.
op-version in glusterd.info is 30501.
The main issue is when I try to set property it fails only for ce
(master).
I still don’t understand why I have 3 masters…
--
Cyril Peponnet
On Feb 2, 2015, at 11:00 AM, PEPONNET, Cyril N (Cyril)
<cyril.pepon...@alcatel-lucent.com>
wrote:
But now I have strange issue:
After creating the geo-rep session and starting it (from nodeB):
[root@nodeB]# g
0
So.
1/ Why are there 3 master nodes? nodeB should be the only master node
2/ Why do they keep switching, turn by turn, from active to passive?
Thanks
--
Cyril Peponnet
On Feb 2, 2015, at 10:40 AM, PEPONNET, Cyril N (Cyril)
<cyril.pepon...@alcatel-lucent.com>
wrote
For the record, after adding
operating-version=2
on every node (A, B, C) AND the slave node, the commands are working
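Concretely, that meant something like the following on each node (a sketch only; the glusterd.info path is the usual default install location and may differ per distro):

```shell
# Append the op-version line to glusterd's state file, then restart
# glusterd so it is picked up. Run on every node, including the slave.
echo 'operating-version=2' >> /var/lib/glusterd/glusterd.info
systemctl restart glusterd
```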
--
Cyril Peponnet
On Feb 2, 2015, at 9:46 AM, PEPONNET, Cyril N (Cyril)
<cyril.pepon...@alcatel-lucent.com>
wrote:
More informations here:
I update the state of the p
/log/glusterfs/cli.log <==
[2015-02-02 17:44:49.308493] I [input.c:36:cli_batch] 0-: Exiting with: -1
I have only one node… I don’t understand the meaning of the error: One or more
nodes do not support the required op version.
--
Cyril Peponnet
On Feb 2, 2015, at 8:49 AM, PEPONNET, Cyri
<avish...@redhat.com> wrote:
Looks like node C is in a disconnected state. Please let us know the output of
`gluster peer status` from all the master nodes and slave nodes.
--
regards
Aravinda
On 01/22/2015 12:27 AM, PEPONNET, Cyril N (Cyril) wrote:
So,
On master node of my 3 node set
wrote:
On 01/20/2015 11:01 PM, PEPONNET, Cyril N (Cyril) wrote:
Hi,
I’m ready for new testing. I deleted the geo-rep session between master and
slave, and removed the lines in the authorized_keys file on the slave.
I also removed the common secret pem from slave and from master. There is only
the gsyncd_templat
t; slave_ip=gluster-slave01
>
> Also let us know the GlusterFS version in Master nodes and Slave nodes.
>
> --
> regards
> Aravinda
>
> On 01/15/2015 09:12 PM, PEPONNET, Cyril N (Cyril) wrote:
>> [2015-01-15 15:38:13.676252] E [run.c:190:runner_log] 0-management: Fai
glusterd/geo-replication/common_secret.pem.pub
> On 01/15/2015 09:12 PM, PEPONNET, Cyril N (Cyril) wrote:
>> [2015-01-15 15:38
Folks,
Gluster 3.5.2-1.el7 under centos7
Using gluster create master_vol slave::slave_vol create push-pem force (slave
volume already contains data) doesn’t append the keys to the authorized_keys
file.
Looks like peer_add_secret_pub is not executed or fails at some point.
No error from cli
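When the hook does not run, the keys can be appended by hand. A rough sketch, using the common_secret.pem.pub path quoted earlier in the thread (run on the slave; the authorized_keys path assumes root's default SSH setup):

```shell
# What peer_add_secret_pub would normally do on the slave: append the
# aggregated master public keys to root's authorized_keys.
cat /var/lib/glusterd/geo-replication/common_secret.pem.pub \
  >> /root/.ssh/authorized_keys
```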
Hi,
I will try to reproduce this in a vagrant-cluster environment.
If it can help, here is the timeline of the events:
t0: 2 servers in replicate mode, no issue
t1: power down server1 due to a hardware issue
t2: server2 still continues to serve files through NFS and fuse, and still
continues to b