Your ssh commands connect to port 2503 - is that port listening on the
slaves?
Does it use privilege-separation?
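A quick way to check both (a sketch; "slave-host" is a placeholder and
/etc/ssh/sshd_config is assumed to be the config file in use):

# on the slave: is anything listening on 2503?
ss -ltn 'sport = :2503'
# from the master: can the slave be reached on 2503?
nc -zv slave-host 2503
# privilege separation setting on the slave (no output means the compiled-in default applies)
grep -i '^UsePrivilegeSeparation' /etc/ssh/sshd_config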
Don't force it to changelog without an initial sync using xsync.
The warning "fuse: xlator does not implement release_cbk" was fixed in
3.6.0alpha1 but looks like it could easily be
Hi Jeremy
I have found that glusterfs 3.7.5 dies when I disconnect from the client
if glusterfs was started from the command line, but it is stable when
started from a systemd unit file. You might try an upstart unit instead
of the init.d scripts.
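For reference, the unit here is roughly the following sketch (the server
"james", the volume "static" and the mount point /mnt/static are only
stand-ins for the real names):

cat > /etc/systemd/system/mnt-static.mount <<'EOF'
[Unit]
Description=GlusterFS client mount for the static volume
Wants=network-online.target
After=network-online.target

[Mount]
What=james:/static
Where=/mnt/static
Type=glusterfs
Options=defaults,_netdev

[Install]
WantedBy=remote-fs.target
EOF

systemctl daemon-reload
systemctl enable mnt-static.mount
systemctl start mnt-static.mount

(systemd requires the unit file name to match the mount point, hence
mnt-static.mount for /mnt/static.)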
Cheers,
Wade.
On 6/11/2015 9:43 AM, Jeremy Koerb wrote:
I also had problems getting geo-replication working correctly and
eventually gave it up due to project time constraints.
What version of gluster?
What is the topology of x, xx, and xxx/xxy/xxz?
I tried a 2x2 stripe-replica with geo-replication to a 2x1 stripe using
3.7.4. Starting replication
That is almost exactly what I am seeing too. It seems to be a problem
with geo-replicating a Stripe and/or Replicate volume.
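For anyone following along, volumes of the shape described above would be
created along these lines (host names, volume names and brick paths below
are made up for illustration):

# master: 4 bricks, stripe 2 x replica 2
gluster volume create mastervol stripe 2 replica 2 \
    node1:/data/brick node2:/data/brick node3:/data/brick node4:/data/brick
# slave: 2 bricks, stripe 2, no replication
gluster volume create slavevol stripe 2 remote1:/data/brick remote2:/data/brick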
On 21/10/2015 4:02 am, Родион Скрябин wrote:
I have been fighting a long war with geo-replication. The first two battles
were won:
1. geo-replication on a cluster with glusterfs-3.4.
Thanks Saravana, that is starting to make sense. The change_detector was
already set to changelog (automatically). I updated it to xsync and the
volume successfully replicated to the remote volume; however, I then
deleted all the data from the master and have not seen those changes
replicated yet.
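The change itself was just the config option, roughly as below (volume
names as earlier in the thread). As far as I understand, the xsync crawl
only picks up creates and updates, not deletes or renames, which would
explain why the removals have not appeared on the slave:

gluster volume geo-replication static gluster-b1::static config change_detector
gluster volume geo-replication static gluster-b1::static config change_detector xsync
# once the initial sync has finished, switch back so deletes/renames come via the changelogs
gluster volume geo-replication static gluster-b1::static config change_detector changelog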
I have now tried to re-initialise the whole geo-rep setup, but the
replication slave went Faulty immediately. Any help here would be
appreciated; I cannot even find how to recover a faulty node without
recreating the whole geo-rep session.
root@james:~# gluster volume geo-replication static gluster-b1::stati
til
[2015-10-09 11:12:22.590574] I [master(/data/gluster1/static/brick1):1249:crawl] _GMaster: finished hybrid crawl syncing, stime: (1444349280, 617969)
[2015-10-09 11:13:22.650285] I [master(/data/gluster1/static/brick1):552:crawlwrap] _GMaster: 1 crawls, 1 turns
[2015-10-09 11:13:22.653
tion/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.status: Started
/var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40madonna%3Agluster%3A%2F%2F127.0.0.1%3Astatic.status: Started
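In case it helps, the only recovery path I know of is to recreate the
session from scratch, roughly as follows (same master/slave names as
above; the force flags may or may not be needed):

gluster volume geo-replication static gluster-b1::static stop force
gluster volume geo-replication static gluster-b1::static delete
gluster volume geo-replication static gluster-b1::static create push-pem force
gluster volume geo-replication static gluster-b1::static start
gluster volume geo-replication static gluster-b1::static status detail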
On 15/10/2015 10:25 pm, Wade Fitzpatrick wrote:
looks good. Two master bricks are Active and participating in
syncing. Please let us know the issue you are observing.
regards
Aravinda
On 10/15/2015 11:40 AM, Wade Fitzpatrick wrote:
I have twice now tried to configure geo-replication of our
Stripe-Replicate volume to a remote Stripe volume but it always seems to
have issues.
root@james:~# gluster volume info
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 5f446a10-651b-4ce0-a46b-69871f498dbc
Status: Started
I am trying to migrate away from our old server to a cluster of 3.7.4
servers but I need to modify ~100 clients and sync the contents between
the old and new servers at the same time.
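Repointing the clients would be something like the sketch below
(clients.txt, the /mnt/static mount point and the "oldserver"/"newserver"
names are all placeholders):

# for each client: rewrite the fstab entry, then remount from the new cluster
while read -r host; do
    ssh "$host" 'sed -i "s/^oldserver:/newserver:/" /etc/fstab && umount -l /mnt/static && mount /mnt/static'
done < clients.txt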
I have tried applying the workaround mentioned in
http://www.gluster.org/pipermail/gluster-devel/2015-August/04
an A record such as "gluster-remote"
that has 2 addresses (for both palace and madonna)?
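Once those records exist, a quick check from any client would be:

getent hosts gluster-remote    # should list both palace's and madonna's addresses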
On 7/10/2015 8:10 am, Wade Fitzpatrick wrote:
Thanks for the response, it looks like the problem is:
[2015-10-06 19:08:49.547874] E
[resource(/data/gluster1/static/brick1):2
I am trying to set up geo-replication of a striped-replicate volume. I
used https://github.com/aravindavk/georepsetup to configure the replication.
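For reference, the setup itself is a single command (the install method
and argument order below are from memory, so double-check with
georepsetup --help):

pip install georepsetup
georepsetup static gluster-b1 static    # <master volume> <slave host> <slave volume>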
root@james:~# gluster volume info
Volume Name: static
Type: Striped-Replicate
Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
Status: Started
Numbe